Christopher Baus suggests that HTTP servers should not send 400 Bad Request but should drop connections instead.
As an HTTP client developer, let me beg people not to do this. Bad requests are fairly rare, but they do happen, and each one is a nightmare to debug.
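To make the difference concrete, here is a minimal sketch (the tiny throwaway server is test scaffolding I made up for illustration, not any real software) of what a client actually sees in each case: an explicit 400 hands back a status line that names the problem, while a dropped connection leaves nothing to go on.

```python
# Sketch: contrast what an HTTP client can learn from an explicit
# 400 response versus a silently dropped connection.
import socket
import struct
import threading


def serve_once(behavior, holder, ready):
    """Accept one connection, then either answer 400 or drop it."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    holder.append(srv.getsockname()[1])  # publish the ephemeral port
    ready.set()
    conn, _ = srv.accept()
    conn.recv(1024)  # read (part of) the request
    if behavior == "400":
        conn.sendall(b"HTTP/1.1 400 Bad Request\r\nContent-Length: 0\r\n\r\n")
    else:
        # Drop the connection with an RST instead of replying
        # (SO_LINGER with a zero timeout forces a hard reset on close).
        conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                        struct.pack("ii", 1, 0))
    conn.close()
    srv.close()


def probe(behavior):
    """Send one request and report what the client can learn."""
    holder, ready = [], threading.Event()
    t = threading.Thread(target=serve_once, args=(behavior, holder, ready))
    t.start()
    ready.wait()
    s = socket.create_connection(("127.0.0.1", holder[0]))
    s.sendall(b"GET / HTTP/1.1\r\nHost: example\r\n\r\n")
    try:
        data = s.recv(4096)
    except OSError as e:
        result = "low-level error: " + type(e).__name__
    else:
        result = (data.split(b"\r\n")[0].decode()
                  if data else "connection closed, no response")
    s.close()
    t.join()
    return result
```

With the first behavior, `probe("400")` returns the status line `HTTP/1.1 400 Bad Request`, which a developer can look up and act on. With the second, all the client gets is a connection reset or an empty read, which is indistinguishable from a crashed server, a flaky proxy, or a broken network.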
For instance, Dave Johnson ran into a fairly typical problem last week. A particular server company had configured their software so that it blocked any request from software with the word “Java” in its user agent. This took some effort to debug, but it would have been just about impossible if the connections had simply been dropped.
Unfortunately, software that intercepts, processes, and sometimes modifies requests and responses like this is becoming increasingly common. While these tools seem like a good idea, and appear to work fine when you browse a site with a common web browser, they often break things in non-obvious ways.
The deeper I get into the internet software stack, the more amazed I am that anything actually works at all. You'd think that TCP/IP->HTTP->XML/HTML is so common that all the bugs would be ironed out by now, but that isn't true. It is full of edge cases and unexplored scenarios where things just break, or where no one knows the correct way to do things.
Anyway, please don't go and create ad hoc modifications to the HTTP spec like this suggestion (although it is fine to modify the error message so that it doesn't give too much information away).
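A terse body covers the security concern without abandoning the status code. Something along these lines (a hand-written sketch, not taken from any particular server) tells the client the request was malformed without leaking anything about server internals:

```
HTTP/1.1 400 Bad Request
Content-Type: text/plain
Content-Length: 11

Bad request
```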