
HTTP/3 has a better goal in that it aims to push these elements back down to the transport layer using QUIC; in HTTP/2, transport abstractions bubbled up into the protocol layer, making for a messy design. HTTP/2 is a leaky abstraction: it blends too much of the transport and protocol layers and is unnecessarily complex.

Though I will add that much of the protocol and standards work of recent years, or even the last 5-10, has largely been companies aiming to take control of the standards by pushing versions that benefit them the most, adding complexity to own the space rather than sensibly simplifying. That is definitely a factor, and HTTP/2 was probably rushed for this reason.

SCTP would have been doable, as it is a transport-layer protocol, and yes, there would be difficulty in rolling it out. But Google went after QUIC, which is also transport-layer and similar to SCTP (UDP capabilities, essentially a transport version of reliable UDP mixed with ordering/verification), because they also call the shots on it. It makes sense for Google to push that, but does it make sense for everyone to just allow it? People have to understand that standards are now pushed at the company level rather than solely from engineering. HTTP/2 went beyond being a better system; it ventured into controlling-the-standards and market-standards territory.

Hopefully HTTP/3 is better and less complex, but judging by who wants it in and how much companies want to control these layers, I have my doubts. We now have three HTTP protocol versions to support; more and more this will box out individual engineers, and other browsers/web servers, from being able to compete. I don't know that the pros outweigh the cons in some of these scenarios.

Who really gained from HTTP/2 and HTTP/3? UDP was always available, as was reliable UDP. HTTP/2 and HTTP/3 feel more like a standards grab with minimal benefits for most but major benefits for the pushers. I am not against progress in any way; I am against power grabs, major overhauls that provide little benefit, 'second-system syndrome', and complexity rather than simplicity to some of those ends.

Did we really benefit from obfuscating the protocol layers of HTTP (the HyperText Transfer Protocol) into binary? What did we gain? We lost plenty: easier debugging, simplicity, control of the standard, competition, etc. Hopefully we gained from it, but I am not seeing it. We already had encryption and compression to stop ad networks/data collection; the binary gains are minimal for a lot of added complexity. Simplicity was destroyed for what? Resource inlining breaks caching. Multiplexing is nice, but it came at great cost and didn't really improve the end result.
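For contrast, here is what the "easier debugging" of a text protocol looks like in practice: an HTTP/1.1 exchange is just readable lines you can write and parse by hand (this is why telnet/netcat debugging ever worked). A minimal sketch in Python; the hostname and payload are illustrative:

```python
# A complete HTTP/1.1 request is plain text: you can write it by hand,
# log it, and diff it without any framing library.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# Parsing the other direction is equally transparent: split on CRLF.
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 5\r\n"
    "\r\n"
    "hello"
)
head, _, body = raw_response.partition("\r\n\r\n")
lines = head.split("\r\n")
version, status, reason = lines[0].split(" ", 2)
headers = dict(line.split(": ", 1) for line in lines[1:])

print(status, reason)  # 200 OK
print(body)            # hello
```

With HTTP/2's binary framing, neither direction of this exchange is inspectable without a decoder.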

HTTP/2 reminds me of the over-complexity in frameworks, SOAP vs REST, binary vs text, binary JSON, and many other things that were edge cases that now everyone has to deal with. As engineers we must take complexity and simplify it; that is the job. I don't see much of that in recent years in standards and development. Minimalism and simplicity should be the goal; complexity should be like authority: questioned harshly and allowed only when there is no other way.

Making another version and more complexity is easy; making something simple is extremely difficult.



Multiplexing is beneficial; otherwise we'd never have seen HTTP user agents opening multiple TCP connections.

The gain of moving to binary was the ability to multiplex multiple requests in a single TCP connection. It also fixes a whole class of errors around request and response handling; see the recent request-smuggling coverage from Black Hat [1].
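That binary framing is what makes the multiplexing work: every HTTP/2 frame begins with a fixed 9-octet header (RFC 7540, section 4.1) carrying a payload length, a type, flags, and the stream identifier that routes it. A minimal decoder sketch in Python; the sample bytes are illustrative:

```python
import struct

def parse_frame_header(data: bytes):
    """Decode the fixed 9-octet HTTP/2 frame header (RFC 7540, s4.1).

    Layout: 24-bit payload length, 8-bit type, 8-bit flags,
    then 1 reserved bit + 31-bit stream identifier.
    """
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")
    frame_type = data[3]
    flags = data[4]
    # Mask off the reserved high bit of the stream identifier.
    stream_id = struct.unpack(">I", data[5:9])[0] & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A HEADERS frame (type 0x1) with END_HEADERS (flag 0x4) on stream 3,
# announcing a 16-byte payload:
header = b"\x00\x00\x10" + b"\x01" + b"\x04" + b"\x00\x00\x00\x03"
print(parse_frame_header(header))  # (16, 1, 4, 3)
```

The stream identifier is what lets many requests interleave on one TCP connection, but it also means the wire traffic is no longer readable without tooling like this.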

HTTP/2 isn't perfect, but I don't buy that HTTP/1.1 is the ideal simple protocol. There are a ton of issues with its practical usage and implementation.

We live in complex times and the threat models are constantly advancing. Addressing those requires protocols that by their nature end up more complex. The "simple" protocols of the past weren't designed under the same threat models.

[1] https://portswigger.net/blog/http-desync-attacks-request-smu...
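To make that smuggling class concrete: the classic CL.TE desync arises when a front end honors Content-Length while the back end honors Transfer-Encoding, so the two disagree on where a request body ends. A toy illustration in Python; both "parsers" here are deliberately naive stand-ins, not real servers:

```python
raw = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 6\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"G"
)

head, _, rest = raw.partition(b"\r\n\r\n")

# Front end trusts Content-Length: the body is exactly 6 bytes
# (b"0\r\n\r\nG"), so it forwards everything as one request.
cl_body = rest[:6]

# Back end trusts Transfer-Encoding: chunked: the "0" chunk terminates
# the body immediately, and the trailing "G" is left in the buffer,
# to be prepended to the NEXT request on the connection.
te_body, _, leftover = rest.partition(b"0\r\n\r\n")

print(cl_body)   # b'0\r\n\r\nG'
print(leftover)  # b'G'  <- smuggled prefix of the next request
```

HTTP/2's length-prefixed frames remove this ambiguity, since there is exactly one way to delimit a body.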


No doubt multiplexing is good, but it is better handled in the transport layer than in the protocol layer.

Protocols should be programmable. HTTP/2 currently requires libraries, and being binary by nature, there are more chances for vulnerable libraries; that is a fact. Because they are more complex, there is more room for error and for holes.

The smarter move would have been to put this in the transport layer, or, if that was undoable, to use a combination of a protocol surface layer and a protocol transport layer.

Iterations are better than the "second-system effect" in most cases. Engineers are making too many breaking changes and new standards that benefit companies over the whole of engineering and over internet freedom and control. Companies largely wanted to control these protocols and layers, and they have done that; you have to see that was a big part of this.

HTTP/2 benefits cloud providers and Google (especially since they drove this with SPDY, then HTTP/2, then QUIC, now HTTP/3) more than most of engineering, and it was done that way on purpose. The average company was not helped by adding this complexity for little gain. The layers underneath could have been smarter and simpler; making the top layers easy is difficult, but that is the job.

In a way HTTP was hijacked into a binary protocol; it should have just been a Binary Hypertext Transfer Protocol (BHTTP) or something that HTTP rides on top of. Too much transport bubbled up from the transport layer into the protocol layer with HTTP/2, leaving a bit of a mess: a leaky abstraction, a big ball of binary.

> We live in complex times and the threat models are constantly advancing. Addressing those requires protocols that by their nature end up more complex. The "simple" protocols of the past weren't designed under the same threat models.

The topmost layers can still be simple even as you evolve. Surface and presentation layers can be simplified, and making them more complex does not improve their threat models or reduce their vulnerabilities. As the OP article shows, vulnerabilities will always exist, and when systems are more complex they happen more often.

I have had to implement MIME + HTTP protocols and RFCs for other standards like EDIINT AS2 and others. Making a standalone HTTP/HTTPS/TLS-capable web/app server product is now many times more complex, and will be more so when HTTP/3 comes out. Not fully yet, but as time goes on, consolidation happens and competition melts away when things are made more complex for minimal gains. There are lots of embedded systems and other things that still use HTTP servers that will now be more complex, again for minimal gains.

Software should evolve to be simpler, and when complexity is needed, it should be worth it, getting us to another level of simplicity. Making things simple is the job, and it is what engineers, standards bodies, and product people should do. Proprietary binary blobs are where the internet is headed when you start down this path. I am looking forward to another Great Simplification like the one that happened when the early internet standards were set out, because that spread technology and knowledge. Now there is a move away from that, into complexity for power and control with little benefit. We are just in that part of the wheel, cycle, or wave.



