HTTP/2 has been ratified for months and browsers already support it. Everything we hear tells us that the new version of HTTP will provide significant performance benefits while requiring little to no change to our applications: all the problems with HTTP/1.x have seemingly been addressed, and we no longer need the "hacks" that let us work around them. In this talk you will learn the story behind HTTP/2, its shiny new features (multiplexing, header compression and server push), how to enable it in Jetty (live session), and a few tips for debugging it in your local development environment.
4. @patrizio_munzi 4
A 100ms delay results in a 1% sales loss
(potentially hundreds of millions in lost revenue).
A 400ms delay results in a 5-9% drop in full-page traffic.
A 500ms delay drops search traffic by 20%.
• https://www.slideshare.net/pob1970/mobile-first-lukew/41-100ms_delay_results_in_1
• https://news.ycombinator.com/item?id=273900
• http://glinden.blogspot.it/2006/11/marissa-mayer-at-web-20.html
5.
Agenda
1. A bit of HTTP history
2. HTTP/2 new features
3. HTTP/2 in practice
4. HTTP/2 adoption recipes
5. Q/A
21.
2 - HPACK
Why is HTTP/2 faster?
• Static Dictionary: 61 commonly used header fields
• Dynamic Dictionary: A list of actual headers that were encountered
during the connection. Size limited.
• Static Huffman Encoding
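A toy Python sketch of how that HPACK-style indexing works (illustrative only, not the real binary wire format): common fields hit the static table, and first-use headers enter a per-connection dynamic table so repeats go out as a small index. Real HPACK also Huffman-codes literals and evicts/renumbers dynamic entries, which this sketch omits.

```python
STATIC_TABLE = {  # a few entries from HPACK's 61-entry static table (RFC 7541)
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":status", "200"): 8,
}

class ToyHpackEncoder:
    def __init__(self):
        self.dynamic_table = {}   # (name, value) -> index
        self.next_index = 62      # dynamic entries start after the static table

    def encode(self, name, value):
        key = (name, value)
        if key in STATIC_TABLE:
            return ("index", STATIC_TABLE[key])
        if key in self.dynamic_table:
            return ("index", self.dynamic_table[key])
        # First occurrence: send the literal and remember it for this connection.
        self.dynamic_table[key] = self.next_index
        self.next_index += 1
        return ("literal", name, value)

enc = ToyHpackEncoder()
print(enc.encode(":method", "GET"))        # ('index', 2): static table hit
print(enc.encode("user-agent", "demo/1"))  # literal on first use
print(enc.encode("user-agent", "demo/1"))  # ('index', 62) on the next request
```

Because the tables live at connection level, every repeated header on the same connection shrinks to a couple of bytes.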
28.
HTTP/2 adoption (tier 1): put the static assets on an HTTP/2 CDN.
[Diagram: the browser fetches static content from the CDN over HTTP/2, while dynamic content goes over HTTP/1 to an Apache reverse proxy in front of the web server.]
29.
HTTP/2 adoption (tier 2): have a reverse proxy translating HTTP/2 calls to HTTP/1.
[Diagram: static content from the CDN over HTTP/2; dynamic content over HTTP/2 to the Apache reverse proxy, which speaks HTTP/1 to the web server behind it.]
30.
HTTP/2 adoption (tier 3): the whole infrastructure over HTTP/2.
[Diagram: static content from the CDN and dynamic content through the Apache reverse proxy and web server, everything over HTTP/2.]
Ok, now, before we start, let me tell you the only thing I want you to take away from this talk.
HTTP/2 is faster than HTTP/1. No matter what. Numerous studies confirm this.
Having a faster website means more money.
Amazon makes 1% more revenue for every 100ms shaved off every page. Given that Amazon makes millions of dollars, 1% is millions of dollars.
Yahoo measured that shaving 400ms off page load time increased traffic on the site by 9%.
HTTP was born over 20 years ago and was designed for serving this kind of page:
text with a little formatting and links to other web pages.
That is what the protocol was designed for.
A few years later, HTTP/1.0 came along and brought a couple more methods and features.
Only a year later a new version came.
It's strange that it was only a year, but there were problems to address.
It had the OPTIONS method, which was needed for cross-origin communication.
The Host header became mandatory because people started to host multiple websites on the same server.
And keep-alive.
Websites looked like this and were no longer hypertext but hypermedia.
The release of HTTP/1.1 solved only a few of the limits the HTTP protocol had. Websites became more and more complex and heavy.
Websites like MySpace, Facebook and Twitter were born, and the expectations on HTTP performance increased.
Developers started to abuse HTTP/1 and defined best practices to get around its performance limits.
Actually it wasn't really abuse. We were just using it for things it wasn't designed for.
Workarounds to increase HTTP/1 performance
Head-of-line blocking:
when you send a request on a connection to a server, that connection is useless until the request completes.
Originally a browser was only allowed 2 concurrent connections per host. With the evolution of the web, 2 connections became insufficient, so someone said, ok, let's raise the limit. But you can see that while 6 connections are better than 2, it's still a limit and just postpones the problem.
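That "raising the limit only postpones the problem" point can be made with a toy latency model (illustrative numbers, assuming one round trip per resource and one in-flight request per connection, i.e. head-of-line blocking):

```python
import math

def page_load_ms(num_resources, max_connections, rtt_ms=100):
    """Toy model: each resource costs one round trip, and a connection
    serves only one request at a time (head-of-line blocking)."""
    return math.ceil(num_resources / max_connections) * rtt_ms

# 30 resources, 100 ms round trips:
print(page_load_ms(30, 2))   # 1500 ms with the old 2-connection limit
print(page_load_ms(30, 6))   # 500 ms with 6 connections: better, still a limit
print(page_load_ms(30, 30))  # 100 ms if everything could go in parallel
```

However many connections you allow, a fixed per-host limit keeps the load time proportional to the number of resources; only full parallelism removes that factor.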
Head-of-line blocking, along with various other HTTP limitations, pushed Google to start defining SPDY, an experimental protocol initially supported only by Chrome and Google's servers.
It was a binary protocol with header compression and server push. These features were taken and reworked by the IETF HTTP working group, and HTTP/2 was born in 2015.
It became apparent that SPDY was gaining traction with implementers,
and SPDY/2 was chosen as the basis for HTTP/2.
Now, at the beginning of the presentation I told you that HTTP/2 is faster, and this is because of its 3 wonderful features.
Why is it faster? Multiplexing, which means 1 always-open connection and multiple concurrent requests, so only one connection's worth of latency.
The connection is split into multiple streams, and every stream is split into ordered frames.
Since all the requests now go over one single connection as streams, potentially all the requests can start in parallel.
And that is actually what happens.
In fact, if we go back to the Akamai demo and load the same cropped images over HTTP/2, we can see that all the image pieces are loaded in parallel.
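A small Python sketch of that stream/frame idea: each response is chopped into ordered frames, and frames from different streams are interleaved on the single connection. (The frame names and the simple round-robin scheduling here are made up for illustration; real HTTP/2 frames carry priorities and flow control.)

```python
from itertools import zip_longest

# Three concurrent responses, each split into ordered frames.
streams = {
    1: ["s1-f1", "s1-f2", "s1-f3"],
    3: ["s3-f1", "s3-f2"],
    5: ["s5-f1", "s5-f2", "s5-f3"],
}

def interleave(streams):
    """Round-robin the streams' frames onto one connection ('the wire')."""
    wire = []
    for frames in zip_longest(*streams.values()):
        for stream_id, frame in zip(streams, frames):
            if frame is not None:
                wire.append((stream_id, frame))
    return wire

wire = interleave(streams)
print(wire[:4])  # frames from streams 1, 3 and 5 alternate on one connection
```

Because each frame is tagged with its stream id, the receiver can reassemble every response in order even though the frames arrive interleaved.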
Those HTTP/1 workarounds actually become disadvantageous.
Header compression designed specifically for HTTP.
Why is it faster? HPACK. We can finally compress request headers.
The two main features here are:
header content is compressed, and
the tables live not at request level but at connection level: one more reason to have as few domains as possible.
A web browser requests a webpage (index.html in our example), and the server returns three objects to the client: index.html plus two extra objects, scripts.js and styles.css (PROMISES of resources the client is going to need soon), which are stored in a special cache reserved for that purpose. The client then parses index.html and realizes it needs three objects to load the page: scripts.js, styles.css and image.jpg. The first two are already in the browser cache because they were pushed by the server, so the client just needs to request image.jpg from the server in order to render the page.
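The flow above as a toy simulation (paths taken from the example; the push cache is just a Python set here, standing in for the browser's real push cache):

```python
# Toy simulation: the server pushes scripts.js and styles.css alongside
# index.html, so the client only has to request image.jpg itself.
push_cache = set()

def server_respond(path):
    if path == "/index.html":
        # PUSH_PROMISE-d resources land in the client's push cache.
        push_cache.update({"/scripts.js", "/styles.css"})
    return path

needed = ["/scripts.js", "/styles.css", "/image.jpg"]  # found while parsing index.html
server_respond("/index.html")
extra_requests = [p for p in needed if p not in push_cache]
print(extra_requests)  # ['/image.jpg']: the only round trip still required
```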
Drawbacks:
- Pushing resources that are already present in the browser's cache can waste precious bandwidth.
- Pushed resources compete with the delivery of the HTML, which can impact page load times. This can be avoided by limiting the amount and size of what is pushed.
- It's incredible how simple Jetty makes this:
Requirements:
- JDK 8
Show logs of
HTTP connector on 8080
HTTPS connector
HTTP2 connector on 8443
DEMO
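As a sketch of how simple that setup is in a Jetty 9.4 distribution (the module name is Jetty's; the $JETTY_HOME/$JETTY_BASE layout and the 8443 port are assumptions matching the demo connectors above):

```shell
# Enable the http2 module in the Jetty base directory; it pulls in
# its ssl/alpn dependencies automatically.
cd "$JETTY_BASE"
java -jar "$JETTY_HOME/start.jar" --add-to-start=http2

# Start Jetty; the HTTP/2 (h2) connector listens on the SSL port.
java -jar "$JETTY_HOME/start.jar" jetty.ssl.port=8443
```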
The simplest project you could start playing around with HTTP/2
Traffic shaping with ”Traffic shaper control program”
Restart the server and show that the resources are not pushed anymore; they will be pushed after the first load.
Almost all browsers support it.
And we're safe: HTTP/2 kicks in only if the browser supports it.
Server side, more than a year after standardization, all major servers support it.
To really get improvements we need to disable:
Spriting
Concatenation
Domain sharding
Even if in samples this gives great improvements, in real life the performance improvements are about 1%.
- HTTP/2 in hotels.com? hotels.com has started the adoption, but in a big company this takes some time.