out additional information/disclaimers required depending on your audience.
Know your audience
They already think they know everything about Servlet
They already think they know everything about HTTP/1.1
They’ve heard about HTTP/2.0
They are looking for what’s new. They’re looking for a reason to stick with Servlet over node.js
They like lots of code, they are suspicious of slideware.
Show of hands questions:
Who is using Servlet 3.0 or 3.1?
Who has heard something substantial about h2?
This section of the talk can follow the narrative arc and foreshadowing patterns to build to the climax of how Servlet 4.0 will provide answers to these problems.
The quest for better performance
If HTTP/1.1 was good enough for the last 15 years, why do we need something new?
30 resources
2 stylesheets
9 JavaScript files
8 JPEG images
2 PNGs
Google research used to inform design of h2 from the beginning.
Given this basic reality, what sorts of problems exist in the current state of the art? The first is called Head of Line (HOL) Blocking.
With HTTP/1.1 pipelining, you can send the requests back to back, but the server is required to respond in the same order.
Let's say style1 and style2 are ready quickly, but script1 takes some time. Because of HOL blocking, none of the other
resources can be delivered until script1 has been delivered completely.
Can we work around HOL blocking in HTTP/1.1?
Sort of: just open more sockets to the same server! Most browsers allow 6–8 simultaneous sockets per host.
We know that each socket is expensive. Particularly on servers!
Well, if the protocol itself has these performance problems, what can we do at the application level to work around them?
This is where the problem began. Prior to this, there was no concept of an inlined asset. Just goes to show the importance of going from "zero" to "one or more".
This technique works around the HOL blocking problem by concatenating many logical resources into a single physical resource.
Leverages the TCP congestion-control algorithm: a socket gets more efficient the longer it stays open and the more data flows over it, so one large transfer beats many small ones.
This concept also works with css and js. And don't forget compression. That really helps as well.
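A rough way to see why concatenation plus compression pays off: compressing several small resources separately repeats the per-stream overhead and resets the compressor's dictionary each time. A minimal sketch using the JDK's gzip support (the stylesheet snippets are made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

// Sketch: one concatenated resource usually compresses better than the
// same content split across many small resources, because the per-stream
// gzip overhead is paid once and the compressor's dictionary is shared.
public class ConcatDemo {
    static int gzipSize(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes(StandardCharsets.UTF_8));
        }
        return bos.size();
    }

    public static void main(String[] args) throws IOException {
        // Two made-up stylesheets with the kind of repetition real CSS has.
        String style1 = ".btn { color: #336699; padding: 4px; }\n".repeat(40);
        String style2 = ".nav { color: #336699; margin: 4px; }\n".repeat(40);

        int separate = gzipSize(style1) + gzipSize(style2); // two responses
        int combined = gzipSize(style1 + style2);           // one response
        System.out.println("separate=" + separate + "  combined=" + combined);
    }
}
```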
You play a stupid DNS trick (domain sharding) to get more parallel access.
Lets you work around the browser imposed limitation on how many parallel connections to a single host
Goes back to the way it was before Mosaic came along: back to a single resource.
This has numerous problems:
inlined images can't be easily cached and shared across pages.
unnecessary base64 encoding and decoding
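The base64 cost is easy to quantify: every 3 raw bytes become 4 encoded characters, roughly 33% overhead, which the browser then has to decode again. A small sketch (the 30 KB figure is an arbitrary example):

```java
import java.util.Base64;

// Sketch: base64 inlining (data: URIs) expands binary payloads by ~33%,
// and the browser must decode them again — pure overhead versus a
// separate, cacheable image request.
public class Base64Overhead {
    public static int encodedLength(int rawBytes) {
        byte[] raw = new byte[rawBytes];
        return Base64.getEncoder().encodeToString(raw).length();
    }

    public static void main(String[] args) {
        int raw = 30_000;                  // e.g. a 30 KB JPEG
        int encoded = encodedLength(raw);  // 4 chars for every 3 bytes
        System.out.println(raw + " bytes -> " + encoded + " base64 chars");
    }
}
```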
Those are the big problems that are out there in the world.
This is important because HTTP/2 is essentially a new transport layer underneath the existing HTTP/1.1 semantics + a header compression specification.
Same request/response model
No new HTTP methods (except PRI, which only appears in the connection preface)
No new headers (but new names/concepts for old headers)
No new usage pattern from application level
Same usage of URL spec and TCP ports
Care has been taken to avoid the "not invented here" syndrome, and to re-use concepts already proven successful.
Physical: electrical and physical specifications of the data connection.
Token ring, ethernet, etc.
Data Link: MAC addresses, node-to-node transfer, anyone remember PPP? Reliable communication.
Network: This is where routing comes in. This is how datagrams can make it between nodes that are not directly connected to each other. Multicast group management lives here.
Transport: This is where TCP and UDP live.
Session: home of the concept of a connection being created and closed. The connection lifecycle lives here. The UNIX Socket API lives here.
Presentation: This is where Transport Layer Security lives (incorrectly named, eh?). MIME is another example of a presentation layer protocol.
Does anyone here do anything with layers 6 and below?
Different viewpoints on the problem of the network layer for the web.
As we've already seen, sockets are treated as a throwaway resource.
This rule about how many parallel sockets may be opened is really a gentleman's agreement. The original Mosaic only opened one socket. When IE upped it to four, people started calling it "Internet Exploiter".
Another illustration of the socket angle.
what can we do at the application level to work around them?
Plug ACM
Fully bi-directional at the protocol level, no HOL blocking.
Messages, not just requests and responses: there are also control messages.
Headers must precede data.
ODD stream IDs == client originated
EVEN stream IDs == server originated
httpbis draft 14.
Length gives the length of the frame payload; it does not include the nine-octet frame header itself.
Flags are used for several purposes, one of which is to indicate that this is the end of the header, or the end of the stream.
DATA is request or response body
HEADERS is the request or response header
RST_STREAM corresponds to an error
SETTINGS lets you send configuration data, but for the connection as a whole, not an individual stream (SETTINGS frames always travel on stream 0).
PUSH_PROMISE: related to server push
PING checks whether the connection is still alive. This is necessary because closing a socket and opening a new one is now a bigger deal. In h1, if there was a problem on a socket, you just closed it
and opened another one. In h2, sockets are treated with more respect.
GOAWAY allows graceful closing of the socket.
WINDOW_UPDATE is for flow control. If the server is sending more data than the client can handle, the client can tell the server to send less.
CONTINUATION when one frame is a continuation of another one.
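The frame layout described above can be sketched as a parse of the nine-octet frame header from the spec: a 24-bit payload length, an 8-bit type, 8 bits of flags, and a reserved bit plus a 31-bit stream identifier. An illustrative sketch, not a complete frame reader:

```java
import java.nio.ByteBuffer;

// Sketch of the nine-octet HTTP/2 frame header:
// 24-bit payload length, 8-bit type, 8-bit flags,
// one reserved bit plus a 31-bit stream identifier.
public class FrameHeader {
    final int length;    // payload length only, not the whole frame
    final int type;      // e.g. 0x0 DATA, 0x1 HEADERS, 0x4 SETTINGS
    final int flags;     // e.g. END_STREAM (0x1), END_HEADERS (0x4)
    final int streamId;  // odd = client originated, even = server originated

    FrameHeader(int length, int type, int flags, int streamId) {
        this.length = length; this.type = type;
        this.flags = flags; this.streamId = streamId;
    }

    static FrameHeader parse(ByteBuffer buf) {
        int length = ((buf.get() & 0xff) << 16)
                   | ((buf.get() & 0xff) << 8)
                   |  (buf.get() & 0xff);
        int type  = buf.get() & 0xff;
        int flags = buf.get() & 0xff;
        int streamId = buf.getInt() & 0x7fffffff; // drop the reserved bit
        return new FrameHeader(length, type, flags, streamId);
    }

    public static void main(String[] args) {
        // A made-up HEADERS frame: 8-byte payload, END_HEADERS, stream 3.
        ByteBuffer sample = ByteBuffer.wrap(
            new byte[] {0, 0, 8, 0x1, 0x4, 0, 0, 0, 0x3});
        FrameHeader h = parse(sample);
        System.out.println("len=" + h.length + " type=" + h.type
            + " flags=" + h.flags + " stream=" + h.streamId);
    }
}
```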
HPACK is the single biggest thing that enables the HTTP WG to refute the claim that HTTP/2 is just a rubber stamping of Google's SPDY.
The designers of HTTP/2 observed that a lot of bytes of H1 are just headers. Furthermore, there is a lot of repetition.
Because many streams share one connection, the two endpoints can maintain an indexed table of headers for that connection, and a header block on any stream can refer to a table entry by index instead of repeating the full name and value.
This is a lot harder to implement than h1! Let’s rewind the clock to 1993. These standards were designed to be easy to implement. This fact
was crucially important to the growth of the web. Remember, back then, the web was not the only game in town. There was archie, WAIS and
gopher. If they came out of the gate with such a complex protocol, http would not have caught on as fast as it did.
There is something to be said for simplicity of implementation. But there is also something to be said for a judicious use of complexity to
increase performance where it is appropriate.
The client and server have shared state! This means there is a data structure at both ends of the connection that must be kept in sync. This was a source of controversy. Note that SPDY did not have this kind of header "compression": it simply required the regular HTTP/1.1 header block to be compressed with a general-purpose algorithm, zlib.
httpbis draft 14.
The + means "add this header to the table". The – means "remove this header from the table".
:method, :scheme, etc. are "Pseudo-Header Fields". These are special header fields defined by the HTTP/2 spec. They were defined to hold information that formerly lived on the request line, or other "must have" values such as the response status, and each name begins with a ":".
They must be transmitted in lower case or the message is treated as malformed, even though header names remain case-insensitive in meaning, as in HTTP/1.1.
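To make the indexed-table idea concrete, here is a deliberately tiny sketch of the encoder side: look a header up in the table and reuse its index, otherwise insert it and fall back to a literal. Real HPACK adds a fixed static table, size-based eviction, and Huffman coding of literals, none of which is shown here:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

// Toy sketch of HPACK's core idea: both peers keep an indexed table of
// recently seen (name, value) pairs, so a repeated header can be sent
// as a small integer index instead of the full strings.
public class ToyHeaderTable {
    private final Deque<Map.Entry<String, String>> table = new ArrayDeque<>();

    // Returns the 1-based index if the pair is already in the table.
    // Otherwise inserts it (newest entry gets index 1) and returns -1,
    // meaning "send a literal with incremental indexing".
    public int indexOf(String name, String value) {
        int i = 1;
        for (Map.Entry<String, String> e : table) {
            if (e.getKey().equals(name) && e.getValue().equals(value)) {
                return i;
            }
            i++;
        }
        table.addFirst(Map.entry(name, value));
        return -1;
    }

    public static void main(String[] args) {
        ToyHeaderTable t = new ToyHeaderTable();
        System.out.println(t.indexOf("user-agent", "demo-browser")); // literal
        System.out.println(t.indexOf("user-agent", "demo-browser")); // indexed
    }
}
```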
httpbis draft 14.
Two ways to specify priority: in headers or in a separate frame. Make sure people understand that this is not just an integer. It's not just like, "a bigger integer means it is more important."
B and C depend on A. This information is included as a header in the HEADERS frame.
These numbers are weights that correspond to the priority.
If A is stuck and nothing can be done on A, you would like to work on the lower-priority things, which are B and C in this case. B receives one third the resources of C.
An exclusive flag allows for the insertion of a new level of dependencies. The exclusive flag causes the stream to become the sole dependency of its parent stream, causing other dependencies to become dependent on the prioritized stream.
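The weight arithmetic can be sketched in a few lines. The weights 4 and 12 below are hypothetical values chosen only to reproduce the 1:3 split between B and C described above:

```java
// Sketch: among streams that depend on the same parent, available
// resources are divided in proportion to weight. The weights used
// here (4 and 12) are hypothetical, picked to give B one third the
// resources of C.
public class PrioritySplit {
    public static double share(int weight, int... siblingWeights) {
        int total = weight;
        for (int w : siblingWeights) total += w;
        return (double) weight / total;
    }

    public static void main(String[] args) {
        double b = share(4, 12);  // B gets 4 / (4 + 12) of the parent
        double c = share(12, 4);  // C gets 12 / (4 + 12), three times B
        System.out.println("B=" + b + " C=" + c);
    }
}
```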
Patterns: Foreshadow use of Server Push with JSF later
It's a facility to allow the server to pre-populate the browser's cache with data it knows the browser will need anyway.
Here is where you mention that this is not a replacement for WebSocket. It can be used in concert with SSE
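A sketch of how this surfaces in the Servlet 4.0 API via PushBuilder. The servlet path and asset names are invented for illustration; newPushBuilder() returns null when push is not available (e.g. over HTTP/1.1, or when the client has disabled push), so the servlet degrades gracefully:

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

// Sketch (Servlet 4.0): before rendering the page, push the assets we
// know the browser will request anyway, pre-populating its cache.
@WebServlet("/shop")
public class ShopServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        PushBuilder push = req.newPushBuilder();
        if (push != null) {                  // null: push unavailable
            push.path("css/site.css").push();
            push.path("js/app.js").push();
        }
        resp.setContentType("text/html");
        resp.getWriter().println("<html>...</html>");
    }
}
```

This is not runnable standalone; it needs a Servlet 4.0 container.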
HTTP RFCs
7230 HTTP/1.1 messaging
7231 HTTP/1.1 semantics and content
7232 HTTP/1.1 conditional requests
7233 HTTP/1.1 range requests
7234 HTTP/1.1 caching
7235 HTTP/1.1 authentication
4648 Base64 encoding
7323 TCP extensions for high performance
3986 URI spec
2046 MIME
6265 Cookies
Responsive: responds to user needs in a timely manner
Resilient: can withstand outages with graceful degradation
Elastic: system response is not too heavily degraded when demand is high, and resource utilization is not too heavy when demand is low.
Message Driven: allows the parts of the system to be loosely coupled and to interact with each other in an asynchronous manner.
Servlet concentrates on only two of these concerns: responsive and message driven. The other two are outside the domain of Servlet.
One problem with proudly sporting the "We Are Reactive" banner is that the instant you have to block for IO, you are no longer reactive. The main purpose of
these new APIs in Servlet 3.1 is to let you avoid blocking entirely, if you are really careful.
Uses a listener approach to solve the non-blocking IO problem.
Introduced an API to query if the streams can be read from or written to without blocking.
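A sketch of the Servlet 3.1 listener approach: isReady() answers the "can I proceed without blocking?" question, and the container calls back when more input arrives. The URL pattern is invented for illustration:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

// Sketch (Servlet 3.1): non-blocking request reading. Requires async
// mode; the container invokes onDataAvailable() instead of the servlet
// thread blocking inside read().
@WebServlet(urlPatterns = "/upload", asyncSupported = true)
public class NonBlockingReadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        AsyncContext ctx = req.startAsync();
        ServletInputStream in = req.getInputStream();
        in.setReadListener(new ReadListener() {
            @Override public void onDataAvailable() throws IOException {
                byte[] buf = new byte[4096];
                // Only read while isReady() says it will not block.
                while (in.isReady() && in.read(buf) != -1) {
                    // process the chunk without tying up the thread
                }
            }
            @Override public void onAllDataRead() { ctx.complete(); }
            @Override public void onError(Throwable t) { ctx.complete(); }
        });
    }
}
```

This is not runnable standalone; it needs a Servlet 3.1+ container.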
Servlet started on the path of reactivity in 3.0.
SSE is where we handle the "Event Driven" concern of reactive.
Mention the discussion about SSE: conclusion: no additional API needed.
Use this as a segue to mention how new features at the protocol layer
may or may not be exposed to higher layers in the stack.
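Since no additional API was needed, SSE from a servlet is just the text/event-stream content type plus a simple wire framing. A minimal sketch of that framing (the event name and payload are made up):

```java
// Sketch of the Server-Sent Events wire format: the servlet sets
// Content-Type "text/event-stream" and writes events framed like this;
// no new Servlet API was required.
public class SseFormat {
    public static String event(String name, String data) {
        return "event: " + name + "\n"
             + "data: " + data + "\n\n"; // blank line terminates the event
    }

    public static void main(String[] args) {
        System.out.print(event("price", "42.10"));
    }
}
```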
Request reliability mechanism in h2 8.1.4
The features in gray are deemed too low level to expose in Servlet, an application level abstraction.