HTTP/2 Topline Features

1. Introduction

Following on from my previous articles HTTP/2.0: Why Do We Need It?, HTTP/2 In A Nutshell, Servlet 4.0 Features, and Tracking HTTP/2 Adoption: Stagnation, I would like to look at the HTTP/2 specification in detail.

I often post about HTTP/2.0 on Twitter.

2. Topline Features

Let’s have a look at the topline features of the HTTP/2 specification.

HTTP/2 is comprised of two specifications:

  • Hypertext Transfer Protocol version 2 – RFC7540
  • HPACK – Header Compression for HTTP/2 – RFC7541

  • Request/Response Multiplexing – the most important change – requests and responses are multiplexed over a single TCP connection with full bi-directional communication
  • Binary Framing – communication over the TCP connection is broken down into frames
  • Header Compression – remove duplication from headers
  • Stream Prioritization – give priority processing to streams
  • Server Push – anticipates required assets
  • Upgrade From HTTP/1.1 – how to upgrade to HTTP/2

3. Request/Response Multiplexing

Over a single TCP connection, requests and responses are multiplexed with full bi-directional communication.

Let’s define some terms used when talking about multiplexing:

  • Connection – A single TCP socket
  • Stream – A channel within a connection
  • Message – A complete logical message, such as a request, a response, or a control message
  • Frame – The smallest unit of communication in HTTP/2; a request or response is broken down into these smaller parts

In HTTP/1.1 a given request travels as one monolithic message; with HTTP/2 we break it down into smaller frames, and this is how head-of-line blocking is resolved.

Request-Response Multiplexing

Now that communication has been broken down into frames, you can interweave the logical streams over a single TCP connection, and the issue of head-of-line blocking is removed.

3.1. TCP is Broken Down Into Frames

For a given connection you have multiple streams; for each stream you can have multiple messages; and for each message you have multiple frames.

You can see that there is a hierarchy and the frame is the fundamental unit of communication.
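
To make that hierarchy concrete, here is a minimal sketch in Java (the type names are my own and it uses records for brevity; none of this is a real API): a connection holds streams, a stream holds messages, and a message is carried as one or more frames.

    import java.util.List;

    // Illustrative model of the HTTP/2 hierarchy described above:
    // connection -> stream -> message -> frame.
    record Frame(int streamId, byte[] payload) {}
    record Message(List<Frame> frames) {}
    record Stream(int id, List<Message> messages) {}
    record Connection(List<Stream> streams) {}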

3.2. Interweave Streams

Once broken down into frames, you can interweave the logical streams over a single TCP connection.

Interweaving streams

Stream 3 sends its headers first and later sends its data (its HTTP request body); before the server has received the complete stream 3, it sends back a response on stream 2.

3.3. Binary Framing

The frame has a header, and the header carries the following information:

A frame consists of the payload length (24 bits), the type (8 bits), configuration flags (8 bits), a reserved bit, the stream identifier (31 bits), and then the frame payload (0 or more bytes).

Binary Framing

The stream identifier refers to the 1, 2, 3 in the previous diagram.
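
As a rough illustration of that layout, the sketch below (my own helper, not taken from any library) reads the fixed nine-octet frame header from a buffer: the 24-bit payload length, the 8-bit type, the 8 flag bits, and the 31-bit stream identifier after dropping the reserved bit.

    import java.nio.ByteBuffer;

    // Minimal sketch of reading the 9-octet HTTP/2 frame header described above.
    final class FrameHeader {
        final int length;     // length of the frame payload (24 bits)
        final int type;       // DATA, HEADERS, PRIORITY, ...
        final int flags;      // e.g. END_STREAM, END_HEADERS
        final int streamId;   // which stream this frame belongs to

        FrameHeader(ByteBuffer buf) {
            length   = ((buf.get() & 0xFF) << 16) | ((buf.get() & 0xFF) << 8) | (buf.get() & 0xFF);
            type     = buf.get() & 0xFF;
            flags    = buf.get() & 0xFF;
            streamId = buf.getInt() & 0x7FFF_FFFF;   // drop the reserved bit
        }
    }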

There are many different types of frames; the Type field can be:

  • DATA – corresponds to the HTTP request or response body. If you have a large body it may be split across multiple data frames (data 1, data 2, etc.)
  • HEADERS – corresponds to the HTTP headers
  • PRIORITY – carries the stream priority
  • RST_STREAM – signals an error, and also lets the client reject a server push promise because it already has the resource
  • PUSH_PROMISE – notifies the client of server push intent
  • SETTINGS, PING, GOAWAY, WINDOW_UPDATE, CONTINUATION
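
On the wire, each of these types is identified by a small numeric code in the frame header's type field; the enum below sketches the assignments from RFC 7540, section 6.

    // Frame type codes as assigned by RFC 7540, section 6.
    enum FrameType {
        DATA(0x0), HEADERS(0x1), PRIORITY(0x2), RST_STREAM(0x3),
        SETTINGS(0x4), PUSH_PROMISE(0x5), PING(0x6), GOAWAY(0x7),
        WINDOW_UPDATE(0x8), CONTINUATION(0x9);

        final int code;   // the value carried in the frame header's type field
        FrameType(int code) { this.code = code; }
    }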

Let’s see how the HTTP request is mapped to frames.

4. Header Compression

4.1. Mapping the HTTP Request to Frames

On the left, we have an HTTP request and on the right we have it mapped into a header frame.

Mapping the HTTP Request to Frames

In the header frame, there are two flags. The first, END_STREAM, is set to true (the plus sign means true), which means this is the last frame for this request. The second, END_HEADERS, when set to true means this frame is the last frame in the stream that contains header information.

Then we map the familiar header information from the HTTP 1.1 request.

The colon prefix denotes a pseudo-header, whose meaning is defined in the HTTP/2 specification.
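
As an illustration, a simple GET could be carried in the HEADERS frame as a header list like the one sketched below (the URL and header values are my own example, not taken from the figure): the pseudo-headers replace the HTTP/1.1 request line, and regular header names are lower-cased.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative only: the header list a HEADERS frame could carry for
    // "GET /index.html HTTP/1.1" with "Host: www.example.com".
    class PseudoHeaderExample {
        static Map<String, String> headerBlock() {
            Map<String, String> h = new LinkedHashMap<>();
            h.put(":method", "GET");                // pseudo-headers replace the request line
            h.put(":scheme", "https");
            h.put(":authority", "www.example.com"); // takes the place of the Host header
            h.put(":path", "/index.html");
            h.put("accept", "text/html");           // ordinary headers, lower-cased
            return h;
        }
    }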

Let’s look at the response.

4.2. Mapping the HTTP Response to Frames

On the left is an HTTP/1.1 response. It splits into two frames: a header frame and a data frame.

Mapping the HTTP Response to Frames

In the header frame, END_STREAM is marked minus because this is not the last frame, and END_HEADERS is true as this is the last frame with header information. In the data frame, END_STREAM is marked plus as it is the last frame.

The header frame contains a lot of information that has been duplicated between requests. How can we optimize this?

4.3. HPACK Header Compression

Between requests, a lot of the information is identical; between requests 1 and 2 the only difference is the path. So why send the same information over and over again?

Instead of resending it, HTTP/2 uses HPACK header compression.

HPACK keeps a table of the headers on both the client and the server; when the same header is sent a second and subsequent time, only its index in the header table is sent across.

HPACK Header Compression

Then the server/client knows which header you are actually using.
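
The sketch below is a deliberately simplified illustration of that idea, not the real RFC 7541 algorithm (which also uses a static table and Huffman coding): both peers keep an identical table, and once a header has been stored, later occurrences are replaced by its index.

    import java.util.ArrayList;
    import java.util.List;

    // Simplified illustration of HPACK's dynamic-table idea.
    class SimplifiedHeaderTable {
        private final List<String[]> entries = new ArrayList<>();

        /** Returns an index reference if the header was seen before, otherwise stores it. */
        String encode(String name, String value) {
            for (int i = 0; i < entries.size(); i++) {
                String[] e = entries.get(i);
                if (e[0].equals(name) && e[1].equals(value)) {
                    return "index:" + i;          // send only the table index
                }
            }
            entries.add(new String[] {name, value});
            return name + ": " + value;           // first time: send the literal header
        }
    }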

5. Stream Prioritization

HTTP/2 provides the ability to attach priority information to streams, which gives priority to one resource over another.

The priority can be entered in the header frame or the priority frame.

We can also define a hierarchy between streams; however, this is only advice to the server, which is completely free to ignore the priority information if it cannot respect it.
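
The priority information itself is small; the sketch below lists the fields a PRIORITY frame carries according to RFC 7540.

    // Priority information attached to a stream: whether the dependency is
    // exclusive, which stream it depends on, and a weight between 1 and 256
    // (the wire format stores weight - 1 in a single octet).
    record StreamPriority(boolean exclusive, int dependsOnStreamId, int weight) {}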

6. Server Push

Server push eliminates the need for resource inlining. The server can proactively send resources to the client, pre-populating the browser’s cache so that when a resource is needed it is already available and does not need to be requested, saving time.

6.1. The Way it Works

The client sends header frames to the server to get a file, in this case an index.html, and at some point the server will respond with the file.

The server knows that a client requesting that file will also want the resources required to render it, in this case a CSS file and an image. The server can decide to proactively send those resources to the client even though the client has not yet asked for them.

The server knows that those resources will be requested. So the server sends a push promise frame for the CSS and a push promise frame for the image. Then the server sends the index file.

The client can also decline: if it knows a resource is already in its cache, it rejects the push promise and the resource will not be sent.
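
In the Java EE 8 / Servlet 4.0 world this is exposed through the PushBuilder API. Here is a minimal sketch (the servlet path and resource paths are my own example) that pushes a stylesheet and an image before writing the page; newPushBuilder() returns null when push is not available, for example over plain HTTP/1.1.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.PushBuilder;

    // Minimal Servlet 4.0 server-push sketch.
    @WebServlet("/index.html")
    public class PushServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            PushBuilder push = req.newPushBuilder();
            if (push != null) {
                push.path("css/site.css").push();   // PUSH_PROMISE for the stylesheet
                push.path("img/logo.png").push();   // PUSH_PROMISE for the image
            }
            resp.setContentType("text/html");
            resp.getWriter().println("<html>...</html>");
        }
    }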

7. Upgrade Negotiation

So how do we start talking HTTP/2?

7.1. Two Ways to Upgrade

If you are using HTTP in clear text, you can use the HTTP/1.1 upgrade mechanism: the client sends an Upgrade header asking to switch to a protocol called h2c (the c means clear text), and the server reacts by upgrading the connection to h2c.

If you are using HTTPS, you can use ALPN (Application-Layer Protocol Negotiation), which is a TLS extension. The client sends the protocols it supports during the TLS handshake, the server selects h2, and the communication continues in h2. Note that neither Firefox nor Chrome supports h2c.

HTTP/2 over TLS and HTTP/2 over TCP have been defined as 2 different protocols, identified respectively by h2 and h2c.
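
A quick way to see the negotiation in action is the standard Java HTTP client (java.net.http, final in Java 11 after incubating in Java 9). The sketch below offers h2 via ALPN over https:// and then reports which version was actually negotiated; the URL is just a placeholder.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Minimal sketch: prefer HTTP/2 and print the negotiated protocol version.
    public class Http2Probe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .version(HttpClient.Version.HTTP_2)              // prefer HTTP/2
                    .build();

            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("https://example.com/"))  // any h2-capable URL
                    .GET()
                    .build();

            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.version());                 // HTTP_2 or HTTP_1_1
        }
    }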

You might be interested in the article Configure Tomcat 9 for HTTP/2.

I often post about HTTP/2.0 on Twitter.

Related Articles

Get an overview of the changes coming in HTTP/2.0 and how these are implemented in Java EE 8. In this article, I talk about the specification and the support that will be available in Java 9 and Java EE 8.

The adoption of HTTP/2.0 has stagnated over the last four months. In this new article, I show how server providers are not migrating to HTTP/2.0.

One of the most important developments in Java EE 8 and Java 9 will be support for HTTP/2. Tomcat 9 supports HTTP/2, but it must be configured to use TLS. Head on over to Configure Tomcat 9 for HTTP/2 to find out how to add the appropriate configuration.
