Monday, April 28, 2025

Java HTTP3 / QUIC implementation: theory

History of HTTP

Since HTTP/1.0, HTTP has been a request-response protocol: the client sends a request to the server, and the server sends back a response. The request contains at least a request method (GET, POST etc.) and the requested resource path, and can contain additional headers and a request body. The response contains at least a status code, and can contain additional headers and a response body.

HTTP/1.0 used a separate TCP connection for each request. The client opened a connection, sent the request, read the response and closed the connection. This scheme was inefficient for a few reasons:

  • TCP connections start out in a so-called "slow start" state: the transfer rate is artificially limited at first and increases as more and more data is transferred. Since HTTP/1.0 uses a fresh TCP connection for each request, every request suffers the slow start.
  • Using encryption (TLS) makes things even worse. TLS requires large amounts of CPU during connection establishment, and more connections mean more CPU usage.

HTTP/1.1 addressed these points by reusing connections. The client can send multiple requests over the same connection, and the server responds to each of them over that same connection. This required a small change to the HTTP protocol: both the client and the server must now declare where the message body ends, either by sending a content-length header, a transfer-encoding header, or by using methods known to carry no body. Other than this, the protocol looks exactly like before.
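Connection reuse is directly observable with Java's own java.net.http.HttpClient. A minimal sketch (the URL is a placeholder): a single client instance keeps the underlying connection alive, so consecutive requests to the same host reuse it.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http11Reuse {
    public static void main(String[] args) throws Exception {
        // Pin the client to HTTP/1.1; keep-alive is on by default.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))
                .GET()
                .build();
        // Both requests travel over the same TCP connection,
        // avoiding a second slow start and a second TLS handshake.
        for (int i = 0; i < 2; i++) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }
}
```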

HTTP/1.1 improved the transfer speeds a lot, but still left some room for improvement:

  • responses had to be sent in the order in which requests were received. If the client sent multiple requests and generating the response to the first one took time, the connection could not be used to transfer other data and sat idle.
  • while multiple requests could in theory be sent at the same time, in practice many servers mishandled such pipelined requests due to implementation bugs. Clients therefore often wait for the server response before sending a follow-up request, again leaving the connection idle.

HTTP/2 addressed these limitations by introducing multiplexing. Multiplexing means that it is now possible to send multiple streams over a single TCP connection. Each stream enables data transfer in both directions, and the connection can alternate between different streams at any time. Each request / response exchange is done on its own stream.

HTTP/2 can fully utilize a TCP link. Both the client and the server can send data over any stream at any time, so the connection is only idle when there is nothing to send on any of the active streams.
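With Java's HttpClient this multiplexing is transparent to the application. A sketch (placeholder URLs): several requests are issued concurrently and, when the server speaks HTTP/2, share one TCP connection, each on its own stream.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Stream;

public class Http2Multiplexing {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
        // Fire all requests without waiting for responses; each exchange
        // is carried on its own HTTP/2 stream over a shared connection.
        List<CompletableFuture<HttpResponse<String>>> futures =
                Stream.of("/a", "/b", "/c")
                        .map(path -> HttpRequest.newBuilder()
                                .uri(URI.create("https://example.com" + path))
                                .GET()
                                .build())
                        .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofString()))
                        .toList();
        futures.forEach(f -> System.out.println(f.join().statusCode()));
    }
}
```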

Fully utilizing TCP was not enough for the authors of HTTP/3; TCP itself leaves room for improvement:

  • before any data can be exchanged, the connection goes through a 3-way handshake, which takes one round-trip time to complete,
  • if any TCP packet is lost, data in subsequent packets is not deliverable until the lost packet is retransmitted and received,
  • there are many other points where a new protocol could improve upon TCP, I will discuss them later.
Most importantly though, TCP is next to impossible to evolve (it's "ossified"). There were attempts to improve the TCP protocol (see TCP Fast Open for example), but they met with resistance. It turned out that in order to support TCP Fast Open, it is not sufficient to have two endpoints that understand TCP Fast Open. Many devices in the network infrastructure have their own understanding of which TCP packets are correct and which ones are not, and need to be updated to understand TCP Fast Open, otherwise they simply drop these packets, negating any possible performance gains.

Compared to HTTP/2, HTTP/3 offers only cosmetic changes. The big change comes from replacing TCP with QUIC as the underlying protocol.

QUIC protocol

QUIC replaces TCP as the underlying transport for HTTP/3. Like TCP, it offers reliable in-order delivery. Unlike TCP, QUIC is always encrypted, and it natively supports multiple data streams: where HTTP/2 had to implement its own multiplexing, HTTP/3 delegates it to the QUIC layer. One advantage of multiplexing at the QUIC layer is that data loss on one stream does not block data delivery on other streams.

Importantly though, everything in QUIC is encrypted end to end, including packet numbers, acknowledgements, and reset packets. This limits the options for the network devices to interfere with QUIC traffic, or to ossify on a specific QUIC version.

Compared to TCP, I find the following differences interesting:

Path MTU detection (PMTUD)

Both QUIC and TCP prevent packet fragmentation and detect the maximum packet size (maximum transfer unit, MTU) that is supported by the path. Using larger packet sizes improves efficiency, that is, the same payload can be delivered in a smaller number of packets.

With TCP, MTU detection can be performed using one of the following methods:

- The SYN packet includes a Maximum Segment Size (MSS) option. This option can be modified by routers along the way, and the recipient derives the maximum transfer unit from the received MSS.

- If a router on the path receives a packet larger than it can handle without fragmentation, it drops the packet and sends an ICMP message back to the sender with the maximum size it supports.

Neither of these methods is authenticated. When an endpoint receives an MSS option or an ICMP message, it has no way to determine whether it is authentic or forged.

There have been cases where ICMP packets were used to trick TCP endpoints into using very small packet sizes, and as a result many implementations ignore or block ICMP packets. This can sometimes lead to a situation where the TCP stack selects an MTU larger than the network supports, and the connection then breaks once the connected parties try to send data.

With QUIC, MTU detection can be performed using one of the following methods:

- The handshake is performed using 1200-byte datagrams, and fails if the network does not support this datagram size.

- Support for larger datagram sizes is probed. If a packet of a given size is acknowledged by the peer, that size is supported. ICMP messages may be used to guide which sizes to probe, but they cannot be used to reduce the packet size below 1200 bytes.

- An endpoint can advertise the maximum datagram size it is willing to accept. If this number is modified in transit, the connection fails.
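The probing step can be viewed as a search for the largest acknowledged datagram size. A toy sketch (discoverMtu and the probe predicate are made up for illustration; real DPLPMTUD implementations probe asynchronously and re-probe over time):

```java
import java.util.function.IntPredicate;

public class PmtuProbe {
    static final int QUIC_MIN_MTU = 1200;

    /**
     * Binary-search the largest datagram size the path supports.
     * probeAcked.test(size) stands for "a probe packet of this size
     * was acknowledged by the peer".
     */
    static int discoverMtu(IntPredicate probeAcked, int upperBound) {
        int lo = QUIC_MIN_MTU;   // guaranteed to work by the handshake
        int hi = upperBound;     // e.g. the peer's advertised maximum
        while (lo < hi) {
            int probe = (lo + hi + 1) / 2;
            if (probeAcked.test(probe)) {
                lo = probe;      // probe delivered: this size is supported
            } else {
                hi = probe - 1;  // probe lost: try smaller sizes
            }
        }
        return lo;
    }

    public static void main(String[] args) {
        // Simulated path that supports datagrams up to 1472 bytes.
        int mtu = discoverMtu(size -> size <= 1472, 65527);
        System.out.println(mtu); // 1472
    }
}
```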

Connection resilience

When running TCP+TLS, a corruption of a single bit is usually enough to terminate the connection. QUIC on the other hand is able to detect and discard a corrupted packet, and continue processing non-corrupted packets. Once the handshake is completed, it is practically impossible for a third party to create a QUIC packet that would cause connection termination.

Connection closure

TCP options to close a connection are limited to:

  • closing the sending side of the connection
  • resetting the connection

This works well enough in many cases, but not when one peer needs to send a final message and then abruptly close a connection on which the other peer is actively sending. In that case, the connection is usually reset, and the final message is lost.

QUIC separates closing the stream from closing the connection. For closing the stream, it offers the following options:

  • closing the sending side of the stream
  • resetting the sending side of the stream
  • notifying the peer of closing the receiving side of the stream

And for closing the connection:

  • closing the connection with an error message
  • resetting the connection, used only when the peer is sending over a connection that no longer exists
  • timing out after a negotiated period of inactivity

Congestion control

TCP only offers limited information to the congestion controller:

  • the last acknowledged sequence number is always available
  • optionally, the endpoints can negotiate support for selective acknowledgements (SACKs) to acknowledge data received out of order (supported by most implementations). SACKs can be reneged, i.e. an endpoint can request retransmission of data it previously acknowledged.
  • optionally, the endpoints can support timestamps (only supported by some implementations) to indicate the order in which packets were transmitted.
  • optionally, the endpoints can negotiate support for ECN. This has to be supported by the devices on the path, and bugs in those devices long hindered its adoption.
Compared to that, QUIC offers more information:

  • QUIC acknowledges packets, not sequence numbers. This way, when a packet is retransmitted and later acknowledged, it is clear whether the acknowledgement applies to the original packet, to the retransmitted one, or to both.
  • Packets are always acknowledged, even in the presence of packet loss.
  • Packets cannot be reneged - once acknowledged, the data may not be discarded.
  • Acknowledgements contain timing information - it is always clear whether an acknowledgement was delayed by the sender, and by how much.
  • ECN support is detected, and ECN information is only used when no bugs are detected.
The QUIC congestion control algorithm defined by RFC 9001 does not match the performance of the CUBIC TCP controller, but some QUIC implementations already offer CUBIC.
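For illustration, a heavily simplified sketch of the NewReno-style window logic from RFC 9002 (class and field names are made up; a real controller also tracks recovery periods, persistent congestion and pacing):

```java
public class NewRenoSketch {
    static final int MAX_DATAGRAM_SIZE = 1200;

    long cwnd = 10 * MAX_DATAGRAM_SIZE;   // initial window, per RFC 9002
    long ssthresh = Long.MAX_VALUE;       // no threshold until first loss

    void onPacketAcked(long ackedBytes) {
        if (cwnd < ssthresh) {
            // Slow start: grow the window by the number of bytes acknowledged.
            cwnd += ackedBytes;
        } else {
            // Congestion avoidance: roughly one datagram per window of acks.
            cwnd += (long) MAX_DATAGRAM_SIZE * ackedBytes / cwnd;
        }
    }

    void onCongestionEvent() {
        // Halve the window on loss, but never below the minimum window.
        ssthresh = cwnd / 2;
        cwnd = Math.max(ssthresh, 2L * MAX_DATAGRAM_SIZE);
    }
}
```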

Handshake improvements

DoS prevention: when a TCP server deals with a flood of TCP SYN packets, it starts sending SYN cookies. They permit the server to defer allocating state for a connection until the client address is confirmed. However, the SYN cookies lose information about TCP extensions present in the SYN packet, like MSS or TCP window scale.

When a QUIC server deals with a flood of initial packets, it starts sending retry packets. They also permit the server to defer allocating state for a connection until the client address is confirmed. They do not lose any information, but they cost one round trip time.

Timing improvements: a TCP + TLS handshake costs at least 1 RTT before application data can flow (1 RTT for the TCP handshake, plus 0 RTT for TLS 1.3 with early data); QUIC can carry data in the very first packet, making it a true 0-RTT handshake.

MTU validation: QUIC sends 1200-byte datagrams during handshake, validating that the path supports this datagram size.

Path migration

A TCP connection is initiated between 2 given addresses. Changing either address requires establishing a new connection and, in the case of TLS, performing a new handshake.

A QUIC connection is established between 2 given addresses. The client address can change at any time without affecting connection state, but the new path requires validation before the anti-amplification limit is lifted. Changing the server address has the same cost as with TCP + TLS: a new connection and a new handshake.

Wednesday, August 28, 2024

Java Http3/QUIC implementation security, part 5: HTTP/3

...continued from part 4

RFC 9204 QPACK: Field Compression for HTTP/3

7.1 Probing Dynamic Table Size

HttpClient only uses the dynamic table for known-safe fields: ":authority" and "user-agent".

Fields "cookie", "authorization" and "proxy-authorization" are flagged with the never-indexed bit.

7.2. Static Huffman Encoding

No additional requirements.

7.3 Memory Consumption

HttpClient limits the maximum size of the dynamic table to 4096 bytes. Blocked streams are disallowed by default.

The encoder table size is limited to 4KB even if the decoder advertises a larger table size.

The decoder limits the allowed field section size to 384KB. When that size is reached, the processing is aborted.

We currently do not monitor the amount of unsent data on the encoder and the decoder stream.
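The field section size check mentioned above can be sketched as a simple accumulator (class name and limit are illustrative; RFC 9204 counts each field as name length + value length + 32 bytes of overhead):

```java
public class FieldSectionLimiter {
    private final long maxFieldSectionSize;
    private long consumed;

    FieldSectionLimiter(long maxFieldSectionSize) {
        this.maxFieldSectionSize = maxFieldSectionSize;
    }

    /** Accounts one decoded field; aborts processing when the limit is hit. */
    void onField(String name, String value) {
        consumed += name.length() + value.length() + 32;
        if (consumed > maxFieldSectionSize) {
            throw new IllegalStateException("field section too large, aborting");
        }
    }
}
```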

7.4 Implementation Limits

Integer values that can't be encoded in a Java long are rejected. String literals longer than 2GB are rejected, but only after parsing. This will be improved before the final release.

EDIT 24.04.2025:

Long string literals are rejected without parsing. Maximum acceptable length of a header field is configurable.

Java Http3/QUIC implementation security, part 4: HTTP/3

 ...continued from part 3

RFC 9114 HTTP/3

10.1 Server Authority

HTTP/3 uses QUIC and TLS to verify the server authority. We always set endpoint identification algorithm to HTTPS to ensure that the server certificate identity is authoritative for the URL host name.

10.2 Cross-Protocol Attacks

The underlying TLS implementation ensures that both parties agree on the ALPN.

10.3 Intermediary-Encapsulation Attacks

HttpClient validates incoming field names and values. Responses containing invalid fields are treated as malformed, and are not delivered to the application.

10.4 Cacheability of Pushed Responses

HttpClient does not cache any responses. 

The default PushPromiseHandler rejects push promises where the :authority header does not match the hostname that was used to establish the connection. Custom push promise handlers might choose to implement different checks.

10.5 Denial-of-Service Considerations

HttpClient limits PUSH_PROMISE frames by allowing at most 100 concurrently used push IDs.

The maximum allowable SETTINGS frame size is limited to 1280 bytes, which is more than enough to hold all defined settings.

HttpClient does not monitor the use of unknown frame types and unknown stream types; the H3_EXCESSIVE_LOAD error is never generated.

HttpClient limits the maximum size of a field section and the maximum size of a field.

10.6 Use of Compression

HttpClient does not support compression. The Accept-Encoding and Content-Encoding headers are not set by the client. They may be set by the application.

10.7 Padding and Traffic Analysis

No additional requirements.

10.8 Frame Parsing

HttpClient checks the frame lengths.

10.9 Early data

HttpClient does not implement 0-RTT.

10.10 Migration

No additional requirements.

10.11 Privacy Considerations

No additional requirements.

continued in part 5...

Monday, August 26, 2024

Java Http3/QUIC implementation security, part 3: QUIC

 ...continued from part 2

RFC 8999, 9368, 9369

The security considerations sections of these documents focus on downgrade prevention. No additional requirements beyond what is already discussed elsewhere in the documents.

RFC 9001 Using TLS to Secure QUIC

9.1 Session Linkability

JSSE TLS implementation does not reuse session tickets. It is also possible to prevent session resumption by using a different SSLContext for every connection.

9.2 Replay Attacks with 0-RTT

0-RTT requires support in HttpClient, QUIC and TLS. None of these is implemented.

9.3 Packet Reflection Attack Mitigation

This section discusses the server-side anti-amplification limit. The requirements do not apply to the client side.

9.4 Header Protection Analysis

No additional requirements

9.5 Header Protection Timing Side Channels

We do not discard packets with duplicate packet number without decrypting them first.

We do not generate packet decryption keys while decrypting.

The packet decryption time might differ between current, previous and next key space. It might need further improvement.

9.6 Key Diversity

No additional requirements

9.7 Randomness

Connection IDs are generated with a secure random number generator.

RFC 9002 QUIC Loss Detection and Congestion Control

8.1 Loss and Congestion Signals

No additional requirements

8.2 Traffic Analysis

No additional requirements

8.3 Misreporting ECN Markings

Our QUIC implementation does not currently support sending or receiving ECN.

This concludes the overview of QUIC RFCs.

continued in part 4...

Java Http3/QUIC implementation security, part 2: QUIC

...continued from part 1

21.5 Request Forgery Attacks

This paragraph focuses on the risk posed by reflected datagrams. The concerns are somewhat similar to those in the anti-amplification section, except that the focus here is on sending datagrams to otherwise inaccessible services, and forging datagrams that would make those services react in a specific way.

Most of the concerns listed here apply to the server side; other than using the server-supplied preferred address, the client does not migrate to other addresses. We do not support preferred address at the moment, so that doesn't apply either.

21.6 Slowloris attack

Slowloris aims at making the endpoint keep as many connections open as possible.

HttpClient may keep multiple connections to the same server. The number of open connections to a single server is at most the number of outstanding requests plus one. It's the user's responsibility to limit the number of concurrently executing requests.
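One way for an application to enforce such a limit is to gate requests with a semaphore; a sketch (the class name is made up, this is not HttpClient API):

```java
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

public class BoundedRequests {
    private final HttpClient client = HttpClient.newHttpClient();
    private final Semaphore permits;

    BoundedRequests(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    CompletableFuture<HttpResponse<String>> send(HttpRequest request)
            throws InterruptedException {
        permits.acquire();   // block until a concurrency slot is free
        return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .whenComplete((response, error) -> permits.release());
    }
}
```

Bounding concurrent requests also bounds the number of connections the client keeps open to any single server.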

21.7 Stream Fragmentation and Reassembly Attacks

A QUIC implementation needs to buffer stream data on the sending side until the data is acknowledged by the peer, and on the receiving side until the data is consumed by the higher layer. If there are gaps in the received stream, the data needs to be buffered until the gaps are filled. This can lead to excessive memory consumption.

On the receiver side, HttpClient limits the MAX_DATA QUIC parameter to a maximum of 15 MB per connection at all times. If certain portions of stream data are received multiple times, only one copy is preserved until the data is received by the application. Buffer memory utilization is therefore bounded.

Memory structures used to store discontinuous ranges of stream data might consume excessive amounts of memory. The maximum memory usage has not been measured.

The crypto stream receive buffer is limited to 64KB per connection.

On the sender side, we buffer as much data as the congestion controller allows. This might lead to memory overcommit if the receiver successfully inflates the congestion window.

EDIT 24.04.2025:
We now detect when the peer sends an excessive number of small frames, and close the connection when that happens. The detector verifies that the average fragment size stays above a certain threshold once the number of undeliverable fragments gets large.
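The detector described in the edit can be approximated like this (names and thresholds are illustrative, not the actual implementation):

```java
public class FragmentationDetector {
    private final int minAverageSize;   // minimum acceptable average fragment size
    private final int checkThreshold;   // fragment count before we start judging
    private long bufferedFragments;
    private long bufferedBytes;

    FragmentationDetector(int minAverageSize, int checkThreshold) {
        this.minAverageSize = minAverageSize;
        this.checkThreshold = checkThreshold;
    }

    /** Returns false when the buffered fragments look like an attack. */
    boolean onFragmentBuffered(int size) {
        bufferedFragments++;
        bufferedBytes += size;
        if (bufferedFragments < checkThreshold) {
            return true;   // too few fragments to judge yet
        }
        return bufferedBytes / bufferedFragments >= minAverageSize;
    }
}
```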

21.8 Stream Commitment Attack

We limit the number of streams the peer can open at any time to 100 per stream type per connection.

21.9 Peer Denial of Service

This section recommends to "track cost of processing relative to progress and treat [excess] as indicative of an attack".

We do not track the cost of processing.

EDIT 24.04.2025:
see the edit on point 21.7 above

21.10 Explicit Congestion Notification Attacks

No additional requirements.

HttpClient's QUIC implementation does not support sending or receiving ECN yet.

21.11 Stateless Reset Oracle

Every QUIC endpoint uses a different randomly generated key for generating stateless reset tokens. The keys are never shared, so a stateless reset is only generated if a connection ID is not in use.

EDIT 24.04.2025:
Recipe for a stateless reset oracle:
The client can open multiple endpoints. All endpoints use the same key to generate stateless reset token, but each endpoint keeps its own list of active connection IDs.

Our client keeps a different key on every associated endpoint.
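A common construction, sketched below, derives each stateless reset token from the connection ID with a keyed hash, so an endpoint can recompute tokens without storing them. The class is illustrative, not the actual HttpClient code; the point is that a per-endpoint key makes tokens unlinkable across endpoints.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class StatelessResetTokens {
    private final SecretKeySpec key;   // one key per endpoint, never shared

    StatelessResetTokens() {
        byte[] keyBytes = new byte[32];
        new SecureRandom().nextBytes(keyBytes);
        this.key = new SecretKeySpec(keyBytes, "HmacSHA256");
    }

    /** Derives the 16-byte stateless reset token for a connection ID. */
    byte[] tokenFor(byte[] connectionId) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        return Arrays.copyOf(mac.doFinal(connectionId), 16);
    }
}
```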

21.12 Version Downgrade

The current implementation only supports QUIC v1 and v2. These versions offer identical security properties, so version downgrade is not a concern.

21.13 Targeted Attacks by Routing

This section describes deployment concerns, as opposed to implementation concerns. No additional implementation requirements.

21.14 Traffic Analysis

Currently our QUIC API offers no way to obscure the length of the packet content.


This concludes the review of RFC 9000 security considerations.

continued in part 3...

Tuesday, August 20, 2024

Java Http3/QUIC implementation security, part 1: QUIC

Support HTTP/3 in the HttpClient

As part of the JEP, we implement:

  • RFC 9114: HTTP/3
  • RFC 9204: QPACK: Field Compression for HTTP/3
  • RFC 8999: Version-Independent Properties of QUIC
  • RFC 9000: A UDP-Based Multiplexed and Secure Transport
  • RFC 9001: Using TLS to Secure QUIC
  • RFC 9002: QUIC Loss Detection and Congestion Control
  • RFC 9368: Compatible Version Negotiation for QUIC
  • RFC 9369: QUIC Version 2

QUIC is implemented on top of TLS 1.3, defined in RFC 8446. TLS 1.3 support in JSSE was implemented in a prior JEP, and this JEP builds on top of that work.

The goal of the JEP is to deliver a working implementation of HTTP/3 in the HttpClient. The QUIC implementation is supposed to be in a reasonably usable state; in particular, optional features and features that are only required by the server might not exist.

QUIC security considerations

This section structure mirrors the formal requirements specified in RFC 9000.

21.1.1.1 Anti-Amplification

The QUIC implementation is supposed to limit the number of bytes it sends to an unvalidated address.
- The client only sends stateless reset messages to unvalidated addresses. We make sure that the stateless reset messages are only sent when they are strictly smaller than the incoming datagram.
- The server also sends handshake messages to unvalidated clients. Our server-side implementation does not have anti-amplification limit.

21.1.1.2 Server-Side DoS

In order to filter out forged handshake packets, the server can implement secure token generation, either in a retry packet, or in a new_token frame.
- Our client implementation of new_token and retry is complete.
- The tokens generated by our server are not secure and easily forged, offering no protection against DoS.

21.1.1.3 On-Path Handshake Termination

We offer no extra protection against forged initial/retry packets.

21.1.1.4 Parameter Negotiation

No additional requirements

21.1.2 Protected packets

No additional requirements

21.1.3 Connection Migration

The server can offer a preferred address, and the client can choose to migrate to the preferred address or stay on the original one.
The client can switch addresses as a result of a (local) network change or as a result of a (remote) NAT rebinding.
In order to tell apart a real and a spoofed address migration, the QUIC endpoints are supposed to implement path validation. Until the path validation succeeds, the new address is subject to anti-amplification limit.
Our implementation:
- does not perform path validation. The handling of connection migration needs to be reevaluated.
- always sends packets to the same remote address. This is good enough on the client side, but not good enough on the server side.
- selects source address individually for each packet. The client source address might change in the middle of a connection if the routing tables change. This would be reasonable if we implemented path validation.
- optionally filters the source address on the incoming packets. This is good on the client side, but might be counterproductive on the server side.

EDIT 24.04.2025:
If we accidentally migrate to a different address, the server will send a PATH_CHALLENGE frame. We respond to that with a PATH_RESPONSE. This should enable the server to migrate to the new path.
We do not switch connection IDs when that happens, but we send connection IDs for the server to use.

21.2 Handshake Denial of Service

No additional requirements

21.3 Amplification Attack

Server guidance only. Our server implementation does not offer any guarantees for token validity.

21.4 Optimistic ACK Attack

(optional) We are vulnerable to optimistic ACK attack. We do not detect acknowledgements of non-existent packet numbers other than packet numbers that were not assigned yet.

continued in part 2...

Monday, December 20, 2021

Running cross-translation-unit static analysis on OpenJDK

Scan-build, discussed in an earlier post, is pretty effective at detecting issues within a single C file. Additional insights can be gained by applying cross-translation-unit analysis. This post discusses how to install and use CodeChecker.

Prerequisites

A working build of OpenJDK, and clang tools v10 installed as discussed in the previous post.

Set up clang

By default clang v10 is only available as "clang-10" and not as "clang". This can be corrected using update-alternatives script:

sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-10   81 --slave /usr/bin/clang++ clang++ /usr/bin/clang++-10 

sudo update-alternatives --install /usr/bin/clang-tidy clang-tidy /usr/bin/clang-tidy-10   81

Install pip

Required to install CodeChecker

sudo apt install python3-pip

Install CodeChecker

pip3 install codechecker

Generate compilation database

CodeChecker log --build "make" --output ./compile_commands_ori.json
sed "s/-fno-lifetime-dse //" compile_commands_ori.json >compile_commands.json 

Run analysis

CodeChecker analyze --ctu compile_commands.json -o reports

Limiting the number of worker processes (e.g. with -j2) may be necessary; some analysis processes consumed 6G of memory.

Results

None so far; the process takes ages to complete. I will try again on a more powerful machine.

Sources

https://askubuntu.com/a/1187858

https://github.com/Ericsson/codechecker

https://codechecker.readthedocs.io/en/latest/usage/