Memory leak during Response forwarding #4060

@sebadob

Description

Version

v1.9.0

Platform

Linux 6.19.14-200.fc43.x86_64

Summary

I am not sure if this is actually a leak, or if I screwed up and am missing something.

The situation: I started building a reverse proxy based on hyper. I am now at the point where I am running benchmarks and memory profiling to evaluate the whole thing and get an idea of what's happening in detail. In one specific scenario, when I forward a Response<_> that I got from upstream, the Parts leak memory, and I do not get the memory back even after the client has received the complete and correct response.

This happens when I accept HTTP/2 connections that are then forwarded to an HTTP/1.1 upstream. When I receive the h1 response from upstream and simply forward the Response<_> (after custom business logic, removing hop-by-hop headers, and so on), the size of the HeaderMap is leaked in memory. I can resolve this immediately by calling parts.headers.drain() after resp.into_parts() and forcing a new allocation for each value with HeaderValue::from_bytes(value.as_bytes())?. I can forward the HeaderName as-is, but if I don't allocate a fresh HeaderValue for each entry, memory is leaked.

The leak does not occur for any of the other combinations:

  • h2 listener with h2 upstream
  • h1 listener with h1 upstream
  • h1 listener with h2 upstream

All of these are fine. The leaking one is only h2 listener with h1 upstream.

So my current "fix", after getting a response from upstream, looks something like this:

if upstream.protocol == UpstreamProtocol::Http11 && ctx.protocol == Protocol::Http2 {
    let (mut parts, body) = resp.into_parts();

    // Rebuild the response so none of the h1-originated header storage is reused.
    let mut resp = Response::builder().status(parts.status).body(body)?;
    let headers = resp.headers_mut();
    headers.reserve(parts.headers.len());

    // `drain()` yields the name only for the first value of a repeated
    // header; subsequent values come with `None` and belong to the last
    // seen name, so track it instead of dropping those entries.
    let mut last_name = None;
    for (name, value) in parts.headers.drain() {
        if let Some(name) = name {
            last_name = Some(name);
        }
        let Some(name) = last_name.clone() else { continue };
        // Copy the value into a fresh allocation instead of keeping the
        // upstream-backed bytes alive.
        let owned = HeaderValue::from_bytes(value.as_bytes())?;
        headers.append(name, owned);
    }

    Ok(resp)
} else {
    Ok(resp)
}

This fixes the leak immediately, but it would be nice not to need this additional allocation for each header value, and it is definitely very unexpected (for me at least). Unfortunately, the project is not ready to be open-sourced yet, so the full code is not publicly available. But after lots of debugging and profiling (I initially thought the body stream was the issue), I tracked it down to this.

So the question now is: Did I screw up and miss something, and is this to be expected, or is this actually a bug?

Code Sample

Expected Behavior

Expected behavior would be that "h2 proxy -> h1 upstream" behaves the same as the other 3 combinations.

Actual Behavior

"h2 proxy -> h1 upstream" and then forwarding the h1 upstream response to the original client leaks memory without re-allocating all HeaderValues.

Additional Context

No response

Metadata

Labels

C-bug (Category: bug. Something is wrong. This is bad!)
