Streaming Responses
DoStream() returns a response whose body you read incrementally. Use it for:
- Big downloads. Don't buffer a 2 GB file in RAM.
- Server-Sent Events. Long-lived connections that drip events.
- NDJSON / line-delimited streams. Read one record at a time.
- Anything chunked. When the server doesn't know the content length up front.
Plain Do() reads the full body into memory before returning. DoStream() returns the moment the response headers arrive, and you pull the body yourself.
Pre-1.6.6, DoStream didn't update the cookie jar from the response. On an older version? Upgrade, or extract Set-Cookie headers by hand (there's a workaround snippet under "Cookie jar parity" below).
The shape
Session.DoStream(ctx, req) returns a *StreamResponse. It implements io.Reader, so anything that takes a Reader works: bufio.Scanner, json.Decoder, io.Copy, all of it.
package main

import (
    "bufio"
    "context"
    "fmt"

    httpcloak "github.com/sardanioss/httpcloak"
)

func main() {
    s := httpcloak.NewSession("chrome-latest")
    defer s.Close()

    stream, err := s.GetStream(context.Background(), "https://httpbin.org/stream/10")
    if err != nil {
        panic(err)
    }
    defer stream.Close()

    fmt.Println("status:", stream.StatusCode)
    fmt.Println("content-length:", stream.ContentLength) // -1 if chunked

    scanner := bufio.NewScanner(stream)
    n := 0
    for scanner.Scan() {
        n++
        fmt.Printf("chunk %d: %s\n", n, scanner.Text())
    }
    fmt.Printf("got %d lines\n", n)
}
Close() is mandatory. Defer it the second you have the stream. Skip it and you leak the underlying connection, which means it never goes back to the pool and you eat the dial cost on the next request.
The Python binding folds streaming into get(stream=True):
import httpcloak

s = httpcloak.Session(preset="chrome-latest")

with s.get("https://httpbin.org/stream/10", stream=True) as r:
    print("status:", r.status_code)
    n = 0
    for line in r.iter_lines():
        n += 1
        print(f"chunk {n}: {line.decode()}")
    print(f"got {n} lines")
iter_lines() and iter_content(chunk_size=N) both work. The with block calls close() for you when the block exits.
session.getStream() returns a StreamResponse you can iterate with for await:
const { Session } = require("httpcloak");

// for await needs an async context in CommonJS, so wrap in a function.
async function main() {
    const s = new Session({ preset: "chrome-latest" });

    const stream = s.getStream("https://httpbin.org/stream/10");
    console.log("status:", stream.statusCode);

    let n = 0;
    for await (const chunk of stream) {
        n++;
        console.log(`chunk ${n}: ${chunk.toString()}`);
    }
    stream.close();
    console.log(`got ${n} chunks`);
}

main();
Always call stream.close() after iterating, otherwise the connection leaks.
Session.GetStream() (and RequestStream for non-GET) returns a StreamResponse with a Stream body:
using System;
using System.IO;
using HttpCloak;

using var s = new Session(new SessionOptions { Preset = "chrome-latest" });
using var stream = s.GetStream("https://httpbin.org/stream/10");
Console.WriteLine($"status: {stream.StatusCode}");

using var content = stream.GetContentStream();
using var reader = new StreamReader(content);

int n = 0;
string? line;
while ((line = reader.ReadLine()) != null)
{
    n++;
    Console.WriteLine($"chunk {n}: {line}");
}
Console.WriteLine($"got {n} lines");
The using on the StreamResponse handles Close.
What you can read it as
The body's just bytes coming off the wire. You decide how to split them.
- Line-delimited. bufio.Scanner (Go), iter_lines() (Python), a readline loop (Node).
- Fixed-size chunks. Read(buf) (Go), iter_content(chunk_size=N) (Python), read(N) (Node).
- JSON streams. Wrap the stream in a JSON decoder. Go: json.NewDecoder(stream).Decode(&v) in a loop for NDJSON (see the sketch after this list). Python: for line in r.iter_lines(): obj = json.loads(line).
- Pipe to a file. Go: io.Copy(file, stream). Python: for chunk in r.iter_content(8192): f.write(chunk).
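Here's the NDJSON pattern fleshed out in Go. httpbin's /stream/N endpoint emits newline-delimited JSON records, and json.Decoder reads one value per Decode call, so the loop never buffers the whole body (the "id" field is just what httpbin happens to include in each record):

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "io"

    httpcloak "github.com/sardanioss/httpcloak"
)

func main() {
    s := httpcloak.NewSession("chrome-latest")
    defer s.Close()

    stream, err := s.GetStream(context.Background(), "https://httpbin.org/stream/5")
    if err != nil {
        panic(err)
    }
    defer stream.Close()

    // One Decode call per NDJSON record; io.EOF means the stream is drained.
    dec := json.NewDecoder(stream)
    for {
        var rec map[string]any
        if err := dec.Decode(&rec); err == io.EOF {
            break
        } else if err != nil {
            panic(err)
        }
        fmt.Println("record id:", rec["id"])
    }
}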
Lifetime and Close
The contract: caller must call Close when done. There's no GC fallback because the stream wraps real syscall resources: a TCP socket, an H2 stream window, an H3 stream.
Common ways to forget:
- Returning early from a function on an error path without defer stream.Close(). Always defer right after the err check (see the sketch after this list).
- Iterating partway and bailing without closing.
- In Python, skipping with. The non-with form needs an explicit r.close().
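That first bullet in Go, with fetchFirstLine as a made-up helper name (imports as in the example up top; the *httpcloak.Session parameter type is assumed from NewSession):

// Defer immediately after the error check; every return path below,
// including the early bail on a scan failure, then closes the stream.
func fetchFirstLine(ctx context.Context, s *httpcloak.Session, url string) (string, error) {
    stream, err := s.GetStream(ctx, url)
    if err != nil {
        return "", err // nothing to close yet: GetStream failed
    }
    defer stream.Close()

    scanner := bufio.NewScanner(stream)
    if !scanner.Scan() {
        return "", scanner.Err() // early return, still closed by the defer
    }
    return scanner.Text(), nil
}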
Closing partway through is totally fine. The lib reads and discards the rest in the background to keep the underlying connection clean for reuse, or hard-aborts the H2/H3 stream if there's a lot left.
ContentLength and chunked
stream.ContentLength (or content_length / contentLength in the bindings) is -1 when the server uses chunked transfer encoding (or H2/H3 without an explicit content-length frame). Don't assume it's positive when sizing a download progress bar.
Need to know the size up front? Fire a HEAD request first, read Content-Length from the response headers, then DoStream() the GET. Most servers send a length on HEAD even when they'd switch to chunked on GET.
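A sketch of that two-step flow. Heads up: s.Head is an assumed convenience here, not confirmed API, so substitute however your version issues a HEAD request; the Headers lookup mirrors the map the workaround below uses, and you'll need strconv on top of the earlier imports:

// Hypothetical: probe the size with a HEAD, then stream the GET.
head, err := s.Head(ctx, "https://example.com/big.bin") // s.Head is assumed, not confirmed API
if err != nil {
    panic(err)
}
total := int64(-1)
if v := head.Headers["Content-Length"]; len(v) > 0 {
    total, _ = strconv.ParseInt(v[0], 10, 64)
}

stream, err := s.GetStream(ctx, "https://example.com/big.bin")
if err != nil {
    panic(err)
}
defer stream.Close()
fmt.Printf("downloading %d bytes\n", total) // still -1 if the server wouldn't say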
Cookie jar parity (since 1.6.6)
Streaming responses now go through the same cookie extraction path as regular ones. Set-Cookie headers from the response (or any in-stream redirect the lib resolved before handing you the body) land in the session jar.
Before 1.6.6, streaming bypassed the jar update and you'd silently miss cookies from streamed endpoints. The fix landed in #5491c85, so Do and DoStream now behave identically.
Stuck on an older version and can't upgrade?
// Manual cookie extraction from a streamed response, pre-1.6.6 workaround.
// net/http does the Set-Cookie parsing if you wrap the raw headers.
hdr := http.Header{"Set-Cookie": stream.Headers["Set-Cookie"]}
for _, c := range (&http.Response{Header: hdr}).Cookies() {
    // store c and inject it on the next request
}
A note on H2 and H3
Stream over HTTP/2 or HTTP/3 and the underlying transport still rides on a single multiplexed connection. So stream.Close() doesn't kill the connection itself, just the one stream on it. You can run multiple streaming requests in flight on the same H2 connection at once, which is great for SSE + an API call running side by side.
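A sketch of that side-by-side pattern with placeholder endpoints (same session and imports as the first example, plus io):

// One goroutine holds an SSE stream open...
go func() {
    events, err := s.GetStream(ctx, "https://example.com/events") // placeholder SSE endpoint
    if err != nil {
        return
    }
    defer events.Close()
    sc := bufio.NewScanner(events)
    for sc.Scan() {
        fmt.Println("event:", sc.Text())
    }
}()

// ...while a regular call multiplexes onto the same H2 connection.
resp, err := s.GetStream(ctx, "https://example.com/api/status") // placeholder API call
if err != nil {
    panic(err)
}
defer resp.Close()
body, _ := io.ReadAll(resp)
fmt.Println("api says:", string(body))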
On HTTP/1.1, a streaming response holds the whole TCP connection until you close. Concurrent requests need separate connections. The lib handles connection pooling either way so you don't have to think about it, but heads up: 100 concurrent streams on H1 means 100 TCP connections.