Guide: LLM stream log viewer
What is this tool?
A free LLM streaming log viewer and SSE / NDJSON pretty-printer for pasted captures. When you debug Chat Completions, Responses API-style streams, or Anthropic Messages streams, raw logs are often Server-Sent Events (`data:` lines) or newline-delimited JSON (one JSON object per line). This page parses chunks in your browser, reconstructs assistant text from deltas, and merges OpenAI-style tool_call argument fragments by index. Use it to read captures from DevTools Network tabs, terminal redirects, or saved log files — not for live proxying or production logging.
SSE & NDJSON
- Auto — If any non-empty line starts with `data:`, the parser uses SSE mode; otherwise NDJSON.
- SSE — Processes lines whose trimmed form starts with `data:`; JSON follows the prefix. `data: [DONE]` lines and comment lines (`:`) are skipped.
- NDJSON — Each non-empty line must be a single JSON object. Multi-line pretty-printed JSON is not auto-joined; reformat to one chunk per line or use SSE mode with `data:` lines.
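The detection and line-handling rules above can be sketched roughly as follows. This is an illustrative sketch, not the tool's actual code; the function and type names (`detectMode`, `parseChunks`, `ParsedChunk`) are assumptions.

```typescript
type ParsedChunk = { line: number; json: unknown };

// Auto mode: any non-empty line starting with "data:" switches to SSE.
function detectMode(input: string): "sse" | "ndjson" {
  return input.split("\n").some((l) => l.trim().startsWith("data:"))
    ? "sse"
    : "ndjson";
}

function parseChunks(input: string): ParsedChunk[] {
  const mode = detectMode(input);
  const chunks: ParsedChunk[] = [];
  input.split("\n").forEach((raw, i) => {
    const line = raw.trim();
    if (line === "") return; // blank lines ignored
    if (mode === "sse") {
      if (line.startsWith(":")) return;      // comment / heartbeat lines skipped
      if (!line.startsWith("data:")) return; // only data: lines carry chunks
      const payload = line.slice("data:".length).trim();
      if (payload === "[DONE]") return;      // end-of-stream sentinel skipped
      chunks.push({ line: i + 1, json: JSON.parse(payload) });
    } else {
      // NDJSON: each non-empty line must be a single JSON object
      chunks.push({ line: i + 1, json: JSON.parse(line) });
    }
  });
  return chunks;
}
```

In a real parser the `JSON.parse` calls would be wrapped in try/catch so failing lines land in the parse-error list instead of aborting the run.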
API chunk shapes (best-effort)
- OpenAI-style — `choices[0].delta.content` for streaming text; `choices[0].delta.tool_calls` for incremental function arguments; optional `choices[0].message.tool_calls` for final tool payloads; `choices[0].finish_reason` when present.
- Anthropic Messages stream — `type: "content_block_delta"` with `delta.type === "text_delta"` and `delta.text` appends to assistant text. Tool streaming shapes may differ; unsupported lines may show as parse errors.
- Generic NDJSON — Top-level `text` or `token` string fields are concatenated when present (simple adapters / lab logs).
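The OpenAI-style path above — appending content deltas and merging tool_call fragments by `index` — can be sketched like this. A minimal sketch under the assumption that chunks follow the Chat Completions streaming shape; `mergeChunks` and `ToolCall` are illustrative names, not the tool's real API.

```typescript
type ToolCall = { name?: string; arguments: string };

// Rebuild assistant text from content deltas and merge tool_call
// argument fragments by their index field.
function mergeChunks(chunks: any[]): { text: string; tools: ToolCall[] } {
  let text = "";
  const tools: ToolCall[] = [];
  for (const chunk of chunks) {
    const delta = chunk?.choices?.[0]?.delta;
    if (!delta) continue; // non-OpenAI-shaped chunk: skip
    if (typeof delta.content === "string") text += delta.content; // text delta
    for (const tc of delta.tool_calls ?? []) {
      const i = tc.index ?? 0; // index is the merge key
      if (!tools[i]) tools[i] = { arguments: "" };
      if (tc.function?.name) tools[i].name = tc.function.name;
      tools[i].arguments += tc.function?.arguments ?? ""; // append fragment
    }
  }
  return { text, tools };
}
```

Once all fragments for an index are concatenated, the `arguments` string is usually valid JSON and can be pretty-printed — which is why fragmented deltas are unreadable until merged.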
Features
- Assistant text panel — Rebuilt string from content deltas; shows finish reason when seen.
- Merged tool calls — One block per tool index; arguments pretty-printed when valid JSON.
- Copy — Copy assistant text or per-call arguments.
- Demos — Built-in text stream and tool-call stream samples.
- localStorage — Draft text and format mode persist as `spoold-stream-log` (best effort).
- Parse errors — Non-JSON lines listed with line number and snippet (capped list in UI).
How to use
- Capture — Copy raw stream output from browser DevTools, a CLI log, or a `.txt` file.
- Paste — Into the input area. Use Auto or force SSE / NDJSON if detection is wrong.
- Read — Check reconstructed assistant text and merged tool calls on the right.
- Copy — Copy text or arguments for tickets, docs, or diffs.
- Optional — Try Load text demo or Load tool-call demo to see the layout.
Use cases
| Scenario | How this helps |
|---|---|
| Streaming bugs | See the final assistant string and tool JSON without manually stitching JSON lines. |
| Function-calling QA | Verify merged arguments after fragmented deltas. |
| Support & repros | Paste a redacted log into a ticket with a readable reconstruction. |
| Teaching | Show how SSE chunks relate to the final reply vs tool payloads. |
Limits
- Best-effort parsing — Provider schemas and beta APIs change; unknown shapes may appear as invalid lines.
- No live stream — Paste-only; this is not a proxy or WebSocket client.
- Secrets — Don't paste API keys or PII you wouldn't put in a local editor.
- Token / cost math — Use the Token calculator or your provider's usage API for billing counts.
Related terms
People search for OpenAI streaming response debug, SSE data line parser, chat completions stream log, tool_calls merge online, NDJSON LLM log viewer, Anthropic stream log pretty print, Server-Sent Events JSON viewer, and LLM API stream inspector. This tool covers pasted logs with OpenAI- and Anthropic-oriented paths plus simple generic fields.
FAQ
Is the stream log viewer free?
Yes. Parsing runs entirely in your browser.
Why are some lines listed as invalid JSON?
Only lines that parse as a single JSON object (after the `data:` prefix in SSE mode) are treated as chunks. Heartbeats, metadata lines, or multi-line JSON may fail — adjust the capture or split lines.
Does this match OpenAI’s billing token count?
No. It reconstructs text and tool arguments for readability. Use your provider’s tokenizer or dashboard for exact token usage.
Can I use this for Gemini or other APIs?
Only if chunks match supported shapes (e.g. OpenAI-like `choices`, Anthropic `content_block_delta`, or generic `text`/`token` fields). Otherwise you may see parse errors or empty panels.
Similar tools
Format, diff, and HTTP debugging tools on Spoold: JSON format, Token calculator, and curl compare.
Conclusion
Use LLM stream log viewer to turn noisy SSE or NDJSON captures into readable assistant output and merged tool calls. Pair with JSON format for pretty payloads, Token calculator for length estimates, and curl compare when debugging HTTP differences.