
LLM stream log viewer

Guide

SSE / NDJSON stream pretty-printer

Paste raw Server-Sent Events (data: lines) or one JSON object per line (NDJSON). We reconstruct assistant text and merge OpenAI-style tool-call argument fragments.


Guide: LLM stream log viewer


What is this tool?

A free LLM streaming log viewer and SSE / NDJSON pretty-printer for pasted captures. When you debug Chat Completions, Responses API-style streams, or Anthropic Messages streams, raw logs are often Server-Sent Events (data: lines) or newline-delimited JSON (one JSON object per line). This page parses chunks in your browser, reconstructs assistant text from deltas, and merges OpenAI-style tool_call argument fragments by index. Use it to read DevTools Network tabs, terminal redirects, or saved log files — not for live proxying or production logging.

SSE & NDJSON

  • Auto — If any non-empty line starts with data:, the parser uses SSE mode; otherwise NDJSON.
  • SSE — Processes lines whose trimmed form starts with data:; JSON follows the prefix. Lines data: [DONE] and comment lines (:) are skipped.
  • NDJSON — Each non-empty line must be a single JSON object. Multi-line pretty-printed JSON is not auto-joined; reformat to one chunk per line or use SSE mode with data: lines.
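The detection and line-handling rules above can be sketched as a small parser. This is an illustrative reimplementation, not the tool's actual source; the helper names `detect_format` and `extract_chunks` are made up for the example.

```python
import json

def detect_format(raw: str) -> str:
    """Auto mode: any non-empty line starting with 'data:' selects SSE,
    otherwise NDJSON."""
    for line in raw.splitlines():
        stripped = line.strip()
        if stripped and stripped.startswith("data:"):
            return "sse"
    return "ndjson"

def extract_chunks(raw: str, mode: str) -> list:
    """Return parsed JSON chunks, skipping SSE comments and [DONE]."""
    chunks = []
    for line in raw.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if mode == "sse":
            if stripped.startswith(":"):          # comment / heartbeat line
                continue
            if not stripped.startswith("data:"):  # non-data SSE fields ignored
                continue
            payload = stripped[len("data:"):].strip()
            if payload == "[DONE]":               # end-of-stream sentinel
                continue
        else:
            payload = stripped                    # NDJSON: one object per line
        chunks.append(json.loads(payload))
    return chunks

sse_log = 'data: {"choices":[{"delta":{"content":"Hi"}}]}\n: ping\ndata: [DONE]'
print(detect_format(sse_log))               # sse
print(len(extract_chunks(sse_log, "sse")))  # 1
```

A line that is neither a comment nor valid JSON after the `data:` prefix would raise here; the real tool instead records it as a parse error with its line number.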

API chunk shapes (best-effort)

  • OpenAI-style — choices[0].delta.content for streaming text; choices[0].delta.tool_calls for incremental function arguments; optional choices[0].message.tool_calls for final tool payloads; choices[0].finish_reason when present.
  • Anthropic Messages stream — type: "content_block_delta" with delta.type === "text_delta" and delta.text appends to assistant text. Tool streaming shapes may differ; unsupported lines may show as parse errors.
  • Generic NDJSON — Top-level text or token string fields are concatenated when present (simple adapters / lab logs).
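A best-effort text extractor covering the three chunk shapes above might look like this sketch (the function name `text_from_chunk` is hypothetical; only the listed fields are checked):

```python
def text_from_chunk(chunk: dict) -> str:
    """Return the text fragment carried by one parsed chunk, or ''."""
    # OpenAI-style: choices[0].delta.content
    choices = chunk.get("choices")
    if choices:
        content = (choices[0].get("delta") or {}).get("content")
        if isinstance(content, str):
            return content
    # Anthropic Messages stream: content_block_delta with a text_delta
    if chunk.get("type") == "content_block_delta":
        delta = chunk.get("delta") or {}
        if delta.get("type") == "text_delta":
            return delta.get("text", "")
    # Generic NDJSON: top-level text / token string fields
    for key in ("text", "token"):
        value = chunk.get(key)
        if isinstance(value, str):
            return value
    return ""

chunks = [
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"type": "content_block_delta",
     "delta": {"type": "text_delta", "text": ", world"}},
    {"token": "!"},
]
print("".join(text_from_chunk(c) for c in chunks))  # Hello, world!
```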

Features

  • Assistant text panel — Rebuilt string from content deltas; shows finish reason when seen.
  • Merged tool calls — One block per tool index; arguments pretty-printed when valid JSON.
  • Copy — Copy assistant text or per-call arguments.
  • Demos — Built-in text stream and tool-call stream samples.
  • localStorage — Draft text and format mode persist as spoold-stream-log (best effort).
  • Parse errors — Non-JSON lines listed with line number and snippet (capped list in UI).
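Merging tool-call fragments by index, as the features above describe, amounts to concatenating argument strings per index until the JSON is complete. A minimal sketch (the helper `merge_tool_calls` is an assumption, not the tool's code):

```python
def merge_tool_calls(chunks: list) -> list:
    """Merge OpenAI-style incremental tool_calls by index, concatenating
    the streamed argument fragments for each call."""
    calls = {}
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            for tc in (choice.get("delta") or {}).get("tool_calls", []):
                idx = tc.get("index", 0)
                entry = calls.setdefault(idx, {"name": None, "arguments": ""})
                fn = tc.get("function") or {}
                if fn.get("name"):                 # name arrives once, early
                    entry["name"] = fn["name"]
                entry["arguments"] += fn.get("arguments", "")
    return [calls[i] for i in sorted(calls)]

stream = [
    {"choices": [{"delta": {"tool_calls": [
        {"index": 0, "function": {"name": "get_weather", "arguments": '{"ci'}}]}}]},
    {"choices": [{"delta": {"tool_calls": [
        {"index": 0, "function": {"arguments": 'ty":"Oslo"}'}}]}}]},
]
merged = merge_tool_calls(stream)
print(merged[0]["name"], merged[0]["arguments"])  # get_weather {"city":"Oslo"}
```

Once merged, each arguments string is pretty-printed only if it parses as valid JSON; partial captures stay raw.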

How to use

  1. Capture — Copy raw stream output from browser DevTools, a CLI log, or a .txt file.
  2. Paste — Into the input area. Use Auto or force SSE / NDJSON if detection is wrong.
  3. Read — Check reconstructed assistant text and merged tool calls on the right.
  4. Copy — Copy text or arguments for tickets, docs, or diffs.
  5. Optional — Try Load text demo or Load tool-call demo to see the layout.

Use cases

  • Streaming bugs — See the final assistant string and tool JSON without manually stitching JSON lines.
  • Function-calling QA — Verify merged arguments after fragmented deltas.
  • Support & repros — Paste a redacted log into a ticket with a readable reconstruction.
  • Teaching — Show how SSE chunks relate to the final reply vs tool payloads.

Limits

  • Best-effort parsing — Provider schemas and beta APIs change; unknown shapes may appear as invalid lines.
  • No live stream — Paste-only; this is not a proxy or WebSocket client.
  • Secrets — Don't paste API keys or PII you wouldn't put in a local editor.
  • Token / cost math — Use the Token calculator or your provider's usage API for billing counts.

People search for OpenAI streaming response debug, SSE data line parser, chat completions stream log, tool_calls merge online, NDJSON LLM log viewer, Anthropic stream log pretty print, Server-Sent Events JSON viewer, and LLM API stream inspector. This tool covers pasted logs with OpenAI- and Anthropic-oriented paths plus simple generic fields.

FAQ

Is the stream log viewer free?

Yes. Parsing runs entirely in your browser.

Why are some lines listed as invalid JSON?

Only lines that parse as a single JSON object (after the data: prefix in SSE mode) are treated as chunks. Heartbeats, metadata lines, or multi-line JSON may fail — adjust the capture or split lines.

Does this match OpenAI’s billing token count?

No. It reconstructs text and tool arguments for readability. Use your provider’s tokenizer or dashboard for exact token usage.

Can I use this for Gemini or other APIs?

Only if chunks match supported shapes (e.g. OpenAI-like choices, Anthropic content_block_delta, or generic text/token fields). Otherwise you may see parse errors or empty panels.

Similar tools

Format, diff, and HTTP debugging on Spoold:

Conclusion

Use LLM stream log viewer to turn noisy SSE or NDJSON captures into readable assistant output and merged tool calls. Pair with JSON format for pretty payloads, Token calculator for length estimates, and curl compare when debugging HTTP differences.