## Summary

Add a new CLI flag `--log-for human|machine` to make Dev Proxy output consumable by coding agents and LLMs.
## Problem

The current console output is optimized for human readability but creates problems when consumed by LLMs/coding agents:

| Problem | Description |
| --- | --- |
| ANSI escape codes | Color codes like `\x1B[31m` pollute text and waste tokens |
| Unicode box characters | `╭ ╰ │` are visual, not semantic; hard to parse programmatically |
| No structured output | Console is human-readable prose; LLMs need JSON to reliably extract fields |
| Buffered output | Messages held until the request completes; agents cannot react in real time |
| Terse/informal labels | `oops`, `skip`, `proc` are cute but ambiguous for machines |
| Missing context | Logs like "URL not matched" lack the actual URL |
| No correlation IDs | Hard to trace request→response chains |
| Timestamps hidden | Filtered out by default; agents cannot measure timing |
### Example of current output

```
 req ╭ GET https://api.example.com/users
oops │ 429 TooManyRequests
warn │ Exceeded resource limit when calling https://api.example.com/users
     ╰
```
### What an agent needs

```json
{"type":"request","method":"GET","url":"https://api.example.com/users","requestId":"1","timestamp":"2026-01-19T10:30:00.000Z"}
{"type":"chaos","plugin":"RateLimitingPlugin","status":429,"message":"TooManyRequests","requestId":"1","timestamp":"2026-01-19T10:30:00.050Z"}
{"type":"warning","plugin":"RateLimitingPlugin","message":"Exceeded resource limit","url":"https://api.example.com/users","requestId":"1","timestamp":"2026-01-19T10:30:00.051Z"}
```
## Proposed solution

Add a new flag:

```sh
devproxy --log-for human    # Current behavior (default)
devproxy --log-for machine  # Structured JSON Lines output
```
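Because each line of machine output is a complete JSON object, it composes with ordinary line-oriented tooling. A hedged example, assuming the flag lands as proposed:

```sh
# Surface only warnings, paired with the URL they relate to
devproxy --log-for machine | jq -r 'select(.type == "warning") | "\(.url): \(.message)"'
```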
### Why `--log-for` instead of `--output json` or `--format json`?

We evaluated common CLI conventions:

| Tool | Flag |
| --- | --- |
| kubectl | `-o, --output json\|yaml\|wide` |
| Azure CLI | `-o, --output json\|table\|tsv` |
| GitHub CLI | `--json` |
However, for LLM consumption, audience-centric naming is more intuitive than format-centric:
| Aspect | `--log-format json` | `--log-for machine` |
| --- | --- | --- |
| Semantics | Describes the how | Describes the who |
| LLM intuition | "Output in JSON format" | "This output is meant for me" |
| Discoverability | Need to know JSON is the right format | Obvious: "I am a machine, I use `machine`" |
| Future-proof | Locked to a specific format | Can evolve what "machine" means internally |
An LLM reading `--help` naturally self-identifies with `--log-for machine`.
## Machine output requirements

When `--log-for machine` is set:

- **JSON Lines format**: one JSON object per line (streamable)
- **No ANSI codes**: strip all color/formatting escape sequences
- **No Unicode decorations**: no box-drawing characters
- **Include correlation IDs**: `requestId` in every log entry
- **Include timestamps**: ISO 8601 format
- **Full context**: URL, method, and plugin name in each entry
- **Semantic message types**: `request`, `response`, `warning`, `error`, `mock`, `chaos` (not `oops`, `skip`); see the illustrative entry shape after this list
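To make these requirements concrete, one possible entry shape is sketched below as a TypeScript type. The field names mirror the examples earlier in this proposal; the exact schema is an open design question, not a commitment:

```ts
// Illustrative schema only; field names follow the examples in this
// proposal, not a finalized contract.
type LogEntryType =
  | "request"
  | "response"
  | "warning"
  | "error"
  | "mock"
  | "chaos";

interface MachineLogEntry {
  type: LogEntryType;   // semantic message type, never "oops"/"skip"
  timestamp: string;    // ISO 8601, e.g. "2026-01-19T10:30:00.000Z"
  requestId: string;    // correlation ID tying request → response chains
  message?: string;     // plain-text detail, no ANSI escapes
  plugin?: string;      // producing plugin, e.g. "RateLimitingPlugin"
  method?: string;      // HTTP method, for request/response entries
  url?: string;         // full URL, so messages never lack their context
  status?: number;      // HTTP status code, where applicable
}
```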
## Configuration

Also support in `devproxyrc.json`:
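```json
{
  "logFor": "machine"
}
```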
## Related

- Reporters (`JsonReporter`, `PlainTextReporter`) handle file output after recording
- This flag handles real-time console/stdout output