Log Parser
Free web tool: Log Parser
About Log Parser
The Log Parser is a browser-based tool that ingests raw log text, automatically detects the format, parses each line into structured fields, and displays the results in a searchable, filterable table. It supports three common log formats: JSON structured logs (one JSON object per line, with fields such as timestamp, ts, or @timestamp for the time, level or severity for the level, and message or msg for the text), Apache/Nginx Combined Log Format (IP, timestamp, request, status code, bytes), and a general format matching lines that begin with an ISO 8601 or similar date-time string followed by a log level keyword and a message. Lines that do not match any pattern are still displayed as INFO-level entries, so no data is silently dropped.
DevOps engineers, backend developers, and site reliability engineers use this tool to quickly triage log output without needing to install command-line tools like grep, awk, or jq. Paste a wall of logs from a Docker container, an ECS task, a serverless function, or a web server access log, and the tool will immediately show you just the ERROR and WARN lines, or let you search for a specific user ID, IP address, or exception message. This is particularly useful in situations where you are working in a restricted environment — such as a customer support session or a browser-only device — where installing local tooling is not possible.
Under the hood, parsing is memoized with React's useMemo hook and re-runs whenever the input changes, so results update instantly as you type. Format detection checks the first non-empty line: if it starts with "{", the tool tries JSON.parse on every line; if it matches the Apache regular expression, it uses that parser; otherwise it falls back to the general date-time regex. Detected log levels are extracted dynamically from the parsed entries and shown as filter buttons. The timestamp and message columns are both searched when you type in the search box. All parsing runs entirely in the browser; no log data is ever sent to a server.
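The detection step described above can be sketched as follows. This is an illustrative reconstruction, not the tool's actual source; the function and regex names are hypothetical, and the exact patterns the tool uses may differ.

```javascript
// Illustrative sketch of first-line format detection.
const APACHE_RE = /^(\d{1,3}(?:\.\d{1,3}){3}) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+|-)/;

function detectFormat(text) {
  // Only the first non-empty line is inspected.
  const first = text.split('\n').find((line) => line.trim() !== '');
  // Empty input: default to the general parser (an assumption here).
  if (!first) return 'general';
  if (first.trim().startsWith('{')) return 'json';
  if (APACHE_RE.test(first)) return 'apache';
  return 'general';
}
```

Checking only the first line keeps detection cheap enough to run on every input change.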
Key Features
- Automatic format detection for JSON, Apache/Nginx Combined, and general timestamp log formats
- Parses JSON fields including timestamp, time, ts, @timestamp, level, severity, loglevel, message, msg, and text
- Derives log level from HTTP status codes in Apache/Nginx logs: 5xx=ERROR, 4xx=WARN, all other codes=INFO
- Dynamic level filter buttons generated from actual levels present in your log data
- Full-text search across both the message and timestamp columns simultaneously
- Color-coded level badges: red for ERROR, yellow for WARN, blue for INFO, gray for DEBUG
- Displays detected log format name and filtered/total line counts in real time
- 100% client-side processing — log data never leaves your browser, no server storage
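The badge coloring in the feature list could be implemented as a simple mapping like the sketch below. The function name is hypothetical, and treating WARNING like WARN and unknown levels as gray are assumptions on top of the colors stated above.

```javascript
// Map a log level to a badge color: red/yellow/blue/gray as described
// in the feature list; unrecognized levels fall back to gray (assumed).
function badgeColor(level) {
  switch (String(level).toUpperCase()) {
    case 'ERROR':
      return 'red';
    case 'WARN':
    case 'WARNING': // assumed to share WARN's color
      return 'yellow';
    case 'INFO':
      return 'blue';
    case 'DEBUG':
    default:
      return 'gray';
  }
}
```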
Frequently Asked Questions
What log formats does this parser support?
The parser supports three formats: (1) JSON structured logs where each line is a valid JSON object; (2) Apache/Nginx Combined Log Format with IP address, datetime, quoted request, status code, and bytes fields; and (3) a general format where lines start with an ISO 8601 or similar datetime string followed by a log level keyword (ERROR, WARN, WARNING, INFO, DEBUG, TRACE) and the message. Unrecognized lines are shown as INFO entries.
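A parser for format (3), including the INFO fallback for unrecognized lines, might look like the sketch below. The regex and function name are illustrative assumptions; the tool's actual pattern may accept more timestamp variants.

```javascript
// Hypothetical parser for the general "timestamp LEVEL message" format.
const GENERAL_RE =
  /^(\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:?\d{2})?)\s+(ERROR|WARN|WARNING|INFO|DEBUG|TRACE)\b:?\s*(.*)$/i;

function parseGeneralLine(line) {
  const m = GENERAL_RE.exec(line);
  if (!m) {
    // Unrecognized line: keep it as INFO so no data is silently dropped.
    return { timestamp: '', level: 'INFO', message: line };
  }
  return { timestamp: m[1], level: m[2].toUpperCase(), message: m[3] };
}
```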
How do I parse JSON structured logs?
Paste your JSON log lines (one JSON object per line, not a JSON array) into the text area. The parser automatically detects the JSON format and extracts the timestamp from any of the common timestamp field names (timestamp, time, ts, @timestamp), the log level from the level, severity, or loglevel fields, and the message from the message, msg, or text fields. Any line that fails to parse as JSON is displayed as a raw INFO entry.
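The field-name fallback described in this answer can be sketched roughly as follows. The field lists come from the text above; the function and helper names are hypothetical.

```javascript
// Sketch of per-line JSON parsing with field-name fallbacks.
function parseJsonLine(line) {
  let obj;
  try {
    obj = JSON.parse(line);
  } catch {
    // Invalid JSON is surfaced as a raw INFO entry rather than dropped.
    return { timestamp: '', level: 'INFO', message: line };
  }
  // Return the first defined value among the candidate field names.
  const pick = (keys) => keys.map((k) => obj[k]).find((v) => v !== undefined);
  return {
    timestamp: String(pick(['timestamp', 'time', 'ts', '@timestamp']) ?? ''),
    level: String(pick(['level', 'severity', 'loglevel']) ?? 'INFO').toUpperCase(),
    message: String(pick(['message', 'msg', 'text']) ?? line),
  };
}
```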
Can I filter logs to only show ERROR entries?
Yes. Once the parser has processed your logs, filter buttons appear automatically for each distinct log level found in your data (such as ALL, ERROR, WARN, INFO, DEBUG). Click any level button to show only entries at that level. Click ALL to return to the full view. You can also combine level filtering with the keyword search box.
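Deriving the buttons from the data, rather than hard-coding them, might look like this sketch (function names are illustrative, not the tool's actual API):

```javascript
// Build the filter-button list from the distinct levels actually present,
// with ALL prepended, and filter entries by the selected level.
function levelButtons(entries) {
  return ['ALL', ...new Set(entries.map((e) => e.level))];
}

function filterByLevel(entries, selected) {
  return selected === 'ALL' ? entries : entries.filter((e) => e.level === selected);
}
```

Because the buttons come from the parsed entries, a log with only INFO and ERROR lines shows only ALL, INFO, and ERROR buttons.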
How does the search box work?
The search box filters across both the message content and the timestamp string of each log entry. The search is case-insensitive. For example, searching for "NullPointerException" will show all entries whose message contains that text. Searching for a date string like "2024-01-15" will show entries from that date. The filter is applied in addition to any active level filter.
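The search behavior described here amounts to a case-insensitive substring match over two fields, roughly as sketched below (the function name is hypothetical):

```javascript
// Case-insensitive search across both the message and timestamp columns.
function searchEntries(entries, query) {
  const q = query.toLowerCase();
  return entries.filter(
    (e) =>
      e.message.toLowerCase().includes(q) ||
      e.timestamp.toLowerCase().includes(q)
  );
}
```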
What does the tool do with Apache/Nginx log lines?
For Apache/Nginx Combined Log Format lines, the parser extracts the client IP, request datetime, the quoted request string (method + path + protocol), HTTP status code, and response size in bytes. The log level is derived from the status code: 500–599 become ERROR, 400–499 become WARN, and all other codes become INFO. The message column shows the IP, request, status, and byte count.
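The status-to-level rule and field extraction described here can be sketched as follows. The regex and function names are illustrative assumptions, not the tool's actual implementation.

```javascript
// Hypothetical Combined Log Format parse plus the status-to-level rule:
// 5xx -> ERROR, 4xx -> WARN, everything else -> INFO.
const APACHE_RE = /^(\d{1,3}(?:\.\d{1,3}){3}) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+|-)/;

function statusToLevel(status) {
  if (status >= 500) return 'ERROR';
  if (status >= 400) return 'WARN';
  return 'INFO';
}

function parseApacheLine(line) {
  const m = APACHE_RE.exec(line);
  // Non-matching lines fall back to a raw INFO entry.
  if (!m) return { timestamp: '', level: 'INFO', message: line };
  const [, ip, datetime, request, status, bytes] = m;
  return {
    timestamp: datetime,
    level: statusToLevel(Number(status)),
    message: `${ip} "${request}" ${status} ${bytes}`,
  };
}
```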
What happens to log lines that do not match any known format?
Lines that do not match any regex pattern are still displayed as INFO-level entries with an empty timestamp and the raw line as the message. This ensures that no log data is silently dropped, even if the format is unusual or the log contains mixed formats.
Is my log data sent to any server?
No. All parsing, filtering, and searching happens entirely within your web browser using JavaScript. The log text you paste never leaves your device. This makes the tool safe to use even with sensitive production logs, internal application logs, or logs containing customer data, since no server ever receives the data.
Can I use this tool for very large log files?
This tool is designed for pasting log text into a textarea, which works well for hundreds to a few thousand lines. For extremely large log files (tens of thousands of lines or more), browser performance may degrade because the entire text is parsed on every change using useMemo. For very large files, consider using command-line tools like jq, grep, or awk, or a dedicated log management platform like Splunk, Datadog, or Grafana Loki.