"User 123 placed order for $99.99", structured logging stores each piece of data separately — making logs searchable, filterable, and machine-readable.
Why Structured Logging?
Plain text logs are easy to write but hard to work with at scale. With structured logs, you can query by userId, search by level, aggregate by amount, and alert on tags — all without parsing text.
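To make the contrast concrete, here is an illustrative sketch (not Timberlogs SDK code): pulling a value out of a plain-text line requires fragile regex parsing, while a structured entry exposes the same value as a field.

```typescript
// Plain-text log: extracting the user ID requires parsing the string.
const plainLine = 'User 123 placed order for $99.99';
const match = plainLine.match(/User (\d+) placed order for \$([\d.]+)/);
const parsedUserId = match ? Number(match[1]) : undefined;

// The same event as a structured entry (illustrative shape):
// each value is a separately addressable field, no parsing needed.
const structured = {
  message: 'Order placed',
  level: 'info',
  data: { userId: 123, amount: 99.99 },
};
const userId = structured.data.userId;
```

If the message wording ever changes, the regex silently breaks; the structured field access does not.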
Key Fields
Every Timberlogs entry has a standard set of fields:

| Field | Description |
|---|---|
| message | Human-readable description of the event |
| level | Severity: debug, info, warn, or error |
| source | Application or service name (e.g., api-server, worker) |
| environment | Deployment target (e.g., production, staging) |
| data | Arbitrary JSON object with event-specific details |
| tags | Array of labels for categorization and filtering |
| timestamp | When the event occurred |
Entries can also include optional correlation fields: userId, sessionId, requestId, flowId, and stepIndex.
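The field table above can be sketched as a TypeScript shape. This is an assumption for illustration, following the documented field names; the actual SDK types may differ.

```typescript
// Illustrative shape of a Timberlogs entry, mirroring the field table above.
// This is a sketch, not the SDK's real type definitions.
interface LogEntry {
  message: string;                 // human-readable description of the event
  level: 'debug' | 'info' | 'warn' | 'error';
  source: string;                  // e.g. 'api-server', 'worker'
  environment: string;             // e.g. 'production', 'staging'
  data?: Record<string, unknown>;  // arbitrary event-specific details
  tags?: string[];                 // labels for categorization and filtering
  timestamp: string;               // when the event occurred
  // Optional correlation fields
  userId?: string;
  sessionId?: string;
  requestId?: string;
  flowId?: string;
  stepIndex?: number;
}

const entry: LogEntry = {
  message: 'Order placed',
  level: 'info',
  source: 'api-server',
  environment: 'production',
  data: { orderId: 'ord_42', amount: 99.99 },
  tags: ['billing', 'orders'],
  timestamp: new Date().toISOString(),
  requestId: 'req_abc123',
};
```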
Structured Data Enables Analysis
With structured fields, you can:
- Filter logs by level, source, environment, or tags
- Search within the data object for specific values
- Correlate related events using requestId, sessionId, or flowId
- Aggregate numeric fields for performance analysis
- Alert on specific tag or level combinations
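The operations above reduce to simple field access once logs are structured. A minimal client-side sketch, using an illustrative entry shape (Timberlogs presumably performs these queries server-side):

```typescript
// Sketch: filtering, correlating, and aggregating structured entries.
// The entry shape here is illustrative, not the SDK's real types.
type Entry = {
  level: string;
  tags: string[];
  requestId?: string;
  data: Record<string, unknown>;
};

const logs: Entry[] = [
  { level: 'error', tags: ['billing'], requestId: 'req_1', data: { amount: 99.99 } },
  { level: 'info',  tags: ['auth'],    requestId: 'req_2', data: {} },
  { level: 'error', tags: ['billing'], requestId: 'req_1', data: { amount: 12.5 } },
];

// Filter by level and tag combination
const billingErrors = logs.filter(
  (e) => e.level === 'error' && e.tags.includes('billing'),
);

// Correlate related events via requestId
const sameRequest = logs.filter((e) => e.requestId === 'req_1');

// Aggregate a numeric field from the data object
const totalAmount = billingErrors.reduce(
  (sum, e) => sum + (typeof e.data.amount === 'number' ? e.data.amount : 0),
  0,
);
```

None of this involves string parsing; every query is a plain predicate over fields.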
Datasets
Datasets let you group and route logs by purpose. Set a dataset field on any log to organize logs into logical collections — for example, separating billing logs from auth logs. This is useful for access control, retention policies, and focused analysis.
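A minimal sketch of how dataset-based grouping works, assuming dataset is a plain string field on each entry (the entry shape is illustrative):

```typescript
// Sketch: grouping entries into logical collections by a dataset field.
type DatasetEntry = { message: string; dataset?: string };

const logs: DatasetEntry[] = [
  { message: 'Invoice created',  dataset: 'billing' },
  { message: 'Login succeeded',  dataset: 'auth' },
  { message: 'Payment captured', dataset: 'billing' },
];

// Route each entry to its collection; entries without a dataset
// fall into a 'default' group (an assumption for this sketch).
const byDataset = new Map<string, DatasetEntry[]>();
for (const log of logs) {
  const key = log.dataset ?? 'default';
  const group = byDataset.get(key) ?? [];
  group.push(log);
  byDataset.set(key, group);
}
```

Once grouped this way, each collection can carry its own retention policy or access rules.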
Further Reading
- Log Levels — severity hierarchy and when to use each level
- Flows — grouping related logs across multi-step operations
- SDK Logging Methods: TypeScript | Python — how to send structured logs from your application