
Overview

Advanced logging infrastructure with structured logs, powerful search, log streaming, and integrations with external log management systems.

Status: 🔮 Planned for Q3 2026

Current State

Today, Triform provides basic logging:
  • Execution logs viewable in UI
  • Print statements captured from Actions
  • Error messages and stack traces
  • Retention per plan tier

What’s Coming

Structured Logging

Beyond plain text:
# Current (plain text)
print("User logged in")

# Future (structured)
log.info("user_logged_in", {
    "user_id": "user_123",
    "ip": "192.0.2.1",
    "timestamp": "2025-10-01T10:30:00Z"
})
Benefits:
  • Searchable by field
  • Aggregate and analyze
  • Create dashboards
  • Set up alerts

Log Levels

Standard severity levels:
  • TRACE: Most detailed
  • DEBUG: Debugging information
  • INFO: General information
  • WARN: Warnings
  • ERROR: Errors
  • FATAL: Critical failures
Control verbosity per environment:
  • Dev: DEBUG level
  • Staging: INFO level
  • Production: WARN level
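
A minimal sketch of how per-environment filtering could work; the TRIFORM_ENV variable, level ordering, and should_emit helper are illustrative assumptions, not the final API:

import os

# Hypothetical: pick the minimum level from the deployment environment.
LEVELS = ["TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL"]
ENV_LEVELS = {"dev": "DEBUG", "staging": "INFO", "production": "WARN"}

def should_emit(record_level: str) -> bool:
    env = os.environ.get("TRIFORM_ENV", "dev")  # assumed variable name
    minimum = ENV_LEVELS.get(env, "INFO")
    # Emit only records at or above the environment's minimum level.
    return LEVELS.index(record_level) >= LEVELS.index(minimum)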

Log Search

Powerful queries:
level:ERROR AND user_id:user_123 AND timestamp:>2025-10-01
Search by:
  • Log level
  • Timestamp range
  • Component (Action, Agent, Flow)
  • Custom fields
  • Free text
  • Regular expressions
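
The query above, expressed through the planned query call (see Log API below); the filters shape is an assumption:

from triform import Logger  # planned API

logs = Logger.query(
    level="ERROR",
    start_time="2025-10-01",
    filters={"user_id": "user_123"},
)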

Log Aggregation

Group and count:
  • Errors by type
  • Actions by execution count
  • Users by activity
  • Cost by component
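
Until server-side grouping lands, the same counts fall out of a client-side pass; a sketch, assuming query results are dicts with an error_type field:

from collections import Counter
from triform import Logger  # planned API

errors = Logger.query(level="ERROR", start_time="2025-10-01")
by_type = Counter(e.get("error_type", "unknown") for e in errors)
for error_type, count in by_type.most_common(10):
    print(error_type, count)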

Log Streaming

Real-time logs:
  • Tail logs live
  • WebSocket streaming
  • Server-Sent Events
  • Filter while streaming
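
A sketch of what tailing might look like; Logger.stream and the entry shape are assumptions about the eventual interface:

from triform import Logger  # planned API; stream() is an assumption

# Tail live logs, filtering while streaming.
for entry in Logger.stream(filters={"level": "ERROR"}):
    print(entry["timestamp"], entry["message"])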

Log Exports

Send logs elsewhere:
  • Datadog: APM and logs
  • Splunk: Enterprise log management
  • CloudWatch: AWS logs
  • Elasticsearch: Search and analytics
  • Custom webhook: Your own system
Configuration:
log_export:
  provider: datadog
  api_key: ${DATADOG_API_KEY}
  filters:
    - level: ERROR
    - level: WARN

Log Context

Automatic enrichment:
  • Trace ID
  • Execution ID
  • User ID
  • Project ID
  • Environment
  • Git commit (if integrated)
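
An illustrative shape for an enriched record; field names are assumptions:

record = {
    "level": "INFO",
    "event": "user_logged_in",
    # You pass the fields above; the platform attaches the rest:
    "trace_id": "trc_9f2c",
    "execution_id": "exec_771",
    "user_id": "user_123",
    "project_id": "proj_42",
    "environment": "production",
    "git_commit": "a1b2c3d",
}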

Sampling

Control volume:
  • Log all in dev
  • Sample in production
  • Always log errors
  • Smart sampling (log slow requests)

Log Retention Policies

Custom retention:
retention:
  ERROR: 90 days
  WARN: 60 days
  INFO: 30 days
  DEBUG: 7 days

Use Cases

Debugging

Find issues quickly:
level:ERROR AND component:payment_processor AND timestamp:last_hour

Monitoring

Track system health:
  • Error rate trends
  • Slow request identification
  • Resource usage patterns

Analytics

Business insights:
  • User behavior analysis
  • Feature usage tracking
  • Performance metrics

Compliance

Audit trails:
  • Who accessed what
  • When and from where
  • What actions were taken

Cost Attribution

Track spending:
  • LLM token usage by component
  • API calls by user
  • Execution costs by Project

Log API

from triform import Logger

# In your Action
log = Logger()

items = ["item_123", "item_456"]  # example input to your Action

def process_item(item_id: str) -> None:
    ...  # your processing logic

log.debug("Starting processing", {
    "item_count": len(items)
})

log.info("Processing complete", {
    "processed": 42,
    "failed": 2,
    "duration_ms": 1234
})

try:
    process_item("item_123")
except Exception as e:
    log.error("Failed to process item", {
        "item_id": "item_123",
        "error": str(e)
    })

# Query logs
logs = Logger.query(
    level="ERROR",
    start_time="2025-10-01",
    filters={"component": "payment_processor"}
)

Log Dashboard

Visual log analysis:
  • Recent logs table
  • Error rate chart
  • Log volume by level
  • Top error messages
  • Slowest executions

Alerts on Logs

Get notified:
alert:
  name: "High error rate"
  condition: "count(level:ERROR) > 10 in 5 minutes"
  notification:
    - slack: "#alerts"
    - email: "oncall@example.com"
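
Behind a condition like "count(level:ERROR) > 10 in 5 minutes" sits a sliding-window count; a purely illustrative sketch (the real evaluation would run server-side):

import time
from collections import deque

WINDOW_SECONDS = 5 * 60
THRESHOLD = 10
error_times = deque()

def on_error_log() -> bool:
    """Record an ERROR log; return True if the alert should fire."""
    now = time.monotonic()
    error_times.append(now)
    # Drop timestamps that have aged out of the window.
    while error_times and now - error_times[0] > WINDOW_SECONDS:
        error_times.popleft()
    return len(error_times) > THRESHOLD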

Log Sampling Strategies

Tail-based sampling

Sample after execution:
  • Always keep errors
  • Keep slow requests
  • Sample normal requests

Head-based sampling

Sample at start:
  • Percentage-based (1%, 10%, 100%)
  • Deterministic (by trace ID)
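
Deterministic sampling hashes the trace ID so a given trace is either fully kept or fully dropped; a minimal sketch:

import hashlib

def keep_trace(trace_id: str, sample_percent: float) -> bool:
    # Same trace ID always gets the same keep/drop decision.
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000
    return bucket < sample_percent * 100  # 1% keeps buckets 0-99

keep_trace("trc_9f2c", 1.0)    # 1% sampling
keep_trace("trc_9f2c", 100.0)  # always kept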

Priority sampling

Keep important logs:
  • All errors
  • Specific users (VIP customers)
  • High-value transactions
  • Compliance-required events

Integration Examples

Datadog

integrations:
  datadog:
    api_key: ${DATADOG_API_KEY}
    service: triform-prod
    tags:
      - env:production
      - team:backend

Splunk

integrations:
  splunk:
    endpoint: https://splunk.example.com:8088
    token: ${SPLUNK_HEC_TOKEN}
    index: triform_logs

Custom Webhook

integrations:
  webhook:
    url: https://logs.example.com/ingest
    headers:
      Authorization: "Bearer ${API_KEY}"
    format: json

Performance Considerations

Logging overhead:
  • Structured logging: ~1-2ms per log
  • Async logging: Minimal impact
  • Sampling: Reduces volume and cost
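
The "minimal impact" of async logging comes from a buffered hand-off: the hot path only enqueues, and a background thread does the slow I/O. A sketch of the pattern (send_to_backend is a placeholder):

import queue
import threading

log_queue = queue.Queue(maxsize=10_000)

def send_to_backend(record: dict) -> None:
    ...  # placeholder for the real network/disk write

def emit(record: dict) -> None:
    try:
        log_queue.put_nowait(record)  # never block the request path
    except queue.Full:
        pass  # shed load rather than slow the caller

def _drain() -> None:
    while True:
        send_to_backend(log_queue.get())

threading.Thread(target=_drain, daemon=True).start()
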
Storage:
  • Compressed storage
  • Tiered retention
  • Auto-archive to S3

Pricing

Included in all plans:
  • Basic logging (current)
  • Structured logging
  • Search and filtering
Add-on for Enterprise:
  • Extended retention (>90 days)
  • High-volume exports
  • Premium integrations

Timeline

Q3 2026: Structured logging, log levels
Q4 2026: Advanced search, streaming
Q1 2027: External integrations, alerts

Get Notified

Sign up: triform.ai/advanced-logging-beta

Questions?
