SIEM Integration Guide: Connecting Audit Logs to Splunk, Datadog & Elastic
Why Do Enterprise Customers Need Audit Logs in Their SIEM?
Security Information and Event Management (SIEM) platforms are the central nervous system of enterprise security operations. Security analysts monitor Splunk, Datadog, or Elastic dashboards for anomalies across every system in their environment — identity providers, cloud infrastructure, SaaS applications, and internal tools. If your audit logs are not in their SIEM, your application is a blind spot.
For enterprise SaaS sales, SIEM integration is increasingly a procurement requirement. Security teams evaluate vendors on whether audit events can be ingested into their existing monitoring pipeline. Providing native SIEM integration removes a common objection during enterprise sales cycles and demonstrates security maturity.
Should You Stream Events or Batch Export Them?
There are two primary delivery models for getting audit events into a SIEM:
| Approach | Latency | Complexity | Best For |
|---|---|---|---|
| Real-time streaming | Seconds | Higher (webhook/HTTP delivery, retry logic, backpressure) | Active security monitoring, incident detection, SOC dashboards |
| Batch export | Minutes to hours | Lower (scheduled job, file upload to S3/SFTP) | Compliance reporting, historical analysis, cost-sensitive deployments |
Most enterprise customers want real-time streaming for their Security Operations Center (SOC) and batch export as a fallback for data completeness verification. AuditKit supports both: webhook-based streaming with configurable batching windows (1 second to 5 minutes) and scheduled exports to S3, Azure Blob, or GCS.
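A configurable batching window can be sketched as a simple buffer that flushes when either the window elapses or the buffer fills. The class name and thresholds below are illustrative, not AuditKit's actual API:

```python
import time

class BatchWindow:
    """Buffer events and flush when the window elapses or the buffer fills.
    Thresholds are illustrative; a production version would also flush on a
    background timer so a quiet stream does not hold events indefinitely."""

    def __init__(self, flush, window_seconds=30, max_events=500):
        self.flush = flush                  # callback that delivers a batch
        self.window = window_seconds
        self.max = max_events
        self.buffer = []
        self.opened = time.monotonic()      # when the current window started

    def add(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.max or time.monotonic() - self.opened >= self.window:
            self.flush(self.buffer)
            self.buffer = []
            self.opened = time.monotonic()
```

Setting `window_seconds` near zero approximates real-time streaming; larger windows trade latency for fewer, cheaper delivery calls.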
How Do You Format Events for Splunk?
Splunk ingests data via HTTP Event Collector (HEC). Events must be formatted as JSON with Splunk-specific envelope fields:
```json
{
  "time": 1773843721,
  "sourcetype": "auditkit:event",
  "source": "auditkit",
  "host": "your-app.com",
  "index": "audit_logs",
  "event": {
    "actor": {"id": "user_831", "type": "user"},
    "action": "document.deleted",
    "target": {"id": "doc_492", "type": "document"},
    "tenantId": "org_55",
    "timestamp": "2026-03-18T14:22:01Z",
    "metadata": {"reason": "user_request"}
  }
}
```
Key considerations for Splunk: use epoch timestamps in the time field (not ISO 8601) for accurate time indexing. Set a meaningful sourcetype so Splunk administrators can build field extractions and dashboards specific to your audit events. Use a dedicated index if the customer's Splunk admin allows it — this simplifies retention management and access control within Splunk.
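As a sketch, delivery to HEC needs only an epoch-seconds conversion and a token header. The URL, token, and helper names below are placeholders, not AuditKit's actual API:

```python
import json
import time
import urllib.request
from datetime import datetime

def to_hec_envelope(event: dict, host: str = "your-app.com") -> dict:
    """Wrap an AuditKit-style event in a Splunk HEC envelope."""
    ts = event.get("timestamp")
    # Splunk wants epoch seconds in "time", not ISO 8601
    epoch = (int(datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp())
             if ts else int(time.time()))
    return {
        "time": epoch,
        "sourcetype": "auditkit:event",
        "source": "auditkit",
        "host": host,
        "index": "audit_logs",
        "event": event,
    }

def send_to_splunk(event: dict, hec_url: str, token: str) -> int:
    # hec_url is the customer's collector endpoint,
    # e.g. https://splunk.example.com:8088/services/collector/event
    req = urllib.request.Request(
        hec_url,
        data=json.dumps(to_hec_envelope(event)).encode(),
        headers={
            "Authorization": f"Splunk {token}",  # HEC token auth scheme
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```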
How Do You Format Events for Datadog?
Datadog ingests audit events via its Log Management API or through the Datadog Agent. The API approach is simpler for SaaS-to-SIEM integration:
```json
{
  "ddsource": "auditkit",
  "ddtags": "env:production,service:your-app,tenant:org_55",
  "hostname": "your-app.com",
  "message": "document.deleted by user_831",
  "service": "your-app",
  "status": "info",
  "audit": {
    "actor": {"id": "user_831", "type": "user"},
    "action": "document.deleted",
    "target": {"id": "doc_492", "type": "document"},
    "tenantId": "org_55"
  }
}
```
Datadog uses tags extensively for filtering and aggregation. Include the tenant ID as a tag so security teams can build per-tenant dashboards. Set ddsource to enable Datadog's built-in parsing pipelines, and include a human-readable message field for the log explorer view.
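A minimal sender for Datadog's Log Management API might look like the sketch below. The intake URL shown is the v2 endpoint for Datadog's US1 site (other sites use different hostnames), and the helper names are illustrative:

```python
import json
import urllib.request

def to_datadog_log(event: dict, tenant_id: str) -> dict:
    """Shape follows the sample payload above; the tenant ID goes into
    ddtags so per-tenant dashboards can filter on it."""
    return {
        "ddsource": "auditkit",
        "ddtags": f"env:production,service:your-app,tenant:{tenant_id}",
        "hostname": "your-app.com",
        "service": "your-app",
        "status": "info",
        "message": f"{event['action']} by {event['actor']['id']}",
        "audit": event,
    }

def send_to_datadog(events: list, api_key: str,
                    intake_url: str = "https://http-intake.logs.datadoghq.com/api/v2/logs") -> int:
    # The v2 intake accepts a JSON array of log entries in one request
    body = json.dumps([to_datadog_log(e, e["tenantId"]) for e in events]).encode()
    req = urllib.request.Request(
        intake_url,
        data=body,
        headers={"DD-API-KEY": api_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```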
How Do You Format Events for Elastic (ELK Stack)?
Elasticsearch accepts JSON documents directly. For audit events, use the Elastic Common Schema (ECS) to maximize compatibility with Kibana dashboards and detection rules:
```json
{
  "@timestamp": "2026-03-18T14:22:01.000Z",
  "event.kind": "event",
  "event.category": ["iam"],
  "event.type": ["deletion"],
  "event.action": "document.deleted",
  "event.outcome": "success",
  "user.id": "user_831",
  "user.name": "jane.doe@company.com",
  "source.ip": "192.168.1.42",
  "organization.id": "org_55",
  "auditkit.target.id": "doc_492",
  "auditkit.target.type": "document",
  "auditkit.hash": "a4f2e8c1..."
}
```
ECS compliance matters because Elastic's built-in security detection rules and SIEM dashboards expect ECS-formatted fields. Mapping your audit events to ECS fields like event.action, user.id, and event.outcome means customers get working dashboards immediately without custom configuration.
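A mapping from the AuditKit-style schema used earlier into ECS field names can be sketched as follows. The action-suffix heuristic for event.type is an assumption for illustration, not part of ECS:

```python
def to_ecs_document(event: dict) -> dict:
    """Map an AuditKit-style event to flat ECS field names.
    event.category and event.type values are drawn from ECS's allowed lists;
    vendor-specific fields go under a custom "auditkit.*" namespace."""
    return {
        "@timestamp": event["timestamp"],
        "event.kind": "event",
        "event.category": ["iam"],
        # Crude heuristic: classify *.deleted actions as deletions, else changes
        "event.type": ["deletion"] if event["action"].endswith(".deleted") else ["change"],
        "event.action": event["action"],
        "event.outcome": "success",
        "user.id": event["actor"]["id"],
        "organization.id": event["tenantId"],
        "auditkit.target.id": event["target"]["id"],
        "auditkit.target.type": event["target"]["type"],
    }
```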
How Do You Handle Delivery Reliability?
SIEM integration must handle network failures, SIEM outages, and rate limiting without losing events. The standard reliability pattern:
- Persistent queue — write events to a durable queue (PostgreSQL, Redis Streams, or Kafka) before attempting delivery. The audit event is committed to your database regardless of SIEM availability.
- At-least-once delivery — retry failed deliveries with exponential backoff. Accept that the SIEM may receive duplicates; most SIEMs handle deduplication by event ID.
- Dead letter queue — after a configurable number of retries (typically 5-10), move the event to a dead letter queue for manual review. Alert the customer's admin that delivery has stalled.
- Backpressure handling — if the SIEM returns HTTP 429 (rate limited), respect the Retry-After header. Batch events during backpressure periods and flush when the rate limit clears.
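The retry and dead-letter rules above can be expressed as two small pure functions. Names and thresholds are illustrative:

```python
import random

def next_delay(attempt, retry_after=None, base=1.0, cap=60.0):
    """Seconds to wait before retry number `attempt` (0-based).
    If the SIEM sent a Retry-After (HTTP 429), honor it; otherwise use
    exponential backoff with full jitter, capped at `cap` seconds."""
    if retry_after is not None:
        return retry_after
    return random.uniform(0, min(cap, base * 2 ** attempt))

def should_dead_letter(attempt, max_attempts=8):
    # After max_attempts failed deliveries, stop retrying and park the
    # event in the dead letter queue for manual review
    return attempt >= max_attempts
```

Full jitter spreads retries out so that a fleet of workers does not hammer a recovering SIEM in lockstep.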
AuditKit's SIEM streaming uses a persistent queue backed by the same database that stores audit events. Events are never lost — even if a SIEM is offline for hours, queued events are delivered once connectivity resumes. The dashboard shows delivery status, queue depth, and error rates per SIEM destination.
What About Multi-SIEM and Per-Tenant Configuration?
Enterprise deployments often involve multiple SIEM targets. Your largest customer might use Splunk for their SOC while their compliance team uses a separate Elastic instance. Some customers want all events; others want only high-severity events like permission changes and failed authentication.
Design your SIEM integration with per-tenant, per-destination configuration:
- Destination — URL, authentication credentials, format (Splunk HEC, Datadog API, Elastic bulk)
- Filter — which event types to send (e.g., only *.deleted and auth.* events)
- Format — output schema (Splunk JSON, Datadog JSON, ECS, raw AuditKit schema, or OCSF)
- Batching — delivery window (real-time, every 30 seconds, every 5 minutes)
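One way to model this is a small per-destination record with glob-style event filters. Field names here are illustrative, not AuditKit's actual API:

```python
from dataclasses import dataclass, field
from fnmatch import fnmatch

@dataclass
class SiemDestination:
    """One SIEM destination for one tenant; a tenant may have several."""
    url: str
    format: str                                  # "splunk_hec" | "datadog" | "ecs" | "ocsf" | "raw"
    event_filters: list = field(default_factory=lambda: ["*"])  # glob patterns
    batch_seconds: int = 0                       # 0 = real-time delivery

    def accepts(self, action: str) -> bool:
        # Glob matching supports patterns like "*.deleted" and "auth.*"
        return any(fnmatch(action, pattern) for pattern in self.event_filters)

# Example: a SOC destination that only wants deletions and auth events
soc = SiemDestination(
    url="https://splunk.example.com:8088/services/collector/event",
    format="splunk_hec",
    event_filters=["*.deleted", "auth.*"],
)
```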
AuditKit supports multiple SIEM destinations per tenant on Business and Enterprise plans. Each destination is independently configured with its own filter rules, format, and delivery schedule. Credential storage uses AES-256 encryption at rest.
How Do You Test SIEM Integration?
Testing SIEM integration is notoriously difficult because you need a running SIEM instance to validate formatting and delivery. Three approaches that work:
- Test event endpoint — provide a "Send Test Event" button in your dashboard that delivers a sample event to the configured SIEM destination. The customer can verify it appears in their SIEM before enabling production streaming.
- Local SIEM containers — maintain Docker Compose configurations for Splunk, Elastic, and a Datadog log receiver. Run integration tests against these containers in CI.
- Format validation — validate event payloads against the SIEM's expected schema before delivery. Catch formatting errors at the source rather than discovering them in the SIEM's error logs.
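The third approach can be as simple as a pre-flight check on the outgoing envelope. The sketch below targets the Splunk HEC format shown earlier; the checks are illustrative, not Splunk's full schema:

```python
def validate_hec_payload(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload looks
    deliverable. Catching these at the source avoids silent drops that only
    surface in the SIEM's error logs."""
    errors = []
    if not isinstance(payload.get("time"), (int, float)):
        errors.append("time must be an epoch number, not ISO 8601")
    if not payload.get("sourcetype"):
        errors.append("sourcetype is required for field extractions")
    if not isinstance(payload.get("event"), dict):
        errors.append("event body must be a JSON object")
    return errors
```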
Key Takeaways
- SIEM integration is a procurement requirement for enterprise SaaS — not a nice-to-have.
- Support both real-time streaming and batch export; most customers want both.
- Format events for each SIEM's native schema: Splunk HEC, Datadog tags, Elastic ECS.
- Use a persistent queue with at-least-once delivery and dead letter handling for reliability.
- Support per-tenant, per-destination configuration with independent filters and formatting.
- Provide a test event endpoint so customers can validate integration before enabling production delivery.
Ready to ship audit logging?
AuditKit gives you tamper-evident audit trails and SOC 2 evidence collection in one platform. Start free.