# SIEM Audit Sinks

Arcan can stream audit events to external SIEM platforms in real time. Every secret access, policy change, and auth event is forwarded to one or more configured sinks alongside the built-in audit log.
## Architecture

All sinks implement a generic `Sink` interface (`Send`, `Name`, `Close`). A `Dispatcher` fans out each audit event to all registered sinks asynchronously. Failed deliveries are retried with exponential backoff (1s, 2s, 4s) for HTTP-based sinks.

```
Audit Event
└── Dispatcher
    ├── Built-in store (SQLite/PostgreSQL)
    ├── Sink: Splunk
    ├── Sink: Datadog
    └── Sink: File (for log shippers)
```
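The fan-out and retry behavior described above can be sketched in Python. This is an illustrative rendering, not Arcan's actual source: the `Sink` protocol and `Dispatcher` names mirror the doc, but the real dispatcher runs sinks asynchronously, while this sketch is sequential with an injectable `sleep` so it can be tested without waiting.

```python
import time
from typing import Protocol


class Sink(Protocol):
    """Mirrors the Send/Name/Close interface described above."""
    def send(self, event: dict) -> None: ...
    def name(self) -> str: ...
    def close(self) -> None: ...


class Dispatcher:
    """Fans out each audit event to every registered sink, retrying
    failed deliveries with the 1s/2s/4s backoff described above."""

    BACKOFF = (1, 2, 4)  # seconds slept before each retry

    def __init__(self, sinks, sleep=time.sleep):
        self.sinks = sinks
        self.sleep = sleep  # injectable for testing

    def dispatch(self, event: dict) -> None:
        for sink in self.sinks:
            self._send_with_retry(sink, event)

    def _send_with_retry(self, sink, event) -> None:
        for delay in (0,) + self.BACKOFF:  # initial try + 3 retries
            if delay:
                self.sleep(delay)
            try:
                sink.send(event)
                return
            except ConnectionError:
                continue  # after the last retry the event is dropped
```

Because delivery failures are swallowed per sink, one unreachable platform never prevents the others (or the built-in store) from receiving the event.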
## Supported Platforms
| Sink | Protocol | Auth Method |
|---|---|---|
| Splunk | HTTPS (HEC) | HEC token |
| Microsoft Sentinel | HTTPS (Data Collector API) | Workspace ID + shared key |
| Elastic | HTTPS (Bulk API) | API key |
| CrowdStrike Falcon LogScale | HTTPS (HEC-compatible) | Ingest token |
| Palo Alto Cortex XSIAM | Syslog (TCP/TLS, CEF) | TLS certificate |
| Datadog | HTTPS (Log Intake) | API key |
| Google Chronicle | HTTPS (Ingestion API) | OAuth token |
| Syslog | TCP/UDP/TLS (RFC 5424) | None / TLS |
| Webhook | HTTPS (POST) | Bearer token / custom headers |
| File | Local filesystem (JSON-lines) | N/A |
## Configuration

Add an `audit.sinks` section to your `config.yaml`:
```yaml
audit:
  sinks:
    - type: splunk
      enabled: true
      endpoint: "https://splunk.example.com:8088"
      token: "your-hec-token"
      options:
        index: security
        sourcetype: "arcan:audit"
        source: arcan
```
Sinks with `enabled: false` are ignored. Multiple sinks can run simultaneously.
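The filtering of disabled entries is straightforward once the YAML is parsed. A sketch (`active_sinks` is a hypothetical helper operating on a parsed-dict structure matching the config above):

```python
def active_sinks(config: dict) -> list:
    """Return only the sink entries with enabled: true."""
    return [sink for sink in config.get("audit", {}).get("sinks", [])
            if sink.get("enabled")]


config = {"audit": {"sinks": [
    {"type": "splunk", "enabled": True},
    {"type": "webhook", "enabled": False},
]}}
print([s["type"] for s in active_sinks(config)])  # ['splunk']
```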
## Platform Setup

### Splunk
Uses the HTTP Event Collector (HEC). Create an HEC token in Splunk under Settings > Data Inputs > HTTP Event Collector.
```yaml
- type: splunk
  enabled: true
  endpoint: "https://splunk.example.com:8088"
  token: "your-hec-token"
  options:
    index: security
    sourcetype: "arcan:audit"
    source: arcan
```
### Microsoft Sentinel
Uses the Log Analytics Data Collector API. Get the workspace ID and shared key from Azure Portal > Log Analytics workspace > Agents.
```yaml
- type: sentinel
  enabled: true
  endpoint: ""  # auto-derived from workspace_id
  token: "base64-encoded-shared-key"
  options:
    workspace_id: "your-workspace-id"
    log_type: ArcanAudit
```
The `token` field must contain the base64-encoded shared key (as shown in the Azure portal).
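The Data Collector API authenticates each request with an HMAC-SHA256 signature built from the shared key. A sketch of the documented signing scheme, showing why the key must stay base64-encoded in the config (it is decoded just before signing); the function name is illustrative:

```python
import base64
import hashlib
import hmac


def sentinel_auth_header(workspace_id: str, shared_key_b64: str,
                         content_length: int, rfc1123_date: str) -> str:
    """Build the Authorization header for the Data Collector API.

    The portal shows the shared key base64-encoded; it is decoded
    before being used as the HMAC-SHA256 key.
    """
    string_to_sign = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{rfc1123_date}\n/api/logs")
    key = base64.b64decode(shared_key_b64)
    signature = hmac.new(key, string_to_sign.encode("utf-8"),
                         hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(signature).decode()}"
```

The same `x-ms-date` value used in the signature must be sent as a request header, or the service rejects the signature.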
### Elastic
Uses the Elasticsearch Bulk API. Create an API key in Kibana under Stack Management > API Keys.
```yaml
- type: elastic
  enabled: true
  endpoint: "https://elastic.example.com:9200"
  token: "your-api-key"
  options:
    index: arcan-audit
```
### CrowdStrike Falcon LogScale
Uses a HEC-compatible ingestion endpoint. Create an ingest token in LogScale under Settings > Ingest tokens.
```yaml
- type: crowdstrike
  enabled: true
  endpoint: "https://cloud.community.humio.com"
  token: "your-ingest-token"
  options:
    repository: arcan
```
### Palo Alto Cortex XSIAM
Sends CEF-formatted syslog messages over TCP or TLS.
```yaml
- type: cortex
  enabled: true
  endpoint: "cortex-syslog.example.com:514"
  options:
    facility: local0
    tls: "true"
```
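CEF lines have a fixed pipe-delimited header followed by space-separated extension key=value pairs. A sketch of how an audit event could map onto that layout; the vendor/product strings, severity, and field mapping here are assumptions for illustration, not Arcan's documented output:

```python
def to_cef(event: dict, version: str = "1.0") -> str:
    """Render an audit event as a CEF line.

    Header layout per the CEF spec:
    CEF:0|Vendor|Product|Version|EventClassID|Name|Severity|Extensions
    Pipes and backslashes in header values must be escaped.
    """
    esc = lambda v: str(v).replace("\\", "\\\\").replace("|", "\\|")
    header = "|".join(["CEF:0", "Arcan", "Arcan", esc(version),
                       esc(event["action"]), esc(event["action"]), "5"])
    extensions = " ".join(f"{key}={value}" for key, value in [
        ("suser", event.get("actor", "")),    # CEF "source user"
        ("src", event.get("ip", "")),         # source address
        ("outcome", event.get("status", "")),
    ])
    return f"{header}|{extensions}"
```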
### Datadog
Uses the HTTP log intake API. Get your API key from Datadog under Organization Settings > API Keys.
```yaml
- type: datadog
  enabled: true
  token: "your-dd-api-key"
  options:
    site: datadoghq.com
    tags: "env:prod,service:arcan"
```
### Google Chronicle
Uses the Ingestion API with OAuth authentication. Create a service account with Chronicle Ingestion permissions.
```yaml
- type: chronicle
  enabled: true
  endpoint: "https://malachiteingestion-pa.googleapis.com"
  token: "oauth-service-account-token"
  options:
    customer_id: "your-chronicle-customer-id"
```
### Syslog
Standard RFC 5424 syslog over TCP, UDP, or TLS.
```yaml
- type: syslog
  enabled: true
  endpoint: "syslog.example.com:514"
  options:
    protocol: tcp
    facility: auth
    format: rfc5424
    tls: "false"
```
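In RFC 5424 framing, the leading PRI value is `facility * 8 + severity`. A sketch of how an audit event could be framed under the `auth` facility configured above; hostname, severity, and the JSON message body are illustrative assumptions:

```python
import json

# RFC 5424 facility codes for the facilities used in this doc
FACILITIES = {"auth": 4, "local0": 16}


def rfc5424_line(event: dict, facility: str = "auth",
                 hostname: str = "arcan-prod-01",
                 severity: int = 6) -> str:
    """Frame an audit event as an RFC 5424 syslog message.

    PRI = facility * 8 + severity; "-" marks the fields this sketch
    leaves unset (PROCID, MSGID, structured data).
    """
    pri = FACILITIES[facility] * 8 + severity
    msg = json.dumps(event, separators=(",", ":"))
    return f"<{pri}>1 {event['timestamp']} {hostname} arcan - - - {msg}"
```

With `auth` (4) and informational severity (6), every line starts with `<38>1`.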
### Webhook
Generic HTTP POST to any endpoint. Works with Slack, PagerDuty, custom APIs, etc.
```yaml
- type: webhook
  enabled: true
  endpoint: "https://hooks.example.com/audit"
  token: "bearer-token"
  options:
    method: POST
    headers: "X-Source=arcan,X-Env=prod"
```
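The `headers` option packs custom headers into one comma-separated string of `Key=Value` pairs. A sketch of how that string splits into a header map (format inferred from the example above; the helper name is hypothetical):

```python
def parse_headers(raw: str) -> dict:
    """Parse a comma-separated Key=Value headers string into a dict."""
    headers = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue  # tolerate trailing commas
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers
```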
### File
JSON-lines output for consumption by log shippers (Fluentd, Filebeat, Vector).
```yaml
- type: file
  enabled: true
  options:
    path: /var/log/arcan/audit.jsonl
    rotate: daily
    max_size: "104857600"  # 100 MiB
    max_files: "10"
```
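The file sink's core behavior is append-one-JSON-line-per-event plus a size check against `max_size`. A minimal sketch of that loop (illustrative only; daily rotation and `max_files` pruning are not shown):

```python
import json
import os


def append_jsonl(path: str, event: dict,
                 max_size: int = 104857600) -> bool:
    """Append one audit event as a JSON line; return True when the
    file has reached max_size and should be rotated."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event, separators=(",", ":")) + "\n")
    return os.path.getsize(path) >= max_size
```

One event per line is what shippers like Fluentd, Filebeat, and Vector expect from a JSON-lines source.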
## Multiple Sinks
Enable multiple sinks to forward events to several platforms simultaneously:
```yaml
audit:
  sinks:
    - type: splunk
      enabled: true
      endpoint: "https://splunk.example.com:8088"
      token: "hec-token"
      options:
        index: security
    - type: file
      enabled: true
      options:
        path: /var/log/arcan/audit.jsonl
        rotate: daily
    - type: webhook
      enabled: false  # disabled; will not receive events
      endpoint: "https://hooks.example.com/audit"
```
## Troubleshooting

**Sink fails to initialize:** Check the server logs at startup. Missing required fields (`endpoint`, `token`, `workspace_id`) produce clear error messages naming the sink and the missing field.

**Events not arriving:** Verify `enabled: true` in your config, and check that the endpoint is reachable from the Arcan server. HTTP sinks retry 3 times with exponential backoff before dropping an event.

**TLS errors:** The HTTP client uses the system trust store, so self-signed certificates on sink endpoints are rejected by default. Add the CA certificate to the system trust store, or use the file sink as a local buffer and ship with a log agent that handles TLS.

**High latency:** Sinks send events asynchronously via the dispatcher, so a slow sink does not block request processing. If a sink is consistently slow, check network connectivity and consider using the file sink with a log shipper instead.
## Complete Splunk Setup

### Full `config.yaml`
```yaml
server:
  address: ":8443"
  tls:
    cert: /etc/arcan/tls/server.crt
    key: /etc/arcan/tls/server.key

audit:
  sinks:
    - type: splunk
      enabled: true
      endpoint: "https://splunk-hec.example.com:8088"
      token: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
      options:
        index: security
        sourcetype: "arcan:audit"
        source: arcan-prod
```
### What events look like in Splunk

Each audit event arrives in Splunk as a JSON object under the `arcan:audit` sourcetype. The key fields:
| Field | Example | Description |
|---|---|---|
| `event.action` | `secret.read` | What happened |
| `event.actor` | `user:alice@example.com` | Who did it |
| `event.realm` | `production` | Which realm |
| `event.resource` | `DATABASE_URL` | Which secret/resource |
| `event.env` | `prod` | Environment |
| `event.ip` | `10.0.1.42` | Source IP address |
| `event.timestamp` | `2026-04-03T14:23:01Z` | When it happened |
| `event.status` | `success` or `denied` | Whether it succeeded |
Common event actions: `secret.read`, `secret.write`, `secret.delete`, `auth.login`, `auth.logout`, `auth.login_failed`, `realm.create`, `token.create`, `token.revoke`, `policy.update`, `engine.generate`, `engine.revoke`.
### Setting up Splunk alerts

To alert on failed authentication attempts, create a Splunk saved search with this query:

```
sourcetype="arcan:audit" event.action="auth.login_failed"
| stats count by event.actor, event.ip
| where count > 5
```
Alert condition: Trigger when the number of results is greater than 0.
This fires when any single actor or IP has more than 5 failed login attempts in the search window.
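The aggregation the saved search performs can be replicated in plain Python, which is handy for unit-testing alert logic against sample events before deploying it. The function name is illustrative:

```python
from collections import Counter


def failed_login_offenders(events, threshold: int = 5) -> dict:
    """Count auth.login_failed events per (actor, ip) pair and keep
    pairs above the threshold, mirroring the saved search above."""
    counts = Counter(
        (e["event"]["actor"], e["event"]["ip"])
        for e in events
        if e["event"]["action"] == "auth.login_failed"
    )
    return {pair: n for pair, n in counts.items() if n > threshold}
```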
Other useful alert queries:
```
# Alert on secret deletion in production
sourcetype="arcan:audit" event.action="secret.delete" event.realm="production"

# Alert on new token creation
sourcetype="arcan:audit" event.action="token.create"

# Alert on policy changes
sourcetype="arcan:audit" event.action="policy.update"
```
## Complete Datadog Setup

### Full `config.yaml`
```yaml
server:
  address: ":8443"
  tls:
    cert: /etc/arcan/tls/server.crt
    key: /etc/arcan/tls/server.key

audit:
  sinks:
    - type: datadog
      enabled: true
      token: "dd_api_key_abc123def456"
      options:
        site: datadoghq.com
        tags: "env:prod,service:arcan,team:security"
```
### What events look like in Datadog

Events appear in Datadog Logs under the `arcan` service. Each log entry includes:
| Attribute | Example | Description |
|---|---|---|
| `@action` | `secret.read` | The audit action |
| `@actor` | `user:alice@example.com` | Who performed the action |
| `@realm` | `production` | Realm scope |
| `@resource` | `DATABASE_URL` | Target resource |
| `@env` | `prod` | Environment |
| `@status` | `success` | Outcome |
| `@ip` | `10.0.1.42` | Client IP |
| `host` | `arcan-prod-01` | Arcan server hostname |
### Setting up Datadog monitors

Create a log-based monitor to alert on suspicious activity:

1. Go to Monitors > New Monitor > Log Monitor.
2. Define the search query:

   ```
   service:arcan @action:auth.login_failed
   ```

3. Set the alert condition: alert when the count is above 5 over the last 5 minutes.
4. Configure notification: route to your security Slack channel or PagerDuty.
Additional monitor examples:
```
# Production secret access outside business hours
service:arcan @action:secret.read @realm:production @env:prod

# High volume of secret reads (potential enumeration)
# Alert threshold: > 100 events in 1 minute
service:arcan @action:secret.read

# Any engine credential generation
service:arcan @action:engine.generate
```
Use Datadog's `tags` option to add `env`, `service`, and `team` tags. This lets you filter audit events in dashboards and set up team-specific alerting without modifying the Arcan configuration.
## Audit Event Fields Reference
Every audit event sent to any SIEM sink contains the same JSON structure. Here is the complete field reference:
| Field | Type | Example | Description |
|---|---|---|---|
| `event.action` | string | `secret.read` | The action performed (see full list below) |
| `event.actor` | string | `user:alice@example.com` | The identity that performed the action (`user:`, `token:`, or `system:`) |
| `event.realm` | string | `production` | The realm scope of the action |
| `event.resource` | string | `DATABASE_URL` | The target resource (secret key, realm name, token name) |
| `event.env` | string | `prod` | The environment within the realm |
| `event.ip` | string | `10.0.1.42` | Source IP address of the request |
| `event.timestamp` | string (ISO 8601) | `2026-04-03T14:23:01Z` | When the event occurred |
| `event.status` | string | `success` or `denied` | Whether the action succeeded |
| `event.user_agent` | string | `arcan-sdk-go/0.4.2` | Client identifier (SDK, CLI, browser) |
| `event.token_name` | string | `k8s-eso-operator` | Name of the API token used (if token auth) |
| `event.request_id` | string | `req_a1b2c3d4` | Unique ID for correlating with server logs |
| `event.server` | string | `arcan-prod-01` | Hostname of the Arcan server |
### All Event Actions
| Action | Category | Description |
|---|---|---|
| `secret.read` | Secrets | A secret value was read |
| `secret.write` | Secrets | A secret was created or updated |
| `secret.delete` | Secrets | A secret was deleted |
| `auth.login` | Auth | Successful authentication |
| `auth.logout` | Auth | Session ended |
| `auth.login_failed` | Auth | Failed authentication attempt |
| `realm.create` | Realms | A new realm was created |
| `token.create` | Tokens | A new API token was created |
| `token.revoke` | Tokens | An API token was revoked |
| `policy.update` | Policies | An access policy was changed |
| `engine.generate` | Engines | A dynamic credential was generated (e.g., PostgreSQL temp user) |
| `engine.revoke` | Engines | A dynamic credential was revoked |
### Sample Raw Event (JSON)

This is the exact JSON payload sent to HTTP-based sinks (Splunk HEC, Datadog, Elastic, webhooks):
```json
{
  "event": {
    "action": "secret.read",
    "actor": "token:k8s-eso-operator",
    "realm": "production",
    "resource": "DATABASE_URL",
    "env": "prod",
    "ip": "10.0.1.42",
    "timestamp": "2026-04-03T14:23:01.847Z",
    "status": "success",
    "user_agent": "external-secrets/0.9.11",
    "token_name": "k8s-eso-operator",
    "request_id": "req_7f3a2b1c",
    "server": "arcan-prod-01"
  }
}
```
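When building downstream processors or SIEM parsers, it helps to validate payloads against the field reference above. A sketch; the required-field subset and helper name are illustrative choices, not an Arcan-documented schema check:

```python
# A subset of fields from the reference table, chosen for illustration.
REQUIRED_FIELDS = ("action", "actor", "realm", "resource",
                   "timestamp", "status")


def missing_fields(payload: dict) -> list:
    """Return the required event keys absent from a sink payload."""
    event = payload.get("event", {})
    return [field for field in REQUIRED_FIELDS if field not in event]
```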