Archive logs

Big picture

Archive Calico Enterprise logs to SIEM and storage destinations such as Syslog, Splunk, or Amazon S3 to meet compliance storage requirements.

Value

Archiving your Calico Enterprise Elasticsearch logs to storage services like Amazon S3, Syslog, or Splunk is a reliable way to maintain and consolidate your compliance data long term.

Before you begin

Supported logs for export

  • Syslog - flow, dns, idsevents, audit
  • Splunk - flow, audit, dns
  • Amazon S3 - l7, flow, dns, audit
note

If you're archiving logs for a system that includes Kubernetes clusters and non-cluster hosts or VMs, the default configuration archives all logs (both cluster and non-cluster) together. For information on how to configure this behavior, see Control which hosts have their logs archived.

Non-cluster hosts only generate a subset of the log types generated by Kubernetes cluster hosts. For more information, see the non-cluster hosts documentation.

How to

Set up log archiving

note

Because Calico Enterprise and Kubernetes logs are integral to Calico Enterprise diagnostics, there is no mechanism to tune down the verbosity. To manage log verbosity, filter logs using your SIEM.

  1. Create an Amazon S3 bucket to store your logs. You will need the bucket name, region, access key ID, secret access key, and bucket path in the following steps.

  2. Create a Secret in the tigera-operator namespace named log-collector-s3-credentials with the fields key-id and key-secret. Example:

     kubectl create secret generic log-collector-s3-credentials \
       --from-literal=key-id=<AWS-access-key-id> \
       --from-literal=key-secret=<AWS-secret-key> \
       -n tigera-operator
  3. Update the LogCollector resource named tigera-secure to include an S3 section with the information you noted above. Example:

     apiVersion: operator.tigera.io/v1
     kind: LogCollector
     metadata:
       name: tigera-secure
     spec:
       additionalStores:
         s3:
           bucketName: <S3-bucket-name>
           bucketPath: <path-in-S3-bucket>
           region: <S3-bucket-region>

    You can do this during installation by editing custom-resources.yaml before applying it, or after installation by editing the resource with the command:

    kubectl edit logcollector tigera-secure
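
Syslog and Splunk destinations are configured through the same additionalStores block. As a minimal sketch of a Syslog store (the example endpoint and the exact logTypes values are drawn from the LogCollector API reference and should be verified against your version):

apiVersion: operator.tigera.io/v1
kind: LogCollector
metadata:
  name: tigera-secure
spec:
  additionalStores:
    syslog:
      # Syslog endpoint, in the format protocol://host:port
      endpoint: tcp://1.2.3.4:514
      # Log types to forward; these correspond to the supported Syslog logs listed above
      logTypes:
        - Audit
        - DNS
        - Flows
        - IDSEvents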

Control which hosts have their logs archived

By default, logs are archived from both Kubernetes cluster hosts and non-cluster hosts. To archive logs for only non-cluster hosts or VMs, use the hostScope field when you set your additional storage spec on the LogCollector resource.

Example spec for Splunk
apiVersion: operator.tigera.io/v1
kind: LogCollector
metadata:
  name: tigera-secure
spec:
  additionalStores:
    splunk:
      # Splunk HTTP Event Collector endpoint, in the format protocol://host:port
      endpoint: https://1.2.3.4:8088

      # Set host scope to non-cluster only: only non-cluster logs will be archived to Splunk.
      hostScope: NonClusterOnly
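
The Splunk HTTP Event Collector endpoint also requires an authentication token. As a sketch, following the same pattern as the S3 credentials secret above (the secret name logcollector-splunk-credentials and the field name token are assumptions to confirm against the reference documentation for your version):

# Provide the Splunk HEC token to the operator as a secret
# (secret name and field name are assumptions, not confirmed by this page)
kubectl create secret generic logcollector-splunk-credentials \
  --from-literal=token=<Splunk-HEC-token> \
  -n tigera-operator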