Publication Boundary

Status: current public document.

Scope: operator-owned container build/run recipe, volumes, ports, token/TLS mounts, and MinIO/S3 wiring.

This guide documents current container boundaries without claiming official Lightmetrics images, Dockerfiles, or production Compose support. The repository does not publish container images; the examples are local recipes that operators must validate in their own environment.
Lightmetrics does not currently ship a repository-owned Dockerfile, Compose
stack, or published image. This guide documents the current container boundary
for operators who want to build and run their own local image from the source
tree. Treat the build and run commands as conceptual recipes until a tracked
Docker artifact exists in the repository.
For the shortest verified local evaluation path, use docs/quickstart.md and
scripts/lightmetrics-demo. For manual non-container binary layout, use
docs/install.md.
The container examples use one image that contains both lm-collector and
lm-agent. Runtime behavior still comes from TOML config files and secret file
paths, not environment-only configuration.
Conceptual local image build:
docker build -t lightmetrics:local -f - . <<'DOCKERFILE'
FROM rust:1.85-bookworm AS build
WORKDIR /src
RUN apt-get update \
&& apt-get install -y --no-install-recommends capnproto \
&& rm -rf /var/lib/apt/lists/*
COPY Cargo.toml Cargo.lock ./
COPY crates ./crates
COPY schemas ./schemas
RUN cargo build --locked --release -p lm-collector -p lm-agent
FROM debian:bookworm-slim
RUN apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates \
&& rm -rf /var/lib/apt/lists/*
RUN useradd --system --uid 10001 --create-home \
--home-dir /var/lib/lightmetrics \
--shell /usr/sbin/nologin lightmetrics
COPY --from=build /src/target/release/lm-collector /usr/local/bin/lm-collector
COPY --from=build /src/target/release/lm-agent /usr/local/bin/lm-agent
USER 10001:10001
WORKDIR /var/lib/lightmetrics
DOCKERFILE
This image intentionally does not bake in config, tokens, TLS keys, object-store credentials, or dashboard overrides. Mount those at runtime.
Use persistent volumes or bind mounts for state. Deleting these directories can remove accepted collector data or queued agent data.
| Container path | Component | Purpose |
|---|---|---|
| /usr/local/bin/lm-collector | Collector | Source-built collector binary. |
| /usr/local/bin/lm-agent | Agent | Source-built agent binary. |
| /etc/lightmetrics/server.toml | Collector | Mounted collector config. |
| /etc/lightmetrics/agent.toml | Agent | Mounted agent config. |
| /etc/lightmetrics/ingest-tokens.toml | Collector | Mounted ingest token table. |
| /etc/lightmetrics/query-token | Collector | Mounted private query/UI token. |
| /etc/lightmetrics/token | Agent | Mounted ingest bearer token for one agent. |
| /etc/lightmetrics/tls/ | Collector | Mounted ingest TLS certificate and key. |
| /etc/lightmetrics/ca.pem | Agent | Mounted CA bundle when ingest TLS uses a private CA. |
| /etc/lightmetrics/dashboards.d/ | Collector | Optional mounted custom dashboard TOML directory. |
| /var/lib/lightmetrics/spool | Collector | Persistent accepted local spool. |
| /var/lib/lightmetrics/cache | Collector | Persistent collector cache directory. |
| /var/lib/lightmetrics/objects | Collector | Optional filesystem object-store bucket path. |
| /var/lib/lightmetrics/queue | Agent | Persistent upload queue, sequence state, and log offsets. |
Prepare a local layout before writing config and secret files. The run examples
below use --user "$(id -u):$(id -g)" so bind-mounted files stay owned,
readable, and writable by the invoking host user.
mkdir -p container-config/tls container-config/dashboards.d
mkdir -p container-state/collector/spool container-state/collector/cache
mkdir -p container-state/collector/objects container-state/agent/queue
chmod 0700 container-config container-config/tls container-config/dashboards.d
chmod 0700 container-state container-state/collector container-state/agent
chmod 0700 container-state/collector/spool container-state/collector/cache
chmod 0700 container-state/collector/objects container-state/agent/queue
Token files are secrets. Keep them out of committed config and avoid pasting their values into logs or transcripts.
Collector ingest token file shape:
[[tokens]]
agent_id = "container-agent"
token = "<same value as /etc/lightmetrics/token in the agent container>"
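One way to produce matching token values is sketched below. This assumes the container-config layout from the preparation step and that `openssl` is available on the host; any generator of high-entropy random strings works equally well.

```shell
# Generate one random bearer token shared by the agent token file and the
# collector's ingest token table, plus a separate private query/UI token.
mkdir -p container-config
openssl rand -hex 32 > container-config/agent-token

cat > container-config/ingest-tokens.toml <<EOF
[[tokens]]
agent_id = "container-agent"
token = "$(cat container-config/agent-token)"
EOF

openssl rand -hex 32 > container-config/query-token
```

The agent token and query token are deliberately independent secrets; rotating one does not require rotating the other.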
After writing config, token, and TLS files, keep secrets readable only by the host user that will run the containers:
chmod 0600 container-config/agent-token
chmod 0600 container-config/query-token
chmod 0600 container-config/ingest-tokens.toml
chmod 0600 container-config/tls/*
If you run as the image’s default UID/GID 10001 instead of overriding --user,
make the mounted config and state paths readable or writable by UID/GID 10001,
for example through local file ownership, Docker secrets, named volumes, or an
orchestrator-specific strategy. Do not make token files or TLS private keys
world-readable to work around ownership problems.
Start from config/server.toml, then adapt container paths and listener
addresses. The query listener must bind inside the container on 0.0.0.0 if it
will be reached through a published Docker port; keep the host-side port binding
private.
version = 1
spool_dir = "/var/lib/lightmetrics/spool"
spool_max_bytes = 1073741824
cache_dir = "/var/lib/lightmetrics/cache"
cache_max_bytes = 5368709120
ram_index_max_bytes = 134217728
[ingest]
acceptance = "local_spool"
object_landing_interval_ms = 60000
listen_addr = "0.0.0.0:8443"
public_base_url = "https://collector:8443"
token_file = "/etc/lightmetrics/ingest-tokens.toml"
max_frame_bytes = 1048576
max_series_per_batch = 2048
max_samples_per_series = 4096
max_histogram_buckets_per_sample = 128
max_labels_per_series = 32
max_label_key_bytes = 128
max_label_value_bytes = 512
max_log_message_bytes = 16384
max_logs_per_batch = 4096
max_alerts_per_batch = 1024
[ingest.tls]
cert_file = "/etc/lightmetrics/tls/fullchain.pem"
key_file = "/etc/lightmetrics/tls/key.pem"
[query]
listen_addr = "0.0.0.0:8080"
token_file = "/etc/lightmetrics/query-token"
[dashboards]
definition_dirs = ["/etc/lightmetrics/dashboards.d"]
definition_files = []
[object_store]
kind = "fs"
bucket = "/var/lib/lightmetrics/objects"
prefix = "lightmetrics/"
[[rollups]]
window_seconds = 60
retention_seconds = 7776000
For local_spool without object landing, omit [object_store]. For
s3_manifest, [object_store] is required and accepted-manifest creation is
part of the ingest acknowledgement path.
The ingest TLS certificate must be valid for the name agents use in
collector_url. In the network examples below, that name is collector.
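For local evaluation only, a self-signed certificate whose subject alternative name matches the `collector` alias can be generated on the host. This is an illustrative sketch, not a production TLS recommendation; it assumes `openssl` 1.1.1 or newer and the container-config/tls layout used throughout this guide.

```shell
# Self-signed certificate valid for the in-network name "collector".
# fullchain.pem and key.pem match the [ingest.tls] paths above; for the
# self-signed case the same certificate doubles as the agent's ca.pem.
mkdir -p container-config/tls
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -keyout container-config/tls/key.pem \
  -out container-config/tls/fullchain.pem \
  -subj "/CN=collector" \
  -addext "subjectAltName=DNS:collector"
cp container-config/tls/fullchain.pem container-config/tls/ca.pem
chmod 0600 container-config/tls/key.pem
```

With a real CA-issued certificate, mount the issued chain and key instead and give agents the issuing CA bundle as /etc/lightmetrics/ca.pem.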
Conceptual collector container run:
docker network create lightmetrics-net
docker run --rm --name lm-collector \
--network lightmetrics-net \
--network-alias collector \
--user "$(id -u):$(id -g)" \
-p 8443:8443 \
-p 127.0.0.1:8080:8080 \
-v "$PWD/container-config/server.toml:/etc/lightmetrics/server.toml:ro" \
-v "$PWD/container-config/ingest-tokens.toml:/etc/lightmetrics/ingest-tokens.toml:ro" \
-v "$PWD/container-config/query-token:/etc/lightmetrics/query-token:ro" \
-v "$PWD/container-config/tls:/etc/lightmetrics/tls:ro" \
-v "$PWD/container-config/dashboards.d:/etc/lightmetrics/dashboards.d:ro" \
-v "$PWD/container-state/collector:/var/lib/lightmetrics:rw" \
lightmetrics:local \
lm-collector /etc/lightmetrics/server.toml
Publish the private query/console port only on trusted interfaces. The example
maps it to 127.0.0.1:8080 on the host.
Start from config/agent.toml, then set the collector URL to the collector’s
network alias and persist the queue under the mounted state directory.
version = 1
agent_id = "container-agent"
collector_url = "https://collector:8443/ingest/v1/batch"
auth_token_file = "/etc/lightmetrics/token"
tls_ca_cert_file = "/etc/lightmetrics/ca.pem"
queue_dir = "/var/lib/lightmetrics/queue"
queue_max_bytes = 268435456
scrape_interval_ms = 10000
flush_interval_ms = 5000
max_batch_bytes = 262144
max_log_message_bytes = 16384
max_logs_per_batch = 4096
max_alerts_per_batch = 1
# [[log_inputs]]
# name = "host-syslog"
# path = "/host/var/log/syslog"
Agent log limits must not exceed the collector’s corresponding ingest limits.
Host log files must be mounted into the agent container and referenced by their
container path, such as /host/var/log/syslog.
Conceptual agent container run:
docker run --rm --name lm-agent \
--network lightmetrics-net \
--user "$(id -u):$(id -g)" \
-v "$PWD/container-config/agent.toml:/etc/lightmetrics/agent.toml:ro" \
-v "$PWD/container-config/agent-token:/etc/lightmetrics/token:ro" \
-v "$PWD/container-config/tls/ca.pem:/etc/lightmetrics/ca.pem:ro" \
-v "$PWD/container-state/agent:/var/lib/lightmetrics:rw" \
-v /var/log:/host/var/log:ro \
lightmetrics:local \
lm-agent --daemon /etc/lightmetrics/agent.toml
lm-agent --daemon is still a foreground process. Use the container runtime or
an external orchestrator for restart policy and log capture.
The collector’s S3 backend reads credentials from AWS_ACCESS_KEY_ID,
AWS_SECRET_ACCESS_KEY, and optional AWS_SESSION_TOKEN. Do not put those
secret values in server.toml.
Conceptual MinIO setup on the same Docker network:
cat > container-config/minio.env <<'EOF'
MINIO_ROOT_USER=<replace-me>
MINIO_ROOT_PASSWORD=<replace-me>
EOF
chmod 0600 container-config/minio.env
cat > container-config/collector-s3.env <<'EOF'
AWS_ACCESS_KEY_ID=<same value as MINIO_ROOT_USER>
AWS_SECRET_ACCESS_KEY=<same value as MINIO_ROOT_PASSWORD>
EOF
chmod 0600 container-config/collector-s3.env
docker run -d --name minio \
--network lightmetrics-net \
--env-file container-config/minio.env \
-p 127.0.0.1:9000:9000 \
-p 127.0.0.1:9001:9001 \
-v "$PWD/container-state/minio:/data" \
minio/minio:<pinned-tag> \
server /data --address :9000 --console-address :9001
docker run --rm \
--network lightmetrics-net \
--env-file container-config/minio.env \
--entrypoint sh \
minio/mc:<pinned-tag> \
-c 'mc alias set local http://minio:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD" && mc mb -p local/lightmetrics'
Use this collector object-store section for MinIO:
[object_store]
kind = "s3"
bucket = "lightmetrics"
prefix = "lightmetrics/"
region = "us-east-1"
endpoint = "http://minio:9000"
Run the collector with --env-file container-config/collector-s3.env so the S3
credential variables are available to the collector process. For s3_manifest,
set ingest.acceptance = "s3_manifest" so the accepted manifest conditional
create is part of acknowledgement. For single-collector asynchronous landing,
keep ingest.acceptance = "local_spool" and configure
ingest.object_landing_interval_ms.
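Under those rules, the synchronous-manifest variant can be sketched as the following server.toml fragment; only the keys that change relative to the full collector example above are shown.

```toml
# Synchronous accepted-manifest creation against the MinIO object store.
# Only the keys that differ from the full server.toml example are listed.
[ingest]
acceptance = "s3_manifest"
# object_landing_interval_ms applies only to local_spool asynchronous landing.

[object_store]
kind = "s3"
bucket = "lightmetrics"
prefix = "lightmetrics/"
region = "us-east-1"
endpoint = "http://minio:9000"
```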
Validate mounted config before starting long-running containers:
docker run --rm \
--network lightmetrics-net \
--user "$(id -u):$(id -g)" \
-v "$PWD/container-config/server.toml:/etc/lightmetrics/server.toml:ro" \
-v "$PWD/container-config/ingest-tokens.toml:/etc/lightmetrics/ingest-tokens.toml:ro" \
-v "$PWD/container-config/query-token:/etc/lightmetrics/query-token:ro" \
-v "$PWD/container-config/tls:/etc/lightmetrics/tls:ro" \
-v "$PWD/container-config/dashboards.d:/etc/lightmetrics/dashboards.d:ro" \
-v "$PWD/container-state/collector:/var/lib/lightmetrics:rw" \
lightmetrics:local \
lm-collector --check-config /etc/lightmetrics/server.toml
docker run --rm \
--network lightmetrics-net \
--user "$(id -u):$(id -g)" \
-v "$PWD/container-config/agent.toml:/etc/lightmetrics/agent.toml:ro" \
-v "$PWD/container-config/agent-token:/etc/lightmetrics/token:ro" \
-v "$PWD/container-config/tls/ca.pem:/etc/lightmetrics/ca.pem:ro" \
-v "$PWD/container-state/agent:/var/lib/lightmetrics:rw" \
lightmetrics:local \
lm-agent /etc/lightmetrics/agent.toml
After the collector and agent are running, query the private API through the host-local published query port:
QUERY_TOKEN="$(cat container-config/query-token)"
curl -fsS \
-H "Authorization: Bearer $QUERY_TOKEN" \
"http://127.0.0.1:8080/api/v1/query?query=lmagent_up"
Expected shape is the same Prometheus-style envelope shown in
docs/quickstart.md. An empty vector immediately after start usually means no
agent scrape has reached the collector yet.
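Assuming the envelope follows the standard Prometheus HTTP API format that docs/quickstart.md references, an empty-but-healthy response would look like this sketch:

```json
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": []
  }
}
```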
Stop named containers before removing the network:
docker rm -f lm-agent lm-collector minio
docker network rm lightmetrics-net
Remove bind-mounted state only after confirming that queued agent data, collector spool data, local filesystem object-store data, and MinIO data are no longer needed.
The log_inputs.gcs input kind is still parsed by the config loader but remains unsupported at runtime.