# Configuration
Each project in NavFlow has its own pipeline configuration. You can manage it through the NavFlow dashboard or the API.
## Pipeline
The pipeline processes incoming events before forwarding them to an AI agent. It has three parts:
### Trigger
An optional expression-based trigger that controls when the agent is invoked. Only events matching the trigger expression are forwarded to the agent. Uses the expr language.
Example expressions:
- `severityNumber >= 17` — Only invoke the agent for ERROR and FATAL severity events.
- `resourceAttributes["service.name"] == "my-service"` — Only invoke the agent for events from a specific service.
- `status == "failed" && amount > 100` — Only invoke the agent for high-value failed events (works with any JSON structure).
### Transform
Optional field transformations applied to each record after filtering. Each transform extracts or computes a value and adds it to the record.
| Field | Description |
|---|---|
| `expression` | An expr expression evaluated against the record |
| `output_name` | The field name to write the result to |
| `output_type` | The result type: `string`, `int`, `int64`, `float64`, `bool` |
Example:
| Expression | Output name | Type |
|---|---|---|
| `resourceAttributes["service.name"]` | `service` | `string` |
| `severityNumber >= 17` | `is_error` | `bool` |
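The transform step can be pictured as a small mapping pass over each record. Below is a sketch of that behavior in Python, with lambdas standing in for expr expressions and Python types for the output types; `apply_transforms` and the field layout are illustrative, not NavFlow internals:

```python
def apply_transforms(record, transforms):
    """Evaluate each transform against the record and write the result
    under its output_name. Python stands in for expr here."""
    out = dict(record)
    for t in transforms:
        out[t["output_name"]] = t["output_type"](t["expression"](record))
    return out

record = {
    "severityNumber": 17,
    "resourceAttributes": {"service.name": "payment"},
}
transforms = [
    {"expression": lambda r: r["resourceAttributes"]["service.name"],
     "output_name": "service", "output_type": str},
    {"expression": lambda r: r["severityNumber"] >= 17,
     "output_name": "is_error", "output_type": bool},
]

enriched = apply_transforms(record, transforms)
print(enriched["service"], enriched["is_error"])  # → payment True
```

The original fields are preserved; transforms only add derived fields alongside them.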
### Agent endpoint

The URL where the pipeline sends batched, filtered records. Set this to your AI agent’s processing endpoint (e.g. `http://my-agent:8000/process`).
The pipeline only activates for projects that have an agent endpoint configured. Leave it empty to disable pipeline processing.
The pipeline batches up to 10 records per request. Each batch is sent as a JSON array in the POST body, with X-Project-ID and X-Request-ID headers.
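The batching contract can be sketched as follows. `make_batches` and `build_request` are illustrative helper names, not part of NavFlow; the header names come from the description above, while the exact ID formats are assumptions:

```python
import json
from itertools import islice

def make_batches(records, size=10):
    """Split records into batches of at most `size` (the pipeline caps at 10)."""
    it = iter(records)
    while chunk := list(islice(it, size)):
        yield chunk

def build_request(batch, project_id, request_id):
    """Build the POST body and headers the way the pipeline sends them:
    a JSON array body plus X-Project-ID and X-Request-ID headers."""
    headers = {
        "Content-Type": "application/json",
        "X-Project-ID": project_id,
        "X-Request-ID": request_id,
    }
    return headers, json.dumps(batch)

records = [{"body": f"event {i}"} for i in range(23)]
batches = list(make_batches(records))
print([len(b) for b in batches])  # → [10, 10, 3]
```

Your agent endpoint should therefore expect a JSON array (not a single object) in each request body.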
## Context Window
Context windows give your agent temporal awareness. Instead of receiving events in isolation, the agent receives a trigger event plus a sliding window of recent context events.
### How it works
- Every event is evaluated against the trigger expression — if it matches, the agent is invoked with the current window as context.
- Every event is independently evaluated against the context filter — if it matches, the event is stored in the Redis window for future triggers.
- The trigger event itself is stored in the window after the agent call, so it appears as context for future triggers but not in its own invocation.
- Events expire from the window naturally via Redis TTL — no manual flush.
Trigger and context filter are independent. An event can trigger without being stored, be stored without triggering, do both, or do neither.
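The steps above can be sketched in a few lines. A Python list stands in for the Redis window (TTL expiry is not modeled), and plain predicates stand in for the expr expressions; `process` is an illustrative name:

```python
def process(event, window, trigger, context_filter):
    """Sketch of the window semantics: trigger and context filter are
    evaluated independently, and the trigger event is stored only
    after the agent call."""
    invocation = None
    if trigger(event):
        # The agent sees the window as it was *before* this event.
        invocation = {"trigger": event, "window": list(window)}
    if context_filter(event):
        # Stored after the agent call, so this event appears as
        # context only for future triggers.
        window.append(event)
    return invocation

window = []
trigger = lambda e: e["severityNumber"] >= 17   # ERROR and above
context = lambda e: e["severityNumber"] >= 9    # WARN and above

process({"severityNumber": 9, "body": "high latency"}, window, trigger, context)
out = process({"severityNumber": 17, "body": "payment failed"}, window, trigger, context)
print([e["body"] for e in out["window"]])  # → ['high latency']
```

Note the ERROR event triggers the agent with only the earlier WARN as context, then joins the window itself for any later triggers.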
### Context Filter
An optional expression that decides which events are stored in the window. Uses the same expr-lang syntax as the trigger.
Examples:
- `severityNumber >= 9` — store all warnings and above as context (OTLP events)
- `status == "failed" || status == "error"` — store failure events as context (custom JSON events)
### Window Duration
How long events stay in the window (in seconds). Default: 300 (5 minutes). Events older than this are automatically expired by Redis.
### Group Key
An optional expr-lang expression that groups events into separate windows. Events are grouped by the evaluated result.
Examples:
- `resourceAttributes["service.name"]` — separate window per service (OTLP events)
- `user_id` — separate window per user (custom JSON events)
- Leave empty — all events share a single window
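Grouping amounts to routing each event into the window named by the evaluated key. A sketch with a Python callable standing in for the group-key expr expression (`window_for` is an illustrative name; `"default"` models the shared window used when the key is empty):

```python
from collections import defaultdict

def window_for(event, group_key=None):
    """Return the window an event belongs to; no group key means
    one shared window."""
    return group_key(event) if group_key else "default"

by_service = lambda e: e["resourceAttributes"]["service.name"]
windows = defaultdict(list)

events = [
    {"resourceAttributes": {"service.name": "payment"}, "body": "timeout"},
    {"resourceAttributes": {"service.name": "checkout"}, "body": "retry"},
    {"resourceAttributes": {"service.name": "payment"}, "body": "5xx spike"},
]
for e in events:
    windows[window_for(e, by_service)].append(e)

print(sorted(windows))  # → ['checkout', 'payment']
```

With this grouping, a trigger from `payment` only sees `payment` events as context, never `checkout` ones.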
### Agent Payload (Window Mode)
When context windows are enabled, the agent receives a different payload format:
```json
{
  "trigger": {
    "severityNumber": 17,
    "severityText": "ERROR",
    "body": "payment failed",
    "resourceAttributes": {"service.name": "payment"}
  },
  "window": {
    "key": "default",
    "events": [
      {"data": {"severityText": "WARN", "body": "high latency detected"}, "timestamp": "2024-01-15T10:29:55Z"},
      {"data": {"severityText": "WARN", "body": "retry attempt 3 of 5"}, "timestamp": "2024-01-15T10:29:58Z"}
    ],
    "stats": {
      "count": 2,
      "duration_ms": 3000,
      "first_at": "2024-01-15T10:29:55Z",
      "last_at": "2024-01-15T10:29:58Z"
    }
  }
}
```

Context windows are fully managed by NavFlow — no additional infrastructure setup is required on your end.
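In window mode, a handler's first job is to separate the trigger from its context. Here is a sketch of unpacking the example payload; the `summarize` helper and its output field names are illustrative, not a NavFlow API:

```python
import json

# The example payload from above, abbreviated to the fields used here.
payload = json.loads("""
{
  "trigger": {"severityNumber": 17, "severityText": "ERROR",
              "body": "payment failed",
              "resourceAttributes": {"service.name": "payment"}},
  "window": {
    "key": "default",
    "events": [
      {"data": {"severityText": "WARN", "body": "high latency detected"},
       "timestamp": "2024-01-15T10:29:55Z"},
      {"data": {"severityText": "WARN", "body": "retry attempt 3 of 5"},
       "timestamp": "2024-01-15T10:29:58Z"}
    ],
    "stats": {"count": 2, "duration_ms": 3000,
              "first_at": "2024-01-15T10:29:55Z",
              "last_at": "2024-01-15T10:29:58Z"}
  }
}
""")

def summarize(p):
    """Pull out the fields an agent prompt would typically use."""
    return {
        "what": p["trigger"]["body"],
        "context": [e["data"]["body"] for e in p["window"]["events"]],
        "window_span_ms": p["window"]["stats"]["duration_ms"],
    }

print(summarize(payload))
```

Note that each window entry nests the original event under `data`, with its storage `timestamp` alongside.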
## Sinks
Sinks receive enriched output from AI agents (via the NATS output stream) and dispatch it to external destinations.
### Webhook
Sends the agent output as an HTTP request:
| Field | Description |
|---|---|
| URL | The webhook endpoint |
| Method | HTTP method (POST, PUT, etc.) |
| Headers | Custom headers (JSON object) |
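The sink's fields map directly onto an outbound HTTP request. A stdlib sketch of that mapping (`build_sink_request` and the example URL are illustrative; the field names come from the table above):

```python
import json
import urllib.request

def build_sink_request(sink, payload):
    """Assemble the webhook request from a sink config: URL, method,
    and custom headers, with the agent output as a JSON body."""
    return urllib.request.Request(
        sink["url"],
        data=json.dumps(payload).encode(),
        method=sink.get("method", "POST"),
        headers={"Content-Type": "application/json", **sink.get("headers", {})},
    )

req = build_sink_request(
    {"url": "http://example.com/hook", "method": "PUT",
     "headers": {"Authorization": "Bearer t"}},
    {"summary": "agent output"},
)
print(req.get_method())  # → PUT
# urllib.request.urlopen(req) would dispatch it
```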
### Slack
Posts the agent output to a Slack channel via an incoming webhook:
| Field | Description |
|---|---|
| Webhook URL | Slack incoming webhook URL |
## API keys
Each project has API keys used to authenticate:
- Event ingestion — your applications include the key in the `X-API-Key` header when sending OTLP data or JSON events
- Agent output — the NavFlow SDK uses the key to send enriched results back to the receiver
Create and manage keys in the project’s API Keys page.
## API reference
All configuration is available via the control plane REST API:
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/projects/{id}/pipeline` | GET / PUT | Pipeline config (filter, transforms) |
| `/api/v1/projects/{id}/bridge` | GET / PUT | Agent endpoint URL |
| `/api/v1/projects/{id}/sinks` | GET / PUT | Sink configuration |
| `/api/v1/projects/{id}/keys` | GET / POST | API key management |
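For example, updating the pipeline config is a PUT to the endpoint above. A stdlib sketch that only builds the request (the base URL, auth header, and payload field names are assumptions for illustration; check your deployment's control-plane docs for the real ones):

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # assumed control-plane address

def put_pipeline(project_id, config, api_key):
    """Build a PUT request for /api/v1/projects/{id}/pipeline."""
    return urllib.request.Request(
        f"{BASE}/api/v1/projects/{project_id}/pipeline",
        data=json.dumps(config).encode(),
        method="PUT",
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
    )

req = put_pipeline("p1", {"trigger": "severityNumber >= 17"}, "my-key")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would apply the config
```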