
Getting Started

Get up and running with NavFlow in minutes. Sign up, create a project, send events, and connect your AI agent.

1. Sign up

Create your free account at app.navflow.ai. The free tier includes 10,000 events per month — no credit card required.

2. Create a project

After logging in, click Create Project and give it a name. Each project is an isolated pipeline with its own configuration, API keys, agent endpoint, and sinks.

3. Create an API key

Navigate to your project’s API Keys page and create a key. You’ll use this key to:

  • Send events to NavFlow from your applications
  • Connect your AI agent to send results back

4. Send events

Point your event sources at your project’s receiver endpoint. NavFlow accepts events via OTLP or HTTP/JSON.

HTTP/JSON events

Send any JSON object as an event:

```shell
curl -X POST https://receiver.navflow.ai/v1/events \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{"type": "order_failed", "user_id": "u123", "error": "payment_declined"}'
```
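
The same call can be made from application code. A minimal sketch using only Python's standard library (`build_event_request` and `send_event` are illustrative helpers, not part of any NavFlow SDK):

```python
import json
import urllib.request

NAVFLOW_EVENTS_URL = "https://receiver.navflow.ai/v1/events"

def build_event_request(api_key: str, event: dict) -> urllib.request.Request:
    # Same request as the curl example: JSON body plus the X-API-Key header.
    return urllib.request.Request(
        NAVFLOW_EVENTS_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

def send_event(api_key: str, event: dict) -> int:
    # Fire the request; returns the HTTP status code from the receiver.
    with urllib.request.urlopen(build_event_request(api_key, event)) as resp:
        return resp.status
```

Any JSON object is a valid event, so you can send whatever fields your pipeline's trigger expression and transforms expect.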

OTLP logs

Send OpenTelemetry-formatted logs:

```shell
curl -X POST https://receiver.navflow.ai/v1/logs \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "resourceLogs": [{
      "resource": {
        "attributes": [
          {"key": "service.name", "value": {"stringValue": "my-service"}}
        ]
      },
      "scopeLogs": [{
        "logRecords": [{
          "timeUnixNano": "1700000000000000000",
          "severityNumber": 17,
          "severityText": "ERROR",
          "body": {"stringValue": "Connection refused: database pool exhausted"}
        }]
      }]
    }]
  }'
```
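
If you are emitting logs from your own code rather than an OpenTelemetry SDK, the OTLP/JSON envelope above can be built with a small helper. A sketch (`otlp_log_payload` is an illustrative function, not a NavFlow API):

```python
import time

def otlp_log_payload(service: str, body: str,
                     severity_number: int = 17,
                     severity_text: str = "ERROR") -> dict:
    # Wrap a single log line in the OTLP/JSON resourceLogs envelope.
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service}}
            ]},
            "scopeLogs": [{"logRecords": [{
                "timeUnixNano": str(time.time_ns()),
                "severityNumber": severity_number,
                "severityText": severity_text,
                "body": {"stringValue": body},
            }]}],
        }]
    }
```

POST the result to the /v1/logs endpoint with the same headers as the curl example.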

OpenTelemetry Collector

Configure your OTLP exporter to point at NavFlow:

```yaml
exporters:
  otlphttp:
    endpoint: https://receiver.navflow.ai
    headers:
      X-API-Key: YOUR_API_KEY
```

5. Configure the pipeline

In the dashboard, go to your project’s Pipeline page:

  1. Set a trigger expression to control when your agent is invoked (e.g. severityNumber >= 17 for errors only)
  2. Optionally add transforms to extract or reshape fields
  3. Set the Agent Endpoint URL — the publicly reachable URL of your AI agent (e.g. https://my-agent.example.com/process)
  4. Optionally enable Context Windows to give the agent temporal awareness — configure a context filter, window duration, and group key so your agent receives the full trail of events, not just isolated triggers
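
The example trigger expression relies on OpenTelemetry's numeric severity scale, where ERROR spans 17–20 and FATAL 21–24. A quick sketch of that mapping:

```python
def severity_text(severity_number: int) -> str:
    # OpenTelemetry log severity ranges: TRACE 1-4, DEBUG 5-8,
    # INFO 9-12, WARN 13-16, ERROR 17-20, FATAL 21-24.
    for floor, name in ((21, "FATAL"), (17, "ERROR"), (13, "WARN"),
                        (9, "INFO"), (5, "DEBUG"), (1, "TRACE")):
        if severity_number >= floor:
            return name
    return "UNSPECIFIED"
```

So a trigger of severityNumber >= 17 matches ERROR and FATAL records and skips everything at WARN and below.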

6. Connect an agent

Your AI agent is a simple HTTP server that receives events from NavFlow, processes them, and sends results back. See Agents for how to build one.

A minimal example:

```python
from fastapi import FastAPI, Header, Request

from navflow import NavFlow

app = FastAPI()
nf = NavFlow(
    api_key="YOUR_API_KEY",
    endpoint="https://receiver.navflow.ai",
)

@app.post("/process")
async def process(request: Request, x_request_id: str = Header(default="")):
    records = await request.json()
    results = await analyze(records)  # your AI logic
    nf.send_output(payload=results, request_id=x_request_id)
    return {"status": "ok"}
```

Deploy this anywhere publicly reachable (a cloud VM, serverless function, container service, etc.) and set the URL in your project’s pipeline configuration.

7. Configure output routing

In the Sinks page, add destinations for your agent’s output:

  • Slack — paste an incoming webhook URL to post results to a channel
  • Webhook — send results to any HTTP endpoint
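
For the Webhook sink, any endpoint that accepts a JSON POST will do. A minimal standard-library sketch of a receiver (the payload shape is whatever your agent sends; `SinkHandler` here just logs it and acknowledges):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class SinkHandler(BaseHTTPRequestHandler):
    # Minimal webhook sink: print each result NavFlow delivers, reply 200.
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        result = json.loads(self.rfile.read(length))
        print("agent result:", result)
        self.send_response(200)
        self.end_headers()

def run(port: int = 9000):
    # Serve until interrupted; point the Webhook sink at this host/port.
    HTTPServer(("", port), SinkHandler).serve_forever()
```

In practice you would forward the result to a ticketing system, database, or alerting tool instead of printing it.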

Next steps

  • Architecture — understand how NavFlow processes your events
  • Configuration — deep dive into pipelines, context windows, filters, and transforms
  • Agents — build agents with batch mode or context window mode