Define agents as code. Deploy with a single command. Astro handles models, knowledge bases, tool integrations, and observability so you build what matters.
Purpose-built infrastructure primitives, not repurposed web hosting.
Attach any LLM (OpenAI, Anthropic, open-source) declaratively. Swap providers without touching code.
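As a sketch of what a declarative provider swap could look like, using the field names from the astropods.yml example on this page (the Anthropic model name is illustrative, not a documented value):

```yaml
# Before: reasoning handled by OpenAI
models:
  - provider: openai
    model: gpt-4o
    role: reasoning

# After: swap providers by editing the spec only -- no code changes
models:
  - provider: anthropic
    model: claude-sonnet    # illustrative model name
    role: reasoning
```

Because the model binding lives in the spec rather than in application code, a provider change is a one-line diff and a redeploy of configuration, not a code release.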
Vector-indexed knowledge stores with automatic ingestion and chunking. Your agents stay grounded.
Connect APIs, databases, and SaaS tools. Agents call them natively with built-in auth and retries.
Traces, metrics, and logs out of the box. See every decision your agent makes in production.
Each agent runs in its own container with scoped credentials. Zero shared state between deployments.
From zero to thousands of concurrent sessions. Infrastructure that responds to demand, not configuration.
The Astro spec (astropods.yml) is your agent's single source of truth — models, knowledge stores, tools, and observability — all defined in one file.
name: support-agent
version: 1.0.0
container:
  image: astro/support-agent:latest
  runtime: python3.11
models:
  - provider: openai
    model: gpt-4o
    role: reasoning
knowledge:
  - source: s3://docs-bucket
    index: vector
    embeddings: text-embedding-3-small
tools:
  - name: ticket-lookup
    type: api
    endpoint: https://api.internal/tickets
  - name: slack-notify
    type: integration
    channel: "#escalations"
observability:
  tracing: true
  metrics: true

Shared knowledge sources and consistent deployments mean you experiment and ship to production faster.
One command generates the project structure and a starter Astro spec so your agent is fully specified as repeatable infrastructure.
Run your agent locally with hot-reload. Test against live models and tools with built-in adapters for voice, text, Slack, and streaming interfaces.
Push to the Astro registry. Your agent goes live with tracing, scaling, and versioning, ready for anyone in your organization to use.
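Taken together, the three steps above might look like this at the command line (the `astro` CLI name and its subcommands are illustrative assumptions, not documented commands):

```sh
# 1. Scaffold a new agent project with a starter astropods.yml spec
astro init support-agent

# 2. Iterate locally with hot-reload against live models and tools
astro dev

# 3. Push to the Astro registry: tracing, scaling, and versioning included
astro deploy
```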

Not just another hosted runtime. Astro gives you control, flexibility, and independence at the infrastructure level.
Change agent configurations, swap models, update guardrails, and adjust policies in production without redeploying. Your agents keep running while you iterate.
Bring agents built with CrewAI, LangChain, Mastra, or your own custom framework. Astro doesn't lock you into a single SDK. It runs whatever you build.
Run Astro in your own cloud, on-prem, or use our managed platform. Your agents, your data, your infrastructure — with no vendor lock-in.
Build your agents with the popular framework of your choice. A unified SDK adapts each framework, so bringing any of them onto Astro is as easy as a single import.
Real agents built and deployed by teams on Astro, complete with ingestion and inference engines, because you can't deploy real agents without first building the right knowledge sources.
Searches and summarizes documentation across multiple sources with context-aware retrieval.
Reviews pull requests for bugs, security issues, and style violations with full repo context.
Generates and A/B tests email campaigns, landing page copy, and social content.
Triages and resolves support tickets end-to-end with knowledge base lookup and escalation.
Scores and qualifies inbound leads using CRM data, enrichment APIs, and custom criteria.
Monitors application logs in real-time, detects anomalies, and surfaces actionable insights.
Every agent gets a verifiable identity. Track provenance, capabilities, and performance at a glance.
Astro is open source and Apache 2.0 licensed. We believe the best agent infrastructure should be transparent, extensible, and community-driven.
Build and share agents on the platform. Contributors earn free compute credits to run their agents in production.
Deploy agents privately within your organization or publish them to the community for anyone to use and customize.
Be part of a growing community of agent developers building, iterating, and shipping real AI agents together.
Free to start. No credit card required. Deploy your first agent in minutes.