open source · mit license · v0.3.0

deploy a team.
not a prompt.

describe any task. agentis spawns a coordinated team of specialized agents across 12 llm providers. watch them think, collaborate, and synthesize — live.

$ git clone https://github.com/Dhwanil25/Agentis
$ npm install && npm run dev
 
ready in 1.2s · no backend · no python · no docker
open agentis free → star on github ↗
12 llm providers · 8 agent roles · 0 backend needed · ~2min to first run
live demo

watch every agent think out loud.

each agent streams its output token by token. dependencies resolve automatically. upstream context flows downstream in real time.
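the dependency flow above can be sketched in a few lines. this is a minimal illustration, not agentis's actual internals — the `Agent` shape and `runTeam` name are assumptions, and the real system streams each output token by token rather than returning it whole:

```typescript
// illustrative sketch: agents whose dependencies are satisfied run next,
// and each agent receives its upstream agents' output as context.
type Agent = { id: string; deps: string[]; run: (context: string) => string };

function runTeam(agents: Agent[]): Map<string, string> {
  const done = new Map<string, string>();
  while (done.size < agents.length) {
    // an agent is ready once every upstream agent has finished
    const ready = agents.filter(
      (a) => !done.has(a.id) && a.deps.every((d) => done.has(d))
    );
    if (ready.length === 0) throw new Error("dependency cycle");
    for (const a of ready) {
      // upstream context flows downstream
      const context = a.deps.map((d) => done.get(d)).join("\n");
      done.set(a.id, a.run(context));
    }
  }
  return done;
}
```

a writer agent declaring `deps: ["research"]` would automatically receive the researcher's output as its context.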

features

everything your agent team needs.

built for tasks too complex for a single prompt.

live agent canvas
hexagonal nodes, bezier edges, animated particles, real-time thought bubbles. watch the entire execution unfold visually as it happens.
12 providers in parallel
mix anthropic, openai, google, groq, mistral, deepseek, cohere, xai, together, ollama and more in a single run. each agent gets the best model for its role.
auto failover
when a provider goes down mid-task, agents switch to the next available one automatically. no data loss, no retries needed, no configuration required.
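the failover idea reduces to trying providers in order until one answers. a hedged sketch — the function name and call shape are made up for illustration, not agentis's api:

```typescript
// try each provider in turn; the first healthy one wins.
async function callWithFailover(
  providers: Array<(prompt: string) => Promise<string>>,
  prompt: string
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider(prompt);
    } catch (err) {
      lastError = err; // provider down mid-task: fall through to the next one
    }
  }
  throw lastError; // every provider failed
}
```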
persistent universe
follow-up questions recall relevant old agents and add new ones. knowledge compounds across turns. your agent team builds context session over session.
skills marketplace
install specialized skills from skills.sh and assign them to specific agent roles. one command to extend what your agents can do.
per-agent cost tracking
every agent shows exact token counts and usd cost in real time. full analytics dashboard tracks spend across all runs and providers.
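per-agent cost is just token counts times per-token rates. a minimal sketch of the arithmetic — the field names are assumptions and the prices in the example are placeholders, not real provider rates:

```typescript
type Usage = { inputTokens: number; outputTokens: number };
// usd per million tokens, the unit most providers price in
type Pricing = { inputPerMTok: number; outputPerMTok: number };

function costUsd(usage: Usage, pricing: Pricing): number {
  return (
    (usage.inputTokens / 1_000_000) * pricing.inputPerMTok +
    (usage.outputTokens / 1_000_000) * pricing.outputPerMTok
  );
}
```

summing `costUsd` across agents gives the run total the analytics dashboard tracks.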
live activity

every agent runs in the open.

no black boxes. see exactly which model is running, what it's thinking, how many tokens it's used, and what it found.

how it works

one sentence. full agent team.

three phases. fully automatic. completely visible.

01
plan
the orchestrator analyzes your task, breaks it into 2–12 sub-tasks, assigns roles and providers, and maps which agents run in parallel and which run sequentially.
02
execute
parallel agents fire simultaneously. dependent agents receive upstream context automatically. every output streams live. failover triggers if needed.
03
synthesize
the orchestrator merges all outputs into one cohesive answer. ask a follow-up and the universe extends — old agents recalled, new ones spawned.
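the plan phase above can be pictured as a small dependency graph. this is an illustrative shape only — every field name here is an assumption, not agentis's actual schema:

```typescript
type PlannedAgent = {
  role: string;        // one of the agent roles, e.g. "researcher"
  provider: string;    // which llm provider runs this agent
  dependsOn: string[]; // empty → first parallel wave; non-empty → waits for upstream output
};

// a task split into three sub-tasks: two researchers fire in parallel,
// then a writer receives both outputs and drafts the answer.
const plan: Record<string, PlannedAgent> = {
  research_a: { role: "researcher", provider: "groq", dependsOn: [] },
  research_b: { role: "researcher", provider: "openai", dependsOn: [] },
  write: { role: "writer", provider: "anthropic", dependsOn: ["research_a", "research_b"] },
};
```

agents with no dependencies execute simultaneously; the synthesis step consumes the final wave's output.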
supported providers

12 providers. one interface.

paste your api key and it works. no code changes, no config files.

your next build starts
with one sentence.

free. open source. no account needed.
bring your api key and run a full agent team in under 60 seconds.

mit licensed · browser-native · zero backend · 12 llm providers