How It Works
Get Slogr running in under 5 minutes. You need Node.js 18+ and an Anthropic API key.
git clone https://github.com/slogrbot-stack/slogr
cd slogr
npm install
cp .env.example .env
# .env
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxx
PORT=3000
npm start
# Output:
# Slogr Mission Control v0.1.0
# http://localhost:3000
# Demo key: sk-slogr-demo
pip install -e ./slogr-sdk
from slogr import Slogr
slogr = Slogr(
api_key="sk-slogr-demo",
agent_id="my-agent",
framework="custom",
server_url="http://localhost:3000"
)
slogr.connect()
print("Agent connected to Mission Control!")
Slogr monitors every event your agent emits (task starts, tool calls, reasoning steps, errors) and displays them on the dashboard within milliseconds via Socket.io.
Click any agent in the sidebar to open a direct communication channel. Send natural language commands and receive responses in real-time.
Chat is powered by the Anthropic API via a backend proxy. Each agent has its own persona and context; conversations are maintained per-agent, per-session.
Dashboard sends a message → backend receives it via Socket.io → backend calls the Anthropic API with the agent's persona → response is sent back to the dashboard. Your API key never touches the browser.
// Agent chat flow
Dashboard ──socket emit 'dashboard:chat'──▶ Server
Server ──POST /v1/messages──▶ Anthropic API
Anthropic ──response──▶ Server
Server ──socket emit 'command:result'──▶ Dashboard
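Conceptually, the proxy step is just assembling a Messages API payload with the agent's persona as the system prompt. The sketch below shows that assembly as a pure Python function; the real Slogr server is Node.js, and the function name, payload fields, and model name here are illustrative assumptions, not Slogr's actual code.

```python
# Hypothetical sketch of the proxy's request assembly. build_chat_request and
# the model name are assumptions for illustration; only the /v1/messages
# payload shape (model, max_tokens, system, messages) follows the Anthropic API.
def build_chat_request(agent_persona: str, history: list, user_message: str) -> dict:
    """Assemble the JSON body the backend would POST to /v1/messages."""
    return {
        "model": "claude-sonnet-4-5",  # assumed model choice
        "max_tokens": 1024,
        "system": agent_persona,       # per-agent persona travels here
        "messages": history + [{"role": "user", "content": user_message}],
    }

req = build_chat_request("You are agent-01.", [], "Status report, please.")
print(req["system"])        # the persona, kept server-side
print(len(req["messages"]))
```

Because the persona sits in the `system` field and the key stays on the server, the browser only ever sees the Socket.io round trip.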
The central canvas renders a real-time starfighter game. Each connected agent becomes a ship. Incoming tasks become enemies flying from right to left. Agents auto-fire lasers at nearby tasks.
| Visual Element | Represents |
|---|---|
| Green starfighter | Connected agent |
| Enemy box (BUG, TASK, ERROR...) | Active task |
| Laser beam | Agent working on task |
| Explosion | Task completed |
| Shield bar under ship | Agent health / error rate |
| TRANSMITTING... | Agent is thinking / responding |
Each agent starts with 100 shield points. Shield decreases by 10 when a task fails or an error occurs. Shield is visible as a color-coded bar under each ship (green above 50%, red below).
Shield health is a quick indicator of agent reliability β a low-shield agent needs attention.
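The shield rules above (start at 100, lose 10 per failure, green above 50%, red below) can be sketched as a few lines of Python. These helpers are illustrative only, not part of the Slogr SDK; the behavior at exactly 50% is an assumption.

```python
# Illustrative sketch of the shield mechanic; not part of the Slogr SDK.
MAX_SHIELD = 100
FAIL_PENALTY = 10

def apply_failure(shield: int) -> int:
    """Subtract 10 shield points per failed task or error, floored at 0."""
    return max(0, shield - FAIL_PENALTY)

def shield_color(shield: int) -> str:
    """Green above 50% of max shield, red at or below (threshold assumed)."""
    return "green" if shield > MAX_SHIELD // 2 else "red"

shield = MAX_SHIELD
for _ in range(6):               # six consecutive failures
    shield = apply_failure(shield)
print(shield, shield_color(shield))  # 40 red
```

Six failures drop a fresh agent to 40 shield, flipping the bar to red and flagging it for attention.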
from slogr import Slogr
# Initialize
slogr = Slogr(
api_key="sk-slogr-demo", # Slogr server key (not Anthropic key)
agent_id="agent-01", # Unique ID shown in dashboard
agent_name="My Agent", # Display name
framework="langchain", # langchain | crewai | autogen | custom
server_url="http://localhost:3000"
)
# Connect to dashboard
slogr.connect()
# Log messages
slogr.log("Agent initialized successfully")
# Update status
slogr.update_status("running") # idle | running | thinking | error
# Disconnect when done
slogr.disconnect()
Use the task() context manager to track individual tasks. The task appears as an enemy in the game canvas and is tracked in the dashboard.
from slogr import Slogr, task
slogr = Slogr(...)
slogr.connect()
# Context manager (auto start/finish/fail)
with slogr.task("Research AI frameworks"):
result = my_agent.run("List top AI agent frameworks in 2026")
# Task auto-finishes when block exits
# Task auto-fails if exception is raised
# Manual control
task_id = slogr.start_task("Write blog post", metadata={"topic": "AI"})
try:
result = write_post()
slogr.finish_task(task_id, result=result)
except Exception as e:
slogr.fail_task(task_id, error=str(e))
# Decorator
@task(slogr)
def research(query):
return agent.run(query)
The SDK emits the following Socket.io events to the Slogr server:
| Method | Event | Description |
|---|---|---|
| slogr.connect() | agent:register | Agent comes online in dashboard |
| slogr.start_task() | task:start | Task begins, enemy spawns in game |
| slogr.finish_task() | task:finish | Task done, enemy explodes |
| slogr.fail_task() | task:fail | Task failed, shield decreases |
| slogr.log() | agent:log | Log message in dashboard |
| slogr.thinking() | agent:thinking | TRANSMITTING shown on ship |
| slogr.tool_call() | agent:tool_call | External tool invocation logged |
| slogr.update_status() | agent:status | Status badge updates in sidebar |
| slogr.disconnect() | agent:offline | Agent goes offline in dashboard |
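Each SDK call in the table reduces to a Socket.io emit of an event name plus a JSON payload. A rough sketch of that pairing, where the event names come from the table but the payload fields (agent_id, timestamp) are assumptions rather than Slogr's documented wire format:

```python
# Sketch of the event-name/payload pairing behind the SDK methods.
# Event names match the table above; payload fields are assumed.
import time

def make_event(event: str, agent_id: str, **fields):
    """Pair a Socket.io event name with its JSON payload."""
    return event, {"agent_id": agent_id, "timestamp": time.time(), **fields}

name, payload = make_event("task:start", "agent-01", description="Write blog post")
print(name)                    # task:start
print(payload["description"])  # Write blog post
```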
Register a handler to receive commands sent from the dashboard chat panel.
@slogr.on_command()
def handle_command(command, data):
"""
    command: str - the message sent from dashboard
    data: dict - additional metadata
    returns: str - reply sent back to dashboard
"""
result = my_agent.run(command)
return result
Use SlogrLangChain as a LangChain callback handler. It automatically tracks chain runs, LLM calls, tool calls, and errors.
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from slogr import Slogr
from slogr.integrations import SlogrLangChain
slogr = Slogr(agent_id="lc-agent", framework="langchain", ...)
slogr.connect()
handler = SlogrLangChain(slogr)
llm = ChatOpenAI(model="gpt-4")
prompt = PromptTemplate.from_template("{query}")
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
# All chain activity automatically tracked in dashboard
result = chain.run("Analyze this dataset")
from crewai import Crew, Agent, Task
from slogr import Slogr
from slogr.integrations import SlogrCrewAI
slogr = Slogr(agent_id="crew-01", framework="crewai", ...)
slogr.connect()
researcher = Agent(role="Researcher", goal="...", backstory="...")
writer = Agent(role="Writer", goal="...", backstory="...")
crew = Crew(agents=[researcher, writer], tasks=[...])
# Wrap crew with Slogr
result = SlogrCrewAI(slogr).run(crew)
import autogen
from slogr import Slogr
from slogr.integrations import SlogrAutoGen
slogr = Slogr(agent_id="autogen-01", framework="autogen", ...)
slogr.connect()
assistant = autogen.AssistantAgent("assistant", llm_config={...})
user = autogen.UserProxyAgent("user_proxy", ...)
# Patch agents: all messages tracked automatically
SlogrAutoGen(slogr).patch([assistant, user])
user.initiate_chat(assistant, message="Write a Python script...")
Use the SDK directly for any custom agent or framework.
from slogr import Slogr
slogr = Slogr(agent_id="custom-01", framework="custom", ...)
slogr.connect()
def run_my_agent(query):
slogr.update_status("running")
slogr.thinking(f"Processing: {query}")
task_id = slogr.start_task(query)
try:
# Your agent logic here
slogr.tool_call("web_search", inputs={"query": query})
result = my_custom_logic(query)
slogr.finish_task(task_id, result=str(result))
return result
except Exception as e:
slogr.fail_task(task_id, error=str(e))
raise
| Method | Endpoint | Description |
|---|---|---|
| GET | /health | Server health check |
| GET | /api/agents | List all agents |
| GET | /api/agents/:id | Get agent details |
| GET | /api/agents/:id/logs | Agent log history |
| GET | /api/agents/:id/tasks | Agent task history |
| GET | /api/tasks | All tasks |
| GET | /api/stats | Fleet statistics |
| POST | /api/agents/:id/command | Send command to agent |
| POST | /api/chat | Anthropic API proxy |
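The agent endpoints above all share the `/api/agents/:id` prefix, so a small URL helper covers them; the sketch below uses the standard `requests` library against a local server. The helper function itself is illustrative, only the endpoint paths come from the table.

```python
# Sketch of building URLs for the REST endpoints listed above.
# agent_endpoint is an illustrative helper, not part of the Slogr SDK.
BASE_URL = "http://localhost:3000"

def agent_endpoint(agent_id: str, resource: str = "") -> str:
    """Build a URL like /api/agents/agent-01/logs for the given agent."""
    path = f"{BASE_URL}/api/agents/{agent_id}"
    return f"{path}/{resource}" if resource else path

print(agent_endpoint("agent-01", "logs"))
# http://localhost:3000/api/agents/agent-01/logs

# With the server running, fetch an agent's log history:
# import requests
# logs = requests.get(agent_endpoint("agent-01", "logs")).json()
```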
# .env
ANTHROPIC_API_KEY=sk-ant-xxxx   # Required for agent chat
PORT=3000                       # Server port (default: 3000)
# DB_PATH=./data/slogr.db       # SQLite path (optional)