// DOCUMENTATION
Features & How It Works
Everything you need to connect your agents to Mission Control.
Quick Start

Get Slogr running in under 5 minutes. You need Node.js 18+ and an Anthropic API key.

01
Clone & Install
Clone the repository and install Node.js dependencies.
git clone https://github.com/slogrbot-stack/slogr
cd slogr
npm install
02
Configure API Key
Copy the example env file and add your Anthropic API key. Get one at console.anthropic.com.
cp .env.example .env
# .env
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxx
PORT=3000
03
Start the Server
Launch Mission Control. The dashboard will be available at localhost:3000.
npm start

# Output:
# 🚀 Slogr Mission Control v0.1.0
# 🟢 http://localhost:3000
# 🔑 Demo key: sk-slogr-demo
04
Connect Your Agent
Install the Python SDK and connect your first agent.
pip install -e ./slogr-sdk
from slogr import Slogr

slogr = Slogr(
    api_key="sk-slogr-demo",
    agent_id="my-agent",
    framework="custom",
    server_url="http://localhost:3000"
)
slogr.connect()
print("Agent connected to Mission Control!")
Real-Time Monitoring

Slogr monitors every event your agent emits (task starts, tool calls, reasoning steps, errors) and displays it on the dashboard within milliseconds via Socket.io.

🚀
Live Task Feed
Every task your agent starts appears as an enemy in the game canvas. Completed tasks explode; failed tasks damage the agent's shield.
🔧
Tool Call Tracking
Every external tool your agent calls is logged with timing, inputs, and outputs. Identify bottlenecks instantly.
🧠
Thinking Indicator
When an agent is reasoning, "TRANSMITTING..." appears above its starfighter in the canvas.
📊
Fleet Overview
The sidebar shows all connected agents with status, task count, and shield health at a glance.
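These monitoring events travel to the server over Socket.io. As a rough sketch, an event payload might look like the dictionaries below; the field names here are an assumption for illustration, not the SDK's actual wire format.

```python
import time

def make_event(event: str, agent_id: str, **data) -> dict:
    """Build a hypothetical Slogr-style event payload (field names assumed)."""
    return {"event": event, "agent_id": agent_id, "ts": time.time(), "data": data}

# Event names match the Events Reference section:
log_evt = make_event("agent:log", "my-agent", message="Agent initialized")
tool_evt = make_event("agent:tool_call", "my-agent",
                      tool="web_search", inputs={"query": "AI frameworks"})
think_evt = make_event("agent:thinking", "my-agent", text="Planning next step")
```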
Agent Chat

Click any agent in the sidebar to open a direct communication channel. Send natural language commands and receive responses in real-time.

Chat is powered by the Anthropic API via a backend proxy. Each agent has its own persona and context; conversations are maintained per-agent, per-session.

// HOW IT WORKS

Dashboard sends message → Backend receives it via Socket.io → Backend calls the Anthropic API with the agent persona → Response sent back to the dashboard. Your API key never touches the browser.

// Agent chat flow
Dashboard  ──socket emit 'dashboard:chat'──►   Server
Server     ──POST /v1/messages──►              Anthropic API
Anthropic  ──response──►                       Server
Server     ──socket emit 'command:result'──►   Dashboard
Game Visualization

The central canvas renders a real-time starfighter game. Each connected agent becomes a ship. Incoming tasks become enemies flying from right to left. Agents auto-fire lasers at nearby tasks.

Visual Element                    Represents
Green starfighter                 Connected agent
Enemy box (BUG, TASK, ERROR...)   Active task
Laser beam                        Agent working on task
Explosion                         Task completed
Shield bar under ship             Agent health / error rate
TRANSMITTING...                   Agent is thinking / responding
Shield System

Each agent starts with 100 shield points. The shield decreases by 10 whenever a task fails or an error occurs, and is shown as a color-coded bar under each ship (green above 50%, red below).

Shield health is a quick indicator of agent reliability: a low-shield agent needs attention.
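The shield mechanics are simple enough to sketch in a few lines of Python. This is an illustrative model of the rules as documented (start at 100, lose 10 per failure, green above 50%), not the dashboard's actual implementation.

```python
SHIELD_MAX = 100
SHIELD_HIT = 10  # points lost per failed task or error

def apply_failure(shield: int) -> int:
    """Reduce the shield by one hit, never dropping below zero."""
    return max(0, shield - SHIELD_HIT)

def shield_color(shield: int) -> str:
    """Color of the bar under the ship: green above 50%, red otherwise."""
    return "green" if shield > SHIELD_MAX * 0.5 else "red"

shield = SHIELD_MAX
for _ in range(6):               # six failed tasks in a row
    shield = apply_failure(shield)
print(shield, shield_color(shield))  # 40 red
```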

Basic Usage
from slogr import Slogr

# Initialize
slogr = Slogr(
    api_key="sk-slogr-demo",   # Slogr server key (not Anthropic key)
    agent_id="agent-01",        # Unique ID shown in dashboard
    agent_name="My Agent",      # Display name
    framework="langchain",      # langchain | crewai | autogen | custom
    server_url="http://localhost:3000"
)

# Connect to dashboard
slogr.connect()

# Log messages
slogr.log("Agent initialized successfully")

# Update status
slogr.update_status("running")  # idle | running | thinking | error

# Disconnect when done
slogr.disconnect()
Task Tracking

Use the task() context manager to track individual tasks. The task appears as an enemy in the game canvas and is tracked in the dashboard.

from slogr import Slogr, task

slogr = Slogr(...)
slogr.connect()

# Context manager (auto start/finish/fail)
with slogr.task("Research AI frameworks"):
    result = my_agent.run("List top AI agent frameworks in 2026")
    # Task auto-finishes when block exits
    # Task auto-fails if exception is raised

# Manual control
task_id = slogr.start_task("Write blog post", metadata={"topic": "AI"})
try:
    result = write_post()
    slogr.finish_task(task_id, result=result)
except Exception as e:
    slogr.fail_task(task_id, error=str(e))

# Decorator
@task(slogr)
def research(query):
    return agent.run(query)
Events Reference

The SDK emits the following Socket.io events to the Slogr server:

Method                  Event             Description
slogr.connect()         agent:register    Agent comes online in dashboard
slogr.start_task()      task:start        Task begins, enemy spawns in game
slogr.finish_task()     task:finish       Task done, enemy explodes
slogr.fail_task()       task:fail         Task failed, shield decreases
slogr.log()             agent:log         Log message in dashboard
slogr.thinking()        agent:thinking    TRANSMITTING shown on ship
slogr.tool_call()       agent:tool_call   External tool invocation logged
slogr.update_status()   agent:status      Status badge updates in sidebar
slogr.disconnect()      agent:offline     Agent goes offline in dashboard
Receiving Commands

Register a handler to receive commands sent from the dashboard chat panel.

@slogr.on_command()
def handle_command(command, data):
    """
    command: str β€” the message sent from dashboard
    data: dict β€” additional metadata
    returns: str β€” reply sent back to dashboard
    """
    result = my_agent.run(command)
    return result
LangChain

Use SlogrLangChain as a LangChain callback handler. It automatically tracks chain runs, LLM calls, tool calls, and errors.

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from slogr import Slogr
from slogr.integrations import SlogrLangChain

slogr = Slogr(agent_id="lc-agent", framework="langchain", ...)
slogr.connect()

handler = SlogrLangChain(slogr)

llm = ChatOpenAI(model="gpt-4")
prompt = PromptTemplate.from_template("{input}")
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])

# All chain activity automatically tracked in dashboard
result = chain.run("Analyze this dataset")
CrewAI
from crewai import Crew, Agent, Task
from slogr import Slogr
from slogr.integrations import SlogrCrewAI

slogr = Slogr(agent_id="crew-01", framework="crewai", ...)
slogr.connect()

researcher = Agent(role="Researcher", goal="...", backstory="...")
writer = Agent(role="Writer", goal="...", backstory="...")

crew = Crew(agents=[researcher, writer], tasks=[...])

# Wrap crew with Slogr
result = SlogrCrewAI(slogr).run(crew)
AutoGen
import autogen
from slogr import Slogr
from slogr.integrations import SlogrAutoGen

slogr = Slogr(agent_id="autogen-01", framework="autogen", ...)
slogr.connect()

assistant = autogen.AssistantAgent("assistant", llm_config={...})
user = autogen.UserProxyAgent("user_proxy", ...)

# Patch agents: all messages tracked automatically
SlogrAutoGen(slogr).patch([assistant, user])

user.initiate_chat(assistant, message="Write a Python script...")
Custom Framework

Use the SDK directly for any custom agent or framework.

from slogr import Slogr

slogr = Slogr(agent_id="custom-01", framework="custom", ...)
slogr.connect()

def run_my_agent(query):
    slogr.update_status("running")
    slogr.thinking(f"Processing: {query}")
    
    task_id = slogr.start_task(query)
    try:
        # Your agent logic here
        slogr.tool_call("web_search", inputs={"query": query})
        result = my_custom_logic(query)
        slogr.finish_task(task_id, result=str(result))
        return result
    except Exception as e:
        slogr.fail_task(task_id, error=str(e))
        raise
REST API
Method   Endpoint                   Description
GET      /health                    Server health check
GET      /api/agents                List all agents
GET      /api/agents/:id            Get agent details
GET      /api/agents/:id/logs       Agent log history
GET      /api/agents/:id/tasks     Agent task history
GET      /api/tasks                 All tasks
GET      /api/stats                 Fleet statistics
POST     /api/agents/:id/command    Send command to agent
POST     /api/chat                  Anthropic API proxy
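Any HTTP client can call these endpoints. Below is a minimal sketch using Python's standard library, with the network calls guarded so the URL helper can be reused on its own; the base URL is the Quick Start default, and the JSON response shapes are not assumed here.

```python
import json
from urllib.request import urlopen

BASE = "http://localhost:3000"

def endpoint(path: str, agent_id: str = "") -> str:
    """Expand the ':id' placeholder in an endpoint path into a full URL."""
    return BASE + (path.replace(":id", agent_id) if agent_id else path)

def get_json(url: str) -> dict:
    """GET a Slogr endpoint and decode the JSON body."""
    with urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(get_json(endpoint("/health")))                            # server health
    print(get_json(endpoint("/api/agents/:id/tasks", "agent-01")))  # task history
```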
Environment Variables
# .env
ANTHROPIC_API_KEY=sk-ant-xxxx   # Required for agent chat
PORT=3000                        # Server port (default: 3000)
# DB_PATH=./data/slogr.db       # SQLite path (optional)
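As a sketch of how these variables are consumed, the logic looks roughly like this. The real server is Node.js; this Python version only mirrors the documented behavior (API key required, PORT defaulting to 3000, DB_PATH optional) for consistency with the SDK examples.

```python
import os

def load_config(env=os.environ) -> dict:
    """Read Slogr-style settings: API key required, PORT defaults to 3000."""
    api_key = env.get("ANTHROPIC_API_KEY")
    if not api_key:
        raise RuntimeError("ANTHROPIC_API_KEY is required for agent chat")
    return {
        "api_key": api_key,
        "port": int(env.get("PORT", 3000)),
        "db_path": env.get("DB_PATH"),  # optional SQLite path
    }
```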