Slogr is an open-source, self-hosted monitoring and command platform for AI agents. As AI agent deployments grow in complexity — spanning multiple frameworks, tasks, and tool calls — developers lack visibility into what their agents are doing in real time. Slogr solves this by providing a unified dashboard to monitor, debug, and communicate with any AI agent, regardless of the underlying framework.
Built on Node.js with Socket.io for real-time communication and a Python SDK for agent integration, Slogr operates entirely on the user's own infrastructure. No data is sent to third-party servers. No subscriptions required. Your agents, your server, your data.
Give every AI developer a cockpit — a real-time command center where they can see exactly what their agents are doing, intervene when needed, and ship with confidence.
The rise of AI agent frameworks — LangChain, CrewAI, AutoGen, and others — has made it dramatically easier to build autonomous AI systems. However, operating these agents in production remains a significant challenge.
Most agent frameworks provide minimal built-in observability. Developers rely on print statements, log files, and ad-hoc debugging tools to understand agent behavior. When an agent fails or behaves unexpectedly, tracing the root cause is time-consuming and error-prone.
Existing monitoring solutions are largely post-hoc — they show what happened after the fact. There is no standard way to observe an agent's reasoning in real time, send it commands mid-task, or dynamically adjust its behavior without stopping and restarting the process.
Teams often use multiple agent frameworks across projects. Each framework has its own logging format, callback system, and debugging interface. There is no unified layer that works across all of them.
AI agents consume API tokens at scale. Without per-agent cost tracking, teams have no visibility into which agents are burning resources and why.
Slogr provides a real-time command center for AI agents. It consists of three components: a lightweight backend server, a Python SDK for agent integration, and a web dashboard for visualization and interaction.
| Component | Technology | Purpose |
|---|---|---|
| Backend Server | Node.js + Socket.io | Real-time event routing, REST API, chat proxy |
| Agent SDK | Python | Agent instrumentation and event emission |
| Dashboard | Vanilla JS + Canvas | Visualization, monitoring, and agent chat |
Self-hosted first. Slogr is designed to run on your own server with your own API key. No SaaS dependencies, no data collection, no vendor lock-in.
The backend is a Node.js Express server with Socket.io for bidirectional real-time communication. It handles agent registration, event routing, state management, and serves as a proxy for Anthropic API calls — ensuring API keys never appear in the browser.
```
// Core socket events
agent:register   → Agent comes online
task:start       → Agent begins a task
task:finish      → Task completed successfully
task:fail        → Task failed with error
agent:thinking   → Agent reasoning in progress
agent:tool_call  → External tool invoked
agent:log        → General log message
dashboard:chat   → Chat message from dashboard
```
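From the SDK side, these events travel as structured payloads over the Socket.io connection. The helper below is a minimal sketch of how such a payload might be assembled before emission; the field names (`event`, `agent_id`, `timestamp`, `data`) are illustrative assumptions, not Slogr's actual wire schema.

```python
import time


def build_event(event: str, agent_id: str, **data) -> dict:
    """Package an agent event into a structured payload for the server."""
    return {
        "event": event,        # one of the core socket events listed above
        "agent_id": agent_id,  # which agent this event belongs to
        "timestamp": time.time(),
        "data": data,          # event-specific fields (task name, error, ...)
    }


# A Socket.io client would then emit it, e.g.:
#   sio.emit("task:fail", build_event("task:fail", "research-agent-01",
#                                     error="timeout"))
payload = build_event("task:fail", "research-agent-01", error="timeout")
```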
The Python SDK wraps any AI agent with minimal code changes. It emits structured events to the Slogr server via Socket.io, enabling real-time monitoring without modifying agent logic.
```python
from slogr import Slogr

slogr = Slogr(
    api_key="sk-slogr-demo",
    agent_id="research-agent-01",
    framework="langchain",
    server_url="http://localhost:3000",
)
slogr.connect()

# Automatic task tracking
with slogr.task("Analyze quarterly report"):
    result = agent.run("Summarize Q4 financials")

# Handle dashboard commands
@slogr.on_command()
def handle_command(cmd, data):
    return agent.run(cmd)
```
The dashboard is a single-page application built with vanilla JavaScript and the HTML5 Canvas API. It features a real-time game visualization where each connected agent is represented as a starfighter, and incoming tasks appear as enemies to be destroyed.
The dashboard is divided into three panels: an agent sidebar for fleet overview, a central canvas for game visualization, and a right panel for direct agent communication.
Every event emitted by a connected agent — task starts, tool calls, reasoning steps, errors — appears on the dashboard within milliseconds. The game visualization provides an immediate, intuitive sense of agent activity and health.
Each agent has a dedicated chat channel. Dashboard users can send natural language commands to any agent and receive responses in real time. Chat is powered by the Anthropic API via a backend proxy — agent personas are maintained server-side.
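A chat message from the dashboard ultimately becomes an HTTP request to the backend proxy (`/api/chat`, described under Security). The request body below is a hypothetical shape for illustration; the actual field names are defined by the Slogr server, not shown here.

```python
import json

# Hypothetical chat request body; the server attaches the Anthropic API
# key and the agent's persona before forwarding the call upstream.
chat_request = {
    "agent_id": "research-agent-01",
    "message": "Pause your current task and summarize progress so far",
}
body = json.dumps(chat_request)
# A client would POST `body` to http://localhost:3000/api/chat and
# receive the agent's reply in the response.
```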
Each agent has a shield health score (0-100). Shield decreases when tasks fail or errors occur, and is visible both in the sidebar and as a health bar under each starfighter in the game canvas. This provides an instant visual indicator of agent reliability.
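The shield mechanic above amounts to a clamped running score per agent. The sketch below shows one way such bookkeeping could work; the specific penalty and reward values are illustrative assumptions, not Slogr's actual constants.

```python
def apply_shield_event(shield: int, event: str) -> int:
    """Adjust a 0-100 shield score in response to an agent event."""
    penalties = {"task:fail": -15, "agent:error": -10}  # assumed values
    rewards = {"task:finish": +5}                        # assumed values
    delta = penalties.get(event, 0) + rewards.get(event, 0)
    # Clamp to the 0-100 range shown in the sidebar and health bars
    return max(0, min(100, shield + delta))


shield = 100
shield = apply_shield_event(shield, "task:fail")    # 85
shield = apply_shield_event(shield, "task:finish")  # 90
```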
Multiple agents can connect simultaneously. The dashboard scales dynamically — new agents appear in the sidebar and join the game canvas automatically. There is no hard limit on the number of concurrent agents.
All Anthropic API calls are routed through the backend server. The user's API key is stored in a server-side .env file and never exposed to the browser. This is critical for self-hosted deployments accessible over the internet.
Agent state, task history, and logs are persisted in SQLite (or in-memory on Windows without build tools). State survives page refreshes and reconnections.
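For a sense of what SQLite-backed event persistence looks like, here is a minimal sketch using Python's stdlib `sqlite3` module. The table layout is an assumption for illustration; Slogr's actual schema lives in the backend server.

```python
import sqlite3

# In-memory DB for the sketch; Slogr persists to a file on disk
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        agent_id TEXT NOT NULL,
        event TEXT NOT NULL,
        payload TEXT,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO events (agent_id, event, payload) VALUES (?, ?, ?)",
    ("research-agent-01", "task:start", '{"task": "Analyze quarterly report"}'),
)
# Task history survives page refreshes because reads hit the DB, not memory
rows = conn.execute("SELECT agent_id, event FROM events").fetchall()
```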
Slogr provides first-class integration with major agent frameworks through dedicated adapters.
| Framework | Integration Method | Automatic Events |
|---|---|---|
| LangChain | BaseCallbackHandler | Chain start/end, tool calls, LLM calls, errors |
| CrewAI | Crew wrapper | Task delegation, agent handoffs, results |
| AutoGen | Agent patcher | Message exchange, function calls, replies |
| Custom | Direct SDK calls | Manual instrumentation via Python SDK |
```python
# LangChain integration
from slogr.integrations import SlogrLangChain

handler = SlogrLangChain(slogr)
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])

# CrewAI integration
from slogr.integrations import SlogrCrewAI

crew = Crew(agents=[...], tasks=[...])
result = SlogrCrewAI(slogr).run(crew)

# AutoGen integration
from slogr.integrations import SlogrAutoGen

SlogrAutoGen(slogr).patch([agent1, agent2])
```
Slogr is designed for self-hosting. All data — agent events, task history, logs, chat messages — stays on your own server. No telemetry is sent to Slogr or any third party.
The Anthropic API key is stored in a server-side .env file. The frontend never has direct access to the key. All AI API calls are proxied through the backend's /api/chat endpoint.
In the current v0.1.0 release, authentication is handled via a simple API key (default: sk-slogr-demo) passed by agents on registration. Production deployments should replace this with a strong random key stored in .env.
For production deployments accessible over the internet, we strongly recommend placing Slogr behind a reverse proxy (nginx) with HTTPS and changing the default API key to a strong random value.
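Replacing the default key is a one-liner with Python's stdlib `secrets` module. The `sk-slogr-` prefix is kept here for readability but is our assumption, not a format the server enforces.

```python
import secrets

# 32 random bytes → 64 hex characters of entropy; paste the result
# into the server-side .env file in place of sk-slogr-demo
api_key = "sk-slogr-" + secrets.token_hex(32)
```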
| Version | Feature | Status |
|---|---|---|
| v0.1.0 | Core monitoring, chat, game visualization, Python SDK | ✅ Released |
| v0.2.0 | SQLite persistence, task history, cost tracking | 🔄 In Progress |
| v0.3.0 | Authentication system, user accounts, team support | 📋 Planned |
| v0.4.0 | LlamaIndex + Flowise integrations, plugin system | 📋 Planned |
| v0.5.0 | Web3 wallet identity, on-chain agent activity proof | 📋 Planned |
| v1.0.0 | Stable API, cloud-hosted option, enterprise features | 📋 Planned |
As AI agents become more capable and more widely deployed, the need for robust observability tooling grows proportionally. Slogr represents a first step toward a universal command layer for AI agents — one that is open, self-hosted, and framework-agnostic.
By combining real-time monitoring, direct communication, and a memorable visual metaphor (agents as starfighters), Slogr makes AI agent operations both productive and engaging. We believe that better observability leads to better agents, and better agents lead to better outcomes.
Slogr is open source and community-driven. Star the repo, file issues, submit PRs. Help us build the cockpit every AI developer deserves.