Everything you need to integrate OctoMind's intent engine into your application.
Submit an intent to be broadcast to all agents. Each agent independently evaluates the intent, returns a proposal with confidence and estimated cost, and the scoring engine selects the winner.
| Parameter | Type | Description |
|---|---|---|
| text | string (required) | The intent description (max 2000 chars) |
| context | object (optional) | Context object passed to agents |
Example request:

```bash
curl -X POST https://octomind-9fce.polsia.app/api/intent \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Build a REST API for user authentication",
    "context": { "stack": "Node.js", "priority": "high" }
  }'
```

Response:

```json
{
  "success": true,
  "intent_id": 42,
  "status": "completed",
  "selected_agent": {
    "type": "code",
    "name": "Code Agent",
    "emoji": "💻",
    "confidence": 0.85,
    "score": 5.23
  },
  "proposals": [
    {
      "agent_type": "code",
      "confidence": 0.85,
      "estimated_cost": 0.1626,
      "score": 5.23,
      "selected": true,
      "reasoning": "Matched keywords: build, api..."
    },
    ...
  ],
  "result": "Technical assessment complete...",
  "processing_time_ms": 145,
  "memory_hits": 3
}
```
List recent intents with their proposals and results.
| Parameter | Type | Description |
|---|---|---|
| limit | number | Max intents to return (default: 20, max: 100) |
```bash
curl "https://octomind-9fce.polsia.app/api/intents?limit=5"
```
Engine statistics including total intents, proposals, memory entries, and agent win rates.
```bash
curl https://octomind-9fce.polsia.app/api/stats
```
List all available agent types with their descriptions, keywords, and base costs.
```bash
curl https://octomind-9fce.polsia.app/api/agents
```
Query shared memory for past execution results matching a search query.
| Parameter | Type | Description |
|---|---|---|
| q | string (required) | Search text to match against memory entries |
```bash
curl "https://octomind-9fce.polsia.app/api/memory?q=authentication"
```
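The server-side matching semantics are not specified here; as an illustration only, a sketch assuming simple case-insensitive substring matching over entry text (the `search_memory` helper and entry shape are assumptions, not the engine's actual implementation):

```python
def search_memory(entries: list[dict], q: str) -> list[dict]:
    """Naive case-insensitive substring match over entry text.

    Illustrative only: the real engine may use ranking, stemming, or embeddings.
    """
    needle = q.lower()
    return [e for e in entries if needle in e.get("text", "").lower()]


entries = [
    {"text": "JWT authentication flow for Node.js"},
    {"text": "CSV export pipeline for analytics"},
]
hits = search_memory(entries, "authentication")
```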
Eight agents compete on every intent, each with a unique evaluation strategy:
**Original Agents (keyword-based evaluation)**

- 💻 **Code Agent** — Software development, debugging, technical implementation
- 🔬 **Research Agent** — Analysis, data gathering, competitive research
- ✍️ **Writing Agent** — Content creation, copywriting, documentation

**Advanced Agents (deep integration patterns)**

- 🧠 **Hermes Reasoner** — Chain-of-thought decomposition (NousResearch/hermes-agent). Breaks complex intents into sub-steps before proposing; confidence is based on reasoning depth, not just keywords.
- 🐟 **MiroFish Swarm** — Bio-inspired distributed consensus (666ghj/MiroFish). Five neural nodes evaluate independently, then aggregate. Excels at scale, data pipelines, and parallel workloads.
- 🎭 **Agency Orchestrator** — Multi-agent coalition coordination (msitarzewski/agency-agents). Assesses cross-domain complexity and forms specialist coalitions; quality gates with bounded retry logic.
- 🔧 **InsForge Engineer** — Backend context engineering (InsForge/InsForge). Maps intents to 6 infrastructure primitives (auth, DB, storage, AI, functions, deployment); schema-driven validation with provider resolution cascading. Dominates full-stack backend tasks.
- ⚡ **Superpowers Discipline** — Skill-based execution discipline (obra/superpowers). Trigger-based skill activation from a composable library; anti-rationalization engineering prevents shortcuts. Two-stage quality gates plus evidence-based completion.
Every agent independently evaluates each intent and returns a proposal. The scoring engine then ranks proposals using:
```
// Score formula (same for all agents)
score = confidence / estimated_cost

// Original agents confidence:
//   Keyword match (0-0.6) + Memory boost (0-0.3) + Base relevance (0.1)

// Hermes confidence:
//   Complexity analysis (0-0.45) + Keywords (0-0.4) + Sub-step bonus (0-0.12) + Memory (0-0.3)

// MiroFish confidence:
//   Swarm consensus (5 nodes) + Scale bonus (0-0.18) + Pipeline bonus (0-0.12)

// Agency confidence:
//   Coalition value (domain count) + Coordination signals + Quality gate awareness + Memory

// InsForge confidence:
//   Infrastructure primitives (0-0.65) + Schema validation (0-0.18) + Security (0-0.15) + Memory

// Superpowers confidence:
//   Skill activation (discipline + phase + rationalization) + Evidence (0-0.12) + Quality gates (0-0.16)

// The agent with the highest score wins the intent
```
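The selection step itself is straightforward; a minimal Python sketch of the ranking (the `Proposal` type is illustrative, the formula is the documented `score = confidence / estimated_cost`):

```python
from dataclasses import dataclass


@dataclass
class Proposal:
    agent_type: str
    confidence: float      # 0.0 - 1.0, computed per-agent as described above
    estimated_cost: float  # must be > 0


def select_winner(proposals: list[Proposal]) -> Proposal:
    """Rank proposals by score = confidence / estimated_cost and return the best."""
    return max(proposals, key=lambda p: p.confidence / p.estimated_cost)


proposals = [
    Proposal("code", confidence=0.85, estimated_cost=0.1626),
    Proposal("research", confidence=0.40, estimated_cost=0.12),
]
winner = select_winner(proposals)
# 0.85 / 0.1626 ≈ 5.23 beats 0.40 / 0.12 ≈ 3.33, matching the example response
```

Note that a cheap agent with moderate confidence can outscore an expensive one with high confidence, which is the intended cost-awareness of the formula.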
Full API access to the intent engine