Everything you need to run AI at scale.

Octopus combines deep AI reasoning with workflow automation — every agent decision is planned, tool-assisted, and fully auditable from the first token to the final result.

AI reasoning meets workflow automation

Octopus is a workflow-driven AI Agent Engine. Agents reason, plan, call tools, execute sub-workflows, and explain every decision — all inside a single auditable runtime.

AI Reasoning Engine

Each agent uses its assigned LLM to reason step-by-step, form plans, and select tools dynamically. The engine surfaces the chain-of-thought so every decision is explainable and reviewable.

ProcessServer Integration

AI agents are first-class citizens inside ProcessServer workflows. An AI Agent Node runs a full Octopus agent as one workflow step — combining intelligent reasoning with 26+ standard executors like IF/Switch, HTTP Request, Loop, and Human Approval.

Full Audit Trail

Every agent action, tool call, LLM response, and workflow step is written to an immutable audit log. Debug, replay, and prove compliance at any point in time — nothing is a black box.

Multi-LLM, provider-agnostic

Connect any major LLM provider — or run local models — without changing your agent logic. Each agent carries its own model config independently from every other agent.

Supported Providers

First-class support for today's leading LLM providers, with more on the roadmap.

  • OpenAI — GPT-4, GPT-3.5 Turbo
  • Anthropic — Claude (all versions)
  • Azure OpenAI — enterprise-hosted deployments
  • DeepSeek — cost-efficient reasoning models
  • Google Gemini — multimodal foundation models
  • Local models via Microsoft Semantic Kernel
  • HuggingFace — planned Phase 2

Per-Agent Configuration

Every agent carries a fully independent LLM configuration.

  • Provider selection per agent
  • Model version pinning
  • Temperature control (0.0 – 2.0)
  • Max token budget per call
  • Custom stop sequences
  • Provider-specific extended settings

Fallback Chains

Define ordered fallback sequences that keep pipelines running even when a provider is unavailable or over budget.

  • Ordered fallback model list per agent
  • Cost cap triggers — switch on budget threshold
  • Latency-based routing rules
  • Automatic retry before escalating fallback
  • Zero downtime for production pipelines
  • All fallback decisions written to audit log
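
The selection logic can be sketched as a walk down the ordered list, taking the first entry whose cost cap and latency rule are still satisfied. This is an illustrative sketch only; `FallbackEntry`, `CallStats`, and `pickModel` are invented names, not the Octopus API.

```typescript
// Hypothetical per-agent fallback resolution. Names are illustrative.
interface FallbackEntry {
  provider: string;       // e.g. "openai", "deepseek"
  model: string;          // pinned model version
  maxCostUsd: number;     // cost cap trigger for this entry
  maxLatencyMs: number;   // latency-based routing rule
}

interface CallStats {
  spentUsd: number;          // budget already consumed this period
  observedLatencyMs: number; // recent latency for routing decisions
}

// Walk the ordered list and return the first entry whose cost cap and
// latency rule still hold; callers retry the chosen entry before
// escalating further down the chain.
function pickModel(chain: FallbackEntry[], stats: CallStats): FallbackEntry | undefined {
  return chain.find(e =>
    stats.spentUsd < e.maxCostUsd && stats.observedLatencyMs <= e.maxLatencyMs);
}

const chain: FallbackEntry[] = [
  { provider: "openai", model: "gpt-4", maxCostUsd: 100, maxLatencyMs: 2000 },
  { provider: "deepseek", model: "deepseek-chat", maxCostUsd: 500, maxLatencyMs: 5000 },
];

// The primary is over its cost cap, so the fallback entry is selected.
const chosen = pickModel(chain, { spentUsd: 150, observedLatencyMs: 1200 });
```

In production the same decision would also be written to the audit log, as the last bullet above notes.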

AI nodes inside ProcessServer workflows

ProcessServer hosts two special AI executors alongside 26+ standard nodes — letting you blend deterministic workflow logic with generative AI in a single, unified pipeline.

AI Agent Node

Runs a complete Octopus agent as a single workflow step inside ProcessServer.

  • Receives input variables from upstream steps
  • Injects conversation context and RAG knowledge
  • Routes to success or error output ports
  • ProcessServer knows nothing about LLMs — all AI logic delegated to AgentService
  • Full agent audit trail attached to workflow run
  • Composable with any other ProcessServer executor

AI Function Node

Describe logic in plain language — the LLM generates executable code at runtime.

  • Natural language function specification
  • LLM generates code on first execution
  • Sandboxed execution environment
  • Binds outputs directly to workflow variables
  • Generated code is cached for subsequent runs
  • No manual coding required for custom transforms
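
The generate-once, cache-thereafter behaviour can be sketched as a cache keyed by the natural-language specification. `generateCode` below is a stub standing in for the real LLM call and sandbox; none of these names are the actual Octopus API.

```typescript
// Illustrative sketch of caching LLM-generated code by specification.
type GeneratedFn = (input: number[]) => number;

const codeCache = new Map<string, GeneratedFn>();
let generations = 0; // counts simulated LLM calls

function generateCode(spec: string): GeneratedFn {
  generations += 1; // one expensive LLM call per uncached spec
  // A real node would compile sandboxed generated code; we return a stub
  // that sums its input, matching the example spec below.
  return (input) => input.reduce((a, b) => a + b, 0);
}

function runFunctionNode(spec: string, input: number[]): number {
  let fn = codeCache.get(spec);
  if (!fn) {                 // first execution: generate and cache
    fn = generateCode(spec);
    codeCache.set(spec, fn);
  }
  return fn(input);          // subsequent runs reuse the cached code
}

const first = runFunctionNode("sum the input list", [1, 2, 3]);
const second = runFunctionNode("sum the input list", [4, 5]); // cache hit
```

Only the first call pays the generation cost; the second resolves from the cache.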

Mixed Workflow Execution

AI nodes and standard executors work side by side in a single workflow graph.

  • IF / Switch conditional branching
  • HTTP Request — call any external API
  • Loop — iterate over collections
  • Human Approval gate — pause for review
  • Sub-workflow execution (nested pipelines)
  • 26+ standard executors available today

Tool calling via MCP

Octopus implements the Model Context Protocol for tool calling — giving agents a standardised, discoverable way to act on the world beyond text generation.

Built-In Tools

A production-ready tool library ships with Octopus out of the box.

  • GetInvoices — query financial records
  • QueryDatabase — structured data retrieval
  • SendEmail — transactional messaging
  • CallAPI — generic HTTP for any endpoint
  • GenerateReport — structured output generation
  • ExecuteWorkflow — trigger sub-workflows from within an agent

Custom Tool Registration

Extend the tool library with your own business-specific tools in minutes.

  • Implement the IToolProvider interface
  • Declare JSON schema for inputs and outputs
  • Automatic discovery by the AgentService
  • Async execution with full error propagation
  • Tool call records written to audit log
  • Per-user and per-agent tool access control
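
A minimal sketch of what registration might look like, with a declared JSON schema for inputs. The `ToolProvider` shape and `registerTool` helper below are guesses for illustration; the real IToolProvider interface may differ.

```typescript
// Hedged sketch of a tool provider with a declared input schema.
interface ToolProvider {
  name: string;
  inputSchema: object; // JSON schema describing the tool's arguments
  execute(args: Record<string, unknown>): Promise<unknown>;
}

const registry = new Map<string, ToolProvider>();

// Stands in for the automatic discovery step done by the AgentService.
function registerTool(tool: ToolProvider): void {
  registry.set(tool.name, tool);
}

registerTool({
  name: "GetInvoices",
  inputSchema: {
    type: "object",
    properties: { customerId: { type: "string" } },
    required: ["customerId"],
  },
  async execute(args) {
    // A real tool would query financial records; return a stub row.
    return [{ customerId: args.customerId, amount: 120.5 }];
  },
});

const registered = registry.get("GetInvoices");
```

Async execution lets errors propagate to the caller as rejected promises, matching the bullet above.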

Web Automation

The WebDriver plugin gives agents full browser control via Selenium and Playwright.

  • Navigate to any URL
  • Extract page content and structured data
  • Fill and submit forms programmatically
  • Capture screenshots for evidence or review
  • Headless or headed execution modes
  • Works as a standard MCP tool — any agent can use it

RAG — grounded in your enterprise data

Octopus retrieves relevant knowledge before every LLM call, so agents answer from your actual business data rather than model priors. Fully configurable per agent, per tenant.

Knowledge Sources

Ingest enterprise content from wherever it lives today.

  • PDF documents — contracts, manuals, reports
  • Word documents — policies, SOPs
  • HTML pages — intranet and public web content
  • Database tables — live structured data
  • API endpoints — real-time external knowledge
  • Automatic chunking and embedding on upload

Semantic Retrieval

Vector similarity search returns the most relevant chunks before each LLM prompt.

  • Qdrant vector database — high-performance retrieval
  • PGVector — PostgreSQL-native vector search
  • Configurable top-K results per agent
  • Adjustable similarity threshold per agent
  • Chunk size tuning for context precision
  • Retrieved chunks injected into agent context window
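
The top-K and threshold knobs can be sketched as a score-filter-sort pipeline over embedded chunks. This is a toy in-memory version using cosine similarity; a real deployment delegates the search to Qdrant or PGVector, and the names here are illustrative.

```typescript
// Minimal top-K vector retrieval with a similarity threshold.
interface Chunk { text: string; embedding: number[]; }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every chunk, drop those under the threshold, keep the best K.
function retrieve(query: number[], chunks: Chunk[], topK: number, threshold: number): Chunk[] {
  return chunks
    .map(c => ({ c, score: cosine(query, c.embedding) }))
    .filter(s => s.score >= threshold)
    .sort((x, y) => y.score - x.score)
    .slice(0, topK)
    .map(s => s.c);
}

const chunks: Chunk[] = [
  { text: "refund policy", embedding: [1, 0] },
  { text: "holiday schedule", embedding: [0, 1] },
  { text: "returns process", embedding: [0.9, 0.1] },
];
const hits = retrieve([1, 0], chunks, 2, 0.5);
```

The surviving chunks are what gets injected into the agent's context window before the prompt is sent.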

Knowledge Collections

Organise documents into named, reusable collections — assigned per agent.

  • Named collections — logical grouping of related documents
  • Per-agent collection assignment in AgentComposite
  • Tenant-isolated — no cross-tenant knowledge leakage
  • Collection-level access control
  • Managed through the knowledge-app micro-frontend
  • Retrieval tuning configurable per collection

Multi-turn conversation management

ConversationComposite carries the full interaction state from the first message to the final outcome — across agents, sessions, and users — with nothing lost between turns.

Context Preservation

Every turn is remembered and made available to the LLM within the configured context window.

  • Full message history persisted in ConversationComposite
  • Automatic context window management
  • Recent messages always included
  • Extracted variables carried across turns
  • Tool call history available for reasoning
  • Workflow execution state preserved mid-run
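
One plausible shape of automatic context window management: always keep the system prompt, then include the most recent turns that fit a token budget. The `countTokens` helper below is a crude whitespace stand-in for a real tokenizer, and the whole sketch is illustrative rather than the Octopus implementation.

```typescript
// Sketch of budget-based context window assembly.
interface Message { role: "system" | "user" | "assistant"; text: string; }

const countTokens = (m: Message): number => m.text.split(/\s+/).length;

function buildWindow(history: Message[], budget: number): Message[] {
  const system = history.filter(m => m.role === "system");
  const rest = history.filter(m => m.role !== "system");
  let used = system.reduce((n, m) => n + countTokens(m), 0);
  const kept: Message[] = [];
  // Walk backwards so the most recent turns are always included first.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = countTokens(rest[i]);
    if (used + cost > budget) break;
    used += cost;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}

const history: Message[] = [
  { role: "system", text: "You are a billing agent" },
  { role: "user", text: "old question about an old invoice" },
  { role: "assistant", text: "old answer" },
  { role: "user", text: "what is my current balance" },
];
const windowed = buildWindow(history, 13); // oldest user turn is dropped
```

Extracted variables and tool call history would ride alongside the kept messages, per the bullets above.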

Session Continuity

Conversations can be paused, handed off to another agent, and resumed without losing a single byte of state.

  • Pause and resume at any conversation turn
  • Hand-off between agents with full context transfer
  • Complete rehydration from persisted CompositeState
  • Cross-session history per user
  • Tenant-scoped — UserComposite links all sessions
  • Human-in-the-loop approval gates supported

Real-Time Updates via SignalR

Users watch every step of agent reasoning and execution as it happens — no polling, no page refresh.

  • SignalR WebSocket connection in the chat-app
  • Each tool call streamed to the UI as it executes
  • Each LLM reasoning step surfaced in real time
  • Workflow node transitions shown live
  • Error states and retries visible immediately
  • Structured step log available for post-run review
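
On the client, each streamed event lands in a handler and is appended to the structured step log. The sketch below keeps this local for illustration; the event names and shapes are assumptions, and in the real chat-app the `onStep` handler would be wired to a SignalR connection rather than called directly.

```typescript
// Local sketch of dispatching streamed agent events to the UI.
type StepEvent =
  | { kind: "reasoningStep"; detail: string }
  | { kind: "toolCall"; tool: string }
  | { kind: "nodeTransition"; node: string };

const stepLog: string[] = []; // structured log for post-run review

function onStep(event: StepEvent): void {
  switch (event.kind) { // render each step as it arrives
    case "reasoningStep":
      stepLog.push(`think: ${event.detail}`);
      break;
    case "toolCall":
      stepLog.push(`tool: ${event.tool}`);
      break;
    case "nodeTransition":
      stepLog.push(`node: ${event.node}`);
      break;
  }
}

// A server push would invoke onStep once per streamed event.
onStep({ kind: "reasoningStep", detail: "plan the query" });
onStep({ kind: "toolCall", tool: "QueryDatabase" });
onStep({ kind: "nodeTransition", node: "AI Agent Node -> HTTP Request" });
```

Because every event is appended as it arrives, the same log doubles as the post-run review trail.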

Plugin ecosystem

Octopus is built on a composable plugin model. Five plugins are complete and production-ready today; the ecosystem grows with every release.

SqlServerStorage

Composite persistence for all Octopus objects via SQL Server. Efficient selective hydration and caching keep query costs low even as conversation histories grow.

SemanticKernel

Microsoft Semantic Kernel integration for LLM orchestration. Enables local model execution and memory connectors — keeping data on-premises when required by policy.

WebDriver

Selenium and Playwright browser automation surfaced as a standard MCP tool. Agents can navigate, extract, fill forms, and screenshot any web page as part of their tool-calling chain.

ChatbotUI

Embeddable chatbot component that connects directly to the Octopus runtime via SignalR. Fully brandable — drop it into any web app and your users interact with agents instantly.

Process

The full ProcessServer workflow engine as a plugin. Brings AI Agent Node, AI Function Node, and 26+ standard executors into the Octopus runtime as a single installable unit.

Build Your Own

The plugin SDK is open. Implement a plugin interface, declare your capabilities, and the Octopus runtime discovers and loads your plugin automatically. The extension-app gives admins a UI for installing, configuring, and monitoring plugin health.
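
The implement-declare-discover cycle might look like the sketch below. The `OctopusPlugin` shape is a hypothetical stand-in for the real SDK interface; declared capabilities drive what the runtime exposes, and the health check is what extension-app would poll.

```typescript
// Hypothetical plugin shape and discovery step.
interface OctopusPlugin {
  name: string;
  capabilities: string[]; // declared at load time
  healthy(): boolean;     // polled for health monitoring
}

const loaded: OctopusPlugin[] = [];

// Stands in for the runtime's automatic discovery and load step.
function discover(plugin: OctopusPlugin): void {
  loaded.push(plugin);
}

discover({
  name: "SqlServerStorage",
  capabilities: ["composite-persistence"],
  healthy: () => true,
});
discover({
  name: "WebDriver",
  capabilities: ["mcp-tool", "browser-automation"],
  healthy: () => true,
});

// The runtime can then route by capability, e.g. find all MCP tools.
const toolPlugins = loaded.filter(p => p.capabilities.includes("mcp-tool"));
```

Routing by declared capability, rather than by plugin name, is what lets future plugins slot in without runtime changes.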

Security and multi-tenancy

TenantID is enforced at every service layer — AgentService, ConversationService, KnowledgeService, ProcessEngine. No cross-tenant data is ever accessible, even by accident.

Tenant Isolation

Complete data separation baked into the architecture — not a filter layer bolted on top.

  • TenantID enforced at every service layer
  • AgentService, ConversationService, KnowledgeService, ProcessEngine all tenant-scoped
  • Knowledge collections are tenant-isolated
  • User and conversation composites are tenant-bound
  • No shared state between tenants at any level
  • Suitable for SaaS multi-tenant deployment
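
Enforcing TenantID at the service layer means the filter is applied inside the boundary, not left to callers. A minimal sketch, with invented names:

```typescript
// Sketch of unconditional tenant scoping at a service boundary.
interface Conversation { id: string; tenantId: string; topic: string; }

const store: Conversation[] = [
  { id: "c1", tenantId: "acme", topic: "invoices" },
  { id: "c2", tenantId: "globex", topic: "shipping" },
];

function listConversations(callerTenantId: string): Conversation[] {
  // The tenant filter is applied unconditionally; there is no code path
  // that returns rows from another tenant.
  return store.filter(c => c.tenantId === callerTenantId);
}

const acmeView = listConversations("acme"); // globex rows never visible
```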

Granular Permissions

Access control is defined at the resource level — not just the service level.

  • Agent execution permission — per user, per agent
  • Per-tool calling permission — control exactly which tools each user can trigger
  • Knowledge collection access grants
  • Workflow execution permission checks
  • Role-based defaults with per-user overrides
  • All permission decisions captured in audit log
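
Role defaults with per-user overrides can be sketched as a two-step lookup where the override, when present, wins. The grant model below is an assumption made for illustration, not the actual Octopus schema.

```typescript
// Sketch of resource-level tool permission resolution.
interface Grants {
  roleToolDefaults: string[];                 // tools the role allows
  userToolOverrides: Record<string, boolean>; // per-user allow/deny
}

function canCallTool(grants: Grants, tool: string): boolean {
  const override = grants.userToolOverrides[tool];
  if (override !== undefined) return override;   // override wins
  return grants.roleToolDefaults.includes(tool); // else role default
}

const grants: Grants = {
  roleToolDefaults: ["QueryDatabase", "GenerateReport"],
  userToolOverrides: { SendEmail: true, QueryDatabase: false },
};

const decisions = [
  canCallTool(grants, "SendEmail"),      // allowed by per-user grant
  canCallTool(grants, "QueryDatabase"),  // per-user deny beats role
  canCallTool(grants, "GenerateReport"), // role default applies
];
```

Each decision would also be captured in the audit log, per the last bullet above.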

Audit and Encryption

Immutable records and enterprise-grade encryption protect every interaction end to end.

  • Immutable audit log on all agent actions
  • Tool calls, LLM responses, workflow steps all logged
  • AES-256 encryption at rest for all data
  • TLS in transit — all API and SignalR traffic
  • Azure KeyVault for encryption key management
  • Full trace replay from audit log for compliance review

Four purpose-built interfaces

Every role gets a focused micro-frontend, all loading into a shared shell via Module Federation (React/Next.js). Admin, builder, end-user, and developer — each sees exactly what they need.

agents-app

Admin and builder interface for creating and managing agents. Configure LLM providers, prompt templates, tool registrations, knowledge collection assignments, fallback chains, and related agent hand-offs — all without touching code.

chat-app

End-user multi-turn conversation interface. Connected via SignalR — every tool call, LLM reasoning step, and workflow node transition is visible to the user as it happens. Structured step log available for post-run review.

knowledge-app

Knowledge curator interface for document upload, collection management, and retrieval tuning. Upload PDFs, Word docs, and HTML pages; organise them into named collections; adjust top-K, similarity threshold, and chunk size per collection.

extension-app

Developer and admin interface for the plugin ecosystem. Install, configure, and monitor plugin health in one place. Supports all five current plugins and any future custom plugins that implement the standard plugin interface.

Composite architecture

Three rich composite objects carry the complete runtime state for every agent, conversation, and user. Selective hydration keeps memory lean; full rehydration restores every detail instantly.

AgentComposite

The complete definition of an agent, loaded in one object.

  • LLM provider config and model version
  • Prompt templates and system instructions
  • Registered tool list with JSON schemas
  • Named knowledge collections and RAG settings
  • Related agent references for hand-off
  • Ordered fallback model list with cost caps
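
The fields above can be pictured as one typed object; the TypeScript shape and property names below are illustrative guesses, not the actual composite schema.

```typescript
// Illustrative shape of the AgentComposite fields listed above.
interface AgentComposite {
  llm: { provider: string; model: string };          // pinned config
  promptTemplates: string[];                         // system instructions
  tools: { name: string; schema: object }[];         // registered tools
  knowledgeCollections: string[];                    // named RAG sources
  relatedAgents: string[];                           // hand-off targets
  fallbackChain: { model: string; maxCostUsd: number }[];
}

const billingAgent: AgentComposite = {
  llm: { provider: "openai", model: "gpt-4" },
  promptTemplates: ["You are a billing assistant."],
  tools: [{ name: "GetInvoices", schema: { type: "object" } }],
  knowledgeCollections: ["billing-policies"],
  relatedAgents: ["refunds-agent"],
  fallbackChain: [{ model: "deepseek-chat", maxCostUsd: 50 }],
};
```

Loading all of this in one object is what lets the runtime hydrate an agent in a single step.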

ConversationComposite

Full multi-turn state persisted across every interaction.

  • Complete message history
  • Active context window with extracted variables
  • Tool call records with inputs and outputs
  • Workflow execution state and step results
  • Pause / resume / hand-off with full rehydration
  • Tenant-scoped isolation enforced at service layer

UserComposite

Rich user model linking identity, access, and history.

  • Role-based permissions and agent access list
  • Per-user preferences and UI settings
  • Full conversation history for this user
  • Accessible knowledge collections
  • Tenant-scoped — no cross-tenant leakage
  • Tool-level permission grants per user

What's coming next

The MVP is complete and production-ready. The roadmap extends Octopus with deeper memory, richer models, and self-improving agent capabilities across the next two phases.

Phase 2 — Q2 2026

Deeper intelligence and collaborative agents arriving in the next release cycle.

  • Long-term memory across conversations
  • Advanced RAG reranking for higher retrieval precision
  • Agent-to-agent collaboration — structured inter-agent communication
  • Prompt auto-optimisation — agents improve their own prompts over time

Phase 3 — Q3 2026

Foundation model capabilities and agentic self-improvement on the horizon.

  • Fine-tuning on company-specific data
  • Vision input — agents that understand images and documents visually
  • Voice input and output — spoken interaction with agents
  • Agentic self-improvement loops — agents that learn from their own runs

Planned Plugins

Two additional plugins are on the confirmed roadmap to extend the model and retrieval ecosystem.

  • HuggingFace — open-source model hosting and inference
  • Qdrant — dedicated vector database plugin with advanced indexing
  • Both follow the same plugin interface as today's five production plugins
  • Available via extension-app when released

Ready to build with Octopus?

MVP complete. Production-ready.