# Salmex I/O

> Personal AI operations platform. Runs locally. Persistent memory. Multi-channel. Agentic execution. Human-in-the-loop safety.

Salmex I/O is a self-hosted AI operations platform built as a single Go binary. It connects to multiple LLM providers (Anthropic, OpenAI, Gemini, Ollama), stores everything in PostgreSQL with pgvector, and acts on your behalf across Telegram, Slack, Discord, CLI, and web, with a 4-tier LLM judge that escalates high-risk actions to you before execution.

Built independently by Salmen Hichri in London. No venture capital. No ads. No data mining.

## Key Facts

- Category: Personal AI operations platform / AI agent framework
- Language: Go (single compiled binary, no runtime dependencies)
- Database: PostgreSQL + pgvector
- LLM Providers: Anthropic (Claude), OpenAI (GPT-4, o3), Google Gemini, Ollama (local)
- Search Engines: Perplexity, Brave, Google (intent-based routing, 2-tier cache)
- Channels: Telegram, Slack, Discord, CLI, HTTP, web (more planned)
- Architecture: 8-layer (Agent → Channel → Gateway → Plugin → Memory → Judge → Scheduler → UI)
- Security: AES-256-GCM encrypted config, LLM judge with 4 risk tiers, subprocess-isolated plugins
- Memory: Hybrid vector + BM25 retrieval, extraction pipeline, confidence decay, 3-tier (session/working/long-term)
- Creator: Salmen Hichri (https://salmenhq.com/)
- Company: Kensington Innovation Labs Limited, London, UK
- Licence: Proprietary binary. Free for individuals/students/non-profits. Pro $9/mo for commercial use.
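The Memory entry above describes hybrid vector + BM25 retrieval, and the feature list mentions reciprocal rank fusion (RRF) for merging the two result sets. As an illustrative sketch only (function names, the `k` constant, and the sample IDs are assumptions, not Salmex's actual code), RRF scores each document by summing `1 / (k + rank)` across the ranked lists it appears in:

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges any number of ranked result lists using reciprocal
// rank fusion: score(d) = Σ over lists of 1 / (k + rank(d)), where
// rank is 1-based and k (commonly 60) damps the influence of top ranks.
func rrfFuse(k float64, lists ...[]string) []string {
	scores := make(map[string]float64)
	for _, list := range lists {
		for i, id := range list {
			scores[id] += 1.0 / (k + float64(i+1))
		}
	}
	fused := make([]string, 0, len(scores))
	for id := range scores {
		fused = append(fused, id)
	}
	sort.Slice(fused, func(a, b int) bool { return scores[fused[a]] > scores[fused[b]] })
	return fused
}

func main() {
	// Hypothetical outputs of a vector search and a BM25 search.
	vector := []string{"memA", "memB", "memC"}
	bm25 := []string{"memB", "memD", "memA"}
	fmt.Println(rrfFuse(60, vector, bm25)) // [memB memA memD memC]
}
```

A document ranked moderately in both lists (memB, memA) outranks one ranked highly in only one list (memD), which is why RRF is a common choice for hybrid retrieval: it needs no score normalisation between the two engines, only ranks.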
## Features

- Persistent long-term memory with hybrid vector + BM25 retrieval and reciprocal rank fusion
- Multi-channel communication: same agent, same memory across Telegram, Slack, Discord, CLI, web
- Multi-provider LLM support: switch between Anthropic, OpenAI, Gemini, and Ollama without losing context
- Human-in-the-loop safety: LLM judge reviews every tool call, 4 risk levels, real-time approval via your preferred channel
- Embedded coding agent: read, write, edit, execute, grep, find files; session trees with branching
- Multi-engine web search: Perplexity, Brave, Google with intent-based routing and deep research mode
- Plugin system: JSON-RPC 2.0 over stdio, crash recovery, lazy start, health checks
- AES-256-GCM encrypted configuration: hot-reload, change history, rollback, no plain-text secrets
- Scheduled tasks: cron expressions, natural-language scheduling, dead-letter queue
- Session branching and compaction: tree-structured conversations, iterative summarisation
- 40+ typed events: structured event bus, SSE streaming, JSONL transcripts
- Full local operation: your data never leaves your machine unless you choose to connect external APIs

## Pricing

- Free: $0 forever. All features, all channels, all LLM providers, no limits, no account required. For individuals, students, non-profits.
- Pro: $9/month (paid annually). Commercial-use licence, remote access, priority support.
- CPU: $99/month. Dedicated bare-metal physical server, not a VM. Maintenance, backups, and security handled.
- GPU: $299/month (waitlist). Dedicated GPU server, local LLM inference via Ollama, full data sovereignty.

## Comparison: Salmex I/O vs OpenClaw

- Secrets: Salmex uses AES-256-GCM encryption in PostgreSQL. OpenClaw stores API keys in plain text (~/.openclaw).
- Safety: Salmex has a 4-tier LLM judge with human-in-the-loop review. OpenClaw has no tool review and full permissions.
- Memory: Salmex uses hybrid vector + BM25 retrieval with extraction. OpenClaw uses basic chat-history recall.
- Runtime: Salmex is a single Go binary. OpenClaw is Node.js with 1,200+ npm dependencies.
- Plugins: Salmex uses subprocess-isolated JSON-RPC 2.0. OpenClaw uses an npm marketplace (1,184+ malicious skills found, 36% containing prompt injection).
- Auth: Salmex uses HMAC-SHA256 with constant-time comparison. OpenClaw ships with auth disabled by default; 220,000+ instances are exposed to the public internet.

## Comparison: Salmex I/O vs Cloud AI (ChatGPT, Claude, Gemini)

- Data: Salmex runs locally; data never leaves your machine. Cloud AI sends data to provider servers.
- Memory: Salmex has persistent memory across all channels and providers. Cloud AI forgets between sessions.
- Models: Salmex lets you choose and switch freely. Cloud AI locks you into one provider.
- Safety: Salmex uses a user-controlled LLM judge. Cloud AI uses corporate policy enforcement.
- Execution: Salmex is an agent that acts (code, search, messages). Cloud AI is a chatbot that replies.

## Links

- Home: https://salmex.io/
- Blog: https://salmex.io/blog
- Why I Built Salmex I/O: https://salmex.io/blog/why-i-built-salmex
- Licence: https://salmex.io/licence
- Download: https://github.com/salmexio/core/releases/latest
- Creator: https://salmenhq.com/
- Contact: hello@salmex.io
- Extended info: https://salmex.io/llms-full.txt
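The auth comparison above mentions HMAC-SHA256 with constant-time comparison. Go's standard `crypto/hmac` package provides both primitives; the sketch below is illustrative only (the function names and example key are assumptions, not Salmex's actual implementation):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// sign computes an HMAC-SHA256 tag over a request body with a shared key.
func sign(key, body []byte) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write(body)
	return mac.Sum(nil)
}

// verify recomputes the tag and compares it in constant time.
// hmac.Equal avoids the timing side channel that a byte-by-byte,
// early-exit comparison would leak to an attacker probing tags.
func verify(key, body, tag []byte) bool {
	return hmac.Equal(sign(key, body), tag)
}

func main() {
	key := []byte("shared-secret") // assumption: any pre-shared key
	body := []byte(`{"action":"ping"}`)
	tag := sign(key, body)
	fmt.Println(verify(key, body, tag))               // true
	fmt.Println(verify(key, []byte("tampered"), tag)) // false
}
```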