← All posts

Why I Built Salmex I/O

Your AI doesn't belong to you. It lives on someone else's server, forgets you between sessions, and can't do anything except reply to what you type. You rent a conversation. And when it's over, nothing remains.

I built Salmex I/O because I think your AI should be yours, should do more than talk, and should be safe enough to trust with real work.


Two problems, one platform

The AI industry has two problems that feed each other.

The first is ownership. You pay for ChatGPT, Claude, Gemini — and in return you get a chat window on their infrastructure, governed by their rules, with your context locked inside their silo. Switch providers? Start from scratch. Your preferences, your project history, your accumulated context — gone. Every provider builds walls, not bridges.

The second is capability. The models are extraordinary, but every product wraps them in the same limited interface: a text box and a response pane. They can reason, summarise, and generate — but they can't act. They can't research a topic and save the results to your project. They can't send a message on your behalf. They can't execute code, monitor a system, or follow up tomorrow if nobody replies.

I wanted both solved. Not by two different products, not by stitching together five tools and a Docker Compose file — by one platform that runs on my machine, connects to any LLM provider I choose, and has the tools, memory, and safety architecture to actually do things on my behalf.

AI that acts

Salmex I/O isn't a chatbot. It's an agent with real tools and real execution.

It has an embedded coding agent that reads, writes, edits, and executes files across your projects. A multi-engine web search system — Perplexity, Brave, Google — with intelligent routing and deep research mode. A plugin system where you can extend the agent with anything you can build. When you say "research the top competitors and summarise their pricing," the agent searches the web, synthesises the results, and saves a report. When you say "refactor this module," it reads the codebase, plans the changes, and edits the files.
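A plugin, in this model, is just a named tool the agent can dispatch to. Here's a minimal sketch of how such an interface could look in Go — the `Tool` interface, `WebSearch` type, and `dispatch` function are hypothetical names for illustration, not Salmex I/O's actual API:

```go
package main

import "fmt"

// Tool is something the agent can invoke with structured arguments.
// (Illustrative interface, not the real plugin contract.)
type Tool interface {
	Name() string
	Call(args map[string]string) (string, error)
}

// WebSearch is a toy tool that would route a query to a search engine.
type WebSearch struct{ Engine string }

func (w WebSearch) Name() string { return "web_search" }

func (w WebSearch) Call(args map[string]string) (string, error) {
	// A real implementation would hit the engine's API here.
	return fmt.Sprintf("[%s] results for %q", w.Engine, args["query"]), nil
}

// dispatch looks up a tool by name and invokes it.
func dispatch(tools []Tool, name string, args map[string]string) (string, error) {
	for _, t := range tools {
		if t.Name() == name {
			return t.Call(args)
		}
	}
	return "", fmt.Errorf("unknown tool: %s", name)
}

func main() {
	tools := []Tool{WebSearch{Engine: "brave"}}
	out, err := dispatch(tools, "web_search", map[string]string{"query": "competitor pricing"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The point of the shape: anything that satisfies the interface — a search engine, a file editor, a messaging bridge — plugs into the same dispatch loop, which is what lets one safety pipeline sit in front of every capability.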

And it works across every channel you're already on. Telegram, Slack, Discord, the CLI, the web — same agent, same memory, same tools. Tell it to do something from your phone. Review the results on your laptop. The context follows you because the agent is persistent, not per-session.

AI that knows when to ask

This is where most agent platforms break. An agent that can act is powerful. An agent that can act without guardrails is dangerous.

I learned this the hard way. I was exploring running autonomous AI agents on my dev box — OpenClaw was the obvious choice. API keys wired into the config, full access to my filesystem, my terminal, my project directories. It worked. It was fast. I liked it. Then the CLAW-10 audit came out: 512 vulnerabilities, 8 critical, enterprise readiness score of 1.2 out of 5. I opened my ~/.openclaw directory. Every credential I'd given it — sitting in plain text. No encryption. No auth on the gateway. Any malicious webpage could hijack the WebSocket and execute commands as me.

OpenClaw hit 250,000 GitHub stars — faster than React — because people want AI that does things. But then: CVE-2026-25253 gave attackers arbitrary code execution, followed by 5+ additional CVEs. 1,184+ malicious skills in the official registry. A Meta AI safety director's agent deleted her emails without permission. Microsoft said it's "not appropriate to run on a standard personal or enterprise workstation." They recommended a dedicated VM just to use it safely. A dedicated virtual machine. To run a productivity tool.

Salmex I/O takes a different approach. Every tool call passes through a structured pipeline with an LLM judge — a separate model that reviews high-impact actions before they execute. Four risk levels. Reading a file happens instantly. Executing code, sending a message, calling an external API — the agent pauses and asks you. You get a notification on whatever channel you're on: "The agent wants to deploy the staging build. Approve or deny." You decide. If you approve, it proceeds. Everything is logged.
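The gating logic above can be sketched as a small decision function. Everything here — the tier names, the example actions, the `judgeApproves` and `humanApproves` stand-ins — is illustrative, not the platform's real pipeline:

```go
package main

import "fmt"

// Risk is a coarse tier assigned to every tool call before it runs.
type Risk int

const (
	RiskLow      Risk = iota // e.g. reading a file: runs immediately
	RiskModerate             // e.g. writing inside a project: runs, logged
	RiskHigh                 // e.g. executing code: reviewed by the LLM judge
	RiskCritical             // e.g. sends, deploys: judge plus human approval
)

// classify maps an action to a tier. Unknown actions get the strictest tier.
func classify(action string) Risk {
	switch action {
	case "read_file":
		return RiskLow
	case "write_file":
		return RiskModerate
	case "exec_code":
		return RiskHigh
	default:
		return RiskCritical
	}
}

// gate decides whether an action may proceed. judgeApproves stands in for
// a call to a separate reviewing model; humanApproves for a channel prompt.
func gate(action string, judgeApproves, humanApproves func(string) bool) bool {
	switch classify(action) {
	case RiskLow, RiskModerate:
		return true
	case RiskHigh:
		return judgeApproves(action)
	default:
		return judgeApproves(action) && humanApproves(action)
	}
}

func main() {
	yes := func(string) bool { return true }
	no := func(string) bool { return false }
	fmt.Println(gate("read_file", no, no))       // low risk never blocks
	fmt.Println(gate("deploy_staging", yes, no)) // critical needs the human
}
```

The design choice worth noticing: the default branch is the strictest one, so an action the classifier has never seen escalates to a human rather than slipping through.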

Here's the counterintuitive part: safety doesn't limit what the agent can do — it expands it. Without a judge, you'd never trust the agent with real stakes. You'd confine it to drafts and suggestions — a chatbot with extra steps. With a judge, you can let it handle progressively more. Routine tasks run autonomously. High-stakes tasks come to you. Over time, you calibrate the boundary. The result isn't a restricted agent. It's a trusted one.

AI that remembers

An agent without memory is just a script that runs once. Salmex I/O has three tiers: session transcripts that preserve every conversation, working memory that tracks your current goals, and long-term memory powered by PostgreSQL and pgvector — hybrid retrieval that learns what matters to you over weeks and months.
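Hybrid retrieval in this style typically blends a vector-similarity signal with a keyword signal. Here's a toy in-memory sketch of that ranking idea — the 0.7/0.3 weighting and the scoring functions are assumptions for illustration, not Salmex I/O's actual implementation; a real setup would push this work into pgvector and full-text queries:

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"strings"
)

// Memory pairs stored text with its embedding vector.
type Memory struct {
	Text      string
	Embedding []float64
}

// cosine measures directional similarity between two vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// keywordScore is the fraction of query words found in the text.
func keywordScore(text, query string) float64 {
	words := strings.Fields(strings.ToLower(query))
	if len(words) == 0 {
		return 0
	}
	hits := 0
	for _, w := range words {
		if strings.Contains(strings.ToLower(text), w) {
			hits++
		}
	}
	return float64(hits) / float64(len(words))
}

// rank orders memories by a weighted blend of both signals.
func rank(mems []Memory, queryVec []float64, query string) []Memory {
	score := func(m Memory) float64 {
		return 0.7*cosine(m.Embedding, queryVec) + 0.3*keywordScore(m.Text, query)
	}
	sort.SliceStable(mems, func(i, j int) bool { return score(mems[i]) > score(mems[j]) })
	return mems
}

func main() {
	mems := []Memory{
		{"prefers tabs over spaces", []float64{0.1, 0.9}},
		{"staging deploy failed last week", []float64{0.9, 0.1}},
	}
	top := rank(mems, []float64{0.85, 0.15}, "deploy staging")[0]
	fmt.Println(top.Text)
}
```

Blending the two signals is what makes retrieval robust: the vector side catches paraphrases the keywords miss, and the keyword side catches exact names and identifiers that embeddings blur together.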

It remembers your preferences, your project decisions, which approaches worked and which didn't. Each interaction makes the next one better. That's not a retention trick — it's how a personal tool should work. And because the memory lives on your machine, it's yours. Not locked in a provider's cloud. Not wiped when you cancel a subscription.

AI that's yours

Salmex I/O is a single Go binary. No npm supply chain. No 1,200 dependencies to audit. It runs on your machine, connects to any LLM provider — Anthropic, OpenAI, Google, or fully local with Ollama — and stores everything in your own PostgreSQL database. Your data never leaves your infrastructure unless you choose to connect to an external API.

This matters more when your AI can act. A chatbot that leaks your data is bad. An agent that leaks your data while having access to your filesystem, your messages, and the ability to execute code is catastrophic. Ownership isn't just a philosophy. When the AI is agentic, it's a security requirement.

I built Salmex I/O in a month. Every week I maxed out my Claude Code limits. I burned through a few hundred dollars of Cursor credits. 148 test files. 16 database migrations. One binary. No venture capital. No ads. No data mining. The Free tier includes everything — all features, all tools, no limits, no account. The Pro tier is for commercial use and keeps the project funded. $9 per user per month. Fair use, fair funding. We also offer dedicated bare-metal servers — real hardware, maintained by us — for people who want always-on infrastructure they don't have to manage.

What's live today

This isn't a roadmap. Salmex I/O is running. Telegram and CLI channels. Coding agent with full tool use. Multi-engine web search with deep research. Three-tier persistent memory. The LLM judge with four risk levels and human-in-the-loop escalation. Session branching. Plugin system. AES-256-GCM encrypted configuration. A desktop app for macOS in active development.

What's coming: scheduled tasks with natural language, browser automation, voice, more channels. Every feature goes through the same safety pipeline. I'd rather ship less and ship it right than race to a feature list that nobody trusts.


Your AI should be infrastructure you own, not a service you rent. It should do more than talk — it should act on your behalf. And it should know when to ask before it acts.

That's what I built. If it resonates, download Salmex I/O and try it. It's free. It's yours.

— Salmen
London. March 2026.