Things You Didn't Think About
"Most agents still need to be told what to do. The real breakthrough will come when AI assistants understand your goals and help you reach them without waiting for you to prompt them." — Fidji Simo, OpenAI
The Story
You're reviewing a client proposal on your desktop. Somewhere in the conversation — almost in passing — you mention a tight timeline. "We'd need to deliver by March 20." You move on. The proposal isn't the main thing you're working on today. You have three other threads going, plus a Slack conversation about infrastructure and a Telegram message from your cofounder about next week's investor call. The March 20 commitment gets filed away in your brain under "things I'll deal with later."
Three days later, your phone buzzes. Telegram message from your agent.
Heads up — you committed to delivering the Smith project by March 20. That's 11 days away. Based on your current task load, the fact that the research phase hasn't started, and your schedule next week (investor prep Monday-Tuesday, team offsite Thursday), you probably need to start the research by Wednesday at the latest to hit the deadline.
Want me to block Wednesday afternoon for it?
oh wow, yeah I completely forgot about that. Yes block it and also remind me Tuesday evening to prep the research brief
Done. Wednesday 1-5pm blocked for Smith research. Reminder set for Tuesday 7pm to prep the brief. I'll also flag it again on March 17 if the deliverable isn't looking close to done.
You didn't ask for this. You didn't set a reminder. You didn't create a task. You mentioned a date in passing during a conversation about something else, and the agent stored the commitment, tracked the timeline, correlated it with your schedule, and reached out at the point where the information became actionable.
This is what proactive AI looks like. Not notifications. Not alerts. Not a calendar that beeps at you. An agent that understands your commitments, monitors the context around them, and intervenes when its knowledge can prevent a problem you haven't noticed yet.
The Idea
The agent doesn't just respond to prompts. It thinks in the background. It periodically reviews your memory — commitments, deadlines, patterns, open threads — and surfaces things you need to know before you know you need to know them.
Pattern recognition. You mention feeling tired on Monday. You note poor sleep on Wednesday. You have a headache on Friday. Three isolated complaints across different conversations. You don't connect them because they're spread across days and contexts. The agent does.
I've noticed a pattern this week: fatigue Monday, poor sleep Wednesday, headache Friday. Looking at your recent data, your average screen time increased by 2.3 hours/day compared to last week — you've been doing late-night research sessions for the newsletter. The last time this pattern appeared (January), it resolved when you set a hard 10pm screen cutoff. Worth considering?
Cross-context connections. On Tuesday, you're researching competitor pricing for a strategy document. On Thursday, a customer asks about your pricing in a Slack thread. Two separate conversations, two different channels, two different purposes. The agent connects them.
Sarah Chen just asked in Slack about enterprise pricing tiers. Relevant: on Tuesday you were researching competitor pricing and noted that Acme Corp dropped their enterprise tier from $499 to $299/seat. You also flagged that their feature set at that price point overlaps with your Pro tier. Want me to surface your Tuesday analysis in the Slack thread?
Forgotten follow-ups. You sent a prospect an email two weeks ago. They haven't replied. You've moved on to other things. The agent hasn't.
Quick follow-up note: you sent David Park at Meridian a proposal on March 3. No reply in 14 days. Based on your previous interactions, he usually responds within 3-5 days. This is unusual. Want me to draft a gentle follow-up, or would you rather wait?
draft something casual, just checking in
Here's a draft:
"Hi David — wanted to check in on the proposal I sent over a couple weeks ago. No rush at all, just wanted to make sure it didn't get buried. Happy to jump on a quick call if it'd be easier to discuss. Hope things are going well at Meridian."
Want me to adjust the tone, or is that good to send?
Scheduled serendipity. While you're researching a topic for your newsletter, the agent finds a blog post from a company you evaluated back in January during a completely different project. The connection isn't obvious — you wouldn't have searched for it — but it's relevant.
While looking into edge computing trends for your newsletter, I found a post from Lattice Labs about their new edge deployment model. You evaluated Lattice back in January for the DevOps tooling project and liked their architecture but flagged concerns about their pricing model. Looks like they've restructured pricing since then — might be worth a second look for both the newsletter angle and the tooling evaluation.
What It Looks Like
Proactive deadline warnings. Commitments mentioned in passing — in conversation, in documents you're reviewing, in messages across any channel — are stored and tracked. The agent doesn't wait for you to create a task. It monitors timelines and alerts you when action is needed, factoring in your schedule, workload, and the dependency chain of what needs to happen first.
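A minimal sketch of the deadline-tracking half of this, in Python. The `Commitment` type, the fixed prep window, and the 14-day warning horizon are illustrative assumptions, not how the agent actually estimates lead time from workload and dependencies:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Commitment:
    description: str
    due: date
    source_channel: str

def days_of_lead_time(c: Commitment, today: date, prep_days: int = 3) -> int:
    # Days left before work must start to hit the deadline, assuming a
    # fixed prep window (a real agent would estimate this from memory).
    start_by = c.due - timedelta(days=prep_days)
    return (start_by - today).days

def should_warn(c: Commitment, today: date, horizon: int = 14) -> bool:
    # Warn once the deadline enters the planning horizon.
    return 0 <= (c.due - today).days <= horizon

# The Smith proposal commitment from the story, 11 days out
smith = Commitment("Deliver Smith project", date(2025, 3, 20), "desktop")
today = date(2025, 3, 9)
print(should_warn(smith, today))
print(days_of_lead_time(smith, today, prep_days=8))
```

The useful property is that nothing here requires the user to have created a task: the `Commitment` record is extracted from conversation, and the warning fires on the agent's schedule, not the user's.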
Contextual connections. The agent maintains a map of your active threads — projects, relationships, decisions, open questions. When new information arrives that's relevant to an existing thread, it connects them. This works across channels: a Telegram conversation on Monday can surface relevant context from a Slack discussion on Friday, or from a desktop coding session two weeks ago.
Pattern detection. Over time, the agent recognises recurring patterns in your behaviour, energy, communication, and work. Sleep quality correlating with late screen time. Meeting-heavy weeks correlating with missed deadlines. Certain types of tasks consistently getting deprioritised. It surfaces these patterns without judgement — just observation and, when relevant, a suggestion.
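At its simplest, pattern detection is correlation over time. This hypothetical sketch checks whether late-night screen minutes track next-day symptom scores; the data is invented to mirror the story's week, and the 0.7 threshold is an arbitrary illustration:

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

late_screen_minutes = [150, 40, 160, 30, 170]   # Sun-Thu nights
next_day_symptoms   = [2,   0,  2,   0,  3]     # fatigue/headache score, 0-3

r = pearson(late_screen_minutes, next_day_symptoms)
if r > 0.7:
    print(f"Possible link: late screen time vs next-day symptoms (r={r:.2f})")
```

A real agent would look across many candidate signal pairs and hedge its phrasing, as in the message above: observation and suggestion, not diagnosis.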
Memory-powered briefings. Before a meeting, the agent can compile everything it knows about the person you're meeting, the project you're discussing, the decisions that have been made, and the open questions. Not because you asked for a briefing — because it knows the meeting is coming and that you'd benefit from context.
Proactive research. When the agent notices you're working on a topic, it can preemptively search for relevant information and surface it at the right moment. Not a wall of links — a curated, contextualised finding that connects to what you're already thinking about.
How It Works
- Memory — The foundation of proactive behaviour. The agent can only notice patterns and connections if it remembers everything. Three-tier memory (session, working, long-term) means commitments mentioned in March are still retrievable in September. Vector search means semantic connections are found even when the exact words differ.
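A toy sketch of what a tiered store might look like. The tier sizes, the promotion rules, and the keyword-based `recall` (standing in for vector search) are illustrative only:

```python
from collections import deque

class TieredMemory:
    # Toy three-tier store: session (current conversation), working
    # (recent items), long-term (everything, searchable). Capacities
    # and rules here are invented for illustration.
    def __init__(self, session_cap=20, working_cap=200):
        self.session = deque(maxlen=session_cap)    # evicts oldest
        self.working = deque(maxlen=working_cap)
        self.long_term = []                         # never evicts

    def remember(self, text: str):
        self.session.append(text)
        self.working.append(text)
        self.long_term.append(text)

    def recall(self, keyword: str):
        # Stand-in for semantic search over a vector index: a real
        # implementation matches meaning, not substrings.
        return [m for m in self.long_term if keyword.lower() in m.lower()]

mem = TieredMemory()
mem.remember("Committed to deliver Smith project by March 20")
print(mem.recall("smith"))
```

The point of the long-term tier is exactly the property the text names: a commitment stored in March is still retrievable in September, regardless of what has scrolled out of the session and working tiers.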
- Scheduler — Proactive check-ins aren't ad hoc — they're scheduled background tasks. The agent periodically reviews memory for time-sensitive items: approaching deadlines, stale follow-ups, recurring patterns. The scheduler also handles delivery timing — proactive messages arrive through your preferred channel at appropriate times, not at 3am.
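The sweep itself can be a plain function over remembered items. This sketch assumes two memory lists (commitments and sent follow-ups) and hard-coded thresholds; a real scheduler would run it on a cron-like cadence and also decide when and where to deliver each alert:

```python
from datetime import date, timedelta

def sweep(commitments, follow_ups, today):
    # One background pass over memory: approaching deadlines and
    # stale follow-ups. Thresholds are illustrative assumptions.
    alerts = []
    for desc, due in commitments:
        if timedelta(0) <= due - today <= timedelta(days=14):
            alerts.append(f"Deadline in {(due - today).days}d: {desc}")
    for who, sent in follow_ups:
        if (today - sent).days >= 10:
            alerts.append(f"No reply from {who} in {(today - sent).days}d")
    return alerts

alerts = sweep(
    commitments=[("Smith project", date(2025, 3, 20))],
    follow_ups=[("David Park", date(2025, 3, 3))],
    today=date(2025, 3, 17),
)
print(alerts)
```

Because the sweep reads from memory rather than from a task list, it catches the commitments and follow-ups nobody turned into tasks.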
- Search — When the agent identifies a relevant connection that requires current information (competitor pricing changes, news about a contact's company, updated research on a topic), it can proactively search and include fresh context in its message. Multi-provider search with caching means these background checks are fast and cost-effective.
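A TTL cache around any search provider is one way to keep background checks cheap. This sketch wraps an arbitrary `search_fn`; the one-hour TTL and the fake provider are assumptions for illustration:

```python
import time

def make_cached_search(search_fn, ttl_seconds=3600):
    # Wrap a search provider with a TTL cache so repeated background
    # checks don't re-query. search_fn is any callable: query -> results.
    cache = {}
    def cached(query):
        now = time.monotonic()
        hit = cache.get(query)
        if hit and now - hit[0] < ttl_seconds:
            return hit[1]          # fresh enough, serve from cache
        results = search_fn(query)
        cache[query] = (now, results)
        return results
    return cached

calls = []
def fake_provider(q):
    calls.append(q)                # count real provider hits
    return [f"result for {q}"]

search = make_cached_search(fake_provider)
search("Lattice Labs pricing")
search("Lattice Labs pricing")    # second call served from cache
print(len(calls))                 # provider was hit only once
```

Multi-provider fallback would sit one layer below this: the cache doesn't care which provider answered, only that the answer is recent.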
- Channels — Proactive messages go where you are. The agent knows your channel preferences — Telegram for personal, Slack for work, desktop for focused sessions. A deadline warning about a client deliverable goes to Slack during work hours and Telegram on weekends. Channel routing is learned from your behaviour, not configured manually.
- Channel preferences — The agent learns when and where you prefer to receive different types of information. Urgent deadline warnings can interrupt anywhere. Pattern observations wait for a quiet moment. Follow-up suggestions arrive during your typical admin hours. These preferences are refined over time based on how you respond — or don't respond — to proactive messages.
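Routing rules like these are learned rather than written by hand, but a snapshot of learned preferences could look like a small decision table. Everything here (channel names, hour boundaries, the `hold` outcome) is hypothetical:

```python
from datetime import datetime

def route(msg_type: str, when: datetime) -> str:
    # Illustrative snapshot of learned routing preferences, per the
    # behaviours described in the text. Not real configuration.
    is_work_hours = when.weekday() < 5 and 9 <= when.hour < 18
    if msg_type == "urgent_deadline":
        # urgent warnings can interrupt anywhere: Slack during work
        # hours, Telegram otherwise
        return "slack" if is_work_hours else "telegram"
    if msg_type == "pattern_observation":
        # wait for a quiet moment: evenings on Telegram
        return "telegram" if when.hour >= 19 else "hold"
    if msg_type == "follow_up_suggestion":
        # typical admin hours, assumed here to be weekday mornings
        return "slack" if is_work_hours and when.hour < 11 else "hold"
    return "telegram"

print(route("urgent_deadline", datetime(2025, 3, 14, 10, 0)))  # Friday 10am
print(route("urgent_deadline", datetime(2025, 3, 15, 10, 0)))  # Saturday
```

The `hold` outcome matters as much as the channel choice: a proactive agent that interrupts at the wrong moment trains you to ignore it.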
What Breaks Without This
ChatGPT is purely reactive. It responds when prompted and does nothing in between. It cannot notice patterns across sessions because it barely remembers individual sessions. It cannot warn you about a deadline because it doesn't know your deadlines. It cannot connect Tuesday's research to Thursday's customer question because it doesn't know about either one unless you paste them into a prompt.
Google's Project Mariner / Gemini agents operate within Google's ecosystem — Gmail, Calendar, Drive. They can surface connections between emails and calendar events, but only within that walled garden. They can't access your Slack conversations, your code repositories, your Telegram messages, or your local files. Cross-context connections require cross-context access.
ChatGPT Pulse (part of the $200/month Pro plan) offers "proactive" insights based on your conversation history. In practice, these are periodic summaries of topics you've discussed — not actionable interventions connected to real-world timelines. It has no access to your tools, no scheduling capability, and no ability to act on the insights it surfaces. It tells you things you already know, when you don't need to know them.
Siri Proactive Intelligence makes suggestions based on device usage patterns — suggesting apps, surfacing contacts, predicting destinations. But it has no reasoning capability. It can notice you drive to the gym on Tuesdays but can't tell you that your knee has been bothering you and maybe you should do upper body instead. Pattern matching without understanding is just autocomplete for your life.
Previous-generation agents (including OpenClaw's heartbeat system) attempted proactive behaviour but were limited by memory degradation. An agent that forgets your commitments within weeks can't warn you about them months later. Proactive AI requires perfect, persistent, local memory — which is why this use case depends on the memory architecture described in the "It Remembers Everything" page.
Build This
This isn't just a concept — it's buildable today.
Salmex I/O's persistent memory stores every commitment and context. The scheduler runs background sweeps to surface forgotten follow-ups, approaching deadlines, and cross-conversation connections — delivered to the right channel at the right time.