The autopilot problem
Open any AI productivity tool in 2026 and you'll see the same pitch: "Let AI do it for you." Auto-generate your emails. Auto-summarize your meetings. Auto-write your documents. Auto-organize your tasks. The promise is seductive — less work, more output, everything handled.
But something uncomfortable happens when you actually use these tools for a few months. You start to notice that you can't quite remember what was in that email the AI wrote for you. The auto-generated summary missed the nuance that mattered most. The document the AI drafted is competent but generic — it sounds like everyone and no one. Your tasks are organized, but you've lost the thread of why you're doing them.
This is the autopilot problem. When AI acts autonomously — generating, deciding, organizing without your active involvement — it doesn't just save you time. It removes you from your own cognitive process. And that process — the thinking, the deciding, the struggling with ideas — is where understanding actually forms.
Automation vs. augmentation
There's a crucial distinction that most AI tools blur: the difference between automation and augmentation.
Automation replaces human effort. It takes a task you would have done and does it for you. This is valuable for repetitive, well-defined tasks — formatting data, scheduling meetings, filing receipts. No one needs to deeply engage with these activities. Automate away.
Augmentation is different. It amplifies human capability without replacing human judgment. A calculator augments a mathematician — it handles the arithmetic so the mathematician can focus on the proof. A spell-checker augments a writer — it catches typos so the writer can focus on the argument. The human remains the thinker. The tool makes the thinking more effective.
The problem with most AI productivity tools is that they apply automation to tasks that should be augmented. Writing isn't a repetitive task to be automated — it's a thinking process. Summarizing isn't about compression — it's about judgment. Organizing isn't about sorting — it's about understanding what matters. When AI automates these cognitive tasks, it doesn't save you thinking. It takes your thinking away.
How Sona works differently
Sona is Depli's AI assistant, and it's built on a simple principle: every AI action starts with a human intention. Sona never acts on its own. It doesn't auto-generate, auto-organize, or auto-anything. It waits for you to ask, and then it helps.
This isn't a limitation — it's a design philosophy. Here's what it looks like in practice:
Summarize on demand. You've collected a dozen research sources in your workspace. You select them and ask Sona to summarize. It produces a synthesis — not a generic abstract, but a summary that understands the context of your project because it can see what else is in your workspace. You read it, adjust it, and now you have a starting point for your own analysis. The AI handles the compression; you do the thinking.
Ask questions, don't give answers. One of Sona's most distinctive features is its ability to ask you questions about your own work. Select a draft and ask Sona to challenge it. It might respond: "Your argument assumes X, but your source in the top-left corner suggests the opposite. How do you reconcile these?" This is the opposite of auto-generation. It's AI that makes you think harder, not less.
Generate flashcards for review. Studying dense material? Ask Sona to generate flashcards from your notes. It creates question-answer pairs based on your content — not generic study material, but cards tailored to what you're actually learning. You review them, edit them, and the act of engaging with them deepens your understanding.
Suggest connections. Sona can analyze the modules in your workspace and suggest relationships you might have missed. "This note from March mentions the same concept as this source you added yesterday — did you notice?" It's not organizing for you. It's pointing out patterns and letting you decide what to do with them.
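Sona's internals aren't public, so the following is only an illustrative sketch of the intention-first pattern the features above share. The names here (`Intention`, `IntentionFirstAssistant`, the `summarize` handler) are invented for illustration, not Depli's actual API. The key structural point is that there is no background loop: nothing runs until an explicit request arrives, and an unsupported request gets an honest refusal rather than an improvised answer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Intention:
    """An explicit user request: a verb plus the items the user selected."""
    action: str           # e.g. "summarize", "challenge", "flashcards"
    selection: list[str]  # the content the user chose to act on

class IntentionFirstAssistant:
    """Acts only in response to an explicit Intention.

    Deliberately absent: observers, timers, auto-triggers. If no
    Intention is handed in, nothing happens.
    """
    def __init__(self) -> None:
        self._handlers: dict[str, Callable[[list[str]], str]] = {}

    def register(self, action: str, handler: Callable[[list[str]], str]) -> None:
        self._handlers[action] = handler

    def handle(self, intent: Intention) -> str:
        if intent.action not in self._handlers:
            # No confidence theater: admit the gap instead of improvising.
            return f"I can't do '{intent.action}' reliably."
        return self._handlers[intent.action](intent.selection)

assistant = IntentionFirstAssistant()
assistant.register("summarize", lambda sel: f"Summary of {len(sel)} sources")

print(assistant.handle(Intention("summarize", ["note-a", "note-b"])))
# -> Summary of 2 sources
print(assistant.handle(Intention("autowrite", ["draft"])))
# -> I can't do 'autowrite' reliably.
```

The design choice worth noticing is that the "don'ts" fall out of the structure: because every code path begins with an `Intention`, autonomous behavior isn't something the assistant refrains from — it's something it has no entry point for.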
The intention-first principle
We call this approach "intention-first AI." The word "intention" matters. In most AI tools, the AI infers what you want and acts on that inference. Sona doesn't infer — it waits for you to express an intention, and then it supports that specific intention.
This means Sona is less impressive in a demo. You won't see it magically reorganize your workspace or auto-write your thesis. But it's more valuable in practice, because it preserves the thing that matters most in knowledge work: your agency.
The best AI doesn't replace your thinking. It creates the conditions for better thinking to happen.
Agency isn't just a philosophical nicety — it has practical consequences. When you remain the active thinker, you understand your own work. You can explain it, defend it, build on it. You notice errors because you were involved in the process. You develop expertise because you did the cognitive work, with AI as a scaffold rather than a substitute.
The agency principle: removing yourself from cognitive work doesn't just save time; it erodes expertise. Sona is designed to keep you in the loop so understanding compounds over time.
What Sona doesn't do
Constraints are features. Here's what Sona deliberately avoids:
- No auto-generation. Sona won't write your documents, fill your pages, or create content without being asked. Your workspace stays yours.
- No silent actions. Sona never modifies your work in the background. Every action is visible, reversible, and initiated by you.
- No surveillance. Sona doesn't continuously analyze your behavior to "learn your patterns." It responds to requests, not observations. Your data stays local, and Sona doesn't build a profile of you.
- No confidence theater. When Sona doesn't know something, it says so. It won't fabricate confident-sounding answers to questions it can't reliably address. Transparency over performance.
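The "no silent actions" constraint — every action visible, reversible, and initiated by you — maps onto a well-known structure: a command log with undo. The sketch below is hypothetical (the `Workspace`, `AddNote`, and `ActionLog` names are invented, not Sona's real types); it shows how routing every change through one logged, reversible operation makes both visibility and reversibility structural guarantees rather than promises.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """A toy workspace: just a dict of note id -> text."""
    notes: dict[str, str] = field(default_factory=dict)

@dataclass
class AddNote:
    """One visible, reversible edit. Knows how to apply and revert itself."""
    note_id: str
    text: str

    def apply(self, ws: Workspace) -> None:
        ws.notes[self.note_id] = self.text

    def revert(self, ws: Workspace) -> None:
        del ws.notes[self.note_id]

class ActionLog:
    """Every change goes through here, so every change can be undone
    and every change leaves a visible record."""
    def __init__(self, ws: Workspace) -> None:
        self.ws = ws
        self.history: list[AddNote] = []

    def do(self, action: AddNote) -> None:
        action.apply(self.ws)
        self.history.append(action)  # the user-visible audit trail

    def undo(self) -> None:
        self.history.pop().revert(self.ws)

ws = Workspace()
log = ActionLog(ws)
log.do(AddNote("n1", "AI-drafted summary"))
print(ws.notes)   # -> {'n1': 'AI-drafted summary'}
log.undo()
print(ws.notes)   # -> {}
```

Because there is no other way to mutate the workspace in this sketch, "Sona never modifies your work in the background" becomes a property you could verify by reading one class, not a policy you have to trust.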
The deeper bet
We believe the current trajectory of AI productivity tools is headed somewhere problematic. Not because the technology is bad — it's genuinely remarkable — but because the design philosophy is wrong. "Let AI do everything" sounds like progress, but it's actually a regression. It takes the most valuable human capability — the ability to think, decide, and create — and outsources it to a machine that doesn't understand what it's doing.
Depli's bet is different. We believe the future of AI in knowledge work is augmentation, not automation. Tools that make you a better thinker, not tools that think for you. AI that earns your trust through transparency, not through impressive magic tricks.
Sona is our first attempt at building this kind of AI. It's deliberately restrained, intentionally transparent, and designed to keep you at the center of your own cognitive process. Because the point of a tool has never been to replace the person using it. The point is to make them more capable than they could be alone.
See also: Shneiderman, B. "Human-Centered AI" (Oxford University Press, 2022) on why keeping humans in the loop leads to better outcomes.