What a Personal AI OS Should Know About You - And What It Shouldn’t

Context is what makes a personal AI useful. The more it knows about your patterns, commitments, and working style, the better it can help you.

But context without limits is surveillance. Surveillance - even self-directed - produces a system that knows too much about you in ways that change how you behave around it.

The question is not “how much context should a Personal AI OS have?” It’s “what kind of context serves you, and what kind creates a system you’d rather not live with?”


What a Personal AI OS should know

Your schedule. The pattern of your week - when you do focused work, when you take calls, when you’re typically unavailable, how often plans change. This lets the AI reason about your time in ways that a simple calendar view can’t.

Your communication patterns. The rhythm of your messages - how quickly you typically respond, which conversations you prioritise, which you defer. Not the content of every message, but the structure of your communication life.

Your active work context. What you’re currently working on, what’s blocked, what’s coming due. The AI should know enough about your work to help you prepare, prioritise, and not miss things.

Your preferences and style. How you write. What you consider important. How you prefer information presented. These don’t need to be explicitly programmed - they emerge from observing how you interact with the system over time.

Your recent activity. What you’ve been doing in the last few hours and days. Not a permanent record, but enough recent context to understand where you are in your work and what’s front of mind.
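
To make these categories concrete, here is one way they might be represented - a minimal sketch in TypeScript, where every name and field is an illustrative assumption, not Off Grid's actual schema:

```typescript
// Illustrative sketch only - these names and shapes are assumptions,
// not Off Grid's actual data model.

interface WeeklyPattern {
  focusBlocks: { day: string; start: string; end: string }[]; // e.g. "Mon", "09:00"
  typicallyUnavailable: { day: string; start: string; end: string }[];
  rescheduleRate: number; // fraction of events that move, 0..1
}

interface CommunicationRhythm {
  medianResponseMinutes: number;
  prioritisedContacts: string[]; // contact IDs - structure, not message content
  deferredThreads: string[];
}

interface ActiveWork {
  current: string[]; // short task descriptions
  blocked: string[];
  dueSoon: { task: string; due: string }[];
}

interface PersonalContext {
  schedule: WeeklyPattern;
  communication: CommunicationRhythm;
  work: ActiveWork;
  stylePreferences: Record<string, string>; // learned from interaction, not programmed
  recentActivity: { at: string; summary: string }[]; // rolling window, not a permanent log
}
```

Note what is absent from this sketch as much as what is present: patterns and summaries, never raw message bodies or an unbounded history.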


What a Personal AI OS should not know

Historical records you don’t need it to have. The value of persistent context comes from understanding patterns, not from storing everything indefinitely. A Personal AI OS that retains five years of messages and location history is building a liability, not a feature. Context should have a horizon - enough to be useful, not so much that it becomes a permanent record of your life.
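
One way to make "context should have a horizon" concrete is a rolling window that deletes anything older than a fixed age. A minimal sketch, assuming an in-memory store and a configurable horizon (the 14-day default is an assumption for illustration, not a recommendation):

```typescript
// Sketch of a context store with a retention horizon.

interface ContextEntry {
  at: number;      // Unix timestamp in milliseconds
  summary: string; // derived summary, not raw content
}

class HorizonStore {
  private entries: ContextEntry[] = [];

  constructor(private horizonMs: number = 14 * 24 * 60 * 60 * 1000) {}

  add(summary: string): void {
    this.entries.push({ at: Date.now(), summary });
  }

  // Anything past the horizon is deleted, not archived:
  // the store never accumulates a permanent record of your life.
  recent(): ContextEntry[] {
    const cutoff = Date.now() - this.horizonMs;
    this.entries = this.entries.filter((e) => e.at >= cutoff);
    return this.entries;
  }
}
```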

Sensitive personal domains you haven’t explicitly opened. Your financial accounts, your medical records, your private relationships - these require explicit, intentional access grants. The AI should not assume that access to your calendar means access to everything connected to it.

Inferences you haven’t verified. A Personal AI OS can notice patterns - “you seem to do your best work in the mornings” - but it should surface those observations for your confirmation rather than silently acting on them. Inferences about your mental state, your relationships, or your intentions are especially dangerous to act on without verification.
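
In code terms, this might look like inferences that carry a confirmation state, where the system can surface an observation but never act on it until the user confirms. A sketch, with all names hypothetical:

```typescript
// Sketch: inferences are inert until the user confirms them.

type InferenceStatus = "surfaced" | "confirmed" | "rejected";

interface Inference {
  observation: string; // e.g. "You seem to do your best work in the mornings"
  status: InferenceStatus;
}

function canActOn(inference: Inference): boolean {
  // The AI may *show* a surfaced inference, but only a
  // user-confirmed one may influence its behaviour.
  return inference.status === "confirmed";
}
```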

Enough to manipulate you. The line between a helpful personal AI and a manipulative one is whether it’s optimising for your outcomes or for its engagement with you. A system that knows your emotional patterns well enough to time notifications for moments of vulnerability is not an assistant - it’s an adversary. The Personal AI OS should have this line built in from the start.


The consent principle

The right framework for a Personal AI OS is explicit consent for each category of access, with the ability to revoke at any time.

Calendar access is not messages access. Messages access is not health data access. Each extension of context should be a deliberate choice - not an opt-out buried in settings, but an opt-in made with a clear understanding of what the AI gains and what you gain from the exchange.
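
Here is a sketch of what explicit, revocable, per-category grants could look like. The scope names are illustrative; the point is that each scope is narrow and independent:

```typescript
// Sketch of a per-category consent ledger. Scopes are deliberately
// independent: calendar does not imply messages.

type Scope = "calendar" | "messages" | "health" | "finance";

class ConsentLedger {
  private granted = new Set<Scope>();

  grant(scope: Scope): void {
    this.granted.add(scope); // always an explicit opt-in
  }

  revoke(scope: Scope): void {
    this.granted.delete(scope); // revocable at any time
  }

  has(scope: Scope): boolean {
    return this.granted.has(scope);
  }
}

// Usage: every read is checked against the ledger, per scope.
const consent = new ConsentLedger();
consent.grant("calendar");
consent.has("messages"); // false - calendar access is not messages access
```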

This is a design principle as much as a privacy one. A system you don’t fully trust is one you won’t give full context to. A system you’ve explicitly consented to is one you can actually use without reservation.


The audit principle

A Personal AI OS should show its work.

Not in a technical sense - not raw logs of every inference step. But in a legible sense: if the AI says “I prioritised this message because you typically respond to this contact quickly,” that reasoning should be accessible and correctable.

Opacity breeds distrust. A system that makes recommendations without explanation creates anxiety about what it might know or infer that it isn’t saying. Transparency about what context the AI has used, and the ability to correct its model of you, is part of what makes it a tool rather than something that’s just happening to you.
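
One shape this could take: every recommendation carries, in plain language, the context it actually used, and the user can overwrite any assumption the model holds. A hedged sketch, with hypothetical names:

```typescript
// Sketch: recommendations carry their reasoning and are correctable.

interface Recommendation {
  action: string;    // e.g. "Prioritised message from Sam"
  because: string[]; // the context actually used, in plain language
}

interface UserModel {
  assumptions: Map<string, string>; // the AI's current model of you
}

function explain(rec: Recommendation): string {
  return `${rec.action} - because: ${rec.because.join("; ")}`;
}

// Correction path: overwrite an assumption, and all future
// reasoning uses the corrected value instead.
function correct(model: UserModel, key: string, value: string): void {
  model.assumptions.set(key, value);
}
```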


The minimalism principle

A Personal AI OS should know as much as it needs to help you and no more.

This is good design, not just privacy hygiene. A system with too much context becomes slow, noisy, and prone to surfacing irrelevant information. A system tuned to the right level of context is fast, accurate, and feels like it actually understands you.

The goal is a system that knows the right things - your schedule, your priorities, your working style - well enough to reduce friction and surface what matters, without becoming a burden of its own.


Off Grid processes context locally. You control what it can access. Download for iPhone or Android.

Run the Personal AI OS before anyone else. Join the waitlist - early access members get 6 months free.