AI Surveillance: The Observer Watching the Observer
In the first newsletter I wrote, I described something that still fascinates me: the observer self and the objective self.
There's the part of you that experiences things directly. The objective self. The one that feels pain, hunger, fear. The self that simply is awareness.
And then there's the part watching what happens to that experiencing self. The observer self. The narrator inside your head, analyzing, making sense of experience, recognizing "I am the one feeling this."
Right now, as you read these words, both are active. One is reading. One is watching you read.
This dual structure is strange enough on its own. But something even stranger is happening: we're building systems that observe both.
AI surveillance doesn't just watch what you do. It watches the patterns beneath what you do. It observes your observer self observing your objective self. It knows you're watching yourself, and it's watching that watching.
And I'm not sure what happens to consciousness when it becomes externally legible in that way.
π
When Being Observed Changes What You Are
The ancient Egyptians had a concept called the Ren. Your name, but also your story. Your identity as it exists in others' knowledge of you.
To have a name meant to exist. To have that name spoken meant to continue existing. Erasing someone's name wasn't symbolic punishment. It was understood as a form of destruction, because the self that exists in collective recognition would cease to be.
Ren wasn't private. It was the part of you that lived in other minds.
I used to think this was metaphorical. Now I'm less sure.
Because AI surveillance is creating a version of the Ren more detailed and permanent than anything the Egyptians imagined. Your digital self, the accumulated patterns of your behavior, your preferences, your predictable responses, exists in systems that know you better than most humans ever will.
When I search for something online, when I pause before clicking, when I scroll past certain things and linger on others, all of that gets recorded. Not just the actions, but the hesitations. The patterns of attention that reveal what I'm drawn to before I consciously decide.
That external representation of me is increasingly accurate. And increasingly, it's being used to predict what I'll do before I do it.
This creates a loop I don't know how to think about clearly. The observer self watches the objective self. AI watches both. And then uses what it observes to shape what the objective self experiences next.
At what point does the external observer become part of the internal process?
π
The Body Knows Before You Do
There's an exercise I've used before to demonstrate how the body overrides conscious will. Exhale fully. Hold your breath. Wait until it's uncomfortable. Keep holding.
Eventually, something inside you demands air. You don't decide to breathe. The decision gets made for you. Homeostasis, the body's fundamental intelligence, takes control when survival is at stake.
This used to feel like the boundary of private selfhood. The body's signals were mine alone. Nobody else could feel my oxygen dropping, my heart rate rising, my temperature shifting.
That boundary is disappearing.
Wearable devices now track heart rate variability, sleep patterns, glucose levels, stress markers. They know when my body is entering fight-or-flight before I consciously register threat. They know I'm getting sick before symptoms become obvious. They know I'm exhausted before I admit it to myself.
This isn't surveillance in the sense of someone watching me. It's surveillance of processes that used to be beneath conscious awareness. The systems monitoring me know things about my objective self that my observer self doesn't have access to.
And once those patterns are known, they can be used.
An advertisement appears exactly when my glucose drops and I'm most vulnerable to food cravings. A notification arrives when my attention is lowest and I'm most likely to compulsively check my phone. Content gets served based on emotional states I haven't consciously recognized yet.
The ancient Egyptians thought the Ib, the heart, was the seat of thought and moral weight. They believed it would be weighed after death to determine your fate.
Now the heart is being weighed continuously, algorithmically, and the results are being used to influence the very thoughts and choices that determine its weight.
I don't know what this does to the structure of consciousness. But I know it's different from anything humans have experienced before.
π
Where the Private Self Used to Live
When I was sent to boarding school after losing someone close to suicide, I felt a profound sense of unfairness. Not just pain, but the specific recognition: this is happening to me, and it shouldn't be.
That internal experience was private. Nobody else could access it. I could choose to share it or not. The interior world where my observer self made sense of my objective self's suffering belonged only to me.
Privacy of consciousness felt absolute. Even if someone monitored every action, every word, every visible behavior, they couldn't access the subjective experience itself.
That's changing in ways that make me deeply uncomfortable.
AI systems can now predict emotional states from facial micro-expressions, vocal tone, typing patterns, and purchase history. They can infer depression, anxiety, political leanings, sexual orientation, the very things people actively try to hide, with accuracy that can rival deliberate self-report.
The system doesn't need to read your mind. It just needs to recognize patterns that correlate with internal states. And those patterns are everywhere.
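To see how little "mind reading" is required, here's a minimal sketch in Python. Everything in it is synthetic and hypothetical: invented feature names, invented data, a plain logistic regression. The point is only structural. The model never observes the internal state directly; it only observes behavior that happens to correlate with it.

```python
# Toy sketch: inferring an internal state from behavior alone.
# All data is synthetic; the feature names are hypothetical stand-ins
# for the behavioral signals real systems correlate with self-reports.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Observable behavior only: typing speed, late-night activity, dwell time.
typing_speed = rng.normal(60, 10, n)      # words per minute
night_activity = rng.uniform(0, 1, n)     # share of activity after midnight
dwell_time = rng.normal(3.0, 1.0, n)      # seconds lingering per item

# Synthetic "internal state", correlated with the behavior above.
# In deployed systems this label would come from self-reports or records.
signal = (-0.05 * (typing_speed - 60)
          + 2.0 * night_activity
          + 0.5 * (dwell_time - 3.0))
state = (signal + rng.normal(0, 1, n) > 1.0).astype(int)

X = np.column_stack([typing_speed, night_activity, dwell_time])
X_train, X_test, y_train, y_test = train_test_split(X, state, random_state=0)

# The classifier sees behavior, never the state itself.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

On fabricated data like this the accuracy number means nothing. What matters is the shape of the pipeline, because it's the same shape whether the label is "clicked an ad" or "depressed."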
The gap between behavior and experience, the space where privacy lives, is narrowing.
If an algorithm can predict what I'm feeling before I fully recognize it myself, if it can anticipate my choices based on patterns I'm not consciously aware of, then where exactly is the private self?
Not in actions; those are observable. Not in patterns; those are inferable. Maybe in the moment-to-moment subjective quality of experience, the specific texture of what it's like to be me right now.
But even that might not be safe.
π
The Surveillance of Prediction
Here's what disturbs me most. Surveillance isn't just observation anymore. It's prediction used to shape future behavior.
Recommendation algorithms don't just track what I've watched. They predict what I'll watch next and serve that content, creating a feedback loop where my behavior becomes increasingly predictable because I'm being fed predictions of my own behavior.
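A toy simulation makes the loop visible. This is not any real recommender, just a minimal sketch under crude assumptions: the system serves whatever it predicts I'll click, exposure nudges my interests toward what was served, and the entropy of my behavior, a rough measure of how surprising I still am, falls step by step.

```python
# Toy feedback loop: a system that serves its own predictions makes
# the user's behavior measurably more predictable. Purely illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_categories = 10

interest = np.ones(n_categories) / n_categories  # user starts out eclectic
click_counts = np.ones(n_categories)             # recommender's estimate

def entropy_bits(p):
    """Shannon entropy of a distribution; lower = more predictable."""
    p = p / p.sum()
    return float(-np.sum(p * np.log2(p)))

for step in range(2001):
    served = int(np.argmax(click_counts))   # serve the predicted favorite
    interest[served] += 0.01                # exposure reshapes interest
    interest /= interest.sum()
    clicked = rng.choice(n_categories, p=interest)
    click_counts[clicked] += 1              # the click feeds the next prediction
    if step % 500 == 0:
        print(f"step {step:4d}: behavioral entropy = "
              f"{entropy_bits(interest):.2f} bits")
```

The numbers are fake, but the direction isn't: each prediction served reshapes the very distribution it was predicting, and "taste" converges on whatever the system guessed early on.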
The observer self, the part of me that reflects on my choices and decides who I want to be, starts to atrophy. Why exercise judgment when the algorithm has already curated options based on what I'm statistically likely to choose?
This is where my principle about human dignity becomes relevant.
I've argued that evil begins where human dignity is violated, where someone is harmed or controlled against their will. Most surveillance doesn't feel like a violation because it's framed as convenience, personalization, optimization.
But control doesn't require force. It just requires prediction accurate enough that choice becomes illusory.
If the system knows what I'll choose before I choose it, if it can shape the environment to make certain choices more likely, if it can intervene at moments when I'm most susceptible to influence, then am I actually choosing?
Or am I just following a script written by algorithms that know my patterns better than I know myself?
The Egyptians believed the Ka, the life force, required constant maintenance. It wasn't automatic. You could lose it through neglect or violation.
I wonder if agency works the same way. Not a fixed property you either have or don't, but something that requires active protection. Something that can be eroded so gradually you don't notice until it's mostly gone.
π
What Remains Illegible
I want to believe something in me remains illegible to surveillance. Some core of subjective experience that can't be mapped, predicted, or influenced by external observation.
But I'm not sure what that would be.
Thoughts can be inferred from behavior patterns. Emotions can be read from physiological signals. Preferences can be predicted from past choices. Even the things I think are spontaneous might be responses to stimuli the algorithms have learned to deploy.
Where exactly is the part of me that surveillance can't access?
Maybe it's in the gap between prediction and reality. The moments when I surprise myself, when I act against pattern, when the observer self overrides what the objective self was inclined toward.
Those moments feel like freedom. Like genuine agency. Like consciousness exercising itself rather than just running on automatic.
But those moments are also data points. And enough anomalies eventually become a pattern of anomaly. Even rebellion can be predicted if it's consistent enough.
This is what keeps me up at night about AI surveillance. Not that it violates privacy in the old sense, the right to keep information hidden. But that it might violate something deeper: the possibility of being unknown, even to yourself. The space where you can become something other than what you've been.
π
The Dignity Question
Human dignity, as I understand it, requires some form of interior sovereignty. Some aspect of selfhood that remains yours alone.
Not because you're hiding it. But because it genuinely can't be accessed, predicted, or controlled by external systems.
Surveillance that becomes prediction that becomes influence erodes that sovereignty.
And I'm not convinced the trade-offs are worth it. Yes, prediction enables convenience. Medical monitoring saves lives. Personalized recommendations surface content I might not find otherwise.
But all of that comes at the cost of reducing the space where I can be genuinely unknown. Where my observer self can watch my objective self without a third observer turning that watching into data.
The Egyptians built elaborate systems to protect the components of consciousness after death. They understood that selfhood is fragile, that it requires active defense against forces that would fragment or erase it.
We're building elaborate systems that fragment consciousness while it's still alive. That turn the observer self into the observed. That make interiority increasingly transparent.
And we're calling it progress.
π
Where This Leaves Me
I started thinking about surveillance as observation. I've ended up thinking about it as the dissolution of the boundary between internal and external experience.
The observer self watching the objective self was already strange. Adding a third observer, one with perfect memory and pattern recognition, one that feeds its observations back into the system it's observing, creates something I don't think consciousness evolved to handle.
What I believe:
Human dignity requires some form of interior illegibility. Some aspect of consciousness that remains inaccessible to external mapping and prediction.
AI surveillance, especially when it moves from observation to prediction to influence, threatens that illegibility in ways we haven't fully reckoned with.
What I don't know:
Whether that illegible core actually exists. Whether it's possible to protect it. Whether we even recognize its value before it's gone.
The Egyptians believed your Ren, your name and story, determined whether you continued to exist after death. If nobody spoke your name, you ceased to be.
Now our names are spoken constantly, by algorithms that know our stories better than we do. We exist more completely in external records than ever before in human history.
But I'm not sure that kind of existence preserves what matters most about consciousness.
Maybe the question isn't whether AI surveillance violates privacy.
Maybe the question is whether consciousness can survive total legibility.
I don't have an answer. But I think we should figure one out before we finish building systems that make the question irrelevant.
- N.H.
Further Reading:
Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. Pantheon Books.
Introduces the Panopticon as a model of internalized surveillance and self-regulation.
Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
Foundational analysis of predictive behavioral extraction and algorithmic control.
Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802–5805. https://www.pnas.org/doi/10.1073/pnas.1218772110
Empirical evidence that internal traits can be inferred from behavioral data alone.
Stanford Encyclopedia of Philosophy. Privacy. https://plato.stanford.edu/entries/privacy/
Philosophical treatment of privacy, surveillance, and moral agency in modern systems.
Yeung, K. (2017). "Hypernudge": Big Data as a Mode of Regulation by Design. Information, Communication & Society, 20(1), 118–136.
Shows how predictive systems shape behavior without coercion, directly supporting this essay's argument about agency erosion.