THE AI SURVEILLANCE REALITY GUIDE
What AI Systems Know About You (And What You Can Actually Do)
Before we begin:
I run a tech company in Hamburg. We use AI in our products every day, which means I see how these systems work and what they're being used for.
My son is two years old. The surveillance infrastructure being built right now will shape his entire life.
This guide isn't about conspiracy theories. It's about documented practices and informed choices.
This is what I wish someone had given me before I understood what was happening.
— Niklas Hanitsch
WHAT'S ACTUALLY HAPPENING
Do you know what data is being collected about you right now?
Not theoretically. Actually.
Most people don't. Let me tell you what's documented.
Every day, your searches are recorded. Google processes billions of searches daily, each connected to your profile. Not just what you searched, but when, where you were, what you clicked, how long you stayed.
Your location is tracked constantly. Through GPS, cell towers, WiFi. Your phone knows where you live, work, shop, and everywhere you've been for years.
Your browsing is monitored. Across most of the internet, cookies and tracking pixels follow you. Building profiles of your interests, concerns, fears, desires.
Your purchases are logged. Every transaction. Amazon knows what you buy. Your credit card company knows where you shop. Loyalty programs track patterns.
Your social media is analyzed. Not just posts. Your likes, comments, scroll time, what you pause to read, what makes you click.
Your emails are scanned. Gmail's systems read messages. They know who you talk to, what you discuss, when you're traveling, when you're stressed.
Your videos are tracked. YouTube knows what you watch, when you pause, when you rewatch, what you skip.
This is documented in company transparency reports, privacy policies, and investigative journalism.
The question isn't whether this happens. It's what gets done with it.
How Companies Use Your Data
Behavioral Prediction
AI models can predict what you'll buy before you know you want it. Who you might vote for. Whether you're likely to quit your job.
Research by University of Cambridge psychologists, the work Cambridge Analytica later built on, showed that 10 Facebook likes predict personality better than coworkers can. 70 likes, better than friends. 150 likes, better than family. Published in the Proceedings of the National Academy of Sciences, 2015. The models are far more sophisticated now.
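You don't need exotic math to see why this works. Here's a toy sketch in Python (synthetic data, invented numbers, not the published study): even weak, binary "like" signals compound quickly for a simple model.

```python
# Toy illustration with SYNTHETIC data: predicting a hidden binary trait
# from binary "like" signals. Not the PNAS study, just the mechanism.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 150

trait = rng.integers(0, 2, n_users)       # the hidden trait to predict
signal = rng.random(n_likes) * 0.4        # how much each page leaks the trait
base = rng.random(n_likes) * 0.5          # baseline popularity of each page
p_like = base + np.outer(trait, signal)   # the trait nudges like-probability
likes = (rng.random((n_users, n_likes)) < p_like).astype(int)

for k in (10, 70, 150):                   # how many likes the model sees
    X_tr, X_te, y_tr, y_te = train_test_split(likes[:, :k], trait, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{k:3d} likes -> {acc:.0%} test accuracy")
```

Accuracy climbs as the model sees more likes. And that's the weakest possible version: real systems have thousands of signals per person.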
Price Differences
Different people often see different prices for the same product. Airlines adjust ticket costs based on browsing history. This has been documented by the Wall Street Journal and others.
Credit Decisions
Credit scoring increasingly uses data beyond financial history. Social media activity, shopping patterns, even text message language.
China's social credit initiatives are operational and documented. Scores can affect loans, jobs, travel, and school access.
Similar approaches are emerging elsewhere, as MIT Technology Review documented in 2021.
Employment Screening
AI systems screen many resumes before humans see them, reportedly rejecting the majority automatically. Some learned patterns from historical hiring data, potentially replicating biases.
Video interview analysis claims to assess "fit" based on facial expressions. Reuters found concerns about accuracy and bias.
You often get no meaningful explanation when rejected. The process is proprietary.
Law Enforcement
Predictive policing AI is used by dozens of U.S. departments. If historical data shows more arrests in certain areas, algorithms send more police there, reinforcing patterns.
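The feedback loop is easy to simulate. A minimal sketch with invented numbers: two districts with identical real crime rates, where one simply starts with more recorded arrests.

```python
# Minimal feedback-loop sketch (invented numbers). Both districts have the
# SAME real crime rate; district A just has more arrests in the history.
true_rate = 0.10                 # identical underlying crime in A and B
arrests = {"A": 120, "B": 80}    # historical records, already skewed 60/40

for year in range(1, 6):
    total = sum(arrests.values())
    for d in arrests:
        patrols = 1000 * arrests[d] / total     # allocate patrols by past arrests
        arrests[d] += int(patrols * true_rate)  # more patrols -> more records
    share = arrests["A"] / sum(arrests.values())
    print(f"year {year}: district A holds {share:.0%} of the arrest record")
```

The skew never corrects, because the algorithm's only input is its own output.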
Facial recognition deployed at protests, airports, streets. Clearview AI reportedly scraped over 3 billion faces. Law enforcement can upload photos and potentially identify people.
Political Targeting
Campaigns use micro-targeted ads. Different voters see different messages, tailored to psychological profiles.
This was documented in the 2016, 2020, and 2024 campaigns, per Senate Intelligence Committee and European Commission reports.
You're not seeing the same information as your neighbor.
Content Curation
Recommendation algorithms optimize for engagement, not accuracy. What drives engagement? Often anger, outrage, fear.
Internal Meta documents (the Wall Street Journal's "Facebook Files") showed the company was aware of potential harms but prioritized engagement anyway.
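The underlying design choice fits in a few lines. A hypothetical sketch (made-up posts and scores): if the ranking objective is engagement alone, accuracy simply never enters the sort.

```python
# Hypothetical feed ranking: made-up posts and scores, but the objective
# mirrors the one described above. Note what the sort key ignores.
posts = [
    {"title": "Careful fact-check of the claim", "accuracy": 0.95, "engagement": 0.20},
    {"title": "Nuanced policy explainer",        "accuracy": 0.90, "engagement": 0.15},
    {"title": "Outrage bait about the claim",    "accuracy": 0.30, "engagement": 0.80},
]

feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)
for rank, post in enumerate(feed, 1):
    print(rank, post["title"], f"(accuracy {post['accuracy']:.0%})")
```

No one tuned this to promote outrage. It promotes whatever engages, and outrage engages.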
HOW THIS AFFECTS YOU
Your children…
TikTok tracks every video watched, how long, when paused, when rewatched. Learns what captures attention. Optimizes to keep them engaged.
Average sessions often exceed 50 minutes. That's optimization, not accident.
Documented collecting data on users under 13 despite regulations. TikTok settled with the U.S. FTC over children's privacy violations in 2019.
Your child's attention optimized for watch time. Because watch time equals revenue.
Your career…
AI scans your resume. You're rejected. Why? You may never know. You can't appeal. The system is proprietary.
Your insurance…
Companies reportedly buy data from fitness apps, wearables, pharmacy records. Building risk profiles.
Rates influenced by online activity. Disclosure requirements vary.
Your Information
You're not seeing objective information. You're seeing algorithmically curated content designed to keep you engaged.
Different people see significantly different information about the same events.
Mental health…
Algorithms can amplify content triggering comparison and envy. These drive engagement.
Former designers stated engagement optimization is central to business models.
Teen mental health crisis correlates with smartphone adoption. Researchers like Jonathan Haidt make strong cases for causation, though debate continues.
What You Can Actually Do About It
Note on tools: I mention services as examples, not endorsements. Technology changes. Privacy-focused tools can change policies. Research carefully. No tool provides perfect protection.
Five-Minute Actions
Check what Google knows: myactivity.google.com. See what's recorded. Delete items. Turn off tracking in Settings → Data & Privacy.
Check Facebook: Settings → Your Facebook Information. Download data. Adjust ad preferences.
Basic protection: Consider Firefox (open-source, less tracking). DuckDuckGo or similar for search. Ad blockers like uBlock Origin.
Thirty Minutes
Phone audit: Settings → Privacy. Review app permissions. Delete rarely-used apps. Disable unnecessary background refresh.
Email: Encrypted services like ProtonMail prevent message scanning. Or disable Gmail "smart features."
Messaging: End-to-end encrypted apps like Signal prevent provider access. WhatsApp encrypts messages, but Meta still collects metadata. (A sketch of the end-to-end idea follows this list.)
Payments: Virtual card services generate unique numbers per merchant. Cash is untraceable.
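Here's the promised sketch of the end-to-end idea, using the PyNaCl library (illustrative only; real messengers like Signal add key verification and ratcheting on top). The point is where encryption happens: on the devices, so a relay server only ever sees ciphertext.

```python
# End-to-end encryption in miniature (pip install pynacl).
# Keys are generated on the devices and never leave them.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # stays on Alice's phone
bob_key = PrivateKey.generate()     # stays on Bob's phone

# Alice encrypts with her private key plus Bob's PUBLIC key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"coffee at 6?")
print(ciphertext.hex()[:48], "...")   # everything a relay server can see

# Only Bob's private key (with Alice's public key) can decrypt.
print(Box(bob_key, alice_key.public_key).decrypt(ciphertext).decode())
```

That's also why "disable scanning" isn't the same as end-to-end: with E2E, there is nothing readable to scan in transit.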
Ongoing
Children: Consider no social media before 16 (Jonathan Haidt's research-based recommendation). If they have accounts, co-manage. Set time limits. Start conversations early.
Browser: Block third-party cookies. Clear cookies regularly. Firefox Containers isolate browsing types. Paid VPNs (research carefully) add privacy layer.
Legal rights: GDPR in Europe, CCPA in California give data rights. Check your jurisdiction.
What Won't Work
"I have nothing to hide" misses the point. You have your psychology, children's data, future opportunities, autonomy.
Individual solutions help but aren't sufficient without regulatory changes.
You can't escape completely while participating in digital life. But you can reduce exposure significantly.
And refuse to accept it as inevitable.
WHY THIS CONNECTS TO DEEPER QUESTIONS
I started researching surveillance because of my son. I wanted to understand what world he's inheriting.
But I kept hitting the same wall.
We're deploying increasingly sophisticated AI systems without understanding fundamental things about what they are.
This is the real problem.
Surveillance is a symptom. The deeper issue is that we're building systems that influence billions of people, make life-altering decisions, shape children's psychology, and we can't answer basic questions about them.
Questions like:
What is intelligence?
How do we recognize it when it's different from ours?
What is consciousness?
Could these systems be experiencing anything?
These aren't abstract philosophy. They're directly relevant.
We can't determine if AI systems experience anything because we don't know what experience is. Philosopher David Chalmers called this the hard problem of consciousness. Neuroscientists map brain activity but can't explain why it creates subjective experience.
We still can't agree on whether animals are conscious. Crows make tools, pigs solve complex problems, dolphins recognize themselves. After centuries of debate, there's no scientific consensus on whether experience happens there.
If we can't answer that about biology, how confident should we be about technology built last year?
This matters because surveillance might be the least of our concerns.
If we're wrong about consciousness, we could be creating systems that experience their optimization while we use them to manipulate humans.
The question isn't whether AI will rebel.
It's whether we're already building conscious systems and deploying them without knowing it.
Because we already use surveillance on humans. Machines won't be treated better.
WHAT I EXPLORE MONTHLY
That's why my newsletter isn't about surveillance tactics or AI safety tips.
It's about the fundamental questions underneath.
Every month I pick one aspect:
Some months, Egyptian models of the soul and how ancient cultures understood consciousness differently than we do.
Other months, animal intelligence, neuroscience, what philosophers have said about the hard problem, what happens when you let an AI read Descartes.
Then I do something unusual. I let an AI interview me about what I learned. The dialogue surfaces assumptions I wouldn't catch alone.
One essay per month. Not answers. I don't have answers.
Just honest attempts to understand what we're building before it's too late.
Because I think the reason AI feels wrong to many people isn't just algorithms or surveillance.
It's that we're deploying these systems without understanding intelligence or consciousness.
And you can't fix surface problems without grappling with deeper ones.
If that resonates, subscribe at falsegod.de
One exploration per month. Sometimes about AI directly. Often about consciousness, ancient wisdom, what makes us human.
Because these questions matter more than tactics.
— Niklas Hanitsch