Is AI Good or Evil? (And Does That Question Even Make Sense?)

“AI doesn’t possess morality. It possesses leverage. The same system that expands care can also compress dignity, turning patients into throughput, workers into costs, humans into data points optimized away. So the question isn’t whether AI is good or evil. It’s whether we are willing to measure progress by something other than efficiency and profit. The moment dignity becomes negotiable, no technical breakthrough can redeem what follows.”

In 2017, researchers at Stanford showed that a neural network trained on photographs could identify skin cancer. The system matched the diagnostic accuracy of board-certified dermatologists.

This technology could save lives by making expert diagnosis accessible to billions of people who will never see a specialist. And yet when I read about this breakthrough, I found myself uncomfortable with how quickly everyone reached for the word "good."

Not because early cancer detection isn't valuable. But because calling it "good" sidesteps a question I think we need to ask first: What do we actually mean when we say something is good?

Because the more I look at how different cultures across history have used these words, the less sure I am that good and evil exist the way we assume they do.

🜏

The Problem with Moral Certainty

Different societies, across different times, have called radically opposite things "good" and "evil."

In some cultures, stoning someone for adultery is considered righteous justice. In others, that same act is barbaric murder. Both groups genuinely believe they're acting morally. Both would say they're on the side of "good."

So is good just whatever your culture decides it is? Is morality just local consensus dressed up as universal truth?

If that's the case, then asking "is AI good?" becomes meaningless. It's good if your culture says it is, evil if your culture says otherwise. There's no objective standard to measure against. Just preferences and power dynamics.

But that conclusion feels wrong to me.

Because when I watch someone being harmed against their will, something in me rejects the idea that this could be "good" or even neutral just because a culture sanctions it. The feeling isn't cultural. It's deeper than that.

Which is why I've landed on something like a minimal universal principle: evil begins where human dignity is violated, where someone is mistreated, harmed, or controlled against their will. That counts as evil to me, regardless of cultural context or historical moment.

Everything else? Most of it is neutral. Who you love, what you believe, how you live your private life: as long as you're not harming anyone else without their consent, those aren't good or evil. They're just ways of being human.

This matters for AI because it changes what we should actually be asking. Not "is AI good?" but "does AI respect human dignity, or does it create conditions where dignity gets eroded?"

And that question has much less comfortable answers.

🜏

The Trap of Binary Thinking

The problem with asking "is AI good?" is that it assumes technology has intrinsic moral properties. That AI is a thing that can be evaluated in isolation, judged as beneficial or harmful, and sorted into the appropriate ethical category.

But AI isn't a thing. It's a category of capabilities that can be applied toward countless different ends.

This is where I keep coming back to chemistry as an analogy. Chemistry gives us antibiotics and chemical weapons. Fertilizers that prevent famine and pollutants that poison ecosystems. Anesthesia that eliminates surgical pain and synthetic opioids that fuel addiction crises.

The field itself is neutral. The applications range from miraculous to catastrophic.

AI works the same way. AI that optimizes hospital scheduling saves lives by reducing wait times. AI that optimizes engagement on social media might be eroding democratic discourse. AI that accelerates drug discovery could cure diseases. AI that powers autonomous weapons could destabilize global security.

These aren't different kinds of AI. They're different applications of similar underlying technologies, deployed in different contexts, optimizing for different objectives, controlled by different actors with different values.

So when someone asks me "is AI good?", what they're really asking is: what can we do with AI that serves human dignity rather than violating it?

And to answer that, I need to get specific about what AI actually enables.

🜏

What AI Actually Does

Let me describe some capabilities without moral judgment attached, just what becomes possible:

Pattern recognition at scales humans can't manage. A radiologist might read 50 scans in a day. An AI system can analyze thousands while maintaining consistent attention to subtle anomalies that humans might miss when tired. This doesn't replace the radiologist; it catches edge cases that would otherwise slip through.

When DeepMind's AlphaFold effectively solved protein structure prediction, it didn't just do what humans were already doing, only faster. It predicted structures for proteins that had resisted decades of research, opening pathways to treatments that weren't previously conceivable. The resulting database of predicted structures is freely available to researchers globally.

Climate scientists can now process satellite imagery, sensor data, and simulation outputs at scales that reveal patterns invisible in smaller datasets. This helps predict extreme weather, track deforestation, understand systemic changes.

Discovery in spaces humans can't navigate. Drug discovery historically required synthesizing and testing millions of compounds, a process taking years and costing billions. AI can simulate molecular interactions, dramatically narrowing the search space. Candidate treatments that once took a generation to identify can now surface in months.

Accessibility for capabilities that previously required rare expertise. Real-time translation enables conversation across language barriers. Text-to-speech makes information accessible to people with visual impairments. Basic medical symptom checking brings preliminary diagnosis to populations who couldn't afford professional consultation.

Each of these expands what's possible. Each could reduce suffering or increase human capability.

But here's where it gets complicated. Because the same pattern recognition that detects cancer early can be used for surveillance that tracks dissidents. The same language models that make information accessible can generate disinformation at scale. The same optimization algorithms that improve hospital efficiency can be deployed to maximize addictive engagement.

The capabilities themselves are neutral. What matters is what we optimize them for.

And this is where I think the dignity question actually becomes useful.

🜏

When Corporations Drop "Good" From Their Vocabulary

Google used to have a motto: "Don't be evil."

It was their guiding principle, their public commitment, their way of saying they'd optimize for something beyond profit. They changed it. When Google reorganized under Alphabet in 2015, the parent company's code of conduct opened with "Do the right thing" instead, and in 2018 Google moved "don't be evil" from the preface of its own code of conduct to a single mention in its closing line.

Subtle difference. Important one.

"Don't be evil" is binary. Clear. Difficult to reinterpret. "Do the right thing" is flexible, contextual, easier to adjust when commercial pressure demands it.

I don't know their internal reasoning for the change. But I know that as companies grow, as pressure for returns increases, as competitive dynamics intensify, moral clarity becomes inconvenient. It's easier to soften language than to maintain standards that might limit growth.

And this tells me something about the landscape we're operating in.

The companies building the most powerful AI systems can't commit to clear moral language anymore. Which means when we ask "is AI good?", we're asking the wrong people. They've already moved past that question into "what's strategically advantageous?"

So maybe the better question isn't about AI's moral status. Maybe it's about whether we're treating AI deployment as a question of human dignity at all.

And currently, I don't think we are.

🜏

Where Human Dignity Actually Enters

By dignity, in this context, I don’t mean some abstract philosophical concept, but something concrete.

Does this AI application expand human agency or constrain it? Does it distribute capability or concentrate power? Does it operate with genuine consent or does it extract value through mechanisms people don't understand and can't refuse?

The Stanford skin cancer detection system? If it makes expert diagnosis available to people who would otherwise never receive it, if it operates transparently, if it expands access rather than replacing human care with automated processing, then yes, it seems to serve dignity.

But if that same system gets deployed to maximize throughput, to reduce the time doctors spend with patients, to eliminate healthcare jobs while concentrating profits, then it violates dignity even while performing the same technical function.

The difference isn't in the algorithm. It's in how it gets deployed and who controls it.

This is why I can't give a simple answer to "is AI good?" Because the same capability can serve dignity in one context and violate it in another. And right now, most deployment decisions are being made based on efficiency and profit, not dignity.

AlphaFold's protein structures are freely available because DeepMind chose to make them so. But that's unusual. Most AI development concentrates capability in whoever can afford the computational resources to train large models. Access becomes a function of capital, not need.

Climate modeling helps predict disasters, but the populations most vulnerable to those disasters often have the least access to the technology. Translation tools break down language barriers, but they're optimized for languages that serve large markets, leaving endangered languages behind.

Every time I look closely at an AI application, I find this same pattern. The capability is real. The question of who benefits from it and who bears its costs gets decided by whoever deploys it. And those decisions usually aren't made with dignity as the primary consideration.

🜏

What This Actually Means

I started this wanting to question whether good and evil exist. I've ended up more convinced that human dignity is real and violable, but less convinced that "AI is good" or "AI is evil" are useful categories.

What I believe: AI deployment that respects human dignity, that expands agency rather than constraining it, that distributes power rather than concentrating it, moves in a direction worth pursuing.

AI deployment that treats dignity as secondary to efficiency, that extracts value without consent, that concentrates power without accountability, moves in a dangerous direction.

But most AI exists in an ambiguous middle ground. Partly serving dignity, partly violating it. Mixed in ways that resist simple judgment.

The question isn't whether AI is good. The question is whether we're willing to make human dignity the metric we optimize for.

And right now, we're not. We're optimizing for capability, for profit, for competitive advantage. Dignity enters the conversation as an afterthought, if at all.

Maybe that's where the real problem lives. Not in the technology itself, but in our willingness to deploy powerful tools without seriously asking what they do to human beings.

That part feels clearly wrong to me. Regardless of whether "wrong" exists objectively or just in my particular moral intuition.

I'm going with my intuition on this one.

- N.H.

Further Reading:

DeepMind.
“Putting the Power of AlphaFold into the World’s Hands.” 2021.
https://deepmind.google/discover/blog/putting-the-power-of-alphafold-into-the-worlds-hands/.

Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun.
“Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks.” Nature 542 (2017): 115–118.
https://doi.org/10.1038/nature21056.

Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Paul Chazerand, Virginia Dignum, Christoph Luetge, et al.
“AI4People—An Ethical Framework for a Good AI Society.” Minds and Machines 28, no. 4 (2018): 689–707.
https://doi.org/10.1007/s11023-018-9482-5.

Hanna, Robert, and Emre Kazim.
“Philosophical Foundations for Digital Ethics and AI Ethics: A Dignitarian Approach.” Ethics and Information Technology 23 (2021): 411–424.
https://doi.org/10.1007/s10676-021-09586-0.

Jumper, John, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, et al.
“Highly Accurate Protein Structure Prediction with AlphaFold.” Nature 596 (2021): 583–589.
https://doi.org/10.1038/s41586-021-03819-2.

Kant, Immanuel.
Groundwork of the Metaphysics of Morals. Translated by James W. Ellington. Indianapolis: Hackett Publishing, 1993.

Stanford Encyclopedia of Philosophy.
“Human Dignity.” Last modified 2024.
https://plato.stanford.edu/entries/dignity/.

Stanford Encyclopedia of Philosophy.
“Kant’s Moral Philosophy.” Last modified 2025.
https://plato.stanford.edu/entries/kant-moral/.
