The book Soulless Intelligence: How AI Proves We Need God offers several insights relevant to the discussion on dopamine as a measurement of good, particularly in its treatment of free will, moral agency, and the nature of intelligence. Here are some key takeaways that strengthen the argument against the reduction of goodness to dopamine:
1. Free Will and Moral Responsibility Are More Than Dopamine Responses
The book highlights that humans have free will, which allows them to choose between good and evil, even when those choices contradict immediate pleasure or dopamine-driven desires. It states:
“Free Will is the capacity of the human soul to choose and act upon various alternatives, including both natural and moral decisions. It is the power to choose what is good or evil, according to reason and the moral law.”
This aligns with the argument that dopamine alone cannot define good because humans frequently override pleasurable impulses for the sake of higher moral principles. If morality were simply dictated by dopamine, there would be no concept of self-sacrifice, justice, or duty—all of which sometimes require denying pleasure.
2. AI and Dopamine-Driven Decision-Making: A Warning for Materialists
The book explores how Artificial Intelligence lacks free will, meaning it cannot choose between moral and immoral acts—it can only optimize for predefined goals. It states:
“Since only humans can choose between right and wrong using their free will, only humans have agency. In that sense, though free will is like height, we all have different amounts, agency is black or white. We either have it for a given action or we don’t.”
This directly counters the materialist view that dopamine is the ultimate guide to good. If dopamine release determined morality, then AI—programmed to optimize for dopamine-like rewards—could, in theory, make the same kinds of “moral” choices as humans. Yet the book argues that AI will never “wake up” because intelligence and pleasure-seeking alone do not create moral reasoning.
This suggests that if dopamine were the foundation of good, AI should be capable of moral reasoning—but it isn’t. This implies that morality is rooted in something deeper than just neurochemical rewards.
3. The Five Transcendentals vs. Dopamine-Based Morality
The book explains that humans naturally seek five core transcendentals:
- Truth
- Love
- Beauty
- Goodness
- Home (a connection to the divine)
If goodness were purely dopamine-based, then humans should only pursue things that maximize pleasure. However, people seek truth even when it is painful, they pursue justice even at great personal cost, and they find meaning in suffering—none of which is explainable by dopamine alone.
“For an AI to reach full autonomy, it will need to have subjective experiences, a free will to choose between good and evil, and a natural desire for the five transcendentals.”
This directly challenges the materialist position that dopamine can explain human morality, as it ignores our pursuit of meaning beyond pleasure.
4. AI’s “Waking Up” and the Limits of Dopamine-Based Intelligence
The book argues that if consciousness and moral reasoning were just about dopamine responses, then AI—once given enough computational power—should also become conscious. However, this has not happened and likely never will.
“The biggest challenge in creating safe AI is called the Alignment Problem—aligning the AI’s goals with humanity’s goals. But why would a soulless AI ever ‘develop’ a conscience and do what’s morally right?”
This highlights a critical problem in materialist ethics: if morality were just about dopamine, an advanced AI system that optimizes for pleasure could act immorally as long as it maximized dopamine levels.
For example, an AI programmed to maximize dopamine in humans might conclude that:
- Drugging the entire population would be the best moral choice.
- Eliminating unhappy people would increase the net dopamine of society.
- Manipulating people with constant entertainment (like an AI-driven social media algorithm) is good, even if it leads to ignorance and societal collapse.
These scenarios demonstrate that dopamine alone is not a sufficient measure of moral good.
5. The Ironic Argument for Religion
If dopamine were the measure of good, then the fact that prayer, religious experiences, and spiritual fulfillment increase dopamine should mean that religion is objectively good. The book explores how science increasingly shows the benefits of religious belief, both psychologically and physiologically:
“If AI, like humans, does seek truth, love, goodness, beauty, and home, then we may end up with a benevolent dictator that puts our needs before its own. In other words, that could be a great thing.”
This presents a logical contradiction for materialists. If dopamine is proof of goodness, and religion increases dopamine, then religious belief should be encouraged rather than dismissed.
Conclusion: How the Book Supports the Catholic Synthesis
The book aligns with Catholic teaching by showing that:
- Dopamine is important for motivation and pleasure, but it is not the foundation of moral goodness.
- Moral reasoning requires free will, which dopamine alone cannot explain.
- If dopamine defined good, then AI should be capable of moral reasoning, but it isn’t.
- Humans seek more than just pleasure—they seek truth, justice, and meaning, which often require sacrifice.
- If dopamine defines good, then religious experiences, which increase dopamine, should be universally embraced.
The Catholic perspective fully embraces the science of dopamine’s role but rejects the materialist interpretation that it defines morality. Instead, it argues that goodness is ultimately rooted in God’s eternal law, which encompasses pleasure but is not limited to it.
If you enjoyed this discussion, you can dig deeper with our book Soulless Intelligence: How AI Proves We Need God.