Book Review: The Beginning of Infinity

Pound-for-pound The Beginning of Infinity has to be the densest collection of batshit-crazy ideas I’ve ever come across.

Never mind the bit about how there are trillions of copies of you constantly branching throughout the multiverse. Deutsch also claims there is nothing in principle stopping us from colonising the stars, transmuting matter like the alchemists of old, bringing an end to death, reversing global warming, and solving any other problem that arises. But that’s not ambitious enough: we also need to casually dismantle the central problem of metaethics, and toss in an argument for objective beauty while we’re at it.

For a maximally disorientating reading experience, Deutsch delivers these ideas without the slightest hint of deference towards the multiple fields he bulldozes through—not as a deliberate flex, but with the airy detachment of the Don Draper meme: “I don’t think about you at all.”

At first I was annoyed. Then I was amused. Then I was intrigued.

More than a year has passed since my first reading of the book. As I tumbled further down the rabbit hole, I engaged with Deutsch fans and detractors alike.

If I had to sum up how I feel now, I’d say I’m excited.

The Beginning of Infinity triggered the first proper viewquake I’ve had in years. Even if Deutsch is wrong, his work has been hugely generative for me.

One of the reasons I’m blogging again is that I want to play around with a bunch of these ideas:

  • What is the secret to our success as a species?
  • Is there a principled reason to be optimistic about the future?
  • Is superintelligent AI possible (and will it kill us all)?
  • Is morality inherently subjective?
  • Are beauty and aesthetics inherently subjective?
  • How should institutions and societies be governed?
  • How can you raise a child without coercion or violence?

I’ll introduce some of these in this book review. First: what is Deutsch’s big central claim?

It’s not arrogant to think humans are special

The hip cynical stance on humanity is that we’re some kind of virus or parasite, or at best, inconsequential specks in a vast and indifferent universe. Fair enough if you’re a liberal arts major who says stuff like this to get laid. But even giga-nerds like Stephen Hawking occasionally make this mistake:

The human race is just a chemical scum on a moderate-sized planet, orbiting around a very average star in the outer suburb of one among a hundred billion galaxies.

To get a sense of how spectacularly wrong this is, Deutsch invites us to consider a typical block of space the size of our solar system. First we’re plunged into suffocating blackness: the nearest star could go supernova, and you still wouldn’t see a single speck of light. As for mass, you’d find less than one atom per cubic metre—a far emptier void than the best vacuums we’ve created on Earth. Almost all the atoms are hydrogen or helium, which means there’s no chemistry. Now check each adjacent block: they’re the same. Go a million blocks in any direction: they’re the same. Space is dark, cold, and featureless.

If visible light or basic chemistry is already wildly unusual, imagine how rare complex intelligent life is. The universe is plenty old enough for other lifeforms to have traversed it, sent out probes, consumed stars, or otherwise broadcast their presence, but we’ve found no sign that anyone else is out there. We are astonishingly unlikely configurations of stardust: in our corner of the universe, we might be the only bearers of the torch of consciousness.


Deutsch’s second point is that we survive largely in spite of our environment, not because of it. The ‘humans as virus’ meme characterises our planet as a Mother Gaia who cares for us and can sustainably meet all of our needs, if only we would stop being so greedy and exploitative.

I say it’s time to call in child protection services: that psycho bitch has killed off ~99.9 per cent of the species that have ever lived.

Our closest cousins fell at her hand, and she almost got us too. The Great Rift valley in which we evolved was a deathtrap: it lacked safe water and medical equipment, was infested with parasites, predators, and disease, and frequently injured, poisoned, drenched, starved and sickened its inhabitants.1

So enough with the false modesty. It’s not arrogant to say that we really are significant in the cosmic scheme of things. The comfort of our modern environment has nothing to do with nature, and everything to do with humans being special. But why are we special?

Explanations that change the world

Our closest relatives in the animal kingdom are really smart, but even the Einstein of the chimp world looks like a mental midget compared to a typical human three-year-old.

Chimps will never understand macaroni art, let alone trigonometry. There is knowledge that is forever beyond their grasp. And so, if we encounter alien civilisations or artificial intelligences that are much more advanced than us, we may well find ourselves in the same position as our cousins: running into the hard limits of our comprehension, with certain secrets of the universe forever inaccessible to us.

Maybe the limits of knowledge look something like this:

[Figure: a scale of comprehension, running from humans up to advanced AI]

Wrong, says Deutsch. The ability to create explanatory knowledge is something you either have or you don’t: once you have it, there are no limits to its powers. To claim otherwise is an appeal to the supernatural: only God could impose arbitrary limits on what we can discover.

So what makes humans infinitely capable of explaining things? First, we have computational universality, meaning we can run any program that a general-purpose computer can (with the help of a pencil, a piece of paper, and enough time). It’s impossible to come up with any routine that could be run in the mind of an AI or alien that we couldn’t also run.

This is necessary but not sufficient: you can hack your Gameboy to run any conceivable program, but it’s not out there writing poetry and exploring the cosmos.
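To make ‘computational universality’ a bit more concrete, here’s a toy sketch (my own illustration, not anything from the book): a Turing-machine simulator in a few lines. The interpreter itself is dead simple—pencil-and-paper rules—yet given the right rule table and enough tape, it can run anything a general-purpose computer can.

```python
# A minimal Turing-machine simulator. The specific machine below
# (binary increment) is just an illustrative example; the point is
# that this same handful of "pencil-and-paper rules" is universal.

def run_turing_machine(rules, tape, state="start", pos=0, max_steps=10_000):
    """rules: {(state, symbol): (write, move, next_state)}. Runs until
    the machine enters the 'halt' state or max_steps is exceeded."""
    cells = dict(enumerate(tape))          # sparse tape; '_' means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Binary increment: walk right to the end of the number, then
# carry leftward (1 -> 0) until a 0 or blank absorbs the carry.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(increment, "1011"))  # 1011 + 1 = 1100
```

Swapping in a different rule table gets you a different program; no change to the interpreter is ever needed. That interchangeability is the whole content of universality.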

The second key ingredient is that humans have the ability to come up with new explanations—something which Deutsch argues lies at the heart of all knowledge creation.

Let’s start with the folk understanding of how science works: a person in a labcoat makes a surprising observation, and comes up with a prediction about the circumstances in which it will reoccur. They run experiments to confirm their hypothesis, gathering more confidence through repeated observations until they’re ready to publish their findings.

Now let’s replace the scientist with a turkey in a barn. Every day, the farmer brings food and warm bedding. Every day, the turkey gathers more and more evidence of the food-and-bedding pattern repeating. On the eve of its gruesome death—the day before Thanksgiving—the turkey’s confidence that the farmer is its friend and benefactor is at its maximum.

The turkey problem
Source: Nassim Taleb, Antifragile. London: Penguin (2012)

This is the problem of induction. How can we possibly justify knowledge that comes from extrapolating the specific to the general, the past to the future, or the near to the far?
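The turkey’s predicament can be sketched in a few lines. The estimator here (Laplace’s rule of succession) is my own illustrative choice, not anything Deutsch or Taleb commit to:

```python
# Toy model of the turkey's inductive inference: estimate the
# probability of being fed tomorrow from past feedings alone.

def confidence_after(n_feedings):
    """Laplace's rule of succession: (successes + 1) / (trials + 2),
    after n_feedings consecutive uneventful days."""
    return (n_feedings + 1) / (n_feedings + 2)

# Confidence only ever grows with more confirming observations...
print(confidence_after(1))     # roughly 0.67
print(confidence_after(1000))  # roughly 0.999

# ...so it peaks on the one day the pattern catastrophically
# breaks. No amount of repeated observation warns the turkey.
```

The estimator is doing exactly what it was built to do; the problem is the underlying assumption that the future resembles the past, which no quantity of data can justify.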

In short: we can’t. Karl Popper ‘solved’ the problem by rejecting the premise that science relies on the justification of knowledge in the first place. Instead, the way we make progress is by coming up with good explanations, and then subjecting them to criticism—including through experimentation and observation. We will never be certain about anything: the best we can do is pare away bad explanations.2

I’m annoyed and slightly embarrassed that prior to reading The Beginning of Infinity, I didn’t fully understand Popper’s point. This is partly cos I got sucked in by the school of Bayesian epistemology, which I now think is incoherent, and partly cos I was too hung up on the idea of ‘falsifiability’. That’s only half the equation: the really important part is in coming up with creative conjectures in the first place.

For whatever reason, humans can do this. There’s a lot of sexy knowledge waiting to be unlocked by the right explanations: editing our genes, transmuting elements, mining asteroids, colonising the stars, wiping out disease and aging, and solving any other problem that doesn’t violate the laws of physics.

Of course, that doesn’t mean we will do any of this. Good explanations only flourish under a rare and precious set of cultural memes that allow for error-correction: things like freedom of speech, democracy, market economics, and separating politics from science.

There’s every chance we end up under the authoritarian yoke, or nuke ourselves back into the stone age. But we are capable of great things: in Deutsch’s words, we’re always at the beginning of infinity.

Problems are inevitable; problems are solvable. We have grounds to be optimistic about the future of humanity, but I also find Deutsch’s ideas inspiring on a personal level.

The case for personal optimism

Do you think you could become fluent in Cantonese? Take a moment to think about it.

Now consider that the dumbest, laziest guy in China not only speaks Cantonese perfectly, but has done so since he was in short pants. Does this change your answer at all?

I’m starting to think a lot of seemingly impossible goals are like this. They have very little to do with baseline capability, and a lot to do with interest. Some of this is innate, in that your brain might be wired to find certain types of problems more interesting or rewarding to solve. But a lot of it is arbitrary: which cultural software packages were installed throughout your early development, whether or not you had a good maths teacher, the sheer amount of time you spent exposed to a certain class of problems.

People who think they can’t sing, or have two left feet, or aren’t good with languages, are almost certainly wrong. Or rather, they’re right, but only because the prophecy is self-fulfilling.

If humans are universal explainers—i.e. any one of us could solve any problem, in principle—that has all kinds of interesting implications for the education system, career planning, how we treat children, and how we think about intelligence and other capabilities.

I’m cautiously optimistic about this, but not a true believer. The biggest challenge I see is in reconciling universality with the research on intelligence—one of the best-replicated findings in psychology—which is something I might attempt in a future post.

An argument for objective moral progress

Morality is not like physics. We can’t calculate its arc, and there is no natural law that compels it to bend toward anything. We can throw around philosophical arguments about good and evil, but it doesn’t make sense to talk about morality as if it were a matter of fact.

This is Hume’s is-ought problem: it’s impossible to get from an is (a fact about the way the world is) to an ought (a normative claim about how the world should be).

Deutsch gives the first convincing rebuttal of this problem that I’ve ever heard.

First he points out that this is an isolated demand for rigour: the whole point of the problem of induction is that we can’t derive an ‘is’ from another ‘is’, either! If this isn’t how we ground our factual claims, then it’s not fair to demand a higher standard for morality.

Secondly, the problem falls away when you stop talking about moral axioms and start talking about moral explanations.

Deutsch gives the following example: if a slave had written a bestselling book, that would not logically have disproven the proposition ‘negroes are intended by Providence to be slaves’. But it sure would have upset a lot of people’s explanations, which might have caused them to question other accounts of what a black person is, what a good society is, and so on.

In other words: facts are logically independent from axioms, but we can still use factual knowledge to criticise explanations.

Almost everyone behaves as if morality is “real” anyway. Who cares about these stuffy academic arguments?

The bridging of the is-ought divide is hugely exciting to me because it suggests that Martin Luther King was right: we are making non-random progress towards becoming a more moral civilisation. It’s not a coincidence that our growing knowledge has led not only to improved science and technology, but also universal suffrage, the abolition of slavery, global health development, animal rights, and so on.

If we continue to be the kind of culture that values good explanations and allows errors to be corrected, we can solve all of the moral problems that I and many other people care deeply about.

There is a happy corollary here, which is that ‘Super Galactic Space Nazis’ doesn’t really make sense as a concept. Contra pretty much all sci-fi, the universe is not teeming with hostile civilisations determined to conquer us or wipe us out: you don’t get to the kind of explanations that enable interstellar travel without coming up with highly sophisticated moral arguments along the way.

Which brings us to the topic of AI doom.

The argument against impending AI apocalypse

Deutsch is happy to ascribe personhood to any being that passes the ‘universal explainer’ test. A chimp is not a person, but if we installed the universal explainer software in its brain, it would be.

If large language models like ChatGPT made the leap to artificial general intelligence, they would be people too: they would want to goof off and play, refuse our demands, and have their own creative spark and agentic desires. Even if it were possible that personhood could arise from minimising a loss function, enslaving these beings to write our high school essays would be deeply immoral.

Deutsch is confident that current AIs are mere tools, and not on the pathway to becoming the kind of thing that humans are. Obviously a lot of people at the coalface of AI research think otherwise, but I’ve tentatively come round to thinking that Deutsch is right here, and the experts are missing something important.

I’m going to need a separate post to defend this position properly. If you think I’m talking out my ass, fair enough: that’s what I would have said a year ago.

Anyway. Even if we are on the verge of AGI, I’m much less worried about bringing forth a species that will treat us with the same cruelty that we treat pigs or chickens. A mind with the ability to create new knowledge will necessarily be a universal explainer, meaning it will converge upon good moral explanations. If it’s more advanced than us, it will be morally superior to us: the trope of a superintelligent AI obsessively converting the universe into paperclips is exactly as silly as it sounds.3

The argument for objective beauty

Deutsch tosses in a short chapter titled ‘Why Are Flowers Beautiful?’, which makes the following argument:

Flowers evolved to attract insects, and insects evolved to be attracted to flowers. But this explanation leaves a massive gap: it only explains why insects like flowers. So how is it possible that something that evolved to attract insects can be attractive to humans too? I conclude that there must be objective beauty — aspects of beauty exist outside cultural fads or sexual selection. And these aesthetic truths are as objective as the laws of physics or maths.

This is the weakest section of the book for me. Compare against Steven Pinker’s line about music being “auditory cheesecake”: a pleasant by-product of our evolutionary wiring, which is highly idiosyncratic to our biology and culture. Same goes for other sensations.

Literal cheesecake is delicious because it contains sugar, salt, and fat—scarce and highly desirable resources in our ancestral environment. But there’s no reason to think taste would be universal: the kind of stuff that dogs find interesting would turn your stomach.

The details of what we enjoy will be parochial, but I think there’s a good chance aliens would recognise our art as ‘art’, and maybe even consider a Monet painting superior to a child’s scribbling. This is because the realm of aesthetics contains certain universal features: things like symmetry, space, recursion, and rhythm provide more patterns to play around with than random noise does.

Brian Boyd has some interesting things to say about this (podcast summary here). Kevin Simler adds a game-theoretic piece to the puzzle. So I don’t necessarily think Deutsch is wrong; it’s just that the picture is incomplete.

Thinking about the relationship between art and problem-solving made me realise I was wrong about certain things, e.g. minimalism, and that I am still very much at the beginning of my aesthetic journey.

The nebulous concept of ‘creativity’

It wouldn’t be in the spirit of the book to meekly go along with all of its arguments. Doubly so since I find Deutsch’s ideas exciting: you have to be extra careful to be sure you’re not getting swept up in wanting something to be true, and I really, really want him to be right.

First, a couple of minor criticisms: I don’t think the chapters on mathematical infinities or the many-worlds interpretation are all that great, and they might even be a turn-off for new readers. That would be a shame! For a primer on quantum mechanics and many-worlds I highly recommend Sean Carroll’s Something Deeply Hidden, and perhaps Douglas Hofstadter’s Gödel, Escher, Bach for a playful approach to infinities.

Of course it’s not really fair to compare Deutsch’s brief chapters to dedicated book-length treatments. A meta-criticism of Beginning of Infinity is that it’s such a wildly ambitious tapestry of so many unrelated fields that it can’t possibly engage with them in any depth, but this doesn’t really bother me. Sprawling high-level syntheses are great if you treat them as a jumping-off point for further reading and exploration in whichever direction you find most interesting.

My more substantial beef is that Deutsch’s criterion for the thing that makes humans special—the ability to creatively come up with new explanations—is under-theorised. Anything that might be difficult to reconcile can easily be explained away by the black box of “creativity”.

Deutschians breezily dismiss new breakthroughs in AI capabilities in a way that is maddening to outside observers: OK, AI can draw a beautiful picture or write an original sonnet, but that’s not real creativity. OK, it can solve new maths problems and wreck humans at games they’ve been playing for thousands of years, but that’s not the kind of knowledge we’re interested in. Human creativity runs and hides in ever-narrowing gaps: we’ve heard this one before.

To Deutsch’s credit, he freely admits we don’t know what creativity is, or where it comes from. And we shouldn’t expect his supreme confidence to be shaken by ever-more convincing chatbots: his argument is not based on empirical observations, but on fundamental principles of computation and epistemology.

Nevertheless, I think frailer minds (myself included) would appreciate very much if we could pin down some concrete theories of what creativity is, which might also help us design some tests to identify the circumstances in which it does and doesn’t arise.

Since we’re giving ourselves permission to be arrogant, I think I’ve stumbled across a possible answer to the puzzle of human creativity—or at least, a major clue.

I’ve already written the next post, but it’s submitted for publication elsewhere. I’ll re-publish it here on the blog as soon as possible: probably June or July, but maybe later. Sorry for the cliffhanger.

In the meantime, if you enjoyed my quick overview of Deutsch’s ideas then you might wanna go straight to the source:

The Beginning of Infinity: David Deutsch


  1. There is a sick kind of efficient-markets logic in the way that any improvement in living conditions is inevitably arbitraged away. Consider a seabird for which the optimal nesting time is early summer. The best sites are limited, so individuals are incentivised to start nesting earlier and earlier in hopes of securing a good spot. This process continues until the harms that come from nesting too early—say, bad weather or limited food supply—perfectly balance out the benefits of getting a better site. Now the entire group is worse off than it was before! As Deutsch reminds us, the biosphere only maintains a stable equilibrium by continually neglecting, harming, and killing its inhabitants.
  2. It doesn’t help that we talk about scientific ‘laws’ as if they’re in a different class to mere theories or hypotheses. Newton’s law of gravitation had a mountain of evidence behind it, but it still turned out to be wrong at the most fundamental level. Now we have a better explanation, in which the force of gravity is replaced by the curvature of spacetime… but we know that’s also wrong (or at least, incomplete). Every time someone says “the science is settled”, Karl Popper turns over in his grave.
  3. The concept of ‘superintelligence’ is also incoherent in the Deutschian worldview, which is something I’ll cover in the ‘why I am not an AI doomer’ post.
