The universe as code: a cosmological reckoning with the simulation hypothesis

Epigraph

Praise belongs to God, Lord and Sustainer of the Worlds. (Al Quran 1:2)

God keeps the heavens and earth from vanishing; if they did vanish, no one else could stop them. God is most forbearing, most forgiving. (Al Quran 35:41)


Presented by Zia H Shah MD

Abstract

The simulation hypothesis proposes that what we call reality — the cosmos, its physical laws, our own minds — is not base reality but a high-fidelity computation executed on a substrate outside our observable universe. Formalized by Oxford philosopher Nick Bostrom in 2003, the idea has migrated from philosophy into cosmology, physics, and popular culture, drawing endorsements from Elon Musk, Neil deGrasse Tyson, Max Tegmark, and computer scientists such as Rizwan Virk, while also triggering rigorous rebuttals from Frank Wilczek, Sabine Hossenfelder, and David Kipping. This article surveys the hypothesis across four registers: its logical architecture (Bostrom’s trilemma), its scientific touchstones (quantum indeterminacy, the holographic principle, error-correcting codes in supersymmetry, proposed lattice-signature tests), its principal critiques (Lorentz invariance, computational limits, unfalsifiability), and its theological echoes (Plato’s cave, Hindu māyā, the Gnostic demiurge, Cartesian doubt). The central finding is that the hypothesis is neither established science nor pure metaphysics but a probabilistic argument of unusual philosophical weight — one that, true or false, forces a reckoning with what it means for a universe to exist.

Introduction: when physics began to resemble code

The simulation hypothesis is the claim that our reality is a computer program. More precisely, it is the proposal that the whole of perceived existence — stars, cells, thoughts, the laws governing them — is instantiated as information processed by a computing substrate belonging to some outer, more fundamental world. The hypothesis sits at an unusual crossroads: it draws on quantum mechanics (where reality behaves discretely, probabilistically, and observer-dependently), on computer science (where virtual worlds have become photorealistic within one human generation), and on philosophy of mind (where consciousness is increasingly treated as substrate-independent information processing).

The idea is older than the vocabulary we use for it. Plato’s prisoners mistook shadows for substance; Descartes imagined an evil demon manufacturing his every sensation; the Vedāntic sages called the veil of appearance māyā. What the twenty-first century added was a plausibility mechanism. Once Moore’s Law produced virtual environments indistinguishable from lived experience in narrow domains, and once physicists began writing the laws of the universe in the grammar of information — bits, codes, holograms, lattices — the question shifted from “could reality be a dream?” to “is reality computable, and if so, by whom?” The answer one gives now depends less on metaphysical taste than on how seriously one takes the trajectory of computation and the informational turn in physics.

Core concepts: the architecture of the argument

Bostrom’s trilemma

The modern simulation hypothesis has a definite birthdate: April 2003, when Nick Bostrom published “Are You Living in a Computer Simulation?” in Philosophical Quarterly (53:211, pp. 243–255). The paper’s abstract states that at least one of three propositions must be true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history or variations thereof; or (3) we are almost certainly living in a computer simulation.

The logic is probabilistic, not deductive. Bostrom invokes two assumptions — substrate-independence (“mental states can supervene on any of a broad class of physical substrates”) and computational feasibility (a planetary-mass computer could simulate the entire mental history of humankind using less than a millionth of its processing power for one second, given estimates of 10³³–10³⁶ operations per simulated mind-history). If even a small fraction of civilizations reach computational maturity and choose to run “ancestor-simulations,” simulated minds will vastly outnumber biological ones. Applying a bland indifference principle — if x per cent of observers with your experiences are simulated, your credence in being simulated should equal x per cent — the conclusion follows: either civilizations don’t reach maturity, don’t want to simulate ancestors, or we ourselves are almost certainly simulated.
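The arithmetic behind that conclusion fits in a few lines. The sketch below is a toy rendering of the core ratio (the function name and inputs are my own labels, not Bostrom’s): if a fraction f of civilizations reach computational maturity and each runs N ancestor-simulations with populations comparable to its actual history, the simulated share of observers is fN/(fN + 1), which races toward 1 as fN grows.

```python
def simulated_fraction(f_mature: float, sims_per_civ: float) -> float:
    """Toy version of the trilemma's core ratio: the fraction of all
    human-like observers who live inside ancestor-simulations, if a
    fraction f_mature of civilizations each run sims_per_civ of them,
    each with population comparable to the civilization's own history."""
    x = f_mature * sims_per_civ
    return x / (x + 1)

# Even grudging inputs dominate quickly: one civilization in a thousand
# reaching maturity, a thousand runs each, already gives even odds.
print(simulated_fraction(0.001, 1_000))      # ≈ 0.5
print(simulated_fraction(0.1, 1_000_000))    # ≈ 0.99999
```

The point of the toy is structural: once propositions (1) and (2) fail, the simulated share is not merely large but overwhelming, which is why the argument needs no fine estimate of future computing power.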

Bostrom closes the paper with caution rather than triumph: “In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3).” He assigns the simulation hypothesis a “substantial” personal credence but under 50 per cent — historically closer to 20 per cent — and insists on a distinction often lost in popular coverage: his argument (the trilemma) is not identical to the hypothesis (that we are simulated). One can accept the former without being forced to the latter.

Computational physics and the rendering analogy

The most visually compelling version of the argument reframes quantum mechanics as the physics of a rendering engine. In the double-slit experiment, particles behave as probability waves until they are observed, at which moment they collapse into definite locations. Superposition — the coexistence of multiple possible states — dissolves on measurement. Rizwan Virk, a video-game designer and MIT computer scientist, argues that this is precisely how efficient simulations behave: a game engine renders only what the player sees. “It only needs to render those parts which are being observed by one person,” he writes, “and then when they’re observed by the next person, there’s something we call caching.” The universe, on this reading, is practicing just-in-time rendering to conserve processing.

Brian Whitworth’s 2007 paper “The Physical World as a Virtual Reality” (arXiv:0801.0337) formalizes the intuition: if a photon is a pixel on a multi-dimensional grid, the speed of light could reflect the grid’s refresh rate, and the Big Bang corresponds to the moment “the system was booted up.” Mainstream physics explains the observer effect through decoherence rather than rendering, and most physicists consider the analogy suggestive rather than derivational — but the structural parallel is striking, and it is what makes the simulation picture psychologically vivid.

Digital physics and the holographic universe

Independently of simulation rhetoric, a strand of twentieth-century physics has argued that information, not matter, is fundamental. Konrad Zuse — the German engineer who built the first programmable computer, the Z3, in 1941 — published Rechnender Raum (“Calculating Space”) in 1969, proposing that the universe is a cellular automaton executing local rules on a discrete substrate. Edward Fredkin extended this into a program he called digital philosophy, insisting that “information is more fundamental than matter and energy.” Stephen Wolfram’s A New Kind of Science (2002) and the subsequent Wolfram Physics Project argue that the entire edifice of modern physics, including relativity and quantum mechanics, may emerge from simple hypergraph-rewriting rules.
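Zuse’s picture is easy to make concrete. The sketch below is an illustration of the genre, not anything from Rechnender Raum itself: it steps a one-dimensional elementary cellular automaton, where each cell updates from purely local neighbor information. Rule 110, used here, is nonetheless known to be Turing-complete, which is the kernel of the claim that simple local rules can compute anything.

```python
def step(cells: list[int], rule: int) -> list[int]:
    """One update of an elementary cellular automaton: each cell's next
    state depends only on itself and its two neighbors (wrapping edges).
    The 8-bit number `rule` encodes the lookup table."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Rule 110: local, deterministic, and Turing-complete.
row = [0] * 31 + [1] + [0] * 31
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row, 110)
```

Running it longer shows the characteristic interacting “gliders” on which Rule 110’s universality proof rests — a toy demonstration of how rich behavior can sit on a trivially simple discrete substrate.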

The most startling corroboration arrived from gravitational physics. Jacob Bekenstein (1973) and Stephen Hawking (1975) showed that the entropy of a black hole scales not with its volume but with its horizon area, at roughly one bit per four Planck areas: S = A/4 in Planck units. Gerard ’t Hooft (1993) and Leonard Susskind (1995) generalized this into the holographic principle: the maximum information contained in any region of space is fixed by the area of its boundary, as if the three-dimensional world were projected from a two-dimensional data sheet. “The three-dimensional world of ordinary experience,” Susskind wrote, “is a hologram, an image of reality coded on a distant two-dimensional surface.” Juan Maldacena’s 1997 AdS/CFT correspondence gave the principle a concrete mathematical realization. None of this proves simulation, but it suggests the universe’s informational capacity is finite, bounded, and surface-encoded — exactly what one would expect of a rendered world.
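The area law is concrete enough to plug numbers into. A back-of-envelope sketch with textbook constants: evaluating S = A/4ℓₚ² (entropy in units of Boltzmann’s constant, natural-log convention) for a black hole of one solar mass gives a number on the order of 10⁷⁷, vastly more entropy than any ordinary object of comparable mass.

```python
import math

G     = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8      # speed of light, m/s
hbar  = 1.055e-34    # reduced Planck constant, J s
M_sun = 1.989e30     # solar mass, kg

l_p2 = hbar * G / c**3              # Planck length squared, m^2
r_s  = 2 * G * M_sun / c**2         # Schwarzschild radius, ~2.95 km
A    = 4 * math.pi * r_s**2         # horizon area, m^2

S = A / (4 * l_p2)                  # Bekenstein-Hawking entropy, units of k_B
print(f"{S:.2e}")                   # on the order of 1e77
```

That a few kilometres of horizon encode ~10⁷⁷ units of entropy is the quantitative force behind the surface-encoding intuition in the paragraph above.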

Glitches, Planck pixels, and the grid hypothesis

If the cosmos is a computation, its parameters should betray it. The Planck length (ℓₚ ≈ 1.616 × 10⁻³⁵ metres) and Planck time (tₚ ≈ 5.39 × 10⁻⁴⁴ seconds) mark theoretical limits below which classical spacetime concepts break down; proponents reframe them as the universe’s pixel size and clock tick. The speed of light — exactly ℓₚ divided by tₚ — becomes the maximum propagation rate the simulator permits, a hardware constraint rather than a law of nature.
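One caveat worth making explicit: the ratio ℓₚ/tₚ equals c by construction, because the Planck time is itself defined from the Planck length and c, so the “refresh rate” reading is an interpretation rather than an independent discovery. The arithmetic is easy to check from the defining formulas ℓₚ = √(ħG/c³) and tₚ = √(ħG/c⁵):

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s

l_p = math.sqrt(hbar * G / c**3)   # Planck length, ~1.616e-35 m
t_p = math.sqrt(hbar * G / c**5)   # Planck time,  ~5.39e-44 s

print(l_p, t_p, l_p / t_p)         # the ratio recovers c identically
```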

The most concrete version of this argument is a proposed empirical test. In 2012, Silas Beane, Zohreh Davoudi, and Martin Savage published “Constraints on the Universe as a Numerical Simulation” (arXiv:1210.1847; Eur. Phys. J. A 50:148, 2014), reasoning that if the universe is computed on a cubic spacetime lattice analogous to modern lattice-QCD simulations, the underlying grid should imprint an anisotropy on the highest-energy cosmic rays: ultra-high-energy particles should cluster along the simulation’s lattice axes rather than distribute isotropically. The Greisen–Zatsepin–Kuzmin cutoff places the inverse lattice spacing at b⁻¹ ≳ 10¹¹ GeV. No such anisotropy has been observed by the Pierre Auger or Telescope Array collaborations, which rules out naïve lattices but leaves finer discretizations intact.

Mathematical precision as evidence

A subtler argument treats the very intelligibility of the universe as a clue. Eugene Wigner called it “the unreasonable effectiveness of mathematics in the natural sciences.” Max Tegmark, the MIT cosmologist, pushes this further: his Mathematical Universe Hypothesis holds that “our external physical reality is a mathematical structure” — not described by mathematics but identical to it, with conscious observers as “self-aware substructures.” His Computable Universe Hypothesis further restricts reality to computable structures. If true, the universe is executable by definition; whether it is being executed is a separate question, but the metaphysical distance has collapsed.

Principal proponents

Nick Bostrom (born Helsingborg, 1973) remains the argument’s philosophical anchor. Trained in philosophy, physics, computational neuroscience, and mathematical logic, he directed Oxford’s Future of Humanity Institute from 2005 until its closure in 2024. His stance is notably more restrained than that of his popularizers: “I would assign a ‘substantial probability’ to the simulation hypothesis. I tend to refrain from providing a specific number… it could convey a false sense of precision.”

Elon Musk, the Tesla and SpaceX entrepreneur, has done more than anyone to broadcast the argument. At the 2016 Code Conference, he reasoned from the trajectory of video games: “40 years ago, we had Pong — two rectangles and a dot. Now we have photorealistic 3D simulations with millions of people playing simultaneously… If you assume any rate of improvement at all, the games will become indistinguishable from reality.” He concluded that “the odds that we’re in base reality is one in billions,” adding — with characteristic inversion — “we should hope that’s true, because otherwise if civilization stops advancing, that could be due to some calamitous event that erases civilization.”

Neil deGrasse Tyson, director of the Hayden Planetarium, moderated and then joined the 2016 Isaac Asimov Memorial Debate on the question. Citing the roughly two per cent genetic difference between humans and chimpanzees, he argued that even modestly superior beings could simulate us trivially: “We would be drooling, blithering idiots in their presence… everything in our lives is just a creation of some other entity for their entertainment.” The press distilled his position as roughly 50-50, though Tyson’s own framing emphasized qualitative plausibility rather than a precise number.

Rizwan Virk, founder of Play Labs @ MIT and author of The Simulation Hypothesis (2019) and The Simulated Multiverse (2021), supplies the video-game vocabulary — NPCs versus RPG avatars, the “Simulation Point” at which rendered reality becomes indistinguishable from physical reality, rendering optimization as quantum behavior. In a 2025 Boston Globe interview, he raised his own probability estimate to “over 70 percent, maybe as high as 80 percent likely that we are inside a video game.”

Hans Moravec, the Carnegie Mellon roboticist and author of Mind Children (1988), prefigured Bostrom’s argument by nearly a decade. In a 1995 Wired interview, Moravec stated that if advanced descendants can reconstruct ancestral worlds, “statistically speaking, it’s much more likely we’re living in a vast simulation than in the original version.” His neural-substitution argument — that neurons could in principle be replaced by functionally equivalent circuits without interrupting consciousness — provides the philosophical scaffolding for substrate-independence.

Konrad Zuse and Edward Fredkin, described above, supply the digital-physics lineage. Max Tegmark offered a probability of 17 per cent at the 2016 debate and advises simulated inhabitants to “go out there and live really interesting lives, and do unexpected things, so the simulators don’t get bored and shut you down.”

S. James Gates Jr., theoretical physicist at the University of Maryland and later Brown, occupies a more ambiguous position. While studying adinkras — graphical objects encoding off-shell supersymmetry representations — Gates and collaborators (arXiv:0806.0051, 2008) discovered that certain foldings produce doubly-even self-dual linear binary error-correcting block codes, the same family of codes that keep modern digital communications functioning. “I was driven to error-correcting codes — they’re what make browsers work,” Gates told the 2016 Asimov Debate. “So why were they in the equations I was studying about quarks and electrons and supersymmetry? This brought me to the stark realization that I could no longer say people like Max are crazy.” Crucially, Gates himself assigned the simulation hypothesis only one per cent probability, and in a 2019 NBC News interview cautioned that, for non-scientists, the hypothesis starts to “look like a religion.”
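The coding-theory claim is independently checkable. The sketch below builds the extended Hamming [8,4,4] code from one standard generator matrix (several equivalent choices exist) and verifies the two properties Gates names: the code is self-dual (every pair of generator rows is orthogonal mod 2, and the dimension is n/2) and doubly even (every codeword’s weight is divisible by four).

```python
from itertools import product

# One standard generator matrix for the extended Hamming [8,4,4] code.
G = [
    [1, 0, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 0],
]

# Self-dual: every generator row is orthogonal to every row (itself
# included) mod 2, and the dimension k = 4 is exactly n/2 for n = 8.
for r1 in G:
    for r2 in G:
        assert sum(a * b for a, b in zip(r1, r2)) % 2 == 0

# Doubly even: all 2^4 codewords have Hamming weight divisible by 4.
for msg in product([0, 1], repeat=4):
    word = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
    assert sum(word) % 4 == 0

print("extended Hamming [8,4,4]: self-dual and doubly even")
```

The same checks scale to the Golay [24,12,8] code; what is remarkable in Gates’s result is not the codes themselves but their appearance inside supersymmetry representations.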

Scientific and cosmological evidence

The body of empirical support remains circumstantial and contested, but five categories of evidence structure the case.

First, quantum discreteness and observer-dependence. Energy levels are quantized, particles occupy superpositions that collapse on measurement, and entangled systems exhibit non-local correlations. None of these features requires the simulation hypothesis, but each is compatible with it and awkward for a classical-continuous worldview.

Second, the holographic entropy bound. That black-hole entropy — and thus the universe’s information capacity — scales with surface area in Planck units is the most mathematically rigorous hint that reality is fundamentally informational. Jacob Bekenstein, revisiting the principle in Scientific American (2003), wrote that the field’s trajectory suggests scientists may soon “regard the physical world as made of information, with energy and matter as incidentals.”

Third, error-correcting codes in supersymmetry. Gates’s adinkra discovery is a genuine mathematical result: doubly-even self-dual codes, including the extended Hamming [8,4,4] and Golay [24,12,8] codes, really do appear in the folding of supersymmetric representations. Whether this reflects a cosmic programmer, a mathematical coincidence, or a deep structural principle yet to be named is unresolved.

Fourth, lattice signatures. The Beane–Davoudi–Savage proposal remains the only concrete empirical test proposed to date. Its null result so far constrains, rather than confirms, the hypothesis.

Fifth, fine-tuning. The cosmological constant is roughly 10¹²⁰ times smaller than the naïve quantum-field expectation; the strengths of the fundamental forces, the proton-electron mass ratio, and the initial expansion rate lie within narrow life-permitting windows. The anthropic principle supplies one explanation, the multiverse another, and simulation a third: the constants were chosen by whoever configured the run.

Seth Lloyd’s 2002 paper “Computational Capacity of the Universe” (Phys. Rev. Lett. 88:237901) quantifies the ceiling: the observable universe has performed at most 10¹²⁰ elementary logical operations on 10⁹⁰ bits (10¹²⁰ bits if gravitational degrees of freedom are included). Any simulator must exceed this budget — which leads directly to the critiques.

Critiques and counter-arguments

Frank Wilczek, Nobel laureate, published a Wall Street Journal column in January 2020 titled “Are We Living in a Simulated World?” His answer: “Probably not, but the idea is just crazy enough to be worth taking seriously.” Wilczek’s most cited objection is the hidden-complexity argument: “Our world contains a lot of hidden complexity. We can calculate a proton’s properties based on fundamental laws, but those calculations are extremely complicated. It would be a poor strategy to build a simulated world out of such hard-to-compute ingredients.” A sensible simulator would cut corners we cannot find. He closes by echoing Samuel Johnson’s famous stone-kick: “If ours is such a world, then the mind that creates it, made of God knows what, works in very mysterious ways.”

Sabine Hossenfelder, theoretical physicist, has been the hypothesis’s most pointed critic. In a March 2017 post at Backreaction, she argued that a discrete lattice substrate is incompatible with special relativity: “The idea that our universe is discretized clashes with observations because it runs into conflict with special relativity. The effects of violating the symmetries of special relativity aren’t necessarily small and have been looked for — and nothing’s been found.” In a 2021 post titled “The Simulation Hypothesis is Pseudoscience,” she concluded: “‘The programmer did it’ isn’t science. It’s not even pseudoscience. It’s just words.” Her charge is that the hypothesis cannot be distinguished empirically from the ordinary Standard Model plus General Relativity; adaptive-simulation escape clauses, in which the simulator patches anomalies before inhabitants detect them, require the simulator already to know the physics it is supposedly generating.

Computational-resource objections take Lloyd’s figures and invert them. If our universe requires 10¹²⁰ operations on 10⁹⁰–10¹²⁰ bits, the host universe must exceed that budget substantially, pushing the resource question outside our physics rather than answering it. No physical system can fully simulate itself, which generates the infinite-regress problem: what is the computer made of, and what simulates the simulator? Wilczek’s blunt formulation — “the idea that the physical world we experience is a computer simulation begs a basic question: what is the computer made of?” — admits no clean answer. Either the chain is infinite, or it terminates in an unexplained base reality from which we are distant; in the latter case, Ockham’s razor suggests stopping at our own.

David Kipping, Columbia astrophysicist, applied formal Bayesian reasoning in a 2020 paper, “A Bayesian Approach to the Simulation Argument” (Universe 6:109; arXiv:2008.12254). Using model averaging and the notion that most simulated worlds are nulliparous — lacking the resources to spawn further simulations — he concluded that “the probability that we are sims is in fact less than 50%, tending towards that value in the limit of an infinite number of simulations.” The moment humanity itself births a conscious simulation, the odds invert dramatically. Bostrom has responded that Kipping’s use of the principle of indifference is “rather shaky.”
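Kipping’s headline result has a simple structural core, which can be caricatured in a few lines (a toy model of mine, far cruder than the paper’s model averaging): place an agnostic prior of one half on the hypothesis that simulations exist at all; conditional on that hypothesis, with one base reality hosting N simulated worlds of comparable population, the simulated share of observers is N/(N + 1). The resulting probability of being simulated sits strictly below one half and approaches it only as N grows without bound.

```python
def p_simulated(prior: float, n_sims: int) -> float:
    """Toy posterior probability of being simulated: a prior on the
    hypothesis that simulations exist at all, times the simulated share
    of observers (n_sims simulated worlds beside one base reality)."""
    return prior * n_sims / (n_sims + 1)

for n in (1, 10, 1_000_000):
    print(n, p_simulated(0.5, n))   # 0.25, then creeping toward (never reaching) 0.5
```

The toy reproduces the qualitative conclusion — below 50 per cent, tending to it in the limit — though Kipping’s actual calculation weighs parous and nulliparous simulation classes rather than assuming this single flat structure.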

Other critics round out the opposition. Harvard’s Lisa Randall rated the probability at the 2016 debate as “effectively zero,” adding: “I actually have a problem with that. We mostly are interested in ourselves. I don’t know why this higher species would want to simulate us.” Columbia mathematician Peter Woit has characterized the argument as “remarkably unromantic” creationism in programmer’s clothing. Sean Carroll points to a self-undermining feature: if we are typical observers and we cannot simulate universes, this contradicts the argument’s premise that most observers are simulated.

Is the simulator God or an advanced civilization?

Bostrom himself anticipated the theological question. In Section VI of his 2003 paper, he wrote: “Although all the elements of such a system can be naturalistic, even physical, it is possible to draw some loose analogies with religious conceptions of the world. In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are ‘omnipotent’ in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are ‘omniscient’ in the sense that they can monitor everything that happens.” He even entertained a naturalistic afterlife, since simulators “could resurrect us.”

The structural affinity is old. Plato’s prisoners in Republic VII took shadows on a cave wall for reality; genuine knowledge required the soul’s “turning around” to face the true source. Descartes, in the 1641 Meditations, imagined “some malicious demon of the utmost power and cunning” deceiving him about “the sky, the air, the earth, colours, shapes, sounds and all external things.” Hindu Advaita Vedānta teaches that the phenomenal world is māyā — appearance superimposed on Brahman — and that liberation is recognition of this fact. Gnostic cosmology, in texts like the Apocryphon of John, identified the material world as the flawed construction of a lesser deity, the Demiurge, ignorant of the true transcendent godhead; salvation was gnōsis, the recognition of the simulacrum and escape to the Pleroma beyond.

These parallels are not incidental. The simulation hypothesis is a creation story told in the vocabulary of the information age, and its simulator inherits many of the theological attributes once reserved for God: creator, sustainer, outside the system, capable of miraculous intervention. But it differs in three crucial respects: the simulator need not be benevolent, need not be ultimate, and need not be singular. David Chalmers, in Reality+ (2022), captures the tonal difference: “Our creator isn’t especially spooky — it’s just some teenage hacker in the next universe up.” The simulator is, in Chalmers’s phrase, a god in a “limited, non-omnipotent sense.” Neil deGrasse Tyson made the same distinction: “We don’t think of ourselves as deities when we program Mario. There’s no reason to think they’re all-powerful just because they control everything we do.”

Rizwan Virk embraces the overlap rather than resisting it. His subtitle — Why AI, Quantum Physics, and Eastern Mystics All Agree We Are in a Video Game — advertises a convergence argument. He treats karma as “an algorithmic feedback system embedded in the code of the universe,” reincarnation as serial avatars for a persisting player-consciousness, and enlightenment as “understanding the code of existence — to see through the illusion and recognize oneself as both the player and the creator of the game.” S. James Gates, by contrast, draws the line: the simulation hypothesis, if held as a belief without empirical anchor, is “equivalent to the notion of a deity,” and should be labeled as such.

Preston Greene of Nanyang Technological University has pushed the ethical consequences of the overlap in a new direction. In an August 2019 New York Times op-ed, “Are We Living in a Computer Simulation? Let’s Not Find Out,” and in Erkenntnis 85:489 (2020), he argues: “If we were to prove that we live inside a simulation, this could cause our creators to terminate the simulation — to destroy our world.” A research simulation loses its value when the subjects detect the experiment. On this view, the question of whether our simulator is a god or a graduate student matters less than the question of whether announcing our discovery would end us — a technological Pascal’s wager.

Does the distinction between a divine and a merely advanced creator matter philosophically? For metaphysics, perhaps not much: a creator outside the system is functionally equivalent whether described in Sanskrit, Scholastic Latin, or Python. For ethics, it matters a great deal. A benevolent God grounds moral realism in love; an indifferent posthuman may ground it in nothing at all. For theology, the simulation hypothesis is most congenial to Neoplatonism and Gnosticism — hierarchical emanation rather than single sovereign creation — and least congenial to classical monotheism, which asserts the uniqueness and ultimacy of its God. Paul Davies, reflecting in The Mind of God (1992), put the ambivalence well: “Whether one wishes to call that deeper level ‘God’ is a matter of taste and definition.”

Epilogue: meaning in a rendered world

Suppose the hypothesis is true. What would change? Less, perhaps, than one expects. A simulated apple still nourishes a simulated body; simulated love still moves simulated hearts; simulated suffering still hurts. The ontological status of the substrate does not revise the phenomenology of the experience. Bostrom himself has argued that we ought to continue to live as if nothing depended on the answer — because, experientially, nothing does.

Yet something is different. The simulation hypothesis dissolves a distinction Western thought has struggled to maintain for four centuries: the bright line between mind and world, observer and observed, the dreamer and the dream. Quantum mechanics had already blurred that line; the holographic principle blurred it further; the simulation hypothesis erases it. If our world is code, the code was written — by us in some future loop, by beings we cannot conceive, or by a Demiurge who, in the end, is only what the pre-scientific traditions always named: that which lies on the far side of the veil.

The deeper consequence is a relocation of the human condition. To be inside a simulation is to be, at last, metaphysically humble — not at the center of a cosmos made for us, but one node in a computation whose purpose, if any, we cannot read from within. And yet, paradoxically, this humility is accompanied by a strange dignity. Whoever, or whatever, caused the program to run, the program includes beings who ask whether they are a program. That reflexive turn — the capacity of the simulated to suspect its own simulation — is not a glitch. It is the argument’s most striking feature, and perhaps its most hopeful one. The universe, whether computed or uncaused, has produced minds that can inquire into it; and inquiry, in any reality, remains the closest thing we have to a compass.

Whether we live in base reality or the basement of a stacked server, the ethical injunction is the same one Plato’s freed prisoner faced on returning to the cave: to think clearly, to live well, and to tell the truth about the shadows. The simulator, if there is one, has granted us that much. What we do with it is the only variable still running on our side of the screen.
