People whose work I keep returning to. Most sit at the seams between fields — physics and biology, math and computation, energy and information. Reverse chronological, newest first.
Sergey Levine
Berkeley professor whose work casts reinforcement learning, control, and planning as variational inference under one Bayesian frame. The 2018 tutorial Reinforcement Learning and Control as Probabilistic Inference pulled together threads I reach for whenever I think about decision-making under uncertainty.
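The core move of that tutorial, as I read it: attach a binary optimality variable to each timestep, make reward its log-likelihood, and control becomes posterior inference over trajectories. A minimal sketch in standard notation (symbols mine, not quoted from the paper):

$$
p(\mathcal{O}_t = 1 \mid s_t, a_t) \propto \exp\!\big(r(s_t, a_t)\big)
\quad\Longrightarrow\quad
p(\tau \mid \mathcal{O}_{1:T} = 1) \propto p(\tau)\, \exp\!\Big(\textstyle\sum_{t=1}^{T} r(s_t, a_t)\Big)
$$

Inference in this model yields the maximum-entropy flavor of optimal control, and approximating the posterior variationally recovers familiar RL algorithms.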
Blaise Agüera y Arcas (1975– )
Leads machine-learning research at Google’s Paradigms of Intelligence team and writes across AI, biology, philosophy, and art with equal seriousness — Who Are We Now? and a steady stream of essays on what intelligence is and how it emerges. The kind of breadth I want to be capable of.
Michael Levin
Tufts biologist studying bioelectricity, regeneration, and the agency of non-neural tissue — xenobots, anthrobots, and a body of work arguing that cells navigate a platonic space of possible forms. The clearest current case I know that intelligence and goal-directedness predate brains by a long way.
Stephen Wolfram (1959– )
Built Mathematica, then spent decades arguing that simple computational rules — most famously cellular automata like rule 30 — can generate the complexity we see in nature. A New Kind of Science and the Wolfram Physics Project are the most ambitious recent attempts to take “the universe is computation” literally as a research program.
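For a sense of how little machinery is involved, here is a tiny Python sketch of rule 30 (my own illustration, not Wolfram's code); the rule 30 update reduces to left XOR (center OR right):

```python
def rule30_step(cells):
    # Rule 30 on a cyclic row of 0/1 cells: new = left XOR (center OR right).
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# A single 1 in a row of 0s already unfolds into the famous chaotic triangle.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = rule30_step(row)
```

One line of update logic, yet the center column is random enough that early Mathematica used it as a pseudorandom generator. That gap between rule size and output complexity is the whole argument.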
Karl Friston (1959– )
The free energy principle and active inference — a single variational quantity unifying perception, action, and learning under Bayesian inference. Living evidence that big foundational frames are still being written.
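The quantity itself, in standard variational notation (my gloss): free energy upper-bounds surprise, and minimizing it over beliefs is perception while minimizing it over actions is active inference.

$$
F(q) = \mathbb{E}_{q(z)}\!\big[\log q(z) - \log p(x, z)\big]
     = \mathrm{KL}\!\big(q(z)\,\|\,p(z \mid x)\big) - \log p(x) \;\ge\; -\log p(x)
$$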
Geoffrey Hinton (1947– )
Decades of work on neural networks before they were fashionable — Boltzmann machines, backpropagation as a learning rule, and the AlexNet moment that finally vindicated it all. Nobel in physics, 2024. The patience and the breadth — statistical physics into learning into language — are what I keep noticing.
Ray Solomonoff (1926–2009)
Defined a universal prior over computable hypotheses by weighting them by their description length — Solomonoff induction, the Bayes-optimal predictor and one half of AIXI. The original answer to “how should one learn from data?”, and the answer everything else still approximates.
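The prior in symbols, in the standard prefix-machine formulation (not quoted from Solomonoff directly): run every program p on a universal prefix machine U, weight it by 2 to the minus its length, and sum over the programs whose output begins with the data seen so far.

$$
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}
$$

Shorter programs dominate the sum, which is Occam's razor falling out of the definition rather than being assumed.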
Edwin Thompson Jaynes (1922–1998)
Made Bayesian probability and the maximum-entropy principle into a working methodology. Probability Theory: The Logic of Science is one of those books I want to have read three times. Argued, persuasively, that statistical mechanics and inference are the same activity seen from different angles.
Richard Feynman (1918–1988)
QED, path integrals, the diagrams that bear his name. But what inspires me is the style — the insistence on understanding things from scratch, the joy in figuring things out, the conviction that if you cannot explain it simply you do not understand it.
Ilya Prigogine (1917–2003)
Showed that systems far from equilibrium can spontaneously organize into ordered structures by dissipating energy — dissipative structures. Made it possible to talk about life, weather, and cities as physics, not exceptions to it. Nobel in chemistry, 1977.
Claude Shannon (1916–2001)
Invented information theory in one 1948 paper. Defined entropy as the expected log-improbability of a message, and from that single quantity derived channel capacity, compression limits, and error correction: the framework every form of communication has relied on since.
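The definition in symbols, exactly the expected log-improbability reading (base 2 gives bits):

$$
H(X) = -\sum_{x} p(x) \log_2 p(x) = \mathbb{E}\big[-\log_2 p(X)\big]
$$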
Alan Turing (1912–1954)
Defined what computation is. Then asked whether machines could think, and on the side wrote a paper showing how chemical reactions can generate biological pattern (morphogenesis). Range matched only by depth.
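The morphogenesis mechanism fits in one pair of equations, the standard two-species reaction-diffusion form (Turing's 1952 paper analyzes the linearized version): two chemicals react locally and diffuse at different rates, and that difference in rates can destabilize a uniform state into stripes and spots.

$$
\frac{\partial u}{\partial t} = f(u, v) + D_u \nabla^2 u, \qquad
\frac{\partial v}{\partial t} = g(u, v) + D_v \nabla^2 v
$$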
John von Neumann (1903–1957)
The polymath ideal. Foundations of quantum mechanics, the computer architecture every modern machine still uses, game theory, cellular automata, self-replicating machines — all in one career, none of it shallow.
Andrey Kolmogorov (1903–1987)
Soviet polymath. Axiomatized probability theory in 1933, then half a century later co-founded algorithmic information theory — Kolmogorov complexity, the length of the shortest program that produces a string, the substrate Solomonoff induction sits on. Few people have written down so much of what we now take as given.
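The definition takes one line, relative to a fixed universal machine U (changing machines shifts it by at most an additive constant):

$$
K(x) = \min\{\, \ell(p) : U(p) = x \,\}
$$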
R.T. Cox (1898–1991)
Showed that any system of plausible reasoning consistent with a few mild desiderata must obey the rules of probability — Cox’s theorem (1946). The clearest derivation of why probability is the calculus of belief, not just one option among many.
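What the desiderata force, after rescaling plausibilities to [0, 1], is exactly the product and sum rules (notation mine, following Jaynes's presentation):

$$
p(AB \mid C) = p(A \mid BC)\, p(B \mid C), \qquad p(A \mid C) + p(\bar{A} \mid C) = 1
$$

Everything else in probability theory follows from these two.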
Harold Jeffreys (1891–1989)
Geophysicist who, in Theory of Probability (1939), built modern Bayesian statistics — priors, posterior odds, model comparison — decades before any of it was respectable. The Jeffreys prior still bears his name.
George Pólya (1887–1985)
Wrote How to Solve It, the field manual for mathematical thinking. Patterns of Plausible Inference laid out the plausibility-as-extended-logic frame that Jaynes later formalized. Heuristics, plausible reasoning, and combinatorics in equal measure.
Erwin Schrödinger (1887–1961)
Wrote down the wave equation, then in What Is Life? (1944) asked how the laws of physics could give rise to the informational order of living systems. That question helped seed molecular biology, and I think it is still the right question.
Nikola Tesla (1856–1943)
Engineer-visionary with an almost preternatural feel for electromagnetic phenomena. Alternating current, the induction motor, polyphase distribution — the substrate of modern electrification. Imagined wireless power transmission decades before the engineering could follow.
Ludwig Boltzmann (1844–1906)
Built statistical mechanics. S = k log W carved entropy into the bridge between microscopic chaos and macroscopic order — the equation that makes thermodynamics, information theory, and most of what comes after possible.
Thomas Bayes (1701–1761)
British minister and mathematician. An Essay towards solving a Problem in the Doctrine of Chances, published posthumously in 1763, gave us Bayes’ theorem — the rule for updating beliefs in light of evidence. Probabilistic inference, machine learning, and the platonic optimal agent all trace back to one short paper.
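The theorem in modern notation (the essay itself works a special case, inferring a binomial chance from observed successes):

$$
p(H \mid E) = \frac{p(E \mid H)\, p(H)}{p(E)}
$$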
Marcus Aurelius (121–180)
Roman emperor and Stoic. The Meditations — written to himself, never meant for publication — are the clearest reminder I know that the work of being a person is mostly internal, and that most of the obstacles in the way are you. Two millennia later, still useful at 6 AM.
Plato (c. 428–348 BCE)
The Republic, the Theory of Forms, the dialogue as a method of inquiry. Most of Western philosophy is still working in his vocabulary, and the platonic in “platonic ideal” is not metaphorical — it is the conviction that perfect forms exist beyond the noisy instances we encounter, and that thinking is partly the work of remembering them.