What does it mean for a sequence of 0s and 1s to be random? One way to answer this question is to use tools from mathematical logic, specifically computability theory: a sequence is random if it contains no regularities that can be detected by an idealized computer with no time or space limitations. Over the past sixty years, this research area has matured and developed connections to other areas of mathematical logic and to mathematics more broadly. Our book Algorithmic Randomness: Progress and Prospects provides a summary of these connections as well as an introduction to the field.
In 1965, the mathematician A.N. Kolmogorov gave a precise definition of randomness for finite strings in this computational vein, defining a string to be random if it cannot be significantly compressed by any Turing machine (an abstract model of computation). An alternative definition of randomness for both finite strings and infinite sequences, based on computably presented statistical tests, was offered by Per Martin-Löf in 1966.
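In symbols (a standard formulation, supplied here for concreteness rather than quoted from the book): fix a universal Turing machine \(U\) and define the Kolmogorov complexity of a finite string \(x\) as the length of its shortest description,
\[ C(x) = \min\{\, |p| : U(p) = x \,\}. \]
A string \(x\) then counts as incompressible, and hence random in Kolmogorov’s sense, when \(C(x) \ge |x| - c\) for some small constant \(c\): no program appreciably shorter than \(x\) itself outputs \(x\).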
How do these two approaches relate to one another? First, in the case of finite strings, Martin-Löf showed that the strings that are statistically random in his sense are precisely those that are incompressible in Kolmogorov’s sense. For infinite sequences, C.P. Schnorr and Leonid Levin independently proved in the early 1970s that, using a variant of Kolmogorov’s measure of complexity, a sequence is random in Martin-Löf’s sense exactly when all of its finite initial segments are incompressible. Schnorr also proved these notions equivalent to a third, formulated in terms of unpredictability via effective betting strategies.
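Stated precisely (a standard rendering of the Levin-Schnorr theorem; the notation \(K\) for prefix-free Kolmogorov complexity and \(X \upharpoonright n\) for the first \(n\) bits of \(X\) is ours, not the post’s): an infinite binary sequence \(X\) is Martin-Löf random if and only if
\[ \exists c \; \forall n \;\; K(X \upharpoonright n) \ge n - c. \]
Schnorr’s betting characterization can be made similarly precise: a martingale is a function \(d : 2^{<\omega} \to [0,\infty)\) obeying the fairness condition \(d(\sigma) = \tfrac{1}{2}\bigl(d(\sigma 0) + d(\sigma 1)\bigr)\), and \(X\) is Martin-Löf random exactly when no suitably effective (computably enumerable) martingale \(d\) satisfies \(\limsup_n d(X \upharpoonright n) = \infty\).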
While research in this area has continued steadily since the notion of algorithmic randomness was formalized, a flurry of activity began in the early 2000s. This research originally focused on the relationship between randomness and classical computability theory: How computationally powerful can a random sequence be, and how does randomness interact with other computability-theoretic concepts? More recently, the focus has expanded to, for instance, the different ways randomness can be relativized to an oracle or formulated in terms of different probability measures.
Researchers in algorithmic randomness have also begun to consider the relationship between analysis and randomness. Almost all sequences are random, and many theorems in analysis hold for almost all real numbers: Can we say that a certain kind of function is differentiable at exactly the random points, or that a certain kind of function’s Fourier series converges on exactly the random points? These types of questions have also been fruitfully investigated, revealing that different notions of randomness capture different kinds of typical behavior in analysis as well as other areas of classical mathematics.
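To give one representative result of this kind (stated here for concreteness, not quoted from the post): a theorem of Brattka, Miller, and Nies shows that
\[ x \in [0,1] \text{ is computably random} \iff \text{every nondecreasing computable } f : [0,1] \to \mathbb{R} \text{ is differentiable at } x. \]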
Another recent avenue of investigation is the definition of randomness in “higher” and “lower” contexts. What would it mean to define randomness in the context of effective descriptive set theory, where we can use sets given by higher-order definitions, or in the context of computational complexity theory, where we limit ourselves by imposing resource bounds on the computations used to detect randomness? Drawing on tools in both of these contexts has greatly enriched the study of randomness.
Much of this recent work is surveyed in our edited collection Algorithmic Randomness: Progress and Prospects. We hope it provides not only an introduction to algorithmic randomness in general but also a sense of the current work in the field and potential future research directions.