The following Test of Time Awards were given at STOC 2024.

The **30-year Test-of-Time award** recognizes three seminal papers published in 1994 and 1990:

- "Balanced Allocations", by Y. Azar, A.Z. Broder, A.R. Karlin, and E. Upfal (1994);
- ".878-Approximation Algorithms for MAX CUT and MAX 2SAT", by M.X. Goemans and D.P. Williamson (1994); and
- "An Optimal Algorithm for On-line Bipartite Matching", by R.M. Karp, U.V. Vazirani, V.V. Vazirani (1990).

Azar, Broder, Karlin, and Upfal considered the natural balls-and-bins load-balancing problem and pioneered the power-of-two-choices paradigm: two bins are chosen uniformly at random, and the next ball is placed in the less loaded of the two. When placing $n$ balls in $n$ bins, this yields, with high probability, an allocation in which the most loaded bin has only $\ln\ln n/\ln 2 + O(1)$ balls, an extraordinarily surprising exponential improvement over placing each ball in a single uniformly random bin, where the maximum load is $\Theta(\ln n/\ln\ln n)$. This spurred a long line of research applying the analogous approach, with similarly strong results, in far more general settings, and it remains one of the most effective algorithmic load-balancing tools.
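The contrast between one and two choices is easy to see empirically. The following minimal sketch (not the paper's analysis, just a simulation) throws $n$ balls into $n$ bins under both policies and reports the maximum load:

```python
import random

def max_load(n, choices, rng):
    """Throw n balls into n bins; each ball samples `choices` bins
    uniformly at random and lands in the currently least loaded one."""
    bins = [0] * n
    for _ in range(n):
        candidates = [rng.randrange(n) for _ in range(choices)]
        best = min(candidates, key=lambda b: bins[b])
        bins[best] += 1
    return max(bins), bins

rng = random.Random(0)
one, bins1 = max_load(100_000, 1, rng)   # classical uniform placement
two, bins2 = max_load(100_000, 2, rng)   # power of two choices
print("one choice, max load:", one)
print("two choices, max load:", two)
```

With $n = 10^5$ the two-choice maximum load is typically a small constant (around 4), while the single-choice maximum load is noticeably larger, matching the exponential gap described above.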

In their landmark paper, Goemans and Williamson gave an elegant and highly impactful approximation algorithm for the Max-Cut problem. This algorithm partitions the vertex set of an undirected graph so that the weight of the edges crossing the cut is guaranteed to be at least 0.878 times the weight of a maximum cut. Goemans and Williamson used an upper-bounding semidefinite programming relaxation together with a simple randomized rounding scheme, splitting the vertices with a random hyperplane, to obtain a cut of expected weight at least 0.878 times the optimal value of this relaxation. This moved semidefinite programming into the mainstream set of tools in the theory of approximation algorithms.
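The rounding step is simple enough to sketch directly. The code below assumes a unit-vector embedding is already given (solving the SDP itself is omitted); for the triangle $K_3$ it is a known fact that the optimal SDP solution places the three vectors at 120-degree angles in the plane, so we can use that embedding and round with a random hyperplane:

```python
import math
import random

def random_hyperplane_cut(vectors, rng):
    """Round a unit-vector embedding {i: v_i} to a cut: draw a random
    Gaussian vector r and put vertex i on the side given by sign(<v_i, r>)."""
    d = len(next(iter(vectors.values())))
    r = [rng.gauss(0.0, 1.0) for _ in range(d)]
    return {i: sum(vi * ri for vi, ri in zip(v, r)) >= 0
            for i, v in vectors.items()}

def cut_weight(edges, side):
    """Total weight of edges whose endpoints fall on opposite sides."""
    return sum(w for (u, v, w) in edges if side[u] != side[v])

# Optimal SDP embedding for the triangle: three unit vectors at 120 degrees.
vecs = {i: (math.cos(2 * math.pi * i / 3), math.sin(2 * math.pi * i / 3))
        for i in range(3)}
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]

rng = random.Random(1)
avg = sum(cut_weight(edges, random_hyperplane_cut(vecs, rng))
          for _ in range(2000)) / 2000
print("average cut weight over 2000 roundings:", avg)
```

Each edge spans a 120-degree angle, so a random hyperplane cuts it with probability $\arccos(-1/2)/\pi = 2/3$; the expected cut weight is $3 \cdot 2/3 = 2$, which is exactly the maximum cut of the triangle.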

The paper of Karp, Vazirani, and Vazirani was years ahead of its time, proposing a natural on-line setting of the maximum-cardinality matching problem in bipartite graphs, in which the vertices of one side are known in advance, while those of the other side arrive online, each with its adjacent edges specified; the algorithm must make an immediate and irrevocable decision about which, if any, neighbor to match. The Karp-Vazirani-Vazirani paper shows that no deterministic algorithm can achieve competitive ratio better than ½, and shows that randomly prioritizing the known side and always matching each arriving vertex to its highest-priority available neighbor yields a matching whose expected size is at least a $1-1/e$ fraction of the optimum (as well as proving that this is the best expected competitive ratio possible). This paper set the stage for an industry not yet born: it became the starting point for matching adwords in web advertising.
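The RANKING algorithm described above fits in a few lines. The following sketch (with a small illustrative instance, not taken from the paper) draws a single random priority order over the offline side and greedily matches each arrival:

```python
import random

def ranking(offline, arrivals, rng):
    """KVV RANKING: fix one random priority order over the offline side;
    each arriving online vertex is matched to its highest-priority
    still-unmatched neighbor, or left unmatched if none is free."""
    priority = {u: i for i, u in enumerate(rng.sample(offline, len(offline)))}
    matched = set()
    matching = []
    for v, nbrs in arrivals:
        free = [u for u in nbrs if u not in matched]
        if free:
            u = min(free, key=priority.__getitem__)  # highest priority = rank 0
            matched.add(u)
            matching.append((u, v))
    return matching

offline = ["a", "b", "c"]
# Each online vertex arrives with its list of offline neighbors.
arrivals = [("x", ["a", "b"]), ("y", ["a"]), ("z", ["b", "c"])]
m = ranking(offline, arrivals, random.Random(2))
print(m)
```

Note that the randomness is drawn once, up front, over the known side; the online decisions themselves are deterministic given that order, which is what makes the $1-1/e$ analysis possible.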

The **20-year Test-of-Time award** recognizes two seminal papers that were published in STOC 2004 and 2000:

- "On Coresets for $k$-Means and $k$-Median Clustering", by S. Har-Peled and S. Mazumdar (2004); and
- "The Small-World Phenomenon: An Algorithmic Perspective", by J. Kleinberg (2000).

The paper of Har-Peled and Mazumdar introduces a method for obtaining highly space- and time-efficient algorithms for the classic $k$-means and $k$-median clustering problems, in which, given a set of $n$ points in ${\mathbb R}^d$, a set of $k$ medians (means) is chosen so as to minimize the total distance (squared distance) of the input points to their nearest median (mean). The paper develops the notion of a $(k, \epsilon)$-coreset: a set of weighted points such that the cost of any solution on the coreset is the same, up to a $(1\pm\epsilon)$ factor, as on the whole input point set. Given an input of $n$ points, the algorithm runs in $O(n \log n)$ time and produces a coreset of size roughly $O(1/\epsilon)^d k\log n$. This sparked a long line of research yielding refinements and improvements; furthermore, the ideas introduced in this paper have had profound impact, from both theoretical and practical perspectives, in new models of computation such as streaming and several distributed models.

The pathbreaking paper of Kleinberg adds an algorithmic dimension to earlier work, dating back to the experiments of Milgram, which had yielded existential small-diameter results for classes of random networks. Kleinberg's seminal result provided the first theoretical algorithmic understanding by showing that it is possible for participants in an $n \times n$ grid to route a message in $O(\log^2 n)$ hops by augmenting the grid with a small number of long-range random links and using a simple greedy routing strategy. This gave rise to a deep and rich literature exploring this phenomenon in far more general and practically relevant settings.
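A minimal simulation of this routing process, assuming one long-range contact per node sampled with probability proportional to (lattice distance)$^{-2}$ (Kleinberg's harmonic $r=2$ regime), might look as follows. For simplicity the long-range link of a node is sampled lazily when the message first reaches it:

```python
import random

def greedy_route(n, src, dst, rng):
    """Greedy routing on an n x n grid: each visited node forwards the
    message to whichever of its contacts (four grid neighbors plus one
    long-range contact drawn with probability ~ d^-2) is closest to dst."""
    def d(a, b):                       # lattice (Manhattan) distance
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    hops = 0
    cur = src
    while cur != dst:
        x, y = cur
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        # Sample this node's long-range contact on demand.
        others = [(i, j) for i in range(n) for j in range(n) if (i, j) != cur]
        weights = [d(cur, o) ** -2 for o in others]
        nbrs.append(rng.choices(others, weights=weights)[0])
        cur = min(nbrs, key=lambda v: d(v, dst))   # greedy step
        hops += 1
    return hops

rng = random.Random(4)
h = greedy_route(30, (0, 0), (29, 29), rng)
print("hops:", h)
```

Since some grid neighbor is always one step closer to the destination, the distance to the target strictly decreases every hop, so the route always terminates; the long-range links are what shrink the typical hop count toward the polylogarithmic bound.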

The **10-year Test-of-Time award** recognizes two seminal papers published in STOC 2014 and 2010:

- "How to Use Indistinguishability Obfuscation: Deniable Encryption, and More", by A. Sahai and B. Waters (2014); and
- "A Sparse Johnson-Lindenstrauss transform", by A. Dasgupta, R. Kumar, and T. Sarlós, (2010).

The Sahai-Waters paper changed the landscape of cryptography by developing ingenious new techniques for leveraging indistinguishability obfuscation (IO) towards powerful cryptographic primitives. This paved the way to numerous applications in cryptography and beyond, and served as a major catalyst for the sequence of works that led to the 2021 breakthrough of basing indistinguishability obfuscation on well-studied assumptions. Sahai and Waters demonstrated the surprising power of IO to serve as a substitute for ideal (but unrealizable) flavors of obfuscation in many cryptographic applications. This includes not only standard public-key encryption schemes, but also first-of-their-kind constructions of more advanced cryptographic primitives. One prominent example, featured in the paper title, is settling the longstanding open problem of realizing deniable encryption: a public-key encryption scheme in which, given a ciphertext, one can produce fake encryption randomness that consistently explains it as an encryption of any desired plaintext.

The paper of Dasgupta, Kumar, and Sarlós introduced the Sparse Johnson-Lindenstrauss Transform (SJLT), the first dramatic leap in the asymptotic computational efficiency of the JL Transform. In doing so, it gave a rigorous theoretical basis for a widely employed technique in machine learning, introducing the key technique that was developed and refined further in subsequent work that led to optimal results both for the classic JL Transform and for its applications in computational linear algebra. This paper was the first to raise the question of whether the $k \times d$ projection matrix needs to have $\Omega(k) = \Omega(\epsilon^{-2})$ non-zeros per column. The paper showed how to construct a matrix with $O(\epsilon^{-1} \log k)$ non-zeros per column, while still preserving the conclusions of the JL Transform. In particular, the projection of an input point $x$ can be computed in time $O(\epsilon^{-1}\,\mathtt{nnz}(x))$, where $\mathtt{nnz}(x)$ is the number of nonzero entries in $x$, thus fully taking advantage of any sparsity in the input!
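The mechanics, a projection matrix with few nonzeros per column, applied by touching only the nonzero coordinates of $x$, can be sketched as follows. This is a simplified stand-in (exactly $s$ random rows per column with $\pm 1/\sqrt{s}$ entries), not the paper's precise distribution:

```python
import math
import random

def sparse_jl_matrix(k, d, s, rng):
    """Build a k x d projection matrix column by column: each column has
    exactly s nonzeros, in s distinct random rows, with values +-1/sqrt(s).
    (A simplified sketch in the spirit of sparse JL, not the exact DKS
    construction.)  Stored column-sparse: cols[j] = [(row, value), ...]."""
    cols = []
    for _ in range(d):
        rows = rng.sample(range(k), s)
        cols.append([(r, rng.choice((-1.0, 1.0)) / math.sqrt(s)) for r in rows])
    return cols

def project(cols, x_sparse, k):
    """Project a sparse vector given as {coordinate: value}.  The work is
    O(s * nnz(x)): only the nonzero coordinates of x are ever touched."""
    y = [0.0] * k
    for j, xj in x_sparse.items():
        for r, val in cols[j]:
            y[r] += val * xj
    return y

rng = random.Random(5)
k, d, s = 64, 10_000, 8
cols = sparse_jl_matrix(k, d, s, rng)
x = {17: 1.0, 4321: -2.0, 9999: 0.5}       # a very sparse input point
y = project(cols, x, k)
orig = math.sqrt(sum(v * v for v in x.values()))
proj = math.sqrt(sum(v * v for v in y))
print("original norm:", orig, "projected norm:", proj)
```

A dense projection would cost $O(k)$ work per nonzero of $x$; here each nonzero costs only $s \ll k$ updates, which is exactly the speedup the column-sparsity question is about.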

**Award Committee:**

Sandy Irani (UC Irvine)

Michael Goodrich (UC Irvine)

Mike Paterson (Warwick)

David Shmoys (Cornell, chair)