TCS for All Meeting 2023 Program

Thursday, June 22, 2023, 2-4pm

2pm-3pm ET: Rising Stars talks:

2pm Aarushi Goel (NTT Research)

2:12 pm Shubhada Agrawal (Georgia Institute of Technology)

2:24 pm June Vuong (Stanford University)

2:36 pm Dorna Abdolazimi (University of Washington)

2:48 pm Rucha Kulkarni (UIUC)

3pm-3:10pm: Break

3:10 pm-4pm: Inspirational talk by Dana Randall (Georgia Institute of Technology)

Talk and speaker information

Inspirational Talk:

Speaker: Dana Randall (Georgia Institute of Technology)

Title: Random Walks and Emergent Phenomena

Sampling algorithms based on Markov chains are fundamental in many areas of science. The idea is to perform a random walk among the elements in a large state space so that samples chosen from close to stationarity let us extract useful information about the distributions. Often there is a parameter of the system (typically related to temperature) so that at low values natural chains converge rapidly while at high values they converge slowly, requiring exponential time. We will explain this phenomenon, giving examples from the natural and social sciences, and show how such phase changes can form the basis of emergent computation for computationally bounded agents in distributed systems.
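For illustration only (this is a textbook device, not one of the constructions from the talk), here is a minimal Metropolis-style random walk on a tiny state space: the chain proposes nearest-neighbor moves and filters them so that its stationary distribution is proportional to a given weight vector, and long-run visit frequencies recover that distribution.

```python
import random

def metropolis_walk(weights, steps, seed=0):
    """Lazy random walk on the path {0, ..., n-1} with a Metropolis filter,
    so the stationary distribution is proportional to `weights`.
    Returns the empirical visit frequencies over `steps` moves."""
    rng = random.Random(seed)
    n = len(weights)
    state = 0
    counts = [0] * n
    for _ in range(steps):
        # Propose a lazy nearest-neighbor move: left, stay, or right.
        proposal = state + rng.choice([-1, 0, 1])
        if 0 <= proposal < n:
            # Accept with probability min(1, pi(proposal) / pi(state)).
            if rng.random() < min(1.0, weights[proposal] / weights[state]):
                state = proposal
        counts[state] += 1
    return [c / steps for c in counts]

# Target distribution proportional to (1, 2, 4), i.e. (1/7, 2/7, 4/7).
freqs = metropolis_walk([1.0, 2.0, 4.0], steps=200_000)
```

Run long enough ("close to stationarity"), the frequencies approach (1/7, 2/7, 4/7); how long "long enough" is, as a function of a temperature-like parameter, is exactly the fast/slow mixing question the talk addresses.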

Bio: Dana Randall is a Professor of Computer Science and Adjunct Professor of Mathematics at the Georgia Institute of Technology.  She obtained her A.B. in Mathematics at Harvard University and her Ph.D. in Computer Science at the University of California at Berkeley before holding postdoctoral positions at the Institute for Advanced Study and DIMACS in Princeton.  She has served as inaugural co-executive director and co-founder of the Institute for Data Engineering and Science (IDEaS), director of the Algorithms and Randomness Center (ARC), and ADVANCE Professor for diversity, equity, and inclusion for the College of Computing at Georgia Tech.  Dr. Randall is a fellow of the American Mathematical Society, a National Associate of the National Academies, and has won numerous awards for research, teaching and service.  Her research in randomized algorithms and stochastic processes bridges computer science, discrete mathematics, and statistical physics, with a focus on emergent phenomena in computation.

Rising Stars

Speaker: Aarushi Goel (NTT Research)

Title: Zero-Knowledge Proofs of Training

Abstract: How can a model owner prove that they trained their model according to the correct specification? More importantly, how can they do so while preserving the privacy of the underlying dataset and the final model? We study this problem and formulate the notion of a zero-knowledge proof of training (zkPoT), which formalizes rigorous security guarantees that should be achieved by a privacy-preserving proof of training. We then design a novel, provably secure zkPoT protocol based on MPC-in-the-head techniques combined with ideas from the zkSNARKs literature, achieving the best of both worlds in terms of communication efficiency and computational resources. Importantly, our protocol is streaming-friendly: it does not require RAM proportional to the size of the circuit being trained, and hence can be adapted to the requirements of available hardware. We implement and evaluate our protocol for training a logistic regression model and estimate that, for a dataset of 1 million records with 1024 features, the online prover time is about one hour.

This is based on joint work with Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Guru Vamsi Policharla, and Mingyuan Wang.

Bio: Aarushi Goel is a postdoc at NTT Research, mentored by Prof. Sanjam Garg. She received her PhD in Computer Science from Johns Hopkins University in 2022, where she was advised by Prof. Abhishek Jain. Her research interests are in cryptography and the related areas of security and theoretical computer science.

Speaker: Shubhada Agrawal (Georgia Institute of Technology)

Title: A Tale of Heavy Tails in Multi-Armed Bandits

Abstract: The multi-armed bandit (MAB) is a popular framework for sequential decision-making in an uncertain environment. In the classical regret-minimization setup of MAB, the algorithm has access to a fixed, finite set of K unknown, independent probability distributions, or arms. At each time step, having observed the outcomes of all the previous actions, the algorithm chooses one of the K arms and receives an independent sample drawn from the underlying distribution, which it views as a reward. Its goal is to maximize the accumulated reward on average.
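To make the interaction protocol concrete, here is a toy epsilon-greedy bandit loop (a deliberately simple baseline, not the asymptotically optimal algorithms discussed in the talk): at each step the algorithm either explores a random arm or exploits the arm with the best empirical mean, observing one reward per pull.

```python
import random

def epsilon_greedy_bandit(arms, horizon, epsilon=0.1, seed=0):
    """Epsilon-greedy for the K-armed bandit protocol: `arms[k]` is a
    function returning one random reward from arm k. Returns the total
    reward and the number of pulls of each arm."""
    rng = random.Random(seed)
    k = len(arms)
    counts = [0] * k
    means = [0.0] * k
    total = 0.0
    for t in range(horizon):
        # Explore with probability epsilon (and for the first k rounds),
        # otherwise exploit the arm with the best empirical mean.
        if t < k or rng.random() < epsilon:
            arm = rng.randrange(k)
        else:
            arm = max(range(k), key=lambda i: means[i])
        reward = arms[arm]()
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # running average
        total += reward
    return total, counts

# Example: two Bernoulli arms with success probabilities 0.2 and 0.8.
arm_rng = random.Random(42)
arms = [lambda: float(arm_rng.random() < 0.2),
        lambda: float(arm_rng.random() < 0.8)]
total_reward, pull_counts = epsilon_greedy_bandit(arms, horizon=5000)
```

Over 5000 rounds the better arm ends up pulled far more often; quantifying the unavoidable loss from exploration, as a function of the arm distributions, is what regret lower bounds capture.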

Variants of this classical formulation are widely studied in the literature. Tight lower bounds and optimal algorithms have been developed under the assumption that the arm distributions are either sub-Gaussian or come from a single-parameter exponential family (SPEF), for example, Gaussian distributions with a known variance or Bernoulli distributions. However, in practice, these distributional assumptions may not hold. Developing lower bounds and optimal algorithms for general distributions has largely remained open, mainly because the analysis in this generality requires new tools.

In this talk, I will present an asymptotically-optimal algorithm for MAB with heavy-tailed distributions for the regret-minimization framework. A crucial component of designing an optimal algorithm for MAB is constructing tight, anytime-valid confidence intervals for the mean. To this end, we will look at a novel approach for estimating the mean of a heavy-tailed distribution, which may be of independent interest.
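As background for the mean-estimation theme (this is the classic median-of-means baseline, not the novel estimator from the talk): splitting heavy-tailed samples into blocks and taking the median of the block means concentrates around the true mean far more reliably than the raw empirical mean.

```python
import random
import statistics

def median_of_means(samples, num_blocks):
    """Classic median-of-means estimator: split the samples into blocks,
    average each block, and return the median of the block means.
    Concentrates well whenever the distribution has a finite variance,
    even if higher moments are infinite."""
    block_size = len(samples) // num_blocks
    block_means = [
        statistics.fmean(samples[i * block_size:(i + 1) * block_size])
        for i in range(num_blocks)
    ]
    return statistics.median(block_means)

# Heavy-tailed example: Pareto(2.5) samples have true mean 2.5/1.5 = 5/3.
rng = random.Random(1)
samples = [rng.paretovariate(2.5) for _ in range(10_000)]
estimate = median_of_means(samples, num_blocks=20)
```

The harder problem the talk targets is making such estimates anytime-valid, i.e., confidence intervals that hold simultaneously over all sample sizes as the bandit algorithm keeps collecting data.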

Bio: Shubhada Agrawal is a Herbert A. Johnson/ARC Postdoctoral Fellow at the Georgia Institute of Technology. She completed her PhD at TIFR Mumbai, India, supported by a Google PhD Fellowship in Machine Learning. Before joining TIFR, she worked as a high-frequency trading analyst for two years. She completed her bachelor’s and master’s degrees in Mathematics and Computing at IIT Delhi. Her research interests lie broadly in the area of learning theory, with a focus on sequential decision-making under uncertainty. Of late, she is also interested in problems at the intersection of robust statistics and machine learning, as well as in stochastic approximation algorithms.

Speaker: June Vuong

Title: A toolbox for sampling

Abstract: Sampling is a fundamental task with many applications in algorithm design and in machine learning. Sampling from high-dimensional discrete distributions, such as Ising models, has long been studied for its applications in statistical physics, image modeling, and neural networks, as well as its connection to complexity theory. While Markov chain Monte Carlo (MCMC) is a widely used method for sampling, many gaps remain in understanding its performance. 

In this talk, I will present a tight analysis of Markov chains for sampling from various discrete distributions, using discrete log-concavity.
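For readers unfamiliar with the Ising example, here is a minimal sketch of Glauber dynamics, the standard single-site MCMC chain whose mixing time such analyses bound (illustrative only; the talk's results concern the analysis, not this code):

```python
import math
import random

def glauber_ising(edges, n, beta, steps, seed=0):
    """Glauber dynamics for a ferromagnetic Ising model on a graph with
    n vertices: repeatedly pick a uniformly random vertex and resample
    its +/-1 spin from its conditional distribution given its neighbors,
    under the measure proportional to exp(beta * sum_{uv} s_u * s_v)."""
    rng = random.Random(seed)
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        v = rng.randrange(n)
        field = sum(spins[u] for u in nbrs[v])
        # Conditional probability that spin v is +1 given its neighbors.
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        spins[v] = 1 if rng.random() < p_plus else -1
    return spins

# One run of the dynamics on a triangle at inverse temperature 0.5.
final_spins = glauber_ising([(0, 1), (1, 2), (0, 2)], n=3, beta=0.5, steps=1000)
```

The chain is easy to run; the hard question, which tools like discrete log-concavity address, is how many steps it needs before its state is close to the Ising distribution.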

Bio: Thuy-Duong “June” Vuong is a fourth-year PhD student at Stanford University advised by Nima Anari and Moses Charikar. She completed her undergraduate degree at MIT in 2019, double majoring in Mathematics and Computer Science. Her research is in designing and analyzing sampling, counting, and optimization algorithms using the geometry of polynomials and high-dimensional expanders. Her research has been supported by a Microsoft graduate fellowship.

Speaker: Dorna Abdolazimi

Title: Stronger Trickle-Down Theorems and Applications 

Abstract: In this talk, I will present a strengthening of the trickle-down theorem for bounding the local spectral expansion of partite simplicial complexes, based on my work with Oveis Gharan [AO’22]. As an immediate application, one obtains significantly better bounds on the local spectral expansion of recently constructed bounded-degree high-dimensional expanders. I will also briefly discuss a generalized matrix trickle-down framework and its applications to sampling graph colorings [ALO’21].

Bio: I am a fourth-year Ph.D. student at the University of Washington, where I am fortunate to be advised by Prof. Shayan Oveis Gharan. I am broadly interested in the design and analysis of approximation algorithms. My main focus has been studying high-dimensional expanders and analyzing Markov chains using algebraic techniques.

Speaker: Rucha Kulkarni (UIUC)

Title: Traditional and beyond problems for fair indivisible resource allocation

Abstract: Finding fair allocations of indivisible resources is an important problem in algorithmic game theory. The fairness of an allocation can be quantified in multiple ways. One of these is share-based fairness, which defines a fair-share value for each individual. An allocation is called fair if every individual receives a bundle of items worth at least their fair share. Most share-based fairness notions are NP-hard to optimize exactly, so most work focuses on designing approximation algorithms for them. I will highlight my work on designing approximation algorithms for the maximin share (MMS) fair allocation problem.
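To pin down the maximin share notion, here is a brute-force sketch of one agent's MMS value (purely definitional; the talk is about approximation algorithms that avoid exactly this exponential search): the agent imagines partitioning all items into one bundle per agent and keeping the worst bundle, and the MMS is the best such worst-bundle value.

```python
from itertools import product

def mms_share(item_values, n_agents):
    """Maximin share of one agent, by brute force: the best, over all ways
    to partition the items into n_agents bundles, of the value of the
    worst bundle. Exponential time, so tiny instances only."""
    best = 0
    for assignment in product(range(n_agents), repeat=len(item_values)):
        bundle_values = [0] * n_agents
        for item, bundle in enumerate(assignment):
            bundle_values[bundle] += item_values[item]
        best = max(best, min(bundle_values))
    return best

# Items worth (3, 3, 2, 2, 2) split among 2 agents: the best partition is
# {3, 3} vs {2, 2, 2}, so the MMS share is 6.
```

An allocation is MMS-fair if every agent's bundle is worth at least their own `mms_share` value; since exact MMS allocations need not exist, approximation guarantees are the natural target.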

Further, the fair division problem is highly applicable in real-world settings, which motivates several theoretical directions beyond the traditional setting. I will discuss my explorations in two such directions. The first is characterizing classes of instances for which exact MMS fair allocations always exist and can be computed efficiently. The second is designing online algorithms for fair division problems; I have worked on designing such algorithms for fairly and efficiently routing traffic.

Bio: Rucha is a PhD student at UIUC advised by Ruta Mehta and also mentored by Chandra Chekuri and Jugal Garg. Prior to this, she completed her master’s degree in computer science at IIT Bombay, where she was advised by Sundar Vishwanathan and also mentored by Nutan Limaye. Her primary research interests are fair division in algorithmic game theory, submodular optimization, and beyond-worst-case analysis. She is on the job market, seeking postdoctoral positions in these areas.