Rajesh Jayaram

Senior Research Scientist, Google Research NYC

About

I am a Senior Research Scientist at Google NYC in the Algorithms and Optimization Group. I received my PhD in computer science from Carnegie Mellon in the summer of 2021, where I was fortunate to be advised by David Woodruff. Prior to that, I received my bachelor's degree from Brown University in May 2017.

My academic research background is in sublinear algorithms and high-dimensional geometry — specifically sketching, streaming, and distributed algorithms. At Google, I apply this theoretical foundation to practical problems, including industry-scale vector search, neural embedding model design and training, and inference-time scaling for mathematics and science.

Research Interests

High-Dimensional Geometry & Neural Embeddings

Leveraging techniques from high-dimensional geometry and probability to study the expressivity of neural embedding models and the complexity of geometric optimization tasks (MST, EMD, etc.).

Inference-Time Scaling for Science

Developing inference-time scaling methods to solve complex problems in mathematics and science, and accelerating scientific progress with tools such as automated reviewing (see our ICML and STOC collaborations).

Theory & Practice of Nearest Neighbor Search

Developing faster algorithms in both the theory and practice of NNS, especially for complex metrics (e.g. Earth Mover's Distance, Chamfer). This fuels the development of production-grade vector search systems serving billions of queries, both for (classic) single-vector and multi-vector models.

Sketching & Sublinear Algorithms

Design and analysis of sublinear algorithms for big data, including streaming, distributed computing, and information compression. These techniques have broad applications, from efficient attention mechanisms in large language models to massively parallel clustering and graph algorithms.

Publications

Preprints

CRISP: Clustering Multi-Vector Representations for Denoising and Pruning
With João Veneroso, Jinmeng Rao, Gustavo Hernández Ábrego, Majid Hadian, and Daniel Cer.
Multi-vector models, such as ColBERT, are a significant advancement in neural information retrieval (IR), delivering state-of-the-art performance by representing queries and documents with multiple contextualized token-level embeddings. However, this increased representation size introduces considerable storage and computational overheads which have hindered widespread adoption in practice. A common approach to mitigate this overhead is to cluster the model's frozen vectors, but this strategy's effectiveness is fundamentally limited by the intrinsic clusterability of these embeddings. In this work, we introduce CRISP (Clustered Representations with Intrinsic Structure Pruning), a novel multi-vector training method which learns inherently clusterable representations directly within the end-to-end training process. By integrating clustering into the training phase rather than imposing it post-hoc, CRISP significantly outperforms post-hoc clustering at all representation sizes, as well as other token pruning methods. On the BEIR retrieval benchmarks, CRISP achieves a ~3x reduction in the number of vectors while outperforming the original unpruned model. This indicates that learned clustering effectively denoises the model by filtering irrelevant information, thereby generating more robust multi-vector representations. With more aggressive clustering, CRISP achieves an 11x reduction in the number of vectors with only a $3.6\%$ quality loss.
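For context, a minimal sketch (mine, not the paper's) of the post-hoc clustering baseline that CRISP is compared against: k-means the frozen token embeddings of a document and keep only the centroids. numpy and scikit-learn are assumed, and all names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def compress_multivector(doc_embs: np.ndarray, k: int) -> np.ndarray:
    """Replace an (m, d) set of frozen token embeddings with k centroids."""
    km = KMeans(n_clusters=min(k, len(doc_embs)), n_init=10).fit(doc_embs)
    return km.cluster_centers_

doc = np.random.default_rng(0).normal(size=(64, 128))  # 64 tokens, dim 128
print(compress_multivector(doc, k=8).shape)            # (8, 128): 8x fewer vectors
```

CRISP's point is that training the model to produce clusterable embeddings end-to-end beats applying a compression like this after the fact.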

2026

Near-Optimal Directed Euclidean Spanners in High Dimensions
With Shyamal Patel, Cliff Stein, Erik Waingarten, and Tian Zhang.
STOC 2026

2025

Hierarchical Retrieval: The Geometry and a Pretrain-Finetune Recipe
With Chong You, Ananda Theertha Suresh, Robin Nittka, Felix Yu, and Sanjiv Kumar.
NeurIPS 2025
Dual encoder (DE) models, where matching query and document pairs are embedded into similar vector representations, are widely used in information retrieval due to their simplicity and scalability. However, the Euclidean geometry of the embedding space limits the expressive power of DEs, which may compromise their quality. This paper investigates such limitations in the context of hierarchical retrieval (HR), where the document set has a hierarchical structure and the matching documents for a query are all of its ancestors. We first prove that DEs are feasible for HR as long as the embedding dimension is linear in the depth of the hierarchy and logarithmic in the number of documents. Then we study the problem of learning such embeddings in a standard retrieval setup where DEs are trained on samples of matching query and document pairs. Our experiments reveal a lost-in-the-long-distance phenomenon, where retrieval accuracy degrades for documents further away in the hierarchy. To address this, we introduce a pretrain-finetune recipe that significantly improves long-distance retrieval without sacrificing performance on closer documents. We experiment on a realistic hierarchy from WordNet for retrieving documents at various levels of abstraction, and show that pretrain-finetune boosts the recall on long-distance pairs from 19% to 76%. Finally, we demonstrate that our method improves retrieval of relevant products on a shopping queries dataset.
Approximating High-Dimensional Earth Mover's Distance as Fast as Closest Pair
With Lorenzo Beretta, Vincent Cohen-Addad, and Erik Waingarten.
FOCS 2025
We give a reduction from $(1+\varepsilon)$-approximate Earth Mover's Distance (EMD) to $(1+\varepsilon)$-approximate Closest Pair (CP). As a consequence, we improve the fastest known approximation algorithm for high-dimensional EMD. Here, given $p\in [1, 2]$ and two sets of $n$ points $X,Y \subseteq (\mathbb R^d,\ell_p)$, their EMD is the minimum cost of a perfect matching between $X$ and $Y$, where the cost of matching two vectors is their $\ell_p$ distance. Further, CP is the basic problem of finding a pair of points realizing $\min_{x \in X, y \in Y} \|x-y\|_p$. Our contribution is twofold: we show that if a $(1+\varepsilon)$-approximate CP can be computed in time $n^{2-φ}$, then a $1+O(\varepsilon)$ approximation to EMD can be computed in time $n^{2-Ω(φ)}$; plugging in the fastest known algorithm for CP [Alman, Chan, Williams FOCS'16], we obtain a $(1+\varepsilon)$-approximation algorithm for EMD running in time $n^{2-\tilde{Ω}(\varepsilon^{1/3})}$ for high-dimensional point sets, which improves over the prior fastest running time of $n^{2-Ω(\varepsilon^2)}$ [Andoni, Zhang FOCS'23]. Our main technical contribution is a sublinear implementation of the Multiplicative Weights Update framework for EMD. Specifically, we demonstrate that the updates can be executed without ever explicitly computing or storing the weights; instead, we exploit the underlying geometric structure to perform the updates implicitly.
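For reference, the exact object being approximated fits in a few lines: EMD between two equal-size point sets is a min-cost perfect matching, computable by the Hungarian method in cubic time. A sketch assuming numpy/scipy; this is the brute-force baseline, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def emd(X: np.ndarray, Y: np.ndarray, p: float = 2.0) -> float:
    """Min-cost perfect matching between X and Y under the l_p metric."""
    C = cdist(X, Y, metric="minkowski", p=p)   # n x n cost matrix
    rows, cols = linear_sum_assignment(C)      # exact matching, O(n^3)
    return float(C[rows, cols].sum())

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(100, 32)), rng.normal(size=(100, 32))
print(emd(X, Y))
```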
Metric Embeddings Beyond Bi-Lipschitz Distortion via Sherali-Adams
With Ainesh Bakshi, Vincent Cohen-Addad, Samuel B. Hopkins, and Silvio Lattanzi.
COLT 2025
Metric embeddings are a widely used method in algorithm design, where generally a ``complex'' metric is embedded into a simpler, lower-dimensional one. Historically, the theoretical computer science community has focused on bi-Lipschitz embeddings, which guarantee that every pairwise distance is approximately preserved. In contrast, alternative embedding objectives that are commonly used in practice avoid bi-Lipschitz distortion; yet these approaches have received comparatively less study in theory. In this paper, we focus on Multi-dimensional Scaling (MDS), where we are given a set of non-negative dissimilarities $\{d_{i,j}\}_{i,j\in [n]}$ over $n$ points, and the goal is to find an embedding $\{x_1,\dots,x_n\} \subset \mathbb{R}^k$ that minimizes $$\textrm{OPT}=\min_{x}\mathbb{E}_{i,j\in [n]}\left(1-\frac{\|x_i - x_j\|}{d_{i,j}}\right)^2.$$ Despite its popularity, our theoretical understanding of MDS is extremely limited. Recently, Demaine et al. (arXiv:2109.11505) gave the first approximation algorithm with provable guarantees for this objective, which achieves an embedding in constant dimensional Euclidean space with cost $\textrm{OPT}+ε$ in $n^2\cdot 2^{\textrm{poly}(Δ/ε)}$ time, where $Δ$ is the aspect ratio of the input dissimilarities. For metrics that admit low-cost embeddings, $Δ$ scales polynomially in $n$. In this work, we give the first approximation algorithm for MDS with quasi-polynomial dependency on $Δ$: for constant dimensional Euclidean space, we achieve a solution with cost $O(\log Δ)\cdot \textrm{OPT}^{Ω(1)}+ε$ in time $n^{O(1)} \cdot 2^{\textrm{poly}(\log(Δ)/ε)}$. Our algorithms are based on a novel geometry-aware analysis of a conditional rounding of the Sherali-Adams LP Hierarchy, allowing us to avoid exponential dependency on the aspect ratio, which would typically result from this rounding.
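The objective itself is easy to state in code; a small sanity check (mine, assuming numpy) that evaluates the MDS cost of a candidate embedding, which is zero when the dissimilarities are exactly realizable.

```python
import numpy as np

def mds_cost(x: np.ndarray, d: np.ndarray) -> float:
    """Average of (1 - ||x_i - x_j|| / d[i, j])^2 over pairs i != j."""
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    i, j = np.triu_indices(len(x), k=1)        # each unordered pair once
    return float(np.mean((1.0 - dist[i, j] / d[i, j]) ** 2))

pts = np.random.default_rng(2).normal(size=(50, 3))    # ground-truth points
d = np.linalg.norm(pts[:, None] - pts[None], axis=-1)  # realizable input
print(mds_cost(pts, d))                                # exactly 0.0
```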
Randomized Dimensionality Reduction for Euclidean Maximization and Diversity Measures
With Jie Gao, Benedikt Kolbe, Shay Sapir, Chris Schwiegelshohn, Sandeep Silwal, and Erik Waingarten.
ICML 2025
Randomized dimensionality reduction is a widely used algorithmic technique for speeding up large-scale Euclidean optimization problems. In this paper, we study dimension reduction for a variety of maximization problems, including max-matching, max-spanning tree, max TSP, as well as various measures for dataset diversity. For these problems, we show that the effect of dimension reduction is intimately tied to the \emph{doubling dimension} $λ_X$ of the underlying dataset $X$ -- a quantity measuring intrinsic dimensionality of point sets. Specifically, we prove that a target dimension of $O(λ_X)$ suffices to approximately preserve the value of any near-optimal solution, which we also show is necessary for some of these problems. This is in contrast to classical dimension reduction results, whose dependence increases with the dataset size $|X|$. We also provide empirical results validating the quality of solutions found in the projected space, as well as speedups due to dimensionality reduction.
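As a toy illustration of the setup (mine, assuming numpy/scipy): project the dataset with a Gaussian Johnson-Lindenstrauss map and compare a maximization objective, here the max-weight spanning tree value, before and after; the paper's result is that a target dimension proportional to the doubling dimension already suffices. The dimension 20 below is an arbitrary knob, not a bound from the paper.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def max_spanning_tree_value(X: np.ndarray) -> float:
    D = squareform(pdist(X))
    n, C = len(D), D.max() + 1.0
    W = C - D                            # min-ST of (C - D) = max-ST of D
    np.fill_diagonal(W, 0.0)             # zero = "no edge" for csgraph
    return (n - 1) * C - minimum_spanning_tree(W).sum()

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 1000))
G = rng.normal(size=(1000, 20)) / np.sqrt(20)   # Gaussian JL map to dim 20
print(max_spanning_tree_value(X), max_spanning_tree_value(X @ G))
```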
Unleashing Graph Partitioning for Large-Scale Nearest Neighbor Search
With Laxman Dhulipala, Lars Gottesbüren, and Jakub Łącki.
VLDB 2025
We consider the fundamental problem of decomposing a large-scale approximate nearest neighbor search (ANNS) problem into smaller sub-problems. The goal is to partition the input points into neighborhood-preserving shards, so that the nearest neighbors of any point are contained in only a few shards. When a query arrives, a routing algorithm is used to identify the shards which should be searched for its nearest neighbors. This approach forms the backbone of distributed ANNS, where the dataset is so large that it must be split across multiple machines. In this paper, we design simple and highly efficient routing methods, and prove strong theoretical guarantees on their performance. A crucial characteristic of our routing algorithms is that they are inherently modular, and can be used with any partitioning method. This addresses a key drawback of prior approaches, where the routing algorithms are inextricably linked to their associated partitioning method. In particular, our new routing methods enable the use of balanced graph partitioning, which is a high-quality partitioning method without a naturally associated routing algorithm. Thus, we provide the first methods for routing using balanced graph partitioning that are extremely fast to train, admit low latency, and achieve high recall. We provide a comprehensive evaluation of our full partitioning and routing pipeline on billion-scale datasets, where it outperforms existing scalable partitioning methods by significant margins, achieving up to 2.14x higher QPS at 90% recall@10 than the best competitor.
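A hedged sketch of the modular routing idea (the representative-selection step and all names are illustrative, not the paper's exact method): summarize each shard by a few representative vectors and send a query to the shards whose representatives score highest.

```python
import numpy as np

def route(query: np.ndarray, shard_reps: list, num_probe: int = 2):
    """shard_reps[i] is an (r_i, d) array of representatives for shard i."""
    scores = [float((reps @ query).max()) for reps in shard_reps]
    return np.argsort(scores)[::-1][:num_probe]  # highest-scoring shards

rng = np.random.default_rng(4)
shard_reps = [rng.normal(size=(4, 16)) for _ in range(10)]  # e.g. centroids
print(route(rng.normal(size=16), shard_reps))
```

Because the router only needs per-shard representatives, it composes with any partitioner, including balanced graph partitioning.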
Near-Optimal Spectral Density Estimation via Explicit and Implicit Deflation
With Rajarshi Bhattacharjee, Cameron Musco, Christopher Musco, and Archan Ray.
SODA 2025
We study algorithms for approximating the spectral density of a symmetric matrix $A$ that is accessed through matrix-vector product queries. By combining a previously studied Chebyshev polynomial moment matching method with a deflation step that approximately projects off the largest magnitude eigendirections of $A$ before estimating the spectral density, we give an $ε \cdot σ_\ell(A)$ error approximation to the spectral density in the Wasserstein-$1$ metric using $O(\ell \log n + 1/ε)$ matrix-vector products, where $σ_\ell(A)$ is the $\ell^{th}$ largest singular value of $A$. In the common case when $A$ exhibits fast singular value decay, our bound can be much stronger than prior work, which gives an error bound of $ε \cdot \|A\|_2$ using $O(1/ε)$ matrix-vector products. We also show that it is nearly tight: any algorithm giving error $ε \cdot σ_\ell(A)$ must use $Ω(\ell + 1/ε)$ matrix-vector products. We further show that the popular Stochastic Lanczos Quadrature (SLQ) method matches the above bound, even though SLQ itself is parameter-free and performs no explicit deflation. This bound explains the strong practical performance of SLQ, and motivates a simple variant of SLQ that achieves an even tighter error bound. Our error bound for SLQ leverages an analysis that views it as an implicit polynomial moment matching method, along with recent results on low-rank approximation with single-vector Krylov methods. We use these results to show that the method can perform implicit deflation as part of moment matching.
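The matrix-vector query primitive behind Chebyshev moment matching is simple to sketch (mine, assuming numpy; the deflation step and the density reconstruction are omitted): estimate the moments $\mathrm{tr}(T_k(A))/n$ with Hutchinson probes and the three-term recurrence, so only matvecs with $A$ are needed.

```python
import numpy as np

def chebyshev_moments(matvec, n, K, num_probes=10, seed=0):
    """Estimate tr(T_k(A))/n for k = 0..K; eigenvalues assumed in [-1, 1]."""
    rng = np.random.default_rng(seed)
    moments = np.zeros(K + 1)
    for _ in range(num_probes):
        g = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe
        t_prev, t_cur = g, matvec(g)             # T_0(A) g and T_1(A) g
        moments[0] += g @ t_prev
        moments[1] += g @ t_cur
        for k in range(2, K + 1):                # T_k = 2 A T_{k-1} - T_{k-2}
            t_prev, t_cur = t_cur, 2 * matvec(t_cur) - t_prev
            moments[k] += g @ t_cur
    return moments / (num_probes * n)

A = np.diag(np.linspace(-0.9, 0.9, 500))         # toy spectrum
print(chebyshev_moments(lambda v: A @ v, 500, K=6))
```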
Massively Parallel Minimum Spanning Tree in General Metric Spaces
With Amir Azarmehr, Soheil Behnezhad, Jakub Łącki, Vahab Mirrokni, and Peilin Zhong.
SODA 2025
We study the minimum spanning tree (MST) problem in the massively parallel computation (MPC) model. Our focus is particularly on the *strictly sublinear* regime of MPC where the space per machine is $O(n^δ)$. Here $n$ is the number of vertices and the constant $δ\in (0, 1)$ can be made arbitrarily small. The MST problem admits a simple and folklore $O(\log n)$-round algorithm in the MPC model. When the weights can be arbitrary, this matches a conditional lower bound of $Ω(\log n)$ which follows from the well-known 1vs2-Cycle conjecture. As such, much of the literature focuses on breaking the logarithmic barrier in more structured variants of the problem, such as when the vertices correspond to points in low- [ANOY14, STOC'14] or high-dimensional Euclidean spaces [JMNZ24, SODA'24]. In this work, we focus more generally on metric spaces. Namely, all pairwise weights are provided and guaranteed to satisfy the triangle inequality, but are otherwise unconstrained. We show that for any $\varepsilon > 0$, a $(1+\varepsilon)$-approximate MST can be found in $O(\log \frac{1}{\varepsilon} + \log \log n)$ rounds, which is the first $o(\log n)$-round algorithm for finding any constant approximation in this setting. Other than being applicable to more general weight functions, our algorithm also slightly improves the $O(\log \log n \cdot \log \log \log n)$ round-complexity of [JMNZ24, SODA'24] and significantly improves its approximation from a large constant to $1+\varepsilon$. On the lower bound side, we prove that under the 1vs2-Cycle conjecture, $Ω(\log \frac{1}{\varepsilon})$ rounds are needed for finding a $(1+\varepsilon)$-approximate MST in general metrics. It is worth noting that while many existing lower bounds in the MPC model under the 1vs2-Cycle conjecture only hold against "component stable" algorithms, our lower bound applies to *all* algorithms.

2024

MUVERA: Multi-Vector Retrieval via Fixed Dimensional Encodings
With Laxman Dhulipala, Majid Hadian, Jason Lee, and Vahab Mirrokni.
NeurIPS 2024
Neural embedding models have become a fundamental component of modern information retrieval (IR) pipelines. These models produce a single embedding $x \in \mathbb{R}^d$ per data-point, allowing for fast retrieval via highly optimized maximum inner product search (MIPS) algorithms. Recently, beginning with the landmark ColBERT paper, multi-vector models, which produce a set of embeddings per data point, have achieved markedly superior performance for IR tasks. Unfortunately, using these models for IR is computationally expensive due to the increased complexity of multi-vector retrieval and scoring. In this paper, we introduce MUVERA (MUlti-VEctor Retrieval Algorithm), a retrieval mechanism which reduces multi-vector similarity search to single-vector similarity search. This enables the usage of off-the-shelf MIPS solvers for multi-vector retrieval. MUVERA asymmetrically generates Fixed Dimensional Encodings (FDEs) of queries and documents, which are vectors whose inner product approximates multi-vector similarity. We prove that FDEs give high-quality $ε$-approximations, thus providing the first single-vector proxy for multi-vector similarity with theoretical guarantees. Empirically, we find that FDEs achieve the same recall as prior state-of-the-art heuristics while retrieving 2-5$\times$ fewer candidates. Compared to prior state-of-the-art implementations, MUVERA achieves consistently good end-to-end recall and latency across a diverse set of the BEIR retrieval datasets, achieving an average of 10$\%$ improved recall with $90\%$ lower latency.
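For reference, the multi-vector similarity that FDEs compress into a single inner product is the ColBERT-style sum of per-query-token maximum inner products, computed naively below (a sketch assuming numpy; the FDE construction itself is in the paper and not reproduced here).

```python
import numpy as np

def multivector_sim(Q: np.ndarray, D: np.ndarray) -> float:
    """Q: (q, dim) query embeddings; D: (m, dim) document embeddings."""
    return float((Q @ D.T).max(axis=1).sum())    # sum over q of max over d

rng = np.random.default_rng(5)
Q, D = rng.normal(size=(32, 128)), rng.normal(size=(80, 128))
print(multivector_sim(Q, D))
```

MUVERA builds single vectors whose plain inner product approximates this score, so standard MIPS indices apply unchanged.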
Efficient Centroid-Linkage Clustering
With MohammadHossein Bateni, Laxman Dhulipala, Willem Fletcher, Kishen N Gowda, D Ellis Hershkowitz, and Jakub Łącki.
NeurIPS 2024
We give an efficient algorithm for Centroid-Linkage Hierarchical Agglomerative Clustering (HAC), which computes a $c$-approximate clustering in roughly $n^{1+O(1/c^2)}$ time. We obtain our result by combining a new Centroid-Linkage HAC algorithm with a novel fully dynamic data structure for nearest neighbor search which works under adaptive updates. We also evaluate our algorithm empirically. By leveraging a state-of-the-art nearest-neighbor search library, we obtain a fast and accurate Centroid-Linkage HAC algorithm. Compared to an existing state-of-the-art exact baseline, our implementation maintains the clustering quality while delivering up to a $36\times$ speedup due to performing fewer distance comparisons.
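A naive reference implementation of the merge loop (mine, assuming numpy; quadratic work per step, whereas the paper accelerates the nearest-centroid searches with a dynamic ANN data structure): repeatedly merge the two clusters with closest centroids and replace them by their size-weighted centroid.

```python
import numpy as np

def centroid_hac(X: np.ndarray):
    """Return the merge sequence [(cluster_a, cluster_b, new_id), ...]."""
    clusters = {i: (x.astype(float), 1) for i, x in enumerate(X)}
    merges, next_id = [], len(X)
    while len(clusters) > 1:
        ids = list(clusters)
        cents = np.array([clusters[i][0] for i in ids])
        D = np.linalg.norm(cents[:, None] - cents[None], axis=-1)
        np.fill_diagonal(D, np.inf)
        a, b = np.unravel_index(D.argmin(), D.shape)
        (ca, na), (cb, nb) = clusters.pop(ids[a]), clusters.pop(ids[b])
        clusters[next_id] = ((na * ca + nb * cb) / (na + nb), na + nb)
        merges.append((ids[a], ids[b], next_id))
        next_id += 1
    return merges

print(centroid_hac(np.random.default_rng(6).normal(size=(8, 2))))
```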
Metric Clustering and MST with Strong and Weak Distance Oracles
With MohammadHossein Bateni, Prathamesh Dharangutte, and Chen Wang.
COLT 2024
We study optimization problems in a metric space $(\mathcal{X},d)$ where we can compute distances in two ways: via a ''strong'' oracle that returns exact distances $d(x,y)$, and a ''weak'' oracle that returns distances $\tilde{d}(x,y)$ which may be arbitrarily corrupted with some probability. This model captures the increasingly common trade-off between employing an expensive similarity model (e.g. a large-scale embedding model) and a less accurate but cheaper model. Hence, the goal is to make as few queries to the strong oracle as possible. We consider both so-called ''point queries'', where the strong oracle is queried on a set of points $S \subset \mathcal{X}$ and returns $d(x,y)$ for all $x,y \in S$, and ''edge queries'' where it is queried for individual distances $d(x,y)$. Our main contributions are optimal algorithms and lower bounds for clustering and Minimum Spanning Tree (MST) in this model. For $k$-centers, $k$-median, and $k$-means, we give constant factor approximation algorithms with only $\tilde{O}(k)$ strong oracle point queries, and prove that $Ω(k)$ queries are required for any bounded approximation. For edge queries, our upper and lower bounds are both $\tilde{Θ}(k^2)$. Surprisingly, for the MST problem we give an $O(\sqrt{\log n})$-approximation algorithm using no strong oracle queries at all, and a matching $Ω(\sqrt{\log n})$ lower bound. We empirically evaluate our algorithms, and show that their quality is comparable to that of the baseline algorithms that are given all true distances, while querying the strong oracle on only a small fraction ($<1\%$) of the points.
Parallel and Sequential Hardness of Hierarchical Graph Clustering
With MohammadHossein Bateni, Laxman Dhulipala, Kishen Gowda, D Ellis Hershkowitz, and Jakub Łącki.
ICALP 2024
Average linkage Hierarchical Agglomerative Clustering (HAC) is an extensively studied and applied method for hierarchical clustering. Recent applications to massive datasets have driven significant interest in near-linear-time and efficient parallel algorithms for average linkage HAC. We provide hardness results that rule out such algorithms. On the sequential side, we establish a runtime lower bound of $n^{3/2-ε}$ on $n$-node graphs for sequential combinatorial algorithms under standard fine-grained complexity assumptions. This essentially matches the best-known running time for average linkage HAC. On the parallel side, we prove that average linkage HAC likely cannot be parallelized even on simple graphs by showing that it is CC-hard on trees of diameter $4$. On the possibility side, we demonstrate that average linkage HAC can be efficiently parallelized (i.e., it is in NC) on paths and can be solved in near-linear time when the height of the output cluster hierarchy is small.
Dynamic PageRank: Algorithms and Lower Bounds
With Jakub Łącki, Slobodan Mitrović, Krzysztof Onak, and Piotr Sankowski.
ICALP 2024
We consider the PageRank problem in the dynamic setting, where the goal is to explicitly maintain an approximate PageRank vector $π \in \mathbb{R}^n$ for a graph under a sequence of edge insertions and deletions. Our main result is a complete characterization of the complexity of dynamic PageRank maintenance for both multiplicative and additive ($L_1$) approximations. First, we establish matching lower and upper bounds for maintaining additive approximate PageRank in both incremental and decremental settings. In particular, we demonstrate that in the worst case, $(1/α)^{Θ(\log \log n)}$ update time is necessary and sufficient for this problem, where $α$ is the desired additive approximation. On the other hand, we demonstrate that the commonly employed ForwardPush approach performs substantially worse than this optimal runtime. Specifically, we show that ForwardPush requires $Ω(n^{1-δ})$ time per update on average, for any $δ > 0$, even in the incremental setting. For multiplicative approximations, however, we demonstrate that the situation is significantly more challenging. Specifically, we prove that any algorithm that explicitly maintains a constant factor multiplicative approximation of the PageRank vector of a directed graph must have amortized update time $Ω(n^{1-δ})$, for any $δ > 0$, even in the incremental setting, thereby resolving a 13-year-old open question of Bahmani et al. (VLDB 2010). This sharply contrasts with the undirected setting, where we show that $\mathrm{poly}\log n$ update time is feasible, even in the fully dynamic setting under an oblivious adversary.
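The ForwardPush routine the paper analyzes is the standard local-push method; a textbook-style sketch (parameters illustrative; the paper's contribution is bounds about such methods, not this code) for an unweighted graph with teleport probability alpha.

```python
from collections import defaultdict

def forward_push(adj, source, alpha=0.15, eps=1e-6):
    """Approximate personalized PageRank from `source` via local pushes.
    Assumes every node has at least one out-neighbor."""
    p, r = defaultdict(float), defaultdict(float)
    r[source] = 1.0
    queue = [source]
    while queue:
        u = queue.pop()
        if r[u] < eps * len(adj[u]):
            continue                       # residual too small to push
        mass, r[u] = r[u], 0.0
        p[u] += alpha * mass               # settle an alpha fraction at u
        share = (1 - alpha) * mass / len(adj[u])
        for v in adj[u]:                   # spread the rest to out-neighbors
            r[v] += share
            queue.append(v)
    return p

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(dict(forward_push(adj, source=0)))
```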
Data-Dependent LSH for the Earth Mover's Distance
With Erik Waingarten and Tian Zhang.
STOC 2024
We give new data-dependent locality sensitive hashing schemes (LSH) for the Earth Mover's Distance ($\mathsf{EMD}$), and as a result, improve the best approximation for nearest neighbor search under $\mathsf{EMD}$ by a quadratic factor. Here, the metric $\mathsf{EMD}_s(\mathbb{R}^d,\ell_p)$ consists of sets of $s$ vectors in $\mathbb{R}^d$, and for any two sets $x,y$ of $s$ vectors the distance $\mathsf{EMD}(x,y)$ is the minimum cost of a perfect matching between $x,y$, where the cost of matching two vectors is their $\ell_p$ distance. Previously, Andoni, Indyk, and Krauthgamer gave a (data-independent) locality-sensitive hashing scheme for $\mathsf{EMD}_s(\mathbb{R}^d,\ell_p)$ when $p \in [1,2]$ with approximation $O(\log^2 s)$. By being data-dependent, we improve the approximation to $\tilde{O}(\log s)$. Our main technical contribution is to show that for any distribution $μ$ supported on the metric $\mathsf{EMD}_s(\mathbb{R}^d, \ell_p)$, there exists a data-dependent LSH for dense regions of $μ$ which achieves approximation $\tilde{O}(\log s)$, and that the data-independent LSH actually achieves a $\tilde{O}(\log s)$-approximation outside of those dense regions. Finally, we show how to "glue" together these two hashing schemes without any additional loss in the approximation. Beyond nearest neighbor search, our data-dependent LSH also gives optimal (distributional) sketches for the Earth Mover's Distance. By known sketching lower bounds, this implies that our LSH is optimal (up to $\mathrm{poly}(\log \log s)$ factors) among those that collide close points with constant probability.
HyperAttention: Long-Context Attention in Near-Linear Time
With Insu Han, Amin Karbasi, Vahab Mirrokni, David Woodruff, and Amir Zandieh.
ICLR 2024
We present an approximate attention mechanism named HyperAttention to address the computational challenges posed by the growing complexity of long contexts used in Large Language Models (LLMs). Recent work suggests that in the worst-case scenario, quadratic time is necessary unless the entries of the attention matrix are bounded or the matrix has low stable rank. We introduce two parameters which measure: (1) the max column norm in the normalized attention matrix, and (2) the ratio of row norms in the unnormalized attention matrix after detecting and removing large entries. We use these fine-grained parameters to capture the hardness of the problem. Despite previous lower bounds, we are able to achieve a linear time sampling algorithm even when the matrix has unbounded entries or a large stable rank, provided the above parameters are small. HyperAttention features a modular design that easily accommodates integration of other fast low-level implementations, particularly FlashAttention. Empirically, employing Locality Sensitive Hashing (LSH) to identify large entries, HyperAttention outperforms existing methods, giving significant speed improvements compared to state-of-the-art solutions like FlashAttention. We validate the empirical performance of HyperAttention on a variety of different long-context length datasets. For example, HyperAttention makes the inference time of ChatGLM2 50% faster on 32k context length while perplexity increases from 5.6 to 6.3. On larger context lengths, e.g., 131k, with causal masking, HyperAttention offers a 5-fold speedup on a single attention layer.
Massively Parallel Algorithms for High-Dimensional Euclidean Minimum Spanning Tree
With Vahab Mirrokni, Shyam Narayanan, and Peilin Zhong.
SODA 2024
We study the classic Euclidean Minimum Spanning Tree (MST) problem in the Massively Parallel Computation (MPC) model. Given a set $X \subset \mathbb{R}^d$ of $n$ points, the goal is to produce a spanning tree for $X$ with weight within a small factor of optimal. Euclidean MST is one of the most fundamental hierarchical geometric clustering algorithms, and with the proliferation of enormous high-dimensional data sets, such as massive transformer-based embeddings, there is now a critical demand for efficient distributed algorithms to cluster such data sets. In low-dimensional space, where $d = O(1)$, Andoni, Nikolov, Onak, and Yaroslavtsev [STOC '14] gave a constant round MPC algorithm that obtains a high accuracy $(1+ε)$-approximate solution. However, the situation is much more challenging for high-dimensional spaces: the best-known algorithm to obtain a constant approximation requires $O(\log n)$ rounds. Recently, Chen, Jayaram, Levi, and Waingarten [STOC '22] gave an $\tilde{O}(\log n)$-approximation algorithm in a constant number of rounds based on embeddings into tree metrics. However, to date, no known algorithm achieves both a constant number of rounds and a constant approximation. In this paper, we make strong progress on this front by giving a constant factor approximation in $\tilde{O}(\log \log n)$ rounds of the MPC model. In contrast to tree-embedding-based approaches, which necessarily must pay $Ω(\log n)$-distortion, our algorithm is based on a new combination of graph-based distributed MST algorithms and geometric space partitions. Additionally, although the approximate MST we return can have a large depth, we show that it can be modified to obtain a $\tilde{O}(\log \log n)$-round constant factor approximation to the Euclidean Traveling Salesman Problem (TSP) in the MPC model. Previously, only an $O(\log n)$-round algorithm was known for this problem.
Fully Dynamic Consistent k-Center Clustering
With Christoph Grunau, Bernhard Haeupler, Jakub Łącki, and Václav Rozhoň.
SODA 2024
We study the consistent $k$-center clustering problem. In this problem, the goal is to maintain a constant factor approximate $k$-center solution during a sequence of $n$ point insertions and deletions while minimizing the recourse, i.e., the number of changes made to the set of centers after each point insertion or deletion. Previous works by Lattanzi and Vassilvitskii [ICML '12] and Fichtenberger, Lattanzi, Norouzi-Fard, and Svensson [SODA '21] showed that in the incremental setting, where deletions are not allowed, one can obtain $k \cdot \textrm{polylog}(n) / n$ amortized recourse for both $k$-center and $k$-median, and demonstrated a matching lower bound. However, no algorithm for the fully dynamic setting achieves fewer than the trivial $O(k)$ changes per update, which can be obtained by simply reclustering the full dataset after every update. In this work, we give the first algorithm for consistent $k$-center clustering in the fully dynamic setting, i.e., when both point insertions and deletions are allowed, that improves upon the trivial $O(k)$ recourse bound. Specifically, our algorithm maintains a constant factor approximate solution while ensuring worst-case constant recourse per update, which is optimal in the fully dynamic setting. Moreover, our algorithm is deterministic and is therefore correct even if an adaptive adversary chooses the insertions and deletions.
Streaming Algorithms with Few State Changes
With David Woodruff and Samson Zhou.
PODS 2024
In this paper, we study streaming algorithms that minimize the number of changes made to their internal state (i.e., memory contents). While the design of streaming algorithms typically focuses on minimizing space and update time, these metrics fail to capture the asymmetric costs, inherent in modern hardware and database systems, of reading versus writing to memory. In fact, most streaming algorithms write to their memory on every update, which is undesirable when writing is significantly more expensive than reading. This raises the question of whether streaming algorithms with small space and number of memory writes are possible. We first demonstrate that, for the fundamental $F_p$ moment estimation problem with $p\ge 1$, any streaming algorithm that achieves a constant factor approximation must make $Ω(n^{1-1/p})$ internal state changes, regardless of how much space it uses. Perhaps surprisingly, we show that this lower bound can be matched by an algorithm that also has near-optimal space complexity. Specifically, we give a $(1+\varepsilon)$-approximation algorithm for $F_p$ moment estimation that uses a near-optimal $\widetilde{\mathcal{O}}_\varepsilon(n^{1-1/p})$ number of state changes, while simultaneously achieving near-optimal space, i.e., for $p\in[1,2]$, our algorithm uses $\text{poly}\left(\log n,\frac{1}{\varepsilon}\right)$ bits of space, while for $p>2$, the algorithm uses $\widetilde{\mathcal{O}}_\varepsilon(n^{1-2/p})$ space. We similarly design streaming algorithms that are simultaneously near-optimal in both space complexity and the number of state changes for the heavy-hitters problem, sparse support recovery, and entropy estimation. Our results demonstrate that an optimal number of state changes can be achieved without sacrificing space complexity.

2023

A Near-Linear Time Algorithm for the Chamfer Distance
With Ainesh Bakshi, Piotr Indyk, Sandeep Silwal, and Erik Waingarten.
NeurIPS 2023
For any two point sets $A,B \subset \mathbb{R}^d$ of size up to $n$, the Chamfer distance from $A$ to $B$ is defined as $\text{CH}(A,B)=\sum_{a \in A} \min_{b \in B} d_X(a,b)$, where $d_X$ is the underlying distance measure (e.g., the Euclidean or Manhattan distance). The Chamfer distance is a popular measure of dissimilarity between point clouds, used in many machine learning, computer vision, and graphics applications, and admits a straightforward $O(d n^2)$-time brute force algorithm. Further, the Chamfer distance is often used as a proxy for the more computationally demanding Earth-Mover (Optimal Transport) Distance. However, the \emph{quadratic} dependence on $n$ in the running time makes the naive approach intractable for large datasets. We overcome this bottleneck and present the first $(1+ε)$-approximate algorithm for estimating the Chamfer distance with a near-linear running time. Specifically, our algorithm runs in time $O(nd \log (n)/\varepsilon^2)$ and is implementable. Our experiments demonstrate that it is both accurate and fast on large high-dimensional datasets. We believe that our algorithm will open new avenues for analyzing large high-dimensional point clouds. We also give evidence that if the goal is to \emph{report} a $(1+\varepsilon)$-approximate mapping from $A$ to $B$ (as opposed to just its value), then any sub-quadratic time algorithm is unlikely to exist.
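The brute-force baseline from the abstract is two lines (a sketch assuming numpy/scipy); the paper's contribution is replacing this $O(dn^2)$ computation with a near-linear-time $(1+ε)$-approximation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def chamfer(A: np.ndarray, B: np.ndarray, p: float = 2.0) -> float:
    """CH(A, B) = sum over a in A of min over b in B of ||a - b||_p."""
    return float(cdist(A, B, metric="minkowski", p=p).min(axis=1).sum())

rng = np.random.default_rng(7)
A, B = rng.normal(size=(500, 64)), rng.normal(size=(400, 64))
print(chamfer(A, B), chamfer(B, A))   # note: CH is asymmetric
```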
Streaming Euclidean MST to a Constant Factor
With Vincent Cohen-Addad, Xi Chen, Amit Levi, and Erik Waingarten.
STOC 2023
We study streaming algorithms for the fundamental geometric problem of computing the cost of the Euclidean Minimum Spanning Tree (MST) on an $n$-point set $X \subset \mathbb{R}^d$. In the streaming model, the points in $X$ can be added and removed arbitrarily, and the goal is to maintain an approximation in small space. In low dimensions, $(1+ε)$ approximations are possible in sublinear space [Frahling, Indyk, Sohler, SoCG '05]. However, for high dimensional spaces the best known approximation for this problem was $\tilde{O}(\log n)$, due to [Chen, Jayaram, Levi, Waingarten, STOC '22], improving on the prior $O(\log^2 n)$ bound due to [Indyk, STOC '04] and [Andoni, Indyk, Krauthgamer, SODA '08]. In this paper, we break the logarithmic barrier, and give the first constant factor sublinear space approximation to Euclidean MST. For any $ε\geq 1$, our algorithm achieves an $\tilde{O}(ε^{-2})$ approximation in $n^{O(ε)}$ space. We complement this by proving that any single pass algorithm which obtains a better than $1.10$-approximation must use $Ω(\sqrt{n})$ space, demonstrating that $(1+ε)$ approximations are not possible in high dimensions, and that our algorithm is tight up to a constant. Nevertheless, we demonstrate that $(1+ε)$ approximations are possible in sublinear space with $O(1/ε)$ passes over the stream. More generally, for any $α\geq 2$, we give an $α$-pass streaming algorithm which achieves a $(1+O(\frac{\log α + 1}{αε}))$ approximation in $n^{O(ε)} d^{O(1)}$ space. Our streaming algorithms are linear sketches, and therefore extend to the massively-parallel computation model (MPC). Thus, our results imply the first $(1+ε)$-approximation to Euclidean MST in a constant number of rounds in the MPC model.
Optimal Fully Dynamic k-Centers Clustering
With MohammadHossein Bateni, Hossein Esfandiari, and Vahab Mirrokni.
SODA 2023
We present the first algorithm for fully dynamic $k$-centers clustering in an arbitrary metric space that maintains an optimal $(2+ε)$-approximation in $O(k \cdot \operatorname{polylog}(n,Δ))$ amortized update time. Here, $n$ is an upper bound on the number of active points at any time, and $Δ$ is the aspect ratio of the data. Previously, the best known amortized update time was $O(k^2\cdot \operatorname{polylog}(n,Δ))$, due to Chan, Guerqin, and Sozio. We demonstrate that the runtime of our algorithm is optimal up to $\operatorname{polylog}(n,Δ)$ factors, even for insertion-only streams, which settles the complexity of fully dynamic $k$-centers clustering. In particular, we prove that any algorithm for $k$-clustering tasks in arbitrary metric spaces, including $k$-means, $k$-medians, and $k$-centers, must make at least $Ω(nk)$ distance queries to achieve any non-trivial approximation factor. Despite the lower bound for arbitrary metrics, we demonstrate that an update time sublinear in $k$ is possible for metric spaces which admit locality-sensitive hash functions (LSH). Namely, we demonstrate a black-box transformation which takes a locality-sensitive hash family for a metric space and produces a faster fully dynamic $k$-centers algorithm for that space. In particular, for a large class of metrics including Euclidean space, $\ell_p$ spaces, the Hamming metric, and the Jaccard metric, for any $c > 1$, our results yield a $c(4+ε)$-approximate $k$-centers solution in $O(n^{1/c} \cdot \operatorname{polylog}(n,Δ))$ amortized update time, simultaneously for all $k \geq 1$. Previously, the only known comparable result was a $O(c \log n)$ approximation for Euclidean space due to Schmidt and Sohler, running in the same amortized update time.
Merged with a paper by Hendrik Fichtenberger, Monika Henzinger, and Andreas Wiese.
Differentially Oblivious Relational Database Operators
With Lianke Qin, Elaine Shi, Zhao Song, Danyang Zhuo, and Shumo Chu.
VLDB 2023
There has been a recent effort in applying differential privacy on memory access patterns to enhance data privacy. This is called differential obliviousness. Differential obliviousness is a promising direction because it provides a principled trade-off between performance and desired level of privacy. To date, it is still an open question whether differential obliviousness can speed up database processing with respect to full obliviousness. In this paper, we present the design and implementation of three new major database operators: selection with projection, grouping with aggregation, and foreign key join. We prove that they satisfy the notion of differential obliviousness. Our differentially oblivious operators have reduced cache complexity, runtime complexity, and output size compared to their state-of-the-art fully oblivious counterparts. We also demonstrate that our implementation of these differentially oblivious operators can outperform their state-of-the-art fully oblivious counterparts by up to $7.4\times$.

2022

Stars: Tera-Scale Graph Building for Clustering and Learning
With CJ Carey, Jonathan Halcrow, Vahab Mirrokni, Warren Schudy, and Peilin Zhong.
NeurIPS 2022
A fundamental procedure in the analysis of massive datasets is the construction of similarity graphs. Such graphs play a key role for many downstream tasks, including clustering, classification, graph learning, and nearest neighbor search. For these tasks, it is critical to build graphs which are sparse yet still representative of the underlying data. The benefits of sparsity are twofold: firstly, constructing dense graphs is infeasible in practice for large datasets, and secondly, the runtime of downstream tasks is directly influenced by the sparsity of the similarity graph. In this work, we present $\textit{Stars}$: a highly scalable method for building extremely sparse graphs via two-hop spanners, which are graphs where similar points are connected by a path of length at most two. Stars can construct two-hop spanners with significantly fewer similarity comparisons, which are a major bottleneck for learning based models where comparisons are expensive to evaluate. Theoretically, we demonstrate that Stars builds a graph in nearly-linear time, where approximate nearest neighbors are contained within two-hop neighborhoods. In practice, we have deployed Stars for multiple data sets allowing for graph building at the $\textit{Tera-Scale}$, i.e., for graphs with tens of trillions of edges. We evaluate the performance of Stars for clustering and graph learning, and demonstrate 10- to 1000-fold improvements in pairwise similarity comparisons compared to different baselines, and 2- to 10-fold improvements in running time without quality loss.
New Streaming Algorithms for High Dimensional EMD and MST
With Xi Chen, Amit Levi, and Erik Waingarten.
STOC 2022
We study streaming algorithms for two fundamental geometric problems: computing the cost of a Minimum Spanning Tree (MST) of an $n$-point set $X \subset \{1,2,\dots,Δ\}^d$, and computing the Earth Mover's Distance (EMD) between two multi-sets $A,B \subset \{1,2,\dots,Δ\}^d$ of size $n$. We consider the turnstile model, where points can be added and removed. We give a one-pass streaming algorithm for MST and a two-pass streaming algorithm for EMD, both achieving an approximation factor of $\tilde{O}(\log n)$ and using only $\operatorname{polylog}(n,d,Δ)$ space. Furthermore, our algorithm for EMD can be compressed to a single pass with a small additive error. Previously, the best known sublinear-space streaming algorithms for either problem achieved an approximation of $O(\min\{ \log n , \log (Δd)\} \log n)$ [Andoni-Indyk-Krauthgamer '08, Backurs-Dong-Indyk-Razenshteyn-Wagner '20]. For MST, we also prove that any constant space streaming algorithm can only achieve an approximation of $Ω(\log n)$, analogous to the $Ω(\log n)$ lower bound for EMD of [Andoni-Indyk-Krauthgamer '08]. Our algorithms are based on an improved analysis of a recursive space partitioning method known generically as the Quadtree. Specifically, we show that the Quadtree achieves an $\tilde{O}(\log n)$ approximation for both EMD and MST, improving on the $O(\min\{ \log n , \log (Δd)\} \log n)$ approximation of [Andoni-Indyk-Krauthgamer '08, Backurs-Dong-Indyk-Razenshteyn-Wagner '20].
Truly Perfect Samplers for Data Streams and Sliding Windows
With David Woodruff and Samson Zhou.
PODS 2022
In the $G$-sampling problem, the goal is to output an index $i$ of a vector $f \in \mathbb{R}^n$, such that for all coordinates $j \in [n]$, \[\textbf{Pr}[i=j] = (1 \pm ε) \frac{G(f_j)}{\sum_{k\in[n]} G(f_k)} + γ,\] where $G:\mathbb{R} \to \mathbb{R}_{\geq 0}$ is some non-negative function. If $ε = 0$ and $γ = 1/\text{poly}(n)$, the sampler is called perfect. In the data stream model, $f$ is defined implicitly by a sequence of updates to its coordinates, and the goal is to design such a sampler in small space. Jayaram and Woodruff (FOCS 2018) gave the first perfect $L_p$ samplers in turnstile streams, where $G(x)=|x|^p$, using $\text{polylog}(n)$ space for $p\in(0,2]$. However, to date all known sampling algorithms are not truly perfect, since their output distribution is only point-wise $γ = 1/\text{poly}(n)$ close to the true distribution. This small error can be significant when samplers are run many times on successive portions of a stream, and can leak potentially sensitive information about the data stream. In this work, we initiate the study of truly perfect samplers, with $ε = γ = 0$, and comprehensively investigate their complexity in the data stream and sliding window models. (Abstract abridged; see the paper for the full version.)
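Offline, where the whole vector is visible, the target distribution of a truly perfect $L_p$ sampler is trivial to sample (a sketch assuming numpy); the entire difficulty is matching it exactly, with $ε = γ = 0$, from a stream of updates in small space.

```python
import numpy as np

def offline_lp_sample(f: np.ndarray, p: float, rng) -> int:
    """Return index j with probability exactly |f_j|^p / sum_k |f_k|^p."""
    w = np.abs(f) ** p
    return int(rng.choice(len(f), p=w / w.sum()))

rng = np.random.default_rng(8)
f = rng.normal(size=8)
print([offline_lp_sample(f, p=1.0, rng=rng) for _ in range(10)])
```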

2021

An Optimal Algorithm for Triangle Counting in a Stream
With John Kallaugher.
APPROX 2021
We present a new algorithm for approximating the number of triangles in a graph $G$ whose edges arrive as an arbitrary order stream. If $m$ is the number of edges in $G$, $T$ the number of triangles, $Δ_E$ the maximum number of triangles which share a single edge, and $Δ_V$ the maximum number of triangles which share a single vertex, then our algorithm requires space: \[ \widetilde{O}\left(\frac{m}{T}\cdot \left(Δ_E + \sqrt{Δ_V}\right)\right) \] Taken with the $Ω\left(\frac{m Δ_E}{T}\right)$ lower bound of Braverman, Ostrovsky, and Vilenchik (ICALP 2013), and the $Ω\left( \frac{m \sqrt{Δ_V}}{T}\right)$ lower bound of Kallaugher and Price (SODA 2017), our algorithm is optimal up to log factors, resolving the complexity of a classic problem in graph streaming.
Learning and Testing Junta Distributions with Subcube Conditioning
With Xi Chen, Amit Levi, and Erik Waingarten.
COLT 2021
We study the problems of learning and testing junta distributions on $\{-1,1\}^n$ with respect to the uniform distribution, where a distribution $p$ is a $k$-junta if its probability mass function $p(x)$ depends on a subset of at most $k$ variables. The main contribution is an algorithm for finding relevant coordinates in a $k$-junta distribution with subcube conditioning [BC18, CCKLW20]. We give two applications: 1. An algorithm for learning $k$-junta distributions with $\tilde{O}(k/ε^2) \log n + O(2^k/ε^2)$ subcube conditioning queries, and 2. An algorithm for testing $k$-junta distributions with $\tilde{O}((k + \sqrt{n})/ε^2)$ subcube conditioning queries. All our algorithms are optimal up to poly-logarithmic factors. Our results show that subcube conditioning, as a natural model for accessing high-dimensional distributions, enables significant savings in learning and testing junta distributions compared to the standard sampling model. This addresses an open question posed by Aliakbarpour, Blais, and Rubinfeld [ABR17].
In-Database Regression in Input Sparsity Time
With Alireza Samadian, David Woodruff, and Peng Ye.
ICML 2021
Sketching is a powerful dimensionality reduction technique for accelerating algorithms for data analysis. A crucial step in sketching methods is to compute a subspace embedding (SE) for a large matrix $\mathbf{A} \in \mathbb{R}^{N \times d}$. SEs are the primary tool for obtaining extremely efficient solutions for many linear-algebraic tasks, such as least squares regression and low rank approximation. Computing an SE often requires an explicit representation of $\mathbf{A}$ and running time proportional to the size of $\mathbf{A}$. However, if $\mathbf{A}= \mathbf{T}_1 \Join \mathbf{T}_2 \Join \dots \Join \mathbf{T}_m$ is the result of a database join query on several smaller tables $\mathbf{T}_i \in \mathbb{R}^{n_i \times d_i}$, then this running time can be prohibitive, as $\mathbf{A}$ itself can have as many as $O(n_1 n_2 \cdots n_m)$ rows. In this work, we design subspace embeddings for database joins which can be computed significantly faster than computing the join. For the case of a two-table join $\mathbf{A} = \mathbf{T}_1 \Join \mathbf{T}_2$ we give input-sparsity algorithms for computing subspace embeddings, with running time bounded by the number of non-zero entries in $\mathbf{T}_1,\mathbf{T}_2$. This results in input-sparsity time algorithms for high accuracy regression, significantly improving upon the running time of prior FAQ-based methods for regression. We extend our results to arbitrary joins for the ridge regression problem, also considerably improving the running time of prior methods. Empirically, we apply our method to real datasets and show that it is significantly faster than existing algorithms.
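A minimal sketch-and-solve example (mine, assuming numpy; the paper's algorithms build such embeddings for join matrices without ever materializing the join): compress a tall least-squares problem with a CountSketch-style sparse embedding, then solve the small problem.

```python
import numpy as np

def countsketch_solve(A, b, m, seed=0):
    """Solve min ||Ax - b|| approximately via a sparse subspace embedding."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    rows = rng.integers(0, m, size=n)            # hash each row to a bucket
    signs = rng.choice([-1.0, 1.0], size=n)      # random sign per row
    SA, Sb = np.zeros((m, A.shape[1])), np.zeros(m)
    np.add.at(SA, rows, signs[:, None] * A)      # SA = S @ A in one pass
    np.add.at(Sb, rows, signs * b)
    return np.linalg.lstsq(SA, Sb, rcond=None)[0]

rng = np.random.default_rng(9)
A = rng.normal(size=(100_000, 10))
b = A @ rng.normal(size=10)                      # noiseless for the demo
x_true = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(countsketch_solve(A, b, m=400) - x_true))  # ~0
```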
When is Approximate Counting for Conjunctive Queries Tractable?
With Marcelo Arenas, Luis Alberto Croquevielle, and Cristian Riveros.
STOC 2021
Conjunctive queries are one of the most common classes of queries used in database systems, and the best studied in the literature. A seminal result of Grohe, Schwentick, and Segoufin (STOC 2001) demonstrates that for every class $G$ of graphs, the evaluation of all conjunctive queries whose underlying graph is in $G$ is tractable if, and only if, $G$ has bounded treewidth. In this work, we extend this characterization to the counting problem for conjunctive queries. Specifically, for every class $C$ of conjunctive queries with bounded treewidth, we introduce the first fully polynomial-time randomized approximation scheme (FPRAS) for counting answers to a query in $C$, and the first polynomial-time algorithm for sampling answers uniformly from a query in $C$. As a corollary, it follows that for every class $G$ of graphs, the counting problem for conjunctive queries whose underlying graph is in $G$ admits an FPRAS if, and only if, $G$ has bounded treewidth (unless $\text{BPP} \neq \text{P}$). In fact, our FPRAS is more general, and also applies to conjunctive queries with bounded hypertree width, as well as unions of such queries. The key ingredient in our proof is the resolution of a fundamental counting problem from automata theory. Specifically, we demonstrate the first FPRAS and polynomial time sampler for the set of trees of size $n$ accepted by a tree automaton, which improves the prior quasi-polynomial time randomized approximation scheme (QPRAS) and sampling algorithm of Gore, Jerrum, Kannan, Sweedyk, and Mahaney '97. We demonstrate how this algorithm can be used to obtain an FPRAS for many hitherto open problems, such as counting solutions to constraint satisfaction problems (CSP) with bounded hypertree-width, counting the number of error threads in programs with nested call subroutines, and counting valid assignments to structured DNNF circuits.
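The automata-theoretic core is easy to state by brute force (a sketch; exhaustive enumeration is exponential in $n$, which is exactly why an FPRAS is the interesting object): count the length-$n$ strings accepted by an NFA.

```python
from itertools import product

def count_accepted(delta, start, accept, alphabet, n):
    """Brute-force #NFA: count length-n strings the NFA accepts."""
    total = 0
    for word in product(alphabet, repeat=n):
        states = {start}
        for ch in word:
            states = {t for s in states for t in delta.get((s, ch), ())}
        total += bool(states & accept)
    return total

# NFA over {a, b} accepting strings that contain "ab" (state 2 absorbing)
delta = {(0, "a"): (0, 1), (0, "b"): (0,), (1, "a"): (1,), (1, "b"): (2,),
         (2, "a"): (2,), (2, "b"): (2,)}
print(count_accepted(delta, 0, {2}, "ab", n=6))   # 57 of the 64 strings
```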

2020

Testing Positive Semi-Definiteness via Random Submatrices
With Ainesh Bakshi and Nadiia Chepurko.
FOCS 2020
We study the problem of testing whether a matrix $\mathbf{A} \in \mathbb{R}^{n \times n}$ with bounded entries ($\|\mathbf{A}\|_\infty \leq 1$) is positive semi-definite (PSD), or $ε$-far in Euclidean distance from the PSD cone, meaning that $\min_{\mathbf{B} \succeq 0} \|\mathbf{A} - \mathbf{B}\|_F^2 > εn^2$, where $\mathbf{B} \succeq 0$ denotes that $\mathbf{B}$ is PSD. Our main algorithmic contribution is a non-adaptive tester which distinguishes between these cases using only $\tilde{O}(1/ε^4)$ queries to the entries of $\mathbf{A}$. If instead of the Euclidean norm we consider the distance in spectral norm, we obtain the "$\ell_\infty$-gap problem", where $\mathbf{A}$ is either PSD or satisfies $\min_{\mathbf{B}\succeq 0} \|\mathbf{A}- \mathbf{B}\|_2 > εn$. For this related problem, we give a $\tilde{O}(1/ε^2)$ query tester, which we show is optimal up to $\log(1/ε)$ factors. Our testers randomly sample a collection of principal submatrices and check whether these submatrices are PSD. Consequently, our algorithms achieve one-sided error: whenever they output that $\mathbf{A}$ is not PSD, they return a certificate that $\mathbf{A}$ has negative eigenvalues. We complement our upper bound for PSD testing with Euclidean norm distance by giving a $\tilde{Ω}(1/ε^2)$ lower bound for any non-adaptive algorithm. Our lower bound construction is general, and can be used to derive lower bounds for a number of spectral testing problems. As an example of the applicability of our construction, we obtain a new $\tilde{Ω}(1/ε^4)$ sampling lower bound for testing the Schatten-$1$ norm with a $εn^{1.5}$ gap, extending a result of Balcan, Li, Woodruff, and Zhang [SODA'19]. In addition, it yields new sampling lower bounds for estimating the Ky Fan norm, and the cost of the best rank-$k$ approximation.
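The tester's core step, as described in the abstract, fits in a few lines (a sketch assuming numpy; submatrix sizes and trial counts are illustrative, not the paper's tuned parameters): sample random principal submatrices and look for negative eigenvalues, which certify non-PSD-ness by eigenvalue interlacing.

```python
import numpy as np

def psd_test(A: np.ndarray, k: int, trials: int, seed=0) -> bool:
    """One-sided tester: False comes with a witness submatrix; True means
    every sampled k x k principal submatrix looked PSD."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        S = rng.choice(len(A), size=k, replace=False)
        if np.linalg.eigvalsh(A[np.ix_(S, S)]).min() < -1e-8:
            return False
    return True

rng = np.random.default_rng(10)
B = rng.normal(size=(200, 200))
PSD = B @ B.T / 200
print(psd_test(PSD, k=20, trials=30))                  # True
print(psd_test(PSD - np.eye(200), k=20, trials=30))    # False: far from PSD
```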
A Framework for Adversarially Robust Streaming Algorithms
With Omri Ben-Eliezer, David Woodruff, and Eylon Yogev.
PODS 2020 & Journal of the ACM
PODS Best Paper Award 2020 · Invited to Journal of the ACM · 2021 SIGMOD Research Highlight · Invited to HALG 2021
We investigate the adversarial robustness of streaming algorithms. In this context, an algorithm is considered robust if its performance guarantees hold even if the stream is chosen adaptively by an adversary that observes the outputs of the algorithm along the stream and can react in an online manner. While deterministic streaming algorithms are inherently robust, many central problems in the streaming literature do not admit sublinear-space deterministic algorithms; on the other hand, classical space-efficient randomized algorithms for these problems are generally not adversarially robust. This raises the natural question of whether there exist efficient adversarially robust (randomized) streaming algorithms for these problems. In this work, we show that the answer is positive for various important streaming problems in the insertion-only model, including distinct elements and more generally $F_p$-estimation, $F_p$-heavy hitters, entropy estimation, and others. For all of these problems, we develop adversarially robust $(1+\varepsilon)$-approximation algorithms whose required space matches that of the best known non-robust algorithms up to a $\text{poly}(\log n, 1/\varepsilon)$ multiplicative factor (and in some cases even up to a constant factor). Towards this end, we develop several generic tools allowing one to efficiently transform a non-robust streaming algorithm into a robust one in various scenarios.
Span Recovery for Deep Neural Networks with Applications to Input Obfuscation
With Qiuyi Zhang and David Woodruff.
ICLR 2020
The tremendous success of deep neural networks has motivated the need to better understand the fundamental properties of these networks, but many of the theoretical results proposed have only been for shallow networks. In this paper, we study an important primitive for understanding the meaningful input space of a deep network: span recovery. For $k<n$, let $\mathbf{A} \in \mathbb{R}^{k \times n}$ be the innermost weight matrix of an arbitrary feedforward neural network $M:\mathbb{R}^n \to \mathbb{R}$, so $M(x)$ can be written as $M(x) = σ(\mathbf{A} x)$, for some network $σ:\mathbb{R}^k \to \mathbb{R}$. The goal is then to recover the row span of $\mathbf{A}$ given only oracle access to the value of $M(x)$. We show that if $M$ is a multi-layered network with ReLU activation functions, then partial recovery is possible: namely, we can provably recover $k/2$ linearly independent vectors in the row span of $\mathbf{A}$ using poly$(n)$ non-adaptive queries to $M(x)$. Furthermore, if $M$ has differentiable activation functions, we demonstrate that full span recovery is possible even when the output is first passed through a sign or $0/1$ thresholding function; in this case our algorithm is adaptive. Empirically, we confirm that full span recovery can fail, but only for unrealistically thin layers. For reasonably wide networks, we obtain full span recovery on both random networks and networks trained on MNIST data. Furthermore, we demonstrate the utility of span recovery as an attack by inducing neural networks to misclassify data obfuscated by controlled random noise as sensical inputs.

2019

Optimal Sketching for Kronecker Product Regression and Low Rank Approximation
With Huaian Diao, Zhao Song, Wen Sun, and David Woodruff.
NeurIPS 2019
We study the Kronecker product regression problem, in which the design matrix is a Kronecker product of two or more matrices. Given $A_i \in \mathbb{R}^{n_i \times d_i}$ for $i=1,2,\dots,q$ where $n_i \gg d_i$ for each $i$, and $b \in \mathbb{R}^{n_1 n_2 \cdots n_q}$, let $\mathcal{A} = A_1 \otimes A_2 \otimes \cdots \otimes A_q$. Then for $p \in [1,2]$, the goal is to find $x \in \mathbb{R}^{d_1 \cdots d_q}$ that approximately minimizes $\|\mathcal{A}x - b\|_p$. Recently, Diao, Song, Sun, and Woodruff (AISTATS, 2018) gave an algorithm which is faster than forming the Kronecker product $\mathcal{A}$. Specifically, for $p=2$ their running time is $O(\sum_{i=1}^q \text{nnz}(A_i) + \text{nnz}(b))$, where nnz$(A_i)$ is the number of non-zero entries in $A_i$. Note that nnz$(b)$ can be as large as $n_1 \cdots n_q$. For $p=1$, $q=2$, and $n_1 = n_2$, they achieve a worse bound of $O(n_1^{3/2} \text{poly}(d_1d_2) + \text{nnz}(b))$. In this work, we provide significantly faster algorithms. For $p=2$, our running time is $O(\sum_{i=1}^q \text{nnz}(A_i) )$, which has no dependence on nnz$(b)$. For $p<2$, our running time is $O(\sum_{i=1}^q \text{nnz}(A_i) + \text{nnz}(b))$, which matches the prior best running time for $p=2$. We also consider the related all-pairs regression problem, where given $A \in \mathbb{R}^{n \times d}, b \in \mathbb{R}^n$, we want to solve $\min_{x} \|\bar{A}x - \bar{b}\|_p$, where $\bar{A} \in \mathbb{R}^{n^2 \times d}, \bar{b} \in \mathbb{R}^{n^2}$ consist of all pairwise differences of the rows of $A,b$. We give an $O(\text{nnz}(A))$ time algorithm for $p \in[1,2]$, improving the $Ω(n^2)$ time needed to form $\bar{A}$. Finally, we initiate the study of Kronecker product low rank and low $t$-rank approximation. For input $\mathcal{A}$ as above, we give $O(\sum_{i=1}^q \text{nnz}(A_i))$ time algorithms, which is much faster than computing $\mathcal{A}$.
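For $p=2$ the structure is easy to see in a toy example (a sketch assuming numpy, illustrating the identity pinv(A1 kron A2) = pinv(A1) kron pinv(A2); the paper's algorithms are faster still and never form the product): the Kronecker least-squares solution can be computed from the two small factors alone.

```python
import numpy as np

rng = np.random.default_rng(11)
A1, A2 = rng.normal(size=(300, 5)), rng.normal(size=(200, 4))
b = rng.normal(size=300 * 200)

# Factored solve: x = (pinv(A1) kron pinv(A2)) @ b, via a reshape of b.
x_fast = (np.linalg.pinv(A1) @ b.reshape(300, 200) @ np.linalg.pinv(A2).T).ravel()

# Naive solve: materialize the 60000 x 20 Kronecker product.
x_naive = np.linalg.lstsq(np.kron(A1, A2), b, rcond=None)[0]
print(np.allclose(x_fast, x_naive))   # True: same least-squares minimizer
```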
Towards Optimal Moment Estimation in Streaming and Distributed Models
With David Woodruff.
APPROX 2019
One of the oldest problems in the data stream model is to approximate the $p$-th moment $\|\mathcal{X}\|_p^p = \sum_{i=1}^n |\mathcal{X}_i|^p$ of an underlying vector $\mathcal{X} \in \mathbb{R}^n$, which is presented as a sequence of poly$(n)$ updates to its coordinates. Of particular interest is when $p \in (0,2]$. Although a tight space bound of $Θ(ε^{-2} \log n)$ bits is known for this problem when both positive and negative updates are allowed, surprisingly there is still a gap in the space complexity when all updates are positive. Specifically, the upper bound is $O(ε^{-2} \log n)$ bits, while the lower bound is only $Ω(ε^{-2} + \log n)$ bits. Recently, an upper bound of $\tilde{O}(ε^{-2} + \log n)$ bits was obtained assuming that the updates arrive in a random order. We show that for $p \in (0, 1]$, the random order assumption is not needed. Namely, we give an upper bound for worst-case streams of $\tilde{O}(ε^{-2} + \log n)$ bits for estimating $\|\mathcal{X}\|_p^p$. Our techniques also give new upper bounds for estimating the empirical entropy in a stream. On the other hand, we show that for $p \in (1,2]$, in the natural coordinator and blackboard communication topologies, there is an $\tilde{O}(ε^{-2})$ bit max-communication upper bound based on a randomized rounding scheme. Our protocols also give rise to protocols for heavy hitters and approximate matrix product. We generalize our results to arbitrary communication topologies $G$, obtaining an $\tilde{O}(ε^{-2} \log d)$ max-communication upper bound, where $d$ is the diameter of $G$. Interestingly, our upper bound rules out natural communication complexity-based approaches for proving an $Ω(ε^{-2} \log n)$ bit lower bound for $p \in (1,2]$ for streaming algorithms. In particular, any such lower bound must come from a topology with large diameter.
Learning Two Layer Rectified Neural Networks in Polynomial Time
With Ainesh Bakshi and David Woodruff.
COLT 2019
Consider the following fundamental learning problem: given input examples $x \in \mathbb{R}^d$ and their vector-valued labels, as defined by an underlying generative neural network, recover the weight matrices of this network. We consider two-layer networks, mapping $\mathbb{R}^d$ to $\mathbb{R}^m$, with $k$ non-linear activation units $f(\cdot)$, where $f(x) = \max\{x, 0\}$ is the ReLU. Such a network is specified by two weight matrices, $\mathbf{U}^* \in \mathbb{R}^{m \times k}, \mathbf{V}^* \in \mathbb{R}^{k \times d}$, such that the label of an example $x \in \mathbb{R}^{d}$ is given by $\mathbf{U}^* f(\mathbf{V}^* x)$, where $f(\cdot)$ is applied coordinate-wise. Given $n$ samples as a matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$ and the (possibly noisy) labels $\mathbf{U}^* f(\mathbf{V}^* \mathbf{X}) + \mathbf{E}$ of the network on these samples, where $\mathbf{E}$ is a noise matrix, our goal is to recover the weight matrices $\mathbf{U}^*$ and $\mathbf{V}^*$. In this work, we develop algorithms and hardness results under varying assumptions on the input and noise. Although the problem is NP-hard even for $k=2$, by assuming Gaussian marginals over the input $\mathbf{X}$ we are able to develop polynomial time algorithms for the approximate recovery of $\mathbf{U}^*$ and $\mathbf{V}^*$. Perhaps surprisingly, in the noiseless case our algorithms recover $\mathbf{U}^*, \mathbf{V}^*$ exactly, i.e., with no error. To the best of our knowledge, this is the first algorithm to accomplish exact recovery. For the noisy case, we give the first polynomial time algorithm that approximately recovers the weights in the presence of mean-zero noise $\mathbf{E}$. Our algorithms generalize to a larger class of rectified activation functions, where $f(x) = 0$ for $x \leq 0$ and $f(x) > 0$ otherwise.
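
The generative model in this abstract is easy to write down concretely. A minimal NumPy sketch with hypothetical dimensions, showing the data the recovery algorithms receive:

```python
# The generative model from the abstract, in NumPy (dimensions here are
# hypothetical; the recovery algorithms themselves are far more involved).
import numpy as np

rng = np.random.default_rng(1)
d, k, m, n = 20, 5, 10, 1000
V_star = rng.standard_normal((k, d))    # first-layer weights V*
U_star = rng.standard_normal((m, k))    # second-layer weights U*
X = rng.standard_normal((d, n))         # Gaussian marginals, as assumed
E = 0.01 * rng.standard_normal((m, n))  # mean-zero noise matrix

labels = U_star @ np.maximum(V_star @ X, 0) + E  # U* f(V* X) + E, ReLU f
# The learner sees only (X, labels) and must recover U_star and V_star.
```
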
Efficient Logspace Classes for Enumeration, Counting, and Uniform Generation
With Marcelo Arenas, Luis Alberto Croquevielle, and Cristian Riveros.
PODS 2019 & Journal of the ACM
PODS 2019 Best Paper Award · Invited to the Journal of the ACM · 2021 SIGMOD Research Highlight
In this work, we study two simple yet general complexity classes, based on logspace Turing machines, which provide a unifying framework for efficient query evaluation in areas like information extraction and graph databases, among others. We investigate the complexity of three fundamental algorithmic problems for these classes: enumeration, counting and uniform generation of solutions, and show that they have several desirable properties in this respect. Both complexity classes are defined in terms of non-deterministic logspace transducers (NL transducers). For the first class, we consider the case of unambiguous NL transducers, and we prove constant delay enumeration, and both counting and uniform generation of solutions in polynomial time. For the second class, we consider unrestricted NL transducers, and we obtain polynomial delay enumeration, approximate counting in polynomial time, and polynomial-time randomized algorithms for uniform generation. More specifically, we show that each problem in this second class admits a fully polynomial-time randomized approximation scheme (FPRAS) and a polynomial-time Las Vegas algorithm for uniform generation. Interestingly, the key idea to prove these results is to show that the fundamental problem $\text{#NFA}$ admits an FPRAS, where $\text{#NFA}$ is the problem of counting the number of strings of length $n$ (given in unary) accepted by a non-deterministic finite automaton (NFA). While this problem is known to be $\text{#P}$-complete and, more precisely, $\text{SpanL}$-complete, it was open whether this problem admits an FPRAS. In this work, we solve this open problem, and obtain as a welcome corollary that every function in $\text{SpanL}$ admits an FPRAS.
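
To see why $\text{#NFA}$ is subtle, it helps to contrast it with the deterministic case: for a DFA, counting accepted strings of a given length is a simple dynamic program, sketched below, while the same recurrence applied to an NFA would overcount strings with multiple accepting runs, and determinizing first can blow up the state space exponentially. This toy contrast (not taken from the paper) is what motivates the search for an FPRAS:

```python
# Counting length-n strings accepted by a *deterministic* automaton is a
# simple dynamic program (toy contrast, not from the paper); the analogous
# recurrence for an NFA overcounts strings with multiple accepting runs.
def count_dfa_strings(n, states, delta, start, accepting, alphabet):
    """Count length-n strings accepted by a DFA with transitions
    delta[(state, symbol)] -> state."""
    # counts[q] = number of length-i suffixes accepted when started at q
    counts = {q: 1 if q in accepting else 0 for q in states}
    for _ in range(n):
        counts = {q: sum(counts[delta[(q, a)]] for a in alphabet)
                  for q in states}
    return counts[start]

# Toy DFA over {0,1} accepting strings with an even number of 1s:
states, alphabet = {"even", "odd"}, ["0", "1"]
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(count_dfa_strings(4, states, delta, "even", {"even"}, alphabet))  # 8
```
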
Weighted Reservoir Sampling from Distributed Streams
With Gokarna Sharma, Srikanta Tirthapura, and David P. Woodruff.
PODS 2019
We consider message-efficient continuous random sampling from a distributed stream, where the probability of inclusion of an item in the sample is proportional to a weight associated with the item. The unweighted version, where all weights are equal, is well studied, and admits tight upper and lower bounds on message complexity. For weighted sampling with replacement, there is a simple reduction to unweighted sampling with replacement. However, in many applications the stream has only a few heavy items which may dominate a random sample when chosen with replacement. Weighted sampling \textit{without replacement} (weighted SWOR) eludes this issue, since such heavy items can be sampled at most once. In this work, we present the first message-optimal algorithm for weighted SWOR from a distributed stream. Our algorithm also has optimal space and time complexity. As an application of our algorithm for weighted SWOR, we derive the first distributed streaming algorithms for tracking \textit{heavy hitters with residual error}. Here the goal is to identify stream items that contribute significantly to the residual stream, once the heaviest items are removed. Residual heavy hitters generalize the notion of $\ell_1$ heavy hitters and are important in streams that have a skewed distribution of weights. In addition to the upper bound, we also provide a lower bound on the message complexity that is tight up to a $\log(1/ε)$ factor. Finally, we use our weighted sampling algorithm to improve the message complexity of distributed $L_1$ tracking, also known as count tracking, which is a widely studied problem in distributed streaming. We also derive a tight message lower bound, which settles the message complexity of this fundamental problem.
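
For context, a classical centralized baseline for weighted SWOR is the Efraimidis–Spirakis key method: assign each item the key $u^{1/w}$ with $u$ uniform in $(0,1)$ and keep the $k$ largest keys. The sketch below shows only this single-stream baseline; the paper's contribution is realizing the same sampling distribution message-optimally across distributed sites, which this code does not attempt:

```python
# A classical centralized baseline for weighted sampling without
# replacement (the Efraimidis-Spirakis key method). Illustration only;
# the paper's distributed, message-optimal protocol is more involved.
import heapq
import random

def weighted_swor(stream, k):
    """stream yields (item, weight) pairs; returns k items sampled without
    replacement with inclusion probability driven by the weights."""
    heap = []  # min-heap of (key, item) holding the k largest keys so far
    for item, w in stream:
        key = random.random() ** (1.0 / w)
        if len(heap) < k:
            heapq.heappush(heap, (key, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, item))
    return [item for _, item in heap]

print(weighted_swor([("a", 10.0), ("b", 1.0), ("c", 1.0), ("d", 5.0)], 2))
```
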

2018

Perfect Lp Sampling in a Data Stream
With David Woodruff.
FOCS 2018 & SIAM Journal on Computing
In this paper, we resolve the one-pass space complexity of $L_p$ sampling for $p \in (0,2)$. Given a stream of updates (insertions and deletions) to the coordinates of an underlying vector $f \in \mathbb{R}^n$, a perfect $L_p$ sampler must output an index $i$ with probability $|f_i|^p/\|f\|_p^p$, and is allowed to fail with some probability $δ$. So far, for $p > 0$ no algorithm has been shown to solve the problem exactly using $\text{poly}( \log n)$-bits of space. In 2010, Monemizadeh and Woodruff introduced an approximate $L_p$ sampler, which outputs $i$ with probability $(1 \pm ν)|f_i|^p /\|f\|_p^p$, using space polynomial in $ν^{-1}$ and $\log(n)$. The space complexity was later reduced by Jowhari, Sağlam, and Tardos to roughly $O(ν^{-p} \log^2 n \log δ^{-1})$ for $p \in (0,2)$, which tightly matches the $Ω(\log^2 n \log δ^{-1})$ lower bound in terms of $n$ and $δ$, but is loose in terms of $ν$. Given these nearly tight bounds, it is perhaps surprising that no lower bound exists in terms of $ν$---not even a bound of $Ω(ν^{-1})$ is known. In this paper, we explain this phenomenon by demonstrating the existence of an $O(\log^2 n \log δ^{-1})$-bit perfect $L_p$ sampler for $p \in (0,2)$. This shows that $ν$ need not factor into the space of an $L_p$ sampler, which closes the complexity of the problem for this range of $p$. For $p=2$, our bound is $O(\log^3 n \log δ^{-1})$-bits, which matches the prior best known upper bound in terms of $n,δ$, but has no dependence on $ν$. For $p<2$, our bound holds in the random oracle model, matching the lower bounds in that model. Moreover, we show that our algorithm can be derandomized with only a $O((\log \log n)^2)$ blow-up in the space (and no blow-up for $p=2$). Our derandomization technique is general, and can be used to derandomize a large class of linear sketches.
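
One way to see that perfect $L_p$ sampling is possible at all is the exponential scaling identity: if $E_1, \dots, E_n$ are i.i.d. standard exponentials, then $\arg\max_i |f_i|^p / E_i$ equals $i$ with probability exactly $|f_i|^p / \|f\|_p^p$. The offline check below illustrates the identity (maintaining this argmax under streaming updates in polylogarithmic space is the hard part, and is not attempted here):

```python
# Offline check of the exponential scaling identity behind perfect Lp
# sampling: argmax_i |f_i|^p / E_i lands on i with probability exactly
# |f_i|^p / ||f||_p^p when the E_i are i.i.d. Exp(1).
import numpy as np

rng = np.random.default_rng(2)
f = np.array([3.0, -1.0, 2.0])
p = 1.5
trials = 200_000
E = rng.exponential(1.0, size=(trials, len(f)))
winners = np.argmax(np.abs(f) ** p / E, axis=1)

empirical = np.bincount(winners, minlength=len(f)) / trials
target = np.abs(f) ** p / np.sum(np.abs(f) ** p)
print(empirical, target)  # the two distributions should agree closely
```
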
Data Streams with Bounded Deletions
With David Woodruff.
PODS 2018
Two prevalent models in the data stream literature are the insertion-only and turnstile models. Unfortunately, many important streaming problems require a $Θ(\log(n))$ multiplicative factor more space for turnstile streams than for insertion-only streams. This complexity gap often arises because the underlying frequency vector $f$ is very close to $0$, after accounting for all insertions and deletions to items. Signal detection in such streams is difficult, given the large number of deletions. In this work, we propose an intermediate model which, given a parameter $α \geq 1$, lower bounds the norm $\|f\|_p$ by a $1/α$-fraction of what the $L_p$ mass of the stream would have been had all updates been positive. Here, for a vector $f$, $\|f\|_p = \left(\sum_{i=1}^n |f_i|^p \right)^{1/p}$, and the value of $p$ we choose depends on the application. This gives a fluid medium between insertion-only streams (with $α = 1$) and turnstile streams (with $α = \text{poly}(n)$), and allows for analysis in terms of $α$. We show that for streams with this $α$-property, for many fundamental streaming problems we can replace an $O(\log(n))$ factor in the space usage of turnstile-model algorithms with an $O(\log(α))$ factor. This is true for identifying heavy hitters, inner product estimation, $L_0$ estimation, $L_1$ estimation, $L_1$ sampling, and support sampling. For each problem, we give matching or nearly matching lower bounds for $α$-property streams. We note that in practice, many important turnstile data streams are in fact $α$-property streams for small values of $α$. For such applications, our results represent significant improvements in efficiency for all the aforementioned problems.
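
The $α$-property is easy to compute offline for a toy stream. A minimal sketch for $p = 1$, comparing the $L_1$ mass the stream would have if every update were an insertion against $\|f\|_1$:

```python
# Computing the α of a stream for p = 1 (toy illustration of the
# definition): compare the L1 mass the stream would have if every update
# were positive against the L1 norm of the final frequency vector f.
from collections import defaultdict

def alpha_property(updates):  # updates: iterable of (item, delta) pairs
    f = defaultdict(float)    # actual frequencies
    h = defaultdict(float)    # frequencies with all updates made positive
    for item, delta in updates:
        f[item] += delta
        h[item] += abs(delta)
    return sum(h.values()) / sum(abs(v) for v in f.values())

# An insertion-only stream has α = 1; heavy cancellation drives α up.
print(alpha_property([("a", 5), ("b", 3)]))             # 1.0
print(alpha_property([("a", 5), ("a", -4), ("b", 3)]))  # 3.0
```
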

2017

Approximating Language Edit Distance Beyond Fast Matrix Multiplication: Ultralinear Grammars Are Where Parsing Becomes Hard!
With Barna Saha.
ICALP 2017

Dissertation

PhD Thesis, Carnegie Mellon University, May 2021
Committee: David Woodruff (advisor), Anupam Gupta (CMU), Andrej Risteski (CMU), Alexandr Andoni (Columbia), Jelani Nelson (Berkeley)

Teaching

I taught as an Adjunct Professor at NYU's Tandon School of Engineering.

Spring 2022: NYU CS-GY 6763 — Algorithmic Machine Learning and Data Science

Workshops

I co-organized the Approximate Nearest Neighbor Search (ANNS) workshop at FOCS 2025, with the goal of “Bridging Theoretical Foundations and Industrial Frontiers” for the central ANNS problem.

I co-organized the Industry Workshop at FOCS 2024, with the goal of bridging techniques between theory and practice.

I co-organized the Workshop on Robust Streaming, Sketching, and Sampling at STOC 2021. A full recording can be found here.

I co-organized the Workshop on Algorithms for Large Data (WALDO 2021), held online August 23–25, 2021.