My research focuses on extremely resource-constrained algorithms (“sublinear algorithms”), especially in the context of quantum computing. Some examples of problems I have worked on:
In the “streaming model”, we are given some very large input as a series of small pieces. For instance, a massive graph could be received one edge at a time. We want to answer some question about this input while using as little space (memory) as possible.
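To make the model concrete, here is a toy classical streaming routine (illustrative only, not tied to any of the results below): it maintains a uniformly random edge of the graph using O(1) memory via reservoir sampling, even though the full edge stream may be enormous.

```python
import random

def reservoir_sample_edge(edge_stream):
    """Keep a uniformly random edge from a stream of edges using O(1) memory.

    A toy illustration of the streaming model: the graph arrives one edge at
    a time, and we answer a (very simple) question about it -- "output a
    uniformly random edge" -- without ever storing the whole graph.
    """
    sample = None
    for i, edge in enumerate(edge_stream, start=1):
        # Replace the stored edge with probability 1/i; a standard induction
        # argument shows each edge seen so far is kept with probability 1/i.
        if random.randrange(i) == 0:
            sample = edge
    return sample

# Example: stream the edges of a complete graph on vertices {0, ..., 9}.
edges = ((u, v) for u in range(10) for v in range(u + 1, 10))
print(reservoir_sample_edge(edges))
```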
Suppose our algorithm is allowed to use quantum memory instead of being restricted to classical storage. Does this decrease the amount of space we need? My work includes the first example of a “natural” problem where the answer is “yes”.
Later, my collaborators and I found a natural problem where the quantum advantage is exponential: the space used by any classical algorithm must scale with the square root of the size of the input, while a quantum algorithm only needs polylogarithmic space.
How hard is it to determine whether an unknown Hamiltonian H is local (generated by interactions between small numbers of objects)? We gave the best known algorithm for this problem in the setting where you can only learn about H by preparing states and evolving them under H.
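For concreteness, the standard notion of locality used here (stated for qubits, though the same definition applies to other systems): a Hamiltonian $H$ on $n$ qubits is $k$-local if it can be written as

$$
H = \sum_{j} h_j ,
$$

where each term $h_j$ acts nontrivially on at most $k$ of the qubits. "Evolving under H" means applying $e^{-iHt}$ to a prepared state for a chosen time $t$ and then measuring.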
A classic result on streaming algorithms says that any turnstile streaming algorithm (an algorithm that can handle deletions from as well as insertions to a dataset) “might as well” be a linear sketch: an algorithm that maintains a linear function of its input as its only storage. This result comes with some seemingly minor caveats, but we demonstrated that they are in fact very significant: there are reasonable tweaks to the streaming model (e.g. guaranteeing that each item will be inserted / deleted / re-inserted no more than 100 times) that exponentially separate turnstile streaming from linear sketching.
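As a small illustration of what a linear sketch looks like (a toy version of the classic AMS sketch, not the construction from our paper): the only state kept is z = S·x for a pseudorandom ±1 matrix S and the stream's frequency vector x, so insertions and deletions are handled identically, and averaging the squared entries of z estimates the second frequency moment ‖x‖².

```python
import random

class AMSSketch:
    """A tiny linear sketch for the turnstile model (toy illustration).

    The only state is z = S @ x, where S is a pseudorandom {-1, +1} matrix
    and x is the frequency vector of the stream. Because the state is a
    linear function of x, insertions and deletions are handled identically.
    Averaging z_r**2 over rows estimates ||x||_2^2 (the second frequency moment).
    """

    def __init__(self, num_rows=64, seed=0):
        self.z = [0.0] * num_rows
        self.seed = seed

    def _sign(self, row, item):
        # 4-wise independent hashing suffices in theory; Python's hash is a stand-in.
        return 1 if hash((self.seed, row, item)) % 2 == 0 else -1

    def update(self, item, delta):
        # delta = +1 for an insertion, -1 for a deletion.
        for r in range(len(self.z)):
            self.z[r] += self._sign(r, item) * delta

    def estimate_f2(self):
        return sum(v * v for v in self.z) / len(self.z)

sketch = AMSSketch()
for item, delta in [("a", +1), ("b", +1), ("a", +1), ("b", -1)]:
    sketch.update(item, delta)
print(sketch.estimate_f2())  # true F2 here is 2**2 = 4
```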