Boettcher, A., L. Fukshansky, S. R. Garcia, H. Maharaj, and D. Needell. “Lattices from tight equiangular frames,” Linear Algebra and its Applications, vol. 510, 2016, 395-420.
Abstract: We consider the set of all linear combinations with integer coefficients of the vectors of a unit equiangular tight (k,n) frame and are interested in the question of whether this set is a lattice, that is, a discrete additive subgroup of k-dimensional Euclidean space. We show that this is not the case if the cosine of the angle of the frame is irrational. We also prove that the set is a lattice for n = k+1 and that there are infinitely many k for which a lattice emerges for n = 2k. We dispose of all cases in dimensions k at most 9. In particular, we show that a (7,28) frame generates a strongly eutactic lattice and give an alternative proof of Roland Bacher’s recent observation that this lattice is perfect.
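As a small illustration of the objects in this abstract (an assumed sketch, not code from the paper), the snippet below builds the (2, 3) "Mercedes-Benz" frame, a unit equiangular tight frame in the n = k + 1 case, and numerically checks equiangularity (with rational cosine 1/2) and tightness:

```python
import numpy as np

# Illustrative sketch (not from the paper): the (2, 3) "Mercedes-Benz"
# frame, a unit equiangular tight frame with n = k + 1 (here k = 2, n = 3).
angles = 2 * np.pi * np.arange(3) / 3
F = np.stack([np.cos(angles), np.sin(angles)])  # columns are the frame vectors

# Equiangular: |<f_i, f_j>| is the same for all i != j; here the cosine
# of the frame angle is 1/2, which is rational.
G = F.T @ F
off_diag = np.abs(G[~np.eye(3, dtype=bool)])
equiangular = np.allclose(off_diag, 0.5)

# Tight: F F^T = (n/k) I, i.e. sum_i |<x, f_i>|^2 = (n/k) |x|^2 for all x.
tight = np.allclose(F @ F.T, 1.5 * np.eye(2))
print(equiangular, tight)
```

Since the cosine here is rational, this frame falls outside the paper's irrational-cosine obstruction, consistent with the result that n = k+1 frames do generate lattices.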
Needell, Deanna, Nathan Srebro, and Rachel Ward. "Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm." Mathematical Programming 155.1-2, 2016, 549-573.
Abstract: We obtain an improved finite-sample guarantee on the linear convergence of stochastic gradient descent for smooth and strongly convex objectives, improving from a quadratic dependence on the conditioning (L/μ)^2 (where L is a bound on the smoothness and μ on the strong convexity) to a linear dependence on L/μ. Furthermore, we show how reweighting the sampling distribution (i.e. importance sampling) is necessary in order to further improve convergence, and obtain a linear dependence on the average smoothness, dominating previous results. We also discuss importance sampling for SGD more broadly and show how it can improve convergence also in other scenarios. Our results are based on a connection we make between SGD and the randomized Kaczmarz algorithm, which allows us to transfer ideas between the separate bodies of literature studying each of the two methods. In particular, we recast the randomized Kaczmarz algorithm as an instance of SGD, and apply our results to prove its exponential convergence, but to the solution of a weighted least squares problem rather than the original least squares problem. We then present a modified Kaczmarz algorithm with partially biased sampling which does converge to the original least squares solution with the same exponential convergence rate.
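The SGD/Kaczmarz connection above can be sketched numerically. The following is a minimal assumed illustration (not the paper's code): randomized Kaczmarz draws row i of Ax = b with probability proportional to its squared norm, the importance-sampling distribution, and projects the iterate onto that row's hyperplane; on a consistent system it converges linearly to the solution.

```python
import numpy as np

# Minimal sketch of randomized Kaczmarz viewed as SGD with importance
# sampling: row i is drawn with probability ||a_i||^2 / ||A||_F^2.
rng = np.random.default_rng(0)
n, d = 100, 10
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true  # consistent system, so the iterates converge to x_true

probs = np.sum(A**2, axis=1) / np.sum(A**2)  # importance-sampling weights
x = np.zeros(d)
for _ in range(5000):
    i = rng.choice(n, p=probs)
    # Project onto the hyperplane {z : a_i . z = b_i}.
    x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]

err = np.linalg.norm(x - x_true)  # tiny after enough iterations
print(err)
```

With a noisy (inconsistent) right-hand side, this same iteration converges to a weighted least squares solution rather than the ordinary one, which is the phenomenon the partially biased sampling in the paper addresses.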
Shi, H.M., M. Case, X. Gu, S. Tu, and D. Needell. “Methods for Quantized Compressed Sensing.” Proc. Information Theory and Applications (ITA), La Jolla CA, Jan. 2016.
Abstract: In this paper, we compare and catalog the performance of various greedy quantized compressed sensing algorithms that reconstruct sparse signals from quantized compressed measurements. We also introduce two new greedy approaches for reconstruction: Quantized Compressed Sampling Matching Pursuit (QCoSaMP) and Adaptive Outlier Pursuit for Quantized Iterative Hard Thresholding (AOP-QIHT). We compare the performance of greedy quantized compressed sensing algorithms for a given bit-depth, sparsity, and noise level.
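For orientation, here is a minimal sketch of plain iterative hard thresholding run on uniformly quantized measurements; the quantizer, dimensions, and recovery loop are illustrative assumptions, and this is not the paper's QCoSaMP or AOP-QIHT algorithm.

```python
import numpy as np

# Assumed sketch: generic IHT on uniformly quantized measurements
# (not the paper's QCoSaMP or AOP-QIHT).
rng = np.random.default_rng(1)
m, n, s = 200, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)  # columns roughly unit norm
x_true = np.zeros(n)
x_true[[3, 17, 40]] = [1.5, -1.0, 0.5]        # s-sparse signal

delta = 0.05                                  # assumed quantizer bin width
y = delta * np.round(A @ x_true / delta)      # quantized measurements

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    idx = np.argsort(np.abs(v))[-s:]
    out = np.zeros_like(v)
    out[idx] = v[idx]
    return out

x = np.zeros(n)
for _ in range(100):
    x = hard_threshold(x + A.T @ (y - A @ x), s)

err = np.linalg.norm(x - x_true)  # limited by the quantization error
print(err)
```

The residual error floor here is set by the bin width delta; the greedy variants compared in the paper are designed to do better at a given bit-depth, sparsity, and noise level.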
External grant: The Sloan Foundation, Grant 2016-7296, “2018 Summer Graduate Workshop: Representations of High Dimensional Data,” co-PI, $150,000, 2016–2018.
External grant: MSRI Summer Graduate School grant (with Blake Hunter, CMC), Representations of High Dimensional Data, 2018, $18,000, co-PI and co-organizer.
Abstract: In today's world, data is exploding at a faster rate than computer architectures can handle. For that reason, mathematical techniques to analyze large-scale objects must be developed. One mathematical method that has gained a lot of recent attention is the use of sparsity. Sparsity captures the idea that high dimensional signals often contain a very small amount of intrinsic information. Using this notion, one may design efficient low-dimensional representations of large-scale data as well as robust reconstruction methods for those representations. Moreover, in many applications one does not desire to reconstruct the full signal but rather perform some inference task such as classification, clustering, parameter estimation, and feature selection. In this course, we study the mathematical representations, reconstruction approaches, and data mining techniques for such large-scale data. We will explore various mathematical notions used in high dimensional signal processing including an overview of compressive signal processing, clustering and classification, and topic modeling. Students will learn the mathematical theory, solve small problems, and perform lab activities working with these techniques on real world data.
External grant: NSF CAREER #1348721, “Practical Compressive Signal Processing,” $413,527, co-PI.
Abstract: A "signal" is any data set that one would like to acquire, for example, an image, a large block of data, or an audio clip. One can imagine asking how quickly one would need to sample an audio clip so that from those samples alone, the audio clip could be accurately recovered. Would you need to sample every nanosecond, every millisecond, or every second? Compressive Signal Processing (CSP) shows that the important information in many signals can be obtained and recovered from far fewer samples than traditionally thought. The applications of CSP are widespread and include imaging (medical, hyperspectral, microscopy, biological), analog-to-information conversion, radar, large scale information synthesis, geophysical data analysis, computational biology, and many more. Although these applications are astounding, there has been a disconnect between the theoretical work in CSP and the use of CSP in practical settings. The goals of this project will bridge this gap by providing methods and analysis for CSP that apply to real-world signals and settings. Such work will lead to decreased scan time in MRI, reduced cost and energy consumption in computing infrastructures, improved detection of diseased crops from hyperspectral images, increased accuracy in radar, and improved compression and analysis in many other large-data applications. In addition, this project will involve students at all levels and introduce them to rigorous scientific research. The PI actively recruits members from under-represented populations, and will continue to promote diversity through her own research and outreach programs.
Early CSP models restrict the class of signals to those compressible in a very specific sense (sparse with respect to an orthonormal or incoherent basis). One goal of this project is to relax this restriction to allow for signals actually encountered in practice, such as those sparse in redundant, coherent, and highly overcomplete dictionaries. We will utilize both greedy approaches and optimization-based methods, tailored to specific dictionaries of interest, as well as more general methods for arbitrary bases. In addition, this project will develop adaptive CSP sampling schemes, where measurements of the signal are designed "on the fly," as they are being taken. Traditional measurement schemes ignore this information, while adaptive schemes have the potential to significantly reduce reconstruction error, number of measurements, and computation time. We will identify optimal measurement strategies for constrained and unconstrained settings, and analyze how much one can actually gain from adaptivity from an information theoretic point of view. The project will also involve work in "one-bit CSP", a new and exciting branch of CSP which handles extreme (and often more realistic) quantization. We will draw on sub-linear methods, where large errors appear naturally, and also use optimization based techniques along with adaptive quantization thresholds to reduce the recovery error below the best possible for non-adaptive quantization. In studying these topics, the research will bridge a large gap in the theory of CSP and provide a unified framework for both practitioners and researchers.
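The one-bit CSP setting mentioned above can be illustrated with a toy example. The estimator below (back-projection, hard thresholding, then normalization) is an assumed textbook-style sketch, not this project's method; it only shows why sign-only measurements still carry directional information.

```python
import numpy as np

# Toy one-bit CSP illustration: only the sign of each measurement is kept,
# so the signal is recoverable at best up to scale.
rng = np.random.default_rng(2)
m, n, s = 2000, 50, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[0, 1, 2]] = [1.0, -0.5, 0.25]
x_true /= np.linalg.norm(x_true)       # one-bit measurements lose the scale

y = np.sign(A @ x_true)                # extreme one-bit quantization

x_hat = A.T @ y / m                    # back-projection; E[x_hat] is
                                       # proportional to x_true for Gaussian A
idx = np.argsort(np.abs(x_hat))[-s:]   # keep the s largest entries
x_rec = np.zeros(n)
x_rec[idx] = x_hat[idx]
x_rec /= np.linalg.norm(x_rec)

err = np.linalg.norm(x_rec - x_true)   # shrinks as m grows
print(err)
```

Adaptive quantization thresholds, as proposed in the project, aim to beat the error floor that any such non-adaptive sign-measurement scheme must incur.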
External grant: Sloan Fellowship award, $50,000.
Abstract: Traditional signal acquisition measures the entire signal of interest using costly sampling hardware, only to discard most of it as the processing hardware compresses the signal for storage or analysis. Recently, a new field of compressive signal processing (CSP) has emerged in an effort to resolve this seemingly wasteful dilemma. The applications of CSP range from imaging, analog-to-information conversion, and radar to geophysical data analysis and computational biology. The Sloan Research Fellowships seek to stimulate fundamental research by early-career scientists and scholars of outstanding promise. These two-year, $50,000 fellowships are awarded yearly to 126 researchers in recognition of distinguished performance and a unique potential to make substantial contributions to their field. Nominations are reviewed and candidates selected by an independent selection committee of distinguished scientists in each eligible field. Fellows are selected on the basis of their independent research accomplishments, creativity, and potential to become leaders in the scientific community through their contributions to their field.