Machine learning is the practice of learning patterns from data. NYU is a leader in this field, as our colleague Yann LeCun is the primary inventor of convolutional neural networks. Machine learning research among the NYU WIRELESS faculty focuses on image and video understanding, information-theoretic approaches to privacy, and methods that improve the accuracy of general machine learning methods through the selective refusal of predictions.

BEYOND MASSIVE MIMO

Does Massive MIMO represent the ultimate wireless physical-layer technology, or is there something potentially much better? Our research addresses this question through a close fusion of electromagnetic theory and communication theory.

Holographic Massive MIMO

Analogous to optical holography, the idea of Holographic Massive MIMO is to replace a large array of discrete antennas with a spatial continuum of antennas, whether linear, planar, or volumetric. The spatially continuous transmit/receive aperture is a logical successor to the Massive MIMO array. In addition, Holographic Massive MIMO constitutes a new theoretical tool for analyzing the limiting behavior of MIMO systems as the number of service antennas grows without bound. So far, our research has produced stochastic models for small-scale fading that rigorously account for wave propagation physics and that are particularly attractive from a computational standpoint.

Super-Directive Antenna Arrays

Conventional phased-array antenna theory and practice dictate a beamforming gain that grows linearly with the number of antennas. Super-directivity can, in principle, yield a beamforming gain that grows quadratically with the number of antennas. The central idea is to place antennas closer together than the usual half-wavelength spacing, deliberately creating strong mutual coupling among the antennas, and then to exploit this coupling to yield super-directive gain.
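As a numerical illustration (not part of our research code), the contrast between the two scaling laws can be sketched in a few lines: matched-filter combining across N antennas yields an SNR gain of N, while the classical idealized limit for closely spaced elements reaches N². The functions below are illustrative toy models, not an analysis of any particular array.

```python
# Toy illustration of beamforming gain scaling laws (assumed idealized
# models, not measured data or any specific array design).

def conventional_beamforming_gain(n_antennas: int) -> float:
    """SNR gain of matched-filter combining across n identical antennas.

    Coherent signal amplitudes add (power grows as N^2) while independent
    noise powers add (grows as N), so the net SNR gain is N.
    """
    signal_amplitude = sum(1.0 for _ in range(n_antennas))  # coherent sum -> N
    signal_power = signal_amplitude ** 2                    # N^2
    noise_power = float(n_antennas)                         # N
    return signal_power / noise_power                       # N

def superdirective_gain_ideal(n_antennas: int) -> float:
    """Idealized super-directive limit for vanishing element spacing: N^2."""
    return float(n_antennas) ** 2

for n in (2, 4, 8):
    print(n, conventional_beamforming_gain(n), superdirective_gain_ideal(n))
```

The quadratic curve is a theoretical ceiling; the reactive-power expenditure mentioned below is what makes approaching it difficult in practice.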
Our research is searching for sweet spots for super-directivity with respect to deployment scenarios, array configurations, and the expenditure of reactive power. In parallel with the associated numerical gain optimization, we are elucidating the physical phenomenology via the plane-wave expansion of the radiated field. Ultimately, we plan experimental validation.

MACHINE LEARNING FOR VISUAL ANALYTICS AND COMPRESSION

Machine Learning

One of the research projects at NYU WIRELESS is the joint optimization of video coding and delivery in networked video applications. We are also studying vehicle tracking at busy urban intersections. We developed a deep learning network that can simultaneously detect and track an object. The main architecture is called a region proposal network: it considers many candidate regions and decides whether each is a promising candidate; it then applies an additional classification step to the chosen candidates to confirm whether each actually contains an object. In future research, we plan to extend this approach to detect objects across a video rather than in individual frames.

MACHINE LEARNING-SUPPORTED DEBUGGING

Large enterprises tend to have long software pipelines consisting of components and interconnections of high complexity. Moreover, the components are sometimes black boxes whose only control points are the parameter settings one can supply. Failures of such pipelines can result from changes in parameter settings, bad data, or software changes; sometimes failures are the result of multiple changes. We have developed a machine-learning-based system called BugDoc that automatically and efficiently finds the causes of errors (including data errors) that lead to failure.

EFFICIENT HARDWARE FOR DEEP LEARNING

Training and executing deep neural networks is computationally demanding. For this reason, leading companies are designing specialized chips to accelerate deep learning workloads.
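A common building block in such accelerator chips is the systolic array, in which data flows in lockstep between neighboring multiply-and-accumulate cells. The toy pure-Python simulation below models an output-stationary systolic matrix multiply; it is a simplified sketch under our own assumptions (cell registers, input skew, cycle count), not any vendor's actual design.

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Rows of A stream in from the left (row i delayed by i cycles) and
    columns of B stream in from the top (column j delayed by j cycles);
    each cell multiplies its two inputs, accumulates locally, and passes
    the operands to its right and bottom neighbors on the next cycle.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    a_reg = [[0.0] * n for _ in range(n)]  # operand held in cell (i, j)
    b_reg = [[0.0] * n for _ in range(n)]
    # 3n - 2 cycles are enough for all skewed data to drain through.
    for t in range(3 * n - 2):
        # Update bottom-right to top-left so each cell reads the value its
        # left/top neighbor held on the *previous* cycle.
        for i in reversed(range(n)):
            for j in reversed(range(n)):
                a_in = a_reg[i][j - 1] if j > 0 else (
                    A[i][t - i] if 0 <= t - i < n else 0.0)
                b_in = b_reg[i - 1][j] if i > 0 else (
                    B[t - j][j] if 0 <= t - j < n else 0.0)
                C[i][j] += a_in * b_in  # the MAC operation
                a_reg[i][j] = a_in      # pass operands onward next cycle
                b_reg[i][j] = b_in
    return C

print(systolic_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```

The skewed input schedule guarantees that cell (i, j) always sees matching operands A[i][k] and B[k][j] on the same cycle, which is the essence of the "precisely timed" behavior described below.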
Our work explores new circuit and architectural optimizations to increase the performance, reliability, and energy efficiency of deep learning hardware. Several leading companies have recently released specialized chips tailored for deep learning; Google's Tensor Processing Unit (TPU), for instance, accelerates deep neural network (DNN) inference (and training) using a systolic array, a precisely timed array containing thousands of multiply-and-accumulate (MAC) units. In the EnSuRe group, we are pursuing cutting-edge research on designing more energy-efficient and reliable hardware accelerators for ML.

SECURE MACHINE LEARNING

Deep learning deployments, especially in safety- and health-critical applications, must account for security; otherwise, malicious attackers will be able to engineer misbehavior with potentially disastrous consequences (autonomous car crashes, for instance). How can we safely and securely deploy ML/AI technology in the real world?

CURRENT RESEARCH

Beyond Massive MIMO Theory
Communication Theory
Machine Learning for Physical Objects
Low Power Hardware Accelerators for Deep Learning
Machine Learning-supported Debugging
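On the secure machine learning question above, the canonical illustration of engineered misbehavior is the adversarial example: a tiny, deliberately crafted perturbation that flips a model's decision. The sketch below applies the fast gradient sign method (FGSM) to a toy linear classifier; the weights, input, and perturbation budget are made-up values for illustration, not drawn from any deployed system.

```python
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def linear_score(w, b, x):
    """Decision score of a toy linear classifier (positive = benign)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm_perturb(w, x, eps):
    """FGSM against a linear model: the gradient of the score w.r.t. x
    is w itself, so subtracting eps * sign(w) from each feature lowers
    the score as much as possible under an L-infinity budget of eps."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

# Hypothetical model and input, chosen so a small budget flips the label.
w, b = [0.5, -0.25, 1.0], 0.0
x = [1.0, 1.0, 1.0]
x_adv = fgsm_perturb(w, x, eps=0.8)
# The clean score is positive; the adversarial score turns negative,
# even though no feature moved by more than 0.8.
print(linear_score(w, b, x), linear_score(w, b, x_adv))
```

For deep networks the gradient is no longer the weight vector itself, but the same one-step sign-of-gradient attack applies, which is why defenses must be designed in rather than bolted on.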