https://cs.nyu.edu/dynamic/news/colloquium/1079/

Title: TBA
Speaker: Danqi Chen
Date: April 3, 2018, 18:00-19:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1078/

Title: TBA
Speaker: Jacob Eisenstein
Date: March 30, 2018, 15:00-16:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1077/

Title: Closing the Loop on Secure System Design
Speaker: Amit Levy
Date: March 26, 2018, 18:00-19:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1080/

Synopsis:

Despite decades of research building secure operating systems, many deployed systems must still choose between flexible application APIs and security. As a result, the vast majority of programmers are unable to improve these systems. This is not merely a result of poor system building: it is hard to design highly extensible systems that are both secure and useful. Moreover, evaluating novel designs with actual developers is critical in order to make sure system builders can adopt research systems in practice.

Fortunately, in emerging application domains such as the Internet of Things, there are no entrenched operating systems, and application portability is less important. This makes it possible to evaluate research techniques for building more secure and extensible systems with developers who are willing to adopt them.

I'll describe Tock, an operating system for microcontrollers that enables third-party developers to extend the system. Tock uses the Rust type system to isolate kernel extensions and the hardware to isolate applications. I'll discuss how we continuously evaluate Tock by engaging with practitioners, and how lessons from practitioners have fed back into the system's design.
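
The isolation story can be sketched, very loosely, in Python: the kernel hands each extension only a narrow capability object, so the extension has no reference through which to reach the rest of kernel state. All names here (`Kernel`, `Grant`, `blink_extension`) are invented for illustration; Tock's actual guarantee comes statically from Rust's type and ownership system, not from runtime objects.

```python
class Grant:
    """The only capability a kernel extension receives: it can sample one
    sensor and append to its own log, but it holds no reference to any
    wider kernel state."""
    def __init__(self, read_sensor):
        self._read_sensor = read_sensor
        self.log = []

    def sample(self):
        value = self._read_sensor()
        self.log.append(value)
        return value


class Kernel:
    def __init__(self):
        # state that extensions must never be able to reach
        self._secret = {"radio_key": 0x2A}

    def load_extension(self, extension):
        # The extension is handed only a narrow Grant, never `self`,
        # so it cannot touch self._secret.
        grant = Grant(read_sensor=lambda: 7)
        return extension(grant)


def blink_extension(grant):
    # a third-party extension sees only the Grant interface
    return grant.sample()
```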

Title: Automated, Scalable Bayesian Inference with Theoretical Guarantees
Speaker: Trevor Campbell
Date: March 23, 2018, 15:00-16:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1076/

Synopsis:

The automation of posterior inference in Bayesian data analysis has enabled experts and nonexperts alike to use more sophisticated models, engage in faster exploratory modeling and analysis, and ensure experimental reproducibility. However, standard automated posterior inference algorithms are not tractable at the scale of massive modern datasets, and modifications to make them so are typically model-specific, require expert tuning, and can break theoretical guarantees on inferential quality. This talk will instead take advantage of data redundancy to shrink the dataset itself as a preprocessing step, forming a "Bayesian coreset." The coreset can be used in a standard inference algorithm at significantly reduced cost while maintaining theoretical guarantees on posterior approximation quality. The talk will include an intuitive formulation of Bayesian coreset construction as sparse vector sum approximation, an automated coreset construction algorithm that takes advantage of this formulation, strong theoretical guarantees on posterior approximation quality, and applications to a variety of real and simulated datasets.
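
As a toy illustration of the "sparse vector sum approximation" view, the sketch below builds coreset weights by norm-based importance sampling, a simple baseline rather than the talk's greedy construction; the function name and the (n, d) encoding of per-datapoint contributions are my own assumptions.

```python
import numpy as np

def coreset_weights(L, m, seed=0):
    """Build a sparse weight vector w so that w @ L approximates the full
    sum L.sum(axis=0). Row i of the (n, d) array L stands in for data
    point i's log-likelihood contribution; the coreset keeps at most m
    rows with nonzero weight."""
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(L, axis=1)
    p = norms / norms.sum()            # sample big contributors more often
    idx = rng.choice(len(L), size=m, p=p)
    w = np.zeros(len(L))
    for i in idx:
        w[i] += 1.0 / (m * p[i])       # unbiased: E[w @ L] = L.sum(axis=0)
    return w
```

A standard inference algorithm would then run on the m weighted points instead of all n.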

Title: Learning interactive agents
Speaker: He He
Date: March 19, 2018, 18:00-19:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1075/

Synopsis:

AI has made huge advances into our daily lives, and increasingly we require intelligent agents that work intimately with people in a changing environment. However, current systems mostly work in a passive mode: waiting for requests from users and processing them one at a time. An interactive agent must handle real-time, sequential inputs and actively collaborate with people through communication. In this talk, I will present my recent work addressing challenges in real-time language processing and collaborative dialogue. The first part involves making predictions with incremental inputs. I will focus on the application of simultaneous machine interpretation and show how we can produce both accurate and prompt translations. Then, I will present my work on building agents that collaborate with people through goal-oriented conversation. I will conclude by discussing future directions towards adaptive, active agents.
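
To make "predictions with incremental inputs" concrete, here is a toy read/write schedule in the style of the fixed wait-k policies from the simultaneous-translation literature. It is not the talk's method, and the equal source/target length is a simplifying assumption; it only shows the core tension between reading more input (accuracy) and writing output early (promptness).

```python
def wait_k_schedule(src_len, k):
    """Action sequence for a 'wait-k' simultaneous translation policy:
    read k source tokens up front, then alternate WRITE and READ until
    the target (assumed the same length as the source) is emitted."""
    actions, reads, writes = [], 0, 0
    while writes < src_len:
        if reads < min(writes + k, src_len):
            actions.append("READ")     # consume one more source token
            reads += 1
        else:
            actions.append("WRITE")    # commit one target token now
            writes += 1
    return actions
```

Larger k delays output but gives the model more context per prediction; k equal to the source length recovers ordinary full-sentence translation.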

Title: Towards Generalizable Imitation in Robotics
Speaker: Animesh Garg
Date: March 9, 2018, 16:00-17:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1074/

Synopsis:

Robotics and AI are experiencing radical growth, fueled by innovations in data-driven learning paradigms coupled with novel device design, in applications such as healthcare, manufacturing, and service robotics. In our quest for general-purpose autonomy, we need abstractions and algorithms for efficient generalization.

Data-driven methods such as reinforcement learning circumvent hand-tuned feature engineering, but they lack guarantees and often incur a massive computational expense: training these models frequently takes weeks, in addition to months of task-specific data collection on physical systems. Further, such ab initio methods often do not scale to complex sequential tasks. In contrast, biological agents can often learn faster, not only through self-supervision but also through imitation. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that can capture semantic task structures that promote sample efficiency and can generalize to new task instances across visual, dynamical, or semantic variations. This involves designing algorithms that unify reinforcement learning, control-theoretic planning, semantic scene and video understanding, and design.

In this talk, I will discuss two aspects of Generalizable Imitation: Task Imitation, and Generalization in both Visual and Kinematic spaces. First, I will describe how we can move away from hand-designed finite state machines through unsupervised structure learning for complex multi-step sequential tasks. Then I will discuss techniques for robust policy learning to handle generalization across unseen dynamics. I will revisit structure learning for task-level understanding for generalization to visual semantics.

And lastly, I will present a program-synthesis-based method for generalization across task semantics from a single example with unseen task structure, topology, or length. The algorithms and techniques introduced are applicable across domains in robotics; in this talk, I will exemplify these ideas through my work on medical and personal robotics.
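
As a minimal point of reference for what imitation from demonstrations looks like, the sketch below fits a linear policy to expert state-action pairs by ridge regression, i.e. plain behavioral cloning. The talk's contributions (structure learning, program synthesis) go well beyond this baseline; the function and its linear-policy assumption are mine.

```python
import numpy as np

def behavioral_cloning(states, actions, reg=1e-6):
    """Fit a linear policy action ~= state @ W to expert demonstrations
    by solving the regularized normal equations
        (S^T S + reg * I) W = S^T A."""
    S = np.asarray(states, dtype=float)
    A = np.asarray(actions, dtype=float)
    d = S.shape[1]
    return np.linalg.solve(S.T @ S + reg * np.eye(d), S.T @ A)
```

At test time the robot would act via `state @ W`; the well-known weakness of this baseline, compounding errors off the demonstration distribution, is one motivation for the richer methods in the talk.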

Title: Liquid Haskell: Usable Language-Based Program Verification
Speaker: Niki Vazou
Date: March 5, 2018, 19:00-20:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1063/

Synopsis:

Formal verification has been gaining the attention and resources of both the academic and the industrial world, since it prevents critical software bugs that cost money, energy, time, and even lives. Yet software development and formal verification are decoupled, requiring verification experts to prove properties of a template implementation – rather than the actual one – ported into verification-specific languages. My goal is to bridge formal verification and software development for the programming language Haskell. Haskell is unique in that it is a general-purpose, functional language used for industrial development, yet it simultaneously stands at the leading edge of research and teaching, welcoming new, experimental, yet useful features.

In this talk I present Liquid Haskell, a refinement type checker in which formal specifications are expressed as a combination of Haskell's types and expressions and are automatically checked against real Haskell code. This natural integration of specifications into the language, combined with automatic checking, has established Liquid Haskell as a usable verifier, enthusiastically accepted by both industrial and academic Haskell users. Recently, I turned Liquid Haskell into a theorem prover, in which arbitrary theorems about Haskell functions can be proved within the language. As a consequence, Liquid Haskell can be used in Haskell courses to teach the principles of mechanized theorem proving.

Turning a general-purpose language into a theorem prover opens up new research questions – e.g., can theorems be used for runtime optimizations of existing real-world applications? – that I plan to explore in the future.
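
The flavor of a refinement specification can be imitated, dynamically, in Python: the decorator below attaches a predicate such as {v:Int | v >= 0} to a function's result. This is only an analogy of my own devising; Liquid Haskell writes the predicate in the type and discharges it statically with an SMT solver, so no runtime check is ever executed.

```python
def refine(pred, spec):
    """Attach a refinement predicate to a function's result and check it
    at call time. (Liquid Haskell proves such predicates at compile time;
    this decorator only illustrates the shape of a refinement.)"""
    def wrap(f):
        def checked(*args, **kwargs):
            result = f(*args, **kwargs)
            assert pred(result), f"refinement {spec!r} violated by {result!r}"
            return result
        return checked
    return wrap


@refine(lambda v: v >= 0, "{v:Int | v >= 0}")
def abs_diff(a, b):
    # the result is provably non-negative; a static checker could verify
    # this without ever running the function
    return a - b if a >= b else b - a
```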

Title: Transforming Computation and Communication Patterns for High-Performance
Speaker: Maryam Mehri Dehnavi
Date: February 28, 2018, 19:00-20:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1062/

Synopsis:

The emergence of stupendously large matrices in applications such as data mining and large-scale scientific simulations has rendered classical software frameworks and numerical methods inadequate in many situations. In this talk, I will demonstrate how building domain-specific compilers and reformulating classical mathematical methods significantly improve the performance and scalability of large-scale applications on modern computing platforms.

In the first part of the talk, I will introduce Sympiler, a domain-specific code generator that transforms computation patterns in sparse matrix methods for high performance. Specifically, I will demonstrate how decoupling symbolic analysis from numerical manipulation enables automatic optimization of sparse codes with static sparsity patterns. The performance of Sympiler-generated code will be compared to optimized library codes to demonstrate the effectiveness of symbolic decoupling.

In the second part of the talk, I will show that, through mathematical reformulation, communication patterns in classical optimization methods can be transformed to reduce communication costs. As a result, the performance of optimization algorithms is inherently improved when executed on distributed platforms, leading to significant speedups compared to the classical formulations.
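
The symbolic/numeric decoupling can be sketched for one classic kernel, sparse lower-triangular solve Lx = b: a symbolic pass computes, from the sparsity patterns alone, which entries of x can become nonzero, and the numeric pass then touches only those entries. The pure-Python sketch below (data layout and names are my own) mirrors this idea; Sympiler goes further and compiles the symbolic information into specialized generated code.

```python
def symbolic_reach(cols, b_nonzeros):
    """Symbolic phase: cols[j] lists the below-diagonal row indices with a
    nonzero in column j of lower-triangular L, and b_nonzeros is the
    nonzero pattern of b. Returns the sorted set of entries of x that can
    be nonzero. Runs once per sparsity pattern, then is reused for every
    numeric solve with that pattern."""
    reach, stack = set(), list(b_nonzeros)
    while stack:
        j = stack.pop()
        if j not in reach:
            reach.add(j)
            stack.extend(cols[j])   # a nonzero x[j] propagates to rows below
    return sorted(reach)


def numeric_solve(cols, vals, diag, b, reach):
    """Numeric phase: forward substitution restricted to the reach set.
    vals[j][i] holds L[i, j] for i in cols[j]; diag[j] holds L[j, j];
    b is a dict of its nonzero entries."""
    x = dict(b)
    for j in reach:
        xj = x.get(j, 0.0) / diag[j]
        x[j] = xj
        for i in cols[j]:
            x[i] = x.get(i, 0.0) - vals[j][i] * xj
    return x
```

When b is very sparse, the reach set is far smaller than the full dimension, so skipping the symbolic work on every solve is exactly the redundancy Sympiler eliminates.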

Title: Adaptive Adversarial Learning for a Diverse Visual World
Speaker: Judy Hoffman
Date: February 27, 2018, 19:00-20:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1073/

Synopsis:

Automated visual recognition is in increasingly high demand. However, despite tremendous performance improvements in recent years, state-of-the-art deep visual models learned on large-scale benchmark datasets still fail to generalize to the diverse visual world. In this talk I will discuss a general-purpose semi-supervised learning algorithm, domain adversarial learning, which facilitates the transfer of information between different visual environments and across different semantic tasks, thereby enabling recognition models to generalize to previously unseen worlds. I'll demonstrate applications of this approach to different visual tasks, such as semantic segmentation in driving scenes and transfer between still-image object recognition and video action recognition.

Title: Towards a More Stable Network Infrastructure
Speaker: Radhika Mittal
Date: February 26, 2018, 19:00-20:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1061/

Synopsis:

There have been many recent proposals to change the network infrastructure in order to meet different performance objectives. These changes are often difficult to deploy, either requiring specialized network switching hardware or greatly complicating network management. Rather than continuing to add new features to the network in an ad hoc manner, my work advocates a principled approach to meeting different performance objectives that leads to a more stable network infrastructure. This approach is based on two questions: First, can we avoid making changes to the network infrastructure by looking for solutions that only change the end-points? Second, when infrastructure changes are needed, can we make them universal in nature? In this talk, I will primarily focus on the second question, where I explore whether we can have a universal packet scheduling algorithm that can mimic all other scheduling algorithms. Towards the end, I will briefly present three examples in the context of wide-area and datacenter congestion control, where I tackle the first question of avoiding changes to the network infrastructure.
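
A concrete candidate in the universal packet scheduling line of work is Least Slack Time First (LSTF): always transmit the queued packet with the least remaining slack. The single-link simulation below is an illustrative sketch (the packet-tuple layout and names are my own); note that with a fixed per-packet slack, ordering by least remaining slack is identical to ordering by the static deadline arrival + slack, which is what the heap priority uses.

```python
import heapq

def lstf_order(packets):
    """Simulate one output link under Least Slack Time First.
    Each packet is (arrival_time, transmit_time, slack); the priority
    arrival + slack is the packet's deadline, which ranks packets the
    same way as least remaining slack. Returns indices in the order
    the link transmits them."""
    by_arrival = sorted(range(len(packets)), key=lambda i: packets[i][0])
    now, heap, order, nxt = 0.0, [], [], 0
    while nxt < len(by_arrival) or heap:
        # admit every packet that has arrived by the current time
        while nxt < len(by_arrival) and packets[by_arrival[nxt]][0] <= now:
            i = by_arrival[nxt]
            heapq.heappush(heap, (packets[i][0] + packets[i][2], i))
            nxt += 1
        if not heap:                       # link idle until next arrival
            now = packets[by_arrival[nxt]][0]
            continue
        _, i = heapq.heappop(heap)         # least slack goes first
        order.append(i)
        now += packets[i][1]               # occupy the link while transmitting
    return order
```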

Title: Programming Abstractions for Data Stream Processing Systems
Speaker: Konstantinos Mamouras
Date: February 23, 2018, 16:00-17:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1060/

Synopsis:

Modern information processing systems increasingly demand the ability to continuously process incoming streaming data in a timely and reliable manner. Data streams arise in diverse applications ranging from patient monitoring in healthcare to real-time decision-making in emerging Internet of Things (IoT) systems. In this talk, I will present my research on the design of programming abstractions for stream processing that enable guarantees of correctness and predictable performance. First, I will present StreamQRE, a declarative domain-specific language and execution engine for stream processing. StreamQRE offers strong theoretical guarantees for resource usage, and its performance on realistic workloads is shown to compare favorably against other popular streaming engines. As a case study, I will discuss the application of StreamQRE to the design space exploration of alternative algorithms for cardiac arrhythmia detection. Finally, I will introduce a type-based framework for the logical specification of distributed streaming computations that facilitates correct and efficient deployment on distributed architectures such as Apache Storm.
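
The declarative style can be hinted at with a toy split-then-aggregate pipeline: divide the stream into marker-terminated episodes and emit one summary per episode. This is only a Python pastiche of the pattern, with invented names; real StreamQRE programs are quantitative regular expressions built from combinators and compiled to resource-bounded evaluators, not Python generators.

```python
def split_episodes(stream, is_marker):
    """Split a stream into 'episodes' closed by marker elements, the
    basic iteration pattern of a quantitative regular expression."""
    episode = []
    for item in stream:
        if is_marker(item):
            yield episode      # a marker completes the current episode
            episode = []
        else:
            episode.append(item)


def per_episode_average(stream, is_marker):
    """Aggregate each completed episode into one output value."""
    return [sum(e) / len(e) for e in split_episodes(stream, is_marker) if e]
```

In a monitoring setting, the markers might be heartbeats delimiting measurement windows, with one summary statistic streamed out per window.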

Title: Building Correct Programs
Speaker: William Mansky
Date: February 9, 2018, 16:00-17:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1059/

Synopsis:

Everyday code, written in imperative languages, is plagued by bugs: missed corner cases, logic errors, and even compiler bugs. Programming language theory and formal methods give us the techniques we need to prove that programs satisfy their specifications, are memory safe, and/or use concurrency and synchronization correctly. Several recent projects have demonstrated that we can construct bug-free software at scale, using logic, semantics, and interactive theorem proving. I will present my work in verifying two concurrent applications, a dynamic race detector and a messaging system for autonomous vehicles, and describe an ongoing project in verifying high-performance concurrent data structures. By combining detailed models of program behavior, state-of-the-art logics for memory and concurrency, and tools for constructing and checking mathematical proofs, we can now formally guarantee that real-world programs are bug-free.

Title: Sparse Linear Algebra in the Exascale Era
Speaker: Erin Carson
Date: February 7, 2018, 19:00-20:00 UTC
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1051/


END:VEVENT BEGIN:VEVENT UID:1052.event.events@cs.nyu.edu DTSTART:20180202T160000Z DTEND:20180202T170000Z DESCRIPTION:https://cs.nyu.edu/dynamic/news/colloquium/1052/\n\nSynopsis:\n\nOuter-loop applications\, such as optimization\, control\, uncertainty quantification\, and inference\, form a loop around a computational model and evaluate the model in each iteration of the loop at different inputs\, parameter configurations\, and coefficients. Using a high-fidelity model in each iteration of the loop guarantees high accuracies but often quickly exceeds available computational resources because evaluations of high-fidelity models typically are computationally expensive. Replacing the high-fidelity model with a low-cost\, low-fidelity model can lead to significant speedups but introduces an approximation error that is often hard to quantify and control. We introduce multifidelity methods that combine\, instead of replace\, the high-fidelity model with low-fidelity models. The overall premise of our multifidelity methods is that low-fidelity models are leveraged for speedup while occasional recourse is made to the high-fidelity model to establish accuracy guarantees. The focus of this talk is the multifidelity Monte Carlo method that samples low- and high-fidelity models to accelerate the Monte Carlo estimation of statistics of the high-fidelity model outputs. Our analysis shows that the multifidelity Monte Carlo method is optimal in the sense that the mean-squared error of the multifidelity estimator is minimized for the available computational resources. We provide a convergence analysis\, prove that adapting the low-fidelity models to the Monte Carlo sampling reduces the mean-squared error\, and give an outlook to multifidelity rare event simulation. Our numerical examples demonstrate that multifidelity Monte Carlo estimation provides unbiased estimators ("accuracy guarantees") and achieves speedups of orders of magnitude compared to crude Monte Carlo estimation that uses a single model alone. DTSTAMP:20180202T160000Z LOCATION:60 Fifth Avenue\, 150 SUMMARY:Data-Driven Multifidelity Methods for Monte Carlo Estimation by Benjamin Peherstorfer URL:https://cs.nyu.edu/dynamic/news/colloquium/1052/ X-ALT-DESC;FMTTYPE=text/html:https://cs.nyu.edu/dynamic/news/colloquium/1052/


END:VEVENT BEGIN:VEVENT UID:1050.event.events@cs.nyu.edu DTSTART:20180126T160000Z DTEND:20180126T170000Z DESCRIPTION:https://cs.nyu.edu/dynamic/news/colloquium/1050/\n\nSynopsis:\n\nOver 13 months in 2016--17 the US Federal Communications Commission conducted an "incentive auction" to repurpose radio spectrum from broadcast television to wireless internet. In the end\, the auction yielded $19.8 billion USD\, $10.05 billion USD of which was paid to 175 broadcasters for voluntarily relinquishing their licenses across 14 UHF channels. Stations that continued broadcasting were assigned potentially new channels to fit as densely as possible into the channels that remained. The government netted more than $7 billion USD (used to pay down the national debt) after covering costs (including retuning). A crucial element of the auction design was the construction of a solver\, dubbed SATFC\, that determined whether sets of stations could be "repacked" in this way\; it needed to run every time a station was given a price quote.\nThis talk describes the process by which we built SATFC and its impact on the auction. We adopted an approach we dub "deep optimization"\, taking a data-driven\, highly parametric\, and computationally intensive approach to solver design. More specifically\, to build SATFC we designed software that could pair both complete and local-search SAT-encoded feasibility checking with a wide range of domain-specific techniques\, such as constraint graph decomposition and novel caching mechanisms that allow for reuse of partial solutions from related\, solved problems. We then used automatic algorithm configuration techniques to construct a portfolio of eight complementary algorithms to be run in parallel\, aiming to achieve good performance on instances that arose in proprietary auction simulations. We found that within the short time budget required in practice\, SATFC solved more than 95% of the problems it encountered. Furthermore\, simulation results showed that the incentive auction paired with SATFC produced nearly optimal allocations in a restricted setting and substantially outperformed other alternatives at national scale. DTSTAMP:20180126T160000Z LOCATION:60 Fifth Avenue\, 150 SUMMARY:Incentive Auctions and Spectrum Repacking: A Case Study for "Deep Optimization" by Kevin Leyton-Brown URL:https://cs.nyu.edu/dynamic/news/colloquium/1050/ X-ALT-DESC;FMTTYPE=text/html:https://cs.nyu.edu/dynamic/news/colloquium/1050/


END:VEVENT X-WR-CALNAME:NYU CS Colloquia Calendar END:VCALENDAR