https://cs.nyu.edu/dynamic/news/colloquium/1050/

**BBR Congestion Control** by Neal Cardwell, Google
Date: December 12, 2017
Location: 60 Fifth Avenue, C15
URL: https://cs.nyu.edu/dynamic/news/colloquium/1049/

Synopsis:

This talk will introduce BBR, a ground-up redesign of Internet congestion control - the algorithm that decides how fast to send data over the network. The talk will discuss the motivations for BBR, its design, implementations, performance results, and deployment at Google for TCP and QUIC (it will also introduce QUIC, a new always-encrypted user-space UDP-based transport). BBR is now used for all TCP and QUIC Internet traffic from google.com, YouTube, and Google Cloud, and all TCP traffic between Google data centers. BBR is available as open source for Linux TCP and QUIC, and an implementation is under way for FreeBSD TCP at Netflix.

BBR was a close collaboration between Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson - all members of Google's make-tcp-fast project. The goal of this project is to evolve Internet transport via fundamental research and open source software. Contributions of the project include TFO (TCP Fast Open), TLP (Tail Loss Probe), RACK loss recovery, fq/pacing, BBR congestion control, and a large fraction of the git commits to the Linux kernel TCP code for the past five years. Team member Van Jacobson wrote the original TCP congestion control algorithm that has been the Internet standard for the past 30 years.
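As background for the talk, BBR's published design rests on two windowed estimates of the path: bottleneck bandwidth (the max of recent delivery-rate samples) and round-trip propagation time (the min of recent RTT samples), with sending paced at a gain times the bandwidth estimate and data in flight capped near the bandwidth-delay product. The sketch below illustrates only that model; the class and method names are mine, not the production Linux implementation:

```python
from collections import deque

class BBRModel:
    """Illustrative sketch of BBR's two path estimates (not the real code).

    BBR models the path with a windowed max of delivery-rate samples
    (bottleneck bandwidth, BtlBw) and a windowed min of RTT samples
    (round-trip propagation time, RTprop). It paces sends at
    pacing_gain * BtlBw and caps data in flight near the
    bandwidth-delay product BtlBw * RTprop.
    """

    def __init__(self, window=10):
        self.rates = deque(maxlen=window)  # recent delivery-rate samples (bits/s)
        self.rtts = deque(maxlen=window)   # recent RTT samples (seconds)

    def on_ack(self, delivery_rate, rtt):
        """Feed one ACK's delivery-rate and RTT sample into the model."""
        self.rates.append(delivery_rate)
        self.rtts.append(rtt)

    @property
    def btlbw(self):
        return max(self.rates)   # bottleneck-bandwidth estimate

    @property
    def rtprop(self):
        return min(self.rtts)    # propagation-delay estimate

    def pacing_rate(self, gain=1.0):
        return gain * self.btlbw

    def inflight_cap(self, gain=1.0):
        return gain * self.btlbw * self.rtprop  # bandwidth-delay product
```

For example, after samples of 12 Mbit/s at 40 ms and 10 Mbit/s at 50 ms, the model paces at 12 Mbit/s and caps inflight at the 480-kbit bandwidth-delay product.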

**New York Area Theory Day** by IBM/NYU/Columbia
Date: December 1, 2017
Location: Warren Weaver Hall, 109
URL: https://cs.nyu.edu/dynamic/news/colloquium/1048/

Synopsis:

The New York Area Theory Day is a semi-annual conference that aims to bring together people in the New York metropolitan area for one day of interaction and discussion. The meeting is free and open to everyone; in particular, students are encouraged to attend.

For directions, please see here.

To subscribe to our mailing list, follow the instructions here.

Organizers:

- Alex Andoni
- Yevgeniy Dodis
- Krzysztof Onak

Program:

| Time | Session |
| --- | --- |
| 9:30 - 10:00 | Coffee and bagels |
| 10:00 - 10:55 | Professor Christos Papadimitriou: A Computer Scientist Thinks about the Brain |
| 10:55 - 11:10 | Coffee break |
| 11:10 - 12:10 | Short talks: Shay Solomon, Ilya Razenshteyn, Sepideh Mahabadi |
| 12:10 - 2:00 | Lunch break |
| 2:00 - 2:55 | Dr. Ilan Komargodski: White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing |
| 2:55 - 3:15 | Short talk: Sandro Coretti |
| 3:15 - 3:35 | Coffee break |
| 3:35 - 3:55 | Short talk: Greg Bodwin |
| 3:55 - 4:50 | Professor Virginia Vassilevska Williams: Fine-Grained Complexity of Problems in P |
| 5:00 - 7:00 | Follow-up social |

**Long Talks**

**Christos H. Papadimitriou** (Columbia University)

*A Computer Scientist Thinks about the Brain*

Understanding the ways in which the Brain gives rise to the Mind (memory, behavior, cognition, intelligence, language) is the most challenging problem facing science today. As the answer seems likely to be at least partly computational, computer scientists should work on this problem. Using a simplified model of Hebbian plasticity and employing random graph theory and Bernoulli approximations, we reproduce a form of association between memories recently observed experimentally in humans. We discuss several implications and extensions. (Joint work with W. Maass and S. Vempala.)


**Ilan Komargodski** (Cornell Tech)

*White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing*

Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? This problem is in TFNP, the class of search problems with guaranteed solutions. If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black-box, where the box must be queried to determine whether a given edge exists, then a large number of queries must be issued.

- What if one is given a program or circuit ("white-box") for computing the existence of an edge. Does the search problem remain hard?
- Can we generically translate all TFNP black-box hardness into white-box hardness?
- Does the problem remain hard if the black-box instance is small?

We will answer all of these questions and discuss related questions in the setting of property testing.

Joint work with Moni Naor and Eylon Yogev.


**Virginia Vassilevska Williams** (MIT)

*Fine-Grained Complexity of Problems in P*

A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in t(n)^{1-eps} time for eps > 0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of their classical algorithms from the 1950s and 1960s.

Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been NP-hardness reductions, basing hardness on P ≠ NP. However, when we care about the exact running time (as opposed to merely polynomial vs. non-polynomial), NP-hardness is not applicable, especially if the problem is already solvable in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. Mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require essentially t(n) time for some t, and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes.

The main key problems used to base hardness on have been the 3SUM problem, the CNF-SAT problem (based on the Strong Exponential Time Hypothesis (SETH)), and the All Pairs Shortest Paths problem. Research on SETH-based lower bounds has flourished in particular in recent years, showing that the classical algorithms are optimal for problems such as Approximate Diameter, Edit Distance, Frechet Distance, Longest Common Subsequence, etc.

In this talk I will give an overview of the current progress in this area of study, and will highlight some exciting new developments.
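For concreteness, 3SUM asks whether three entries of a list sum to zero; the classical quadratic algorithm (sort, then scan each prefix element with two pointers) is the baseline that fine-grained hardness results conjecture to be essentially optimal. A sketch:

```python
def three_sum(nums):
    """Classical O(n^2) algorithm for 3SUM: do three entries sum to 0?

    Sort the input, then for each index i sweep two pointers inward over
    the suffix; the sum is monotone in each pointer, so each sweep is
    linear and the whole scan is quadratic.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # need a larger middle element
            else:
                hi -= 1   # need a smaller last element
    return False
```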

**Short Talks**

\n**Shay Solomon (IBM Research)**

*Fully Dynamic Maximal Matching*

Graph matching is one of the most well-studied problems in combinatorial optimization, with a rich plethora of applications. Nevertheless, until recently little was known about this problem in real-life dynamic networks, which aim to model the constantly changing physical world.

Focusing on the maximal matching problem, there is a trivial linear time algorithm for computing such a matching in static networks, which naturally translates into an algorithm with a constant amortized cost in the incremental case. In this talk I will discuss some of the challenges that we overcame to obtain a constant amortized cost algorithm in the fully dynamic case.
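In the static setting, a maximal matching comes from a single greedy pass over the edges; the dynamic problem asks for comparable amortized cost under arbitrary insertions and deletions. A sketch of the trivial static baseline (names illustrative):

```python
def maximal_matching(n, edges):
    """Trivial linear-time greedy maximal matching for a static graph.

    Scan the edges once and take an edge whenever both endpoints are
    still free. The result is maximal: any edge left unmatched touches
    a matched vertex, so no edge can be added.
    """
    matched = [False] * n
    matching = []
    for u, v in edges:
        if not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            matching.append((u, v))
    return matching
```

On the path 0-1-2-3 with edges (0,1), (1,2), (2,3), the scan takes (0,1), skips (1,2), and takes (2,3).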


**Ilya Razenshteyn** (Columbia)

*Practical Data-Dependent Metric Compression with Provable Guarantees*

How well can one compress a dataset of points from a high-dimensional space while preserving pairwise distances? Indyk and Wagner have recently obtained almost optimal bounds for this problem, but their construction (based on hierarchical clustering) is not practical. In this talk, I will show a new practical, quadtree-based compression scheme whose provable performance essentially matches that of the result of Indyk and Wagner.

In addition to the theoretical results, we will see an experimental comparison of the new scheme and Product Quantization (PQ), one of the most popular heuristics for distance-preserving compression, on several datasets. Unlike PQ and other heuristics that rely on the clusterability of the dataset, the new algorithm ends up being more robust.

The talk is based on joint work with Piotr Indyk and Tal Wagner.


**Sepideh Mahabadi** (Columbia)

*Set Cover in Sub-Linear Time*

We study the classic set cover problem from the perspective of sub-linear algorithms. Given access to a collection of $m$ sets over $n$ elements in the query model, we show that sub-linear algorithms derived from existing techniques have almost tight query complexities.

On one hand, first we show that an adaptation of streaming algorithms to the sub-linear query model returns an $\alpha$-approximate cover using $\tilde O(m(n/k)^{1/(\alpha-1)} + nk)$ queries to the input, where $k$ denotes the value of a minimum set cover. We then complement this upper bound by proving that for lower values of $k$, the required number of queries is $\tilde \Omega(m(n/k)^{1/(2\alpha)})$, even for estimating the optimal cover size. Moreover, we prove that even checking whether a given collection of sets covers all the elements would require $\Omega(nk)$ queries. These two lower bounds provide strong evidence that the upper bound is almost tight for certain values of the parameter $k$. On the other hand, we show that this bound is not optimal for larger values of the parameter $k$, as there exists a $(1+\eps)$-approximation algorithm with $\tilde O(mn/k\eps^2)$ queries. We show that this bound is essentially tight for sufficiently small constant $\eps$, by establishing a lower bound of $\tilde \Omega(mn/k)$ query complexity.

This is joint work with Piotr Indyk, Ronitt Rubinfeld, Ali Vakilian, and Anak Yodpinyanee.
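For background, in the full-access model the textbook greedy algorithm achieves an $O(\log n)$-approximate cover; query-bounded $\alpha$-approximation algorithms like those above are measured against this kind of guarantee. A standard sketch (not from the talk):

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover (ln n approximation), full-access model.

    Repeatedly pick the set covering the most still-uncovered elements.
    Returns indices of the chosen sets, or None if the collection does
    not cover the universe at all.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            return None  # remaining elements appear in no set
        uncovered -= sets[best]
        chosen.append(best)
    return chosen
```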


**Sandro Coretti** (New York University)

*Random Oracles and Non-Uniformity*

Hash functions are ubiquitous in cryptography. They are widely used in practice to build one-way functions (OWFs), collision-resistant hash functions (CRHFs), pseudorandom functions/generators (PRFs/PRGs), message authentication codes (MACs), etc. Moreover, they are often used together with other computational assumptions to prove the security of higher-level applications, such as Fiat-Shamir heuristics for signature schemes, full-domain-hash signatures, or OAEP encryption, among many others.

The security of such schemes is commonly analyzed in the random-oracle model (ROM), in which the hash function is modeled as a truly random function that can be queried by an attacker at a bounded number of points. The traditional ROM, however, does not account for the fact that a non-uniform attacker may obtain arbitrary advice about the hash function H before attacking a construction. As a result, bounds derived in the ROM do not accurately reflect security against such attackers.

To remedy this situation, Unruh (CRYPTO '07) considered the auxiliary-input ROM (AI-ROM), in which an attacker may obtain (a bounded amount of) arbitrary leakage about the random oracle before attacking a cryptographic scheme. In this talk we discuss significant improvements to Unruh's presampling technique, which, intuitively, allows a leaky random oracle to be replaced by one that is adversarially fixed on a subset of the coordinates but uniformly random otherwise. We also show how this improved presampling leads to better and easily derivable AI-ROM security bounds for a number of cryptographic applications. We conclude by noting that in several cases (such as OWFs, PRGs, and CRHFs) there are still significant gaps between the derived bounds and the best attacks, leaving interesting open problems.


**Greg Bodwin** (MIT)

*Compressing Graphs While Preserving Their Distances*

Abstract TBA

**Bitcoin, Payment Privacy and Beyond: Blockchains as Limited Trusted Third Parties** by Ian Miers
Date: November 17, 2017
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1009/

Synopsis:

Bitcoin and blockchains have received large amounts of attention both inside and outside academia. Much of this work has focused on how to improve blockchains themselves, either in terms of performance, features, or theoretical underpinnings. This talk will focus primarily on another question: what can blockchains do for computer security and cryptography? It will cover blockchains as a solution to preexisting problems in e-cash, fairness in MPC, and more broadly as a tool for assured communication and state-keeping in cryptographic protocols.

**The Path to Exascale Computing** by Alessandro Morari and Abdullah Kay
Date: October 27, 2017
Location: 60 Fifth Avenue, 150
URL: https://cs.nyu.edu/dynamic/news/colloquium/1010/

Synopsis:

For decades, supercomputers have been at the frontier of computing technology and have opened new ways for scientific discovery. After almost a decade since reaching Petascale, the challenge on the horizon is to reach Exascale computing. The creation of an Exascale system will have major impacts in key areas including national security, medicine, finance, and many other fields. In this talk, we will go over some of the basics of Supercomputing (High Performance Computing), talk about the IBM CORAL system, and then focus on Exascale computing. We also discuss current software/hardware trends and give an overview of the challenges the industry is facing to build an Exascale system.

**Under the Covers: The networks of a cloud provider** by Jeff Mogul
Date: October 11, 2017
Location: 60 Fifth Avenue, C15

https://cs.nyu.edu/dynamic/news/colloquium/1012/

Synopsis:

A successful cloud computing provider depends on a variety of fast, cost-effective, and reliable networks. This talk explains the requirements for cloud networks, and then discusses how a cloud provider such as Google creates virtual networks, and the underlying data-center and wide-area networks that support these virtual networks.

**Tiny Functions for Codecs, Compilation, and (Maybe) Soon Everything** by Keith Winstein
Date: October 10, 2017
Location: 60 Fifth Avenue, C15
URL: https://cs.nyu.edu/dynamic/news/colloquium/1013/

Synopsis:

By expressing systems as compositions of tiny pure functions, we've found it's possible to build better applications for video and image processing and for many other tasks that currently occupy our computers: compiling software, machine learning, data exploration, etc. I'll discuss our experience with systems that demonstrate this basic idea. ExCamera parallelizes video encoding into thousands of tiny tasks, each handling a fraction of a second of video, much shorter than the interval between keyframes, and executing in parallel on AWS Lambda. Rhino uses a network-aware video codec, and a video-aware transport protocol, for real-time videoconferencing; this architecture substantially outperforms more loosely-coupled applications (Skype, Facetime, Hangouts, and WebRTC) in both delay and visual quality. Lepton uses a purely functional JPEG/VP8 transcoder to compress images in parallel across a distributed network filesystem with arbitrary block boundaries; this open-source system is in production at Dropbox and has compressed, by 23%, more than 200 petabytes of user JPEGs.

Based on our experience, we propose a general abstraction for outsourced morsels of computation, called cloud "thunks": stateless closures that describe their data-dependencies by content hash. We have created a tool that infers these thunks from an arbitrary build system (make, cmake, etc.) and compiles software with 1000-way parallelism on public functions-as-a-service infrastructure like AWS Lambda. We believe that functionalizing applications in this way will fuel a new wave of "general-purpose lambda computing," permitting us to turn many time-consuming processes into large numbers of lambdas executing in parallel with low latency in the cloud.
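The content-hash naming idea can be illustrated with a toy sketch: a thunk's identity is a hash of its code plus the hashes of its data dependencies, so identical computations get the same name, deduplicate, and can be memoized by name. Everything here (`thunk_id`, `force`) is a hypothetical illustration, not the authors' actual tool or on-disk format:

```python
import hashlib
import json

def thunk_id(code, dep_hashes):
    """Name a unit of computation by a hash of its code and the hashes
    of the data it depends on (dependency order is normalized away)."""
    payload = json.dumps({"code": code, "deps": sorted(dep_hashes)},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

results = {}  # thunk id -> cached result

def force(tid, fn, args):
    """Evaluate a thunk once; identical thunks reuse the cached result."""
    if tid not in results:
        results[tid] = fn(*args)
    return results[tid]
```

Because the name covers the full dependency closure, a scheduler can safely run thunks anywhere (e.g. on a functions-as-a-service platform) and share results across machines keyed only by the hash.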

NYU CS Colloquia Calendar