Processing Computer Science by Alexander
Computers, to my untrained mind, are things to be programmed to achieve useful tasks, such as attempting to run Microsoft software or challenging Russian grandmasters in chess.
I have never fully understood the science behind computer science, even after inflicting several classes on the subject upon myself as an undergraduate. Most of what I learned there was that a kilobyte is really 1024 bytes, not 1000; that computers count in binary; and that programming in C++ is hard.
And what was scientific about that?
Science is a process of research that yields insights into mysteries as yet unexplained, and most scientific disciplines do a great job of imparting relevance to the long-dead men (and occasionally women) who committed acts of discovery on their way to tenure. Just ask any high schooler to tell you what they know about gravity and you'll likely hear that it was discovered by Galileo, Isaac Newton, or maybe Albert Einstein. That this approximation of information can penetrate their Facebook-addled, YouTube-junkie brains is a testament to how hard schools push it.
Now quick, name a computer scientist. Steve Jobs does not count.
I've read more than a little about computers, their creation, and the rapid improvements they've achieved in the past half-century, and I still find myself at a loss on this very question.
The science of these devices seemed to me to be a matter of making them speedier, getting them to teraflop faster or float more points or whatever mumbo-jumbo IBM made up to sell more of its heavy iron.
Such was the state of my perception until I encountered Natural Computing, due out this May from W.W. Norton. The book, by computer scientist Dennis Shasha and reporter Cathy Lazere, does much to demystify what computer scientists do, as well as to review the current state of research in the field.
It's the sort of book that's perfect for a college student thinking about a career in computer science, or trying to understand which academic advisors to pick for his or her thesis.
For the Silicon Valley set, it's a good way to catch up with what's new and who is working on it. The topics covered may be a touch too cutting edge for quick commercialization, but it's the sort of book I'd pick up if I wanted to try and keep pace with someone like Steve Jurvetson.
The book features profiles of 15 computer scientists, the problems each tackles and the progress each has made. Think of it as a modern day version of Vasari's The Lives of the Artists.
The book's sections cover adaptive computing, genetic and biological computing, and the merging of physics and computer science, and they make for interesting reading. In fact, the subject matter of the book is so interesting as to mask the sometimes thin reporting, which relies on a single source for each section. It might have been interesting or useful to hear from each scientist's peers, research associates, or even detractors. Science does not happen in a vacuum, and such a panoply of voices can demonstrate the process and procedure of progress.
Of course, the subject matter is difficult enough without confusing it with too many other inputs, and the authors do a remarkable job of making it not just readable but also enjoyable. I found myself nodding and understanding exactly what delineates digital programming from analog programming and the implications of moving from one to the other for certain applications.
Yet if I had read that a scientist who got his start microwaving guitars for Gibson had wired up a piece of Jell-O to run a "Hello World" program, I would have thought it was yet another dumb Google April Fool's joke. But that's exactly what Jonathan Mills, a computer scientist at Indiana University, Bloomington, has done. It's an important first step toward modeling real-world problems and natural events more simply.
Of greatest interest to those in Silicon Valley may be the section on Jake Loveless, a financial trader who uses "K" and other esoteric computer science, such as genetic algorithms, to optimize returns for hedge funds. I've long wondered what goes on inside the black boxes in the proprietary trading divisions of the financial behemoths that rule Wall Street. Maybe an update is in order describing just how all these systems seemed to glitch simultaneously. One wonders if they were running Windows.
In this breezy overview of current trends in computer design and software, computer science professor Shasha and writer-editor Lazere profile 15 computer scientists working on the application of "evolutionary techniques" like natural selection to robots exploring distant planets, next generation pharmaceutical designs, "analog programming," and more. While traditional computing relies on "skills learned in the last few hundred years of human history," pioneer Rodney Brooks looked to solutions developed over millennia of insect evolution, hypothesizing a robot that interacts directly with the world using touch and sonar, rather than a digital representation; today, Brooks designs bomb-disarming robots that crawl on "articulated pogo-stick sensing devices that work independently." In finance, Jake Loveless perfected "micromarket trading," which allows computers to detect patterns and adapt to changes over the very short term (such as minute-by-minute price and volume changes). Other profiles look at "computers" built out of DNA, the use of viruses to design new drugs, and other ways scientists are planning our escape from "the digital electronic prison" that dominates mainstream computing. Amateur tech enthusiasts should be absorbed by this knowledgeable but welcoming look at the bleeding edge of computing. (Mar.)
"Natural Computing" is the easiest read of the three, consisting of short biographical sketches of 14 people who have made important advances in computing. They range from MIT scientist Rodney Brooks, whose 1990 paper "Elephants Don't Play Chess" changed the way computer scientists think about artificial intelligence, to Jake Loveless, a college dropout who went on to make a fortune using computers in the stock market.
The biographies, by Dennis Shasha and Cathy Lazere, are bite-size -- no more than six pages or so -- and the technical material is segregated in sidebars so that the reader doesn't get bogged down unless he or she wants to. In the end, though, as entertaining as the biographies were, I finished them feeling somewhat up in the air. The important advances presented are scattered all over the field of computing, with no obvious connections. I would have liked a little more coherence.
May 10, 2010
Nature and technology may seem worlds apart, but New York University Computer Scientist Dennis Shasha maintains that the natural world can bolster the capacity of today's most sophisticated machines. In Natural Computing: DNA, Quantum Bits, and the Future of Smart Machines, Shasha and co-author Cathy Lazere describe the work of 15 pioneers who have successfully harnessed nature's power in advancing technology.
Increasingly complex processors and software have been utilized to overcome increasingly complex problems, but what happens when technology reaches the cognitive limits of the designers? Instead of trying to design for all scenarios, engineers are designing intelligent machines that synthesize and adapt to the world in which they operate.
Shasha and Lazere recount work in a promising field, "natural computing," that has started to yield machines that exceed the capability of traditional technologies. Taking inspiration from life, scientists have learned how to create robots that move intelligently on Mars, spacecraft that can heal themselves, and even methods to trade successfully on Wall Street, all through an increased understanding of how evolutionary ideas can aid in solving fundamentally non-algorithmic problems.
They also review the work of scientists, such as NYU Chemistry Professor Nadrian Seeman, who has successfully programmed not the behavior of certain types of software, but more significantly, the actions of life's most fundamental building blocks through a "DNA assembly line." While this may sound exotic, it can also be practical.
"If you want a device that will repair skin, bones, or arteries," authors Shasha and Lazere explain, "it makes far more sense to build the device out of DNA, viruses, or cells, than to build it out of electronics."
Shasha is a professor at NYU’s Courant Institute of Mathematical Sciences and Lazere, a freelance writer, is a former editor at the Economist Intelligence Unit. They previously co-authored Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists (Springer).
For review copies, contact Alice Rha, W. W. Norton & Company, at 212.790.4295 or arha@wwnorton.com.
Posted by David Foster on May 16th, 2010
Present-day computers are remarkably fast: a garden-variety laptop can do over a billion basic operations (additions, multiplications, etc) every second. The machine on which you are reading this can do more calculating, if you ask it nicely, than the entire population of the United States. And supercomputers are available which are much faster.
Yet there are important problems for which all this computational capacity is completely inadequate. In their book Natural Computing, Dennis Shasha and Cathy Lazere describe the calculations necessary for the analysis of protein folding, which is important in biological research and particularly in drug design. Time must be divided into very short intervals of around one femtosecond, a millionth of a billionth of a second, and for each interval the interactions of all the atoms involved in the process must be calculated. Then do it again for the next femtosecond, and the next, and the next.
To perform this calculation for one millisecond of real time (which is apparently a biologically-interesting interval) would require 100,000 years on a conventional computer.
Under the sponsorship of David Shaw (of the investment and technology firm D. E. Shaw & Co.), a specialized supercomputer has been built to address the protein-folding problem. Named Anton (after Anton van Leeuwenhoek, the pioneering microscopist), this machine can simulate 10 microseconds of protein-folding activity in only one day of real time, implying that the important 1-millisecond period can be simulated in only 100 days.
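The scale of those figures is easy to sanity-check with back-of-the-envelope arithmetic. Here is a rough sketch in Python; the femtosecond step size and Anton's quoted throughput of roughly 10 microseconds per day come from the passage above, and everything else is just arithmetic, not figures from the book's own calculation:

```python
# Rough arithmetic behind the protein-folding numbers quoted above.
FEMTOSECOND = 1e-15      # seconds of simulated time per time step
TARGET_TIME = 1e-3       # 1 millisecond of simulated protein motion

steps_needed = TARGET_TIME / FEMTOSECOND      # about 10**12 time steps
print(f"time steps for 1 ms: {steps_needed:.0e}")

# Anton advances roughly 10 microseconds of simulated time per day of
# real time, so the 1-millisecond target takes on the order of:
ANTON_RATE = 10e-6       # simulated seconds per real day
print(f"days on Anton: {TARGET_TIME / ANTON_RATE:.0f}")   # ~100 days
```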
An alternative approach to the problem has been taken by the project Folding@Home, in which individuals contribute their unused computer time (PCs, game machines such as the Playstation 3, etc) to a vast distributed-computing network, which now has something like 400,000 participating machines.
It is sobering to think about what vast computational resources are necessary to even begin to simulate what tiny bits of nature do all the time.
And note that the protein-folding problem is a deterministic physical problem, without the complexities of human behavior which are involved in economic modeling.
As the senior managing director of a technology firm that employs algorithms in their most complex forms, I spend a lot of time trying to explain, via nature-centric analogies, how these formulae work. The most cutting-edge algorithms are known as "genetic algorithms," because they self-adapt their "recipes" through interactions with a wider environment of stimuli, thereby approximating evolutionary responses found in nature. The hardest part about explaining that to people comes from overcoming their bias toward unnatural silicon solutions, as opposed to the carbon-based pathways of discovery and adaptation through failure that define our human existence. Oddly enough, people tend to trust computers' seeming infallibility more than nature's trial and error.
But at the same time, people fear a more highly technologized future, because they assume it will be less natural. In truth, technology, including computing, will evolve more in the direction of nature than the other way around, and will fuse with it increasingly on the latter's terms. When we watch the movie "Avatar," we see its naturally "wired" planet of Pandora through the prism of our own nostalgia for the primitive. But as a fascinating new book on computing argues, Pandora's back-to-nature alternative to our own frighteningly technologized trajectory is not the lost past, but rather the inevitable future of the path we find ourselves on.
In "Natural Computing: DNA, Quantum Bits, and the Future of Smart Machines," Dennis Shasha and Cathy Lazere draw upon interviews with 15 leading scientists working in disparate fields to explore the outer reaches of computing. They expected to write a book about a future world dominated by thinking machines, but instead found that the common vision to have emerged across all of these fields is that "the future of computing is a synthesis with nature." The way forward is presented less as a choice and more as a necessity: In computing as in deep space, to really "go where no man has gone before," we'll need machines that can self-heal, learn and evolve -- just like humans.
Shasha and Lazere identify three strands to this vision. First, biological thinking has inspired new approaches to digital computing. The classic digital methodology is to pose a question, generate a model, code a program, and run the calculations that yield the answer. But what if the target in question contains too many "unknown unknowns," meaning it can't be directly modeled? What if you need to wander in the desert for a while, making unexpected discoveries along the way? That's where genetic algorithms come into play, in an iterative and highly competitive process: You start with a population of candidates, tweak them with random changes, and after evaluating their relative fitness, you combine the best ones into even better combinations that move you closer to the answers. Nature may abhor a vacuum, but she admires creative workarounds.
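For readers who want to see that loop in code, here is a minimal genetic-algorithm sketch in Python. The toy problem (evolving a bit string toward an arbitrary target) and every parameter choice are my own illustration, not an example taken from the book:

```python
# A toy genetic algorithm: population, random mutation, fitness-based
# selection, and crossover, as described in the paragraph above.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # the "answer" the search wanders toward
POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 200, 0.05

def fitness(candidate):
    # Count positions that match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Tweak a candidate with small random changes.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in candidate]

def crossover(a, b):
    # Combine two fit candidates into a new one.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Evaluate relative fitness and keep the better half as parents.
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 2]
    # Breed mutated offspring to replace the rest of the population.
    offspring = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + offspring

best = max(population, key=fitness)
print(f"best candidate after {generation + 1} generations: {best} (fitness {fitness(best)})")
```

The loop mirrors the description above: evaluate fitness, keep the fittest candidates, recombine them, and let random mutation keep the search wandering into places a fixed recipe would never visit.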
The second strand predicts that biological entities will eventually replace most silicon-based computing. We currently live in an environment dominated by a relatively small number of brand-name models that span great networks and enjoy lengthy careers. But as Shasha and Lazere note, "computers made of bacteria or viruses come by the million, have no names, and are provincial -- they communicate only with their neighbors." Yes, they often fail. But according to the authors, a complex computing machine made up of trillions of not-all-that-smart and prone-to-failure cells can nonetheless tackle the most complex challenges. Don't believe it? Just think of the human body, which can "run, think, and love, even though none of our individual cells can do those things."
Finally, the third strand of this new vision for computing says we'll inevitably move past much of the binary calculation that defines digital computing (answers cast in 0s and 1s) and toward measuring answers that come with naturally unique signatures. Unlike the industrial model of "one size fits all," in which form follows function, solutions will be customized for each organism, allowing form to follow nature. The book offers as an example a cellular computer that monitors arteries from inside the human body. When it detects a slight abnormality in its host's heartbeat, it transmits its findings to an outside computer that then designs a bacterial cleaning agent specifically for the host's body. A week later the remedy arrives in the mail.
The paradigm shift here cannot be overstated. In our current world, computer programs either work or fail, and changing software, according to Shasha and Lazere, resembles heart surgery: "You want everything in place before you wake the patient," which in this case means rebooting your PC. By contrast, in a more naturalized future computing environment, we'll expect our computers to fail more often, but to recover on their own -- in effect, fending for themselves more and more. Instead of finding them alien, argue Shasha and Lazere, we may find ourselves "feeling affection for these machines," because the fact that they tackle harder problems, sometimes make mistakes, and repair themselves will make them "more human."
Reading the book, I came away with the comforting thought that the mindset of future computers will seem far less alien to my kids than to me. I came of age in a world where science seemed more of an exogenous, uncontrollable threat (e.g., nuclear war) than an intimate, trustworthy companion. But my children, having grown up totally networked from day one, seem both more comfortable with these tools -- and with the wandering, trial-and-error logic they naturally reward. In my childhood, you needed to bat .300 to make the town's Little League team, and success was measured in the most binary of fashions: Did you get on base or not? But my kids think nothing of trying 5,000 different ways to complete a role-playing game, learning from each success and failure as they move along.
That's a major theme in "Natural Computing": Most of what we call computing today seeks to approximate skills that humans have learned only in the past few hundred years. So programmers tend to err on the side of bigness, complexity and ultra-reliability, when nature says to keep it simple, distributed and fault-tolerant. In the future, we will rely less on the giant, central processing unit and more on the swarm of relatively dumb machines that operate according to a few simple rules. One man's "shallow thinking" is another man's evolutionary leap in networked intelligence.
As one scientist interviewed in the book put it, evolutionary computing "takes you to strange places." That may not sound comforting, but given all the "unknown unknowns" we're confronting thanks to globalization's continued advance, we'll need ever more capable and adaptable tools as we inexorably move toward the "undiscovered territory" that lies ahead.
Natural computing, as envisaged by Shasha and Lazere, will take on added importance as our technological society inevitably reaches a tipping point in terms of its impact on this planet. Without a doubt, we'll put Mother Earth through her most trying stress test in the decades ahead, so the more cognizant we become of her stunning complexity and resilience, and the more we leverage them, the better. Natural computing alone won't save the planet, but it could help us find the "unknown unknowns" we'll need to do so.
Thomas P.M. Barnett is senior managing director of Enterra Solutions LLC and a contributing editor for Esquire magazine. His latest book is "Great Powers: America and the World After Bush" (2009). His weekly WPR column, The New Rules, appears every Monday. Reach him and his blog at thomaspmbarnett.com.
In "Natural Computing," Dennis Shasha and Cathy Lazere profile Mr. Shaw and 14 other scientists who are pushing computer science beyond traditional boundaries. In particular, the scientists are trespassing into the realms of biology and physics and attempting to create computer designs and functions that will imitate organic reality. The authors look at both the research and the researchers themselves, providing basic background to the ideas under investigation and profiling the scientists, often tracing their paths from eccentric childhood passions to scientific breakthroughs.
Mr. Shaw's investigations are of particular interest. We learn from Mr. Shasha and Ms. Lazere that he studied physics in college and did his doctoral studies in artificial intelligence, eventually assembling the prototype of a machine that integrated logic and memory on the same chip. He thereby "avoided the so-called von Neumann bottleneck—the time required to communicate data between processor and memory," a breakthrough in the early 1980s with applications to Wall Street trading and quantitative investing.
For years, we are told, Mr. Shaw pondered leaving the financial world and returning to basic research, and he finally did in 2001. "Shaw thought protein dynamics problems provided a good fit for his interests and his background in developing novel machine architectures," the authors write. "He had a personal reason as well: his mother, father, and sister had all died of cancer." Mr. Shaw hoped that he could design a computational tool that would "someday be used to develop life-saving drugs."
His "Anton machine," named for the inventive early scientist Anton van Leeuwenhoek, creates simulations of protein development and operates at astounding computational rates, executing certain aspects of molecular dynamics "at an effective peak speed of more than 650 trillion operations per second." For the moment, the Anton machine is seen as a means of investigating biological processes. Whether it will eventually make life-saving drugs possible is anyone's guess. Mr. Shaw's goal, at any rate, is "to advance the state of scientific knowledge."
The Anton machine applies computer models to a biological problem, but biology often plays a role as a designer in Mr. Shasha and Ms. Lazere's narrative. The scientist Rodney Brooks, for instance, used the mechanics of insect movement to build robots that now explore the surface of the moon and of Mars. When he first began designing such machines, in the early 1990s, he believed (and has since been proved right) that "an intelligent system had to have its representations grounded in the physical world."
More such biological inspiration is needed, the authors say, in the field of computer science generally, now overly focused on brittle algorithms. The authors compare algorithms to the interchangeable parts of bricks-and-mortar mass manufacture. In the future, they argue, computers will have to avoid a static, fixed condition and find ways of evolving without direct supervision, even learning to heal themselves without a visit from a repairman.
Louis Qualls, another of the scientists profiled in "Natural Computing," needs such capabilities to design nuclear power plants for use on the surface of the moon, a task that challenges designers to create a safe, reliable reactor for an unfamiliar environment. He works at Oak Ridge National Laboratory using "genetic" algorithms to help put together complicated systems that are very difficult to design manually. He starts with basic design possibilities and lets the computer explore a myriad of possible tweaks, probing over and over again, say, how best to model radiation damage in metals, until a solution is reached—without human intervention. Such programs are implemented with traditional hardware, but the inspiration for the techniques comes from the world of living things.
Biology's role can be even more direct. The authors argue that the silicon in our familiar computers may be replaced, over time, with biological materials like bacteria and DNA. Scientists can already create origami out of DNA and make viruses in various designed shapes. Similar approaches to making a biochemical-hardware interface could be used to control biological processes with computers or make peripheral devices, like a "printer" that would output a new vaccine.
No one has yet built a computer based on quantum mechanics, but algorithms have been designed that slash the complexity of current approaches to imitating quantum processes. Some problems that are effectively impossible to solve on current machines will take only seconds on quantum computers. The boon to mankind will be great, though such computers will render useless current encryption techniques, affecting the utility of government security programs and the privacy of Web shopping sites. There is a downside to everything.
Mr. Hamilton is founder of Categorical Technology, LLC, a start-up that applies natural computing techniques to problems in Web publishing.