What has surprised me in computer technology

Ernest Davis
June 14, 2022

In recent days, some of my friends have claimed, here and there, that had anyone in 1990 or 1980 been told about the amazing things that AI systems like DALL-E 2, GPT-3, LaMDA, and so on can now do, they would have been flabbergasted, and they would have concluded that, clearly, by 2022 AI programs must be close to human-level intelligence, sentience, and so on. Based on my memory of my own state of mind thirty and forty years ago, I've denied that, and claimed that, if a fair presentation were made that included the weird failure modes as well as the successes, our earlier selves certainly would have been impressed, but we would also have thought that there was clearly a long way to go yet. Admittedly, we would certainly have been surprised at the way that these programs were constructed, using essentially tabula rasa machine learning methods on vast quantities of data; but not so much at the accomplishments as such.

It has been suggested that I am just jaded and being contrary and falling victim to 20/20 hindsight.

How the hell do I know what I find incredible? Credibility is an expanding field ... Sheer disbelief hardly registers on the face before the head is nodding with all the wisdom of instant hindsight. 'Archbishop Clegthorpe? Of course! The inevitable capstone to a career in veterinary medicine!'
— Tom Stoppard, Jumpers

Perhaps there's some truth in that. But I don't think it's the entire story, or most of it, because there have been many things in computer technology that have surprised me over the years and that still surprise me looking back on them, some positive and some negative. First, the positives, in roughly chronological order:

The World Wide Web. I don't know of anyone who foresaw how quickly and completely this would take off. And especially the degree to which everyone everywhere would devote themselves to creating content for other people. It turns out that one of the main things people want to do is to communicate and share, to a degree that I don't think was really known before the Web. David Gelernter foresaw some aspects of the Web in his 1991 book Mirror Worlds; but his vision was much more centralized, much more limited in terms of content creation, much more complicated technically, much less workable, much more centered on politics and civic interaction, and much less on shopping and fun.

It's worth noting that most of the recent successes of AI --- certainly all the successes in natural language and vision --- had the World Wide Web as a prerequisite. These AI systems are completely dependent on the vast quantities of information that are easily available on the web.

Web search. In all my professional career, I have never been so astonished as in 1995 when David McAllester came into my office and showed me Infoseek. It amazed me that it was even feasible. It still amazes me, even though I have taught a graduate course on web search engines seven times.

The triumph of corpus-based learning over symbolic reasoning. This occurred gradually, starting around 1990ish and picking up steam; by 2000, it was certainly decisive. I did not at all anticipate it, and I still find it extremely counterintuitive. Given that large surprise, though, most individual successes haven't seemed to me to be huge additional surprises, except as noted below.

Wikipedia. Like the World Wide Web, I find it completely amazing that the world's largest compendium of knowledge should have been built almost entirely by unpaid volunteers. Ten years ago or so, I organized a campaign nominating Jimmy Wales for an honorary degree at NYU (it didn't happen), and in my nomination document I described Wikipedia as "the noblest product of the Information Age." That still seems right to me.

YouTube, Netflix, etc. Here I can be quite certain of my surprise, because, as my wife likes to remind me, I confidently predicted to her around 1984 or so that we would not live to see this kind of functionality. I vastly underestimated what kind of bandwidth would be technologically feasible. With YouTube, yet again, part of my surprise is sociological: that so many people would spend so much time and effort creating and posting content.

Smart phones. Everyone agrees they're amazing, so the point need not be belabored. But it really is amazing that you can have so much compute power and functionality in something that fits in your pocket.

Watson beating the Jeopardy! champions. This was the second biggest surprise in my professional career. In 2010, I would certainly have bet long odds that this would not be possible for another 10 years.

The defeat of the Winograd Schema Challenge. All right, so this does come in the category of recent AI. I was certainly way overconfident. The email from Vid Kocijan in May 2019, announcing that their system had gotten 75%, made me jump out of my seat. Up to that moment I had been loudly assuring everyone I knew that the Challenge was completely safe until NLP researchers repented and turned to knowledge-based methods. (The current top systems score over 90% on the Challenge, but Kocijan's result was the breakthrough, establishing that LLM technology could definitely do better than chance.)

Negative surprises

And then there are the persistent problems that I am surprised haven't been fixed, the new problems that I am surprised have been created, and the goals where I am surprised so little progress has been made.

Computer interfaces. Why is it that operating systems are still such a misery to deal with? Why is it that every program on earth with more than a very limited set of functionalities is such a headache to learn, and so infuriating to deal with when something unexpected happens?

Character encodings. A couple of days ago, I cut and pasted a paragraph from my email into a LaTeX source document. Alphabetic characters, commas, periods, blanks, and line breaks; nothing fancy or esoteric like a percent sign or an accent grave or, God forbid, a quotation mark. But even so, a line break got turned into something that looked like an ordinary blank between two words when viewed in my text editor (vi) and when checked by spell, yet LaTeX didn't recognize it as white space, and the two words got jammed together in the PDF. Why the hell is white space still a problem in 2022?
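My best guess, though I never confirmed it, is that the culprit was an invisible Unicode character such as a no-break space (U+00A0) or a line separator (U+2028), which vi and spell render like an ordinary blank but which LaTeX does not treat as white space. Here is a minimal sketch, assuming a UTF-8 file, of the kind of script that will flag such characters; the file name and the particular list of suspects are just illustrative:

    # find_odd_whitespace.py: flag blank-looking non-ASCII characters in a text file.
    # The list of suspects and the category check are illustrative, not exhaustive.
    import sys
    import unicodedata

    SUSPECTS = {
        "\u00a0",  # no-break space
        "\u2007",  # figure space
        "\u2028",  # line separator
        "\u2029",  # paragraph separator
        "\u200b",  # zero-width space
        "\ufeff",  # zero-width no-break space / byte-order mark
    }

    def scan(path):
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                for col, ch in enumerate(line, start=1):
                    # Also catch any other non-ASCII space-like or invisible format character.
                    odd = ord(ch) > 127 and unicodedata.category(ch) in ("Zs", "Zl", "Zp", "Cf")
                    if ch in SUSPECTS or odd:
                        print(f"{path}:{lineno}:{col}: U+{ord(ch):04X} {unicodedata.name(ch, 'UNNAMED')}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)

Run as "python find_odd_whitespace.py paper.tex"; anything it reports is a character that an editor may display as a harmless blank but that LaTeX, or some other tool down the line, may silently mishandle.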

Programming languages. Why on earth has the computer world, in its collective wisdom, created however many it is --- 10,000? 50,000? --- different programming languages? And what kind of baleful cyber-incarnation of Gresham's law accounts for the fact that, after 70 years of programming language development and theory and design, appalling, misdesigned, arbitrary crap like C++ and PHP and Python is still built and deployed and taught and widely used?

Incompatible data formats. Similarly, why has computer technology created hundreds or thousands of incompatible encodings of images, audio, video, and so on?

Search engines, other than those of a few big companies. The big web search engines are wonderful things, and Amazon has a fine search engine. But otherwise, if a web site --- a university, a company, a government agency --- has its own individual web search engine, you're almost always better off ignoring it and using Google with a specification of the site. I swear to God that the search engine at BarnesAndNoble.com was built and is maintained by saboteurs from Amazon. You would think there would be all kinds of really good site-specific and domain-specific search engines.

[A minor mystery, by the way, is why Google search itself (I haven't tried the other major search engines) is so erratic for searches with quoted phrases. The number of "results" that it claims is often orders of magnitude larger than the number of results it actually finds, and its coverage can be spotty. You would think that if you give it a quoted phrase with the specification "site:gutenberg.org", it could find you all the instances of the quoted phrase in Gutenberg --- that seems as though it should be fairly straightforward. But demonstrably it doesn't.]

Computer security. Why in God's name has the computer world been so much more focused on adding cutesy features to programs that already are way overburdened with cutesy features instead of making sure that they are safe? And who thought that it was anything other than insane to put a lot of Things on the Internet, until these kinds of issues had been worked out?

Cryptocurrency. I'm not even going to start on that.

The non-impact of AI on cognitive science and epistemology. There's been some positive impact. In cognitive psychology, computational models of mind are respectable, which is a good thing. AI has been a boon to corpus linguistics and to certain kinds of large-scale analysis. A lot of math exists in formalized form online.

But 40 and 50 years ago, our hopes for that were much larger. We thought that having actual tools at hand would allow us to flesh out Carnap's project, tie together all of scientific theory, experiment, observation, and ordinary experience, and actually build a theory of science comparable to what Whitehead and Russell started for math. We thought that computational tools would allow us to tie together the many different features of language, and see how form, function, and denotation all fit together into a wonderful whole. We thought that computational analysis of learning would allow us to solve the philosophical problem of induction. We thought that AI would progress hand in hand with cognitive psychology and that AI programs would be natural places to test out the sufficiency of cognitive models. (My own jaundiced view is that the specifics of computational models have been more misleading than helpful in cognitive psychology, but I may be succumbing to the narcissism of small differences there.) Little of that, comparatively, has happened, and it seems to me that current trends in AI give no reason to expect that it will ever happen. If things continue on their current track, we may end up attaining human-level AI without gaining any significant insights into language, science, rationality, or human cognition.