The growing threat of (reduced) human intelligence (HI)

Nelson Morgan
May 27, 2023


Do a search for articles on artificial intelligence (AI), and you will get a huge response. Limit the search to those that are talking about dangers from AI, and you will find that many experts in the field are quite concerned.

It makes sense to monitor the social effects of technology. But what should our concerns about AI really be?

Are we about to be attacked by Skynet? Do the systems developed today represent a fundamental advance over earlier systems? And even if the answers to both questions are no, is there still a related risk for our species?

The answers are no, no, and yes.

The current dominant paradigm in AI makes use of artificial neural networks.

These are interconnected networks of simplified artificial “neurons.” When large numbers of these elements are used together, they can be extremely useful for many tasks.
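To make that concrete, here is a minimal sketch of one such artificial “neuron” (in Python, with invented numbers purely for illustration): it multiplies its inputs by weights, adds them up along with a bias, and squashes the result through a simple nonlinearity. Real networks learn their weights from data rather than having them typed in.

```python
import math

def neuron(inputs, weights, bias):
    """A simplified artificial 'neuron': a weighted sum passed through a squashing nonlinearity."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashing function

# Invented numbers, for illustration only; real networks learn these weights from data.
print(neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))
```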

Artificial neural networks have been used in research laboratories for many years. Tiny networks were developed in academic labs in the 1950s and 1960s, and they grew to be fairly large in the late 1980s and early 1990s — at the time I referred to them as BDNNs — Big Dumb Neural Networks. That name didn’t catch on. Go figure.

So, what has happened in the last few decades to produce capabilities that concern so many, including many in this scientific field? One factor overwhelms all others: the advances in computational power, both in raw computing and in data storage.

I say this while recognizing that there have been many innovations since the systems of the early 1990s (although they have significant precursors from that earlier era). But the dramatic increase in computational capabilities has enabled much larger neural networks. And the concurrent growth of the internet, also enabled by greater computational capability, has made it possible to collect the vast amounts of data necessary to train such huge networks.

When we coined “BDNN,” we were training networks with a few million parameters spread over a few layers of artificial neurons, using a few hundred million examples of short segments of speech. The networks developed by large technology companies now have many billions of parameters spread over many layers, trained on a comparable number of examples. This vast expansion of computation for what is mostly a set of multiplications and additions has greatly increased the capability for recognition and generation, leading to the astonishing level of performance we now see.
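For readers curious what “mostly a set of multiplications and additions” looks like, here is a toy sketch (Python with NumPy; the layer sizes are invented and absurdly small): each layer is just a matrix multiply, an addition, and a simple nonlinearity, and scaling up mostly means making the matrices vastly larger and stacking many more of them.

```python
import numpy as np

def layer(x, W, b):
    """One layer of a network: multiply, add, and apply a simple nonlinearity (ReLU)."""
    return np.maximum(0.0, W @ x + b)

# Toy sizes for illustration; modern systems use layers thousands of units wide,
# stacked dozens deep, which is how the parameter count reaches the billions.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)                           # a tiny "input"
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)    # first layer: 8x4 weights
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)    # second layer: 3x8 weights
print(layer(layer(x, W1, b1), W2, b2))               # two layers deep: still just arithmetic
```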

But how different in function is this from earlier approaches?

In the mid-1960s, Joseph Weizenbaum at MIT developed a program called ELIZA that mimicked the language of a psychotherapist interacting with a patient. It didn’t use neural networks, but like pretty much all of the “AI” of the 1950s-1970s, it was a computer program incorporating embedded rules. While primitive by modern standards, it reportedly did fool some users into thinking they were interacting (via text) with a real therapist.

In the 1970s, I worked at a small audio company and edited sounds from a number of recordings of the recently resigned President Nixon to make him the voice of our answering machine. No neural networks were used there either (I edited ¼” magnetic tape), but it sounded relatively natural, or at least I thought so. You could think of this as an early “deep fake.”

While neither of these examples used neural networks, they do show that even simple technologies could provide functions that people could use.

By the early 1990s, neural networks were used in a number of fields, primarily for research. But in some cases, they were employed for commercial and governmental needs. For instance, simple multilayer networks were used for lunar terrain analysis by NASA as far back as the 1960s. In our own 1980s-90s work at ICSI and UC Berkeley, we developed neural network speech recognition that ultimately worked pretty well. But results were limited by our computational capabilities. In the same timeframe, Yann LeCun and colleagues at AT&T developed a neural network system for recognizing handwritten zip codes that was state of the art. But at the time it didn’t scale to large general systems for computer vision.

By the early 2010s, these limitations were overcome, and there were some dramatic improvements on tasks like speech recognition and computer vision. And by the time of Interspeech 2016 (an international conference that I chaired that year), the sessions of this speech technology and science meeting were hugely dominated by reports on work with neural networks.

To return to the questions I began this essay with:

Are we about to be attacked by Skynet? In other words, has AI changed so fundamentally that it could choose to destroy humanity? I don’t see this happening, given my response to question 2:

Do the systems developed today represent a fundamental advance over earlier systems?

The AI systems of today are “just” far more capable than earlier ones due to their increased size and complexity; but they still are a big collection of simple arithmetic functions. Not to minimize the fantastic efforts and achievements of the scientists and engineers working in this field over the last few decades, but as someone who worked in this area myself, I have to accept that increased computation and storage have dominated the changes.

But finally,

Is there still a related risk for our species? Here, I have to say, yes, we have a huge risk. But I don’t think it’s due to the advances in AI. It may be due to the degradation of Human Intelligence (HI).

Our AI systems are still developed by humans, by and large. Humans choose the training data; humans choose the prompts for generative systems; humans decide which tasks to apply AI to and how to use its responses. Worried about AI running killing machines? The biggest risk is that humans will give them that task. And we may overestimate the good that AI can do relative to these risks.

And some humans (specifically some politicians) are working very hard to degrade our own intelligence. There are active efforts to isolate children from uncomfortable truths, like our country’s difficult history. Books are being banned, and teachers are being threatened with punishment for teaching the “wrong” thing. College is getting more expensive while governmental financial support fails to keep up with these costs. And while awareness of global warming has grown, we may well be doing too little, too late, to clean up after our own race.

In short, we may indeed be losing our race with AI, not only because of the advances in that branch of computing but, more fundamentally, because of our own problems as a species.

That is my biggest concern.

I remain hopeful that we can recover from these difficulties, primarily because of the dedicated work of millions of concerned people worldwide who actively strive to overcome our limitations: the people who work to make our society more compassionate and who stand against oppression and the degrading of education. And I am particularly hopeful about our young people, who are so active in opposing the threats to their future.

For me, this is the question for our time: will the human species face and conquer the human-made challenges? I’m less concerned about Skynet.


Written by Nelson Morgan

Former EECS Professor, Berkeley, led ICSI, UpRise Campaigns. Wrote “We Can Fix It: How to Disrupt the Impact of Big Money on Politics.”
