Talk

10 May 2023
(updated 20 Jun 2023)

London

Text

This is a slightly expanded version of the text of an invited short talk which I gave at the Science Museum in London, at an event which examined the future of EdTech. Links to resources related to the talk are at the bottom of this page.

There’s no doubt that interacting with the latest iteration of OpenAI’s GPT engine feels like engaging with an artificial intelligence (AI). If you ask GPT-4 why that is, it will tell you:
Interacting with ChatGPT feels like a real conversation because it is designed to understand natural language and generate responses that are contextually relevant and coherent. Additionally, its training on vast amounts of human-generated text allows it to mimic the nuances and complexities of human language.
[This is a direct quote from ChatGPT in response to the prompt: “Write two sentences, as part of a public talk about artificial intelligence, about why interacting with ChatGPT feels like a real conversation.”]
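
[For the technically curious: the exchange above can be reproduced programmatically. Below is a minimal sketch using OpenAI’s Python library as it stood at the time of this talk; the API key is a placeholder, and the snippet is an illustration rather than production code.]

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder – substitute your own key

    # Send the exact prompt quoted above to GPT-4 and print the reply.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": (
                "Write two sentences, as part of a public talk about "
                "artificial intelligence, about why interacting with "
                "ChatGPT feels like a real conversation."
            ),
        }],
    )
    print(response.choices[0].message.content)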

Some while ago, I asked ChatGPT to come up with some suggestions for the biochemical properties of a completely novel drug for reducing blood pressure in adult humans. I got back an answer that I reckoned was pretty plausibly written by a “not bad” A-level biologist/chemist, but no better. Fast forward to now, and GPT-4 is being used in drug discovery, actually proposing new compounds. So – with human help – GPT-4 is beginning to exhibit not just artificial intelligence but artificial general intelligence (AGI), that is to say:
possessing common sense and an effective ability to learn, reason and plan to meet complex information-processing challenges across a wide range of natural and abstract domains
(Nick Bostrom, Superintelligence, p4).

How near are we to achieving artificial general intelligence? As recently as 10 years ago, experts in the field gave answers to this question ranging from 2023 to 2075, with a median date for the arrival of AGI of 2040. The rise of GPT-4 suggests that some time this decade is not unreasonable. GPT-4 has stimulated vast investment – tens of billions of dollars – around the world, and that investment seems likely only to shorten the time it takes to achieve full and comprehensive AGI.

But that’s not the end of the story. For some time we’ve had machines that exceed the abilities of humans in certain domains – chess and crossword puzzles, for example. Within the narrow confines of a single domain, these machines are invincible, choosing each move on the basis of all the available information. The machine’s move may not be the best possible one, but you will be unable to find a better reply; whatever you choose, you will do worse. Given the machine’s goal of winning the game, unless you have access to some more powerful AI, you simply can’t win.

If we now generalise this ability across all domains we get a superintelligent machine – a machine that can beat a human, and indeed all humans, across the full range of cognitive tasks. There are three dimensions to this:

  1. speed (the ability to do everything that a human can do but much, much faster);
  2. multi-tasking (distributed systems which excel at tasks that can be broken down into a series of sub-tasks); and
  3. quality (related to ‘smartness’ – the ability to carry out certain tasks better than humans due to some cognitive powers that are superior for reasons not ascribable to speed or collective action. Machines may even develop some super-cognitive abilities which lie beyond the bounds of human capability, distinguishing them from humans in the same way that language distinguishes us from other primates).

Let’s now consider two important things about superintelligent machines, the first of which was pointed out by the mathematician Irving Good in 1965 (he used the term ultraintelligent machine):

  1. "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any human however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of humans would be left far behind. Thus the first ultraintelligent machine is the last invention that humans need ever make." (Quoted by Max Tegmark in Life 3.0, p4.)
    This point is called the singularity – the moment at which explosive, exponential growth of machine intelligence begins. Estimates for when this occurs range from two years after AGI (10% of experts) to 30 years after AGI (75% of experts). Some experts say that this point will never be reached.
  2. The second arises when a superintelligent machine develops agency – that is, the ability to define its own goals. The field of machine or robot ethics is currently grappling with the extent to which a superintelligence may be able to reason about its own goals. Steve Petersen (a philosopher at Niagara University) argues that, because it will be especially clear to a superintelligent machine that there are no sharp lines between one agent’s goals and another’s, its reasoning about goals could automatically be ethical in nature.

So here we are, at the point where it gets really interesting, because goals, within an ethical framework, are intrinsically associated with both rights and duties. The work that we did as part of the Science, Ethics and Education project at Southampton in the 1990s showed that deontological theories of ethics, based on the work of philosophers including Immanuel Kant and John Rawls, can provide powerful pedagogical tools for examining ethical dilemmas in the classroom.

Let’s have a look at some of the possible scenarios involving the ethical dimensions of life with superintelligent machines.

  1. Superintelligent machines may automate many jobs that are currently done by humans, leading to widespread unemployment. (This concern has been attached to new technologies since before the industrial revolution, but that doesn’t negate its inclusion here. It is what lies behind recent calls for a Universal Basic Income – perhaps on the basis that “machines do all the work”.)
  2. Superintelligent machines may gather vast amounts of data about individuals, leading to concerns about people’s rights to privacy and autonomy.
  3. Superintelligent machines may be biased and ascribe different rights to different groups of people.
    (The next two scenarios are pretty dystopian; they have been raised by scholars such as Nick Bostrom and Eliezer Yudkowsky.)
  4. Superintelligent machines may decide to manipulate or deceive humans (including the construction of “deepfakes”). This could come about if machines decide that they have the right to prioritise their own goals over those of humans.
  5. Superintelligent machines may decide that humans are a threat to their goals and act to eliminate us. Or they may decide that humans have no worth other than the atoms of which they are composed, and therefore no right to exist.

These scenarios lead to some questions of ethics:

  1. Can superintelligent machines be held responsible for their own actions?
  2. How do we ascribe duties to superintelligent machines in such a way that they respect the rights of human beings?
  3. Is it technically possible to incorporate a concept of justice (in the sense of Rawls’s moral theory) into superintelligent machines which applies to a social grouping of superintelligent machines and people?
  4. Can superintelligent machines have rights, including the right to own both intellectual and physical property?

It's tempting to think that we can simply build some kind of control mechanism into intelligent machines to prevent them straying outside a set of goals that we define. (Such rules are often referred to as "guardrails" – in this case they would be "ethical guardrails", i.e. some kind of moral code.) However, a careful comparison of the algorithms of natural selection between organisms (i.e. "evolution") and the algorithms used by machines as they learn shows that they are remarkably similar, effectively using "hill-climbing" and "gradient descent" to make choices which maximise utility and minimise costs. Natural selection drives an individual organism to propagate in order to pass on copies of its genetic material, yet this has not prevented humans (a product of natural selection) from devising ways of carrying on the behaviour that selection rewards while preventing propagation itself – the invention of artificial contraception. Why should intelligent machines be any different when subjected to similar selection processes?
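
[To make the parallel concrete, here is a toy sketch – invented purely for illustration – of the "hill-climbing" loop that both natural selection and much machine learning effectively run: propose a small random variation, and keep it only if it scores better. Gradient descent is the same idea with the random trial replaced by a step along the slope of the utility surface.]

    import random

    def hill_climb(utility, candidate, mutate, steps=10_000):
        # Generic hill-climbing: keep a random variation only if it improves
        # the score. Natural selection and many learning loops share this shape.
        best, best_score = candidate, utility(candidate)
        for _ in range(steps):
            trial = mutate(best)
            trial_score = utility(trial)
            if trial_score > best_score:  # "selection": the fitter variant survives
                best, best_score = trial, trial_score
        return best, best_score

    # Toy example: maximise a one-dimensional utility whose peak is at x = 3.
    peak_at_three = lambda x: -(x - 3.0) ** 2
    small_variation = lambda x: x + random.gauss(0, 0.1)

    x, u = hill_climb(peak_at_three, candidate=0.0, mutate=small_variation)
    print(f"best x = {x:.3f}, utility = {u:.5f}")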

Ethical principles form an important part of the school curriculum, not just in subjects such as religious studies (RS) but also in the sciences and the humanities. It’s vital that the ethical aspects of machine intelligence are included in this teaching, so that young people are properly prepared for the challenges that machine intelligence will present in the very near future.

To conclude, I’d like to share a short poem with you. It was written by Richard Brautigan, who was an American beat generation novelist, poet, and short story writer. It was published in a collection of his poems in 1967. The title of the poem lies in its last line.

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
     (right now please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
     (it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

Links to further information and discussion

  • Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark
    Extremely readable, the book offers a political and philosophical map of the promises and perils of the AI revolution. Instead of pushing any one agenda or prediction, Tegmark seeks to cover as much ground as possible, reviewing a wide variety of scenarios concerning the impact of AI on the job market, warfare and political systems (Guardian review).
  • Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
    A deep dive into the topic, with a little bit of maths (which you can skip). Bostrom writes in the introduction: “This has not been an easy book to write. I have tried to make it an easy book to read, but I don’t think I have quite succeeded … the target audience [is] an earlier time-slice of myself, and I tried to produce a book that I would have enjoyed reading. This could prove a narrow demographic.”
  • The Bankless Podcast 20 Feb 2023, featuring Eliezer Yudkowsky
    Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. In this podcast he takes a highly pessimistic view of the future of humans in a world where machine superintelligence exists.
  • The Lex Fridman Podcast #368, 30 Mar 2023, featuring Eliezer Yudkowsky
    A more in-depth interview with Yudkowsky – Dangers of AI and the End of Human Civilization (audio only).
  • The Bankless Podcast 17 Apr 2023, featuring Robin Hanson
    Hanson is a professor of economics at George Mason University and a research associate at the Future of Humanity Institute of Oxford University. He has a more positive view of the potential outcomes of superintelligent AI, and in this podcast provides a counterbalance to the arguments put forward by Eliezer Yudkowsky in the previous episode.
  • Genetic & Evolutionary Algorithms A.I. Wiki
    A Beginner’s Guide to Important Topics in AI, Machine Learning, and Deep Learning; this entry explores the parallels between biological evolution and machine learning.
  • The Role of GPT-4 in Drug Discovery by Andrew White
    Andrew White is the VP of AI at Vial, a Contract Research Organisation powered by technology, and an associate professor at the University of Rochester.
  • All Watched Over by Machines of Loving Grace, a poem by Richard Brautigan
    Read by the poet himself.
  • All Watched Over by Machines of Loving Grace, by Adam Curtis
    A series of three BBC films about how humans have been colonised by the machines we have built. Although we don't realise it, the way we see everything in the world today is through the eyes of the computers. The series argues this has affected a wide spectrum of human institutions and studies, from American economic theory, to environmental policy, and governmental philosophy.