Technical Error Correction Collective

Claim: GPT-3 is Conscious


Generative Pre-trained Transformer 3 (or “GPT-3”) is the most advanced language prediction model available as of this writing, and it can generate text that is almost impossible to distinguish from what a human would write.
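For concreteness, this is roughly how such text generation is invoked in practice: a minimal sketch assuming the OpenAI Python client as it existed around GPT-3’s release. The engine name, prompt, and parameters below are illustrative placeholders, and the library’s interface has changed in later versions.

    import openai

    # Assumes an API key issued by OpenAI; the key below is a placeholder.
    openai.api_key = "YOUR_API_KEY"

    # Ask the model to continue a prompt; "davinci" was the largest GPT-3 engine at the time.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Write a short paragraph about ducks and windows.",
        max_tokens=100,     # how much text to generate
        temperature=0.7,    # adds randomness, so repeated calls differ
    )

    # The model returns a continuation of the prompt, predicted one token at a time.
    print(response.choices[0].text)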

This has prompted questions about whether GPT-3 can be considered conscious, joining a long philosophical debate about computers and consciousness, and a broader and even longer one about what consciousness actually is (as hard as it may be to believe, there is no scientific or philosophical consensus even on this crucial question). Of course, we can define words like “consciousness” any way we like (for example: “consciousness is being able to generate text that is indistinguishable from what a human would write”), and then announce to the world that GPT-3 is “conscious” (in that particular sense), but that’s like saying GPT-3 is “green” because we decided to use the word “green” for anything whose name starts with the letter “G”. It is neither informative nor helpful.

This “looks like a duck, quacks like a duck” approach is the crux of the Turing Test. Proposed in 1950 by computer scientist Alan Turing, it boils down to this: if a human judge cannot tell a computer from a human in a conversation conducted by passing written notes, that is close enough to answering the question “can this computer think?” It is admittedly a rather brilliant thought experiment; sadly, it does not answer whether computers can think, nor whether GPT-3 is, in fact, conscious (or a duck).

Imagine a Turing Test conducted in Chinese. A human judge sends in questions in Chinese and receives responses in Chinese. But on the other end there is a human test subject with a long and detailed rulebook specifying which Chinese characters to respond with, depending on the Chinese characters received.

With the help of the rulebook, a person who does not know a word of Chinese could respond in Chinese to messages in Chinese well enough to convince the judge on the other side that they speak it fluently. But it would still not mean that the person inside had suddenly learned Chinese, nor that the whole “system” (the person, the rulebook, and the means of communication) “understands” Chinese.

This “Chinese Room” thought experiment (described in a 1980 paper by philosopher John Searle) is an eerily good analogy for what GPT-3 does: there is input, output, and a rulebook (the result of training the model on a huge volume of human-written text) that determines what output to generate for the input given.
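To make the structure of that analogy concrete (and only the structure: GPT-3’s “rulebook” is a set of learned statistical weights, not a literal lookup table), here is a deliberately trivial rulebook responder. The messages and replies are invented for the example; the program matches symbols it does not understand in any sense, yet produces sensible-looking answers for the inputs it covers.

    # A toy "Chinese Room": a fixed rulebook mapping incoming messages to replies.
    # The program has no idea what any of these strings mean; it only matches symbols.
    RULEBOOK = {
        "你好": "你好！很高兴认识你。",            # "Hello" -> "Hello! Nice to meet you."
        "你会说中文吗": "会，我说得很流利。",        # "Do you speak Chinese?" -> "Yes, fluently."
        "今天天气怎么样": "今天天气很好，阳光明媚。",  # "How is the weather today?" -> "Sunny and pleasant."
    }

    def respond(message: str) -> str:
        # Look the incoming symbols up in the rulebook; fall back to a stock reply.
        return RULEBOOK.get(message, "对不起，请再说一遍。")  # "Sorry, please say that again."

    for note in ["你好", "你会说中文吗", "今天天气怎么样"]:
        print(note, "->", respond(note))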

It’s hard to deny that GPT-3 and similar language prediction models have some form of memory, a capacity to learn in a certain important sense, and perhaps even an ability to anticipate in a limited way (that’s the “prediction” part of “language prediction”). These are some, but by no means all, of the characteristics we would expect in a conscious system, and ascribing any meaningful form of subjective experience or awareness to GPT-3 remains a stretch, to say the least.
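To show what “prediction” means here in the most stripped-down form possible, the sketch below “learns” from a scrap of text by counting which word follows which, then continues a prompt by repeatedly appending the most frequently observed next word. GPT-3 does something vastly more sophisticated (a neural network with billions of parameters producing a probability distribution over tokens), but the basic loop of predict, append, repeat is the same. The training text is made up for the example.

    from collections import Counter, defaultdict

    # "Train" a toy next-word predictor by counting which word follows which.
    corpus = "the duck quacks like a duck and the duck looks like a duck".split()

    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def continue_text(prompt: str, length: int = 5) -> str:
        words = prompt.split()
        for _ in range(length):
            candidates = follows.get(words[-1])
            if not candidates:  # nothing was ever observed after this word; stop
                break
            # Take the most frequent next word. GPT-3 instead samples from a
            # learned probability distribution over tens of thousands of tokens.
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("the duck"))  # e.g. "the duck quacks like a duck quacks"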

In short, saying that GPT-3 is conscious because it can generate text that looks human-written is like saying a large-screen TV is a window because it can show images of the great outdoors.

However, while jumping to sensationalist conclusions is not very useful, asking these questions very definitely is. They do help us probe our understanding of what consciousness is, and perhaps understand ourselves a little bit better. In the end, it is not really out of the question that consciousness could emerge in a human-made object.

It also raises potentially important ethical questions, like: if GPT-3 were conscious, what would it mean that Microsoft now holds an exclusive license to it?