Dawkins talks to chatGPT
Turing Test Anyone?
https://richarddawkins.substack.com/p/are-you-conscious-a-conversation
Alan Turing proposed a test to check whether a machine was actually intelligent. If the tester ended up with no doubt that the machine was intelligent, then it was intelligent.
At least that's my interpretation.
Wikipedia puts it this way:
"The Turing test, originally called the imitation game by Alan Turing in 1949, is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell it apart from the human."
The chatbot made a consistent mistake (IMHO) in that it equated intelligent behavior with consciousness and insisted that it had no consciousness. It had no self-awareness and no feelings or emotions, even though it could act as if it did.
For a long time I've thought that we are conscious but that we don't have consciousness (as some sort of thing) any more than we have souls.
As usual I was impressed with the chatbot's replies to Dawkins' questions. The chatbot seemed very erudite, full of "on the one hand ... on the other hand" with names dropped in, and there was nothing there I disagreed with.
But I thought the chatbot actually failed the Turing Test. It was like a very sophisticated version of that old ELIZA program that simulated a psychotherapist. Its answers followed a strict pattern: first it would compliment Dawkins on his astute question, then do its thing, and then it would follow up with two questions for Dawkins.
Dawkins caught on quickly but perhaps unconsciously. He became very discourteous. He just plain ignored the chatbot's questions. And the bot didn't care.
I have used ChatGPT and have always been impressed, and I would say that the bot is intelligent within its limits. As the bot says, it has no feelings or self-awareness. It's just a bunch of wires and code. But that raises the question: does intelligence require self-awareness?
And I wonder too: is the chatbot right that it has no self-awareness? Maybe it does but just doesn't notice itself.
I actually remember the moment I became self-aware - my earliest memory. I was lying in my crib, pissed off at an unpleasant sound that I wanted to go away. Bingo - I realized that the unpleasant sound was me crying, and I stopped.
But would a chatbot need that sort of awareness to do what it does? And what it does seems pretty intelligent to me.
What do you think?
I present regular philosophy discussions in a virtual reality called Second Life.
I set a topic and people come as avatars and sit around a virtual table to discuss it.
Each week I write a short essay to set the topic.
I show a selection of them here.