By ai-depot | July 9, 2003
Any particular test of intelligence is destined to be biased towards certain abilities. This article proposes a new method of establishing sentience by means of randomness instead.
Written by Marco van de Wijdeven.
The Random Test
This essay is about how one could determine sentience (the quality or state of being sentient; consciousness) in an AI by means of an experimental method. The best-known test of whether an AI is actually sentient is the one devised by Alan Turing, the Turing Test. It rests on a simple assumption: an AI can be called sentient if it is able to converse with a human and the human cannot tell whether he is talking to an AI or to another human. In this article I will discuss the Turing Test, why it is flawed in some respects, and how a better test can be built on a different hypothesis. I will then describe a possible test to determine sentience based on that new hypothesis.
The Turing Test fails to achieve its goal for various reasons. The first problem is the assumption on which it is based. Sentience is defined by the capacity for self-reflection: "I think, therefore I am." The test assumes that in order to reflect on your own actions you need a certain quality of language, so that you can name and understand the abstract construct that is self-reflection. This works because sentience and self-reflection have a one-to-one relation. However, measuring self-reflection by looking at language level is debatable. For all we know, dolphins might have self-reflection, yet their language, while extensive, is not on the same level as a human's.
The second problem is that either of the two possible results (pass or fail) can have two causes. If the AI passes, is this because of sentience in the AI, or because of a lack of skill in the human judge? The same goes if the AI fails: is the AI really not sentient, or was the human unsure and simply guessing? The test is run with multiple human judges, so the effect of this problem is only minor. Yet the human factor still makes the test uncertain and open to much debate.
The test fails not only because of the human factor but also because of its method of measurement. We therefore need to move away from the "sentience equals self-reflection" argument and try to find another means of determining sentience. I believe I have found one in the concept of "randomness".
Randomness as a sentience measurement tool
The assumption I make here is that only a sentient being is capable of understanding the concept of randomness (without a governing design, method, or purpose; unsystematically). This gives the same one-to-one relation mentioned above. I think I can make this assumption for two reasons. The first reason is that the deterministic nature of the computer does not allow for the concept of randomness. Every programmer knows that obtaining a number that is truly random, rather than derived from some basic input, is impossible. Only an AI that surpasses its basic programming and achieves a higher level of being would be able to make a truly random choice.
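The determinism the first reason points to is easy to demonstrate. In the sketch below (a minimal illustration, not part of the proposal itself), two pseudo-random generators seeded with the same value produce exactly the same "random" sequence, because the output is fully determined by the seed:

```python
import random

# Two generators given identical seeds: their "random" output is
# fully determined by that seed, not spontaneous in any sense.
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]

print(seq_a == seq_b)  # the two sequences match exactly
```

Any program whose only source of variation is such a generator can, in principle, be replayed and predicted.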
The second reason is that the concept of randomness cannot be acquired through learning, training, conditioning, or programming. Hence there is no way to produce truly random results by using a trick or pre-established knowledge. The only way to make a truly random choice is to base the decision on nothing at all.
My definition of non-sentient intelligence is as follows:
The ability to perform one or more different action(s) in response to determined stimuli without interference from a third party.
My definition of sentient intelligence is:
The ability to perform one or more different action(s) without interference from a third party or any form of determined stimuli.
In short, sentience is the ability to do something for no reason at all, which makes the concept of randomness the perfect way to measure it.
The Random Test
Because the AI to be tested for sentience is still a computer, this part is relatively easy. All we have to do is present the AI with a selection of X numbers and have it pick out a number Y times. This yields a string of numbers as output. If the AI uses a preprogrammed random() method, then at some point the string of numbers will start to repeat itself.
More shrewd random() methods will require a very high Y before the repetition becomes clear. It does not even have to be the whole string that repeats; it may be certain key numbers that keep showing up at regular intervals. The real test is to discover a pattern. If a pattern is found (beyond the uniform distribution expected of random picks), that is proof that the randomizing is hard-coded rather than spontaneous. The test needs to be run multiple times with varying X. If a pattern fails to show up in one test, that does not mean none is there; extra tests with other values of X might reveal a pattern more easily. The tests can also be compared to each other, which might reveal the pattern almost immediately.
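One concrete pattern to hunt for is a repeating cycle in the output string. The sketch below is a minimal illustration of that idea; `find_period` and `tiny_lcg` are hypothetical names, and the toy generator is deliberately weak so its cycle shows up in a short output:

```python
def find_period(seq):
    """Return the smallest p such that seq repeats with period p, or None."""
    n = len(seq)
    for p in range(1, n // 2 + 1):
        if all(seq[i] == seq[i + p] for i in range(n - p)):
            return p
    return None

def tiny_lcg(seed, count, m=16, a=5, c=3):
    """A toy linear congruential generator with a tiny state space,
    so its repeating cycle is obvious even for small Y."""
    out, x = [], seed
    for _ in range(count):
        x = (a * x + c) % m
        out.append(x)
    return out

print(find_period(tiny_lcg(seed=1, count=64)))  # → 16: the cycle is exposed
```

A real candidate would use a far stronger generator, which is exactly why the essay calls for large Y and repeated runs with varying X.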
Of course, there is always the possibility that an AI is in fact sentient, gets bored with the task, and stops completely. Yet stopping the test automatically counts as a failure, because stopping is very easy to mimic in an ordinary program. Fortunately, the chance of this occurring before a possible pattern is determined is, in my opinion, slim. After a result is determined, a human programmer can still examine the program's code. This is extra insurance that a negative result is actually negative. It cannot be used to confirm a positive result, because nobody knows what the code of a sentient AI would look like. Examining the code is therefore not part of the test, nor should it be.
The big advantage the Random Test has over the Turing Test is that it minimizes human involvement. A simple program can be constructed to check a candidate's output for specific patterns. Nor can a result mean two different things: if a repeating pattern is found, the AI is not sentient; if not, a breakthrough has been achieved.
-Marco “Ashiran” van de Wijdeven