brain recognises these humanlike attributes and almost immediately our biological system responds. Of course, these are not “authentic” emotions or humanlike intentions but imitated ones; nevertheless, they influence us humans at an unconscious level. There is also an interesting phenomenon related to this effect, called “the uncanny valley”: if a robot looks too much like a human, the positive effects I have just outlined are eliminated and people show aversion to the robot. A possible explanation is that a robot that is almost indistinguishable from a human poses an existential threat and activates “us versus them” thinking.
You refer in your book to Alan Turing’s test of intelligence, in which a system’s intelligence is evaluated by its behaviour, regardless of the internal processes that may be going on invisibly inside it. In simple terms, if the system’s behaviour is indistinguishable from that of a human, it may be said to be “intelligent”. But, as you have observed, although such behaviour may look “real”, it is only an imitation that we know is not backed by understanding. Is it enough for a decision-making system to imitate understanding? At what point would such a system be found lacking?
Turing’s ideas are still important today. They were developed at a time when no attention was paid to how people were thinking, only to how they were acting. In other words, the mind was considered a black box, and behaviour alone was taken as a good indicator of what people were doing and why (i.e. the behaviourism paradigm). So, in the Turing test, the idea was that a human communicated with a computer in another room. If the human communicator could not tell whether a computer or a human was in the other room, then the machine was able – in line with behaviourism – to act as humans do and was thus considered intelligent. As we now know, imitation can do a good job and manage tasks as well as – or even better than – humans do. However, there are, of course, limits to this.
First of all, people may accept advice from a computer and work with those guidelines, but as soon as they find out that the source of the advice is a computer, it only takes one bad piece of advice for them to discount the machine entirely. A human who provides advice, on the other hand, is not evaluated as negatively after a first failure. This example makes it clear that humans prefer the “real thing” to deliver advice and help in decision-making. Second, in situations where people feel more personally involved, or feel they are under scrutiny and being evaluated for something that matters to them, they do not so easily accept technology that imitates a human very well. For example, when being evaluated for a bonus, a promotion or a new job altogether, people most often prefer another human to make the decision, not a machine. Our own research shows that when algorithms are used to evaluate people in terms of who they are, which is the case when being considered for a job, people show dislike towards an automated decision-maker (algorithm aversion). One reason for this is that AI is a machine, and people believe that machines do not have the ability and empathy to know what it means to be a human. And, for that reason, people consider it