I just finished David
Levy’s Love & Sex with Robots, which
was first published in 2007 (HarperCollins), and which interested me because of
my current research focus on posthumanism in general, and more specifically on
contemporary visions of our posthuman future. The book is fairly predictable
and its content can be summarised in one sentence: we will, very soon, love
robots and have sex with them, and that is absolutely fine and in fact to be welcomed
since it will solve a lot of relationship problems that we regularly suffer
from today.
As is customary among
those who believe in the power of man-made technology to eventually “achieve
all things possible” (F. Bacon), Levy seems to have no doubt that robots will
soon be able to think and feel just as we do, or most likely even better than
we do. “The robots of the mid-twenty-first century will (...) possess human-like
or superhuman-like consciousness and emotions.” (10) Will they really? How can
we possibly know that? Yet Levy has little patience for people who doubt his confident
assertions. He compares strong AI sceptics to those Christian fundamentalists
who refused to accept human evolution as a fact and to those who insisted
despite all evidence to the contrary that the earth was flat (21). And just as
Darwin and Galileo have been vindicated, Levy believes he will be too, once all
his predictions have come true. Except that in Levy’s case there
is not a shred of evidence that machines will soon be able to think and feel.
It is not a matter of ignoring the evidence. There is no evidence. All we have managed to achieve so far, and all we are
likely to achieve in the foreseeable future, is the creation of machines that
can appear to be conscious and to
possess certain emotions.
Levy spends many pages of his book providing evidence
that humans have a strong tendency to perceive and treat inanimate objects as
living, conscious agents even when they know
that they are not really conscious or alive. And people can fall in love with
the strangest things, even computers. But all that proves, if it proves
anything, is that we are easily duped. It may indeed turn out that once we are
able to build robots that are sufficiently convincing in their appearance and
behaviour we will find it very difficult not
to attribute consciousness to them when we interact with them. But that doesn’t
mean that they are conscious, or that
we are justified in attributing
consciousness to them.
However, Levy
disagrees. For him, the appearance of
consciousness is not only, for all practical
purposes, just as good as actual
consciousness (the pragmatist approach), but actually one and the same thing (the logical
behaviourist approach): the (behavioural) appearance
of consciousness is consciousness. “There
are those who doubt that we can reasonably ascribe feelings to robots, but if a
robot behaves as though it has feelings, can we reasonably argue that it does
not? If a robot’s artificial emotions prompt it to say things such as ‘I love
you,’ surely we should be willing to accept these statements at face value,
provided that the robot’s other behavior patterns back them up.” (11) But why
“surely”? It does, after all, seem to make sense to distinguish between someone
who merely says that they love us and
someone who really does. But of
course Levy’s point is that the only way we can judge whether someone really loves us is by analysing their behaviour. The fact that somebody
verbally declares their love for us might not be sufficient to attribute real
love to them, but if in addition they are always there for us, listen and talk
to us, look after us, always cover (and scratch) our backs, kiss and embrace
and caress us and have sex with us whenever we need or want it, then we would
be hard-pressed to deny that they really love us. If they do everything that we
can reasonably expect anybody who really loves us to do, then it is hard to see
what it can possibly mean to say that, despite all this, they don’t really love us. And if we cannot find a
real (meaningful) difference between a human person who loves somebody and one
who consistently and permanently behaves
as if they did, then why should there be such a difference when the one doing
the loving is not a human person, but a robot?
It seems to me, though,
that there is indeed an important difference between the two cases. If a human
being behaves in all respects consistently and constantly just as someone would
behave if they really loved us, then by far the best explanation for their behaviour is that they really do love us. It
just doesn’t seem possible that somebody who does not love us would always behave towards us in a manner
consistent with real love. We would expect them to show their lack of real love
in some way. It need not be something
obvious, and it need not be obvious to us,
but we would expect there to be something
that distinguishes the behaviour of the person who really loves us from the one
who only pretends to do so. It would be nothing short of a miracle if a
pretended lover were, throughout their life, to act exactly like a real one,
precisely because such behaviour would be entirely inexplicable.
This, however, is not
the case with robots. If they behave in all respects exactly like we would
expect someone to behave who really loved us, then we have a perfectly good
explanation for why they behave like that, namely that they have been designed that way. Levy claims that our
knowledge that robots have been designed to manipulate us into believing that
they really love us is irrelevant and should make no difference to us: “Even
though we know that a robot has been designed to express whatever feelings or
statements of love we witness from it, that is surely no justification for
denying that those feelings exist, no matter what the robot is made of or what
we might know about how it was designed and built.” (12)
“Surely” (again)? I
think it makes all the difference.
Again, we might be tricked into believing that robots truly love us, but that
doesn’t mean that they do. And while it might make no sense to distinguish
between the real and the merely apparent when it comes to human behaviour that
is consistent with the actual presence of a certain emotional disposition
(simply because we would not be able to plausibly explain such behaviour otherwise), the fact that we know the other to be a robot, that is, a machine designed to
behave as if they loved us, is, by
providing a perfectly good explanation for such behaviour, sufficient to
justify our refusal to believe that they really do.