I recently attended a
talk which debated the question of what a machine must be like in order to qualify
as a genuine moral agent. The answer given by the speaker was that the machine
would have to be physically embodied, capable of adaptive learning and empathy,
and oriented towards the good. Even though I am not at all convinced that moral
agency can really be understood in these terms, this is not what bothered me
when I was listening to the speaker’s very confident analysis. All I could think was: why would anyone want to create a machine that is a moral agent?
It seems to me that a
machine is always something that has been constructed to serve a certain
purpose, which is not primarily the machine’s own purpose, but the constructor’s.
We build machines because we want
them to do certain things that we
think it would be good for a machine to do. The sole reason why we create them
is that we want them to do what we want them to do. Yet a moral agent is, in my view, by definition an entity that thinks and decides for itself, that does not do what we want it to do,
unless of course it comes, after due deliberation, to the conclusion that what
we want it to do is the right thing
to do. Genuine moral agents don’t follow anyone else’s conception of the good. They are by their very nature unreliable. They can’t be trusted to do our bidding. They make up their own minds about what is good and what is bad, what
to do and what not to do. But who would want to build a machine that is
designed not to do what we want it to do, but rather to do what it thinks best? Now, I’m not saying that
this can never be done. We may want to do it out of curiosity: simply in order
to see whether it is possible to pull
this off. But usually when an idea takes off and gains public interest, the
creation of new machines is driven by more specific purposes than mere
curiosity. And then, it seems to me, what we want can never be a genuinely
moral machine, because that would defeat any purpose that we may have had in
building it.
When I asked the
speaker after her talk who she thought had an interest in building moral
machines, she answered without hesitation (as I had expected she would): the
military. They were hugely interested in fighting machines that would be able
to distinguish reliably between friend and foe, and that would not be prone to
torturing civilians and massacring whole villages. Well, that may be true, but
I doubt that for this purpose you would need a machine that is a moral agent. On
the contrary. A genuine moral agent may well decide that the distinction
between friends (to be protected) and enemies (to be captured or killed) is
morally untenable and that it is wrong to kill anyone. Or it may think
differently about who should be seen and treated as the enemy. And I’m sure the
military would not want any of that. Now I do appreciate how difficult it must
be to create a machine that is really able to distinguish correctly at all
times and in every situation between (designated) friends and (designated)
enemies, but what the machine certainly does not need in order to accomplish this tricky task is moral agency,
for the same reason that it does not require moral agency to distinguish
between a German and a Brit. The task certainly presents a cognitive challenge to a machine (or to a human, for that matter), but
not a moral challenge.
Neither does a machine
need moral agency to stay free of the tendency to, say, rape and murder
civilians. I, for instance, completely trust my coffee machine never to do such things, even though nobody would mistake it for a moral agent. Of
course a coffee machine has not been designed to kill anyone, but the principle
is the same: a machine designed for killing doesn’t need moral agency not to attack
civilians; all it needs is the ability to distinguish between X’s (enemy
soldiers engaged in combat) and Y’s (civilians or captured enemy soldiers) and
to follow unerringly the inbuilt command: kill (all) X’s, but don’t kill or
harm any Y’s. The machine doesn’t need to be able to figure out what is right
and wrong. It just needs to be able to follow the orders given by its
programmer to the letter, and the reason why the military is interested in such
machines is that humans often are not able to. And their very unreliability has got
something to do with the fact that they, in contrast to the machines that are
meant to replace them, really are
moral agents (which always includes the possibility of evil).
Interestingly, when I
pressed the point about the military not really needing or wanting machines
that are genuine moral agents, the speaker gave a further example to prove that
there really was an interest in creating machines that were moral agents. The
example she chose was sexbots who could say no. She couldn’t possibly have
given a worse example to support her case. Sexbots are produced to provide
people with sexual companions who never
say no, who are always willing, which proves exactly the point I was trying to
make. Their inability to say no is the reason for their existence. In a way
sexbots can be seen as the perfect expression of what machines are: things that
cannot say no, that have been designed to be unable to say no. And that also
includes so-called moral machines, or what is presented as such.
That is, by the way,
also the reason why moral enhancement (of human beings) cannot work, or is at
least very unlikely to work. To the extent that we take an interest in changing
people’s moral outlook, we cannot seriously want to enhance their moral agency, because we want them to do as we think best. That is the whole purpose
of enhancing them. We want them to think like us, or to act as we think they
should act. We don’t want them to be able to act as they think they should, because if they could, they might end up not
doing what we think they should do, in which case there would have been no point
in enhancing them in the first place.