An autonomous system is a software agent capable of performing what appear to be actions, with what appears to be a significant degree of independence and autonomy. Autonomous systems, at least at present, are not genuine moral agents: they do not themselves have moral obligations or permissions. But the programming of an autonomous system is uncontroversially subject to moral evaluation. This talk will consider the general question of whether the morality of programming an autonomous system to behave in a certain way in a certain situation is reducible to the morality of a human actor, in its place, behaving in that same way in that same situation. It will consider three ways in which this reduction might fail, based on programmers’ ignorance of whom their acts affect, on the difference between performing one and multiple instances of the same act, and on the extrinsic effects that can arise from the visibility of programs to others.
HAI-EIS Postdoctoral Fellow