Tuesday, 12 November 2019

The Moral Robot

a post by Chris Horner for 3 Quarks Daily



The question of how to program AI to behave morally has exercised a lot of people for a long time.

Most famous, perhaps, are the three rules for robots that Isaac Asimov introduced in his SF stories:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law;
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These have been discussed, amended, extended and criticised at great length ever since Asimov published them in 1942, and with the current interest in ‘intelligent AI’ it seems they will remain a subject of debate for some time to come. But I think the difficulties of coming up with effective rules of this kind are more interesting for what they tell us about the difficulties of any rule- or duty-based morality for humans than they are for the question of ‘AI morality’.
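
The priority structure of the laws themselves is easy enough to state, and even to code. Below is a minimal sketch, not from Horner's post and written in Python purely for illustration: the boolean labels on each candidate action (harms_human, allows_harm, disobeys_order, endangers_self) are hypothetical placeholders, and supplying them is exactly where the difficulty lies, since deciding what counts as injury, harm, or an order is the whole problem.

```python
# Hypothetical sketch of Asimov's Three Laws as a priority-ordered check.
# None of these names come from the post; they exist only to make the point
# that the ordering is trivial to encode while the judgements are not.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool = False      # would the action injure a human being?
    allows_harm: bool = False      # would inaction allow a human to come to harm?
    disobeys_order: bool = False   # does it conflict with an order from a human?
    endangers_self: bool = False   # does it threaten the robot's own existence?


def permitted(action: Action) -> bool:
    """A crude filter: reject an action if it breaks a higher-priority law.

    Even this flattens the laws' 'except where...' clauses, which really
    require comparing actions against one another rather than vetting them
    one at a time.
    """
    # First Law: no injury to a human, by action or by inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders.
    if action.disobeys_order:
        return False
    # Third Law: self-preservation.
    if action.endangers_self:
        return False
    return True


# The ordering is the easy part; all the contested judgement hides in the flags.
print(permitted(Action("fetch coffee")))                                      # True
print(permitted(Action("stand by while a human drowns", allows_harm=True)))   # False
```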


