The question of how to program AI to behave morally has exercised a great many people for a long time.
Most famous, perhaps, are the Three Laws of Robotics that Isaac Asimov introduced in his SF stories:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
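Read literally, the Laws are a strict priority ordering over constraints, and it is tempting to imagine coding them up directly. The toy Python sketch below does exactly that; it is purely illustrative, and every name in it (`Action`, `permitted`, the boolean flags) is hypothetical rather than anything Asimov or anyone else has actually implemented.

```python
from dataclasses import dataclass

# Toy model of an "action", with flags for the consequences the robot is
# assumed to already know. This is where all the real difficulty is hidden.
@dataclass
class Action:
    description: str
    injures_human: bool = False          # direct harm (First Law)
    inaction_harms_human: bool = False   # harm through inaction (First Law)
    ordered_by_human: bool = False       # relevant to the Second Law
    destroys_robot: bool = False         # relevant to the Third Law

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws, in priority order."""
    # First Law: never injure a human, or allow harm through inaction.
    if action.injures_human or action.inaction_harms_human:
        return False
    # Second Law: an order from a human is honoured, now that the First
    # Law has already ruled out harmful actions.
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid actions that destroy the robot itself.
    return not action.destroys_robot

# An order to harm someone is refused; an order to fetch coffee is not.
print(permitted(Action("push a person", injures_human=True, ordered_by_human=True)))  # False
print(permitted(Action("fetch coffee", ordered_by_human=True)))                       # True
```

Even this caricature makes the problem visible: the code simply assumes someone has already decided whether an action "injures a human being", and that judgement is precisely the part that cannot be reduced to a boolean flag.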