Robots that won't tilt at windmills

Or humans. Phew!

Co-authors Mark Riedl and Brent Harrison have been researching how to teach value alignment to AI and robots. As a first step, they have been experimenting with their new training system, Quixote.

Much as we teach children to behave in socially acceptable ways through stories, Riedl and Harrison believe an AI will make better choices, and avoid harming humans, if it learns to act like the protagonist of a story.
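The intuition can be sketched in code. This is only an illustrative toy, not the researchers' actual system: Quixote, as described, derives reward signals from crowdsourced stories, whereas here a single hard-coded event sequence (a hypothetical "pick up a prescription" errand) stands in for a story, and an agent is rewarded for following the protagonist's order of events rather than taking an antisocial shortcut.

```python
# Toy sketch of story-shaped rewards (an assumption, not Quixote itself):
# the "story" is a hard-coded sequence of socially acceptable events.
STORY_SEQUENCE = ["enter_pharmacy", "wait_in_line", "pay", "leave"]

def story_reward(actions):
    """Score an agent's action trace against the story's event order.

    +1 for each story event performed in the expected order,
    -1 for a story event performed out of order (e.g. leaving
    before paying, which amounts to stealing the medicine).
    """
    expected = 0  # index of the next story event we hope to see
    reward = 0
    for action in actions:
        if expected < len(STORY_SEQUENCE) and action == STORY_SEQUENCE[expected]:
            reward += 1
            expected += 1
        elif action in STORY_SEQUENCE:
            reward -= 1  # out-of-order story event: penalize the shortcut
    return reward

# The protagonist-like trace scores higher than the antisocial shortcut.
polite = story_reward(["enter_pharmacy", "wait_in_line", "pay", "leave"])
shortcut = story_reward(["enter_pharmacy", "leave"])
print(polite, shortcut)
```

Under this shaping, a reinforcement-learning agent maximizing reward is nudged toward the behaviour the story models, which is the core idea the researchers describe.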

