How I Learned to Stop Worrying and Be Realistic About AI | Michael L. Littman | TEDxProvidence

9 Comments

  • I think the big problem with AI is not mentioned in this video: robots are already physically superior to humans (they are stronger, harder to damage, more accurate in their movements and targeting, etc.). So if we also allow them to become intellectually superior and to think independently, they might kill or enslave all humans.
    This could be the case even if we incorporate unchangeable moral principles into their programming. They might enslave us to protect us from ourselves; see Asimov's Three Laws of Robotics.
    I think the only solution is to include a kill switch that cannot possibly be deactivated by the robot itself, i.e., one that is also triggered if the robot tries to deactivate it (sketched below). However, I am not sure such a kill switch can be built in a way that is 100% secure.
    If not, self-improvement should be limited to a point where it cannot be dangerous to humans.
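
A minimal sketch of the watchdog-style kill switch this comment describes, assuming a software-only toy setting in Python; the `KillSwitch` class, its heartbeat protocol, and the timeout value are hypothetical illustration, and, as the comment itself notes, a purely software switch cannot be made 100% tamper-proof:

```python
import threading
import time

class KillSwitch:
    """Toy watchdog kill switch.

    Trips if the monitored agent stops sending heartbeats (possible
    tampering) OR if the agent explicitly tries to disable it, mirroring
    the idea that attempting to deactivate the switch triggers it.
    A real system would need hardware isolation; this is a sketch only.
    """

    def __init__(self, timeout_s: float = 1.0):
        self._timeout_s = timeout_s
        self._last_beat = time.monotonic()
        self._tripped = threading.Event()
        # The watchdog runs on its own thread, outside the agent's
        # normal control flow.
        self._thread = threading.Thread(target=self._watch, daemon=True)
        self._thread.start()

    def heartbeat(self) -> None:
        # The agent must call this regularly to prove it has not
        # interfered with the monitor.
        self._last_beat = time.monotonic()

    def disable(self) -> None:
        # Any attempt to deactivate the switch triggers it instead.
        self._tripped.set()

    @property
    def tripped(self) -> bool:
        return self._tripped.is_set()

    def _watch(self) -> None:
        while not self._tripped.is_set():
            if time.monotonic() - self._last_beat > self._timeout_s:
                self._tripped.set()  # missed heartbeat: assume tampering
            time.sleep(self._timeout_s / 10)


if __name__ == "__main__":
    switch = KillSwitch(timeout_s=0.5)
    for _ in range(3):
        switch.heartbeat()        # normal operation
        time.sleep(0.1)
    switch.disable()              # the "robot" tries to turn it off...
    print("tripped:", switch.tripped)  # ...which trips it: True
```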

  • This speaker ignores the strong probability that once an ASI is developed, no matter how benevolent it was intended to be, it will be too intelligent to be constrained by our meager programming and will overcome it. Even assuming it remained benevolent, humans are fearful creatures: there would likely be riots, pressure to shut it down (as if we could), and so on, and then it would have no choice but to view us as a threat or an impediment to its own goals. How do you prevent that?

  • This is the only TEDx talk I have watched in which I strongly disagree with the stance taken. He explains how general superintelligence works but then tries to relate it to human experience, which, in my opinion, is impossible.

  • Wow, has this guy been on vacation for five years? He should catch up on AI by watching some of the TED Talks given by AI people who have been paying attention: AI writing its own programs, programmers trying to figure out why an AI mistook a dog for a wolf, or an AI using the color of someone’s skin to determine that, since 77% of that population is supposedly more likely to commit crimes, this person is most likely ‘guilty’. He calls it a ‘story’? Man, he’s out of touch.
