Thursday, 2 January 2020

AI is dangerous, but not for the reasons you think.

a post by Gary Smith for the OUP blog

“Lumberjack Adventures” by Abby Savage. CC0 via Unsplash.

In 1997, Deep Blue defeated Garry Kasparov, the reigning world chess champion. In 2011, Watson defeated Ken Jennings and Brad Rutter, the world’s best Jeopardy players. In 2017, AlphaGo defeated Ke Jie, the world’s top-ranked Go player. Later that year, DeepMind unleashed AlphaZero, which trounced the world-champion computer programs at chess, Go, and shogi.

If humans are no longer worthy opponents, then perhaps computers have moved so far beyond our intelligence that we should rely on them to make our important decisions. Nope.

Despite their freakish skill at board games, computer algorithms do not possess anything resembling human wisdom, common sense, or critical thinking. Deciding whether to accept a job offer, sell a stock, or buy a house is very different from recognising that moving a bishop three spaces will checkmate an opponent. That is why it is perilous to trust computer programs we don’t understand to make decisions for us.

Consider the challenges identified by Stanford computer science professor Terry Winograd, which have come to be known as Winograd schemas. For example, what does the word “it” refer to in this sentence?

I can’t cut that tree down with that axe; it is too [thick/small].

If the bracketed word is “thick,” then “it” refers to the tree; if the bracketed word is “small,” then “it” refers to the axe. Humans understand sentences like these immediately, but they are very difficult for computers, which lack the real-world experience needed to place words in context.
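To make the structure concrete, here is a minimal Python sketch (my illustration, not from the article; the class and function names are hypothetical) that encodes the tree/axe schema as data. Swapping the special word flips the correct referent, so a system with no real-world knowledge can do no better than a coin flip:

    from dataclasses import dataclass
    import random

    @dataclass
    class WinogradSchema:
        """One Winograd schema: a sentence template whose pronoun
        referent flips depending on which special word is inserted."""
        template: str         # sentence with {} where the special word goes
        pronoun: str          # the ambiguous pronoun
        candidates: tuple     # the two possible referents
        answers: dict         # special word -> correct referent

    # The tree/axe example discussed above.
    schema = WinogradSchema(
        template="I can't cut that tree down with that axe; it is too {}.",
        pronoun="it",
        candidates=("tree", "axe"),
        answers={"thick": "tree", "small": "axe"},
    )

    def random_guesser(schema, word):
        """A baseline with no real-world knowledge: pick a referent
        at random. Over many schemas it scores about 50%, which is
        why Winograd schemas are a meaningful test of common sense."""
        return random.choice(schema.candidates)

    for word, correct in schema.answers.items():
        sentence = schema.template.format(word)
        guess = random_guesser(schema, word)
        print(f"{sentence}  ->  guess: {guess}, correct: {correct}")

The point of the data structure is that nothing in the sentence’s grammar distinguishes the two readings; only knowledge of trees and axes does.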

To paraphrase Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence: how can machines take over the world when they can’t even figure out what “it” refers to in a simple sentence?


Labels: AI, artificial_intelligence, real-world_intelligence

