The Crayon Blog

Who's responsible if artificial intelligence becomes a threat?

Industry Articles | Published February 25, 2015 | Tejeswini Kashyappan

Will artificial intelligence be a threat to human existence? Some leading technologists and scientists, including Elon Musk and Stephen Hawking, believe that building artificial intelligence in humanity’s image will make it dangerous. Theorist Benjamin H. Bratton, however, argues that artificial intelligence will be a threat not because it is clever and evil, but because we are stupid.
“If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits,” he wrote in the New York Times.
Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for. This anthropocentric fallacy may contradict the implications of contemporary A.I. research, but it is still a prism through which much of our culture views an encounter with advanced synthetic cognition.
The little boy robot in Steven Spielberg’s 2001 film “A.I. Artificial Intelligence” wants to be a real boy with all his little metal heart, while Skynet in the “Terminator” movies is obsessed with the genocide of humans. We automatically presume that the Monoliths in Stanley Kubrick and Arthur C. Clarke’s 1968 film, “2001: A Space Odyssey,” want to talk to the human protagonist Dave, and not to his spaceship’s A.I., HAL 9000.
“I argue that we should abandon the conceit that a ‘true’ Artificial Intelligence must care deeply about humanity — us specifically — as its focus and motivation. Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all,” he said.
Benjamin H. Bratton is an associate professor of visual arts at the University of California, San Diego. Read his complete article in The New York Times.
