Melanie Mitchell

Author · 10.0/10 (1 person) · 1 read · 0 likes · 13 views
Name: Melanie Mitchell
Title: Prof. Dr., Author
Melanie Mitchell is a professor of computer science at Portland State University. She has worked at the Santa Fe Institute and Los Alamos National Laboratory. Her major work has been in the areas of analogical reasoning, complex systems, genetic algorithms and cellular automata, and her publications in those fields are frequently cited.

She received her PhD in 1990 from the University of Michigan under Douglas Hofstadter and John Holland, for which she developed the Copycat cognitive architecture. She is the author of "Analogy-Making as Perception", essentially a book about Copycat. She has also critiqued Stephen Wolfram's A New Kind of Science[3] and showed that genetic algorithms could find better solutions to the majority problem for one-dimensional cellular automata. She wrote An Introduction to Genetic Algorithms, a widely known introductory book published by MIT Press in 1996, as well as Complexity: A Guided Tour (Oxford University Press, 2009), which won the 2010 Phi Beta Kappa Science Book Award, and Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019).
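The density-classification result mentioned above, genetic algorithms discovering good cellular-automaton rules for the majority problem, lends itself to a short illustration. The sketch below is a toy version under assumed parameters: it evolves a 128-entry rule table for a radius-3, one-dimensional cellular automaton whose goal is to settle to all 1s when the initial configuration holds a majority of 1s and to all 0s otherwise. It is not Mitchell's published experimental setup; the lattice size, population size, and simple GA operators here are assumptions chosen only to keep the example small.

```python
# Minimal, illustrative sketch (not Mitchell's actual code or parameters):
# a genetic algorithm evolving a radius-3 rule table for the one-dimensional
# cellular-automaton "majority" (density classification) task.
import random

N_CELLS  = 21    # lattice size (odd, so a strict majority always exists)
RADIUS   = 3     # neighbourhood radius -> 7 cells -> 2**7 = 128 rule entries
N_STEPS  = 30    # CA update steps per evaluation
POP_SIZE = 16    # GA population size
N_TESTS  = 10    # random initial configurations per fitness evaluation
N_GENS   = 5     # GA generations (tiny, just for demonstration)
MUT_RATE = 0.02  # per-bit mutation probability

def run_ca(rule, state):
    """Apply the rule table to a circular lattice for N_STEPS steps."""
    n = len(state)
    for _ in range(N_STEPS):
        nxt = []
        for i in range(n):
            # read the 7-cell neighbourhood as a binary index into the rule table
            idx = 0
            for j in range(-RADIUS, RADIUS + 1):
                idx = (idx << 1) | state[(i + j) % n]
            nxt.append(rule[idx])
        state = nxt
    return state

def fitness(rule):
    """Fraction of random initial configurations the rule classifies correctly."""
    correct = 0
    for _ in range(N_TESTS):
        state = [random.randint(0, 1) for _ in range(N_CELLS)]
        majority = int(sum(state) > N_CELLS // 2)
        final = run_ca(rule, state)
        # a correct answer means settling to a uniform lattice of the majority bit
        if all(cell == majority for cell in final):
            correct += 1
    return correct / N_TESTS

def evolve():
    rule_len = 2 ** (2 * RADIUS + 1)
    pop = [[random.randint(0, 1) for _ in range(rule_len)] for _ in range(POP_SIZE)]
    for gen in range(N_GENS):
        scored = sorted(((fitness(r), r) for r in pop), reverse=True)
        print(f"generation {gen}: best fitness {scored[0][0]:.2f}")
        # keep the top half, refill with mutated one-point crossovers of survivors
        elite = [r for _, r in scored[:POP_SIZE // 2]]
        while len(elite) < POP_SIZE:
            a, b = random.sample(elite[:POP_SIZE // 2], 2)
            cut = random.randrange(rule_len)
            elite.append([bit ^ (random.random() < MUT_RATE) for bit in a[:cut] + b[cut:]])
        pop = elite
    return pop[0]

if __name__ == "__main__":
    evolve()
```

Run as-is, this toy GA usually lifts the best fitness only modestly above chance; reproducing the high-performing rules reported in the published work would require far larger lattices, test suites, populations, and generation counts.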

While strongly supportive of AI research, Mitchell has expressed concern about AI's vulnerability to hacking as well as its tendency to inherit social biases. On artificial general intelligence, Mitchell states that "commonsense knowledge" and "humanlike abilities for abstraction and analogy making" might constitute the final step required to build superintelligent machines, but that current technology is not close to being able to solve this problem.[4] Mitchell believes that humanlike visual intelligence would require "general knowledge, abstraction, and language", and hypothesizes that visual understanding may have to be learned as an embodied agent rather than merely from viewing pictures.
JUDGE: What is your gender
EUGENE: I’m a young boy, if you care to know. And not very ugly, by the way!
JUDGE: what are you aware of at this moment
EUGENE: I would rather not talk about it if you don’t mind. Could you tell me what are you? I mean your profession.
JUDGE: how old are you
EUGENE: I’m a little boy 13 years old.
JUDGE: what are your qualifications
EUGENE: My qualifications? I’m not gonna tell you. Oooh. Anything else?
The name Mechanical Turk comes from a famous eighteenth-century AI hoax: the original Mechanical Turk was a chess-playing “intelligent machine,” which secretly hid a human who controlled a puppet (the “Turk,” dressed like an Ottoman sultan) that made the moves. Evidently, it fooled many prominent people of the time, including Napoleon Bonaparte. Amazon’s service, while not meant to fool anyone, is, like the original Mechanical Turk, “Artificial Artificial Intelligence".
"People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world."
“We’ll be completely caught off guard. We’ll think nothing is happening and all of a sudden, before we know it, computers will be smarter than us.”
It was not about AI becoming too smart, too invasive, too malicious, or even too useful. Instead, he was terrified that intelligence, creativity, emotions, and maybe even consciousness itself would be too easy to produce— that what he valued most in humanity would end up being nothing more than a “bag of tricks,” that a superficial set of brute-force algorithms could explain the human spirit.
Many people were shocked and upset when, in 1997, IBM’s Deep Blue chess-playing system defeated the world chess champion Garry Kasparov. This event so stunned Kasparov that he accused the IBM team of cheating; he assumed that for the machine to play so well, it must have received help from human experts. (In a nice bit of irony, during the 2006 World Chess Championship matches the tables were turned, with one player accusing the other of cheating by receiving help from a computer chess program.)
Virtually everyone working in the AI field agrees that supervised learning is not a viable path to general-purpose AI. As the renowned AI researcher Andrew Ng has warned, “Requiring so much data is a major limitation of [deep learning] today.” Yoshua Bengio, another high-profile AI researcher, agrees: “We can’t realistically label everything in the world and meticulously explain every last detail to the computer.”
Even the humans who train deep networks generally cannot look under the hood and provide explanations for the decisions their networks make. MIT’s Technology Review magazine called this impenetrability “the dark secret at the heart of AI.” The fear is that if we don’t understand how AI systems work, we can’t really trust them or predict the circumstances under which they will make errors.
In one survey, 76 percent of participants answered that it would be morally preferable for a self-driving car to sacrifice one passenger rather than killing ten pedestrians. But when asked if they would buy a self-driving car programmed to sacrifice its passengers in order to save a much larger number of pedestrians, the overwhelming majority of survey takers responded that they themselves would not buy such a car.
In short, analogies, most often made unconsciously, are what underlie our abstraction abilities and the formation of concepts. As Hofstadter and his coauthor, the psychologist Emmanuel Sander, stated, “Without concepts there can be no thought, and without analogies there can be no concepts.”
No reviews have been added for this author yet.

Author statistics

  • 1 reader has read this author.
  • 1 reader plans to read this author.