Thursday, 05 June 2008

Artificial Intelligence

Artificial intelligence (AI) is both the intelligence of machines and the branch of computer science which aims to create it.

Major AI textbooks define artificial intelligence as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
AI can be seen as a realization of an abstract intelligent agent (AIA) which exhibits the functional essence of intelligence. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."
Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[6] General intelligence (or "strong AI") has not yet been achieved and is a long-term goal of AI research.
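To make the intelligent-agent definition above concrete, here is a minimal sketch of a perceive-act loop in Python. All names (SimpleReflexAgent, run_episode, the thermostat rules) are illustrative assumptions and do not come from any particular system described in this article.

```python
# Minimal sketch of the "intelligent agent" abstraction: an agent perceives
# its environment and chooses actions intended to maximize its chances of
# success. All names and rules here are illustrative only.

class SimpleReflexAgent:
    """Maps each percept directly to an action via a rule table."""

    def __init__(self, rules):
        self.rules = rules  # e.g. {"too_cold": "heat_on", "too_hot": "heat_off"}

    def act(self, percept):
        # Fall back to doing nothing when no rule matches the percept.
        return self.rules.get(percept, "no_op")


def run_episode(agent, percepts):
    """Feed a sequence of percepts to the agent and collect its actions."""
    return [agent.act(p) for p in percepts]


if __name__ == "__main__":
    thermostat = SimpleReflexAgent({"too_cold": "heat_on", "too_hot": "heat_off"})
    print(run_episode(thermostat, ["too_cold", "ok", "too_hot"]))
    # ['heat_on', 'no_op', 'heat_off']
```

Real agents replace the fixed rule table with learned or planned behavior, but the perceive-act structure is the same.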

AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic.
AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others. Other names for the field have been proposed, such as computational intelligence, synthetic intelligence, intelligent systems, or computational rationality.

Perspectives on AI
Humanity has imagined in great detail the implications of thinking machines or artificial beings. They appear in Greek myths, such as Talos of Crete, the golden robots of Hephaestus and Pygmalion's Galatea. The earliest known humanoid robots (or automatons) were sacred statues worshipped in Egypt and Greece, believed to have been endowed with genuine consciousness by their craftsmen. In medieval times, alchemists such as Paracelsus claimed to have created artificial beings. Realistic clockwork imitations of human beings were built by people such as Yan Shi, Hero of Alexandria, Al-Jazari and Wolfgang von Kempelen.

Pamela McCorduck observes that "artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized." In modern fiction, beginning with Mary Shelley's classic Frankenstein, writers have explored the ethical issues presented by thinking machines.
If a machine can be created that has intelligence, can it also feel? If it can feel, does it have the same rights as a human being? This is a key issue in Frankenstein as well as in modern science fiction:
for example, the film A.I. Artificial Intelligence considers a machine in the form of a small boy which has been given the ability to feel human emotions, including, tragically, the capacity to suffer. This issue is also being considered by futurists, such as California's Institute for the Future under the name "robot rights", although many critics believe that the discussion is premature.
Science fiction writers and futurists have also speculated on the technology's potential impact on humanity. In fiction, AI has appeared as a servant (R2D2), a comrade (Lt. Commander Data), an extension to human abilities (Ghost in the Shell), a conqueror (The Matrix), a dictator (With Folded Hands) and an exterminator (Terminator, Battlestar Galactica). Some realistic potential consequences of AI are decreased human labor demand, the enhancement of human ability or experience, and a need for redefinition of human identity and basic values.
Futurists estimate the capabilities of future machines by extrapolating Moore's Law, which describes the steady exponential improvement in digital technology. Ray Kurzweil has predicted that desktop computers will have the same processing power as human brains by the year 2029, and that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the "technological singularity".
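As a rough illustration of the arithmetic behind such extrapolations, the sketch below computes growth under a fixed doubling period; the two-year doubling period is an assumed, commonly cited figure for Moore's Law, not one taken from this article.

```python
# Rough illustration: exponential growth under a fixed doubling period.
# The two-year doubling period is an assumption, not a measured figure.

def growth_factor(years, doubling_period_years=2.0):
    """How many times capability multiplies over a span of years."""
    return 2 ** (years / doubling_period_years)

print(round(growth_factor(20)))  # ~1024x over two decades
print(round(growth_factor(40)))  # ~1,048,576x over four decades
```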
"Artificial intelligence is the next stage in evolution," Edward Fredkin said in the 1980s, expressing an idea first proposed by Samuel Butler's Darwin Among the Machines (1863), and expanded upon by George Dyson in his book of the same name (1998). Several futurists and science fiction writers have predicted that human beings and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger, is now associated with robot designer Hans Moravec, cyberneticist Kevin Warwick and Ray Kurzweil.
Transhumanism has been illustrated in fiction as well, for example in the manga Ghost in the Shell.

History of AI research
Main articles: history of artificial intelligence and timeline of artificial intelligence
In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning.
The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.
Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English.
By the mid-1960s their research was heavily funded by the U.S. Department of Defense[34] and they were optimistic about the future of the new field:
1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any work a man can do."
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

These predictions, and many like them, would not come true. They had failed to recognize the difficulty of some of the problems they faced.
In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off funding for all undirected, exploratory research in AI. This was the first AI winter.
In the early 80s, AI research was revived by the commercial success of expert systems (a form of AI program that simulated the knowledge and analytical skills of one or more human experts) and by 1985 the market for AI had reached more than a billion dollars.
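As a rough illustration of the rule-based reasoning behind such expert systems, here is a minimal sketch of forward-chaining inference in Python; the rules and facts are invented for illustration and do not come from any system mentioned in this article.

```python
# Minimal sketch of forward-chaining inference, the style of reasoning used
# by many 1980s expert systems. Rules and facts are illustrative only.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, RULES))
# {'fever', 'cough', 'short_of_breath', 'flu_suspected', 'see_doctor'}
```

Commercial expert systems added large hand-built rule bases and explanation facilities on top of this basic loop.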
Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow.
Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began.
In the 90s and early 21st century AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.
The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

Philosophy of AI
Can the brain be simulated by a digital computer? If it can, then would the simulation have a mind in the same sense that people do?
In a classic 1950 paper, Alan Turing posed the question "Can machines think?" In the years since, the philosophy of artificial intelligence has attempted to answer it.
Turing's "polite convention": If a machine acts as intelligently as a human being, then it is as intelligent as a human being. Alan Turing realized that, ultimately, we can only judge the intelligence of machine based on its behavior. This insight forms the basis of the Turing test.
The Dartmouth proposal: Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it. This assertion was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.
Newell and Simon's physical symbol system hypothesis: A physical symbol system has the necessary and sufficient means of general intelligent action. This statement claims that the essence of intelligence is symbol manipulation.
Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a "feel" for the situation rather than explicit symbolic knowledge.
Gödel's incompleteness theorem: A physical symbol system cannot prove all true statements. Roger Penrose is among those who claim that Gödel's theorem limits what machines can do.

Searle's "strong AI position": A physical symbol system can have a mind and mental states. Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the "mind" might be.
The artificial brain argument: The brain can be simulated. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original. This argument combines the idea that a suitably powerful machine can simulate any process, with the materialist idea that the mind is the result of a physical process in the brain.

Source: http://en.wikipedia.org/wiki/Artificial_intelligence
