Artificial Intelligence History

[1] https://everipedia.org/wiki/Artificial_Intelligence

The Precursors

As storytelling devices

Artificial beings (some of them capable of thought) appeared as storytelling devices in antiquity: "In Homer's Iliad, the half-god Hephaestus may be the first fabricator of imagined automata, mobile tripodal creatures capable of attending the gods (Book 18)".

Cited from [2] http://www.sf-encyclopedia.com/entry/robots

Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).

As actual calculating machines

The idea of building a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). The first known calculating machine was built around 1623 by the scientist Wilhelm Schickard. Gottfried Leibniz then built a crude variant, intended to perform operations on concepts rather than numbers.

Mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. In the 19th century, George Boole refined those ideas into propositional logic and Gottlob Frege developed a notational system for mechanical reasoning (a "predicate calculus"). In the 1930s, Alan Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. [2] Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".
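To make the McCulloch–Pitts idea concrete, the sketch below shows a single threshold unit over binary inputs. It is a simplification for illustration only; the original 1943 formalism also allowed inhibitory inputs. Choosing the threshold turns the same unit into different Boolean gates, the building blocks from which their networks computed logical functions:

```python
# Minimal sketch of a McCulloch-Pitts threshold unit (illustrative;
# the original 1943 formalism also included inhibitory inputs).

def mp_neuron(inputs, threshold):
    """Fire (return 1) if enough binary inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# Boolean gates fall out of the threshold choice:
def AND(a, b):
    return mp_neuron([a, b], threshold=2)  # both inputs must fire

def OR(a, b):
    return mp_neuron([a, b], threshold=1)  # one active input suffices

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```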

The Start of AI Research (1956)

The field of AI research was founded at a conference at Dartmouth College in 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell, Arthur Samuel and Herbert Simon, became the leaders of AI research. They and their students wrote programs that were, to most people, simply astonishing: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English.

The Mid-1960s

By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were optimistic about the future: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do". Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed, and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the U.S. Congress to fund more productive projects, both the U.S. and British governments cut off funding for exploratory research in AI. The next few years would later be called an "AI winter", a period when funding for AI projects was hard to find.

The 1980s

In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.
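Expert systems encoded a specialist's know-how as if-then rules applied by an inference engine. The toy forward-chaining sketch below is illustrative only; the rules and fact names are invented, not drawn from any shipped system:

```python
# Toy forward-chaining inference in the spirit of 1980s expert systems.
# The rules and facts here are hypothetical, for illustration only.

RULES = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Fire every rule whose premises are all known, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_rash"}, RULES))
# {'has_fever', 'has_rash', 'suspect_measles', 'recommend_isolation'}
```

Real systems such as MYCIN and XCON worked at far larger scale, with hundreds or thousands of rules plus mechanisms for handling uncertainty and explaining their conclusions.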

The 1990s and early 21st century

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas. The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields, and a commitment by researchers to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.

From the Mid-2010s onwards

Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception. [3] By the mid-2010s, machine learning applications were used throughout the world. In a February 2011 Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, [4] as do intelligent personal assistants in smartphones. [5] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. [6]
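At its core, the deep learning mentioned above is the old neural-network idea scaled up: stacked layers of learned weights with nonlinearities between them. The minimal NumPy sketch below shows a single forward pass through one hidden layer; the weights here are random placeholders, whereas real systems learn them from data:

```python
import numpy as np

# Minimal sketch of a forward pass through a one-hidden-layer network.
# Weights are random placeholders; trained systems learn them from data.
rng = np.random.default_rng(0)

x = rng.normal(size=4)            # a 4-dimensional input vector
W1 = rng.normal(size=(8, 4))      # hidden layer: 8 units
W2 = rng.normal(size=(2, 8))      # output layer: 2 units

hidden = np.maximum(0.0, W1 @ x)  # ReLU nonlinearity
output = W2 @ hidden              # raw scores for the 2 outputs
print(output)
```

Deep networks stack many such layers and adjust the weights by gradient descent over large datasets; the extra depth, data and compute are what distinguished the mid-2010s systems from their predecessors.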
