When Did Artificial Intelligence Start? A Complete History


Artificial intelligence (AI) is a young discipline of about sixty years. It is a set of sciences, theories and techniques (including mathematical logic, statistics and probability, computational neurobiology and computer science) that aims to imitate the cognitive abilities of a human being. Initiated in the wake of the Second World War, its development is closely linked to that of computing and has led computers to perform increasingly complex tasks that could previously only be delegated to a human.

Yet this technology is not intelligent in the human sense, which makes the name open to criticism from some experts. The ultimate stage of their research (a "strong" AI, i.e. the ability to contextualize very different specialized problems in a completely autonomous way) is in no way comparable to current achievements ("weak" or "moderate" AIs that are extremely efficient in their field of training). "Strong" AI, which has so far materialized only in science fiction, would require advances in basic research (not just performance improvements) to be able to model the world as a whole.

Since 2010, the field has experienced a new boom, mainly due to the considerable improvement in computing power and access to massive quantities of data.

Promises and renewed fears, sometimes fantasized, can hinder an objective understanding of the phenomenon. A brief historical overview helps to situate the field and inform current debates.



1940-1960: The birth of AI in the aftermath of cybernetics

The period between 1940 and 1960 was strongly marked by the conjunction of technological developments (of which the Second World War was an accelerator) and the desire to understand how to bring together the functioning of machines and organic beings. For Norbert Wiener, a pioneer of cybernetics, the aim was to unify mathematical theory, electronics and automation into "a whole theory of control and communication, both in animals and machines." Just before, the first mathematical and computer model of the biological neuron (the formal neuron) had been developed by Warren McCulloch and Walter Pitts as early as 1943.

At the start of the 1950s, John von Neumann and Alan Turing did not coin the term AI, but they were the founding fathers of the technology behind it: they made the transition from computers governed by nineteenth-century decimal logic (dealing with values from 0 to 9) to machines based on binary logic (which relies on Boolean algebra, dealing with chains of 0s and 1s). The two researchers thus formalized the architecture of our contemporary computers and demonstrated that it was a universal machine, capable of executing whatever is programmed. Turing, for his part, raised the question of machine intelligence for the first time in his famous 1950 article "Computing Machinery and Intelligence" and described an "imitation game" in which a human must be able to tell, in a teletype dialogue, whether they are conversing with another human or with a machine. However controversial the article may be (this "Turing test" does not seem a valid criterion to many experts), it is often cited as the origin of the debate over the boundary between the human and the machine.



The term "AI" itself can be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), while Marvin Minsky (Carnegie-Mellon University) defined it as "the construction of computer programs that engage in tasks that are, for now, performed more satisfactorily by human beings because they require high-level mental processes such as perception, memory organization, learning and critical reasoning." The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding moment of the discipline. It is also worth noting the great success of what was not so much a conference as a workshop: only six people, including McCarthy and Minsky, remained present throughout this work (which relied essentially on developments built on formal logic).

While the technology remained fascinating and promising (see, for example, the 1963 article by Reed C. Lawlor, a member of the California Bar, entitled "What Computers Can Do: Analysis and Prediction of Judicial Decisions"), its popularity faded in the early 1960s. Machines had very little memory, which made it difficult to use a computer language. Some foundations laid then are nevertheless still in use today, such as solution trees for problem solving: the IPL (information processing language) made it possible, from 1956 onwards, to write the LTM (logic theorist machine) program, which aimed to prove mathematical theorems.

Herbert Simon, economist and sociologist, predicted in 1957 that AI would beat a human at chess within the next ten years, but AI then entered its first winter. Simon's prediction proved correct... thirty years later.

1980-1990: Expert systems

In 1968, Stanley Kubrick directed the film "2001: A Space Odyssey," in which a computer, HAL 9000 (just one letter off from IBM), sums up the whole of the ethical questions raised by AI: will it represent a high level of sophistication, a good for humanity, or a danger to it? The impact of the film was naturally not scientific, but it helped popularize the theme, as did the science-fiction author Philip K. Dick, who never stopped wondering whether machines might one day feel emotions.

It was with the arrival of the first microprocessors at the end of the 1970s that AI took off again and entered the golden age of expert systems.

The way was opened at MIT in 1965 with DENDRAL (an expert system specialized in molecular chemistry) and at Stanford University in 1972 with MYCIN (a system specialized in diagnosing blood diseases and prescribing drugs). These systems were based on an "inference engine," programmed to be a logical mirror of human reasoning. When data was entered, the engine provided answers of a high level of expertise.
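The rule-based principle behind such systems can be sketched in a few lines. The rules and facts below are invented for illustration (they are not MYCIN's actual knowledge base); the point is only to show forward chaining, where an "inference engine" keeps firing if-then rules against known facts until no new conclusion appears:

```python
# Toy forward-chaining inference engine in the spirit of 1970s expert
# systems. Real systems held hundreds of rules and handled uncertainty;
# this sketch only shows the principle. All rule and fact names below
# are invented for illustration.

RULES = [
    # (conditions that must all be known, conclusion to add)
    ({"fever", "low_white_cell_count"}, "suspect_infection"),
    ({"suspect_infection", "gram_negative"}, "recommend_antibiotic_A"),
]

def infer(facts, rules=RULES):
    """Repeatedly fire every rule whose conditions hold until nothing new."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known
```

Note how the second rule can only fire once the first has added its conclusion: chaining intermediate conclusions like this is what gave expert systems their apparent reasoning ability, and also what made them opaque "black boxes" once the rule base grew large.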

Despite this promise of massive progress, the craze fell away again at the end of the 1980s and early 1990s. Programming such knowledge actually required a great deal of effort, and beyond a few hundred rules there was a "black box" effect: it was unclear how the machine reasoned. Development and maintenance thus became extremely problematic and, above all, faster, simpler and cheaper approaches were possible in many other ways. It should be recalled that by the 1990s, the term "artificial intelligence" had almost become taboo, and more modest variants, such as "advanced computing," had even entered university language.

The success in May 1997 of Deep Blue (IBM's expert system) in the chess match against Garry Kasparov fulfilled Simon's 1957 prophecy thirty years later, but it did not vindicate the financing and development of this form of AI. Deep Blue's operation was based on a systematic brute-force algorithm in which every possible move was evaluated and weighted. The defeat of the human remained highly symbolic, but Deep Blue had in fact only mastered a very limited perimeter (the rules of the game of chess), very far from being able to model the complexity of the world.
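The brute-force idea can be illustrated with a toy game. The sketch below plays single-heap Nim (take 1 to 3 stones; whoever takes the last stone wins), not chess, and Deep Blue's real search was vastly more sophisticated and hardware-assisted; this only shows the principle of evaluating every possible move by exhaustive game-tree search (minimax):

```python
# Brute-force game-tree search (minimax) on a toy game: single-heap Nim,
# where players alternately take 1-3 stones and taking the last stone wins.
# Every legal move is enumerated and scored exhaustively -- the general
# idea (not the actual code) behind engines like Deep Blue.

def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and won, so the side
        # whose turn it now is has lost.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2, 3):
        if take <= stones:
            scores.append(minimax(stones - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Exhaustively evaluate every legal move and return the best one."""
    moves = {}
    for take in (1, 2, 3):
        if take <= stones:
            moves[take] = minimax(stones - take, maximizing=False)
    return max(moves, key=moves.get)
```

For a tiny game this search is exact; for chess, the tree is so large that even Deep Blue's specialized hardware had to prune it and cut it off at a limited depth, and for Go (discussed below) brute force becomes hopeless altogether.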

Since 2010: a new boom based on massive data and new computing power

Two factors explain the discipline's new boom around 2010.

First, access to massive volumes of data. To be able to use algorithms for image classification and cat recognition, for example, it was previously necessary to carry out the sampling yourself. Today, a simple search on Google turns up millions of examples.

Second, the discovery that graphics card processors (GPUs) are highly efficient at accelerating the computation of learning algorithms. The process being very iterative, before 2010 it could take weeks to process an entire sample. The computing power of these cards (capable of more than a thousand billion operations per second) has enabled considerable progress at a limited financial cost (less than 1,000 euros per card).

This new technological equipment has enabled some significant public successes and boosted funding: in 2011, Watson, IBM's AI, won the games against two champions of Jeopardy! In 2012, Google X (Google's research lab) got an AI to recognize cats in videos; more than 16,000 processors were used for this task, but the potential was extraordinary: a machine learning to distinguish something. In 2016, AlphaGo (Google's AI specialized in the game of Go) beat the European champion (Fan Hui), the world champion (Lee Sedol) and then itself (AlphaGo Zero). Let us specify that the game of Go has a combinatorics far greater than that of chess (more than the number of particles in the universe) and that it is not possible to obtain such significant results through raw power alone (as Deep Blue did in 1997).

Where does this miracle come from? A complete paradigm shift away from expert systems. The approach has become inductive: it is no longer a matter of coding rules as for expert systems, but of letting computers discover them on their own, by correlation and classification, on the basis of massive amounts of data.
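The inductive idea can be made concrete with one of the simplest possible learners: a nearest-centroid classifier that derives its decision rule from labeled examples instead of hand-coded rules. This is an illustrative sketch, far removed from deep learning, but the principle of generalizing from data rather than programming rules is the same:

```python
# Minimal illustration of inductive learning: no rules are written by
# hand; the program summarizes labeled examples (here, as one centroid
# per class) and classifies new points by proximity. Modern deep
# learning is vastly more complex, but rests on the same principle.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-class centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify a point by its squared distance to the closest centroid."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

Given a handful of labeled points for "cat" and "dog," the model places each new point with whichever class average it sits closest to; the decision rule emerges entirely from the data.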

Machine learning, and deep learning in particular, is among the most promising techniques for many applications (including voice and image recognition). In 2003, Geoffrey Hinton (University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (New York University) decided to launch a research program to bring neural networks up to date. Experiments conducted simultaneously at Microsoft, Google and IBM, with the help of Hinton's Toronto laboratory, showed that this type of learning significantly reduced speech recognition error rates. Hinton's image recognition team achieved similar results.

Almost overnight, most research teams turned to this technology, and the results are indisputable. This type of learning has also enabled considerable progress in text recognition, but, according to experts such as Yann LeCun, there is still a long way to go before systems truly understand text. Conversational agents illustrate this challenge well: our smartphones already know how to transcribe an instruction but cannot fully contextualize it or analyze our intentions.


