Chef Watson is a machine ‘chef’ that can create recipes from almost any ingredient. We are familiar with Siri and Alexa, virtual assistants that do everything from setting alarms to translating foreign phrases. Tesla’s electric cars come with an Autopilot feature that lets them auto-steer, change lanes, navigate, and self-park. What do these three things have in common? Artificial intelligence. Whether they come as software solutions or are embodied in products, AI systems are everywhere we look, and they have become a driving force in most sectors of the global economy. The use of artificial intelligence raises several important legal questions, but to analyze these issues it is important first to understand what artificial intelligence is and how it has evolved.
Intelligence is defined by the Merriam-Webster Dictionary as “the ability to learn or understand or to deal with new or trying situations; the skilled use of reason.” Intelligence has also been described as “the ability to think, to learn from experience, to solve problems, and to adapt to new situations.” A closer examination of these definitions reveals that, in these terms, one could describe a robot just as well as a human being. For example, when Google Maps factors in congestion delays and extends the arrival time from what was first displayed, the application has taken in a new situation, analyzed it, and adjusted accordingly, displaying a level of intelligence. This is not to say that machines and humans display the same type of intelligence: natural and artificial intelligence differ in numerous ways, but they are fundamentally similar in that both involve the application of learned information or data to solve a problem or carry out a task.
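A minimal sketch of that kind of adjustment, written in Python with a hypothetical `adjust_eta` helper rather than Google’s actual routing logic, might look like this:

```python
from datetime import datetime, timedelta

def adjust_eta(departure: datetime, base_minutes: float, congestion_factor: float) -> datetime:
    """Recompute an arrival estimate when traffic conditions change.

    congestion_factor is a hypothetical multiplier: 1.0 means free-flowing
    traffic, 1.5 means the route currently takes 50% longer than usual.
    """
    travel_time = timedelta(minutes=base_minutes * congestion_factor)
    return departure + travel_time

departure = datetime(2024, 5, 1, 8, 0)

# Free-flowing traffic: a 30-minute trip arrives at 08:30.
print(adjust_eta(departure, base_minutes=30, congestion_factor=1.0))

# New congestion data arrives: the same trip now ends at 08:45.
print(adjust_eta(departure, base_minutes=30, congestion_factor=1.5))
```

The “intelligent” step is simply taking in new data (the congestion factor) and revising an earlier answer accordingly.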
Artificial intelligence does not lend itself easily to a definition, but many scholars adopt the one given by Elaine Rich: “Artificial intelligence is the study of how to make computers do things at which, at the moment, people are better.” In simple terms, artificial intelligence is a simulation of natural intelligence: the ability of a machine to carry out high-level cognitive functions that require intelligent behavior would qualify. Its utility functions may be simple or complex. Artificial intelligence is built on algorithms; it is, essentially, layers of algorithms, some of them even self-programming. An algorithm is a set of specific instructions for solving a defined problem or carrying out a task, and modern computing has made algorithms ubiquitous in everything from the apps on our phones to video games. For example, social media platforms such as Twitter and Instagram use algorithms to sort content and decide its visibility to each user. Complex algorithms underpin strong artificial intelligence.
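To make this concrete, the following toy feed-ranking sketch shows what such an algorithm looks like; the `score` function and its weights are invented for illustration and bear no relation to any platform’s actual formula:

```python
# A toy feed-ranking algorithm: a fixed set of instructions that takes
# posts as input and produces an ordering (visibility) as output.

posts = [
    {"id": 1, "likes": 120, "comments": 4,  "hours_old": 2},
    {"id": 2, "likes": 15,  "comments": 30, "hours_old": 1},
    {"id": 3, "likes": 300, "comments": 2,  "hours_old": 24},
]

def score(post: dict) -> float:
    # Reward engagement, penalize age (hypothetical weights).
    engagement = post["likes"] + 2 * post["comments"]
    return engagement / (1 + post["hours_old"])

# Sort posts by descending score to decide what each user sees first.
feed = sorted(posts, key=score, reverse=True)
print([p["id"] for p in feed])
```

Every step is spelled out in advance by a programmer; nothing here is learned from data. That distinction is what the next paragraph turns to.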
How does artificial intelligence depend on algorithms? The answer is quite simple: machine learning. Machine learning is a central sub-field of artificial intelligence that focuses on algorithms or systems that learn from data and improve automatically. In other words, artificial intelligence becomes more capable because the algorithms that make it up get better at identifying patterns in data and making decisions with minimal human intervention. Advancements in the field are also subject to what is known as ‘the AI effect’: once a capability becomes mainstream, it is often no longer considered artificial intelligence. For example, optical character recognition (OCR) is rarely described as AI anymore because it has become a routine technology.
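A minimal illustration of machine learning, assuming nothing beyond the Python standard library, is fitting a line to example data by gradient descent: the program starts with arbitrary parameters and improves them automatically from the data, without a human spelling out the pattern.

```python
# Minimal machine learning sketch: learn y ≈ w * x + b from example data
# by gradient descent. The parameters improve automatically from the data;
# no one tells the program what the underlying relationship is.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1

w, b = 0.0, 0.0          # start with arbitrary parameters
learning_rate = 0.01

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned: y ≈ {w:.2f} * x + {b:.2f}")  # ends up close to y = 2x + 1
```

Contrast this with the feed-ranking sketch above: there, the rule was hand-written; here, the rule is extracted from the data itself.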
Artificial intelligence, much like any scientific development, is built on foundations that far precede it. A discussion of its history could reach back to antiquity, but concrete developments began in the 19th century, and studies in philosophical logic and theoretical computer science far predate artificial intelligence as a field. In the 1830s, Charles Babbage designed the Analytical Engine, a mechanical machine intended to exhibit intelligent behavior; however, he came to doubt that he could produce a machine as intelligent as a human being, and his work was suspended. In 1936, Alan Turing, a British mathematician, formulated the halting problem, which pointed out fundamental limits on what machines can compute. In 1950, he proposed the Turing test, which assesses a computer’s intelligence by whether its behavior in conversation is indistinguishable from a human’s. Alonzo Church, an American mathematician, independently reached equivalent results on computability, and their combined work gave rise to what is known as the Church-Turing thesis.
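The essence of Turing’s argument can be sketched in a few lines of code: suppose a perfect `halts` oracle existed (a hypothetical function, shown here only to expose the contradiction), and a program can be built that defeats it.

```python
# A sketch of Turing's diagonal argument, not a working decider.

def halts(f) -> bool:
    """Hypothetical oracle: returns True iff calling f() would halt.
    Turing proved no such general procedure can exist."""
    raise NotImplementedError

def troublemaker():
    # Ask the oracle about this very function, then do the opposite:
    # if the oracle says troublemaker halts, loop forever; if it says
    # troublemaker loops forever, halt immediately. Either answer the
    # oracle could give about troublemaker is therefore wrong.
    if halts(troublemaker):
        while True:
            pass
```

No matter how the oracle is implemented, `troublemaker` contradicts its verdict, which is why the halting problem marks a hard limit on machine computation.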
Artificial intelligence officially became an academic pursuit at the historic Dartmouth College conference in 1956, where the term was first used. The first artificial intelligence applications were introduced during this period, based on logic theorems and chess games. Some programs of this era could even solve problems involving the geometric forms used in intelligence tests, which supported the idea that intelligent computers could be created. Professor John McCarthy, an American computer scientist referred to as the ‘father of artificial intelligence’, is credited with coining the term as well as with founding the AI laboratory at MIT in 1958 and the Stanford AI Laboratory in 1963. He also developed LISP, a programming language described as “the most important tool for the implementation of symbol-processing AI systems.”