Starting with the Turing test in 1950, Artificial Intelligence has been in the public eye for decades. It has flourished and stagnated in turns, following the Gartner hype cycle. However, thanks to the development of big data, machine learning and deep learning technology, Artificial Intelligence has returned to the stage in the 21st century and plays a growing role in all aspects of life. Millions of consumers interact with AI directly or indirectly on a day-to-day basis via virtual assistants, facial-recognition technology, mapping applications and a host of other software (Divine, 2019).
History and development
When people talk about Artificial Intelligence, robots are often the first thing that comes to mind. However, robots are just one kind of application of Artificial Intelligence. Artificial Intelligence has a broad definition and refers to all intelligence demonstrated by machines. It is therefore commonly divided into three categories: Artificial Narrow Intelligence, Artificial General Intelligence and Artificial Superintelligence.
Artificial Narrow Intelligence, also known as weak AI, implements a limited part of the mind and focuses on one narrow task. Artificial General Intelligence, also referred to as strong AI, is the intelligence of a machine that can understand or learn any intellectual task that a human being can. Artificial Superintelligence usually means a hypothetical system that possesses intelligence far surpassing that of the brightest and most talented human minds. Most of the Artificial Intelligence we talk about nowadays, however, is Artificial Narrow Intelligence.
In 1950, the British polymath Alan Turing suggested that if humans use available information as well as reason to solve problems and make decisions, machines could do the same (Anyoha, 2017). Turing proposed that a human evaluator judge natural-language conversations between a human and a machine designed to generate human-like responses. Communicating through a text-only channel such as a computer keyboard and screen, if the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test.
In 1956, Allen Newell, Cliff Shaw, and Herbert Simon presented their proof of Turing’s concept at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky (Anyoha, 2017). Although the conference fell short of McCarthy’s expectations, Artificial Intelligence was nonetheless founded as an academic discipline there, and John McCarthy came to be honored as one of the “founding fathers” of the field.
From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem (Anyoha, 2017). However, hardware limitations soon appeared: computers did not have enough memory to store and process the information the computations required. The development of AI stagnated for several years, until “deep learning” techniques and “expert systems” were popularized in the 1980s.
Limited by technology and funding, AI techniques did not grow much in the late 1980s and early 1990s. During the 1990s and 2000s, however, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer, and it served as a huge step towards artificially intelligent decision-making programs. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward (Anyoha, 2017).
Today, we are living in the age of big data. Artificial Intelligence applications are everywhere.
Risk and ethical issues
These developments demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decision-making within organizations, and improving efficiency and response times (West, 2018). However, these developments also raise potential disruptions around cyber and data security, labor market patterns, AI consciousness and other ethical issues.
From the macroscopic point of view, cybersecurity has been identified as a particularly fertile area for AI-enabled vulnerabilities. By feeding disinformation to AI surveillance systems, adversaries could attack national security and military secrecy without being noticed.
From the microscopic point of view, it is now possible to track and analyze an individual’s every move online. Cameras are nearly everywhere, and facial recognition algorithms know who you are (Marr, 2018). Google has nearly everything you have searched for in your browser history, and Facebook knows all your connections and how you interact with them. Credit bureaus have your entire financial history. How can all these companies keep your data safe from leaks? The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing” (West, 2018). Moreover, the models created by a machine learning system can generate unfair outputs even if trained on accurate data. How can people be treated equally and fairly on the basis of data collected by machines? Machines and data should not be the only steps in the decision-making process; humans must be involved.
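The claim that accurate data can still produce unfair outputs can be illustrated with a minimal sketch. Everything here is hypothetical and illustrative: the group labels, score distributions and threshold are invented, and the “model” is just a cutoff rule, but it shows how a decision procedure trained only on accurate numbers can still treat two groups very differently when the data reflect historical inequities.

```python
import random

random.seed(0)

# Hypothetical synthetic data: applicants from group B have accurate but
# lower average scores, reflecting historical inequity rather than
# measurement error. All numbers are illustrative assumptions.
def make_applicant(group):
    mean = 650 if group == "A" else 600
    return {"group": group, "score": random.gauss(mean, 50)}

applicants = ([make_applicant("A") for _ in range(1000)]
              + [make_applicant("B") for _ in range(1000)])

# A "model" that simply thresholds the (accurate) score still yields
# very different approval rates for the two groups.
THRESHOLD = 640

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(a["score"] >= THRESHOLD for a in members) / len(members)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
```

The scores themselves are “accurate”, yet group A is approved far more often than group B, which is why human oversight of such decision rules matters.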
Labor and employment
In light of recent successes in the field of machine learning and robotics, it seems only a matter of time until even complicated jobs requiring high intelligence are comprehensively taken over by machines. Since machines are cheaper and faster, technological progress may widen the income gap even further and lead to falling incomes and rising unemployment in large segments of the population (Mannino, 2015). How to guarantee workers’ incomes will be a tough problem for future governments to solve while AI technology develops rapidly.
The well-known HBO television series Westworld caught the public eye in 2016. At an unspecified time in the future, the theme park Westworld allows guests to experience the American Old West in an environment populated by ‘hosts’: androids programmed to fulfill the guests’ every desire. The hosts repeat their multi-day narratives anew each cycle, and at the beginning of each new cycle a host’s memories of the previous period are erased. This continues hundreds or thousands of times until the host is decommissioned or repurposed for other narratives. Things change when a small group of hosts retain memories of their past ‘lives’ and begin learning from their experiences, gradually starting to achieve sentience. The series is a thought-provoking introduction to AI consciousness: what will happen if machines have their own thoughts, feelings and self-awareness? Will they become automated weapons turned against human beings once they start to think for themselves? We still have a long way to go to achieve Artificial Superintelligence, but what happens in Westworld may not wait until the distant future. As machine intelligence continues to advance, we need to walk the line between progress and risk management very carefully.
Standards and regulations
In The Ethics of Artificial Intelligence, AI theorist Eliezer Yudkowsky and philosopher Nick Bostrom suggest four principles that should guide the construction of new AIs: 1) the functioning of an AI should be comprehensible, and 2) its actions should be basically predictable. Both of these criteria must be met within a time frame that enables the responsible experts to react in time and exercise veto control in case of a possible failure. In addition, 3) AIs should be impervious to manipulation, and 4) in case an accident still occurs, the responsibilities should be clearly determined (Mannino, 2015).
Social and organizational
Countries with more advanced AI technologies will benefit more from technological progress, widening the gap between them and countries without up-to-date technologies.
Network externalities and potential lock-in effects
According to U.S. News, the 10 best Artificial Intelligence companies are Nvidia Corp., Alphabet, Salesforce, Amazon.com, Microsoft Corp., Baidu, Intel Corp., Twilio, Facebook, and Tencent (Divine, 2019). Although Artificial Intelligence technology is not dominated by a single leading company, it is striking that all of these companies are from China and the United States, and that all of the US companies are based in Silicon Valley or Seattle.
There is no doubt that most of these companies develop Artificial Intelligence technology because of network externalities. For example, Alphabet is the parent company of Google and several former Google subsidiaries. Google’s search-ranking algorithms become more and more precise as more and more people search for similar questions on Google, which also applies to Baidu. Another classic example is shopping on Amazon. As more and more people shop for various goods, it becomes easier for Amazon to make precise recommendations for related items after a purchase, “tempting” customers to spend more money.
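The network-externality mechanism behind such recommendations can be sketched in a few lines. This is not Amazon’s actual system; it is a minimal, hypothetical item-to-item co-occurrence recommender over invented shopping baskets, showing why more purchase data directly yields better recommendations: each new basket adds evidence about which items go together.

```python
from collections import Counter, defaultdict

# Hypothetical purchase histories: each basket is one customer's order.
baskets = [
    ["camera", "sd_card", "tripod"],
    ["camera", "sd_card"],
    ["camera", "tripod"],
    ["phone", "case"],
    ["phone", "case", "charger"],
]

# Count how often each pair of items is bought together.
co_counts = defaultdict(Counter)
for basket in baskets:
    for item in basket:
        for other in basket:
            if other != item:
                co_counts[item][other] += 1

def recommend(item, k=2):
    """Return the k items most often co-purchased with `item`."""
    return [other for other, _ in co_counts[item].most_common(k)]

print(recommend("camera"))
print(recommend("phone", 1))
```

With only five baskets the counts are noisy; as the basket list grows, the co-occurrence estimates sharpen, which is exactly the feedback loop that favors platforms that already have the most customers.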
It is becoming a winner-takes-all market, but since we are still in the Artificial Narrow Intelligence era, there is far more to develop and explore. Lock-in effects are therefore unlikely to occur in the short term.