Abductive Reasoning as the Key to Build Trusted Artificial Intelligence

Table of contents

  1. Introduction
  2. Abduction, Deduction and Induction
  3. The State of Modern AI
  4. Artificial Intelligence of the Future
  5. Conclusion

Modern AI systems have seen major advancements and breakthroughs in recent years. However, almost all of them use a bottom-up approach in which machines are trained on as many situations as possible to increase accuracy and minimize their margin of error. This is a rather inefficient and at times untrustworthy way to teach machines. It requires large amounts of ‘good data’, and even then it remains uncertain whether the AI can be trusted in abstract situations. Abductive reasoning is a type of inference in which a conclusion is drawn from whatever information is available at the time and a ‘best explanation’ is generated from that information. This is much like how humans make decisions, and it is very intuitive. If this approach can be implemented in AI, then it will be possible to trust AI in many more circumstances than ever before.

Thesis: I claim that if we implement a top-down ‘abductive reasoning’ approach in AI systems, it will help us reach the next generation of AI, which will be more human-like. This approach will strengthen our trust in AI and increase its adaptability in diverse circumstances.

Introduction

Artificial Intelligence (AI) and Machine Learning (ML) have become buzzwords in today’s technological world, where almost everyone wants a piece of the AI-ML cake. However, most people don’t understand how AI works, what it takes to build a system that is capable of learning on its own, or that the modern techniques behind AI are very limited and cannot be trusted to do many tasks that we humans take for granted. Kelner and Kostadinov (2019) state that almost 40% of European start-ups classified as AI companies don’t actually use artificial intelligence in a way that is “material” to their businesses. Modern AI systems are heavily trained in different environments to eliminate their margin of error and increase accuracy, which is very inefficient and time-consuming. Why is it that modern AI systems cannot be trusted to perform some tasks that humans find very basic? Why do AI start-ups find it difficult to compete with established AI organizations, even when the former have more qualified and experienced people running them? Why do modern AI systems fail when exposed to new environments? All of these questions come down to one single answer: modern AI is heavily dependent on data. Without enough data, AI systems fail in most cases. Data is the foundation of Artificial Intelligence and Machine Learning; it is a common saying in the technological world that “whoever owns the data is king”. This type of thinking and these modern AI practices have paved the way for fields like Data Science. However, it is not always possible to rely on data, because there are times when machines face abstract situations and must make quick decisions without enough data, and consequently they fail. AI needs to be trained differently; a new approach is required to build trusted AI. Abductive reasoning may just be the solution to these problems.

Abduction, Deduction and Induction

You happen to know that Drake and Josh have recently had a terrible fight that ended their friendship. Now a friend of yours tells you that she just saw Drake and Josh working out together. The best explanation you can think of is that they made up. You conclude that they are friends again.

In “Silver Blaze”, a short story by Arthur Conan Doyle, Sherlock Holmes solves the mystery of the stolen racehorse by swiftly grasping the significance of the fact that no one in the house heard the family dog barking on the night of the theft. As the dog was kept in the stables, the natural inference was that the thief must have been someone the dog knew.

In these examples, the conclusion does not follow logically from the premises. For instance, it does not logically follow that Drake and Josh are friends again from the premises that they had a terrible fight which ended their friendship and that they have just been seen working out together; it does not even follow, we may suppose, from all the information you have about Drake and Josh. Nor do you have any concrete statistical data about friendships, terrible fights, and working out that might lead you from the information you have about Drake and Josh to the conclusion that they are friends again, or even to the conclusion that, probably (or with high probability), they are friends again. What leads you to the conclusion, and what according to a considerable number of philosophers may also warrant this conclusion, is precisely the fact that Drake and Josh’s being friends again would, if true, best explain the fact that they have just been seen working out together. The type of inference exhibited here is called abduction or, somewhat more commonly nowadays, Inference to the Best Explanation (Douven, 2017).

Abductive reasoning is a type of reasoning that usually starts with an incomplete set of observations and proceeds from there to the likeliest possible explanation. It is used for forming and testing a hypothesis with whatever information is available (Kudo, Murai & Akama, 2009). This is the type of reasoning that humans use most often. Apart from abduction, there are two other major types of inference – deductive and inductive. The key distinction is that deductive inferences are necessary (the truth of the premises guarantees the truth of the conclusion), whereas inductive and abductive inferences are non-necessary.

In deductive reasoning, what you infer is necessarily true if the premises from which it is inferred are true; that is, the truth of the premises guarantees the truth of the conclusion (Douven, 2017). For instance: “All apples are fruit. A Macintosh is an apple. Hence, a Macintosh is a fruit”.

It is important to note that not all inferences are of this type. Consider, for instance, the inference of “Adam is rich” from “Adam lives in Manchester” and “Most people living in Manchester are rich”. Here, the truth of the first sentence is not guaranteed (though it is made very likely) by the combined truth of the second and third sentences. Differently put, it is not always the case that when the premises are true, so is the conclusion: it is logically compatible with the truth of the premises that Adam is a member of the minority non-rich population of Manchester. The case is similar regarding your inference to the conclusion that Drake and Josh are friends again on the basis of the information that they have been seen working out together. Perhaps Drake and Josh are former business associates who still had some business-related matters to discuss, however much they would have liked to avoid this, and decided to combine this with their daily exercise; this is compatible with their being firmly determined never to make up.

It is common to group non-necessary inferences into two categories: inductive and abductive. Inductive inferences are those based purely on statistical data. For instance: “91 percent of UofT students got an average of 90+ in high school. Tanmay is a UofT student. Hence, Tanmay got an average of 90+ in high school”.

However, the relevant statistical information may also be given more elusively, as in the premise, “Most people living in Manchester are rich”. There is debate about whether the conclusion of an inductive argument should always be stated in quantitative terms (for example, that it holds with a probability of 0.91 that Tanmay got an average of 90+ in high school) or whether it can sometimes be stated in purely qualitative terms (for example, when the probability that it is true is high enough).
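To make the contrast concrete, the sketch below models abduction as "inference to the best explanation" in a few lines of Python. The hypotheses, prior plausibilities, and explanatory scores are invented purely for illustration; they are not drawn from Douven (2017) or any of the essay's sources, and the scoring rule is only a crude stand-in for how a philosopher or a Bayesian would formalize the idea.

```python
# A minimal, hypothetical sketch of "inference to the best explanation":
# each candidate hypothesis gets a prior plausibility and a score for how
# well it would explain the observation; abduction picks the best-scoring one.
# All names and numbers are illustrative assumptions, not source material.

observation = "Drake and Josh were seen working out together"

hypotheses = {
    "they made up and are friends again": {"prior": 0.4, "explains": 0.9},
    "they met by coincidence at the gym": {"prior": 0.3, "explains": 0.3},
    "they still had business matters to discuss": {"prior": 0.3, "explains": 0.5},
}

def best_explanation(candidates):
    # Score = prior plausibility * explanatory power, a rough analogue of
    # P(hypothesis) * P(observation | hypothesis) in Bayesian terms.
    return max(candidates, key=lambda h: candidates[h]["prior"] * candidates[h]["explains"])

print("Observation:", observation)
print("Best explanation:", best_explanation(hypotheses))
# -> "they made up and are friends again"
```

Unlike the deductive apple example, nothing here is guaranteed; and unlike the purely inductive UofT example, the conclusion is driven by explanatory fit rather than by a frequency in a dataset.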

The State of Modern AI

There have been amazing advancements in AI during the past few years. Machines can recognize people and images, transcribe speech, and translate languages. They can drive a car, diagnose diseases, and even tell you that you’re depressed before you know it yourself, based on how you type and scroll (Dagum, 2018). The concept of AI has been around for a while now, so why have we suddenly seen so many advances in AI systems in recent years? The answer does not lie in the algorithms; it’s all about the data. Whenever we hear about AI, it is often accompanied by terms like deep learning, machine learning, and Big Data. The key point is that there must be enough good data, along with the expensive infrastructure to process it. In fact, the top 20 contributors to open-source AI include Google, Microsoft, IBM, Uber, etc. (Assay, 2018). These biggest players readily open-source their AI pipelines, but what do they not open-source? They do not open-source data, because it is their number one asset.

While many sci-fi movies depict AI by highlighting its incredible computational power, in reality all effective practice begins with data. Consider Maslow’s Hierarchy of Needs, a pyramid with the most basic things needed for human survival at the bottom and the most complex needs at the top. Similarly, Monica Rogati’s Data Science Hierarchy of Needs is a pyramid depicting what is necessary to add intelligence to a production system. At the bottom of the pyramid is the need to gather the right data, in the right formats and systems, and in the right quantity (Rogati, 2017). Any application of AI and ML will only be as useful and accurate as the quality of the data collected. When starting to implement AI, many organisations find that their data exists in many different formats stored across several MES, ERP, and SCADA systems. If the production process has been manual, very little data has been gathered or analyzed at all, and what exists has a lot of variance in it. This is what’s known as ‘dirty data’, which means that anyone who tries to make sense of it, even a data scientist, will have to spend a tremendous amount of time and effort. They’ll need to convert the data into a common format and import it into a common system, where it can be used to build models. Once good, clean data is being gathered, manufacturers must ensure they have enough of the right data about the process they’re trying to improve or the problem they’re trying to solve. They need to make sure they have enough use cases and that they are capturing all the data variables that impact each use case.
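As a minimal sketch of that "convert to a common format" step, the snippet below normalizes readings from two plant systems that use different column names and units before combining them. The column names, units, and values are invented assumptions for illustration; they are not taken from Rogati (2017) or any specific MES or SCADA product.

```python
# A hypothetical sketch of consolidating 'dirty data' into one schema.
# All column names and readings are invented for illustration.
import pandas as pd

# e.g. an MES export using Fahrenheit and one naming convention ...
mes = pd.DataFrame({"ts": ["2024-01-01 08:00"], "temp_F": [1520.0], "line": ["A"]})
# ... and a SCADA export using Celsius and another naming convention.
scada = pd.DataFrame({"timestamp": ["2024-01-01 09:00"], "temp_C": [812.0], "line": ["A"]})

mes_clean = pd.DataFrame({
    "timestamp": pd.to_datetime(mes["ts"]),
    "temp_C": (mes["temp_F"] - 32) * 5 / 9,   # unify units
    "line": mes["line"],
})
scada_clean = pd.DataFrame({
    "timestamp": pd.to_datetime(scada["timestamp"]),
    "temp_C": scada["temp_C"],
    "line": scada["line"],
})

# One common schema, ready to be used for building models.
combined = pd.concat([mes_clean, scada_clean], ignore_index=True)
print(combined)
```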

Artificial Intelligence can do wonders when it has access to rich, comprehensive data, but is it possible to collect data about everything? No, the space of possibilities is combinatorially explosive. Just take the number of possible moves in a game of Chess or Go: calculated properly, it exceeds the number of atoms in the universe by a large factor (Silver et al., 2016). And these board games are much simpler problems than real-world tasks such as driving a car or performing medical surgery. How can we trust AI to perform such tasks, knowing that there will always be situations in which the data is not enough for the machine to come to a conclusion? Moreover, AI is also at risk of failing when its data is corrupted or incorrect. In that case, the machine will still come to a conclusion, but chances are that the conclusion will be wrong. A German pilot who left his plane on an AI autopilot got locked out of his cockpit, and the autopilot crashed the plane. Later, using the black box, it was found that the autopilot had been fed incorrect, corrupted data, which led to the crash (Faiola, 2015). An AI system with trusted autonomy should be sophisticated enough to recognize and override such erroneous commands.
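The combinatorial-explosion claim can be checked with a back-of-the-envelope calculation. The branching factors and game lengths below are the commonly cited ballpark figures (roughly 35 legal moves over about 80 plies for chess, roughly 250 over about 150 for Go), not exact values from Silver et al. (2016), and 10^80 is the usual order-of-magnitude estimate for the number of atoms in the observable universe.

```python
# Rough estimate of game-tree sizes versus atoms in the universe.
# Branching factors and game lengths are ballpark assumptions.

atoms_in_universe_exponent = 80          # ~10^80 atoms, commonly cited

chess_tree = 35 ** 80                    # branching factor ** typical plies
go_tree = 250 ** 150

print(f"Chess game tree ~ 10^{len(str(chess_tree)) - 1}")   # ~10^123
print(f"Go game tree    ~ 10^{len(str(go_tree)) - 1}")      # ~10^359
print(f"Atoms           ~ 10^{atoms_in_universe_exponent}")
# Both game trees dwarf 10^80, so exhaustively collecting 'data for everything'
# is hopeless even for board games, let alone driving or surgery.
```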

Artificial Intelligence of the Future

It is one thing to win a game of Chess or Go against a world champion (Silver et al., 2016), but it is another thing entirely to risk our lives in driverless cars. That is the difference between an AI system that has memorized a set of rules to win a game and an AI system that is trusted to make spontaneous decisions when the number of possibilities is endless and impossible to compute. Modern AI machines work on the principles of deductive and inductive reasoning, where computers are provided with complete sets of data and strict rules from which they draw their conclusions. This type of AI is very limited and difficult to trust in the many situations that require human-like reasoning ability, which works on intuition and abductive reasoning.

In the past, AI advanced through deep learning and machine learning, which take a bottom-up approach: models are trained on mountains of data. For example, driverless cars are trained in as many traffic conditions as possible to collect as much data as possible. But these data-hungry neural networks have a serious limitation: they have trouble handling ‘corner’ cases, for which there is very little data. For instance, a driverless vehicle capable of handling crosswalks, pedestrians, and traffic may have trouble processing rare occurrences, such as children in unusual Halloween costumes crossing the road after a night of trick-or-treating. Many systems are also easily fooled. The iPhone X’s facial recognition system doesn’t recognize ‘morning faces’ – a user’s puffy, hazy look in the morning (Withers, 2018). Neural networks have beaten chess champions and conquered the ancient game of Go, yet they can be fooled by an upside-down or slightly altered version of a photo and misidentify it.

Many companies and organisations have already begun to understand the importance of a top-down approach to AI, so in the future we will have top-down systems that don’t require loads of data and are more spontaneous, flexible, and fast, much more like human beings with innate intelligence. There are four major areas where work needs to be done to implement a top-down approach in AI systems (Carbone & Crowder, 2017):

  1. More efficient robot reasoning. When machines have a conceptual understanding of the world, as humans do, they need far less data and it is much easier to teach them things. Vicarious, a Union City start-up backed by people like Mark Zuckerberg and Jeff Bezos, is working towards developing ‘general intelligence for robots’, enabling them to perform tasks accurately after very few training sessions. Consider CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart): they are very easy for humans to solve but surprisingly difficult for computers. Drawing on computational neuroscience, scientists at Vicarious have developed a model that can break CAPTCHAs at a much higher rate than deep neural networks, with greater efficiency (Lázaro-Gredilla, Lin, Guntupalli & George, 2019). Such models, which generalize more broadly and train faster, are leading us in the direction of machines that have a human-like conceptual understanding of the world.
  2. Ready expertise. By coming to conclusions spontaneously and modelling a machine on what a human expert would do in situations of high uncertainty and little data, an abductive approach can beat data-hungry approaches that lack these abilities. Siemens is applying a top-down approach in its AI to control the highly complex combustion process in gas turbines, where air and gas flow into a chamber, ignite, and burn at temperatures as high as 1,600 degrees Celsius. Factors such as the quality of the gas, the air flow, and internal and external temperatures determine the volume of emissions generated and ultimately how long the turbine will continue to operate. With bottom-up machine learning methods, a gas turbine would have to run for a century before producing enough data to begin training. Instead, Siemens researchers used methods that require little data in the learning phase. The resulting monitoring system makes fine adjustments that optimize how the turbines run in terms of emissions and wear, continuously seeking the best solution in real time, much like an expert knowledgeably twirling multiple knobs in concert (Sterzing & Udluft, 2017).
  3. Common sense. If we could teach machines to navigate the world using common sense, AI would be able to tackle problems that require diverse forms of inference and knowledge. The ability to understand everyday actions and objects, keep track of new trends, communicate naturally, and handle unexpected situations without much data would pave the way for human-like AI systems. But what comes naturally to humans, without much training or data, is unimaginably difficult for machines. Still, there is progress: organisations have launched programs like the Machine Common Sense (MCS) program and have invested heavily to make this a reality (Zellers, Bisk, Schwartz & Choi, 2018).
  4. Making better bets. Humans have the ability to routinely, often spontaneously and effortlessly, run through the possibilities and act on the likeliest, even without prior experience. Machines are now starting to mimic this type of reasoning with the help of Gaussian processes, probabilistic models that can deal with extensive uncertainty, act on sparse data, and learn from experience (Rasmussen & Williams, 2006); see the sketch after this list. Alphabet, Google’s parent company, launched Project Loon, designed to provide internet service to underserved areas of the world through a system of giant balloons flying in the stratosphere. Their navigational systems use Gaussian processes to predict where, in the stratified and highly variable winds aloft, the balloons need to go. Each balloon then travels into a layer of wind blowing in the right direction, and the balloons arrange themselves to form one large communication network. The balloons not only make reasonably accurate predictions by analyzing past flight data but also analyze data during a flight and adjust their predictions accordingly (Metz, 2017). Such Gaussian processes hold great potential: they don’t require huge amounts of data to recognize patterns, the computations required for inference and learning are relatively simple, and if something goes wrong its cause can be traced back, unlike with the black boxes of neural networks.
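The sketch below shows, in a few lines, why Gaussian processes suit sparse-data settings like the wind prediction described in point 4: from a handful of observations they return both a prediction and an explicit uncertainty. The altitude/wind-speed numbers are invented for illustration and are not Loon flight data; the example simply uses scikit-learn's standard Gaussian process regressor with an RBF kernel as a stand-in for whatever model Loon actually used.

```python
# A minimal Gaussian process regression sketch on invented, sparse data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Five sparse observations: altitude (km) -> wind speed (m/s), made up.
altitude = np.array([[15.0], [16.5], [18.0], [19.5], [21.0]])
wind_speed = np.array([12.0, 18.0, 9.0, 14.0, 22.0])

# Fit a GP with a smooth RBF kernel and a little observation noise.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(altitude, wind_speed)

# Predict across the whole altitude band, with uncertainty estimates.
query = np.linspace(15.0, 21.0, 7).reshape(-1, 1)
mean, std = gp.predict(query, return_std=True)

for h, m, s in zip(query.ravel(), mean, std):
    print(f"{h:5.1f} km: {m:5.1f} +/- {s:4.1f} m/s")
# Uncertainty shrinks near observed altitudes and grows between them --
# exactly the signal a system needs to decide which wind layer to trust.
```

The design choice worth noting is that the uncertainty comes from the model itself rather than from gathering more data, which is what makes the approach "top-down" in the sense this essay uses.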

Conclusion

Machines need to become less artificial and more intelligent. Instead of relying on a bottom-up ‘big data’ approach, machines should adopt a top-down ‘abductive reasoning’ approach that more closely resembles the way humans approach problems and tasks. This general reasoning ability will allow AI to be applied more diversely than ever, and it will also create opportunities for early adopters: even new organisations that were previously unable to compete with the leaders because they lacked data will be able to turn their ideas into something useful and trustworthy. Using abductive reasoning and a top-down approach, it is possible to build AI systems that can be trusted in many more situations than ever before.
