Artificial Intelligence And Its Social And Ethical Implications

Artificial intelligence (AI) is widely expected to change the way humans live on this planet. Barr and Feigenbaum (1981) define AI as “the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behaviour – understanding language, learning, reasoning, solving problems and so on”. A more basic definition is given by Minsky (1968): “Artificial intelligence is the science of making machines do things that would require intelligence if done by men”.

Kurzweil (1999) predicted that by 2020 the storage capacity (memory) and computational speed (processing) of computers would match those of humans in all respects, ushering in an era of conscious machines. Turing (1950) proposed a test for judging when machines have reached a stage of human capability: a machine passes when a human conversing with it cannot reliably tell whether they are communicating with a machine or a person. Sparrow's (2004) Turing triage test extends this to the moral domain: a machine could be regarded as morally considerable if, in a triage situation where only one of two patients can be saved and one of the patients is replaced by a conscious machine, choosing to save the machine could be a legitimate decision. AI, with all its possibilities, therefore brings a moral dilemma and a challenge that is much more than merely technological. It will also reshape the way we live on this planet and our social dynamics.

Research in AI is a hot topic today. However, it has many facets, and each area brings its own challenges, both technological and ethical. These challenges differ according to the form (or lack of form) in which AI is manifested. According to Sparrow (2004), the moral equivalence of a machine to a human cannot be established unless the machine has a form resembling a human. Lemaignan et al. (2017) describe the interaction of humans with robots that have artificial intelligence built into them. This requires cognition of social aspects and multi-modal processing of multiple inputs, as witnessed in human-to-human interaction. Communication and reciprocation between humans is complex, entailing visual signal processing, understanding of symbols and gestures, real-time mental processing, planning and coordination, reactive control and pattern recognition. Lemaignan et al. (2017) selected communication through language, the contextual meaning of words and phrases, and non-verbal communication through the eyes, i.e. social gaze. To implement these objectives they designed the robot to interpret symbolic beliefs, maintain and update a model of the world around it, maintain and revise plans, and execute and monitor its human partner's actions in an event-independent manner. The authors implemented the diverse range of software required to achieve these goals by mimicking the first-order semantics of human beings.
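To make this flow of symbolic knowledge more concrete, the following is a minimal Python sketch of a perspective-aware belief store in which a robot keeps separate, updatable fact sets for itself and for its human partner. All class, method and predicate names here are illustrative assumptions, not code from Lemaignan et al.'s (2017) implementation.

```python
# Hypothetical sketch of a symbolic belief store for human-robot interaction.
# Names and structure are illustrative only, not from Lemaignan et al. (2017).

class BeliefStore:
    """Keeps one set of (subject, predicate, object) facts per agent, so the
    robot can model both its own view of the world and what it believes its
    human partner knows."""

    def __init__(self, agents):
        self.models = {agent: set() for agent in agents}

    def update(self, agent, fact):
        # Add or refresh a symbolic fact in one agent's world model.
        self.models[agent].add(fact)

    def retract(self, agent, fact):
        self.models[agent].discard(fact)

    def divergent_beliefs(self, a, b):
        # Facts agent a holds but agent b is (believed to be) unaware of:
        # candidates for verbal or gestural communication.
        return self.models[a] - self.models[b]


store = BeliefStore(["robot", "human"])
store.update("robot", ("mug", "isOn", "table"))
store.update("robot", ("mug", "isVisibleTo", "human"))
store.update("human", ("mug", "isOn", "table"))

# The robot decides what to communicate by comparing the two perspectives.
for fact in store.divergent_beliefs("robot", "human"):
    print("inform human about:", fact)
```

Comparing the two models in this way is one simple means by which a robot could decide what needs to be communicated through speech or gaze.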

Robots can be divided into three distinct categories: those used to perform tasks in controlled indoor environments, those used in harsh and unpredictable outdoor environments, and humanoid or anthropomorphic robots. Robots designed to work outdoors must have the structure and flexibility to move over different types of terrain, and thus need AI along with specialised actuators to operate in uncertain and changing environments. Cheetah 3 (Bledt et al., 2018) is one of the most advanced quadruped robots in this category. The robots that require the most extensive use of AI, however, are humanoid robots. Examples include Honda's ASIMO (Sakagami et al., 2002), WABOT-2 (Kato et al., 1974), Saya (Kobayashi et al., 2003), HUBO 2 (Oh et al., 2006) and Hanson Robotics' PKD Android (Hanson, 2006). The fascination with anthropomorphic robots continues with new and updated models such as Kawada's humanoids and Atlas. Duffy (2003) argues that the human propensity to give human form to inanimate objects (in this case robots) limits what could otherwise be achieved with robots and AI. This is a complex phenomenon in which an intelligence is given a shape and form; once a robot is given a human shape it is expected to behave in a human-like manner, and other factors such as emotions and personality creep in, introducing new challenges rather than developing an intelligence that is valued for its own sake. Lemaignan et al. (2017) present a framework for human-robot interaction in which information can be mutually exchanged, tasks achieved collaboratively, and execution carried out in a human-aware way. This entails implementing AI layers for belief systems, a priori common sense and mental models that conform to human semantics and cognition. These humanoids pose a social and ethical challenge: as they come closer to humans in appearance and, with AI, acquire their own unique personalities, the question arises whether they could one day be equivalent to human beings.

Another area where AI can play an important role is autonomous vehicles. Military applications of such vehicles are well suited to future automated warfare, which could lead to disastrous results, since automated vehicles and tanks have the potential to cause huge destruction. However, the civilian uses of autonomous vehicles are also immense, and they can bring huge benefits and solve some of the grave problems we face today. Thrun (2006) describes an autonomous self-driving vehicle that won the DARPA Grand Challenge. The vehicle's built-in AI allowed it to make decisions dynamically on the basis of sensor data, capturing long-term features of the route as well as short-term changes and obstacles along the way. Such autonomous vehicles also have applications in space exploration. Meanwhile, traffic problems are worsening every day around the world. Autonomous vehicles equipped with AI not only free the driver but can also reduce accidents and increase the efficiency of road usage by packing more vehicles onto the same road and using AI to navigate routes. Parking would also be transformed, since autonomous vehicles could reduce the need for car parks by being summoned for pick-up and drop-off on demand. AI in cars, together with the ability to form ad hoc networks with other road users, would make commutes much more efficient and easier. As autonomous vehicles make their way onto the road, however, they bring an ethical dilemma with them (Bonnefon et al., 2016). The dilemma lies in the algorithm that allows the vehicle to make decisions in an unavoidable crash: it could be programmed to save the passengers in the vehicle at all costs, or to sacrifice the passengers for a child or a large group of people, as the sketch below illustrates.
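The following minimal Python sketch shows one way this choice could surface as an explicit parameter in an autonomous vehicle's crash-decision logic. The class, the weighted-harm scoring rule and the example outcomes are illustrative assumptions for discussion only, not a real vehicle API and not the specific policy studied by Bonnefon et al. (2016).

```python
# Hypothetical sketch: the ethical dilemma reduces to a single weighting
# parameter in the decision code. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    passengers_harmed: int
    pedestrians_harmed: int

def choose_maneuver(outcomes, passenger_weight=1.0):
    """Pick the outcome with the lowest weighted harm.

    passenger_weight > 1 biases the vehicle toward protecting its own
    passengers; passenger_weight < 1 biases it toward protecting others.
    The ethical dilemma is precisely the choice of this number.
    """
    def harm(o):
        return passenger_weight * o.passengers_harmed + o.pedestrians_harmed
    return min(outcomes, key=harm)

outcomes = [
    Outcome("swerve into barrier", passengers_harmed=1, pedestrians_harmed=0),
    Outcome("stay on course", passengers_harmed=0, pedestrians_harmed=3),
]

print(choose_maneuver(outcomes, passenger_weight=1.0).description)  # utilitarian: swerve
print(choose_maneuver(outcomes, passenger_weight=5.0).description)  # self-protective: stay
```

Whoever sets that weighting, whether manufacturer, regulator or owner, is effectively answering the moral question on behalf of everyone on the road.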

Another area where AI can play a role is assistive technology that helps people with disabilities perform day-to-day tasks with ease, and service robots that carry out dull, repetitive chores around the household, including child minding. Sharkey (2008) discusses the ethical issues raised by such technologies, such as leaving a child in the full care of a robot. The algorithms programmed into the robot should be able to make constrained, rational and ethical decisions, which is quite complex, and any error or algorithmic bug could lead to disaster. The same is true of assistive robots for the elderly. A simple illustration of such a constraint is sketched below.
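As a toy illustration of a "constrained decision" in the spirit of the concerns Sharkey (2008) raises, the following Python sketch has a child-minding robot refuse or escalate any request that would leave a child in its sole care for too long. The limit value and the escalation mechanism are assumptions made for this example, not a real product policy.

```python
# Hypothetical constraint check for a child-minding service robot.
# The limit and the escalation action are illustrative assumptions only.

MAX_UNSUPERVISED_MINUTES = 15  # assumed safety limit set by a human carer

def handle_request(task, unsupervised_minutes):
    """Refuse or escalate any request that would leave a child in the robot's
    sole care for longer than the configured limit."""
    if task == "mind_child" and unsupervised_minutes > MAX_UNSUPERVISED_MINUTES:
        return "escalate_to_human_carer"
    return "accept"

print(handle_request("mind_child", unsupervised_minutes=10))   # accept
print(handle_request("mind_child", unsupervised_minutes=120))  # escalate_to_human_carer
```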

AI can also take the form of a pure software agent, with no actuators, no physical movement, and not even a single piece of hardware we can point to as the centre of intelligence. Such distributed computational systems, called agents in the language of AI, act on the basis of prior knowledge, history, observation of the current environment and past experience (Poole and Mackworth, 2010). The question of ethics is therefore not restricted to the growing capability and use of AI in robots; it applies equally to the development of a distributed networked intelligence that has no shape or form (Poole and Mackworth, 2010).
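A minimal Python sketch of an agent in this sense follows: it selects actions from prior knowledge, a history of past observations, and the current percept. The specific class, the thermostat-style rule and the example values are my own illustrative assumptions, not code from Poole and Mackworth (2010).

```python
# Minimal sketch of a software agent: prior knowledge + history + current
# observation -> action. Names and the toy rule are illustrative assumptions.

class Agent:
    def __init__(self, prior_knowledge):
        self.knowledge = prior_knowledge   # e.g. fixed rules or a learned model
        self.history = []                  # past (observation, action) pairs

    def select_action(self, observation):
        # Combine prior knowledge with the current percept; here a trivial rule.
        threshold = self.knowledge["target_temperature"]
        action = "heat_on" if observation < threshold else "heat_off"
        self.history.append((observation, action))
        return action

agent = Agent({"target_temperature": 20.0})
for temp in [18.5, 19.0, 21.2, 20.4]:
    print(temp, "->", agent.select_action(temp))
```

Even an agent this disembodied raises the ethical questions discussed above, since its decisions can affect people without there being any physical robot to point to.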

Efforts to Include Ethics in AI Research

Asimov (1950), in his science fiction, gave three basic laws to be programmed into any AI-capable robot so that it would behave in a non-destructive way at all times. These laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First and Second Laws.

These laws are very basic and simplistic; however, they can provide a foundation on which future laws governing robots, or any other form of AI, can be built. Such laws and principles could be hard-coded into intelligent agents to help prevent a disaster or minimise damage, as the sketch below illustrates.
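One hedged way to picture "hard-coding" such principles is as a filter that vetoes candidate actions before they are executed. In the Python sketch below, the Action fields, the ordering of the checks and the example actions are all illustrative assumptions rather than an established safety mechanism; real ethical constraints would be far harder to formalise.

```python
# Hypothetical sketch of Asimov-style constraints as an action filter.
# Fields, checks and examples are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    ordered_by_human: bool = False
    endangers_robot: bool = False

def permitted(action, pending_human_order=False):
    # First Law: never harm a human (or allow harm through inaction).
    if action.harms_human:
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if pending_human_order and not action.ordered_by_human:
        return False
    # Third Law: risk self-preservation only in service of the higher laws.
    if action.endangers_robot and not action.ordered_by_human:
        return False
    return True

candidates = [
    Action("push person out of traffic", ordered_by_human=True, endangers_robot=True),
    Action("ignore the order", harms_human=True),
]
safe = [a.name for a in candidates if permitted(a, pending_human_order=True)]
print(safe)  # ['push person out of traffic']
```

The difficulty, of course, lies not in writing such a filter but in deciding, reliably and in advance, which real-world actions count as "harming a human".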

Bostrom (2015) asserts that an artificial intelligence agent may soon acquire intelligence equivalent to that of human beings, and that once this happens it will exponentially increase its own intelligence. This, he argues, may spell doom for humans on this planet, as there would be no way to stop such a super-intelligent force multiplier. Some, like Joy (2000), advocate a ban on AI research, arguing that it would inevitably lead to a superintelligence beyond human control that would take over the world. Davis (2015) disagrees with Bostrom, pointing out flaws in his argument: computational power and memory cannot be equated with intelligence; an increase in intelligence does not necessarily result in a corresponding increase in power; greater intelligence does not equate to the ability to do more things; and there is no reason to believe that advances in AI would not be accompanied, in parallel, by giving AI agents an ethical grounding. A High Level Expert Group on Artificial Intelligence (AI HLEG) was set up by the European Commission in 2018 and prepared the Ethics Guidelines for Trustworthy Artificial Intelligence. This report (AI HLEG, 2018) uses the term trustworthy AI, meaning AI that is both ethical and technically robust. The guidelines provide a framework for developing AI on the basis of a human-centric model, using the technology to alleviate people's suffering rather than to create a technological showpiece.

This essay has presented a brief overview of developments in artificial intelligence in recent years. It is clear that this is a very rich and active area of research which offers tremendous opportunities for the future. Job markets would change considerably with the development of AI, as automated assistants carry out most tasks in software and robots carry out physical tasks using actuators. This is very likely to happen soon. Areas requiring lower-level technical skills, such as coding and technician jobs, would also suffer, while social sciences and soft skills would be more in demand. Another future perspective on AI is the thought that somehow we will be able to create an intelligence which will surpass us and result in a superintelligence that takes over the world. This seems unlikely to happen at the moment. However, the safe thing to do would be to agree on an ethical framework within which AI research should be carried out, and to have these ethical safeguards hard-coded into intelligent agents.

In light of these arguments it is safe to say that AI will play a vital role not only in technological development but also in the social, economic and political spheres, and will profoundly change the way humans live on this planet. There are risks of it getting out of hand, but this is already widely recognised, and the necessary safeguards are being devised to avoid pitfalls that could have disastrous consequences.

References

  1. AI HLEG. 2018. Ethics guidelines for trustworthy AI
  2. Asimov, I., 1950. I, Robot. Doubleday, Garden City, New York.
  3. Barr, A. and Feigenbaum, E., 1981. The Handbook of Artificial Intelligence Vol. I. Pitman.
  4. Bledt, G., Powell, M.J., Katz, B., Di Carlo, J., Wensing, P.M. and Kim, S., 2018, October. MIT Cheetah 3: Design and control of a robust, dynamic quadruped robot. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 2245-2252). IEEE.
  5. Bonnefon, J.F., Shariff, A. and Rahwan, I., 2016. The social dilemma of autonomous vehicles. Science, 352(6293), pp.1573-1576.
  6. Duffy, B.R., 2003. Anthropomorphism and the social robot. Robotics and autonomous systems, 42(3-4), pp.177-190.
  7. Davis, E., 2015. Ethical guidelines for a superintelligence. Artificial Intelligence, 220, pp.121-124.
  8. Hanson, D., 2006, July. Exploring the aesthetic range for humanoid robots. In Proceedings of the ICCS/CogSci-2006 long symposium: Toward social mechanisms of android science (pp. 39-42). Citeseer.
  9. Harle, R., 1999. Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence. SOPHIA-MELBOURNE-, 38, pp.158-160.
  10. Joy, B. 2000. Why the future does not need us. Wired. Available at https://www.wired.com/2000/04/joy-2/
  11. Kobayashi, H., Ichikawa, Y., Senda, M. and Shiiba, T., 2003, October. Realization of realistic and rich facial expressions by face robot. In Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003)(Cat. No. 03CH37453) (Vol. 2, pp. 1123-1128). IEEE.
  12. Kato, I., Ohteru, S., Kobayashi, H., Shirai, K. and Uchiyama, A., 1974. Information-power machine with senses and limbs. In On theory and practice of robots and manipulators (pp. 11-24). Springer, Vienna.
  13. Lemaignan, S., Warnier, M., Sisbot, E.A., Clodic, A. and Alami, R., 2017. Artificial cognition for social human–robot interaction: An implementation. Artificial Intelligence, 247, pp.45-69.
  14. Minsky, M., 1968. Semantic Information Processing. MIT Press, Cambridge, Mass.
  15. Oh, J.H., Hanson, D., Kim, W.S., Han, Y., Kim, J.Y. and Park, I.W., 2006, October. Design of android type humanoid robot Albert HUBO. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 1428-1433). IEEE.
  16. Poole, D.L. and Mackworth, A.K., 2010. Artificial Intelligence: foundations of computational agents. Cambridge University Press.
  17. Sharkey, N., 2008. The ethical frontiers of robotics. Science, 322(5909), pp.1800-1801.
  18. Sparrow, R., 2004. The turing triage test. Ethics and Information Technology, 6(4), pp.203-213.
  19. Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N. and Fujimura, K., 2002. The intelligent ASIMO: System overview and integration. In IEEE/RSJ international conference on intelligent robots and systems (Vol. 3, pp. 2478-2483). IEEE.
  20. Turing, A.M., 2004. Computing machinery and intelligence (1950). The Essential Turing: The Ideas that Gave Birth to the Computer Age. Ed. B. Jack Copeland. Oxford: Oxford UP, pp.433-64.
  21. Thorn, P.D., 2015. Nick Bostrom: Superintelligence: Paths, Dangers, Strategies.
  22. Thrun, S., 2006, September. Winning the darpa grand challenge: A robot race through the mojave desert. In 21st IEEE/ACM International Conference on Automated Software Engineering (ASE'06) (pp. 11-11). IEEE.