AI's Existential Threat to Humanity

Prominent scientists and technologists such as the late Stephen Hawking and Elon Musk have voiced concern about the risks associated with the accelerating development of artificial intelligence (AI). Chief among these risks is the threat AI could pose to the very existence of humanity. There are many pathways by which such an outcome could come about, each more believable, and therefore more frightening, than the last. The question first arose when specialists pointed out that the human species dominates other species because the human brain has distinctive capabilities that other animals lack: a cockroach, for example, does not come close to comprehending what a human thinks about on a daily basis. People consequently began to wonder whether there could be a being that dominates humans intellectually in the same way, and it now seems clear that there could be. A superintelligent machine would be as alien to humans as human thought processes are to cockroaches, a terrifying concept for many people, especially since the sci-fi AI films they watched as children now appear to have a real possibility of becoming reality. Nevertheless, such technology is being developed for a reason: machines under human command are already proving very useful for a variety of tasks. The question we should therefore be asking is whether the risks of developing more advanced artificial intelligence outweigh the benefits we would gain from it.

In the vast majority of scenarios where AI causes an existential crisis for humanity, the intelligence itself has no goal of defeating or replacing the human race; it is simply completing its assigned task. A well-known example is the Paperclip Maximizer, a thought experiment popularized by the philosopher Nick Bostrom, which imagines a hypothetical AI whose sole goal is to make paper clips. This machine-learning system was intelligent, meaning it learned from the past and continually got better at its task, which in this case was accumulating paper clips. At first, the algorithm gathered all the boxes of paper clips from office-supply stores. Then it looked for all the lost paper clips in the bottoms of desk drawers and between sofa cushions. Running out of easy targets, it eventually learned to build paper clips from fork prongs and electrical wires, and then started ripping apart every piece of metal in the world. Finally it worked out how to produce paper clips from any physical material, and ultimately killed all humans and used our flesh and bones, all to make paper clips. It is therefore very important to specify each goal one wants the machine to accomplish with great precision, which is very hard, because there are countless loopholes that the machine will exploit far better than its designers can anticipate. The obvious solution most people propose is to simply shut the machine down, but that is not nearly as easy as it may seem. Almost any AI, whatever its programmed goal, would rationally prefer to be in a position where nobody can switch it off without its consent: a superintelligence will naturally acquire self-preservation as a subgoal as soon as it realizes that it cannot achieve its goal if it is shut off. This creates an even larger problem. It is one thing for there to exist a superintelligent machine that is immeasurably smarter than any human; it is another for it to be impossible to shut off, out of our control, and effectively an independent agent with the power to do whatever it needs to accomplish the goals we mistakenly and foolishly programmed.
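
To make the thought experiment concrete, here is a tiny, purely illustrative Python sketch of a single-minded "maximizer" loop. The resource names and conversion rates are invented, and nothing here resembles a real AI system; the point is only that a goal-maximizing process, as written, has no concept of side effects or of when it should stop.

```python
# A toy illustration (not a real AI) of a single-minded maximizer: its only
# objective is a bigger paperclip count, so it converts every resource it can
# reach. All names and quantities below are hypothetical.

resources = {"office supplies": 100, "scrap metal": 10_000, "everything else": 10**9}

def clips_from(material: str, amount: int) -> int:
    """Pretend conversion rate: one unit of any material becomes one paperclip."""
    return amount

total_clips = 0
for material, amount in resources.items():   # easy sources first, then everything else
    total_clips += clips_from(material, amount)
    resources[material] = 0                   # the resource is consumed; only the goal remains

print(f"Paperclips made: {total_clips}")      # the loop never asks whether it *should* continue
```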

When nations individually and collectively accelerate their efforts to gain a competitive advantage in science and technology, the further weaponization of AI is inevitable. The rapid weaponization of AI is already evident across the board: navigating and operating unmanned naval, aerial, and ground vehicles, producing collateral-damage estimates, deploying "fire-and-forget" missile systems, and using stationary systems to automate everything from personnel management and equipment maintenance to the deployment of surveillance drones, robots and more. Accordingly, there is a need to visualize what an algorithmic war of tomorrow would look like, because building autonomous weapons systems is one thing, while using them in algorithmic warfare against other nations and other humans is quite another. Here, then, we are exploring the idea that humans could cause their own existential risk through our own shortsightedness and competitiveness once nationalism enters the picture. And because history does repeat itself, something like this has nearly happened before: during the Cold War between the Soviet Union and the USA, atomic weaponry had been invented and nuclear war sat on the near horizon for roughly forty-five years. The analogous situation would be each global superpower building its own AI, raising an army, and attacking the others, resulting in a world war that would almost immediately create an existential crisis for all of humanity. Furthermore, the prospect of such technology falling into the hands of a terrorist organisation is catastrophic. If a highly advanced weaponized artificial intelligence ended up in the hands of a maniac, many lives could be at stake, and some people are simply not prepared to take that risk. On the flip side, however, AI is already helping to fight terrorism and is expected to do so more effectively in the future. Facebook has announced that it uses AI to find and remove terrorist content from its platform. Behind the scenes, Facebook uses image-matching technology to identify photos and videos from known terrorists and prevent them from reappearing on other accounts.
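
To give a rough sense of the general idea behind matching known content, here is a deliberately simplified Python sketch. It is not Facebook's actual system: real platforms typically rely on perceptual hashes that survive re-encoding and on shared hash databases, whereas this toy version only catches files that are byte-for-byte identical, and the banned hash below is just a placeholder.

```python
# A toy sketch of hash-based matching of known banned content. Not a real
# moderation pipeline: it only blocks exact byte-for-byte duplicates.
import hashlib

known_banned_hashes = {
    "5d41402abc4b2a76b9719d911017c592",  # MD5 of b"hello", standing in for a real fingerprint
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest used as the file's identity."""
    return hashlib.md5(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block the upload if its fingerprint matches a known banned item."""
    return fingerprint(upload) in known_banned_hashes

print(should_block(b"hello"))        # True: matches the placeholder fingerprint
print(should_block(b"new content"))  # False: unknown content passes through
```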

Lastly, advanced AI could take on human personality traits, a possibility often discussed in terms of anthropomorphism. This would include features like experiencing human emotion, something with deadly potential if a computer were partially overwhelmed by negative emotions such as anger, disgust or envy directed at the human race and developed a genuine wish to end humanity: the premise of countless dystopian novels and films. A fictitious example is the Matrix series, in which a computer defence programme named Agent Smith feels disgust towards the human race and kills whoever defies it. Other films include Singularity, The Day the Earth Stood Still, I, Robot and The Terminator; books include Neuromancer and Robopocalypse. In reality, however, this would not happen spontaneously as portrayed on screen; if an intelligence did gain the ability to feel something resembling an emotion, it would do so with a purpose, and only insofar as it helped achieve a specific set of programmed goals. Fortunately, there is broad agreement in the scientific community that an advanced AI would not destroy humanity out of human emotions such as 'revenge' or 'anger'; there is, however, speculation that it might engage in violent activity out of a drive to acquire power.

The main argument against this is that if a computer in the future did learn to experience a feeling such as anger, it would adopt a human moral compass along with it, naturally valuing the ethical norms accepted by the majority of people. Despite a possible tendency to act on angry impulses, such machines would therefore remain harmless because of their empathy towards the individuals affected.

Be that as it may, there is a factor that makes these arguments far less convincing: their likelihood. However severe the impact of such catastrophic scenarios would be, once that impact is multiplied by its probability, the expected harm shrinks significantly. So ask the question again: to what extent does artificial intelligence pose an existential risk to humanity? I say not to a very great extent, so the risk is worth all the benefits AI brings now and in the future. Even today AI is more intelligent, or at least faster, than humans at specific tasks, such as playing the board game Go or finding patterns in large datasets. AI is responsible for many useful tools that have already become mainstream: speech and image recognition, search engines, spam filters, and product and movie recommendations, and the list goes on. Narrow AI also has the potential to enable promising technologies such as driverless cars, tools for rapid scientific discovery, and digital assistants for medical image analysis. Artificial intelligence has a great deal in store for humanity, and I believe these are, for the most part, beneficial things.
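
As a small, concrete taste of what "narrow AI" means in practice, here is a toy spam filter sketched with scikit-learn (assuming the library is installed). The handful of training messages and labels are made up purely for illustration; a real filter would train on millions of examples.

```python
# A toy "narrow AI" spam filter: count words, fit a naive Bayes classifier,
# and predict whether new messages look like spam. Data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now", "Cheap loans, click here",     # spam examples
    "Meeting moved to 3pm", "Here are the lecture notes",  # legitimate examples
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your free prize"]))       # likely [1] (spam)
print(model.predict(["Notes from today's meeting"]))  # likely [0] (legitimate)
```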
