Negative Impacts of AI: How Big a Threat Is It To Humanity?
“Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.” – Ray Kurzweil, American computer scientist and futurist who pioneered pattern-recognition technology.
Almost every industry is embracing Artificial Intelligence and reaping its benefits; in fact, we encounter AI every single day. However, with autonomous technology progressing at a breakneck pace, several leading figures in science and engineering, including Stephen Hawking, have expressed concern over the technology’s increasing penetration. Tesla and SpaceX founder Elon Musk said, “I am really quite close… to the cutting edge in AI, and it scares the hell out of me. It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.”
Whether Artificial Intelligence is a threat to humanity is a question that has haunted humankind ever since computer scientist Alan Turing first suggested that computers might one day leave an unlimited impact on us. We are already aware of some of AI’s threatening characteristics. For starters, the pace of technological progress has been, and will continue to be, shockingly fast. OpenAI’s GPT-3, for instance, was shockingly good. In another example, people were taken aback by Microsoft’s announcement that AI had proven better than professional radiologists. Robots were supposed to replace manual labor, not professional work. But here we are: AI is quickly devouring entire professions, and those jobs will not return. As Stephen Hawking stated:
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. So we cannot know if we will be infinitely helped by AI or ignored by it and sidelined, or conceivably destroyed by it.”
Understanding what some of these negative impacts of AI might be is the first step towards preparing for them. Here are some of the most pressing challenges:
Negative Impacts of AI
Loss of Jobs
AI has broken the bottleneck of human efficiency and has reduced repetitive labor considerably. Perhaps the biggest concern associated with AI is the loss of certain types of jobs. While AI will also generate employment, many jobs done by people today will be taken over by machines. In fact, AI has matched or surpassed human capabilities in several areas, including speech translation and accounting.
Once the technology becomes more accessible, it will be very challenging, if not impossible, to restrict it. If a business can replace 50 human operators with a single chatbot service, imagine the money it will save. Similarly, if a bus operator with a fleet of 500 buses can cut costs by replacing its drivers with driverless buses, it will do so. There are, moreover, few legal consequences for firing humans. The decision-makers might feel bad about it, but from a purely shareholder-centric, single-bottom-line perspective, they will likely get over it.
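The economics driving such decisions can be made concrete with a back-of-the-envelope calculation. All figures below are hypothetical, chosen only to illustrate the shape of the comparison:

```python
# Back-of-the-envelope cost comparison with entirely hypothetical figures:
# 50 human operators versus a single chatbot service.

operators = 50
salary_per_operator = 40_000   # assumed annual cost per operator
chatbot_annual_cost = 100_000  # assumed annual cost of the chatbot service

human_cost = operators * salary_per_operator
annual_savings = human_cost - chatbot_annual_cost

print(f"Human operators: ${human_cost:,}")      # $2,000,000
print(f"Annual savings:  ${annual_savings:,}")  # $1,900,000
```

Even if the assumed chatbot cost were several times higher, the single-bottom-line case for replacement would still look compelling to a shareholder-centric firm, which is precisely the concern.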
As a result, there is a fear that AI will one day take over many jobs. Humans will have to adjust training and educational programs to prepare the future workforce, and to help the current workforce transition to better positions through upskilling.
Losing the Human Touch
The real fear with technology is of humans being “replaced”. Basic automation, such as robot-run factories that handle simple, rudimentary tasks, never really achieved this, but new concerns have emerged with developments such as autonomous cars. Even if humans are not literally replaced, we might lose the “human touch” as AI becomes more prominent. After all, we still crave human interaction, and AI cannot yet connect with us on an emotional level.
Moreover, as impressive as the technology is, AI does not have all the answers. Suppose you had an unpleasant experience on a cab ride. Understandably frustrated, you would rather speak to a human who can empathize with you than talk to a chatbot.
Biases in AI
AI bias arises when an algorithm generates results that are systematically skewed due to erroneous assumptions. Incorrect, flawed, or incomplete data can result in inaccurate and biased predictions. In some cases, the data used to train the model can also reflect existing prejudices, stereotypes, and other faulty assumptions, perpetuating real-world biases into the computer system itself. Facial recognition technology, for example, has not been racially inclusive in many instances. A study conducted by the Massachusetts Institute of Technology showed that facial analysis software exhibited an error rate of 34.7% for dark-skinned women, compared with an error rate of merely 0.8% for light-skinned men. In the hands of the wrong people, AI can also be used to manipulate elections and spread misinformation.
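Disparities like the one found in the MIT study are typically surfaced by auditing a model’s error rate separately for each demographic group. The following is a minimal sketch of such an audit using entirely made-up data; `error_rate_by_group` is a hypothetical helper, not part of any particular library:

```python
# Minimal sketch of a per-group error-rate audit. The group labels, true
# labels, and model predictions below are all fabricated for illustration.

def error_rate_by_group(groups, y_true, y_pred):
    """Return {group: fraction of misclassified examples} per group."""
    stats = {}
    for g, t, p in zip(groups, y_true, y_pred):
        errors, total = stats.get(g, (0, 0))
        stats[g] = (errors + (t != p), total + 1)
    return {g: errors / total for g, (errors, total) in stats.items()}

# Hypothetical audit data: the model errs far more often on group "B".
groups = ["A"] * 10 + ["B"] * 10
y_true = [1] * 20
y_pred = [1] * 9 + [0] + [1] * 6 + [0] * 4

rates = error_rate_by_group(groups, y_true, y_pred)
print(rates)  # {'A': 0.1, 'B': 0.4}
```

An aggregate accuracy figure would hide this gap entirely, which is why per-group breakdowns are the standard first step in a fairness audit.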
Security Concerns and the Terror of Deepfakes
Although job loss is one of the most pressing issues associated with the rise of AI, many other potential risks accompany AI disruption. Among them are concerns about how AI could be used to undermine privacy and security. A 2018 paper titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” sheds light on how the “Malicious use of AI could threaten digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), and political security (e.g., through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns).”
Just as AI can be leveraged to detect and halt cyberattacks, unfortunately, the technology can also be used by cybercriminals to launch sophisticated attacks. In fact, with the decreasing cost of development and increasing research in the field, access to emerging technologies has eased considerably. This simply implies that hackers can build more sophisticated and harmful software with less effort and at a lower cost.
With deepfakes, AI is making it extremely simple to produce fake videos of real people. In the worst scenarios, these videos may be used with malicious intent to sabotage an individual’s reputation or fuel political propaganda. For instance, an audio recording of a politician may be altered to make it seem as though the person spewed racist ideas when, in reality, they did not, completely misleading the public.
Challenges Concerning Transparency
In recent years, there has been a push for greater transparency into the inner workings of AI models, and for good reason. Transparency may mitigate problems of prejudice, partiality, and injustice, all of which have drawn attention lately.
In one such instance, Apple’s credit card business was accused of using a sexist and discriminatory lending model. Apple’s response only added to the uncertainty and mistrust: no one at the company could explain how the algorithm operated, much less defend its results. Goldman Sachs, the issuing bank, claimed that the algorithm was gender-neutral but failed to provide any evidence. The company then stated that a third party had reviewed the algorithm and that it did not even consider gender as an input. This justification was not well received, for one because even if the algorithm did not take gender as an input, it could still discriminate on the basis of gender through correlated variables. Indeed, enforcing intentional blindness to a critical variable like gender makes it harder for a business to identify, prevent, and reverse prejudice on that variable.

However, AI disclosures come with their own set of risks. Explanations can be hacked, exposing more data may render AI more vulnerable to attacks, and public disclosures may expose businesses to regulatory action.
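The point that excluding a protected attribute does not prevent discrimination can be illustrated with a toy simulation. Everything below is hypothetical: a made-up protected attribute, a made-up “proxy” feature that tracks it 90% of the time, and a hand-coded decision rule standing in for what a model trained on historically biased labels would tend to learn:

```python
import random

random.seed(0)

# Hypothetical setup: a proxy feature (say, a spending pattern) matches the
# protected group 90% of the time. The decision rule never sees the group.
def make_example():
    group = random.randint(0, 1)                     # protected attribute (hidden)
    proxy = group if random.random() < 0.9 else 1 - group
    return group, proxy

data = [make_example() for _ in range(10_000)]

# Stand-in "model": approve whenever the proxy suggests group 0. A model fit
# to biased historical approvals would tend to learn a rule like this, even
# with the protected attribute removed from its inputs.
def model(proxy):
    return 1 if proxy == 0 else 0

approvals = {0: [], 1: []}
for group, proxy in data:
    approvals[group].append(model(proxy))

rates = {g: sum(a) / len(a) for g, a in approvals.items()}
for g in (0, 1):
    # Group 0 is approved roughly nine times as often as group 1.
    print(f"group {g}: approval rate {rates[g]:.2f}")
```

The decision rule is formally “blind” to the protected attribute, yet the approval rates diverge sharply by group, which is exactly why blindness alone is a weak defense against the accusation Apple faced.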
In short, although greater transparency can provide tremendous advantages, it may also introduce equally harmful risks and threats. In order to mitigate these risks, businesses must consider how they manage the information generated about the risks and how data is shared and safeguarded.
Autonomous Weapons and AI-enabled Terrorism
Of late, there have been many heated debates among industry leaders, journalists, and governments over how AI is enabling deadly autonomous weapons systems and what could happen if this technology falls into the wrong hands. Although the original intent of incorporating AI into military operations is to safeguard humans, it is not difficult to imagine the potential negative implications. In fact, a variety of weapon systems with varying degrees of human participation are already being tested. For instance, the Taranis drone, an autonomous combat aerial vehicle, is anticipated to be fully operational by 2030 and is expected to be capable of replacing the human-piloted Tornado GR4 fighter planes.
Though “killer robots” do not exist yet, they continue to be a concern for many, and codes of ethics are already being developed. The question is whether a robot combatant can understand and implement the Laws of Armed Conflict, and whether it can differentiate between friend and foe. It is for this reason that Human Rights Watch has urged prohibitions on fully autonomous AI units that can make lethal decisions, recommending a ban similar to those in place for mines and chemical weapons.
Challenges with AI Regulation
It is widely believed that regulation is the only way to prevent, or at least temper, the most malicious AI from wreaking havoc. However, there is a caveat. Experts believe that regulating the implementation of AI is acceptable, but regulating the research is not: it can stifle progress itself, kill innovation, or simply drive the work out of the country that imposes the rules. As Peter Diamandis, the Greek-American engineer and entrepreneur, stated:
“If the government regulates against the use of drones or stem cells or artificial intelligence, all that means is that the work and the research leave the borders of that country and go someplace else.”
Among the many benefits AI will bring are improvements in health and transportation and the reshaping of business. The command-and-control paradigm should give way to humility, cooperation, and voluntary solutions. As intelligent machines become more prevalent, innovative policies must follow.
Approaching Safe AI
Developing advanced AI systems is fraught with uncertainty, and there is disagreement about the anticipated timeline, but whatever the speed of progress, there is valuable work that can be done right now. AI systems are bound to become more powerful and sophisticated in the coming years. While much about the future remains uncertain, now is the time for serious efforts to lay the groundwork for future systems with an understanding of the possible dire consequences. If we do our homework and take the appropriate steps now, we will be better prepared for whatever the future holds.
Globally, the community working towards safe and beneficial AI has grown markedly, thanks to AI researchers who are demonstrating leadership on this issue. Safe-AI research is being led by teams at OpenAI and DeepMind, among others, and AI governance is emerging as a field of study in its own right.
A better approach is required to manage a future of increasingly intelligent machine intermediaries. As we harness the opportunities AI is creating, we must vigorously confront the ethical issues across all areas, including transportation, safety, healthcare, and criminal justice. AI’s continued transformation will bring many beneficial impacts, but, as with any change, it will inevitably come with potential repercussions. In light of the rapid advancement of the field, we need to start debating how to develop AI constructively while minimizing its destructive potential.