
4 Potential Risks of Artificial Intelligence and How You Can Leverage It



The artificial intelligence boom of the last decade has provided humans with all kinds of convenient tools and technologies. Thanks to AI systems, businesses are more efficient, decision-makers are more informed, and consumers can have better experiences. And those are just a few of the advantages of artificial intelligence — as developers and scientists make new discoveries in the space, AI’s applications will only grow in scope and importance. 

While AI technology makes our lives easier in myriad ways, it does have drawbacks. As AI development accelerates, experts and industry leaders have urged developers to be aware of the technology’s potential risks. Scientists like the late Stephen Hawking and tech leaders like Bill Gates and Elon Musk have all been outspoken about their increasing wariness of AI.

But why? 

Without the proper considerations, AI could lead to bias on the basis of race or gender, inequality, human job loss, and, in extreme cases, even physical harm. In the second article of our two-part series, we’ll examine some of the most commonly raised concerns about artificial intelligence and the risks it poses.

Potential Risks of Artificial Intelligence

1. Job Automation

Experts agree that job automation is the most immediate risk of AI applications. According to a 2019 study by the Brookings Institution, automation threatens about 25 percent of American jobs. The study found that automation would impact low-wage earners, especially those in food-service, office management, and administration. Jobs with repetitive tasks are the most vulnerable, but as machine learning algorithms become more sophisticated, jobs requiring degrees could be more at risk as well. 

So, are humans going to be replaced by robots in the workplace? Not exactly. With training and education programs, employees can learn to work alongside AI instead of being replaced by it. Many processes require judgment calls, and AI is still an imperfect technology, so this “attended automation” would be the ideal model, ensuring the machines produce the desired results.
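To make “attended automation” concrete, here is a minimal Python sketch of the pattern: accept high-confidence machine output automatically and route everything else to a person. All names and the threshold are hypothetical illustrations, not Rev’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    text: str          # the automated output, e.g. a draft transcript
    confidence: float  # model-reported confidence between 0.0 and 1.0

CONFIDENCE_THRESHOLD = 0.90  # hypothetical cutoff, tuned per application

def human_review(draft: str) -> str:
    # Placeholder: a real system would queue the draft for a trained
    # professional to correct and approve.
    return draft

def attended_automation(prediction: Prediction) -> str:
    """Accept high-confidence output automatically; escalate the rest."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return prediction.text            # the machine handles the easy case
    return human_review(prediction.text)  # a person makes the judgment call

print(attended_automation(Prediction("hello world", 0.97)))  # auto-accepted
print(attended_automation(Prediction("helo wrld", 0.62)))    # routed to a human
```

The threshold is the knob: lower it and the machine handles more volume; raise it and more judgment calls reach a human.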

Some AI technologies, like Automated Speech Recognition (ASR), can benefit greatly from human input. Take Rev’s industry-leading ASR engine, Rev.ai, a fully automated solution that has been trained on millions of hours of human-generated transcripts. This training data comes from our network of more than 60,000 freelancers, who create 99 percent accurate transcripts for customers across multiple industries. It’s the quality, volume, and varied sourcing of our data that make our speech-to-text output the fastest and most accurate in the game.

Our freelancers provide ground-truth transcripts to our speech recognition development team, helping them enhance our ASR engine. In turn, Rev.ai allows our freelancers to produce transcripts faster and more accurately. This combination of speech recognition AI and trained human professionals ensures that the AI’s speed and accuracy continually improve, so customers (both human and automated) get the highest quality product possible.
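Accuracy figures like “99 percent accurate” are conventionally grounded in word error rate (WER), measured against human ground-truth transcripts. Here is a minimal, self-contained sketch of the standard WER calculation; it illustrates the general metric, not Rev’s internal tooling.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic word-level Levenshtein dynamic program.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / max(len(ref), 1)

# One wrong word in four gives a WER of 0.25. A "99 percent accurate"
# transcript corresponds to a WER of roughly 0.01.
print(wer("the quick brown fox", "the quick brown box"))
```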

2. Fairness and Bias Concerns

One perceived advantage of AI is that algorithms can make fair decisions, unencumbered by human bias. But an AI system’s decisions are only as good as the data it’s trained on. If a particular population is underrepresented in the data used to train a machine learning model, the model’s output can unfairly discriminate against that population. Facial recognition technologies are the latest applications to come under scrutiny, but there have already been several high-profile cases of algorithmic bias in recent years.

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system is probably the most famous example of biased, untrustworthy AI. COMPAS, a risk assessment algorithm used by courts in Florida and other U.S. states, predicts the likelihood of a defendant reoffending. But a 2016 ProPublica investigation found that the algorithm was biased against African-American defendants. “Blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend,” the ProPublica team wrote.

How does this bias in AI happen? In most cases, the underlying data is the source. According to the McKinsey Global Institute, “Models may be trained on data containing human decisions or on data that reflect second-order effects of societal or historical inequities.” Bias can even result from the way the data was collected.
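One concrete way to surface this kind of bias is to compare error rates across groups, which is essentially what the ProPublica team did. Below is a minimal sketch using made-up toy records, not the actual COMPAS data.

```python
from collections import defaultdict

# Toy records: (group, predicted_high_risk, actually_reoffended). Illustrative only.
records = [
    ("A", True, False), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate_by_group(records):
    """Share of non-reoffenders wrongly labeled high risk, per group."""
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, predicted_high, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

print(false_positive_rate_by_group(records))
# e.g. {'A': 0.667, 'B': 0.0} -- a large gap like this signals disparate impact
```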

Automated Speech Recognition systems can exhibit bias along gender or ethnic lines because the data sets used to train them are not always equally inclusive. The speech-to-text team here at Rev is well aware that this bias exists, and we’re actively looking for ways to address it. Our speech team is careful about how we select training data, and thanks to extensive training data from a wide variety of audio sources, our ASR engine shows less bias than our competitors’ engines.
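The same per-group comparison applies directly to ASR: score each demographic slice of a test set separately and look for gaps. Here is a sketch using the open-source jiwer package (an assumption; any WER implementation works) on a hypothetical evaluation set.

```python
from jiwer import wer  # pip install jiwer -- an open-source WER library

# Hypothetical evaluation set: (speaker_group, human_reference, asr_hypothesis)
test_set = [
    ("group_1", "please send the report today", "please send the report today"),
    ("group_2", "please send the report today", "please sent the report to day"),
]

# Collect WER scores per demographic group.
by_group = {}
for group, reference, hypothesis in test_set:
    by_group.setdefault(group, []).append(wer(reference, hypothesis))

for group, scores in by_group.items():
    # A persistent WER gap between groups indicates bias in the model or data.
    print(group, sum(scores) / len(scores))
```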

3. Accidents and Physical Safety Considerations

If left unchecked, it’s possible for AI’s imperfections to cause physical harm. Let’s look at self-driving cars, an AI application that is beginning to take hold in today’s automobile market. If a self-driving car malfunctions and goes off-course, that poses an immediate risk to the passenger, other drivers on the road, and pedestrians. 

Whether self-driving cars pose a threat remains up for debate. Autonomous vehicle advocates argue that the technology will eventually make accidents a relic of the past. But a June 2020 study from the Insurance Institute for Highway Safety (IIHS) found that autonomous vehicles would actually still struggle to avoid about two-thirds of crashes. This was especially the case for situations like speeding or illegal maneuvers on the road — deliberate actions based on driver preference. 

In a statement about the study, IIHS research scientist Alexandra Mueller said, “It will be crucial for designers to prioritize safety over rider preferences if autonomous vehicles are to live up to their promise to be safer than human drivers.”

Some self-driving vehicle researchers and advocates took issue with the study, however, noting that a one-third reduction in automobile crashes would be a major milestone. 

In November 2020, the National Highway Traffic Safety Administration announced that it is seeking public comment as it develops a rules framework to govern self-driving vehicle safety. If you’re interested in providing feedback, you can do so through the agency’s public comment process.

4. Malicious Use of AI

AI researchers have managed to do a lot of good with the technology’s applications. But in the wrong hands, AI systems can be used for malicious or even dangerous purposes. In a 2018 report titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” experts and researchers found that malicious use of AI technology could threaten our digital, physical, and political security.

  • Digital Security: Machine learning algorithms could conceivably be used to automate vulnerability identification, which hackers could then exploit. And while autonomous software has been used to hack vulnerabilities for quite some time, experts worry that more sophisticated hacking algorithms will be able to exploit vulnerabilities faster and do more damage.
  • Physical Security: Autonomous weapons systems are another commonly cited AI risk. Machines programmed to destroy or kill are a frightening prospect, and a potential AI arms race between nations is even worse. In the United States, the Defense Innovation Board has established guidelines around ethical development of autonomous weapons, but governments around the world are still deciding how to regulate such machines.
  • Political Security: Machine learning technology could be leveraged to automate hyper-personalized disinformation campaigns in key districts during an election. In another scenario, some researchers worry that speech synthesis technology could be used to create a fraudulent recording of a politician making inflammatory statements, tanking their campaign.

So, Is AI Really a Threat?

Does artificial intelligence pose a catastrophic risk? Do we need to prepare for Terminators prowling the streets? 

In short, no. Not if these risks are mitigated through principled, ethical development and careful consideration from regulatory bodies. Experts believe that the future of artificial intelligence will depend on debate among a diverse population of people from different backgrounds, ethnicities, genders, and professions, in order to address the wide-ranging impacts this technology will have on our lives.

Speech recognition technology is one of the most useful and advantageous AI applications, creating efficiency and streamlining workflows across multiple industries. Here at Rev, we are committed to our journey towards a faster, fairer, and more accurate AI model. Rev.ai consistently outperforms similar solutions from Google, Amazon, and other major players, and we look forward to seeing how we can provide even better solutions to our customers in the future. Click here to read the first part of our series, The Advantages of Artificial Intelligence.

