Artificial Intelligence Law: Discover How the Law Applies to AI

Artificial intelligence (AI) adoption has skyrocketed in the last five years. In 2015, Gartner found that only 10 percent of businesses were using or planning to use AI solutions; by 2019, that number had risen to 37 percent. As the technology improves and businesses see the value in adopting it, the trend will only continue. In a 2020 Cognilytica survey of 1,500 representatives from companies and government bodies, nearly 90 percent of respondents said they expect to have AI implementations in progress within the next two years.

While this rapid adoption of AI creates exciting new opportunities for businesses and individuals alike, it also raises important questions: Does current law apply to AI? How should this new technology be regulated?

There’s no easy answer: regulators struggle to keep up with the rapid pace of advancement in AI systems. Even so, governments around the world are trying to stay abreast of these developments and ensure that existing laws and regulations remain relevant as new challenges arise. In this article, we’ll explore the need for AI regulation, expert opinions on the best ways to approach it, and what those regulations might look like in the future.

The Need for Artificial Intelligence Regulation

AI already has myriad applications for both consumers and businesses, and the benefits are wide-ranging.

Benefits of Artificial Intelligence

For instance, innovations in process automation reduce mundane, repetitive tasks in a multitude of industries. Elsewhere, algorithms drive hyper-personalization so that users have better healthcare experiences, or more curated experiences while shopping or consuming media. Predictive analytics systems provide insights to drive better, more informed decision-making. And automated speech recognition (ASR) systems like Rev.ai have made video conferencing software more accessible than ever thanks to live captioning capabilities.
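
To make the ASR example concrete, here is a minimal sketch of what requesting a transcript from Rev.ai can look like through its Python SDK (the rev_ai package). The access token, media URL, and polling interval below are illustrative placeholders rather than values from this article, and method names should be checked against the current Rev.ai documentation.

```python
# A minimal sketch, not production code: request a transcript from Rev.ai's
# asynchronous speech-to-text API using its Python SDK (pip install rev_ai).
# The token and URL are placeholders; verify names against your SDK version.
import time

from rev_ai import apiclient
from rev_ai.models import JobStatus

# Authenticate with an access token generated in your Rev.ai account.
client = apiclient.RevAiAPIClient("YOUR_ACCESS_TOKEN")

# Submit a publicly reachable audio or video URL for transcription.
job = client.submit_job_url("https://example.com/meeting-recording.mp3")

# Poll until the job finishes. A real application would rely on the API's
# callback/webhook support rather than a sleep loop.
while client.get_job_details(job.id).status == JobStatus.IN_PROGRESS:
    time.sleep(5)

# Print the completed transcript as plain text.
print(client.get_transcript_text(job.id))
```

Live captioning itself relies on Rev.ai's separate streaming API over WebSockets; the asynchronous flow above is simply the shortest end-to-end illustration of the same underlying service.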

Even with all of these advancements, AI adoption is still in its infancy. But it won’t stay that way for long. McKinsey Global Institute research suggests that by 2030, AI could deliver additional global economic output of $13 trillion per year. And with this constant advancement comes an increasing concern over what rules or regulations should govern the technology.

Potential Risks of Artificial Intelligence

While AI provides — and will continue to provide — many benefits and cutting-edge solutions, experts agree that it also poses quite a few risks. These concerns include but are not limited to:

  • Discrimination stemming from biased facial and speech recognition algorithms
  • Violations of data privacy
  • Human injury caused by autonomous vehicles
  • Opaque or unaccountable AI-based decision-making
  • Unresolved questions of AI ethics
  • Intentional malicious use of AI

These experts include CEOs of major technology corporations. In January 2020, Google CEO Sundar Pichai stressed the need for AI regulation, writing: “There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.” 

At the World Economic Forum in Davos, Microsoft President Brad Smith expressed similar sentiments.

“We should not wait for the technology to mature before we start to put principles, and ethics, and even rules in place to govern AI,” he said.

Like any new, emerging technology, AI is imperfect. And because businesses and consumers are still figuring out how to implement it to its fullest potential, governments remain in the early stages of regulating it.

How Is Artificial Intelligence Currently Regulated?

Of course, how government plays a role in regulation is complicated, in large part because of the complexity of the technology itself. For instance, modern machine learning systems are so intricate, and consume such enormous amounts of data, that explaining to lawmakers how they make decisions, let alone regulating them, is a tall order.

Also consider the fact that AI has numerous applications across many different fields and industries — healthcare, financial services, criminal justice, education, insurance, just to name a few. A traditional regulatory approach would likely be ineffective and incredibly hard to enact.

That’s why, for the time being, many governments are taking a tentative approach to AI laws. In many cases, it’s simply too early to tell what kinds of wide-ranging impacts AI will have on society. Existing laws and regulations apply to very specific areas of AI application.

Autonomous Vehicles

According to Cognilytica, 24 countries and regions (including the United States) have established laws for autonomous vehicle operation, with eight more considering legislation that would enable autonomous vehicles to operate. This particular AI application is ripe for regulation because it poses a clear physical danger to human beings: autonomous vehicles operate in close proximity to people, and any error in either hardware or software could have deadly consequences.

In the U.S., federal lawmakers and regulators have mainly focused on autonomous vehicles, with the Department of Transportation currently investigating how best to regulate them. At the state level, 60 percent of states have enacted some form of legislation related to autonomous vehicles, be it testing or deployment.

Data Privacy and Sharing

A discussion of AI regulation cannot occur without a discussion of data regulation. Data is what feeds AI, training it to perform functions and make decisions on its own. According to Cognilytica, 31 countries and regions have laws that restrict the sharing and exchange of data without prior consent or impose other limits.

Besides these two main areas, major AI regulation remains scarce around the world. But that doesn’t mean it’s not coming later down the road.

How Will the Law Apply to Artificial Intelligence in the Future? 

That coming regulation need not hamper AI research and development. In fact, many experts agree that some regulation would actually help AI thrive. Practical rules and regulations would bolster the public’s trust in the technology, drive adoption, and allow researchers and scientists to develop more advanced solutions.

“If you want people to trust this stuff, government has to play a role,” Daniel Weitzner, a principal research scientist at the M.I.T. Computer Science and Artificial Intelligence Laboratory, told The New York Times in 2019. 

The Role of Soft Law in AI Regulation

While governments figure out the most effective ways to enact legislation around AI, researchers and experts agree that such laws should be only one piece of the puzzle. For the AI landscape to truly thrive, “soft law” should also be considered a viable complement to government regulation.

Gary Marchant, a law professor at Arizona State University, describes soft law as frameworks that “set forth substantive expectations but are not directly enforceable by government, and include approaches such as professional guidelines, private standards, codes of conduct, and best practices.”

The concept of soft law has been around for decades. One long-standing example is the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) certification standards, which date back to the 1990s and are widely used to this day.

Marchant argues that a soft law approach would be particularly suitable for AI because the field is advancing much too fast for any traditional legislative system to keep up. Technology companies are raising billions of investment dollars every year to drive new advancements and discoveries. In the year that it would take for a government body to pass new legislation, it’s entirely possible that the technology landscape will have shifted. 

The Limits of a Traditional Approach

Additionally, the complexity of the AI field, along with its many applications across industries, makes it ill-suited for a one-size-fits-all legislative approach. Certainly, autonomous vehicles and other high-stakes applications need close governmental oversight. But a framework developed with input from leading companies, civil society organizations, experts, and governments could help developers innovate according to sound ethics and principles.

Soft law frameworks for AI do exist, including the Asilomar AI Principles and Singapore’s Model AI Governance Framework. These resources were developed to help organizations deploy and implement AI responsibly. Experts agree that finding the right balance between formal government regulation and soft law will be an important component of AI law going forward.

According to Pichai: “Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.”
