
Isadora Teich

08/10/2023
Philadelphia, PA

How the World Seeks to Regulate AI

AI has been a hot topic this year, as OpenAI’s ChatGPT became a global phenomenon. That meteoric rise has raised a host of practical and ethical concerns. Here is how some of the world’s governments are approaching the issue and how they may try to regulate AI.

Some governments seem to view this technology as more of a threat, while others see it more as an endless fountain of possibilities. However, most fall more in the middle.

Let’s take a look!

Why Is It So Difficult to Regulate AI?

For one thing, many feel that governments, especially in the US, are ill-equipped to keep pace with the rapidly accelerating digital landscape.

However, even if this were not the case, AI is so new that we are still discovering how it will integrate with our existing societies and structures. This poses a huge challenge for regulators.

What AI is and what it can and should do are somewhat unknown and constantly shifting targets.

Different countries have taken different approaches, and OpenAI itself has resisted actual regulation despite its pro-regulation public statements.

Sam Altman, chief executive of technology firm OpenAI, told US senators this May during a hearing on AI:

“Regulation of AI is essential.”

However, despite OpenAI and other AI firms making pro-regulation statements, they have fought against the EU’s proposed regulations and advocated for making ‘voluntary commitments’ instead.

In this post, we will take a look at how the US, EU, and China are approaching AI regulation.

The EU’s Approach

The EU may have the most risk-focused and cautious approach when it comes to the regulation of AI, according to Matthias Spielkamp, executive director of AlgorithmWatch, a Berlin-based non-profit organization that tracks and studies the effects of automation on society. 

After about two years of intense debate, the EU is close to finalizing its Artificial Intelligence Act. The act takes a risk-focused approach: it bans some uses of AI outright, permits others, and lays out due diligence obligations for AI firms to follow.

This June, the European Parliament approved its draft of the act, which centers on the potential risks these technologies pose to people.

The text could still change, since it requires further negotiation among the EU’s legislating bodies, but the draft bans several uses of AI that it deems an ‘unacceptable risk.’ These include:

  • Predictive policing
  • Emotion recognition
  • Real-time facial recognition

Many uses of AI are permitted, but in high-risk situations, extensive documentation and oversight are required in the EU.

The EU act requires developers to show that their systems are safe, effective, privacy-compliant, transparent, explainable to users, and non-discriminatory.

The US and China have taken somewhat different approaches.

The US Lacks Broad Data Protection or AI Regulation

In October 2022, the White House Office of Science and Technology Policy released its Blueprint for an AI Bill of Rights.

This white paper describes five principles meant to guide the use of AI as well as future regulation. They are:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

The blueprint responds to what the white paper describes, more or less, as AI’s global failure to uplift humanity: systems that have instead abused people, tracked them without their consent, and limited their opportunities.

According to the white paper:

In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.

While the EU and the US share a similar philosophy when it comes to AI, there is one big difference.

The US vs. EU Approach to AI Regulation

The EU has put forward actual legislation to regulate AI; the US has not. Instead, the US has held numerous congressional hearings and presidential meetings related to the regulation of AI.

Companies such as OpenAI and Meta have said that they would ‘implement safeguards,’ but it remains unclear what those safeguards would entail or how they would be enforced.

In place of legislation, at least so far, the US government seems to be taking tech conglomerates at their word. No binding requirements have been imposed on the tech sector.

Last year, a law did make it through Congress requiring that federal officials who work with AI products and services be trained in how those systems work.

This year, President Biden also signed an executive order that includes a requirement to prevent and remedy algorithmic discrimination at federal agencies.

However, these protections only apply to the federal government, and not to US society at large.

Some say that, due to the current political division within the US government, broader legislation to protect the population has so far been unable to pass.

Previously, lawmakers considered a bill that would have given tech corporations more responsibility for how their algorithms function. Under this Algorithmic Accountability Act, firms using automated decision-making would have had to submit impact assessments to the Federal Trade Commission (FTC).

This bill did not pass.

China Has Issued the Most Legislation to Regulate AI

So far, China has released the most comprehensive AI legislation. However, critics say that this is an effort to maintain social control in the face of technology which could upset the social order.

While the US has only passed legislation to protect and inform federal workers, China’s legislation focuses on accountability for tech corporations.

A 2021 law requires firms to be transparent and unbiased when using personal data in automated decisions, and to let people opt out of such decisions. A 2022 set of rules on recommendation algorithms from the Cyberspace Administration of China (CAC) adds that these systems must not spread fake news, get users addicted to content, or foster social unrest.

In January 2023, the Chinese government began tackling deepfakes. The goal is to stop providers from using AI to manipulate or synthesize the likenesses of real people without their consent, and the rules apply across a variety of AI tools.

This month, the CAC is stepping up its regulation of AI tools. Firms that use AI must prevent the spread of false, private, discriminatory, or violent content, and they cannot publish anything that undermines Chinese government values.

According to Kendra Schaefer, head of tech policy research at Trivium China, a Beijing-based consultancy that briefs clients on Chinese policy:

“On the one hand, China’s government is very motivated to impose social control. China is one of the most censored countries on the planet. On the other hand, there are genuine desires to protect individual privacy from corporate invasion.”

How Does the Rest of the World Want To Regulate AI?

Canada has introduced an Artificial Intelligence and Data Act. The act sets strong requirements for ‘high-impact AI systems,’ but it has yet to define what counts as a high-impact AI system.

In a white paper, the UK said it would take a ‘pro-innovation’ approach and announced no plans to regulate AI at all.

India has announced that it wants to make moves toward regulating AI, specifically when it comes to areas like copyright and the well-documented biases of these algorithms.

Australia currently has no laws relating to the regulation of AI, but it is beginning to consider them. Its government has outlined two main goals: to ensure that businesses can confidently and responsibly invest in AI technologies, and to put “appropriate safeguards” in place, in particular for high-risk tools.

Many countries, including Egypt and others in Africa, have approaches similar to Australia’s. In Egypt, however, there is a sentiment that if the country does not make itself part of the AI conversation from the beginning, others will likely force AI upon it.

Japan seems to be one of the most pro-AI countries in the world. Its approach focuses on getting the most benefit out of AI without suppressing it over what it considers ‘over-estimated risks.’

Final Thoughts on How the World Seeks to Regulate AI

It is truly a fascinating time in the tech world. AI will likely profoundly change many aspects of how we work and live, and global governments are struggling with how to approach this time of rapid acceleration and innovation.

Whether you personally take a pro- or anti-regulation stance, it is worth watching how different countries and regulatory bodies try to handle the existential and practical issues surrounding artificial intelligence.

There is no denying that the future is already here!

What do you think? Comment below.

Since 2009, we have helped create 350+ next-generation apps for startups, Fortune 500s, growing businesses, and non-profits from around the globe. Think Partner, Not Agency.


Find us on social at #MakeItApp’n®

