
How regulated is AI around the world?

Remember when the rise of ChatGPT, DALL-E and Midjourney prompted tech leaders to sign an open letter calling for a pause on AI experiments? Did experiments and development actually pause? Not really. But governments and companies have been working on frameworks for AI regulation. We decode them.

Shamita Islur
AI regulations around the world

A few years ago, the thought of machines overpowering humans felt like science fiction, where robots would physically dominate their creators. While this dystopia hasn't played out, something much more subtle, and possibly scarier, has happened. Rather than being overtaken by robots, we have been introduced to an invisible force: AI systems built on large language models (LLMs) that are already changing the way we work, live and think. The rise of AI-powered tools like ChatGPT, automated systems and smart algorithms has raised a different but important concern - how do we regulate something that can think faster than us, evolves continuously and, in this case, is invisible?

Whenever I think about the rise of AI, I remember how leaders in the tech industry signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, warning of the risks these advanced systems pose to the world. One of the questions posed then was, “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” Well, the experiments didn’t stop, and we have since seen a wave of AI startups and tech giants offering AI tools and systems to automate tasks.

With the global adoption of AI in fields including advertising and marketing, governments, companies and society are grappling with how to manage a technology that promises immense benefits but also carries significant risks. The challenge of regulating AI is no longer hypothetical; it is an urgent matter of ethics and business strategy. But regulating AI isn't so simple.

Why regulating AI is so tricky

Unlike physical robots, AI systems are intangible - algorithms and data networks that aren't limited by geographical borders or visible constraints. AI is everywhere and nowhere at the same time, and that's why regulating it is complex. Moreover, these models keep evolving rapidly. A model like GPT-3 already feels outdated compared to GPT-4o and whatever comes next. Regulating systems that are constantly changing and improving is inherently challenging.

Furthermore, AI can be biased and make mistakes (it only recently figured out that 'strawberry' has three r's), mistakes that affect lives - from biased hiring systems to incorrect medical diagnoses. OpenAI's own research, which studied millions of conversations with ChatGPT, found that in about 1 out of every 1,000 responses, the chatbot might unintentionally produce a harmful stereotype about gender or race based on someone's name. In the worst cases, this could happen as often as 1 in every 100 responses.

This stems from machine learning models ‘learning’ from data sets that may not be complete or fair. The unpredictability of the system makes me wonder who should be held accountable when an AI system misbehaves. Should it be the developer, the user or the company that owns it?

Moreover, should AI regulations be global or region-specific? Governments and companies are trying to answer exactly that.

Global AI regulations

Around the world, governments are taking very different approaches to AI regulation. Some are eager to control the risks, while others are wary of stifling innovation. Europe has been the most aggressive in regulating AI. The EU's Artificial Intelligence Act (AI Act), which will fully come into effect by August 2026, is a comprehensive legal framework for the development and use of AI, setting standards for transparency, data usage and security.

It classifies AI systems into categories based on their perceived risk, from minimal to unacceptable. If an AI system is deemed too risky, such as one that manipulates human behaviour or exploits vulnerabilities, it may be banned entirely.
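To make that tiered structure concrete, here is a minimal, illustrative sketch in Python. The tier names follow the Act's commonly described categories (unacceptable, high, limited, minimal risk), but the example use cases and the `RISK_TIERS` mapping are a simplification of my own, not the Act's actual annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"           # e.g. manipulating behaviour, exploiting vulnerabilities
    HIGH = "strict obligations before launch"  # e.g. hiring, credit scoring, medical uses
    LIMITED = "transparency obligations"       # e.g. chatbots must disclose they are AI
    MINIMAL = "largely unregulated"            # e.g. spam filters, game AI

# Illustrative mapping only; the AI Act's annexes define the real categories.
RISK_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening hiring tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def is_banned(use_case: str) -> bool:
    """A system in the unacceptable tier may not be placed on the EU market."""
    return RISK_TIERS.get(use_case) is RiskTier.UNACCEPTABLE

if __name__ == "__main__":
    for use_case, tier in RISK_TIERS.items():
        print(f"{use_case}: {tier.name} ({tier.value}), banned={is_banned(use_case)}")
```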

The AI Act builds on the approach the EU took with the General Data Protection Regulation (GDPR), its landmark data privacy law. It holds companies accountable for transparency, requiring clear documentation of AI system processes, decision-making mechanisms and data usage. It also carries penalties for non-compliance, with fines of up to €30 million or 6% of annual turnover, whichever is higher.
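As a quick illustration of how that "whichever is higher" rule plays out, here is a small back-of-the-envelope Python sketch. It uses the €30 million / 6% figures cited above, and the company turnovers are purely hypothetical.

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Maximum penalty under the rule cited above: the higher of a flat
    EUR 30 million or 6% of annual turnover."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# Hypothetical company with EUR 2 billion in annual turnover:
print(f"EUR {max_ai_act_fine(2_000_000_000):,.0f}")  # EUR 120,000,000
# For a smaller firm with EUR 100 million turnover, the EUR 30M floor applies:
print(f"EUR {max_ai_act_fine(100_000_000):,.0f}")    # EUR 30,000,000
```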

However, the EU's stringent regulations are not sitting well with companies. Meta CEO Mark Zuckerberg and 48 other signatories have urged the EU to ease AI regulations, arguing that the current rules are stifling innovation and putting Europe at risk of falling behind globally in AI development. They believe more flexible regulations are needed to promote AI growth and competitiveness, while EU regulators emphasise the need for privacy and ethical protections.

In contrast, in the U.S., California Governor Gavin Newsom has signed several new laws regulating AI, including:

  • AB 2655: Requires online platforms to remove or clearly label AI-generated deepfakes related to elections. 

  • AB 2355: Mandates transparency in AI-generated political ads. 

  • AB 2602: Prohibits Hollywood studios from using AI to replicate an actor’s voice or likeness without their consent. 

  • AB 1836: Bans the use of AI-generated replicas of deceased actors without consent from their estates.

These laws focus on deepfakes in elections and Hollywood, aiming to address the risks of AI while maintaining California's leadership in the sector. 

India, on the other hand, has become a major tech hub, but its AI regulation is still at a nascent stage. India released its National Strategy for Artificial Intelligence in 2018, focusing on AI for social good, especially in areas like healthcare, agriculture and education. The government emphasises the importance of fostering innovation while ensuring responsible AI development.

However, the country’s Digital Personal Data Protection Act (DPDP) will most likely impact how AI companies handle data. This act focuses on data privacy and user rights, which could indirectly affect AI systems that are reliant on data sets. 

Challenges companies face & how they tackle them

AI regulations are forcing companies to rethink their approach to product development, data collection, and even internal processes. Some of the biggest challenges companies face include ensuring compliance with transparency requirements, managing bias in AI systems, and dealing with the uncertainty of evolving regulations. 

A new tool developed by Swiss startup LatticeFlow has tested AI models from companies like OpenAI, Meta, and Alibaba against European Union AI regulations, revealing shortcomings in areas like cybersecurity and discriminatory output. While many models scored well overall, gaps in compliance were identified, indicating where companies may need to improve before the AI Act is fully enforced.

However, companies like IBM are investing in explainable AI technologies to address these issues. IBM's Watson provides explanations for its AI-driven decisions in sectors like healthcare and finance to meet regulatory demands for transparency.

Bias in AI systems has become one of the most scrutinised aspects of AI regulation. Companies like Amazon and Google have faced backlash for biased algorithms, prompting them to invest in systems that identify and mitigate bias.

Additionally, Google's Responsible AI Practices aim to ensure the ethical and safe development and use of AI technologies. These practices focus on principles like fairness, transparency, privacy and accountability, and aim to prevent harm, reduce bias and promote inclusive outcomes through continuous monitoring, rigorous testing, human oversight and stakeholder engagement.

Microsoft has also developed a comprehensive AI governance framework designed for compliance across jurisdictions. Its Azure AI platform includes tools that help customers adhere to regional requirements so they can scale their AI services globally.

AI regulation only gained urgency once these systems began using consumer data without consent. Deepfake controversies involving prominent personalities such as Donald Trump and Taylor Swift, along with election misinformation, have pushed companies to develop stringent frameworks.

As AI becomes more ingrained in everyday life, from the content we consume to the cars we drive, with companies coming up with campaigns and products powered by AI, the need for a balanced approach becomes clearer. Regulations should protect people from AI’s risks but also leave room for innovation. What’s concerning, however, is whether big tech companies will be held accountable in case of a mishap. At the end of the day, governments and companies need to work together, share responsibility and build transparent, fair, and safe AI systems.


