29 March 2025
Two steps forward, one step back. How on earth can the world keep up with its own technological advancements?! Perhaps a touch on the dramatic side, but the sentiment resonates with me when it comes to AI capabilities and regulation.
There is so much happening in AI right now. Actually, a lot has been happening in AI for years - the roots of AI go back to process automation, logic, and developments like Babbage’s analytical engine and Alan Turing’s work in the 1950s. But right now, you may have noticed a huge amount of AI regulation, frameworks, strategy documents and white papers. Much of the discussion focuses on there being two sides to the coin - innovation and responsibility.
We would be naive to believe that the current regulatory landscape is anywhere near complete, and there is definitely a lot of work to be done, especially with the fragmented and unsettled way in which the AI governance world is progressing. What we do know is that the EU made a bold move early, and that the EU AI Act’s wheels are well and truly in motion.
Read on for some insights!
The current landscape
The growing interest in AI regulation and guidance is a positive sign, with many groups working to establish responsible AI practices. Yet, the gap between AI's rapid evolution and the slow implementation of effective legislation is a significant concern. Past technological shifts have demonstrated that regulation (and education) often lags behind, and AI's accelerating capabilities make this gap potentially problematic.
You can find many solid resources out there, but the global legislation tracker by lawyer and developer Raymond Sun (known as Techie Ray) is particularly valuable. As always, do your own research, but this is a tremendous place to start for AI governance, legislation and various other updates.
What do we see globally?
Europe leads the way with the EU AI Act, which is receiving a lot of attention from around the world. It’s a risk-based regulation placing different levels of requirements on different entities. The USA and UK, among others, have expressed concern that such legislation could hinder economic growth and commercial innovation. This was the reason both countries gave for declining to endorse the Leaders’ Declaration at the global AI summit in Paris. There were various reasons, but one of the main concerns was that the EU’s definition of ‘high risk’ AI systems would place undue burdens on numerous industries.
The UK currently has a private member’s bill before Parliament. This “AI (Regulation) Bill” was reintroduced after a previous version lapsed. The government has also published its AI Opportunities Action Plan. While the action plan speaks to trust and security (and mentions the AI Safety Institute specifically), its main focus is on establishing the UK as a global AI hub, rapid AI development, and siding with AI innovation.
The United States, like the UK, has no AI-specific legislation at the federal level, though numerous state and local laws touch on AI in various ways. The BCLP tracker is a helpful tool here! It is very clear that AI features heavily on the political agenda, with the current administration firmly committed to leveraging AI for growth and innovation. It has, however, been somewhat turbulent - the previous administration’s 2023 executive order (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) was revoked in 2025.
Canada’s AI & Data Act was included in the broader Consumer Privacy Protection Act (CPPA). The CPPA had been progressing nicely through Canadian parliament, but it died when parliament was prorogued on the resignation of ex-PM Trudeau. Within the CPPA, the AIDA was drafted to align with the AU AI Act, the OECD AI Principles, and NIST’s AI Risk Management Framework. All things considered, this law was at least designed to fit within the broad global context. For now, it is not part of Canada’s pending legislation.
South Korea enacted the Basic Act on AI in January 2025. Like the EU AI Act, it is a risk-based piece of legislation, focusing on key components like safety, transparency, innovation, and overall risk management. The main portion of the legislation will come into force in January 2026.
What about other global initiatives?
Starting in mid-2024, a number of countries (including the US, UK, Canada and Japan) signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This is a case of ‘actions speak louder than words’, because the Framework Convention only enters into force properly (i.e. in a binding way) once at least five of the signatories ratify it. That means not simply signing to signal an intention to comply, but usually taking action at the national level and enacting the provisions through parliament.
The OECD’s AI Principles were formally adopted by the G20 as a group what seems like eons ago, in 2019! Not every member of the group chose to become an individual adherent, though - notable non-adherents include Russia, China, and India.
The G7 Hiroshima Framework was adopted in 2023 and sets out numerous principles for the use and development of AI systems, including risk management, continuous monitoring, public transparency, and responsibility. In February 2025, the Hiroshima AI Process Friends Group was opened up internationally, and it now has 55 supporting members, including Israel, Vietnam, and India.
What does it mean?
It could mean many things. To use a word I have heard elsewhere, we seem to have a ‘polycrisis’. There is complexity and volatility, not only in the progression of AI as a technology and its regulation, but also in how the world’s powers compete with each other and jostle for position. AI truly is a world of phenomenal opportunities and perils.
In the words of Thomas Sowell, “one of the great mistakes is to judge policies and programs by their intentions rather than their results.” Some countries believe that some AI regulation (like the EU AI Act) is too strict, creates more roadblocks to innovation, and could and should simply be governed by product liability laws, instead of AI-specific laws.
What is clear, from a risk analysis perspective, is that zero risk and zero controls are as bad as each other. Zero risk stifles growth and development. Zero controls create a reckless and unsafe environment. Businesses must stay informed about the AI regulatory landscape, take stock of their own risk appetite for the use and development of AI, and plan not only for the short term but for the years ahead. The polycrisis is that everything is evolving, and doing so at such a fast pace: technological capabilities are growing daily, the regulatory landscape is progressing in countless ways across jurisdictions, and domestic and foreign politics underpin almost everything.
My prediction: it will be a bumpy road ahead. The legal issues will continue to stack up. There will be huge liability issues somewhere, much like we have seen huge data breaches in the recent past. Geopolitics will continue to play a role for some time.
And all the while, lawyers, compliance and risk professionals, and AI governance roles will keep navigating as best we can and push for safe, responsible, and trustworthy AI.
We build rock solid relationships with our clients. Get in touch today so we can learn about your business, understand your goals, and see if our solutions can fix your problems.