Trustworthy AI isn't magic (but it might take a miracle)

14 March 2025


Trustworthiness in AI. The concept makes me feel a little tense. It's hard to put my finger on, but there are echoes of privacy laws, breaches of trust, and monumental fines. 


Ultimately, what is needed is conscientious design and implementation, with robust ethical and operational oversight.


Simple, right?


Well, yes. To take a concept from Mel Robbins, it's simple but it's not easy. There are operational intersections, technical limitations, and legal compliance issues to consider. There must be alignment from the start on what the end result should be. All this under pressure to move fast and be the first mover. Definitely not easy!


Trustworthiness is a slightly old-fashioned concept. In simple terms, trustworthy just means worthy of confidence. Reliable, honest, deserving of trust. It’s about reputation - promising and delivering with integrity. About maintaining good standing. These are almost analog or mechanical concepts, reminiscent of an earlier time. And, when I asked Gemini, I was told that there is certainly an irony to AI being so cutting-edge, while trustworthiness is so timeless. In other words, AI is not human, yet we are asking it to behave in ways that humans deem trustworthy.


But here's the key question: why should AI not be capable of being trustworthy? And of course the correct answer, the only answer, is that it is. Everything else is semantics. We have legislation in force and coming into force, we have regulations and framework documents, and we have countless guidance notes, all of which offer some direction on how AI systems and GPAI models should be governed. The entire point of these frameworks is to shape the controls we put in place so that the result is trustworthiness. Hopefully!


There are nuances between the different sources of AI governance. But what is clear is that multiple themes recur in what ‘trustworthy’ really means when it comes to AI development and use.


What are some of those core concepts?


✅ Ethical governance

✅ Responsible AI

✅ Safety and security

✅ Honesty and integrity

✅ Benevolence

✅ Fairness

✅ Transparency


The rules and regulations are being put in place to ensure AI systems are fit for purpose, perform as expected, are of net benefit, and treat people with respect and dignity. This will change over time, and adaptability will be an important part of AI governance into the future. But with the correct awareness, controls and attitude, businesses can not only develop and deploy trustworthy AI, but they can do so in a trustworthy way.


Maybe the tension isn’t specific to AI. Maybe it relates to first movers more generally.


The first mover advantage to business can be a serious disadvantage to the user. To be clear, I believe that most businesses will do their best to create and use AI systems in a responsible and ethical way. The majority of businesses want to be the trustworthy business that uses trustworthy AI.


But, as Yogi Berra (may have) said, this is “déjà vu all over again”. This type of thing has played out before, and it continues to play out today, with data protection laws. Investigations, complaints, fines, media coverage, you name it. And I think there is real tension because this is so close to home. Privacy and AI overlap because AI does not develop, does not get nourished, does not exist, without data.


We have seen eye-wateringly large fines (in the hundreds of millions of dollars and even over a billion dollars in one headline case) that are simply seen as a cost of doing business. The bigger the business, the bigger the budget, and ultimately, the less incentive to be trustworthy. Based on pace of advancement, scalability, and profit, it might be cheaper to pay fines than to build truly trustworthy AI systems and models from the start. If data is the food of AI, why would companies want to ration themselves voluntarily?!


As with everything in risk and compliance, the fundamental question that businesses must answer is: at what cost? Actions speak louder than words, and there will be no hiding from a business's true mission and values.


I presented some key concepts above for ethical and trustworthy AI. Sometimes, it helps to have a blueprint for what to avoid in addition to what to do. So, using a simple (non-AI-generated) thesaurus search for the opposite of 'trustworthy', here are 4 descriptions you never, ever want your business or AI system to be given:


🚩 dodgy

🚩 treacherous

🚩 dubious

🚩 fishy


Don't be this business. Keep each other safe out there :)
