Mitigating AI risk

Guardrails and training wheels 

Reaping the benefits of AI while mitigating risk.

It’s the question everyone wants to answer. How do you take full advantage of the vast potential of artificial intelligence while at the same time shielding society from the many risks and abuses associated with a family of technologies that is ever more sophisticated?

It’s not a simple question to answer. Over-regulation of AI could have a chilling effect on innovation. At the other end of the spectrum, a laissez-faire approach could generate some very real problems. Policymakers, regulators and AI companies themselves are striving to find the right balance.

The risks certainly can’t be ignored. The potential threat to jobs as increasing numbers of workplace tasks are automated has long been a cause of concern. More recently, it has been widely acknowledged that fake images, video and audio created by generative AI could undermine trust in institutions and democracy. Meanwhile, those working in the cyber security sector can expect an arms race between the defenders and attackers of IT systems, with both sides using AI to get the upper hand.

It’s a conundrum, and one that must ultimately be solved by good regulation and effective governance frameworks.

Europe Moves First

In terms of regulation, the European Union has been the first to move. Following an agreement between the European Parliament and the Council of Ministers, the trading bloc is in the process of implementing the world’s first comprehensive legislation governing artificial intelligence. Expected to come into force in 2025, the AI Act seeks to ensure that artificial intelligence systems are safe, fair to citizens, transparent and environmentally friendly. In addition, the new law stipulates that oversight of the technology must be carried out by people rather than machines.

Given that Europe is a regulatory superpower, this is important legislation, but it won’t be the only game in town. US regulation will doubtless follow, and in the UK, Prime Minister Rishi Sunak hosted a global summit on AI governance. Over the next few years, we may see the emergence of some kind of international consensus on what good regulation and governance actually look like.

Responsible Governance

But this is not just a matter for policymakers and regulators. The responsible development of AI technologies and their implementation by businesses and organisations will play a crucial role in ensuring the technology is indeed safe and trusted by the public. 

Those who take governance seriously will be the leaders in the AI revolution. Those who fail to mitigate the risks will suffer. Responsible AI (RAI) is the order of the day. 

As an article published by McKinsey pointed out, public awareness of AI risks has soared in recent months due to the proliferation of “generative” technologies.

Generative AI is just one of a fairly diverse group of AI technologies, but it has captured the public imagination, not least because just about anyone can use it through ChatGPT and other platforms. And the truth is that the arrival of generative tools has highlighted many of the risks that users will face.

For instance, generative AI hallucinates. Put another way, if you ask a bot to write you an essay on Henry VIII, the finished piece may not be completely accurate. That concern about accuracy applies equally to the type of AI that says yes or no to mortgage applications or sets insurance premiums.

Then there is the question of copyright and intellectual property. Generative AI is trained on information, much of it – to date – coming from the public internet. If such a system is deployed commercially, users could find themselves in breach of copyright rules relating to the training materials. Again, concerns about information sources and/or their accuracy could be levelled at other forms of AI.

And of course, generative AI can be used maliciously, most obviously through fake news stories. The Washington Post noted in December 2023 that the number of websites hosting AI-created false articles had risen by more than 1,000 percent.

Avoiding Missteps

So, for developers and users, there are reputational risks. What happens if the system you are using is breaching copyright or turning out unreliable information? Beyond the damage to reputation, there could be legal consequences and operational disruption, all of which could result in a major financial impact. If these outcomes are to be avoided, governance structures must comply with the requirements of regulators while also aligning with the expectations of customers and other stakeholders.

As McKinsey says, regulators are (or will be) looking for demonstrable transparency, human oversight, robust and secure platforms and a system of accountability. There is also an expectation – one at the core of the EU’s AI Act – that systems will be fair and unbiased. In other words, there should be no discrimination when the output of systems affects outcomes for citizens.

The implementation of responsible governance starts at the top. It is – as BCG pointed out – a matter for CEOs to grip. Indeed, research suggests that CEO participation in RAI initiatives plays a key role in optimising the business benefits associated with AI strategies. 

Beyond the CEO, the next step is to create a governance regime – utilising the risk committee – and to flag areas where a particular technology is seen to be high risk.

But perhaps we should all be looking at something more fundamental, something implied in the EU legislation: governance that ensures AI delivers the benefits – increased productivity, faster scientific progress, better education – while remaining human-centric.

In a recent interview, the Stanford academic Dr Fei-Fei Li put it simply: “The most important use of a tool as important as AI is to augment humanity, not replace it.” Human dignity and human jobs should be at the centre of our thinking as the guardrails and regulations are erected.