
The regulatory environment around the era-defining challenge of artificial intelligence is moving at pace. As countries and industries seek to balance rapid and much-needed innovation with the novel and complex risks of artificial intelligence, businesses are grappling with the need to leverage AI in a way that will meet governance, risk and compliance requirements — even as those requirements remain uncertain.
Antony Cook, Corporate Vice President and Deputy General Counsel at Microsoft, and Nasser Ali Khasawneh, Global Head of AI at Eversheds Sutherland, recently joined us to share their perspectives and experience of helping organisations navigate the emerging AI era. We’ve captured some key insights below, and you can watch the full webinar on-demand here.
"One thing that's changed in the past 18 months is the attention that boards are putting on making sure they have the right approach to governance and that it reflects the implication of the technology across their organisation..." — Antony Cook, Corporate Vice President and Deputy General Counsel, Microsoft
The EU AI Act is the most high-profile example at present, but there is a raft of regulations in development worldwide. The OECD is currently tracking 61 countries in the process of developing AI policies. Alongside these sit a wealth of sector-specific initiatives, approximately 393 in progress, as well as governance authority programmes; in total, the OECD is aware of 760 governance initiatives currently in the pipeline.
The scale, breadth and depth of regulatory attention is confirmation — if it were needed — of the importance being placed on ensuring the path to AI adoption has sufficient guardrails. Nevertheless, there is diversity in the way different jurisdictions are approaching the task.
Cook summarised the three main approaches:
Some territories are employing a mix of approaches, or shifting between them as political leadership changes, such as in the U.K.
AI regulation is a jurisdictional, nation-state-focused challenge (or supranational in the case of the EU), but ultimately a degree of harmonisation will be essential to help multinational organisations operate a compliant approach, as Khasawneh explained: “We always have to work within jurisdictions, within national laws, but it's fair to say that AI knows no boundaries. It is a technology that flies across boundaries so the need for harmonization could not be greater as we consider various aspects of law that are affected by AI.”
He welcomes the U.K.’s initiative at Bletchley Park of bringing together a number of countries and organisations to work towards standardisation and harmonisation, wondering: “Will we move towards a global body that is the AI equivalent of the World Intellectual Property Organisation, for example?”
However, Khasawneh acknowledges that geopolitical issues will likely be a barrier to international cooperation, preventing any kind of global treaty on AI.
As Global Head of AI at Eversheds Sutherland, Khasawneh is well-placed to give an overview of the common themes and issues on which clients are seeking external counsel. These include:
These topics demand a wide range of expertise and, because few organisations have the depth of experience in this new and expanding area, they underline the necessity of seeking external advice. Alongside that advice on practical elements of AI adoption, businesses need to focus on developing their own framework for responsible AI governance.
Cook shared how Microsoft responded to the challenge of responsible AI. The company’s approach was rooted in the realisation that while the engineers and developers that create AI systems and applications think about the technology through a certain lens, it is vital to go beyond these specific perspectives to establish globally applicable parameters for its ethical application and use.
Microsoft convened a multi-disciplinary and diverse set of stakeholders to explore responsible AI development and use. It included lawyers, humanists, sociologists and computing engineers with the goal of establishing how to ground technology development appropriately.
The result was a set of principles focusing on reliability, safety, privacy, security, accountability and transparency. Together these amount to an AI standard that is applied across the business and operationalised through engineering practices that ensure each principle is put into effect.
Once principles and frameworks have been developed, the next challenge is implementation, and leadership is critical.
The EU AI Act already includes obligations for AI literacy among boards and leadership teams. Khasawneh believes: “AI accountability is going to become an absolute requirement for boards to comply with, and for CEOs to lead with.” Cook has witnessed growing focus from boards: “One thing that's changed in the past 18 months is the attention that boards are putting on making sure they have the right approach to governance and that it reflects the implication of the technology across their organisation because this is a technology which is changing go-to-market, it's changing research and development, it's changing supply chain management, it's changing employee productivity and workforce development. So it has a very broad implication across organisations, which I think means that boards are just much more focused.”
"We really need to instill a self-learning culture. You can't expect to arrive at a board meeting once a month and then learn about AI at the board meeting. We all need a commitment to double down on AI literacy because that really puts you in a position to make informed decisions if you're a boardroom, for example, or a CEO." — Dale Waterman, Principal Solution Designer, Diligent
Cook acknowledges the scale of assimilating all the information boards need to move forward on AI, but cautions that trying to figure everything out before moving forward is not a competitive approach: “The technology is so important to competitive differentiation and opportunity, so companies need to be involved in AI. The question is how do they do that appropriately?”
He advises boards to draw on the expertise of large companies that are spearheading AI, like Microsoft, and also trade associations: “There's a lot of the trade associations, which are creating the sets of materials you can leverage in order to be able to get yourself across the issues. Making sure that you're aware of what the technology is doing and how it's being used in your organization is a big way that you can manage the sorts of risks that you may be exposed to.”
Perhaps the greatest business risk around AI right now is the risk of doing nothing. You can decide how you'd like to approach it, but what every company needs to do is have a considered approach, decide what their ambitions are, and then start a journey. Sitting, watching and waiting is not an option: this is not a fad, and it's not going away.
For more insights including the panel’s thoughts on the role of General Counsel, the interplay between AI and privacy regulation, and the issue of trust in AI, watch the on-demand webinar.