
Research Report

From compliance to confidence

Embracing a new mindset to advance responsible AI maturity

5-minute read

December 11, 2024

In brief

  • Generative AI-related risks continue to accumulate. New AI-focused laws and regulations are expanding and AI value chains are growing in complexity.

  • Companies need to build a mature responsible AI capability as a critical enabler to mitigate risks and unlock the many benefits of using AI.

  • While companies are improving their responsible AI maturity, none is fully mature today. Maturity can be improved by focusing on five priorities.

As generative AI continues to transform business and society, its potential risks and rewards have become a critical focus for organizations worldwide. Instances like chatbots "hallucinating" policies, employees accidentally exposing proprietary data or algorithms wrongly flagging individuals for fraud highlight the urgent need for responsible AI practices. 

Our latest research, conducted in collaboration with Stanford University, surveyed 1,000 C-suite executives spanning 19 industries and 22 countries to evaluate the current state of responsible AI maturity and the importance of adopting a responsible approach to harness AI's full potential.

The findings reveal a significant gap in companies' preparedness to implement AI responsibly: most companies are not as advanced in their responsible AI practices as they believe, with only a minority effectively navigating the risks to reap substantial rewards. 

As AI-related risks mount and regulatory landscapes evolve, responsible AI will play a critical role in securing a competitive advantage and fostering sustainable growth in today’s rapidly evolving technology environment.

Risky business

We found that the risk landscape will continue to expand and evolve across three main areas:

Generative AI is reshaping the risk landscape. AI-driven incidents, particularly those linked to generative AI, have surged and account for about two-thirds of all incidents, highlighting the urgent need for risk mitigation strategies.

51%

of respondents cited privacy and data governance as their top risk, followed by security (47%) and reliability (45%).

With no global approach to AI regulation, compliance is increasing in complexity for multinational firms. To add to this complexity, 90% of companies surveyed expect to be subject to AI-adjacent legal obligations over the next five years.

77%

of the surveyed companies either face AI regulation already or expect to be subject to it over the next five years.

As firms begin both buying and developing AI models, they also need to prepare for the associated risks. More than a quarter (28%) of the companies we surveyed take on the role of both buyer and developer.

43%

of companies acquiring AI models have procurement measures in place, such as regulatory checks and third-party audits.

We talked to Cathy Li, World Economic Forum, about regulation and compliance. The challenge, in her words:
“Finding a regulatory approach that is both comprehensive and flexible enough to address the multi-faceted nature of AI while avoiding unnecessary regulatory burdens.”

We also spoke with Valerio Cencig, Intesa Sanpaolo, who shared insights on the risks of using AI and the rewards of doing so responsibly.
“There is no real value without responsibility in what we create.”

From compliance to value

In the past, businesses treated responsible AI as a compliance issue, rather than something that could unlock value. Fortunately, mindsets are changing. Almost half (49%) of our survey respondents view responsible AI as a key contributor to AI-related revenue growth for their firm. This change in mindset is a positive step, but we wanted to understand how intention translates to execution and how close respondents are to achieving their responsible AI goals.

Redefining responsible AI maturity

To measure companies’ responsible AI maturity in this new AI era, we developed a new four-stage framework in collaboration with Stanford University. Based on our analysis of the survey responses, we placed organizations at their respective stage, awarding a score for organizational maturity and a separate score for operational maturity (see below for definitions; companies with no responsible AI initiatives were excluded from our analysis).

The milestones of responsible AI maturity

In today’s business landscape, continuous change is the new reality. Being set up for continuous change means you need reinvention-readiness in every function and every component of your business. Responsible AI is no different.

What do we mean by being reinvention ready when it comes to responsible AI? Interestingly, no company has yet reached that milestone, but those who get there first will be responsible AI pioneers. They will have fully operationalized responsible AI efforts as a platform to take a more systemic, future-oriented approach that unlocks the true value of AI.

We’ve defined four milestones of responsible AI maturity, which ultimately lead to being reinvention ready. 

Principles

The company has some foundational capabilities to develop AI systems, but its responsible AI efforts are ad hoc.

Program

Following a responsible AI assessment, the organization has put in place a responsible AI strategy, approach, processes and governance, but has not yet systemically enabled them with tools and technology.

Practice

The company has systemically implemented measures across the organization to help meet the relevant regulatory and legal obligations.

Pioneer

The company has fully operationalized its responsible AI efforts as a platform to take a more systemic, future-oriented approach that unlocks the true value of AI.

Our analysis of the survey responses shows that 8% of organizations have established responsible AI principles, 78% have built a responsible AI program and 14% have put responsible AI into practice. There are no responsible AI pioneers today.

To learn more about the four stages, and to see examples of what that looks like in practice, read the full report.

Responsibility reality check: how ready are companies for responsible AI?

Our research reveals that a vast majority (78%) of companies have established a responsible AI program when considering a composite view of organizational, operational and generative AI maturity for responsible AI.

A smaller portion, 14%, have put responsible AI into practice, while 8% are just beginning their journey by setting responsible AI principles. Notably, none of the companies have become a responsible AI pioneer.

When we compared organizational maturity (the extent and effectiveness of an organization's responsible AI processes and practices) to operational maturity (the extent to which organizations have adopted responsible AI measures), there was a clear disconnect.

To learn more about the disconnect, read the full report.

The mark of a responsible AI pioneer

When it comes to responsible AI maturity, reaching the pioneer stage should be the North Star. While a small minority (9%) of the companies we surveyed are responsible AI pioneers for organizational maturity, that number is even lower (less than 1%) for operational maturity.

When combining operational and organizational maturity, no company has become a pioneer. 

So what sets responsible AI pioneers apart? We’ve identified three characteristics, which we explore in more detail in the full report.

Pioneers are:

  • Anticipators
  • Responsible by design
  • Proactive partners

Ready, set, grow: five priorities for responsible AI

A company setting responsible AI principles will inevitably have different priorities, as it seeks to improve its responsible AI capabilities, than one establishing a responsible AI program or putting responsible AI into practice. Nevertheless, our research and our work advising clients have shown us that all companies can benefit from focusing on these five priorities to improve their maturity:

1. Developing a comprehensive responsible AI strategy and roadmap that includes clear policies and guidelines. Our research shows that 55% of companies have already established responsible AI principles.

2. Understanding risk exposure from an organization's use of AI, a key component of operationalizing responsible AI. Most companies we surveyed appear to be underestimating the number of AI-related risks they face, which isn't surprising when over 50% of them do not have a systematic risk identification process in place.

3. Testing and scaling a broad range of risk mitigation measures across both the AI lifecycle and the value chain. Yet just 19% of surveyed companies had scaled more than half of the risk testing and mitigation measures that we asked them about.

4. Establishing a dedicated AI monitoring and compliance function, which is crucial for ensuring the compliant, ethical and sustainable performance of AI models within an organization, especially when it comes to generative AI. Despite this, 43% of companies have yet to fully operationalize their monitoring and control processes, making it the weakest element of organizational maturity. Furthermore, 52% of generative AI users do not yet have any monitoring, control or observability measures in place.

5. Driving cross-functional engagement that addresses the impact on workforce, sustainability, and privacy and security programs across the enterprise. A large majority (92%) of the companies we surveyed acknowledged that employees have important roles to play in mitigating risk.

If the pursuit of responsible AI were ever merely a compliance afterthought, those days are long gone. Companies today know that to maximize their investments in generative AI and other AI technologies, they need to put responsible AI front and center. To stay competitive, companies must embrace the five priorities above and become responsible AI pioneers. As part of these efforts, they must adopt an anticipatory mindset, commit to continuous improvement and extend their focus beyond their own organization to the entire value chain and wider AI ecosystem.

The reward for becoming a responsible AI pioneer will be considerable: consistently turning AI risk into tremendous business value.

WRITTEN BY

Arnab Chakraborty

Chief Responsible AI Officer

Karthik Narain

Group Chief Executive – Technology and Chief Technology Officer

Senthil Ramani

Lead – Data & AI, Global