RESEARCH REPORT
From AI compliance to competitive advantage
5-MINUTE READ
June 30, 2022
In a recent report, The Art of AI Maturity, Accenture identified a small group (12%) of high-performing organizations that are using AI to generate 50% more revenue growth while outperforming on customer experience (CX) and Environmental, Social and Governance (ESG) metrics. Among other success factors that have a combinatorial impact on business results, these Achievers are, on average, 53% more likely than others to be responsible by design. That means that they apply a responsible data and AI approach across the complete lifecycle of all their models, helping them engender trust and scale AI with confidence.
Being responsible by design will become more beneficial over time, especially as governments and regulators consider new standards for the development and use of AI. Countries such as the United Kingdom, Brazil, and China are already taking action, either by extending existing requirements to cover AI (for example, data protection regulation such as GDPR) or by developing new regulatory policy.
We surveyed 850 C-suite executives across 17 geographies and 20 industries to understand organizations’ attitudes toward AI regulation and assess their readiness to embrace it. Here’s what we learned.
Our research shows that awareness of AI regulation is widespread and that organizations are well informed.
Interestingly, many organizations see regulatory compliance as an unexpected source of competitive advantage. The ability to deliver high-quality, trustworthy AI systems that are regulation-ready will give first movers a significant advantage in the short term, enabling them to attract new customers, retain existing ones, and build investor confidence.
Our research also reveals that organizations are prioritizing AI compliance and want to invest. Coupled with the view that Responsible AI can fuel business performance, it’s unsurprising that the majority of respondents plan to increase investment in Responsible AI.
However, most organizations have yet to turn these favorable attitudes and intentions into action.
While most companies have begun their Responsible AI journey, the vast majority (94%) are struggling to operationalize Responsible AI across all of its key elements.
The question becomes: why? We identified a few primary barriers.
The biggest barrier lies in the complexity of scaling AI responsibly — an undertaking that involves multiple stakeholders and cuts across the entire enterprise and ecosystem. Our survey revealed that nearly 70% of respondents do not have a fully operationalized and integrated Responsible AI Governance Model. As new requirements emerge, they must be baked into product development processes and connected to other regulatory areas, such as privacy, data security and content.
Additionally, organizations may be unsure what to do while they wait for AI regulation to be defined. Uncertainty around the rollout process and timing (35%) and the potential for inconsistent standards across regions (34%) were the largest concerns in relation to future AI regulation. This lack of clarity can lead to strategic paralysis as companies adopt a “wait and see” approach. As GDPR demonstrated, reactive companies have little choice but to be compliance-focused, prioritizing the specific requirements rather than the underlying risk, which can lead to problems down the road and value left on the table.
Most respondents (56%) report that responsibility for AI compliance rests solely with the Chief Data Officer (CDO) or equivalent, and only 4% of organizations say that they have a cross-functional team in place. Having buy-in and support from across the C-suite will establish priorities for the rest of the organization.
Only about half (47%) of the surveyed organizations have developed an AI risk management framework. What’s more, we learned that 70% of organizations have yet to implement the ongoing monitoring and controls required to mitigate AI risks. AI integrity cannot be judged at a single point in time; it requires ongoing oversight.
AI regulation will require companies to think about their entire AI value chain (with a focus on high-risk systems), not just the elements that are proprietary to them. Some 39% of respondents see one of their greatest internal challenges to regulatory compliance arising from collaboration with partners, and only 12% have included Responsible AI competency requirements in supplier agreements with third-party providers.
Survey respondents reported that they lack talent familiar with the details of AI regulation, with 27% citing this as one of their top three concerns. Moreover, more than half (55.4%) do not yet have specific Responsible AI roles embedded across the organization. Organizations must consider how to attract or develop the specialist skills required for Responsible AI roles, keeping in mind that teams responsible for AI systems should also reflect a diversity of geographies, backgrounds and ‘lived experience’.
The success of AI can’t be measured solely by traditional KPIs such as revenue generation or efficiency gains, yet organizations often fall back on these familiar benchmarks. In 30% of companies, there are no active KPIs for Responsible AI. Without established technical methods to measure and mitigate AI risks, organizations can’t be confident that a system is fair. To our previous point, specialist expertise is required to define and measure the responsible use and algorithmic impact of data, models and outcomes, such as algorithmic fairness.
While there’s no single way to proceed, a proactive approach to building Responsible AI readiness can help organizations overcome or avoid the barriers above.
Based on our experience helping organizations across the globe scale AI for business value, we’ve defined a simple framework, built on four key pillars, to help companies become responsible by design.
Organizations can use this framework to inform a Responsible AI foundation that allows them to quickly assess the impact of any new regulation and respond to compliance requirements without starting from scratch each time.
Scaling AI can deliver high performance for customers, shareholders and employees, but organizations must overcome common hurdles to apply AI responsibly and sustainably. While they’ve historically cited lack of talent and poor data quality/availability as their biggest barriers to AI adoption, “managing data ethics and responsible AI, data privacy and information security” now tops the list.
Being responsible by design can help organizations clear those hurdles and scale AI with confidence. By shifting from a reactive AI compliance strategy to the proactive development of mature Responsible AI capabilities, they’ll have the foundations in place to adapt as new regulations and guidance emerge. That way, businesses can focus more on performance and competitive advantage.