By Ben Rapp

American consumers are deeply concerned about the advance of AI.

Data from Pew Research Center suggests that 52% of Americans are more concerned than excited about AI. In certain fields, alarm is even higher: 75% of Americans worry that healthcare providers will move too fast in adopting AI, according to Pew.

To ease this anxiety, US businesses must build trust in AI by committing to transparency and high ethical standards. They must define legal and ethical data use within AI models, avoid bias and unfairness, and ensure proper consents are secured.

These are concepts that policymakers in other jurisdictions are already beginning to enshrine in regulation. The EU AI Act, for example, introduces a range of safeguards and requirements designed to protect privacy and fundamental human rights.

US organizations appear remarkably complacent about their readiness for the full implications of widespread AI adoption.

In our recent survey of 100 US executives with responsibility for privacy, almost two-thirds (65%) describe themselves as very or completely confident about their ability to deal with the additional risks that implementing AI tools and technologies will create (see chart below).

[Chart: AI graph]

Respondents also report a high level of confidence about the compliance challenges posed by AI, despite the nascent state of regulation in this rapidly evolving area. Nearly half of our survey respondents (49%) are completely or very confident they understand the risk and compliance implications of AI.

[Chart: AI risk and privacy]

These proportions are surprisingly high. AI is developing rapidly and creating new challenges, and our survey reveals that many US businesses lack the capabilities and structures to manage and govern privacy, capabilities that will be essential for AI governance.

Only 36% of executives surveyed are confident they can recruit the talent they need to govern privacy effectively. With AI governance expertise in even higher demand, US businesses are likely to struggle to find the people they need to meet this challenge.

This skills gap will be made even worse by the growing tendency to add responsibility for AI compliance to the job description of privacy and data protection roles. AI governance requires particular skills that data protection professionals may not have, especially if they come from a legal background.

Nearly six in 10 (58%) say their privacy governance is overseen by the same executive who is responsible for delivering it. Not only is this at odds with generally accepted approaches to governance in other areas, such as finance and ESG, it also falls foul of the EU's AI legislation, a consideration for US businesses that trade with Europe.

With AI advancing rapidly, there is an immediate need to improve the maturity of data governance and privacy management. US businesses that take an honest, pragmatic look at their current capabilities and address their weak spots will be better placed to minimize the risk of an AI catastrophe and to earn their customers' trust.