AI creeps into the risk register for America’s biggest firms • The Register

America’s largest corporations are increasingly listing AI among the major risks they must disclose in formal financial filings, despite bullish public statements about the business opportunities the technology offers.

According to a report from research firm The Autonomy Institute, three-quarters of companies in the S&P 500 stock market index have updated their official risk disclosures over the past year to add or expand mentions of AI-related risk factors.

The organization drew its findings from an analysis of Form 10-K filings that the top 500 companies submitted to the US Securities and Exchange Commission (SEC), in which they are required to outline any material risks that could negatively affect their business and its financial health.
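The Autonomy Institute has not published its exact methodology, but the kind of analysis described, flagging AI-related language in the risk-factors section of a 10-K, can be sketched as a simple keyword scan. The term list, the section-slicing logic, and the function names below are illustrative assumptions, not the Institute's actual pipeline:

```python
import re

# Hypothetical terms that might flag AI-related risk language; the real
# study's vocabulary and matching rules are not public.
AI_TERMS = re.compile(
    r"\b(artificial intelligence|machine learning|generative AI|"
    r"large language model|LLM|deepfake)\b",
    re.IGNORECASE,
)

def extract_risk_factors(filing_text: str) -> str:
    """Crude slice of the risk-factors section: 'Item 1A' to 'Item 1B'."""
    start = filing_text.find("Item 1A")
    end = filing_text.find("Item 1B", start + 1)
    return filing_text[start:end] if start != -1 and end != -1 else filing_text

def count_ai_mentions(filing_text: str) -> int:
    """Count AI-related terms in the risk-factors section of one filing."""
    return len(AI_TERMS.findall(extract_risk_factors(filing_text)))

sample = (
    "Item 1A. Risk Factors. Threat actors may use generative AI and "
    "deepfake technology against us. Item 1B. Unresolved Staff Comments."
)
print(count_ai_mentions(sample))  # → 2
```

Tracking how this count changes between a company's successive annual filings would surface exactly the year-on-year expansions the report describes.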

These risk factors skew conservative: they are more an indicator of corporate awareness of external pressures on the business world than a carefully calculated analysis of urgent pending threats. Nonetheless, investors track changes to these risk factors, the Autonomy Institute says, as even small edits can serve as early indicators of emerging concerns or evolving strategic priorities.

It found that across every industry sector, more than half of all companies expanded their AI risk disclosures over the past year, with the IT sector having the greatest increase in AI-related risk disclosures, closely followed by Finance and Communication Services.

The hype around AI since the release of OpenAI's ChatGPT chatbot, built on a large language model (LLM), toward the end of 2022 has centered on promises that automation will make organizations more efficient and deliver new business models, especially with the recent buzz around “agentic AI.”

But alongside these benefits, AI also brings a growing set of risks, the report says, ranging from ethical concerns and regulatory scrutiny to security vulnerabilities and possible operational disruptions.

For example, it found that 193 companies (39 percent of the S&P 500) expanded their disclosure of risks related to criminals or nefarious folk potentially using AI for threats such as digital impersonation, the creation and spread of disinformation, and the generation of malicious code.

The report cites Salesforce’s Form 10-K for the year ended January 2025 as saying: “As our market presence grows, we may face increased risks of cyberattacks or security threats, and as AI technologies, including generative AI models, develop rapidly, threat actors are using these technologies to create new sophisticated attack methods that are increasingly automated, targeted and coordinated, and more difficult to defend against.”

A particular concern is deepfakes. The number of companies mentioning the threat from digitally manipulated images, video, or audio that convincingly mimic real individuals more than doubled over the past year. The first S&P 500 companies to mention deepfakes were Adobe and Marsh McLennan back in 2019, the report states, just two years after the term itself was coined.

Meanwhile, many companies investing heavily in AI are not finding any clear return on investment, the Autonomy Institute claims. Of those 500 firms, 57 (11 percent) have explicitly cautioned that they may never recoup their spending on AI, or actually realize the expected benefits.

Quantifying tangible gains remains difficult at this stage, to the extent that continued investment at current levels may be unsustainable, it warns.

The EU AI Act has also drawn a great deal of attention among the big US companies, raising concern over the compliance burden and possible financial penalties. While no companies have yet been hit by fines or litigation relating to the EU legislation, the authors note there are early instances of companies detailing investigations and litigation from American authorities over the use of AI in higher-risk domains such as vehicles.

Perhaps surprisingly, the Autonomy Institute found that only 19 percent of S&P 500 companies had expanded their mentions of data privacy and intellectual property risks associated with the use of AI technologies, although these risks are said to be “particularly acute” for companies relying on third-party AI vendors such as OpenAI or Anthropic.

For example, GE Healthcare warns that it may have limited rights to access the intellectual property underpinning the generative AI models it uses, which could impair its ability to “independently verify the explainability, transparency, and reliability” of those models.

Many companies are concerned about dependencies that could disrupt their operations should their AI provider suffer an outage, while legal entanglements involving AI vendors are seen as another potential risk, alongside cybersecurity. On the latter front, the concentration of AI capabilities in the hands of a few providers is seen as a growing threat, as those entities become attractive targets for attackers.

In the face of these vulnerabilities, many companies are looking to hedge against over-reliance, via strategies such as diversifying their AI toolchain and investing in their own proprietary capabilities, the report says.

Despite all this, the concerns raised by corporates differ from those expressed by the general public. There is little discussion of job losses, for example. Rather, business concerns center on the potential for AI to harm commercial interests or to expose sensitive or proprietary data through the operation of LLMs.

“This new analysis sheds new light on AI’s impact within the corporate world, by digging beneath surface level discourse often seen in the media,” Autonomy Institute chief executive Will Stronge said.

“These aren’t speculative fears – this is companies putting down in black and white the threats they see to their bottom line, competitiveness and legal standing. What’s striking is just how rapidly these concerns are growing.” ®
