Bloomberg Law
April 18, 2024, 9:00 AM UTC

Shareholders Prod Big Tech, Entertainment Giants Over AI Risks

Clara Hudson
Reporter

Alphabet Inc. and Warner Bros. Discovery Inc. are the latest big companies facing upcoming shareholder proposals asking them to grapple with how fast-moving AI technology could harm workers and the public.

Investors want the companies to disclose risks that artificial intelligence could pose to their businesses and report on their ethical guidelines. Some other pending shareholder votes, such as one at Meta Platforms Inc., also highlight the risk of AI-produced misinformation or disinformation interfering in upcoming elections across the globe.

The flurry of investor interest has been focused largely on tech and entertainment companies so far, in part because last summer's Hollywood strikes spurred concern that AI could take credit from writers or be used to replicate actor likenesses. The AFL-CIO said it withdrew shareholder bids at Walt Disney Co. and Comcast Corp. after those companies agreed to disclose more information on the use of AI, but the labor group has similar campaigns underway that are poised for investor votes at the annual meetings of Amazon.com Inc. and Netflix Inc.

AI-related proposals that already faced votes at Apple Inc. and Microsoft Corp. garnered an unusual amount of momentum for relatively new investor bids, though they both failed to gain the majority shareholder support needed to pass over opposition from company management. Companies have said in response to the proposals that they’re already navigating AI risk and even raised the possibility that certain disclosures could give away sensitive information on their AI strategies.

“Obviously AI has captured everybody’s imagination and attention,” said Jonas Kron, chief advocacy officer of Trillium Asset Management, which has a proposal that could go to a vote at Alphabet this summer asking for governance changes on AI.

“It almost goes without saying at this point that it presents a risk and an opportunity for so many companies,” Kron said. “And as with many new technologies, the technology quickly outpaces our ability to manage it.”

Potential for Harm

An AI-related shareholder proposal received some unusual fanfare at Microsoft’s annual meeting in December when former Nirvana bassist Krist Novoselic, a Microsoft shareholder, presented the proposal. Microsoft has launched an AI tool called Copilot and the company is investing in OpenAI, the creator of generative AI chatbot ChatGPT.

The proposal, filed by Arjuna Capital, asked Microsoft to report on AI risks and explain how the company plans to remediate potential harms, including misinformation and disinformation. About 21% of Microsoft investors voted in favor of the proposal.

Microsoft said in its proxy statement that it’s already committed to producing a report to the US government about its AI governance practices. The planned report will cover Microsoft’s approach to mitigating AI misinformation and disinformation risk.

Meta and Alphabet face similar shareholder proposals focused on misinformation and disinformation from Arjuna Capital. The proposal at Meta says the risks stem from the social media company’s recent development of generative AI conversational assistants and advertising tools. Arjuna’s misinformation proposal at Alphabet will also go to a vote alongside the Trillium proposal on AI governance.

“The AI arms race has resulted in greater risk taking as companies compete for market share,” said Natasha Lamb, a managing partner at Arjuna Capital, speaking about the upcoming proposals the firm has at Meta and Alphabet. “But risk taking without guardrails can lead to real world harm,” Lamb said, referencing the possibility “for malicious actors to propagate inaccurate and invented information” in elections.

At Apple’s annual meeting in February, 37.5% of investors supported a shareholder bid asking the company to disclose more information about how it uses AI and navigates the technology’s ethics.

An Apple employee who is also a member of a local union presented the shareholder proposal, raising concerns about the risk of workplace discrimination from AI algorithms. Also speaking at the meeting, CEO Tim Cook teased that the tech giant is planning breakthroughs in generative AI for later this year.

Apple’s proxy statement said the demands of the proposal “could encompass disclosure of strategic plans and initiatives harmful to our competitive position and would be premature in this developing area.”

Engaging With Workers

Up next for a vote in late May is an AFL-CIO proposal urging Amazon.com Inc. to establish a new board committee focused on the human rights risks of AI. The proposal notes, for example, that AI used in human resources decisions could result in employment discrimination. The AFL-CIO has similar submissions set for a vote at Warner Bros. Discovery and Netflix Inc.

“The use of AI in that industry had clearly become a material issue to investors,” said Brandon Rees, deputy director of corporations and capital markets at the AFL-CIO’s investment office, talking about the Hollywood strikes. “We believe that companies need to be engaging in a dialogue with their workers to best decide how AI should be incorporated into their business,” he added.

Amazon pushed back against the proposal seeking a new director panel focused on AI-driven human rights risks, saying in its proxy statement that its board and board committees “are already overseeing human rights and other risks associated with artificial intelligence and machine learning.”

The company said “it would be far more effective for the board committee already responsible for particular types of risks—such as human rights or human capital risks—to retain oversight responsibility for any additional risks associated with AI in those contexts.”

None of the other companies with upcoming investor votes on AI responded to requests for comment for this article. Meta, Alphabet, Warner Bros. Discovery and Netflix have not yet issued their proxy statements with responses to the proposals.

New Rules

AI risk is heightened by new rules such as the EU's AI Act, an upcoming law expected to take effect in the coming months that, like the shareholder proposals, aims to ensure AI systems are governed by safe and ethical principles. The Act bans practices it deems an "unacceptable risk," such as AI systems that could manipulate individuals or exploit them because of their age or disability.

The upcoming law will apply to providers and developers of AI systems used in the EU, even if the companies are based elsewhere.

The US has taken a lighter touch. The White House issued an executive order late last year with sweeping security and privacy measures and other directives, including a requirement that developers share their safety test results with the US government.

Securities and Exchange Commission Chair Gary Gensler also gave companies a stern warning in December about misleading investors over their AI capabilities, a phenomenon he called "AI washing."

Developing Risk

Companies across industries are talking more about artificial intelligence in filings to investors, either about the opportunities or risks. Bloomberg Law reported in February that just over 40% of S&P 500 companies mentioned AI in their most recent annual report.

Many businesses are working through a range of AI priorities, from record-keeping to transparency. There's also the environmental impact of AI to consider, along with how the technology could be used to favor certain backgrounds in recruiting, or even replace portions of the workforce.

Companies will also need to take care over how they present their AI policies to everyone from shareholders to employees and customers, said Arnaud Cavé, a director at FTI Consulting. “It’s the same information, but you need to think very hard about how you communicate it best to different stakeholders,” he said.

Niamh O’Brien, a consultant also at FTI, said the slate of AI shareholder proposals shows that investors want to see companies are prepared.

Producing third-party audits as well as progress reports is important to help demonstrate a company's preparedness, O'Brien said. While data privacy or cybersecurity policies can provide a helpful framework, navigating AI is a mammoth undertaking, she said.

“Companies are going to be using maybe hundreds of types of AI within their organization,” she said. “And all of those will affect different stakeholders and have different impacts.”

To contact the reporter on this story: Clara Hudson in Washington at chudson@bloombergindustry.com

To contact the editors responsible for this story: Andrea Vittorio at avittorio@bloombergindustry.com; Jeff Harrington at jharrington@bloombergindustry.com
