A Human Rights Due Diligence Framework for Artificial Intelligence

The widespread adoption of artificial intelligence (AI) by companies has the potential to unleash broad-based economic prosperity by enhancing employee productivity. But it also carries risks to workers’ rights as AI algorithms increasingly set productivity quotas, make human resource decisions, and direct workers on how to perform their jobs. For example, the use of AI in human resources decisions can result in unlawful employment discrimination. 

According to the UN Guiding Principles on Business and Human Rights, companies have an international responsibility to “know and show” that they respect human rights. In using this due diligence framework to manage AI-related human rights risks, companies should: 1) be transparent about how AI is used by the company, 2) establish board-level oversight and monitoring of AI-related risks, and 3) give workers a voice in how AI is used in the workplace. 

First, companies should be transparent about how they use AI in their business operations. Investors are regularly engaging with their portfolio companies about AI as part of their stewardship activities. Many companies are now voluntarily disclosing information to their investors, employees, and customers on how they use AI. By addressing the ethical considerations of AI in a transparent manner, companies can build trust with their stakeholders. 

Second, boards of directors have an important role to play in monitoring and managing AI risks. Under the Caremark standard in Delaware corporate law, directors have a fiduciary duty to oversee their company’s operations by establishing an internal reporting system. At a minimum, companies adopting AI into their business operations need to establish board-level oversight of the risks involved and report on any regulatory noncompliance issues that arise. 

Finally, companies should view AI as an opportunity to enhance human decision-making by their employees, not as a substitute for it. Companies that view their workers as partners in implementing AI are more likely to attract and retain a motivated workforce and realize the productivity gains that AI promises. Unions give workers a collective voice to negotiate how AI technology should be implemented in the workplace. 

To address these concerns, the AFL-CIO Equity Index Funds have introduced shareholder proposals that ask companies to commission an independent, third-party human rights assessment of their use of AI. Proposals are expected to go to a vote at Amazon and Lyft in 2025, and similar proposals requesting a transparency report on the use of AI received high levels of shareholder support at Apple (37%) and Netflix (43%) in 2024.
Brandon Rees
Deputy Director, Corporations and Capital Markets, American Federation of Labor and Congress of Industrial Organizations (AFL-CIO)