Tech giant Microsoft and ChatGPT creator OpenAI have unveiled a $2 million Societal Resilience Fund to address the rising threat of AI-generated misinformation. The fund aims to tackle the dissemination of deceptive AI content and enhance public understanding of artificial intelligence technology on a global scale.
The joint statement released by both companies on Tuesday highlighted the fund’s focus on promoting AI education and literacy, particularly among voters and vulnerable communities, as several countries prepare for elections this year.
“As two billion people worldwide participate in democratic elections this year, it’s crucial to empower individuals with the tools and knowledge needed to navigate the increasingly complex digital landscape and access reliable resources,” stated the companies.
Grants from the Societal Resilience Fund will support several organizations, including Older Adults Technology Services (OATS) from AARP, the Coalition for Content Provenance and Authenticity (C2PA), the International Institute for Democracy and Electoral Assistance (International IDEA), and the Partnership on AI (PAI), in their efforts to provide AI education and foster a deeper understanding of AI capabilities.
Microsoft and OpenAI emphasized that the establishment of the fund is part of their commitment to promoting “whole-of-society resilience” against deceptive AI content.
The joint initiative aligns with public commitments made by both companies through the White House Voluntary Commitments and the Tech Accord to Combat Deceptive Use of AI in the 2024 Elections. These commitments involve engaging with a diverse array of global civil society organizations and academics to raise public awareness and enhance societal resilience.
“Our shared objectives include combating the growing threat of bad actors leveraging AI and deepfakes to deceive voters and undermine democracy,” the statement added.
The launch of the Societal Resilience Fund represents a significant step in Microsoft and OpenAI’s ongoing efforts to address challenges in AI literacy and education. Both companies pledge to sustain their collaboration with organizations and initiatives that share their objectives and values.
Concerns about AI misinformation have escalated alongside the proliferation of AI tools. The World Economic Forum’s Global Risks Report 2024 identified AI-generated misinformation and disinformation as a significant global risk: advances in AI have made misinformation easier to create and spread, and 53% of respondents to the report’s Global Risks Perception Survey (GRPS) ranked it as the second most prominent global risk for 2024, behind only extreme weather.