Photo: osn.org
UN Under-Secretary-General for Global Communications Melissa Fleming called on generative AI developers to put safety and human rights before profit at the Security Council's "Arria Formula" meeting. The session on AI and its impact on hate speech, misinformation and disinformation was co-chaired by the United Arab Emirates and Albania and featured insights from digital experts Rahaf Harfoush and Jennifer Woodard.
In her speech, Ms Fleming noted that generative AI, if developed and used responsibly, has the potential to improve human rights, including access to information, health, education and public services.
However, she expressed serious concern about the potential for this technology to "dramatically intensify online harm".
With generative artificial intelligence, large volumes of convincing misinformation - from text to audio to video - can be created at scale, at very low cost and with little human intervention.
Such content can be distributed en masse not only on social media and through fake profiles, but also through other personalised channels such as email campaigns, text messages and advertisements.
Generative AI leaves few fingerprints, making it much harder for journalists, fact-checkers, law enforcement or ordinary people to detect whether content is real or AI-generated. Ms Fleming outlined four areas of key concern for the UN:
Peace and Security: AI-driven disinformation is already threatening UN peacekeeping and humanitarian operations, endangering staff and civilians. More than 70 per cent of UN peacekeepers who responded to a recent survey said that misinformation and disinformation severely limit their ability to do their jobs.
Human rights abuses: artificial intelligence is being used to create and disseminate harmful content, including child sexual abuse material and non-consensual pornographic images, particularly targeting women and girls. The UN is also deeply concerned that anti-Semitic, Islamophobic, racist and xenophobic content could be amplified by generative AI.
Democracy at Risk: The potential for artificial intelligence to manipulate voters and influence public opinion during elections poses a significant threat to democratic processes worldwide.
Undermining science and public institutions: AI tools could, for example, escalate decades-long disinformation campaigns that undermine climate action by spreading false information about climate change and renewable energy. Underlying all of these threats is a decline in public trust in news and information sources.
Ms Fleming cited a recent report that found that the number of AI-generated news sites operating with little or no human oversight has risen from 49 in May of this year to nearly 600. Some of these sites publish thousands of new articles every day, often mimicking well-known news outlets and spreading entirely fabricated stories.
In light of these challenges, the UN has established an AI Advisory Body to strengthen the global governance of AI. At the same time, the UN is developing a Code of Conduct for Information Integrity to help make societies more resilient to misinformation and hate speech.
Ms Fleming called for a balanced approach to harnessing the benefits of AI while mitigating its risks, stressing the need for healthy information ecosystems for stable and unified societies. Her message was clear - AI developers must put people and their well-being before profit and ensure that technology serves as a force for good.
AI-assisted analysis.
https://www.un.org/en/hate-speech/ai-concerns