The news industry is well underway in organising self-regulation on how to use artificial intelligence. Leading publishers are setting up strategies to address crucial AI aspects, and they broadly address AI along similar lines. These are the conclusions of an Oxford University study of AI guidelines from 52 publishers in 12 countries around the world.
“Our study shows that publishers have already begun to converge in their guidelines on key points such as transparency and human supervision when dealing with AI-generated content.”
The study shows that 84.62% of organisations stipulated human supervision in some form when using AI.
“Many guidelines emphasise the protection of vulnerable groups and contributors’ privacy, urging against uploading or using confidential or sensitive information in AI engines.”
“Source protection is a recurring theme, with guidelines ensuring that AI platforms are not given access to sensitive, source-protected, or unpublished information.”
“The practices of the early proponents of AI guidelines in news can become a model for others, paving the way for better AI practices across the news industry.”
“AI guidelines can play a pivotal role in responsible and ethical AI integration in journalism. While they are not a panacea for all AI-related challenges, they can potentially provide a robust framework for the ethical use of AI in many news organisations.”
The report says that despite the diversity of countries and contexts, a surprising degree of uniformity exists between these guidelines — not so much in the way they are structured and formulated, but in how news organisations have decided to regulate the technology and ensure that it is used ethically.
The study concludes that there are several critical blind spots in AI guidelines within the news industry:
- Enforcement and Oversight: Many guidelines lacked teeth when it came to enforcement or overseeing compliance. This raises the question of how effective many of them will actually be.
- Technological Dependency: Surprisingly, discussions on the potential impact of technological dependency on external providers of AI were absent, despite the potential risks such dependencies can pose for publishers.
- Audience Engagement: Despite industry discussions about the need to engage with audiences, few guidelines mentioned soliciting audience feedback on AI use in journalism.
- Sustainability and AI Supply Chains: Debates about sustainable AI and AI supply chains, and the environmental and societal implications of the technology, were largely missing.
- Workplace Surveillance and Human Rights: Critical issues like workplace surveillance, data colonialism, labour exploitation, and potential human rights abuses tied to AI training, development and use were not addressed.
Reporters Without Borders (RSF) recently launched an international committee to develop a charter regulating the use of AI in media. The committee, chaired by Nobel Prize laureate Maria Ressa, is to present its results before the end of the year, and the Oxford researchers say that their study of existing AI rules in the news media can be useful for the committee.