
Artificial Intelligence accused of violating creators’ rights
Artificial intelligence that produces pictures and text is creating conflicts over ethics and rights. The latest case involves international picture agency Getty Images, which is protesting against an artificial intelligence tool that, the agency says, infringes its rights because the AI, without a license, uses data from photos taken by humans to make other pictures.
Research company OpenAI recently launched ChatGPT, a mix of a chatbot and the language model GPT-3. Microsoft is reported to be considering including ChatGPT in its search engine Bing, so that users who ask questions not only get links but can also get answers in the form of machine-written summaries.
Commentators stress that AI-produced content is not “created” the way humans create; it is content based on huge amounts of collected pre-existing data and deep machine learning.
Getty Images claims its rights have been violated because the AI, without a license, uses huge amounts of photo data.
Copyright law protects original works of authorship, such as writing, music, and art. A legal question will be whether ChatGPT can be considered to have produced a text that is sufficiently original and creative to fall within the framework of copyright legislation.
Defenders of AI-produced content argue that artists and authors have always inspired others to paint and write in the same style. The great impressionists were studied by other impressionists who painted in the same style, they argue, so this collecting of data to produce content is nothing really new, just faster and better when done by machines.
Critics say such data and content were meant to be protected, and that unlicensed use of them therefore violates others’ rights.
Bern Elliot, an analyst at marketing and research firm Gartner, says on the company’s website that AI foundation models such as GPT represent a huge step change in the field of AI. They offer unique benefits, such as massive reductions in the cost and time needed to create a domain-specific model. However, they also pose risks and ethical concerns, including those associated with:
- Complexity: Large models involve billions, or even trillions, of parameters. These models are impractically large to train for most organizations, because of the necessary compute resources, which can make them expensive and environmentally unfriendly.
- Concentration of power: These models have been built mainly by the largest technology companies, with huge R&D investments and significant AI talent. This has resulted in a concentration of power in a few large, deep-pocketed entities, which may create a significant imbalance in the future.
- Potential misuse: Foundation models lower the cost of content creation, which means it becomes easier to create deepfakes that closely resemble the original. This includes everything from voice and video impersonation to fake art, as well as targeted attacks. The serious ethical concerns involved could harm reputations or cause political conflicts.
- Black-box nature: These models still require careful training and can deliver unacceptable results due to their black-box nature. It is often unclear what fact base the models are attributing responses to, which can propagate downstream bias in the datasets. The homogenization of such models can lead to a single point of failure.
- Intellectual property: The model is trained on a corpus of created works and it is still unclear what the legal precedent may be for reuse of this content, if it was derived from the intellectual property of others.
In a court case, Getty Images claims the company Stability AI has infringed intellectual property rights, including copyright, in content owned or represented by Getty Images. Getty Images says that Stability AI “unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license to benefit Stability AI’s commercial interests and to the detriment of the content creators.”
“Getty Images believes artificial intelligence has the potential to stimulate creative endeavours. Accordingly, Getty Images provided licenses to leading technology innovators for purposes related to training artificial intelligence systems in a manner that respects personal and intellectual property rights.”
Getty says Stability AI did not seek any such license.