Who owns what is produced by generative artificial intelligence? It is a new legal question emerging from the global focus on the technology. Microsoft says some of its customers worry about the risk of intellectual property infringement claims if they use output produced by generative AI, and the company has announced it will therefore assume responsibility for the potential legal risks involved.
Since generative AI tools launched late last year, Microsoft and Google in particular have been racing to introduce AI-based services, claiming the technology is revolutionizing search: instead of receiving only a list of links, users can get machine-written search summaries that read as if written by a human.
The fast-growing use of generative artificial intelligence is raising a number of copyright issues, as the companies building genAI tools train them on huge amounts of online data and information, some of it protected by copyright.
Big media companies are blocking these tool makers from using their content to train artificial intelligence models. The owners of the data want to be paid for this new use of the content they have produced, some of it protected by copyright.
Microsoft’s president Brad Smith and Hossein Nowbar, Corporate Vice President and Chief Legal Officer, write in a blog post that customers’ worries are understandable, given recent public inquiries by authors and artists about how their work is being used in conjunction with AI models and services.
“To address this customer concern, Microsoft is announcing our new Copilot Copyright Commitment. As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.”
Smith and Nowbar say that if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, Microsoft will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters Microsoft has built into its products.
Summary of their arguments:
- We are charging our commercial customers for our Copilots, and if their use creates legal issues, we should make this our problem rather than our customers’ problem. This philosophy is not new: For roughly two decades we’ve defended our customers against patent claims relating to our products, and we’ve steadily expanded this coverage over time. Expanding our defense obligations to cover copyright claims directed at our Copilots is another step along these lines.
- We are sensitive to the concerns of authors, and we believe that Microsoft rather than our customers should assume the responsibility to address them. Even where existing copyright law is clear, generative AI is raising new public policy issues and shining a light on multiple public goals. We believe the world needs AI to advance the spread of knowledge and help solve major societal challenges. Yet it is critical for authors to retain control of their rights under copyright law and earn a healthy return on their creations. And we should ensure that the content needed to train and ground AI models is not locked up in the hands of one or a few companies in ways that would stifle competition and innovation. We are committed to the hard and sustained efforts that will be needed to take creative and constructive steps to advance all these goals.
- We have built important guardrails into our Copilots to help respect authors’ copyrights. We have incorporated filters and other technologies that are designed to reduce the likelihood that Copilots return infringing content.
Legal experts writing in Harvard Business Review say that “the legal implications of using generative AI are still unclear, particularly in relation to copyright infringement, ownership of AI-generated works, and unlicensed content in training data.”
“Courts are currently trying to establish how intellectual property laws should be applied to generative AI, and several cases have already been filed”, write Gil Appel, Assistant Professor of Marketing at the GW School of Business, Juliana Neelbauer, partner at Fox Rothschild LLP and David A. Schweidel, Professor of Marketing at Emory University’s Goizueta Business School.
A basic summary of experts’ comments so far is that copyright has never been granted to a work created without human involvement.
The US Copyright Office recently launched a study of the copyright law and policy issues raised by generative AI and is assessing whether legislative or regulatory steps are warranted.
The office will use the results to advise Congress; inform its regulatory work; and offer information and resources to the public, courts, and other government entities considering these issues.
With more information, the office believes it could advise on:
- How AI models may use copyrighted data in training;
- Whether AI-generated content can be copyrighted without a human involved;
- How copyright liability would apply to AI.
The deadline for written comments is October 18.