AI is taking the world by storm. As tech companies compete to produce the best generative AI (GAI), the legal industry is learning how to adapt to this new norm. With new technology comes new legal issues. Law firms are particularly wary of integrating large language models (LLMs) into their practice because of the legal exposure doing so could create. While many firms are embracing this new technology, some are saying no to it altogether. Firms that adopt GAI stand to gain significant advantages, possibly outpacing competitors who resist the GAI revolution. But what concerns are holding back the firms proceeding with caution before fully implementing the latest advances in GAI in their practice? Let’s unpack the concerns of some of those waiting on the sidelines.
Risking Copyright Infringement
Law firms are concerned that the output generated by LLMs contains third-party intellectual property, and that using information generated by AI could expose them to liability for copyright infringement. This is a real risk, as AI companies use third-party, copyright-protected content to train their models. The past six months have seen a slew of lawsuits filed against AI companies like OpenAI, the Microsoft-backed company behind ChatGPT. Plaintiffs including artists, writers, and celebrities have filed lawsuits alleging that GAI outputs infringe copyrighted material and that their intellectual property was used without authorization.
Plaintiffs argue “the AI models are trained using copyright-protected content and that certain outputs infringe for copying or being an unauthorized derivative of that content.” Law firms are worried that if they use GAI in such scenarios, they may be liable for unintentionally using copyrighted information.
How do AI Companies Protect Consumers?
As law firms cautiously begin to integrate AI into their legal practice, “a growing number of responsible providers of AI tools are [in fact] indemnifying users.” For example, in response to customers’ concerns about the risk of IP infringement claims, Microsoft announced its new Copilot Copyright Commitment in September. Microsoft’s AI-powered Copilots are LLM-based tools that work with company data in the Microsoft 365 apps (Word, Excel, PowerPoint, Outlook, etc.) to make work more efficient.
To protect customers from lawsuits concerning IP and copyright infringement claims, Microsoft pledges to customers that they can use Microsoft Copilot services and the output they generate “without worrying about copyright claims.” Microsoft stated that if customers are legally challenged on copyright grounds, Microsoft will assume responsibility for the legal risks involved.
This legal concept is called indemnification. To indemnify means to compensate for harm or loss. Applied to GAI, it means that if a customer uses an AI tool created by a company, and the AI generates copyrighted output, and the customer is then sued for using that output, the AI company will assume legal responsibility.
Specifically, “if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, [Microsoft] will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters [Microsoft] built into [its] products.”
Motive Behind Indemnification
Why are companies like Microsoft willing to indemnify customers against potential lawsuits arising from the use of GAI? The answer is twofold. First, AI companies want to instill greater confidence that customers can use their GAI tools without the risk of infringement claims. If customers worry that using an AI product could make them the target of an infringement claim and expose them to legal liability, they are less likely to use the product at all. Indemnification offers users the security to use AI tools “risk-free.”
The second key reason for indemnities is that large tech companies like Microsoft want to control the outcome of litigation surrounding AI and copyright infringement, as this is a new and developing area of case law. “Indemnities of this nature invariably include the conduct of claim provisions whereby the providers will step into the shoes of the user and run the defense (or settlement) of the claim in return for granting the indemnity to the user.”
With indemnification, AI companies can control how legal disputes are litigated in the courts. Microsoft attorneys, for example, would set the legal strategy for defending against IP and copyright infringement claims and control the defense’s narrative, potentially leading to favorable outcomes in court that could set legal precedent. In light of President Biden’s new AI Executive Order (which we will get into next week), large tech companies want to be at the forefront of decision making.
Other Companies Using Indemnification to Protect Consumers
Microsoft isn’t the only company offering this kind of protection for consumers. Back in 2021, Adobe announced a similar indemnification to protect customers who use Adobe’s AI-assisted design tools and were worried “about inadvertently infringing copyright.” Additionally, in October, Google announced that it would also indemnify users “of its generative AI products if they are hit with claims of intellectual property infringement.”
As GAI continues to develop at a rapid pace, it’s important that customers feel secure in using third-party AI tools without risking liability if the LLM output produces copyrighted information. Indemnification is a way for companies like Microsoft, Google, and Adobe to reassure users that they are secure in using their AI tools by being protected from infringement claims. This is an important step in the legal industry as firms cautiously continue to integrate AI into their everyday practice.
Want to incorporate GAI into your Legal Practice?
Interested in integrating GAI and LLMs into your legal practice? Want to learn more about lawsuits involving IP and copyright infringement? Head over to trellis.law. The Trellis API simplifies legal research and brings structure and organization to trial court data. Implement Trellis directly into your legal research workflow and sift through thousands of cases that deal with IP and copyright law, filtering down to cases specifically addressing AI.