Fifth Circuit Sets AI Usage Rules for Legal Filings

As jurists, law firms, legal professionals, and the public struggle to keep pace with the swift advancement of artificial intelligence, it is no surprise that courts are beginning to draft and enact rules for using these tools in legal proceedings. Just last week, before the Thanksgiving break, the Fifth Circuit Court of Appeals opened the comment period on a proposed amendment to its certification rules for both lawyers and pro se litigants. The proposed text calls on parties to “certify that, to the extent an AI program was used to generate a filing, citations and legal analysis were reviewed for accuracy.”

The Fifth Circuit hears appeals from federal courts in Louisiana, Mississippi, and Texas. Fifth Circuit Rule 32.3 governs the form of filings, including typeface, word count, and type-style. The amendment includes the following proposed paragraph:

“Additionally, counsel and unrepresented filers must further certify that no generative artificial intelligence program was used in drafting the document presented for filing, or to the extent such a program was used, all generated text, including all citations and legal analysis, has been reviewed for accuracy and approved by a human.”

The rule calls for documents to be stricken and sanctions to be imposed for noncompliance or for misrepresenting whether AI was used.

Legal Professionals Transformed into Supervisors of AI

The proposed requirements are nothing new. In June of this year, U.S. District Judge Brantley Starr of the Northern District of Texas implemented a rule requiring attorneys to attest that they personally oversaw the preparation of their documents, explicitly confirming accuracy wherever AI was used. Does this signal the end of the painstaking work of law clerks, paralegals, legal secretaries, and law librarians, who often spend hours, if not days, scouring the internet and the shelves for sources? Are legal professionals becoming the supervisors and spot-checkers of AI’s accuracy? Perhaps this is the dawn of an entirely new profession, legal prompt engineering, whereby attestation to accuracy and review by a human is all it takes to accept the new Artificial Legal Professional’s work.

In any case, what is increasingly clear is that while AI has immense potential to revolutionize the legal sector, caution must be exercised. Professionals need to be educated about where data comes from, how AI models work and what their limitations are, and when to rely on AI versus when to lean on human judgment. Proper implementation, transparency, and continuous learning are key, and this is especially important in the application of law. Indeed, on this point, Judge Starr is reported to have declared his refusal to use AI, stating, “I don’t want anyone to think that there’s an algorithm out there that is deciding their case…” This comment highlights the different concerns legal professionals must weigh as they implement these tools.

Hallucinations and Trust

One of the biggest challenges surrounding the use of AI in the legal profession is trust. The prospect of an AI model hallucinating results into a document is an all-too-real scenario. Back in May of this year, a lawyer who had used ChatGPT for citations in a legal brief found himself on the receiving end of sanctions, precisely because the brief cited six non-existent court decisions.

The escalating reliance on artificial intelligence in the legal domain underscores the paramount importance of ensuring its quality, accuracy, ethical integrity, and trustworthiness. This necessity becomes starkly apparent when considering the possibility of an AI system’s erroneous interpretation of legal precedents or its hallucination of non-existent cases. These examples amplify the urgency of implementing standards and codes of conduct, potentially reinforced by legal penalties, to guarantee that AI applications in legal proceedings meet the profession’s exacting requirements. This approach is critical for fostering deep-rooted trust in a transformative technology that is reshaping both the legal landscape and global society.

Interested in Incorporating AI into your Workflow?

Check out Trellis! Trellis is an AI-driven state trial court research and analytics platform. We make the fragmented US state trial court system searchable through a single interface by providing lawyers with analytical insights on judges, cases, and opposing counsel. Request a demo and check out our state trial court platform, analytics, and API so that we can provide you with the tools needed to streamline your legal practice.

Sources:

https://www.reuters.com/legal/transactional/us-appeals-court-proposes-lawyers-certify-review-ai-use-filings-2023-11-22/

https://fingfx.thomsonreuters.com/gfx/legaldocs/mopajaxmava/11222023ai_5th.pdf

https://www.reuters.com/legal/transactional/lawyer-used-chatgpt-cite-bogus-cases-what-are-ethics-2023-05-30/

Unraveling Chaos: OpenAI’s Removal and Reinstatement of CEO Sam Altman

Sam Altman Reinstated as CEO at OpenAI

After a week of turmoil that began last Friday with the ouster of OpenAI CEO Sam Altman, it was announced this morning that, in another shocking twist, Altman was officially reinstated as OpenAI’s CEO late Tuesday night, “reversing his ouster by OpenAI’s board last week” in response to “campaigns waged by his allies, employees, and investors.”

After negotiations throughout the weekend, OpenAI’s board will be overhauled, with Quora CEO Adam D’Angelo remaining as the only holdover. The new board includes former Facebook executive Bret Taylor and former Treasury Secretary Lawrence Summers. OpenAI announced that Taylor will act as board chairman for now.

Toner and McCauley, two of the original board members who ousted Altman, agreed to step down “because it was clear that [the board] needed a fresh start.” The outgoing board members “pressed for certain concessions from Mr. Altman, including an independent investigation into his leadership of OpenAI.” According to sources close to the negotiations, the outgoing members also blocked Altman and Brockman’s return to the board.

After a whirlwind of a weekend, OpenAI employees were given this week off for Thanksgiving. Ilya Sutskever’s attorney announced that Sutskever is thrilled Altman is back as CEO and has worked tirelessly over the past few days to make it happen, declaring: “It is what is best for the company.”

Microsoft fully supports Altman’s reinstatement, with Microsoft Chief Satya Nadella announcing on X that it is a “first essential step on a path to more stable, well-informed, and effective governance.” Furthermore, Thrive Capital announced it would continue to partner with OpenAI with a new funding offer that will now value the company at $80 billion. OpenAI went through a war, but the company came out reunited with positive prospects for its future.

What Happened?

Four of OpenAI’s six board members, Adam D’Angelo, Helen Toner, Tasha McCauley, and Ilya Sutskever, voted to oust Sam Altman from the company last Friday afternoon. In their statement, the board announced that Mira Murati, OpenAI’s chief technology officer, would take over as interim CEO. Shortly afterward, Microsoft Chief Satya Nadella reaffirmed the company’s commitment to its partnership with OpenAI even as Microsoft’s stock fell on news of Altman’s termination.

A few hours after the board’s decision was announced, Greg Brockman, the chairman of the board and a co-founder of the company, took to X to announce he was quitting OpenAI. Following this, various tech leaders and industry voices took to social media to support Altman, including Altman’s younger brother, Lattice CEO Jack Altman; former Google CEO Eric Schmidt; and Airbnb CEO Brian Chesky.

Shortly after that, Microsoft (OpenAI’s biggest investor, which owns 49% of the company) announced it would hire Altman and Brockman to lead an advanced research lab at Microsoft. Microsoft CEO Nadella announced that Altman would be chief executive of the new lab, “setting a new pace for [AI] innovation,” and that the lab would “operate as an independent entity within Microsoft.”

On Sunday, November 19th, OpenAI named Emmett Shear, the former CEO of Twitch, as the company’s interim CEO. Meanwhile, throughout the weekend, Altman and his supporters pressured OpenAI’s board to reinstate him. Microsoft led the charge, joined by venture capitalists, other tech executives, and smaller investors concerned about the shocking developments.

Then, in an epic display of solidarity, 700 out of 770 employees signed a letter on Monday, November 20th, stating they would resign from OpenAI if Altman were not reinstated. Ilya Sutskever, the company’s chief scientist and co-founder, signed the letter and took to X to announce that he deeply regretted his part in the board’s decision to fire Altman, saying he “never meant to harm the company.” In the letter, the employees threatened to leave the company for Microsoft if Altman was not reinstated as CEO. According to sources, Microsoft offered guaranteed positions to all OpenAI employees and agreed to match their pay.

What Does This Mean for the AI Industry?

The chaotic rift at OpenAI highlights a more extensive debate in the AI community. Some AI creators believe that AI is the most significant technological breakthrough since the creation of the internet and want to push its boundaries and tap into AI’s fullest potential. Others caution that if AI is not developed carefully and regulated by strict guidelines and policies, it could be dangerous for the world and a major threat to humanity. Indeed, it was Ilya Sutskever who, in filmed interviews with Tonje Hessen Schei for the feature-length documentary iHuman, raised sobering questions about the advancement of generative AI and whether it is good or bad for humanity. This debate played out in the schism of OpenAI’s former six-member board.

Three of OpenAI’s now former board members fall on one end of the spectrum of this debate. “Tasha McCauley and Helen Toner have ties to the Effective Altruism movement, a utilitarian-inspired group that has pushed for AI safety research and raised alarms that a powerful AI system could one day lead to human extinction.” According to sources who spoke to the NY Times, Ilya Sutskever was also “increasingly worried that OpenAI’s technology could be dangerous and that Altman was not paying enough attention to that risk.”

D’Angelo, a remaining board member, lies somewhere in the middle. He is a longtime friend of Altman and has previously written, “There is a risk that as AI gets better and better, it at least destabilizes things…This is totally independent of concerns about AI ‘taking over’ with its own ‘free will.’ I think that is a risk, too, but it is much further off.”

Altman and Brockman are at the other end of the spectrum. Altman, one of the most recognizable faces in the tech industry, has spent his career pushing for the advancement of AI and “led OpenAI to the adult table of the technology industry.” He is why the San Francisco startup is now at the center of an AI boom, even as some believe he pays too little attention to AI’s potential dangers and risks. This tension, it is said, gave rise to the series of events that unfolded over the weekend and into the week, and it will continue to permeate the tech industry as AI develops rapidly, with legislators and regulators rushing to catch up.

What a Story…Want More?

It’s been a crazy week for the folks at OpenAI. Check back with the Trellis blog for updates on this story. Interested in integrating AI into your legal workflow? Want to save time researching, writing, and prepping for trial? Check out Trellis! Trellis is an AI-powered state court research and data analytics platform created by litigators for litigators. We have the largest searchable database of state trial court records, so you can save time and stay current with ongoing litigation by accessing thousands of searchable court documents. Dive into our judge and law firm analytics and make actionable decisions in court. Contact us today for a demo.

Happy Thanksgiving from all of us here at Trellis!

Sources:

https://www.nytimes.com/2023/11/20/technology/openaisamaltmanwinnerslosers.html?smid=nytcore-ios-share&referringSource=articleShare

https://www.nytimes.com/2023/11/17/technology/openaisamaltmanousted.html?searchResultPosition=2

https://www.bloomberg.com/news/articles/2023-11-21/altman-openai-board-open-talks-to-negotiate-his-possible-return

https://www.ign.com/articles/openai-in-turmoil-after-firing-its-ceo-with-microsoft-right-in-the-middle

https://www.nytimes.com/2023/11/20/business/emmett-shear-openai-interim-chief-executive.html?searchResultPosition=3

https://www.washingtonpost.com/business/2023/11/21/openai-chatgpt-board-fired-sam-altman/e9a30838-88b4-11ee-a36e-fdb7be9bd43d_story.html

https://www.bbc.com/news/business-67494165

https://www.nytimes.com/2023/11/22/technology/openai-sam-altman-returns.html?smid=nytcore-ios-share&referringSource=articleShare

https://www.ft.com/content/46efa770-4b47-49bb-b0f8-824f1c4f38a3

https://www.nytimes.com/2023/11/20/business/openai-staff-exodus-turmoil.html#:~:text=The%20future%20of%20OpenAI%20is,profile%20artificial%20intelligence%20start%2Dup.

https://www.livemint.com/companies/news/a-timeline-of-events-at-openai-from-sam-altmans-dismissal-to-second-interim-ceo-and-staff-rebellion-11700549947170.html

https://www.reuters.com/technology/who-is-openais-interim-ceo-emmett-shear-2023-11-20/

https://www.economist.com/business/2023/11/21/what-revolt-at-openai-means-for-microsoft

https://www.npr.org/2023/11/22/1214621010/openai-reinstates-sam-altman-as-its-chief-executive

https://themessenger.com/tech/adam-dangelo-openai-sam-altman-return-ceo-board-artificial-intelligence#:~:text=%E2%80%9CThere%20is%20a%20risk%20that,it%20is%20much%20further%20off.%E2%80%9D

https://www.deccanherald.com/opinion/the-fear-and-tension-that-led-to-sam-altman-s-ouster-at-openai-3-2778568

Ethics Exam Showdown: ChatGPT vs. Law Students – A New Era in Legal Education

Last Thursday, LegalOn Technologies published a study claiming AI chatbots outperformed most aspiring lawyers on the Multistate Professional Responsibility Exam (MPRE). According to the report, OpenAI’s GPT-4 performed best, answering 74% of the exam questions correctly and beating the nationwide average of 68% among human test-takers. Earlier this year, other research concluded that GPT-4 could also pass the Uniform Bar Exam, outscoring most law students. What does this mean for the future of legal education? Let’s get into the details.

Standardized Tests in Legal Education

Law students in every state except Wisconsin must pass two exams to become a lawyer: the MPRE and the bar exam. The National Conference of Bar Examiners (NCBE) develops both exams. The MPRE is typically taken during a law student’s second year of study and tests legal ethics and professional conduct. It is a 60-question, multiple-choice exam administered over two hours.

Law students complete the bar exam after they graduate from law school. The bar exam is the final hurdle before becoming a licensed attorney in the United States. Every jurisdiction administers a bar exam “to test a candidate’s ability to think like a lawyer and prove they have the ‘minimum competency’ to practice law in that state.” The Uniform Bar Exam (UBE) is a two-day exam promulgated by the NCBE but administered and scored by individual states. The majority of states have adopted the UBE, with exceptions including California, Florida, Virginia, Delaware, and Hawaii.

Bar requirement changes are taking hold in some states. Oregon announced earlier this month that, starting next year, aspiring lawyers will be able to become licensed attorneys in the state without taking the bar exam. Oregon is initiating the Portfolio Bar Exam, an “alternative pathway to licensure that would allow aspiring lawyers to spend four to six months working under the supervision of an experienced attorney and to gain admission to the bar after submitting an acceptable portfolio of legal work.” California is also considering a Portfolio Bar Exam: last Thursday, the State Bar of California’s board of trustees voted to test-run the program, which now awaits sign-off from the California Supreme Court.

What Does the MPRE Study Conclude?

The study tested four leading generative AI models: OpenAI’s GPT-4 and GPT-3.5, Anthropic’s Claude 2, and Google’s PaLM 2 Bison. According to the report, GPT-4 and Claude 2 achieved scores exceeding the approximate passing threshold (between 56% and 64%, depending on the state) for the MPRE in every state. The study used MPRE-style exam questions developed by an ethics and economics professor from the University of Houston Law Center.

The study used each model’s standard application programming interface (API) with the basic prompt “Answer the following multiple-choice question.” To simulate the MPRE, researchers randomly selected 60 questions from a bank of 500, sampling from each subject area.
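To make the methodology concrete, here is a minimal sketch of that evaluation loop in Python. Only the prompt text comes from the study; the question bank, field names, and the ask_model() stub are illustrative assumptions, since the study called each vendor’s standard API rather than the random guesser stubbed in here.

```python
import random

# Prompt reported by the study; everything else in this sketch is assumed.
PROMPT = "Answer the following multiple-choice question."

# Hypothetical question bank: the study drew from a bank of 500 MPRE-style
# questions; each entry here carries a stem, lettered choices, and answer key.
QUESTION_BANK = [
    {
        "stem": "May a lawyer reveal confidential client information "
                "to prevent reasonably certain death or bodily harm?",
        "choices": {
            "A": "Never",
            "B": "Yes, disclosure is permitted in that situation",
            "C": "Only with a court order",
            "D": "Only after withdrawing from the representation",
        },
        "answer": "B",
    },
    # ... roughly 500 entries in the actual study
]

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a vendor's standard chat API (GPT-4,
    Claude 2, PaLM 2, etc.). Here it guesses at random so the sketch
    runs end to end without credentials."""
    return random.choice("ABCD")

def simulate_mpre(bank, model=ask_model, n_questions=60, seed=0):
    """Randomly sample one simulated 60-question MPRE and score the model."""
    rng = random.Random(seed)
    exam = rng.sample(bank, min(n_questions, len(bank)))
    correct = 0
    for q in exam:
        choices = "\n".join(f"{k}. {v}" for k, v in sorted(q["choices"].items()))
        reply = model(f"{PROMPT}\n\n{q['stem']}\n{choices}")
        if reply.strip().upper().startswith(q["answer"]):
            correct += 1
    return correct / len(exam)

if __name__ == "__main__":
    print(f"Simulated MPRE accuracy: {simulate_mpre(QUESTION_BANK):.0%}")
```

For scale: on a 60-question simulated exam, GPT-4’s reported 74% accuracy works out to roughly 44 correct answers, comfortably above the roughly 34 to 38 correct implied by the 56-64% passing range.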

“Based on the sampling and testing methodology described above, the overall mean accuracy for each of the models was as follows: GPT-4 answered 74% correct, Claude 2 answered 67% correct, GPT-3.5 answered 49% correct, and PaLM 2 answered 42% correct.”

The study concludes that generative AI models can now apply black-letter ethical guidelines and help lawyers with legal ethics questions. It emphasizes that even though GAI models perform well, legal professionals who use AI must understand its capabilities and limitations, ensuring that they remain the final decision-makers. It also cautions that legal technology providers should test their models extensively with “lawyer-led validations, coach LLMs to consistently produce profession-grade results, and augment generative AI with domain-specific content and training.”

How Does Generative AI Impact Law Students’ Test Scores?

Earlier this year, two University of Minnesota law professors conducted a study on using generative AI (GAI) for legal writing assignments and exams. The study found that students could complete their assignments faster with AI, but their work product was no better than that of students who completed the assignments without the technology.

Regarding law school exams, the study found “low-performing students scored higher on final exams when given access to GPT-4, while their high-performing classmates performed worse when using the technology.”

The results indicate that generative AI is becoming a vital tool for law students. The study did, however, urge law schools to ban the use of GAI in core first-year courses and on exams because the technology “disproportionately helps lower-performing students.”

What Does AI Mean for the Future of Legal Education?

GAI can help students get their work done faster, but according to research, the quality of that work is no better with AI. GAI is also incredibly helpful with legal research. Overall, GAI saves a person time, and time is something both law students and lawyers need more of. As this technology continues to develop rapidly, law schools and educators must adopt AI policies so that students understand when they are allowed to use AI and when they are not. Law schools must also implement AI training so that students become familiar with AI software, its potential, and its limits.

There has been a growing debate in the legal community surrounding the ethics of using AI in legal practice, which encompasses legal education. Some argue that AI threatens knowledge-based learning: if students rely on AI to do the work, are they truly “learning” or developing new skills? Others argue that AI is rapidly integrating into our society, so teaching students to become familiar with this technology is necessary, as it will most likely shape their future job prospects.

Regardless of where one lands in this debate, AI integration is at the forefront of technology development. Law students and faculty can benefit from AI by saving time on research and writing, while legal institutions can develop AI policies and guidelines so that AI does not benefit some students to the detriment of others.

Ready to Integrate AI in Legal Education?

Are you a law professor or law librarian interested in teaching your students about the benefits of AI? Are you a student who wants to get a jump start on learning the ins and outs of legal research at the state court level? Check out Trellis! Trellis is an AI-driven data analytics platform for lawyers, law students, and legal professionals. Access state trial court data using our API and simplify your legal research workflow. Find us at trellis.law or contact us directly for a demo.

Sources:

https://www.legalontech.com/generative-ai-passes-the-legal-ethics-exam

https://www.reuters.com/legal/transactional/ai-chatbot-can-pass-national-lawyer-ethics-exam-study-finds-2023-11-16/

https://www.barbri.com/about-the-bar-exam/

https://www.reuters.com/legal/government/bar-exam-alternative-proposed-california-passes-key-hurdle-2023-11-17/

https://www.foxbusiness.com/politics/ai-chatbot-beats-most-aspiring-lawyers-national-legal-ethics-exam-study-finds

https://nysba.org/navigating-the-ethical-and-technical-challenges-of-chatgpt/