
Navigating the Legal Landscape with AI – A Cautionary Tale

Date: 11th June, 2024

Author: Jonn-Paul Lambie

The legal profession stands on the cusp of a technological revolution, with artificial intelligence (AI) poised to transform the way lawyers work. From drafting contracts to conducting research, AI promises efficiency and speed. However, a recent study from Stanford’s Human-Centered AI Institute (HAI) raises significant concerns about the reliability of AI in legal settings [1].

The Hallucination Problem

AI tools, particularly generative AI, have been found to “hallucinate” – generate false information – in 1 out of 6 benchmarking queries or more. This phenomenon is not just a minor hiccup; it has real-world consequences. For instance, a New York lawyer faced sanctions for citing non-existent cases generated by an AI tool. The study by Stanford RegLab and HAI researchers highlights the risks of incorporating AI into legal practice without rigorous benchmarking and public evaluation [1].

The Risks for Legal Professionals

The implications for legal professionals are clear: while AI can be a powerful assistant, it can also lead to errors that undermine the integrity of legal work. The study tested leading legal research services from LexisNexis and Thomson Reuters and found that even these specialised tools produced incorrect information more than 17% and 34% of the time, respectively [1].

Declaring the use of Generative AI

Whilst not specifically part of this study, there is a growing trend among judges to require lawyers to disclose the use of Generative AI in legal document preparation. The aim is to ensure transparency and adherence to accuracy standards, with some courts mandating a certificate of accuracy for AI-generated content, verified against traditional legal research methods. The movement seeks to maintain the integrity of legal proceedings as AI tools become more integrated into practice.

In New Zealand, the judiciary has developed guidelines for lawyers using Generative AI, emphasising the importance of existing professional obligations and the need for caution due to the risks and limitations of AI tools [2].

CommArc’s Commitment

At CommArc, we understand the potential of AI and the challenges it presents. We are dedicated to helping our clients in the legal sector navigate these waters safely. If you’re considering using generative AI in your practice or need assistance in developing a ‘Use of Generative AI in the Workplace’ policy, reach out to us. Our expertise can help ensure that your use of AI is both effective and ethically sound.

For further assistance and to discuss how CommArc can support your legal practice with AI technologies, please contact us.

[1]: Stanford HAI – AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries

[2]: Courts of New Zealand – Guidelines for lawyers on the use of Generative AI: https://www.courtsofnz.govt.nz/assets/6-Going-to-Court/practice-directions/practice-guidelines/all-benches/20231207-GenAI-Guidelines-Lawyers.pdf