Hallucinations of Large Language Models and the Importance of Human Oversight in the Legal Field. Recommendations for Use Based on the "Roberto Mata v. Avianca Airlines Inc." Case
Saved in:
| Main author: | |
|---|---|
| Format: | Journal article |
| Language: | Spanish |
| Published: | Facultad de Derecho y Ciencias Sociales y Políticas, Universidad Nacional del Nordeste, 2025 |
| Subjects: | |
| Online access: | https://revistas.unne.edu.ar/index.php/rcd/article/view/8685 |
| Contributed by: | |
| Summary: | This article explores the phenomenon of hallucinations generated by large language models and their potential impact on the legal field. Through an analysis of the case Roberto Mata v. Avianca Airlines Inc., in which a hallucination produced by ChatGPT was introduced into a judicial proceeding, the article argues for the necessity of human oversight to ensure the beneficial use of these tools. As a corollary, it proposes guidelines to prevent the inclusion of LLM-generated hallucinations in legal documents. |