Limits and challenges of large language models for academic and scientific writing: a critical review based on “expert use”
Saved in:
| Main author: | |
|---|---|
| Format: | Journal article |
| Language: | Spanish |
| Published: | Instituto de Lingüística. Facultad de Filosofía y Letras. Universidad de Buenos Aires, 2025 |
| Subjects: | |
| Online access: | https://revistascientificas.filo.uba.ar/index.php/sys/article/view/16754 |
| Contributed by: | |
| Summary: | Since 2022, the scientific world has acknowledged the impact of the mass adoption of ChatGPT. Its use in scientific and academic contexts has raised theoretical questions, such as the authorship of works written at least partially with this technology, and practical ones, such as the poor fit of the resulting texts to academic genres. The present article offers an organized presentation of the problems and limits of the use of large language models in academic and scientific contexts. It is based on the analysis of twenty scientific papers that report systematic analyses of survey data or systematic literature reviews. We divide the contexts of use according to the criterion of expert use, defined by the convergence of full academic literacy and a broad command of the field about which the chatbot is being queried. We address untrained use and some of its problematic manifestations, and we also analyse the challenges posed by chatbot-produced texts that appear even under expert use: lack of traceability, biases, hallucinations, parroting, plagiarism, weak argumentative development, and inconsistent, incoherent and contradictory expositions. |