Effectiveness in stress detection: a comparison from interpretable models to black-box Transformers
It is usually assumed that recent, complex deep neural models obtain better predictive performance than simpler, more understandable classical machine learning approaches. However, in critical areas such as mental health, the accuracy of the classifiers is not the only aspect to take into account; the interpretability of models and the explainability of their results also play a fundamental role. In this work, we take a more comprehensive approach to evaluating the effectiveness of machine learning models in mental health. Although we still focus on the models' accuracy in detecting stress, the selection of models aims to cover systems with different levels of support for interpretability. The models range from those that are inherently interpretable (logistic regression) to those considered black boxes from the interpretability point of view (Transformers). Between these two, a third model that has shown interesting predictive capabilities and adequate interpretability in mental health is also included (SS3). The experimental work shows that even though logistic regression and SS3 have slightly lower predictive performance, they obtain results comparable to those of much more complex and difficult-to-explain transformer-based models (BERT and MentalBERT). Moreover, the support that each of them provides for explainability makes it possible to confirm observations reported in previous studies about the key role that personal pronouns and self-references play as significant indicators of stress. In this context, to the ability of logistic regression to evaluate the individual importance of each input token, SS3 adds the ability to hierarchically classify and interpret input words, sentences, and paragraphs, and to identify the exact points in a text where a case of stress becomes evident. Finally, although transformer-based models are more opaque for a mental health professional, attention analysis still makes it possible to confirm the relationship between relevant words in the input sentences.
Saved in:
| Main Authors: | Borrovinsky, Lautaro; Cagnina, Leticia; Errecalde, Marcelo Luis |
|---|---|
| Format: | Conference object |
| Language: | English |
| Published: | 2024 |
| Subjects: | Computer Science; Stress Detection; Accuracy; Interpretability; NLP; Mental Health |
| Online Access: | http://sedici.unlp.edu.ar/handle/10915/176205 |
| Contributed by: | SEDICI (UNLP), Universidad Nacional de La Plata |
| id |
I19-R120-10915-176205 |
|---|---|
| record_format |
dspace |
| published |
2024-10 |
| network |
Red de Universidades con Carreras en Informática |
| license |
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) |
| file_format |
application/pdf |
| pages |
44-53 |
| institution |
Universidad Nacional de La Plata |
| collection |
SEDICI (UNLP) |
| language |
English |
| topic |
Computer Science; Stress Detection; Accuracy; Interpretability; NLP; Mental Health |
| description |
It is usually assumed that recent, complex deep neural models obtain better predictive performance than simpler, more understandable classical machine learning approaches. However, in critical areas such as mental health, the accuracy of the classifiers is not the only aspect to take into account; the interpretability of models and the explainability of their results also play a fundamental role. In this work, we take a more comprehensive approach to evaluating the effectiveness of machine learning models in mental health. Although we still focus on the models' accuracy in detecting stress, the selection of models aims to cover systems with different levels of support for interpretability. The models range from those that are inherently interpretable (logistic regression) to those considered black boxes from the interpretability point of view (Transformers). Between these two, a third model that has shown interesting predictive capabilities and adequate interpretability in mental health is also included (SS3). The experimental work shows that even though logistic regression and SS3 have slightly lower predictive performance, they obtain results comparable to those of much more complex and difficult-to-explain transformer-based models (BERT and MentalBERT). Moreover, the support that each of them provides for explainability makes it possible to confirm observations reported in previous studies about the key role that personal pronouns and self-references play as significant indicators of stress. In this context, to the ability of logistic regression to evaluate the individual importance of each input token, SS3 adds the ability to hierarchically classify and interpret input words, sentences, and paragraphs, and to identify the exact points in a text where a case of stress becomes evident. Finally, although transformer-based models are more opaque for a mental health professional, attention analysis still makes it possible to confirm the relationship between relevant words in the input sentences. |
| format |
Conference object |
| author |
Borrovinsky, Lautaro; Cagnina, Leticia; Errecalde, Marcelo Luis |
| title |
Effectiveness in stress detection: a comparison from interpretable models to black-box Transformers |
| publishDate |
2024 |
| url |
http://sedici.unlp.edu.ar/handle/10915/176205 |