Speech emotion representation: A method to convert discrete to dimensional emotional models for emotional inference multimodal frameworks
Computer-human interaction is more frequent now than ever before; thus, the main goal of this research area is to improve communication with computers so that it becomes as natural as possible. A key aspect of achieving such interaction is the affective component, often missing from the developments of the last decade. To improve computer-human interaction, in this paper we present a method to convert discrete or categorical data from a CNN emotion classifier trained with Mel-scale spectrograms to a two-dimensional model, pursuing the integration of the human voice as a feature for emotional inference multimodal frameworks. Lastly, we discuss preliminary results obtained from presenting audiovisual stimuli to different subjects and comparing the dimensional arousal-valence results with their SAM surveys.
Saved in:

Main authors: Elkfury, Fernando; Ierache, Jorge Salvador
Format: Conference object
Language: English
Published: 2021
Subjects: Computer Science; Emotions; Multimodal Framework; Affective computing
Online access: http://sedici.unlp.edu.ar/handle/10915/125145
id |
I19-R120-10915-125145 |
record_format |
dspace |
institution |
Universidad Nacional de La Plata |
collection |
SEDICI (UNLP) |
language |
English |
topic |
Computer Science; Emotions; Multimodal Framework; Affective computing |
description |
Computer-human interaction is more frequent now than ever before; thus, the main goal of this research area is to improve communication with computers so that it becomes as natural as possible. A key aspect of achieving such interaction is the affective component, often missing from the developments of the last decade. To improve computer-human interaction, in this paper we present a method to convert discrete or categorical data from a CNN emotion classifier trained with Mel-scale spectrograms to a two-dimensional model, pursuing the integration of the human voice as a feature for emotional inference multimodal frameworks. Lastly, we discuss preliminary results obtained from presenting audiovisual stimuli to different subjects and comparing the dimensional arousal-valence results with their SAM surveys. |
format |
Conference object |
author |
Elkfury, Fernando; Ierache, Jorge Salvador |
author_sort |
Elkfury, Fernando |
title |
Speech emotion representation: A method to convert discrete to dimensional emotional models for emotional inference multimodal frameworks |
publishDate |
2021 |
url |
http://sedici.unlp.edu.ar/handle/10915/125145 |