Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features

In this work, a novel stack of well-known technologies is presented to define an automatic method for segmenting the heart sounds in a phonocardiogram (PCG). We show a deep recurrent neural network (DRNN) capable of segmenting a PCG into its main components, and a very specific way of extracting instantaneous frequency features that play an important role in the training and testing of the proposed model.
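As a rough illustration of the segmentation stage described above, the sketch below runs a single-layer LSTM over a sequence of time-frequency feature frames and labels each frame with one of four heart-sound states (S1, systole, S2, diastole). The layer sizes, the random weights, and the NumPy implementation itself are illustrative assumptions for exposition, not the authors' configuration; in the paper the input frames come from the FSST, whereas random values stand in for them here.

```python
import numpy as np

N_STATES = 4  # S1, systole, S2, diastole

def lstm_segment(x, Wx, Wh, b, Wy):
    """Label every frame of a (T, F) time-frequency feature sequence.

    Wx: (4H, F), Wh: (4H, H), b: (4H,) hold the stacked LSTM gate
    parameters (input, forget, cell, output order); Wy: (N_STATES, H)
    is a linear readout. Returns per-frame state indices, shape (T,).
    """
    T, _ = x.shape
    H = Wh.shape[1]
    h = np.zeros(H)          # hidden state
    c = np.zeros(H)          # cell state
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    out = np.empty(T, dtype=int)
    for t in range(T):
        z = Wx @ x[t] + Wh @ h + b          # all four gates at once
        i, f = sigm(z[:H]), sigm(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sigm(z[3 * H:])
        c = f * c + i * g                   # forget old, write new
        h = o * np.tanh(c)                  # context carried forward
        out[t] = np.argmax(Wy @ h)          # most likely state per frame
    return out

# Demo on random "FSST-like" features: 200 frames x 40 frequency bins.
rng = np.random.default_rng(0)
T, F, H = 200, 40, 16
states = lstm_segment(
    rng.standard_normal((T, F)),
    0.1 * rng.standard_normal((4 * H, F)),
    0.1 * rng.standard_normal((4 * H, H)),
    np.zeros(4 * H),
    rng.standard_normal((N_STATES, H)),
)
print(states.shape)  # one state label per input frame
```

A trained model would learn `Wx`, `Wh`, `b`, `Wy` from annotated PCGs; here they are random, so the output only demonstrates the data flow, not a meaningful segmentation.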

Full description

Saved in:
Bibliographic Details
Main Authors: Gaona, Alvaro Joaquin; Arini, Pedro David
Format: Article publishedVersion
Language: English
Published: FIUBA 2020
Subjects:
Online access: https://elektron.fi.uba.ar/elektron/article/view/101
https://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=elektron&d=101_oai
Contributed by:
id I28-R145-101_oai
record_format dspace
spelling I28-R145-101_oai2026-02-11 Gaona, Alvaro Joaquin Arini, Pedro David 2020-12-14 In this work, a novel stack of well-known technologies is presented to define an automatic method for segmenting the heart sounds in a phonocardiogram (PCG). We show a deep recurrent neural network (DRNN) capable of segmenting a PCG into its main components, and a very specific way of extracting instantaneous frequency that plays an important role in the training and testing of the proposed model. More specifically, it involves a Long Short-Term Memory (LSTM) neural network accompanied by the Fourier Synchrosqueezed Transform (FSST), used to extract instantaneous time-frequency features from a PCG. The present approach was tested on heart sound signals longer than 5 seconds and shorter than 35 seconds from freely available databases. With a relatively small architecture, a small dataset and the right features, the method achieved near state-of-the-art performance: an average sensitivity of 89.5%, an average positive predictive value of 89.3% and an average accuracy of 91.3%. application/pdf text/html https://elektron.fi.uba.ar/elektron/article/view/101 10.37537/rev.elektron.4.2.101.2020 eng FIUBA https://elektron.fi.uba.ar/elektron/article/view/101/198 https://elektron.fi.uba.ar/elektron/article/view/101/212 Copyright 2020 Alvaro Joaquin Gaona, Pedro David Arini Elektron Journal; Vol. 4 No. 2 (2020); 52-57 Revista Elektron; Vol. 4 Núm. 2 (2020); 52-57 Revista Elektron; v. 4 n. 2 (2020); 52-57 2525-0159 2525-0159 phonocardiogram fourier transform long short-term memory fonocardiograma transformada sincronizada de fourier long short-term memory Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features Aprendizaje profundo y recurrente para la segmentación de sonidos cardíacos basado en características de frecuencia instantánea info:eu-repo/semantics/article info:eu-repo/semantics/publishedVersion https://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=elektron&d=101_oai
institution Universidad de Buenos Aires
institution_str I-28
repository_str R-145
collection Repositorio Digital de la Universidad de Buenos Aires (UBA)
language English
orig_language_str_mv eng
topic phonocardiogram
fourier transform
long short-term memory
fonocardiograma
transformada sincronizada de fourier
long short-term memory
spellingShingle phonocardiogram
fourier transform
long short-term memory
fonocardiograma
transformada sincronizada de fourier
long short-term memory
Gaona, Alvaro Joaquin
Arini, Pedro David
Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features
topic_facet phonocardiogram
fourier transform
long short-term memory
fonocardiograma
transformada sincronizada de fourier
long short-term memory
description In this work, a novel stack of well-known technologies is presented to define an automatic method for segmenting the heart sounds in a phonocardiogram (PCG). We show a deep recurrent neural network (DRNN) capable of segmenting a PCG into its main components, and a very specific way of extracting instantaneous frequency that plays an important role in the training and testing of the proposed model. More specifically, it involves a Long Short-Term Memory (LSTM) neural network accompanied by the Fourier Synchrosqueezed Transform (FSST), used to extract instantaneous time-frequency features from a PCG. The present approach was tested on heart sound signals longer than 5 seconds and shorter than 35 seconds from freely available databases. With a relatively small architecture, a small dataset and the right features, the method achieved near state-of-the-art performance: an average sensitivity of 89.5%, an average positive predictive value of 89.3% and an average accuracy of 91.3%.
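The sensitivity, positive predictive value, and accuracy figures reported in the description are standard framewise counts. A minimal sketch of how such metrics can be computed from per-frame labels, scoring one heart-sound state one-vs-rest (this aggregation scheme is an assumption for illustration, not necessarily the authors' exact evaluation protocol):

```python
import numpy as np

def framewise_metrics(y_true, y_pred, state):
    """One-vs-rest sensitivity (recall), positive predictive value
    (precision), and overall accuracy for one heart-sound state."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == state) & (y_true == state))
    fp = np.sum((y_pred == state) & (y_true != state))
    fn = np.sum((y_pred != state) & (y_true == state))
    se = tp / (tp + fn)               # found / actually present
    ppv = tp / (tp + fp)              # correct / all claimed
    acc = np.mean(y_true == y_pred)   # plain framewise agreement
    return se, ppv, acc

# Toy example: 8 frames labeled with states 0..3.
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 1, 1, 1, 2, 3, 3, 3]
se, ppv, acc = framewise_metrics(y_true, y_pred, state=1)
# sensitivity 1.0 (both true state-1 frames found), PPV 2/3, accuracy 0.75
```

Averaging such per-state values across states and recordings yields summary numbers of the kind quoted above.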
format Article
publishedVersion
author Gaona, Alvaro Joaquin
Arini, Pedro David
author_facet Gaona, Alvaro Joaquin
Arini, Pedro David
author_sort Gaona, Alvaro Joaquin
title Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features
title_short Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features
title_full Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features
title_fullStr Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features
title_full_unstemmed Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features
title_sort deep recurrent learning for heart sounds segmentation based on instantaneous frequency features
publisher FIUBA
publishDate 2020
url https://elektron.fi.uba.ar/elektron/article/view/101
https://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=elektron&d=101_oai
work_keys_str_mv AT gaonaalvarojoaquin deeprecurrentlearningforheartsoundssegmentationbasedoninstantaneousfrequencyfeatures
AT arinipedrodavid deeprecurrentlearningforheartsoundssegmentationbasedoninstantaneousfrequencyfeatures
AT gaonaalvarojoaquin aprendizajeprofundoyrecurrenteparalasegmentaciondesonidoscardiacosbasadoencaracteristicasdefrecuenciainstantanea
AT arinipedrodavid aprendizajeprofundoyrecurrenteparalasegmentaciondesonidoscardiacosbasadoencaracteristicasdefrecuenciainstantanea
_version_ 1857042975812485120