Deep Recurrent Learning for Heart Sounds Segmentation based on Instantaneous Frequency Features
Saved in:

| Main authors: | , |
|---|---|
| Format: | Article (published version) |
| Language: | English |
| Published: | FIUBA, 2020 |
| Subjects: | |
| Online access: | https://elektron.fi.uba.ar/elektron/article/view/101 https://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=elektron&d=101_oai |
| Contributed by: | |
| Summary: | In this work, a novel stack of well-known technologies is presented as an automatic method for segmenting the heart sounds in a phonocardiogram (PCG). We show a deep recurrent neural network (DRNN) capable of segmenting a PCG into its main components, together with a specific way of extracting instantaneous frequency that plays an important role in the training and testing of the proposed model. More specifically, the approach involves a Long Short-Term Memory (LSTM) neural network fed with instantaneous time-frequency features extracted from the PCG by the Fourier Synchrosqueezed Transform (FSST). It was tested on heart sound signals between 5 and 35 seconds long from freely available databases. The results show that, with a relatively small architecture, a small dataset, and the right features, the method achieves near state-of-the-art performance: an average sensitivity of 89.5%, an average positive predictive value of 89.3%, and an average accuracy of 91.3%. |
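The summary describes a two-stage pipeline: instantaneous time-frequency features from the FSST feeding an LSTM that labels each PCG frame with a heart-cycle state. The sketch below illustrates that pipeline in Python under stated assumptions: a plain STFT log-magnitude (via scipy) stands in for the paper's FSST, the four output states (S1, systole, S2, diastole) are the usual PCG segmentation labels rather than something the record specifies, and the sampling rate, window length, and layer sizes are illustrative hyperparameters, not the authors' settings.

```python
import numpy as np
from scipy.signal import stft
import tensorflow as tf

FS = 1000          # assumed PCG sampling rate (Hz); not given in the record
N_STATES = 4       # S1, systole, S2, diastole -- assumed segmentation states

def tf_features(pcg, fs=FS, win_ms=50):
    """Frame-wise time-frequency features from a 1-D PCG signal.

    The paper uses the Fourier Synchrosqueezed Transform (FSST); here a
    plain STFT log-magnitude stands in, so treat this as a placeholder
    for the actual feature extractor.
    """
    nperseg = int(fs * win_ms / 1000)
    _, _, Zxx = stft(pcg, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    feats = np.log1p(np.abs(Zxx)).T            # shape: (frames, freq_bins)
    # Per-feature standardization, a common step before an LSTM.
    return (feats - feats.mean(0)) / (feats.std(0) + 1e-8)

def build_model(n_features, n_states=N_STATES):
    """A small recurrent network that emits one state label per frame.

    The abstract only says the architecture is 'relatively small';
    the bidirectional layer and its size are illustrative choices.
    """
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(None, n_features)),
        tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(64, return_sequences=True)),
        tf.keras.layers.TimeDistributed(
            tf.keras.layers.Dense(n_states, activation="softmax")),
    ])

if __name__ == "__main__":
    pcg = np.random.randn(10 * FS)             # stand-in for a 10 s recording
    x = tf_features(pcg)[None, ...]            # add a batch dimension
    model = build_model(x.shape[-1])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    states = model.predict(x).argmax(-1)       # one predicted state per frame
    print(states.shape)
```

A bidirectional layer is a natural fit for offline segmentation, since each frame's label can depend on both past and future context; the abstract does not state whether the authors' LSTM is unidirectional or bidirectional.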