Copia de Ignacio García_Tesis_MAEDI_Sistema_Interactivo_de_Visualización_de_Sonido

Bibliographic Details
Main Author: García Terra, Ignacio Alfonso
Other Authors: Pardo, Álvaro
Format: Master's thesis (acceptedVersion)
Language: Spanish
Published: Universidad de Buenos Aires. Facultad de Arquitectura, Diseño y Urbanismo, 2022
Subjects:
Online Access: http://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=aaqmas&cl=CL1&d=HWA_7646
https://repositoriouba.sisbi.uba.ar/gsdl/collect/aaqmas/index/assoc/HWA_7646.dir/7646.PDF
Contributed by:
Description
Summary: This thesis presents an interactive sound visualization system; the research culminates in the prototype of an application that composes a visual sound piece from a MIDI file supplied by the user. The system offers adjustment controls for manipulating the interaction between sound and image. Technology makes it possible to engage, on the one hand, the phenomenon of synesthesia (the union of two sensations from different sensory domains, here auditory and visual) and, on the other hand, and more deeply, an artificial intelligence algorithm that learns from the input sound and autonomously creates audiovisual pieces. The research path consists of identifying and defining a problem, recognizing the elements that compose it, collecting and analyzing data, proposing basic solutions, establishing the materials and tools to develop the solution, and carrying out experimentation, modeling, and sketching, in order to arrive at the design of an interactive sound visualization system. The work develops the concepts of synesthesia and sound visualization and merges them into a first version of a system that combines components of design, interaction, technology, and art. At the core of the application is the artificial intelligence algorithm, which is fed the MIDI audio files. The system thus generates a clip in which sounds are associated with images; its operation rests on the concept of synesthesia, which supports the events linked between image and sound. The developed code is available so that more advanced users can experiment with different configurations, making it possible to extract other sound parameters and achieve different sound images. The proposed research and the creation of the system are framed as a representation of the union of visual and sound emotions, studying the behavior of sound and image and evaluating how their characteristics can be linked.
Although there is prior research along these lines, this work emphasizes an open-source system designed to demonstrate the crossing of these sensations interactively, with contributions to the autonomous creation of sound and image.
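The summary notes that the open code lets advanced users remap sound parameters to achieve different sound images. As a minimal, hypothetical sketch of what such a synesthetic mapping could look like (this is not the thesis's actual algorithm; the function name and the pitch-to-hue / velocity-to-size choices are assumptions for illustration), one might map a MIDI note's pitch class to color hue, its octave to lightness, and its velocity to shape size:

```python
import colorsys

def note_to_visual(note, velocity):
    """Map a MIDI note (0-127) and velocity (0-127) to (rgb_color, size).

    Hypothetical mapping: pitch class -> hue, octave -> lightness,
    velocity -> shape size.
    """
    hue = (note % 12) / 12.0                # pitch class spins around the color wheel
    octave = note // 12                     # higher octaves yield lighter colors
    lightness = min(0.3 + 0.07 * octave, 0.9)
    r, g, b = colorsys.hls_to_rgb(hue, lightness, 1.0)  # fully saturated color
    size = 10 + (velocity / 127.0) * 90     # louder notes draw bigger shapes
    return (round(r, 3), round(g, 3), round(b, 3)), size

# Example: middle C (note 60) at moderate velocity
color, size = note_to_visual(60, 64)
```

Swapping in other extracted parameters (e.g., note duration driving opacity, or channel driving shape) would change the resulting sound image without altering the overall pipeline.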