An analysis of the suitability of test-based patch acceptance criteria
Program repair techniques attempt to fix programs by looking for patches within a search space of fix candidates. These techniques require a specification of the program to be repaired, used as an acceptance criterion for fix candidates, which often also plays an important role in guiding the search process. Most tools use tests as specifications, which constitutes a risk: the incompleteness of tests as specifications may lead to spurious repairs that pass all tests but are in fact incorrect. This problem has been identified by various researchers, raising concerns about the validity of program fixes. More thorough studies have been proposed, using different sets of tests for fix validation and resorting to manual inspection; these show that while the tools' fix rates drop, they are still able to repair a significant number of cases. In this paper, we perform a different analysis of the suitability of tests as acceptance criteria for automated program fixes: we check patches produced by automated repair tools using a bug-finding tool, as opposed to previous works that used tests or manual inspection. We develop a number of experiments in which faulty programs from a known benchmark are fed to the program repair tools GenProg, Angelix, AutoFix and Nopol, using test suites of varying quality and size, including those accompanying the benchmark. We then check the produced patches against formal specifications using a bug-finding tool. Our results show that, in the studied scenarios, automated program repair tools are significantly more likely to accept a spurious program fix than to produce an actual one.
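The core risk the abstract describes — a patch that satisfies an incomplete test suite without actually fixing the defect — can be sketched with a small hypothetical example (the toy bug and names below are invented for illustration; they are not taken from the paper's benchmark):

```python
# Hypothetical illustration of a "spurious" patch: it passes an
# incomplete test suite, yet does not actually fix the defect.

def buggy_abs(x: int) -> int:
    return x  # defect: negative inputs are not negated

# The incomplete test suite used as the patch acceptance criterion:
# it exercises only one negative input.
tests = [(0, 0), (3, 3), (-1, 1)]

def patched_abs(x: int) -> int:
    # An overfitted "repair" a search-based tool could produce:
    # it hard-codes the single failing test input.
    if x == -1:
        return 1
    return x

# The patch is accepted: every test in the suite passes.
assert all(patched_abs(inp) == out for inp, out in tests)

# Checked against the intended specification (absolute value),
# the patch is still wrong outside the suite:
print(patched_abs(-5))  # prints -5, whereas a genuine fix would return 5
```

Checking patches against a formal specification with a bug-finding tool, as the paper does, is precisely what exposes such overfitted repairs that test-based acceptance lets through.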
Main authors: Zemín, Luciano; Gutiérrez Brida, Simón; Godio, Ariel; Cornejo, César; Degiovanni, Renzo; Regis, Germán; Aguirre, Nazareno; Frías, Marcelo
Format: Conference paper (accepted version)
Language: English
Published: 2019
Subjects: Benchmarking; Program debugging; Software verification
Online access: http://ri.itba.edu.ar/handle/123456789/1757
Contributed by: Instituto Tecnológico de Buenos Aires (ITBA)
Record details:
Record id: I32-R138-123456789-1757 (record format: dspace)
Authors: Zemín, Luciano; Gutiérrez Brida, Simón; Godio, Ariel; Cornejo, César; Degiovanni, Renzo; Regis, Germán; Aguirre, Nazareno; Frías, Marcelo
Topics: Benchmarking; Program debugging; Software verification
Institution: Instituto Tecnológico de Buenos Aires (ITBA)
Collection: Repositorio Institucional Instituto Tecnológico de Buenos Aires (ITBA)
Issue date: 2017-07 (deposited in the repository: 2019-09-17)
Format: Conference paper, accepted version (application/pdf)
ISBN: 978-1-5386-2789-1
DOI: 10.1109/SBST.2017.12
Handle: http://ri.itba.edu.ar/handle/123456789/1757