TY - GEN
T1 - SBST Tool Competition 2022
AU - Gambi, Alessio
AU - Jahangirova, Gunel
AU - Riccio, Vincenzo
AU - Zampetti, Fiorella
N1 - Funding Information:
We thank the participants in both competitions for their invaluable contribution, BeamNG.GmbH for providing their driving simulator, and the Google Open Source Security Team for providing access to Google Cloud. This work was partially supported by the H2020 project PRECRIME (ERC Grant Agreement n. 787703), the H2020 project COSMOS (Project n. 957254-COSMOS), and the DFG project STUNT (DFG Grant Agreement n. FR 2955/4-1).
Publisher Copyright:
© 2022 ACM.
PY - 2022/2/3
Y1 - 2022/2/3
N2 - We report on the organization, challenges, and results of the tenth edition of the Java Unit Testing Competition as well as the second edition of the Cyber-Physical Systems (CPS) Testing Competition. Java Unit Testing Competition. Seven tools, i.e., BBC, EvoSuite, Kex, Kex-Reflection, Randoop, UTBot, and UTBot-Mocks, were executed on a benchmark with 65 classes sampled from four open-source Java projects, for two time budgets: 30 and 120 seconds. CPS Testing Tool Competition. Six tools, i.e., AdaFrenetic, AmbieGen, FreneticV, GenRL, EvoMBT, and WOGAN competed on testing two driving agents by generating simulation-based tests. We considered one configuration for each test subject and evaluated the tools' effectiveness and efficiency as well as the failure diversity. This paper describes our methodology, the statistical analysis of the results together with the competing tools, and the challenges faced while running the competition experiments.
AB - We report on the organization, challenges, and results of the tenth edition of the Java Unit Testing Competition as well as the second edition of the Cyber-Physical Systems (CPS) Testing Competition. Java Unit Testing Competition. Seven tools, i.e., BBC, EvoSuite, Kex, Kex-Reflection, Randoop, UTBot, and UTBot-Mocks, were executed on a benchmark with 65 classes sampled from four open-source Java projects, for two time budgets: 30 and 120 seconds. CPS Testing Tool Competition. Six tools, i.e., AdaFrenetic, AmbieGen, FreneticV, GenRL, EvoMBT, and WOGAN competed on testing two driving agents by generating simulation-based tests. We considered one configuration for each test subject and evaluated the tools' effectiveness and efficiency as well as the failure diversity. This paper describes our methodology, the statistical analysis of the results together with the competing tools, and the challenges faced while running the competition experiments.
KW - Tool Competition
KW - Software Testing
KW - Test Case Generation
KW - Unit Testing
KW - Java
KW - Cyber-Physical Systems
KW - Autonomous Vehicles
KW - Search Based Software Engineering
UR - http://www.scopus.com/inward/record.url?scp=85130995957&partnerID=8YFLogxK
U2 - 10.1145/3526072.3527538
DO - 10.1145/3526072.3527538
M3 - Conference contribution
SN - 978-1-4503-9318-8
T3 - Proceedings - 15th Search-Based Software Testing Workshop, SBST 2022
SP - 25
EP - 32
BT - Proceedings - 15th Search-Based Software Testing Workshop, SBST 2022
PB - Association for Computing Machinery
CY - New York, NY, United States
ER -