Abstract
This study aimed to automate the production of unit tests, a critical component of the software development process. By using pre-trained Large Language Models, manual effort and training costs were reduced and test production capacity was increased. Instead of feeding the functions to be tested, collected from the Java projects, directly into the model, each project was analyzed to extract additional information, and the data obtained from this analysis were used to build an effective prompt template. Furthermore, the sources of errors in the problematic generated tests were identified and fed back into the model, enabling it to correct the errors autonomously. The results showed that the model generated tests covering 55.58% of the functions collected from Java projects across different domains, and that re-feeding the model with the erroneous generated tests yielded a 29.3% improvement in the number of executable tests.
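The abstract describes two mechanisms: enriching the prompt with information extracted by analyzing the project, and feeding errors from failing tests back to the model so it can repair its own output. The sketch below illustrates how such prompts could be assembled; it is a minimal, hypothetical example, and the class names, record fields, and template wording are assumptions rather than the paper's actual implementation.

```java
// Illustrative sketch only: assembles LLM prompts for generating and repairing a
// JUnit test from context extracted by analyzing the project. All names and the
// template wording are hypothetical; the paper's actual prompt design may differ.
public class TestPromptBuilder {

    /** Context gathered from project analysis, as the abstract describes. */
    public record MethodContext(String className, String methodSignature,
                                String methodBody, String imports, String dependencies) {}

    /** Builds the initial prompt for the model from the extracted context. */
    public static String buildPrompt(MethodContext ctx) {
        return """
                You are generating a JUnit 5 test for the method below.
                Class: %s
                Imports available: %s
                Related dependencies: %s

                Method under test:
                %s
                %s

                Return only a compilable Java test class.
                """.formatted(ctx.className(), ctx.imports(), ctx.dependencies(),
                              ctx.methodSignature(), ctx.methodBody());
    }

    /** Appends compiler or runtime errors so the model can correct its previous output. */
    public static String buildRepairPrompt(String previousTest, String errorLog) {
        return """
                The test you produced failed to compile or run.
                Previous test:
                %s

                Errors:
                %s

                Fix the test and return only the corrected test class.
                """.formatted(previousTest, errorLog);
    }
}
```

The repair prompt mirrors the feedback loop described in the abstract: errors observed when compiling or executing a generated test are returned to the model, which then revises the test in a subsequent generation round.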
Translated title of the contribution | Automatic Unit Test Code Generation Using Large Language Models
---|---
Original language | Turkish
Title of host publication | 32nd IEEE Conference on Signal Processing and Communications Applications, SIU 2024 - Proceedings
Publisher | Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic) | 9798350388961
DOIs |
Publication status | Published - 2024
Event | 32nd IEEE Conference on Signal Processing and Communications Applications, SIU 2024 - Mersin, Turkey. Duration: 15 May 2024 → 18 May 2024
Publication series
Name | 32nd IEEE Conference on Signal Processing and Communications Applications, SIU 2024 - Proceedings
---|---
Conference
Conference | 32nd IEEE Conference on Signal Processing and Communications Applications, SIU 2024
---|---
Country/Territory | Turkey
City | Mersin
Period | 15/05/24 → 18/05/24
Bibliographical note
Publisher Copyright: © 2024 IEEE.