Abstract
Applying change-level software defect prediction (SDP) in practice faces several challenges regarding model validation techniques, data accuracy, and consistency of prediction performance. Few studies report on these challenges in an industrial context. We share our experience in integrating an SDP solution into an industrial setting. We investigate whether an "offline" SDP can reflect its "online" (real-life) performance, along with other deployment decisions: the model re-training process and the update period. We employ an online prediction strategy that considers the actual labels of training commits available at the time of prediction and compare its performance against offline prediction. We empirically assess the online SDP's performance under various lengths of the time gap between the train and test sets and various model update periods. The online SDP's performance successfully reaches its offline performance. The time gap between train and test commits and the model update period significantly impact online performance, by 37% and 18% in terms of probability of detection (pd), respectively. We deploy the best SDP solution (73% pd) with an 8-month time gap and a 3-day update period. Contextual factors may determine the model's performance in practice, as well as its consistency and trustworthiness. As future work, we plan to investigate the reasons for fluctuations in model performance over time.
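The online prediction strategy described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `Commit` fields, the classifier choice, and the `online_predict` helper are illustrative assumptions. The idea it shows is that, at each prediction point, the model is retrained only on commits whose defect labels would actually be known by then (the train/test time gap), and retraining happens at most once per update period.

```python
# A minimal sketch (not the authors' code) of time-aware "online" SDP:
# train only on commits old enough for their labels to be known, and
# retrain every `update_period`. Defaults mirror the deployed setting
# reported in the abstract (~8-month gap, 3-day update period).
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

from sklearn.ensemble import RandomForestClassifier


@dataclass
class Commit:
    when: date
    features: List[float]
    defective: bool  # ground truth, only observable after the time gap


def online_predict(commits: List[Commit],
                   time_gap: timedelta = timedelta(days=8 * 30),
                   update_period: timedelta = timedelta(days=3)):
    """Yield (commit, prediction) pairs using only labels known at the time."""
    model, last_fit = None, date.min
    for c in sorted(commits, key=lambda x: x.when):
        if c.when - last_fit >= update_period:
            # Candidate training data: commits whose labels have matured.
            train = [t for t in commits if t.when <= c.when - time_gap]
            if len({t.defective for t in train}) == 2:  # need both classes
                model = RandomForestClassifier(n_estimators=100)
                model.fit([t.features for t in train],
                          [t.defective for t in train])
                last_fit = c.when
        if model is not None:
            yield c, bool(model.predict([c.features])[0])
```

An "offline" evaluation, by contrast, would train once on all labeled commits regardless of when their labels became available; comparing the two, as the study does, exposes how label latency and retraining frequency affect the performance seen in practice.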
Original language | English
---|---
Article number | e2381
Journal | Journal of Software: Evolution and Process
Volume | 33
Issue number | 11
DOIs | 
Publication status | Published - Nov 2021
Bibliographical note
Publisher Copyright: © 2021 John Wiley & Sons, Ltd.