
Registered user since Tue 8 Mar 2022
Contributions
As software systems become increasingly complex, testing has become an essential part of the development process to ensure the quality of the final product. However, manual testing can be costly and time-consuming due to the need for human intervention. This constrains the number of test cases that can be run within a given timeframe and, as a result, limits the ability to detect defects in a timely manner. Automated testing, on the other hand, can reduce the cost and time associated with testing, but traditional approaches have limitations, including the inability to thoroughly explore the entire state space of the software or to process the high-dimensional input space of graphical user interfaces (GUIs). In this study, we propose a new approach to automated GUI-based software testing using neuroevolution (NE), a branch of machine learning that employs evolutionary algorithms to train artificial neural networks with multiple hidden layers of neurons. NE offers a scalable alternative to established deep reinforcement learning methods, with higher robustness to parameter influences and improved handling of sparse rewards. The agents are trained to explore the software and identify errors while being rewarded for high test coverage. We evaluate our approach on a realistic benchmark application and compare it to monkey testing, a widely adopted automated software testing method.
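A minimal sketch of the idea, assuming a toy stand-in for the GUI under test rather than the benchmark application from the paper: a population of small neural policies is mutated over generations, and fitness is the coverage of (screen, action) pairs a policy reaches in one episode. The environment model, network shape, and hyperparameters are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a GUI under test: 5 "screens", 3 actions per screen.
# Pressing action a on screen s moves to TRANSITIONS[s, a].
N_SCREENS, N_ACTIONS, EPISODE_LEN = 5, 3, 30
TRANSITIONS = rng.integers(0, N_SCREENS, size=(N_SCREENS, N_ACTIONS))

def act(weights, screen, t):
    """Tiny linear policy: one-hot screen plus normalised step index -> action scores."""
    x = np.zeros(N_SCREENS + 1)
    x[screen] = 1.0
    x[-1] = t / EPISODE_LEN
    return int(np.argmax(weights.reshape(N_ACTIONS, N_SCREENS + 1) @ x))

def fitness(weights):
    """Reward: fraction of (screen, action) pairs exercised in one test episode."""
    covered, screen = set(), 0
    for t in range(EPISODE_LEN):
        action = act(weights, screen, t)
        covered.add((screen, action))
        screen = int(TRANSITIONS[screen, action])
    return len(covered) / (N_SCREENS * N_ACTIONS)

# Simple (mu + lambda)-style neuroevolution over the flat weight vector.
POP, GENERATIONS, SIGMA = 20, 30, 0.3
population = [rng.normal(size=N_ACTIONS * (N_SCREENS + 1)) for _ in range(POP)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: POP // 4]
    population = list(elite)
    while len(population) < POP:                     # refill with mutated copies of elites
        parent = elite[rng.integers(len(elite))]
        population.append(parent + rng.normal(scale=SIGMA, size=parent.shape))
    print(f"gen {gen:02d}  best coverage = {fitness(ranked[0]):.2f}")
```

Replacing the toy transition table with an instrumented application and the coverage set with measured test coverage gives the setting the abstract describes, where coverage acts directly as the evolutionary reward.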
This paper presents a novel method for GUI testing in web applications that largely automates the process by integrating the advanced language model GPT-4 with Selenium, a popular web application testing framework. Unlike traditional deep learning approaches, which require extensive training data, GPT-4 is pre-trained on a large corpus, giving it significant generalisation and inference capabilities. These capabilities allow testing without the need for recorded data from human testers, significantly reducing the time and effort required for the testing process. We also compare the efficiency of our integrated GPT-4 approach with monkey testing, a widely used technique for automated GUI testing where user input is randomly generated. To evaluate our approach, we implemented a web calculator with an integrated code coverage system. The results show that our integrated GPT-4 approach provides significantly better branch coverage compared to monkey testing. These results highlight the significant potential of integrating specific AI models such as GPT-4 and automated testing tools to improve the accuracy and efficiency of GUI testing in web applications.
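A minimal sketch of how such an integration could look, assuming the openai>=1.0 Python client and a locally served calculator at a hypothetical URL; the prompt, URL, and step budget are illustrative and not taken from the paper. Each iteration sends the current page source to the model and executes the element it suggests via Selenium.

```python
import json
from openai import OpenAI                       # assumes the openai>=1.0 Python client
from selenium import webdriver
from selenium.webdriver.common.by import By

client = OpenAI()                               # expects OPENAI_API_KEY in the environment
driver = webdriver.Chrome()
driver.get("http://localhost:8000/calculator")  # hypothetical URL of the calculator under test

PROMPT = (
    "You are testing a web calculator GUI. Given the page source below, reply with "
    'JSON of the form {"selector": "<CSS selector of one clickable element>"}, '
    "choosing an element likely to exercise behaviour not yet tested.\n\n"
)

for _ in range(20):                             # bounded exploration loop
    dom = driver.page_source[:4000]             # truncate the DOM to keep the prompt small
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT + dom}],
    )
    try:
        selector = json.loads(reply.choices[0].message.content)["selector"]
        driver.find_element(By.CSS_SELECTOR, selector).click()
    except Exception:
        continue                                # skip malformed replies or stale selectors

driver.quit()
```

A monkey-testing baseline would replace the model call with a random choice among the page's clickable elements; the branch coverage reported by the instrumented calculator then provides the comparison metric described in the abstract.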