Software testing consumes a significant amount of time in the software life cycle. Test case prioritization is a way to assign priorities to test cases in order to meet various testing goals. This study reports a systematic literature review of prioritization techniques. The survey adheres to the guidelines of eminent researchers in the field of software engineering.

The rapid growth of the mobile application development industry raises several new challenges for developers, as they need to respond quickly to users' needs in a world of continuous change. Indeed, mobile apps undergo frequent updates to introduce new features, fix reported issues, or adapt to new technological or environmental changes. Hence, introducing changes in this context is risky and can harmfully impact an application's rating and competitiveness. Thus, ensuring that application updates are deployed in a controlled way is of crucial importance. To better support mobile application evolution and cut the costs of user dissatisfaction, we propose in this paper AppTracker, a novel approach to automatically track bad release updates in Android applications (i.e., releases with a higher percentage of negative reviews relative to prior releases). We formulate the problem as a three-class classification problem, labeling app updates as bad, neutral, or good. To solve this problem, we evolve bad release detection rules using Multi-Objective Genetic Programming (MOGP) based on an adaptation of the Non-dominated Sorting Genetic Algorithm (NSGA-II). In particular, the search process aims to provide the optimal trade-off between two conflicting objectives to deal with the considered classes. We evaluate our approach and investigate the performance of both within-project and cross-project validation scenarios on a benchmark of 50,700 updates from 1,717 free Android apps from the Google Play Store. The statistical tests revealed that our approach achieves a clear advantage over machine learning approaches (e.g., random forest, decision tree) with significant improvements of 18% and 6% in terms of F1-score for within-project and cross-project validations, respectively. Furthermore, the feature analysis reveals that (1) the ratings of previous updates and (2) the APK size are the most important features for both within-project and cross-project scenarios.

Software development teams use test suites to test changes to their source code. In many situations, the test suites are so large that executing every test for every source code change is infeasible due to time and resource constraints. Development teams need to prioritize their test suites so that as many distinct faults as possible are detected early in the execution of the test suite. We consider the problem of static black-box test case prioritization (TCP), where test suites are prioritized without the availability of the source code of the system under test (SUT). We propose a new static black-box TCP technique that represents test cases using a previously unused data source in the test suite: the linguistic data of the test cases, i.e., their identifier names, comments, and string literals. Our technique applies a text analysis algorithm called topic modeling to the linguistic data to approximate the functionality of each test case, allowing our technique to give high priority to test cases that test different functionalities of the SUT. We compare our proposed technique with existing static black-box TCP techniques in a case study of multiple real-world open-source systems: several versions of Apache Ant and Apache Derby. We find that our static black-box TCP technique outperforms existing static black-box TCP techniques, and has comparable or better performance than two existing execution-based TCP techniques. Static black-box TCP methods are widely applicable because the only input they require is the source code of the test cases themselves. This contrasts with other TCP techniques, which require access to the SUT's runtime behavior, specification models, or source code.
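The AppTracker abstract above rests on NSGA-II, whose core step is sorting candidate solutions into Pareto fronts according to conflicting objectives. The sketch below shows only that sorting step, with hypothetical (precision, recall) scores for evolved detection rules; it is not the authors' implementation.

```python
# Minimal sketch of the non-dominated sorting step at the heart of
# NSGA-II: group candidate rules into Pareto fronts by two conflicting
# objectives. The (precision, recall) values below are hypothetical and
# for illustration only; this is NOT the AppTracker implementation.

def dominates(a, b):
    """True if a is at least as good as b on every objective (both
    maximized) and strictly better on at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def non_dominated_sort(solutions):
    """Return solutions grouped into fronts; front 0 is the Pareto front."""
    fronts = []
    remaining = list(solutions)
    while remaining:
        front = [s for s in remaining
                 if not any(dominates(o, s) for o in remaining if o != s)]
        fronts.append(front)
        remaining = [s for s in remaining if s not in front]
    return fronts

# Hypothetical (precision, recall) scores for evolved detection rules.
rules = [(0.9, 0.4), (0.6, 0.7), (0.8, 0.8), (0.5, 0.5), (0.7, 0.6)]
print(non_dominated_sort(rules)[0])  # Pareto-optimal rules
```

Here (0.8, 0.8) and (0.9, 0.4) form the first front: neither beats the other on both objectives, while every other rule is dominated by at least one of them.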
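The static black-box TCP abstract above prioritizes test cases by the diversity of their linguistic data. The following simplified sketch captures that idea with plain word-set overlap as a stand-in for full topic modeling (e.g., LDA); the test names and their linguistic content are invented for illustration.

```python
# Simplified sketch of diversity-based static black-box TCP: greedily
# pick the test whose linguistic data (identifiers, comments, string
# literals) overlaps least with the tests already selected. Plain word
# sets stand in for topic modeling; test names and contents are invented.

def tokenize(text):
    return set(text.lower().split())

def prioritize(tests):
    """Greedy max-diversity ordering over a {name: linguistic_data} dict."""
    remaining = dict(tests)
    ordered, covered = [], set()
    while remaining:
        # Choose the test sharing the fewest words with what is covered.
        name = min(remaining,
                   key=lambda n: len(tokenize(remaining[n]) & covered))
        covered |= tokenize(remaining.pop(name))
        ordered.append(name)
    return ordered

tests = {
    "testLoginSuccess": "login user password session success",
    "testLoginFailure": "login user password invalid error",
    "testReportExport": "report export csv file format",
}
print(prioritize(tests))
```

With these inputs, the export test jumps ahead of the second login test, since it exercises vocabulary (and thus, by assumption, functionality) not yet covered.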