Test selection to ensure the reliability of code changes

Abstract:

Today, most engineering teams depend heavily on automated tests to certify a code change. Often the regression test suite is fully automated, serving as the last-mile sign-off criterion for a production build. However, can we also automate the “test selection criteria” themselves, based on the impact of a code change?

Determining the right test impact can be challenging, especially in a large monolithic codebase or a complex product stack with many interdependent components. In such cases, a code change in one area or module can have a cascading impact on other areas as well. Unless this impact is identified and tested adequately, it can have unanticipated effects and increase defect leakage to production. The north-star success metric for any Quality Engineering team is to reduce defect leakage to production (ideally to zero), to be equipped to run the right tests, and to identify defects as early as possible in the development life cycle. A simple non-ML baseline for test impact analysis is sketched below; its limitation motivates the approach in this paper.
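To make the problem concrete, here is a minimal sketch of static test impact analysis in Python, assuming a hand-maintained coverage map from source files to the tests that exercise them (all file and test names are hypothetical). It selects only the tests that directly cover a changed file, so it misses the cascading impact described above.

# Static (non-ML) test selection: pick tests that directly cover the
# changed files. The coverage map and all names are illustrative.
coverage_map = {
    "billing/tax.py": {"test_tax_rates", "test_invoice_totals"},
    "billing/invoice.py": {"test_invoice_totals"},
    "auth/session.py": {"test_login", "test_logout"},
}

def impacted_tests(changed_files):
    """Union of all tests that directly cover the changed files."""
    selected = set()
    for path in changed_files:
        selected |= coverage_map.get(path, set())
    return selected

print(impacted_tests(["billing/tax.py"]))
# {'test_tax_rates', 'test_invoice_totals'} -- note that nothing from
# dependent modules is selected, which is the gap an ML model can fill.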

This paper outlines a proposal for automatically predicting the test impact of a code change, along with an automated framework for executing the identified tests. It aligns with the quality principle of “Test Fast, Fail Early”. The prediction model is built on machine learning and “learns” from previously identified customer and internal defects mapped to historical code changes. Insights from customer (production) defects help enrich the training set, increasing the probability of trapping defects internally within the test cycle that could otherwise have escaped to production. A minimal sketch of this prediction idea follows.
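As one possible shape of such a model (a sketch, not the authors' actual implementation), the changed file paths of a commit can be treated as text features and fed to a multi-label classifier whose labels are the test suites historically linked to defects in those files. The dataset, suite names, and 0.3 probability threshold below are all illustrative assumptions.

# Sketch: predict impacted test suites from changed file paths.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical history: files touched per change -> suites that later
# caught a defect (internal or customer-reported) from that change.
changes = [
    "billing/invoice.py billing/tax.py",
    "auth/session.py",
    "billing/tax.py reports/export.py",
]
impacted_suites = [
    {"billing_integration", "billing_unit"},
    {"auth_unit"},
    {"billing_integration", "reports_unit"},
]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(impacted_suites)

# Tokenize whole paths so shared directories/modules become features.
model = make_pipeline(
    TfidfVectorizer(token_pattern=r"[\w/.]+"),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(changes, y)

# Select every suite whose predicted impact probability clears a bar.
probs = model.predict_proba(["billing/tax.py"])[0]
selected = [s for s, p in zip(mlb.classes_, probs) if p >= 0.3]
print(selected)  # e.g. ['billing_integration', 'billing_unit']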

The solution can be scoped to integrate into the build pipeline, executing the identified unit and integration tests automatically before handing over to QA for full functional testing. This also promotes the Shift-Left principle: identify defects early, provide timely fixes, and thereby improve the reliability of code changes. A sketch of such a pipeline step follows.
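One way this integration could look (an assumption-laden sketch, not a prescribed design): a pipeline step diffs the branch against the mainline, asks the trained model for impacted suites, and runs them. It presumes the model and label binarizer from the previous sketch were pickled as model.pkl, that suite names map to pytest markers, and that origin/main is the comparison base.

# Sketch of a CI step: predict impacted suites, then run them.
import pickle
import subprocess
import sys

def changed_files(base="origin/main"):
    """Space-joined paths changed on this branch relative to base."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return " ".join(out.stdout.split())

def main():
    with open("model.pkl", "rb") as f:
        model, mlb = pickle.load(f)  # trained pipeline + binarizer
    probs = model.predict_proba([changed_files()])[0]
    suites = [s for s, p in zip(mlb.classes_, probs) if p >= 0.3]
    if not suites:
        # Fall back to a cheap safety net when nothing is predicted.
        suites = ["smoke"]
    for suite in suites:
        # A failing suite fails the build: "Test Fast, Fail Early".
        rc = subprocess.run(["pytest", "-m", suite]).returncode
        if rc != 0:
            sys.exit(rc)

if __name__ == "__main__":
    main()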

Benefits:

Following are the benefits and takeaways of this paper:

a) Ensure the reliability of code changes
b) Reduce defect leakage to production
c) Enable “Test Fast, Fail Early”, aligning with the Shift-Left test strategy
d) Automatically assess the test impact of a code change, saving time on initial test selection and execution
e) The ML model learns from historical customer defects, enabling customer-focused testing
f) Overall, helps improve both test efficiency and effectiveness:
i) Efficiency: the automated engine saves time in identifying which tests to run
ii) Effectiveness: the ML model determines the adequate test impact and ensures the right tests are selected

Tina Chatterjee
Senior Manager – Quality Engineering, Khoros

Tina Chatterjee is a Quality Engineering leader with 14 years of industry experience spanning software test strategy, test automation, performance testing, release management, and growing and managing high-performing QE and SDET teams. She currently heads the Quality Engineering function at Khoros Bangalore. Her previous workplaces include Amazon, Sprinklr and NetApp. A highly engaged quality engineer who strongly believes in Customer Obsession, she is passionate about introducing processes and tools to uplift product quality and drive customer success.

Arjun Naidu
Software Developer – Data Engineering, Khoros

Arjun Naidu is a Machine Learning Engineer at Khoros. Over his 8 years of experience at NetApp and Khoros, he has held roles as a QA engineer, full-stack developer, and machine learning engineer, and has developed a test automation framework, a stress-testing framework, and an orchestration framework. He currently works on Khoros's Machine Learning team, which provides AI as a service to the rest of Khoros, and potentially to its customers in the future.