Machine learning (ML) has tremendous potential to super-charge QA teams with excellent decision-support systems. ML can help QA professionals make quantitative decisions backed by sound statistical techniques. ML is the science of identifying useful patterns in data. These patterns often reveal hidden insights about the data and the process that generates it, and these insights serve as a basis for decision-making and forecasting. A variety of data (in both size and dimensions) is collected while testing software: execution logs, bug counts over time, time to fix defects, defect classifications, distribution of testing resources, and coverage information for test cases are captured at different stages of the SDLC. Failure data that captures where and why a failure occurs is also of interest. The question, then, is how to exploit this data with the help of ML to solve some of the day-to-day problems QA professionals tackle. This paper explains how some widely used machine learning techniques, such as regression, classification, and natural language processing (NLP), can help QA professionals in their day-to-day activities. Specifically, it explains a) how to use regression and prediction to forecast the bug count over time, b) how clustering algorithms can help in automatic classification of defects, and c) how NLP algorithms can help in generating language-agnostic test cases and finding similar defects. The paper also walks through some of the important concepts and parameters required by ML algorithms, with code snippets in R. The kind of data that has to be captured, the data-processing steps (cleaning and transforming), how to encode the data points, and which R packages to use will also be discussed. ML algorithms vary a great deal in their underlying assumptions, and these aspects will be discussed as well.
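As a minimal sketch of the first idea (forecasting bug counts with regression), the snippet below fits a simple linear model in base R and predicts the next three days. The daily counts here are hypothetical placeholders, not data from the paper; a real project would load its own defect log.

```r
# Hypothetical daily bug counts for 10 days of testing (illustrative only)
days <- 1:10
bugs <- c(5, 8, 12, 15, 14, 18, 21, 25, 24, 28)

# Fit a simple linear regression of bug count on day number
model <- lm(bugs ~ days)

# Forecast the bug count for the next three days
forecast <- predict(model, newdata = data.frame(days = 11:13))
print(round(forecast, 1))
```

A straight-line fit is only a starting point; count data with trend and noise is often better served by Poisson regression (`glm(..., family = poisson)`) or a time-series model, which is the kind of assumption-checking the paper discusses.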
Over the years I have developed rich expertise across popular test automation tools and acquired the skills needed to build advanced test automation frameworks. As an APM specialist, I have led performance testing teams, adopting the best approach to performance testing and ensuring that the methods implemented meet our clients' business needs. I have extensive experience in performance testing web-based and mobile-based applications at various load levels. I have conducted extensive product research on performance testing tools and developed code snippets for performance monitoring across various technology stacks. I am currently associated with IIT Madras, where I am involved in building an open-source business analytics stack for fin-tech. I am responsible for benchmarking (functional and performance) the various components used in our technology stack, and I am also exploring ML tools to build predictive analytics into the stack that we are building.
APM specialist with expertise in Synthetic User Monitoring, Real User Monitoring, Load Testing, and application stack monitoring and profiling.
Experience in mobile test automation platforms such as Appium and Selendroid.
Practical experience in data analytics with Hadoop and in machine learning with R.
Responsible for all new R&D work on emerging technologies; develop Java or shell script snippets as proofs of concept.
Experienced in building data analytics with Business Intelligence tools on large volumes of data. Sound understanding of automation test plans and frameworks. Expertise in creating reusable test automation components and frameworks using Java and shell scripts.
Expertise in developing monitors for Tomcat, MySQL, GlassFish, WebLogic, and Linux servers in Java.
Experience in setting up webpagetest.org servers, HAR Viewer, and mobile application performance monitors.
Mobile app performance testing and performance monitoring.