AI-Enabled Metrics for Release Decision

Developments in artificial intelligence (AI) can support faster, well-informed strategic decision-making by assessing data, recognizing patterns and variables in complex circumstances, and recommending optimal solutions. The purpose of AI in decision-making is not complete automation; rather, the goal is to help us make quicker and better decisions through streamlined processes and effective use of data.

In a QA cycle, we capture various metrics to gauge the testing we have done against baseline values set according to industry standards. In this article, we use an AI model to make the release sign-off decision, driven by automatically calculated metrics.
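As a minimal sketch of this idea, the snippet below compares a set of captured QA metrics against baseline values. The metric names, baselines, and captured figures are illustrative assumptions, not the actual values used in the projects described later.

```python
# Illustrative QA metrics compared against assumed industry baselines.
# Metric names and threshold values are examples, not the article's actual data.
BASELINES = {
    "test_pass_rate": 0.95,       # fraction of executed tests that passed
    "requirement_coverage": 0.90, # fraction of requirements covered by tests
    "defect_leakage": 0.05,       # fraction of defects found after release (lower is better)
}

def gaps_against_baseline(captured: dict) -> dict:
    """Return the metrics that fall short of their baseline."""
    gaps = {}
    for metric, baseline in BASELINES.items():
        value = captured[metric]
        # defect_leakage is a "lower is better" metric
        ok = value <= baseline if metric == "defect_leakage" else value >= baseline
        if not ok:
            gaps[metric] = (value, baseline)
    return gaps

print(gaps_against_baseline(
    {"test_pass_rate": 0.97, "requirement_coverage": 0.85, "defect_leakage": 0.03}
))
# -> {'requirement_coverage': (0.85, 0.9)}
```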

AI-Enabled Model

An AI-based release decision, often referred to as AI model deployment or rollout, involves determining when and under what conditions an AI system should be put into production or made available to end users. Here are some key considerations for making AI-based release decisions:

Model Evaluation: Before making a release decision, it’s essential to thoroughly evaluate the AI model’s performance using appropriate metrics. This evaluation should include various aspects, such as accuracy, precision, and any other relevant performance indicators. The model should meet predefined quality and accuracy standards.
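For illustration, here is a hedged sketch of such an evaluation gate in Python, using scikit-learn's standard metrics. The threshold values are assumptions and would be set per project.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical quality gates; the actual thresholds depend on the project.
QUALITY_GATES = {"accuracy": 0.95, "precision": 0.90, "recall": 0.90}

def evaluate_model(y_true, y_pred):
    """Score the model and list any metric that misses its gate."""
    scores = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }
    failures = {name: value for name, value in scores.items()
                if value < QUALITY_GATES[name]}
    return scores, failures

scores, failures = evaluate_model([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1])
print(scores)    # per-metric scores
print(failures)  # metrics below their gate, if any
```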

Here is the AI model designed…

From the above model, the most important decisions are derived, as described below:

Release Tollgate Decision

This decision applies the criteria for production readiness, determining whether or not to sign off the release for production. The decision is based on the provided metric values.
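A minimal sketch of such a tollgate check is shown below. The inputs (quality quotient, open critical defects, regression pass rate) and the gate values are illustrative assumptions; the article's actual production-readiness criteria are not spelled out.

```python
from dataclasses import dataclass

@dataclass
class ReleaseInputs:
    quality_quotient: float      # 0.0 - 1.0, from the Quality Quotient section
    open_critical_defects: int   # unresolved severity-1 defects
    regression_pass_rate: float  # 0.0 - 1.0

def tollgate_decision(inputs: ReleaseInputs,
                      quotient_gate: float = 0.95,
                      regression_gate: float = 0.98) -> str:
    """Return a go/no-go sign-off based on the provided values.

    The gate values here are illustrative defaults, not the
    article's actual production-readiness criteria.
    """
    if inputs.open_critical_defects > 0:
        return "NO-GO: open critical defects"
    if inputs.quality_quotient < quotient_gate:
        return "NO-GO: quality quotient below gate"
    if inputs.regression_pass_rate < regression_gate:
        return "NO-GO: regression pass rate below gate"
    return "GO: sign off for production"

print(tollgate_decision(ReleaseInputs(0.97, 0, 0.99)))  # -> GO
```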

Quality Quotient

The Quality Quotient is a percentage derived from established metrics used for assessing and improving software quality. The relevant parameters are captured, and the quality quotient is determined with a predefined formula. The decision is based on the resulting value, which falls within the range of 0% to 98%.
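The article does not spell out the predefined formula, so the sketch below assumes a weighted-average construction over common quality parameters. The parameter names and weights are hypothetical.

```python
# Hypothetical quality-quotient formula: a weighted average of common
# QA parameters expressed as fractions (0.0 - 1.0). The parameters and
# weights are illustrative; the article's predefined formula may differ.
WEIGHTS = {
    "test_pass_rate": 0.35,
    "requirement_coverage": 0.25,
    "defect_removal_efficiency": 0.25,
    "automation_coverage": 0.15,
}

def quality_quotient(params: dict) -> float:
    """Return the quality quotient as a percentage."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return 100.0 * sum(WEIGHTS[k] * params[k] for k in WEIGHTS)

print(round(quality_quotient({
    "test_pass_rate": 0.96,
    "requirement_coverage": 0.92,
    "defect_removal_efficiency": 0.90,
    "automation_coverage": 0.70,
}), 1))  # -> 89.6
```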

Testing & Validation

Extensive testing is necessary to identify and address potential issues, including edge cases that the AI model might encounter. Testing should cover a wide range of inputs to ensure the system’s robustness. Validation involves verifying that the AI model’s performance aligns with business objectives and requirements to contribute to the desired goals.
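As one hedged example of such edge-case coverage, the pytest sketch below exercises the tollgate function from the earlier sketch with boundary inputs. The `release_model` module is hypothetical (the earlier sketch saved to a file), and the cases are illustrative.

```python
import pytest

from release_model import tollgate_decision, ReleaseInputs  # hypothetical module

# Boundary and edge cases the decision logic should handle without surprises.
@pytest.mark.parametrize("quotient, criticals, regression, expect_go", [
    (0.95, 0, 0.98, True),     # exactly on both gates
    (0.9499, 0, 0.99, False),  # just under the quotient gate
    (1.0, 1, 1.0, False),      # perfect metrics but an open critical defect
    (0.0, 0, 0.0, False),      # degenerate all-zero inputs
])
def test_tollgate_edges(quotient, criticals, regression, expect_go):
    decision = tollgate_decision(ReleaseInputs(quotient, criticals, regression))
    assert decision.startswith("GO") == expect_go
```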

Use Cases

This model was evaluated on two projects. The first is in the social media domain, with weekly pushes to production. We integrated the model with the process of capturing the status of tests and defects through tools such as JIRA and qTest. The captured data is fed into a dynamic dashboard with built-in formulas for calculating the metrics needed for sign-off.
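As a sketch of how such status capture might look, the snippet below pulls an open-defect count from JIRA's standard REST search endpoint. The base URL, credentials, and JQL filter are placeholders, and a similar pull would apply to qTest's API.

```python
import requests

# Placeholder connection details; substitute your JIRA instance and credentials.
JIRA_BASE = "https://your-company.atlassian.net"
AUTH = ("user@example.com", "api-token")

def open_defect_count(project_key: str, severity: str) -> int:
    """Count open defects of a given priority via JIRA's search API."""
    jql = (f'project = {project_key} AND issuetype = Bug '
           f'AND priority = "{severity}" AND statusCategory != Done')
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # only the total count is needed
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total"]

# Counts like this feed the sign-off dashboard's metric formulas.
```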

The results proved very helpful in making the release decision. Feedback mechanisms helped the model evolve, and we are recommending the same approach to the customer.

The second is a financial-domain project with fortnightly releases. Here, the model gave indicative results for making the release decision.

Release decisions should be data-driven and grounded in a well-defined process that considers the AI system’s technical and business aspects. It’s crucial to strike a balance between delivering AI solutions swiftly and ensuring they adhere to quality, ethical, and security standards. Regularly reviewing and updating the release criteria is essential as the AI system evolves and new information emerges.



Author: Premalatha S
Premalatha Shanmugham has 13+ years of experience at Indium, focusing on Digital Assurance and delivery management. She holds an M.Phil. in Computer Science and has presented papers at various conferences, including STC, Step-IN, and eWIT forums. Her extensive experience in the Infrastructure, Grey-box, and TestOps domains has helped achieve quick release cycles, regression optimization, test optimization, and risk-free, quality products.