How to Overcome Platform Quality and Reliability Challenges

Software testing is a crucial part of product development. Every product must go through multiple rounds of meticulous testing before being placed in the hands of the end user or customer.

However, as the pace of application delivery has accelerated, enterprises continue to face a range of platform quality and reliability challenges in their software testing programs, such as:

  • Shorter release cycles while maintaining quality: long-running regression cycles lead to long test execution times. Nearly 60% of North American enterprises want production releases every two weeks or less, yet only 30% of test cases are automated.
  • Multiple test configurations, environments, and experiences lead to testing errors. With so many tools, frameworks, and technologies in play, it is hard to get an end-to-end view of test programs.
  • Enterprises want speed to value and product quality, but every release introduces multiple application changes, and they run into the “Wall of Confusion” between agility and stability.
  • They are looking for cost efficiency but face shrinking testing budgets and the high cost of test environment and execution infrastructure.
  • They recognize the need for improved speed and robustness in DevSecOps and the release process, but compressed timelines put test data preparation at risk.

To address these problems, we need Test Automation Frameworks: sets of guidelines that institutionalize the creation of test cases and optimize their execution.

Test Automation Frameworks generate test reports that capture the artifacts required for the next best action, such as the type of error: automation, configuration, data, or application defect. In principle, the framework should help automate the entire workflow using DevSecOps principles.

A Test Automation Framework is a conceptual part of automated testing that helps testers use resources more efficiently. A framework is defined as a set of rules or best practices that can be followed systematically to ensure the desired results.

The following frameworks are used in Automation Testing:

  • Linear Scripting Framework: also referred to as the record-and-playback model, this is a scripting-driven framework. Test scripts are created and executed individually and incrementally, with every new interaction added to the automation tests.
  • Modular Testing Framework: independent test scripts based on individual modules are developed to test the software. An abstraction layer hides the modules from the application during the test.
  • Data-Driven Testing Framework: a separate file in tabular format stores both the inputs and the expected output results. A single driver script can execute all the test cases with multiple sets of data.
    • This driver script contains the navigation logic that spans the program, handling both the reading of data files and the logging of test status information.
    • Furthermore, equivalence partitioning and boundary value analysis help create data-driven tests that achieve high code coverage while keeping the data sets small.
  • Keyword-Driven Testing Framework: an application-independent framework that uses data tables and keywords to describe the actions to be performed on the application under test. It can be seen as an extension of the data-driven testing framework and is often used for web-based applications.
  • Hybrid Testing Framework: a combination of the modular, data-driven, and keyword-driven test automation frameworks, drawing on multiple end-to-end testing approaches.
  • Test-Driven Development (TDD) Framework: uses automated unit tests to drive the design of software and to decouple it from its dependencies.
    • TDD defines use cases, increases the speed of tests, and improves confidence that the system meets its requirements and works correctly, compared with traditional testing.
  • Behavior-Driven Development (BDD) Framework: tests are based on system behavior. Testers create use cases in plain English, helping non-technical people analyze and understand the tests quickly.
    • FitNesse, for example, is an automated software testing tool and behavior-driven development framework built around decision tables.
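As a minimal sketch of the data-driven approach described above, the snippet below shows a single driver script executing the same test logic against multiple rows of tabular data. The function under test, the data, and all names are hypothetical; a real framework would read an external CSV or spreadsheet rather than an in-memory string.

```python
import csv
import io

# Hypothetical function under test: a simple discount calculator.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# The "separate file in tabular format": an in-memory CSV for brevity.
TEST_DATA = """price,percent,expected
100,10,90.0
59.99,0,59.99
200,50,100.0
"""

def run_data_driven_tests():
    """Driver script: reads each data row, runs the test, logs the status."""
    results = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = apply_discount(float(row["price"]), float(row["percent"]))
        passed = actual == float(row["expected"])
        results.append((row, passed))
        print(f"price={row['price']} percent={row['percent']} -> {actual} "
              f"[{'PASS' if passed else 'FAIL'}]")
    return results

run_data_driven_tests()
```

Adding a new test case is then just adding a row of data, with no change to the driver script, which is the core appeal of the data-driven style.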

Testing frameworks are centered on helping design test cases with better coverage.

  • Black-box techniques, deriving test cases directly from a requirement specification:
    • Boundary Value Analysis (BVA)
    • Equivalence Partitioning (EP)
    • Decision Table Testing
    • State Transition Diagrams
    • Use Case Testing
  • White-box techniques, deriving test cases directly from the structure of a component or system:
    • Statement Coverage
    • Branch Coverage
    • Path Coverage
    • LCSAJ Testing
  • Experience-based techniques, deriving test cases from the tester’s experience with similar systems or the tester’s intuition:
    • Error Guessing
    • Exploratory Testing
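To make equivalence partitioning and boundary value analysis concrete, here is a minimal sketch assuming a hypothetical eligibility rule that accepts ages 18 to 65 inclusive. The partitions are below-range, in-range, and above-range; BVA then tests the values at and immediately adjacent to each boundary, covering all three partitions with only six small test cases.

```python
def is_eligible(age):
    """Hypothetical rule under test: ages 18 to 65 inclusive are eligible."""
    return 18 <= age <= 65

# Boundary value analysis: test at and adjacent to each partition boundary.
BOUNDARY_CASES = [
    (17, False),  # just below the lower boundary (invalid partition)
    (18, True),   # lower boundary (valid partition)
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary (invalid partition)
]

def run_boundary_tests():
    """Return (age, passed) for each boundary case."""
    return [(age, is_eligible(age) == expected)
            for age, expected in BOUNDARY_CASES]
```

Off-by-one defects (e.g. writing `<` instead of `<=`) are exactly the bugs these boundary cases are designed to catch.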

These test design techniques help identify test conditions that are otherwise difficult to recognize.

These challenges can be avoided by adopting the right framework and using test design techniques to ensure better coverage. Moreover, test cases can now be made predictable through an understanding of the code and the entire SDLC.

This is accomplished by tracking the application from the requirements-gathering phase all the way to incidents in production.

Create “breadcrumbs” that let you identify every part of the application, from critical functions to dependencies; understand correlations, and distinguish important from unimportant parts and used from most-used ones. Study historical data to determine which areas require attention and the level of test coverage needed.

Based on these “breadcrumbs”, predictability allows you to project what will happen if changes are made to certain areas of the application. This helps predict build outcomes: whether a build will succeed and, if not, the likely reasons for its failure. Once the frameworks are equipped with an accurate predictive model, companies can plan better, save time and cost, and go to market faster.

In conclusion, it is safe to say software testing frameworks can be instrumental in helping enterprises achieve better productivity.

Watch this space for the next installment as we talk about shifting left with automation testing.

Software Testing: A Brief History And A Peek Into The Future

Many IT professionals tend to be preoccupied with the testing process. In some ways, testing is as old as mankind itself. We can all imagine the hunter-gatherers of thousands of years ago foraging for new food and asking themselves, ‘Is this edible?’, ‘Is this tasty?’, ‘Is this safe to eat?’ Perhaps testing became hardwired in us, as did a nascent scientific nature. Testing continued through the pre-industrial era, when people formed guilds to test product quality, and into the industrial era, for example with the early testing of suction-lift water pumps for steam engines.

Software testing is not a new concept. The testing phase of a product is one of the most important tasks of any business today. Before going to market, every product must go through multiple, meticulous tests that guarantee quality before it is placed in the hands of the end user or customer. When computing emerged, ‘Software Testing’, the measurement of how well software corresponded to its intended design, also gained prominence.

Software testing has followed its own evolutionary path, resulting in an end-to-end framework that is used today.

Software testing started its journey with ‘debugging’: to debug, one had to look for errors and fix them. Alan Turing wrote the very first article on testing in 1949, about carrying out checks on a program. Since then, testing has been through three phases, starting with the ‘demonstration period’ (1957–1978), when ‘test development’ was popular; the need to pass these tests grew as more expensive and complex applications were developed. From 1979, in the ‘destruction period’, software testing became the process of running a program with the intention of finding errors. Finally, in the ‘evaluation period’ (1983–1987), a methodology was proposed that integrated analysis, review, and testing activities throughout the software life cycle to obtain an evaluation of the product during development.

The current ‘prevention phase’ has redefined software testing. This phase encompasses the planning, design, construction, maintenance, and execution of tests and test environments. In addition, testing has become a core process in the Systems Development Lifecycle (SDLC), involving several technical and non-technical aspects that include specification, design and implementation, maintenance, process, and management. As businesses advanced their application deployment methods to match evolving business climates and needs, QA organizations came under tremendous pressure, and the increased adoption of DevOps and Agile forced QA to shorten testing cycles.

Software Testing: The Future

While the pace of application delivery has accelerated, QA organizations still need to ensure proper test case coverage across functional, regression, usability, integration, performance, and security testing. However, many continue to struggle with an inefficient, expensive, and error-prone manual testing approach that no longer fits the DevOps and Agile model. Software testing in the future (and increasingly today):

  1. Leverages automation: applying automation beyond test script execution and test case development enables organizations to drive more value from their quality assurance programs. However, to keep pace with the agile model of development, traditional test automation has proved inadequate; testing organizations need to innovate with new and emerging automation technologies.
  2. Combines QA with AI and ML: Intelligent Automation solutions are the next and perhaps most promising step in the testing journey. Intelligent automation enhances test quality using predictive analytics supported by Artificial Intelligence (AI) and Machine Learning (ML).

Many businesses have advanced their application deployment methods to match evolving business climates and needs, putting QA organizations under tremendous pressure, and the increased adoption of DevOps and Agile has forced QA to shorten testing cycles. There is now room for a next-generation test automation framework designed to power a Continuous Quality Engine, an amalgamation of automation, AI, and ML. Such a framework brings predictive intelligence to the test planning process by highlighting potential points of failure.

PAQman is Infogain’s next-generation test automation framework designed to power a Continuous Quality Engine. It brings machine-learning-driven predictive intelligence to the test planning process by highlighting potential points of failure. PAQman supports in-sprint test automation compatible with CI/CD pipelines and behavior-driven development for applications on web or mobile platforms, modern microservices or web services architectures, database testing, and traditional desktop applications. The module is fully integrated with DevSecOps tools.

This article is the first in a blog series covering predictive scenarios and modules under predictive analytics for quality.

Global Travel Industry Leader Experiences 90% Reduced Regression Execution Time with Automation Testing Solutions

Client Background

The client provides technology solutions to the travel network, airline solutions, and hospitality solutions travel segments. In addition, they operate a global travel marketplace, connecting travel buyers and suppliers. Headquartered in Texas, they serve customers in more than 160 countries.


Business & Technical Challenges

The large enterprise faced various challenges that were negatively affecting productivity and reporting. Challenges included:

  • High license cost for Unified Functional Testing (UFT)
  • Lack of consistent, uniform reporting of regression execution to management, because the 40 product teams used different reporting formats
  • Lack of root cause analysis (RCA) or history recorded for automation script failures
  • A large number (40) of product components with long regression cycles spanning 4 weeks
  • Thirteen different automation frameworks built with 8 different automation tools to automate approximately 46,000 test cases

Download the case study to read about Infogain’s approach to resolve these challenges and the benefits that resulted from it.


Intelligent compression of test cycles from weeks to days

Delivered significant savings in license cost and test execution effort for a product engineering company

The Client

The client develops software used by the automotive industry to manage collision and medical claims, replacements, and so on. It operates throughout the United States and Canada and processes more than 50 million transactions a year.


Business and Technical Challenges

  • Long test cycles of 3 weeks negatively impacting time-to-market
  • Production issues in a desktop product used by a large number of users, leading to huge support and patch-fix costs
  • Lack of a scalable test automation platform
  • Heavy dependence on licensed Microsoft test automation tools
  • Lack of reusability of test automation scripts between scrum teams and system testing teams
  • Manual testing was unable to validate all data combinations, leading to a high production defect backlog

Can Microservices revolutionize automated testing?


In the field of test engineering, microservices have been making waves in the testing community over the past year, and with good reason. With many companies investing in DevOps and favoring a more microservice-oriented framework of software development, testing practices will also need to change. Microservices will have various significant impacts on the future of testing.

Before we tackle these changes, let us define microservices. Microservices architecture is a branch of service-oriented architecture (SOA) that consists of several extremely narrowly focused services that, when brought together, function as an application. In contrast, a monolithic application is a single application comprising the client access code, business logic, and data layer combined.

The problem with monolithic applications is that even a minor change to a single line of code requires redeploying the entire application. Microservices present a novel solution to this problem: rather than redeploying all the code, an arduous and complex process, the target service can be altered and deployed individually.

A real story of this positive transition is the Amazon application. In 2001, the Amazon retail website was one large architectural monolith: a huge single code base and a clumsy, frankly outdated way to operate the application. In keeping with their forward-looking culture, they took the truly revolutionary approach of decoupling their service architecture to simplify their pipeline, eventually enabling them to roll out updates every 11.6 seconds. This is a testament to the value of microservices and demonstrates their feasibility for scalability, reliability, and availability. Other companies like Netflix and HelloFresh are following Amazon’s example and breaking up their apps as well.

As these development processes transform, optimal testing technology choices also change. Karate is a simple, elegant open-source tool that simplifies microservices testing and claims to make the business of testing web APIs fun. It does this by lowering the entry barrier to writing a test, increasing the readability of tests, and making them easier to maintain.

Furthermore, microservices will change the methodology of testing. When you want to make a specific change to a microservice, you can use stubs to isolate the application’s individual integration points from each other for unit testing, dramatically simplifying the testing process. You can also automate testing earlier in the process, because you won’t have to test the code that drives the user interface and experience, absolving you of the onerous, manual, often subjective evaluation of these components. On the other hand, testing a microservice architecture poses a unique challenge that did not really exist with monoliths: when running software tests on a microservice, you need to ensure not only that the specific microservice performs as expected, but also that all the microservices composing an application behave harmoniously together, as intended.
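As a minimal sketch of stubbing an integration point, the snippet below uses Python’s unittest.mock with a hypothetical OrderService that depends on a separate inventory microservice through an injected client. All class, method, and SKU names here are illustrative, not taken from any real system.

```python
from unittest.mock import Mock

# Hypothetical order service that calls a separate inventory microservice
# through an injected client object.
class OrderService:
    def __init__(self, inventory_client):
        self.inventory = inventory_client

    def place_order(self, sku, qty):
        # Integration point with the inventory microservice.
        available = self.inventory.get_stock(sku)
        if available < qty:
            return "rejected"
        self.inventory.reserve(sku, qty)
        return "accepted"

# Stub the integration point so the unit test needs no running service.
stub = Mock()
stub.get_stock.return_value = 5

service = OrderService(stub)
print(service.place_order("SKU-1", 3))   # accepted: stock covers the order
print(service.place_order("SKU-1", 10))  # rejected: insufficient stock

# The stub also records interactions, so we can verify the contract:
stub.reserve.assert_called_once_with("SKU-1", 3)
```

The unit test exercises the service’s own logic in isolation; verifying that the real services behave harmoniously together is then the job of separate contract and end-to-end tests.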

In summary, as teams adopt microservices, companies will observe significant simplifications in the way testing takes place. Infogain’s testing team provides microservices-based testing for Fortune 100 companies and is a world leader in automated, cognitive-automation-driven testing.

Vikas Mittal | Head – Testing Expert Centers and Solution Delivery