In this post, we will discuss the different phases of automation testing.
Automation Feasibility Analysis
Before we proceed with automation, we perform a feasibility study to decide which test cases are worth automating. Many criteria are considered, for example: functional and domain knowledge of the product or application under test, the development status of the product, the available skill set, and so on.
Tool Evaluation
In this phase, we evaluate various tools before picking the right one for automation. Identifying the right tool is key to the success of automation. It is important to know that no single tool satisfies all the requirements; the tool that meets most of the evaluation criteria should be picked.
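One common way to pick the tool that "meets most of the evaluation criteria" is a weighted score: rate each tool against each criterion, weight the criteria by importance, and compare totals. The criteria names and weights below are purely illustrative; real ones come out of your feasibility study.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ToolEvaluation {
    // Sum of (rating * weight) across all criteria for one tool.
    static double score(Map<String, Integer> ratings, Map<String, Double> weights) {
        double total = 0;
        for (var e : ratings.entrySet()) {
            total += e.getValue() * weights.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical criteria and weights (should sum to 1.0).
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("language support", 0.40);
        weights.put("CI integration", 0.35);
        weights.put("licensing cost", 0.25);

        // Ratings on a 1-5 scale for one candidate tool.
        Map<String, Integer> toolA = Map.of(
            "language support", 4,
            "CI integration", 5,
            "licensing cost", 3);

        System.out.printf("Tool A score: %.2f%n", score(toolA, weights));
    }
}
```

The highest-scoring tool is then the candidate taken forward into the proof of concept below.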
Proof of Concept
It is necessary to try the tool against a few use cases of the application under test (AUT). The PoC should give us confidence that automation using the tool will be successful.
Design Automation Framework
A framework is a set of guidelines for test automation. It abstracts complex under-the-hood implementations and enforces a set of standards for implementation and usage. Common types of automation frameworks are keyword-driven, data-driven, modular, and hybrid. Furthermore, we may build upon widely used frameworks and plug in, extend, and provide custom implementations where necessary, e.g. TestNG, JBehave, JUnit, etc.
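To make the data-driven idea concrete, here is a minimal sketch: one test routine fed by many data rows. Frameworks such as TestNG offer this declaratively (via @DataProvider); the loop is spelled out below so the sketch runs without any external library, and the login routine is a hypothetical stand-in for the application under test.

```java
import java.util.List;

public class DataDrivenSketch {
    // One data row: inputs plus the expected outcome.
    record Row(String user, String password, boolean expectValid) {}

    // Hypothetical system under test: accepts only non-empty credentials.
    static boolean login(String user, String password) {
        return !user.isEmpty() && !password.isEmpty();
    }

    public static void main(String[] args) {
        // The test data lives apart from the test logic; adding a
        // scenario means adding a row, not writing a new test.
        List<Row> rows = List.of(
            new Row("alice", "secret", true),
            new Row("", "secret", false),
            new Row("bob", "", false)
        );
        for (Row r : rows) {
            boolean pass = login(r.user(), r.password()) == r.expectValid();
            System.out.println(r + " -> " + (pass ? "PASS" : "FAIL"));
        }
    }
}
```

A keyword-driven framework takes the same separation one step further, externalizing the actions themselves (e.g. "enter text", "click") as data.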
Write Automation Test Scripts/Stories
Test scripts should be designed to be small, independent, and shielded from application or product changes. The test logic is often encapsulated in an annotated method, for example @Test. There could be dependencies between tests, but these should be minimal: tightly coupled tests are difficult to maintain and can become a maintenance nightmare.
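One common way to shield scripts from application changes is to route all interaction through a thin abstraction layer (the page-object style), so a changed locator is fixed in one class rather than in every test. The sketch below is illustrative: FakeDriver and LoginPage are hypothetical stand-ins, not a real WebDriver API.

```java
import java.util.HashMap;
import java.util.Map;

public class PageObjectSketch {
    // Stand-in for a real browser driver.
    static class FakeDriver {
        Map<String, String> fields = new HashMap<>();
        void type(String locator, String text) { fields.put(locator, text); }
        String read(String locator) { return fields.getOrDefault(locator, ""); }
    }

    // The only class that knows the locators; tests never see them.
    static class LoginPage {
        private final FakeDriver driver;
        LoginPage(FakeDriver driver) { this.driver = driver; }
        void enterUser(String user) { driver.type("#username", user); }
        String shownUser() { return driver.read("#username"); }
    }

    public static void main(String[] args) {
        FakeDriver driver = new FakeDriver();
        LoginPage page = new LoginPage(driver);
        page.enterUser("alice");
        // If the "#username" locator changes, only LoginPage is edited.
        System.out.println(page.shownUser());
    }
}
```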
Check-in to Code Repository
All certified, code-reviewed test scripts are checked in/pushed to the code repository. There are several techniques for this, and they often depend on the repository tool being used.
Continuous Integration System (Execution, Results and Analysis)
In this phase, we plug a CI job into the continuous integration server. We also decide things like how test results are reported, where to run the tests (environment), how often to run the job, upstream and downstream jobs, whom to send notifications to, etc. Examples: Jenkins, CircleCI, etc.
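As a sketch, a declarative Jenkins pipeline can wire these decisions together: the schedule, build step, result reporting, and notifications below are all illustrative placeholders (it assumes a Gradle-based test suite and a hypothetical mailing address).

```groovy
pipeline {
    agent any
    triggers { cron('H 2 * * *') }          // nightly run; schedule is illustrative

    stages {
        stage('Run automation suite') {
            steps {
                sh './gradlew test'          // assumes a Gradle-based suite
            }
        }
    }

    post {
        always {
            // Publish JUnit/TestNG-style XML results for trend reporting.
            junit 'build/test-results/test/*.xml'
            mail to: 'qa-team@example.com',  // hypothetical recipients
                 subject: "Automation results: ${currentBuild.currentResult}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```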
Analyze and Improve
Finally, we track build-to-build improvements on several metrics: what is going wrong and what needs to be improved. Generally, these are discussed in sprint retrospectives.