EDICOM and its integrated ecosystem for test automation

Sebastián Morcillo and Ignacio Serna, QA Automation Engineers at EDICOM, show us the important work carried out by our Quality and Testing Department. In this article, they take a closer look at the ecosystem of technologies we use to achieve the highest quality in the development of our software.

    Written by:

    Ignacio Serna

    QA Automation Engineer

    Specialized in QA tools development and test automation. Attracted by the world of CI/CD and quality improvement in software development processes.

    Sebastián Morcillo

    QA Automation Engineer

    Specialized in test automation and development of tools for process improvement. Passionate about new technologies that surround us every day.

Introduction

Software quality is one of the most important aspects of the development life cycle. It helps ensure the continuous delivery of software and mitigates the inevitable risks involved in its development.

In this article, we will discuss the ecosystem of technologies used by EDICOM to meet the challenges faced in the world of software quality. 

Context 

For some years now, EDICOM has been committed to providing its services through a cloud architecture. This has brought consequences such as faster distribution of new applications and services and their use from different devices, operating systems and browsers.

In the Quality Department, this “power up” in software delivery and availability also had its implications.

The main consequence was that implementing automated testing became almost mandatory. We had to test the API integrations with our services, the ever-changing web applications and their performance, and bring all of this into the continuous deployment processes. All of it had to run on a schedule and persist its results, allowing us to keep track of the status of our services and monitor them.

This is the context in which the following ecosystem of technologies arose to meet these objectives:

Test ecosystem infrastructure

The construction is divided into four parts, each one with a specific purpose:

  • EDICOM Automated Test Framework: A proprietary tool developed in TypeScript that encapsulates Jasmine, allowing us to implement and execute tests.
  • CI/CD: We use GitLab as our tool for source code management and CI/CD.
  • Reports: Corresponds to the management of the execution results. Two services are used for this: a report manager and a viewer for monitoring the status of the pipelines (Automonitor). Communication between the different parts is handled through an API, which is the backbone of this entity.
  • Persistence: This piece of the puzzle covers the persistence of the data, i.e. the reports generated for each test execution (Elasticsearch, MySQL and ELTA).

In the upcoming sections we will discuss how this architecture works and which technologies are used by the different entities.

EDICOM automated test framework

Before diving into the facilities provided by the test framework, it is important to highlight the types of automated tests we perform for EDICOM products. There are three main types:

  • Functional tests: tests focused on verifying the correct functioning of different product use cases. We include both API testing and frontend testing.
  • Performance tests: tests focused on verifying that the software performance is correct and does not decrease through updates/modifications.
  • Accessibility tests: tests focused on verifying that the products comply with the WCAG accessibility standards.

Performance and accessibility tests are not implemented under the umbrella of the test framework and will be addressed in other posts.

Internally known as the EDICOM Automated Test Framework, this is the part of the ecosystem through which our end-to-end automated tests (the functional tests) are developed and executed. It is built in TypeScript and executed in Node.js, and it integrates several key libraries as utilities. These technologies are encapsulated using the adapter design pattern: instead of adding the libraries and using them directly in our projects, we define an interface through which their functions are accessed (a sketch of this pattern is shown after the list below). This ensures flexibility in possible technology migrations and simplifies the logic. The main technologies are:

  • As the engine for implementing E2E tests we use the Jasmine runner. It is the core through which we create, execute and validate this type of test.
  • Another key technology is Axios, a Node.js-based HTTP client. As previously mentioned, the framework is responsible for encapsulating the utilities of the libraries it integrates. In this case, we have a class that handles everything associated with HTTP requests, from building the headers and body to processing the responses. The most interesting thing is that this class abstracts the construction of SOAP and REST requests, that is, it avoids having to build requests differently depending on whether the service is SOAP or REST. This greatly simplifies making requests to EDICOM services from test projects, which is widely used in API testing projects.
  • On the other hand, for the testing of our web applications, we use the recent Microsoft tool called Playwright. It is a framework that allows multi-browser and multi-platform execution through an API that interacts with the browser.
  • In addition to these technologies, the test framework is full of small libraries that provide various functionalities such as compression in different formats, comparison of .pdf files, sending and receiving e-mails as well as sending files via FTP and SFTP protocols. Thanks to them we can test the wide range of services offered by EDICOM.
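Below is a minimal sketch of the adapter idea described above, assuming a hypothetical HttpClient interface backed by Axios; the names are illustrative and do not correspond to the framework's actual API.

```typescript
import axios from 'axios';

// Simplified view of a response, independent of the underlying HTTP library.
export interface HttpResponse<T = unknown> {
  status: number;
  body: T;
}

// The interface that test projects depend on; swapping Axios for another
// client only requires a new class implementing this contract.
export interface HttpClient {
  get<T>(url: string, headers?: Record<string, string>): Promise<HttpResponse<T>>;
  post<T>(url: string, body: unknown, headers?: Record<string, string>): Promise<HttpResponse<T>>;
}

// Axios-backed adapter.
export class AxiosHttpClient implements HttpClient {
  async get<T>(url: string, headers?: Record<string, string>): Promise<HttpResponse<T>> {
    const res = await axios.get<T>(url, { headers });
    return { status: res.status, body: res.data };
  }

  async post<T>(url: string, body: unknown, headers?: Record<string, string>): Promise<HttpResponse<T>> {
    const res = await axios.post<T>(url, body, { headers });
    return { status: res.status, body: res.data };
  }
}
```

Since test projects depend only on the interface, a migration away from Axios (or any of the other wrapped libraries) stays confined to the adapter.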

At the same time, the environment has other utilities, such as a project auto-generator and a tool that lets us run projects quickly. Both consist of a series of command-line instructions for generating and launching projects.

Going back to the diagram, this part of the ecosystem connects to two key pieces: the report manager and GitLab. The first of these relationships is responsible for sending test results to a service that stores and displays them on a web page. The connection with GitLab is obvious. We need to store the projects in a code repository that also allows us to schedule their executions.

In the following sections we will discuss both parts of the diagram.

CI/CD

We use GitLab as a code repository, version control and CI/CD tool. Thanks to GitLab we can define pipelines that run projects on a scheduled basis and thus track the status of services.

On the other hand, since GitLab is based on Git, we can add our test projects as submodules of the services. This lets us define additional steps in the pipelines that are launched when new versions of a service are deployed. These additional steps are where we run the service’s test suite, allowing us to roll back the deployment if the tests fail.

All the results of the pipeline executions are sent to a monitoring service that we will talk about later.

Report service

This part of the diagram is responsible for monitoring the pipelines and the results of project execution. It is composed of three distinct parts: the reporting API, the reporting web and the monitoring tool (Automonitor). We will talk about all of them below.

The reporting API is one of the most important parts of the entire ecosystem. Its function is to store and retrieve the results of project executions. It is built with the Nest.js framework and uses TypeORM to map objects to MySQL databases. Its main consumer is the test framework, which stores the results in the report manager at the end of each execution.
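As an illustration, a report entity and a minimal endpoint built with Nest.js and TypeORM could look something like the sketch below; the field names and routes are assumptions, not EDICOM's actual schema.

```typescript
import { Entity, PrimaryGeneratedColumn, Column, Repository } from 'typeorm';
import { Controller, Get, Param } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';

// Hypothetical report entity mapped to a MySQL table by TypeORM.
@Entity()
export class Report {
  @PrimaryGeneratedColumn()
  id: number;

  @Column()
  projectName: string;

  @Column()
  executedAt: Date;

  @Column()
  passed: boolean;
}

// Minimal controller exposing stored reports to consumers such as the web.
@Controller('reports')
export class ReportsController {
  constructor(
    @InjectRepository(Report) private readonly reports: Repository<Report>,
  ) {}

  @Get(':id')
  findOne(@Param('id') id: string): Promise<Report | null> {
    return this.reports.findOneBy({ id: Number(id) });
  }
}
```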

The next key application in this part of the diagram is the reporting web page. Built in Angular, it is responsible for displaying the results of the different executions of each project in a visual and intuitive way. For each of the tests that make up the execution of a project, the HTTP calls that have been launched are stored in the report manager and then consumed by the web page. Going deeper, for each HTTP call the request and response headers and body are stored.
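On the consumer side, a hedged sketch of how the Angular web might retrieve the HTTP calls of a test from the reporting API is shown below; the endpoint path and response shape are assumptions.

```typescript
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// Assumed shape of a stored HTTP call as displayed by the reporting web.
export interface HttpCallRecord {
  url: string;
  method: string;
  requestHeaders: Record<string, string>;
  requestBody: unknown;
  responseHeaders: Record<string, string>;
  responseBody: unknown;
  status: number;
}

@Injectable({ providedIn: 'root' })
export class ReportService {
  constructor(private readonly http: HttpClient) {}

  // Fetches the HTTP calls recorded for a given test within a report.
  getHttpCalls(reportId: number, testId: number): Observable<HttpCallRecord[]> {
    return this.http.get<HttpCallRecord[]>(
      `/api/reports/${reportId}/tests/${testId}/http-calls`,
    );
  }
}
```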

So, in summary, when we access a project report we see the result of each test and, for each of them, all the HTTP calls launched with their respective requests and responses.

Later, during a real use case we will see images of this web page.

Finally, we have an application for monitoring GitLab pipelines, internally known as “Automonitor”. A grid view shows the execution status of the pipelines of the different projects, grouped by tags. The value of this application is that, at a glance, you can see whether or not the latest changes to the services have introduced errors.

This application also lets us relaunch pipelines without having to access GitLab, and it shows the last report generated for each project so we can see why it failed.

As before, during the use case we will see images of this application.

Persistence

In terms of the persistence of the information we work with throughout the ecosystem, we use three different technologies depending on what data we want to persist.

All HTTP call information (headers, parameters, body or URL) is stored in Elasticsearch. This technology was chosen because this data can be very large in some cases.
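Purely as an illustration, indexing one of these HTTP call documents with the official Node.js Elasticsearch client might look like this; the index name, node URL and document shape are assumptions.

```typescript
import { Client } from '@elastic/elasticsearch';

const es = new Client({ node: 'http://localhost:9200' }); // placeholder node URL

// Stores the full request/response payload of one HTTP call.
export async function indexHttpCall(call: {
  url: string;
  method: string;
  requestHeaders: Record<string, string>;
  requestBody?: unknown;
  responseBody?: unknown;
  status: number;
}): Promise<void> {
  // Large bodies are exactly the kind of data that motivates Elasticsearch
  // here rather than a relational table.
  await es.index({
    index: 'http-calls',
    document: { ...call, indexedAt: new Date().toISOString() },
  });
}
```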

On the other hand, we use MySQL to maintain tables representing projects, reports, users and the relationships between them.

Finally, we also rely on a proprietary service known as ELTA (EDICOM Long-Term Archiving), a document storage service. In our case, we use it to store all the attached information: screenshots, files, etc.

Now that all the parts of the ecosystem have been presented, we can work through a use case that will help us understand in depth what each of these gears provides.

Use case

For the use case we are going to use an open REST API called Cat Fact API. The idea is to set up a small test that validates the operation of one of the service’s endpoints and shows the entities of the diagram in more detail.

The following image shows the code associated with the test:

Code to test “fact” method of Cat Fact API

The test validates that the “fact” method returns a quote with a maximum length of 30 characters, as indicated in the queryVars. The other important element is the “client” object. It is built from a properties file containing the basic information of the services to which we want to send HTTP requests. In our case, the file models the information of the API we are working with:

Cat Fact API endpoint properties

The main advantage of modeling calls to services in this way is that it allows us to work with multiple clients making calls to all types of services.
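For readers who prefer text to screenshots, a rough approximation of this test is sketched below, using plain Jasmine and Axios instead of the framework’s “client” helper; the property names and values are assumptions based on the description above.

```typescript
import axios from 'axios';

// Stand-in for the properties file that models the Cat Fact API (see the
// "Cat Fact API endpoint properties" image); names are illustrative.
const catFactApi = {
  baseUrl: 'https://catfact.ninja',
  factEndpoint: '/fact',
};

describe('Cat Fact API', () => {
  it('returns a fact no longer than 30 characters', async () => {
    // Equivalent to the queryVars mentioned above.
    const queryVars = { max_length: 30 };

    const res = await axios.get<{ fact: string; length: number }>(
      `${catFactApi.baseUrl}${catFactApi.factEndpoint}`,
      { params: queryVars },
    );

    expect(res.status).toBe(200);
    expect(res.data.fact.length).toBeLessThanOrEqual(30);
  });
});
```

In the real project, the framework’s client builds the request from the properties file and records the call so that it appears later in the report.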

As mentioned above, when the project is executed, the framework sends the test results to the report manager. In the following image we can see that, after the execution, along with the results a link to the generated report is also printed:

Results of the execution printed on the terminal

Regarding the reporting website, this is what it looks like:

Report service webpage

For each test listed we can see the HTTP calls launched by clicking on the button with the cloud icon. In the modal that appears, as already mentioned, we can see the headers, parameters, body, etc. of the request and response of the different calls:

View of the information on the HTTP calls of a test

Located a little further down we find the body of the request response:

Request response body

Once the project is implemented, the next step is to add its execution as an additional job in the product pipelines. In this way, each time changes are made to the product, different tests are run to ensure that the changes have not introduced errors.

On the other hand, these executions can be scheduled in GitLab. To do this, with the project hosted in a repository, we define a file called “.gitlab-ci.yml” that contains everything necessary to run the pipelines. Among other things, this file indicates the different stages of the pipeline. In our case, the pipeline contains two: the build stage and the test stage, where the project is executed:

GitLab pipeline and its stages
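A minimal “.gitlab-ci.yml” for a pipeline with these two stages might look like the following sketch; the image and script names are assumptions, not the actual EDICOM configuration.

```yaml
stages:
  - build
  - test

build:
  stage: build
  image: node:18          # assumed Node.js image
  script:
    - npm ci
    - npm run build

e2e-tests:
  stage: test
  image: node:18
  script:
    - npm test            # assumed command that launches the test project
```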

Inside the job where the tests are executed, we can find the URL of the stored report with the test results, just as it is shown when we run the project locally. Finally, to have this scheduled execution we define a GitLab pipeline schedule using cron syntax; for example, “0 3 * * *” to run every day at three o’clock in the morning.

GitLab pipeline scheduler

The results of these pipeline executions are what we send to the Automonitor application, allowing us to monitor the status of the services. The following image shows the execution results of the pipelines of five projects, including the small demo we created:

Automonitor

Conclusion

All this integration of technologies has one purpose: to meet the needs that arise when testing our applications. The fact that the framework integrates so many libraries, and that they can be swapped without changing the projects, is a great advantage: it gives us robustness and flexibility in testing.
On the other hand, the reporting system gives us valuable current and historical information about test results and the project monitor provides the current status of E2E projects.

We could summarize that this ecosystem provides us with:

  • Flexibility in test implementation.
  • Easy migration to other technologies thanks to design patterns.
  • Maintainability of the projects.
  • Adaptability to different environments by means of configuration files.
  • Continuous monitoring of services.
  • Reliability and speed in the execution of tests and visualization of results.
  • Traceability of the execution results.