Almost all projects have manual testing, so we regularly have to conduct regression testing and smoke testing as well. On one of JET BI's projects we decided to automate the smoke test. The manual smoke test took 2 hours, and since it was run most often, we chose it as the starting point for automation.
We chose CodeceptJS as the framework, and for several reasons.
What is CodeceptJS?
CodeceptJS presents itself as a handy Node.js-based framework for writing e2e tests. All the necessary tools are already under the hood: it bundles the Mocha test runner and assertion libraries like Chai, and it can be integrated with Cucumber for a BDD approach. Inside CodeceptJS, browser control is delegated to one of several backends: WebDriver (via webdriverio or Selenium), Puppeteer, Playwright, Protractor, NightmareJS, or Appium. Thus, there is no need to learn the syntax of each of these tools separately. On top of that, the CodeceptJS syntax is convenient and clear even for business users.
CodeceptJS has no obstacles to working with Salesforce: it integrates with the platform and can both perform UI steps and interact with the org via API.
Usability of CodeceptJS
CodeceptJS supports data-driven tests and an interactive debugging process: you can pause test execution and import the successful steps back into your code. It provides commands for API requests out of the box, reports test results via Allure and Mocha XML, produces clear console output, and offers commands that are easy for anyone to read. For example:
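As a sketch of how readable the step syntax is (the page, fields, and values below are invented for illustration; in a real project the `I` actor is provided by CodeceptJS itself, while here a tiny stub records the steps so the snippet runs standalone):

```javascript
// Minimal stub actor: records each step instead of driving a browser.
const steps = [];
const I = new Proxy({}, {
  get: (_, method) => (...args) => steps.push(`I.${method} ${args.join(' ')}`),
});

// The test body reads almost like plain English:
I.amOnPage('/login');
I.fillField('Username', 'qa.user@example.com');
I.fillField('Password', 'secret');
I.click('Log In');
I.see('Home');

console.log(steps.join('\n'));
```

Even someone who has never written a test can follow what this scenario does.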
Here is how the successful steps look in the console:
The framework code is built on the PageObject pattern. All the code is divided into layers so that even a non-specialist can read and maintain it. In the first layer we see only general commands written in words that anyone can understand. For example, 'user.checkStatus' means a command where the user checks the status of a request:
Inside this command, on the second layer, there is a whole list of actions, waits, and checks. For example:
The last layer contains all the element selectors, stored in variables. The rest of the code refers to these variables, so if a selector changes, the change has to be made in only one place in the code:
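Putting the three layers together, a simplified sketch might look like this (all names and selectors are invented for illustration; `I` would normally be the CodeceptJS actor, here replaced by a stub so the snippet runs standalone):

```javascript
// Layer 3: every selector lives in one place; change a tag, edit one line.
const selectors = {
  statusBadge: 'lightning-badge.request-status',           // hypothetical
  requestRow: (name) => `//tr[td[text()="${name}"]]`,      // hypothetical
};

// Layer 2: the detailed actions, waits, and checks behind one command.
const user = (I) => ({
  checkStatus(requestName, expected) {
    I.waitForElement(selectors.requestRow(requestName), 10);
    I.click(selectors.requestRow(requestName));
    I.waitForElement(selectors.statusBadge, 10);
    I.see(expected, selectors.statusBadge);
  },
});

// Layer 1: the scenario only says *what* happens, in plain words.
const performed = [];
const fakeI = new Proxy({}, { get: (_, m) => (...a) => performed.push(m) });
user(fakeI).checkStatus('Laptop request', 'Approved');
console.log(performed);  // the low-level steps hidden behind one command
```

If a selector breaks, only `selectors` changes; if a flow changes, only layer 2 changes; the readable scenarios on top stay stable.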
In the same way, we have helper classes for the Salesforce query API, also split into layers. In total, five test suites were written for the smoke test, comprising eight e2e tests.
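A hedged sketch of what such a layered query helper might look like. The endpoint shape (`/services/data/vXX.X/query?q=...`) is Salesforce's standard REST query resource; the class names, object names, and injected transport are our own illustration (the stub transport just echoes the URL so the snippet runs without a network):

```javascript
// Bottom layer: builds the request, knows nothing about business objects.
class SalesforceClient {
  constructor(instanceUrl, transport) {
    this.instanceUrl = instanceUrl;
    this.transport = transport;   // injected so tests can stub out HTTP
  }
  query(soql) {
    const url = `${this.instanceUrl}/services/data/v52.0/query?q=${encodeURIComponent(soql)}`;
    return this.transport(url);
  }
}

// Top layer: readable, business-level questions.
class RequestApi {
  constructor(client) { this.client = client; }
  getStatus(requestName) {
    return this.client.query(
      `SELECT Status__c FROM Request__c WHERE Name = '${requestName}'`
    );
  }
}

// Usage with a stub transport that simply returns the built URL:
const api = new RequestApi(
  new SalesforceClient('https://example.my.salesforce.com', (url) => url)
);
const res = api.getStatus('Laptop request');
console.log(res);
```

Swapping the stub transport for a real authenticated HTTP call is the only change needed to run this against an actual org.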
Possible difficulties in writing the code
The code should be written so that when the input data changes, you have to make as few changes as possible. For example, if you switch to a different testing org, you should only need to update the credentials file, and everything should work as before. This structure has to be thought through from the start, which causes some difficulties, especially since some parameters still have to be passed via the console command when launching the tests so that they can be used later in the code.
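One way to wire this up is to keep per-org defaults in a single credentials file and let launch-time parameters override them. A minimal sketch (the file layout, key names, and environment variable names are assumptions, not the project's actual code):

```javascript
// Would normally live in a separate credentials.js file:
const defaults = {
  url: 'https://test.salesforce.com',
  username: 'qa.user@example.com',
  password: 'secret',
};

// Parameters passed at launch override the file, e.g.:
//   ORG_URL=https://other.my.salesforce.com npx codeceptjs run
function resolveConfig(env) {
  return {
    url: env.ORG_URL || defaults.url,
    username: env.ORG_USER || defaults.username,
    password: env.ORG_PASS || defaults.password,
  };
}

console.log(resolveConfig({}));   // nothing passed: file values win
console.log(resolveConfig({ ORG_URL: 'https://other.my.salesforce.com' }));
```

With this shape, switching orgs is a one-file edit or a one-variable override on the command line.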
Sites based on Salesforce can pose difficulties of their own, since a Salesforce org may use LWC components, which produce custom tags (shadow DOM elements). Such elements can sometimes be hard to locate, but CodeceptJS has special locators for working with the shadow DOM:
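With the Puppeteer or WebDriver helper, CodeceptJS accepts a shadow locator object that lists the chain of shadow hosts down to the target element. A sketch (the tag names below are invented; a real LWC page defines its own custom elements):

```javascript
// A shadow locator: an array of shadow hosts leading to the element.
const locator = { shadow: ['c-request-list', 'c-request-row', 'button'] };

// In a test it is passed straight to the actor, e.g.:
//   I.click({ shadow: ['c-request-list', 'c-request-row', 'button'] });
console.log(locator.shadow.join(' > '));
```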
Configuring multiple browsers can also be a challenge. In practice, CodeceptJS can work with Chrome and Firefox and run them in parallel, which proved to be enough for us.
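Parallel multi-browser runs are configured in the `multiple` section of `codecept.conf.js` and started with `npx codeceptjs run-multiple`. A sketch (the suite name `smoke` is our example):

```javascript
// Fragment of codecept.conf.js:
// started with `npx codeceptjs run-multiple smoke`.
const config = {
  multiple: {
    smoke: {
      browsers: ['chrome', 'firefox'],  // same suite, both browsers, in parallel
    },
  },
};
console.log(config.multiple.smoke.browsers.join(', '));
```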
Integration with Jenkins
After the tests were written, the next question was how to automate running them. Our company's CI processes are implemented with GitLab and Jenkins, so we decided to use Jenkins.
Jenkins is software that provides continuous integration and continuous delivery (CI/CD). In our case Jenkins runs on a remote server, and ideally the tests should run automatically after the developers deploy code to the test org.
There are two main options for creating a job: a freestyle project and a pipeline project. For integration with developer actions a pipeline project is the better choice, although a freestyle project can also work. The difference is that a pipeline uses a Jenkinsfile that describes all the necessary actions in a unified syntax, which makes it faster and more convenient to create, use, and link with other jobs. A freestyle project does the same things, but each individual action has to be configured manually.
The Jenkinsfile simply lives in the Git repository, so no additional configuration is needed on the Jenkins side. This file describes everything related to the job: the schedule or triggers for running the tests, preliminary steps before the launch, post-test reports, and integration with other systems such as Jira or Jira add-ons.
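A minimal declarative Jenkinsfile for such a job might look like this. The stage names, trigger, shell commands, and the `output` folder are assumptions for illustration; the `allure` step comes from the Jenkins Allure plugin described below:

```groovy
pipeline {
    agent any
    triggers { pollSCM('H/15 * * * *') }     // or a GitLab webhook trigger
    stages {
        stage('Install') {
            steps { sh 'npm ci' }            // install test dependencies
        }
        stage('Smoke tests') {
            steps { sh 'npx codeceptjs run --reporter mocha-junit-reporter' }
        }
    }
    post {
        always {
            allure results: [[path: 'output']]   // publish the Allure report
        }
    }
}
```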
Jenkins pipeline job configuration and possible difficulties
We will cover only some of the nuances of creating a pipeline project. First, we need to connect our Jenkins job to our GitLab:
To do this, we specify the repository and add credentials for our GitLab instance. The test code can be taken from any branch; in our case the master branch is used.
The NodeJS plugin must be installed in Jenkins (it may already be installed):
Then, most likely, you will face the following issue on your first attempt to run the tests (after installing all of the dependencies):
This error is reproduced only on Linux. To solve it, we need to install the Xvnc plugin:
It is convenient to display test results with reports. We used Allure Report. The results might look something like this:
To configure Allure Report on Jenkins, the Allure plugin should be installed:
Nothing needs to be specified for it in Global Configuration. After installation, the Allure Report section appears in the project configuration:
“Output” is the name of the folder where the test framework stores the Allure report output files. Jenkins picks up these files and builds the Allure report on the Jenkins side from them:
Integration of Jenkins with Jira (Xray)
It would be nice if the test results were recorded in our Jira, where we use Xray (a test management add-on for Jira), wouldn't it? To implement this, we need the following steps.
1. Install the Xray plugin:
2. Create a Client ID and Client Secret in our Jira Xray instance for the particular user who will be the reporter of the Jira issues. You can find more info here.
Make sure this user has access to the Jira project you use. Otherwise you will run into a lot of problems here, because error messages from Jira are not always clear.
3. Go to Global Configuration, add credentials (the username is the Client ID, the password is the Client Secret), set a title for our Jira instance, and select the Cloud or Server version:
Check your connection to the instance:
4. Go to the project configuration and select our Jira instance. Then, depending on the type of test report we are using, select the format. The most useful format CodeceptJS can generate is JUnit XML (or JUnit XML Multipart if you want to specify additional parameters in JSON).
For JUnit XML we need to set the path to the report generated by CodeceptJS and enter the key of the Jira project. The other fields can be left empty:
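On the CodeceptJS side, the JUnit XML report is produced through the Mocha JUnit reporter. A sketch of the relevant config fragment (the `output/result.xml` path is an example; the `mocha-junit-reporter` package must be installed):

```javascript
// Fragment of codecept.conf.js: write the JUnit XML report to output/result.xml.
const config = {
  mocha: {
    reporterOptions: {
      mochaFile: 'output/result.xml',
    },
  },
};
// Run with: npx codeceptjs run --reporter mocha-junit-reporter
console.log(config.mocha.reporterOptions.mochaFile);
```

Whatever path is set here is the same path that goes into the report field of the Jenkins job.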
For JUnit XML Multipart we also need to specify the summary of the Test Execution Jira issue, the assignee, and so on in JSON. But first we have to figure out which issue type is used for test executions in our Jira instance, as it can differ between instances.
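The multipart JSON follows Jira's standard issue-creation format. Everything below (project key, summary, assignee) is an example; the issue type name must match what your instance actually uses, and on Jira Cloud the assignee is usually set by accountId rather than name:

```json
{
  "fields": {
    "project": { "key": "PROJ" },
    "summary": "Automated smoke test run",
    "issuetype": { "name": "Test Execution" },
    "assignee": { "name": "qa.user" }
  }
}
```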
Generally, one Test Execution issue will be created, along with several test cases, depending on the number of tests in the run. This is what the XML report specifies.
We use the same kind of JSON for the test case Xray issues:
If such test cases have already been created, no new ones will be created. But how does Xray recognize the existing ones? By this field:
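In a JUnit report each test is a `testcase` element, and Xray derives the test's Definition from its `classname` and `name` attributes; as long as these stay stable, new runs map onto the same Xray test instead of creating duplicates. The attribute values below are examples:

```xml
<testcase classname="smoke.login" name="user can log in" time="4.2"/>
```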
This Definition field is set from the XML report during the first test run. Finally, we have Xray tickets in our Jira instance:
Thus we got automated smoke testing on our Salesforce project, and there is no need to run the manual smoke test after each sprint. Writing the code took 24 hours, so the automation will pay for itself in 12 sprints.