QA Automation Engine


Automated test suites for multiple CUAHSI software systems, along with associated tools and infrastructure

This repository contains:

  1. Selenium-based automated test suites for the CUAHSI HydroClient and HydroShare systems.
  2. An automated testing framework to support the rapid development of highly readable and maintainable test cases.
  3. Scripts to support execution of the automated test suites on Jenkins and Selenium Grid installations.
  4. Dockerfiles and Docker Compose configuration files to facilitate the creation of servers for QA/CI.
  5. A simulation to characterize and communicate the parallel test execution process.
  6. Additional documentation to explain the theory and purpose of the testing system.

The test suites are designed to run within a Jenkins plus Selenium Grid environment on CUAHSI-managed hardware. Jenkins handles test initiation and results interpretation, while Selenium Grid handles test case execution.

Table of Contents

  1. Background
  2. Install
  3. Usage
  4. Maintainers
  5. Additional documentation
  6. Contribute
  7. License

Background

Jenkins

The test suite execution process begins with a trigger of the "command-core" Jenkins job. This job uses the Jenkins API to create a job for any new test case and to run the job for each test case. Each test case is given a separate job in Jenkins, which makes it easy to analyze historical test execution data for a specific test. For more information, review the continuous integration README in docker/continuous-integration/.
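As a rough illustration of the kind of Jenkins REST API calls involved - this is only a sketch, not the command-core implementation, and the URL, credentials, job name, and config.xml shown here are placeholders:

import requests

JENKINS = "http://localhost:8080"     # placeholder Jenkins URL
AUTH = ("admin", "api-token")         # placeholder user / API token
JOB_NAME = "example-test-case-job"    # placeholder job name

with open("config.xml") as f:         # placeholder job definition
    config_xml = f.read()

# Create the job for a new test case
resp = requests.post(
    f"{JENKINS}/createItem",
    params={"name": JOB_NAME},
    data=config_xml,
    headers={"Content-Type": "text/xml"},
    auth=AUTH,
)
resp.raise_for_status()

# Trigger a build of the job
requests.post(f"{JENKINS}/job/{JOB_NAME}/build", auth=AUTH).raise_for_status()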

Selenium Grid

The Selenium Grid system uses Docker containers for rapid parallel execution of test suites. The number and types of Selenium Grid nodes do not need to be established a priori. Rather, the Selenium Grid hub allocates test cases based on what nodes are available at the time. For more information, review the continuous integration README in docker/continuous-integration/.

Install

Infrastructure

Please refer to the README in docker/continuous-integration/ for information on how to spin up a QA automation server, which facilitates a continuous integration pipeline. Other Selenium Grid systems can also be used for ad hoc suite executions; see the Usage section for details. The remaining information in this Install section pertains only to standalone suite executions.

Python Packages

For standalone executions, Python packages should be installed with:

$ pip3 install -r requirements.txt

Browser Driver

For standalone executions, a browser driver must be downloaded into a system directory. Further, this system directory must be included in the PATH environment variable. For the Firefox browser, the Gecko driver should be downloaded.
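For example, on a Linux x86_64 machine the Gecko driver could be installed roughly as follows (the version shown is only an example - check the geckodriver releases page for the current release; /usr/local/bin is typically already on the PATH):

$ wget https://github.com/mozilla/geckodriver/releases/download/v0.34.0/geckodriver-v0.34.0-linux64.tar.gz
$ tar -xzf geckodriver-v0.34.0-linux64.tar.gz
$ sudo mv geckodriver /usr/local/bin/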

Usage

Test Execution

The test suite can be run standalone - without the Jenkins and Selenium Grid infrastructure - for test script development and test suite debugging purposes.

Before running, it is recommended that you copy the env.default file into .env and edit the values accordingly. Alternatively, you can export the environment variables in your shell before running the QA tests. For example, to override the value in the .env file:

$ export HS_GITHUB_ORG=hydroshare

To run all test cases (not just those defined in the configuration file):

$ ./hydrotest hydroclient

Specific tests can be executed by including the class and method names, for example:

$ ./hydrotest hydroclient HydroclientTestSuite.test_A_000002

When initiating a Selenium Grid execution, provide the IP of the Selenium Grid hub as an argument. The port is assumed to be the Selenium Grid default of 4444.

$ ./hydrotest hydroclient --grid 127.0.0.1
$ ./hydrotest hydroclient HydroclientTestSuite.test_A_000002 --grid 127.0.0.1

To select a browser for test execution, provide a browser name with the --browser argument. The available choices are firefox, chrome, and safari; the default is firefox.

$ ./hydrotest hydroclient --browser chrome
$ ./hydrotest hydroclient --browser safari HydroclientTestSuite.test_A_000002

For loading test data into AWS, first export the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, for example (placeholder values shown):
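$ export AWS_ACCESS_KEY_ID=<your-access-key-id>
$ export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

Then, set AWS as the records target: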

$ ./hydrotest hydroclient --records aws
$ ./hydrotest hydroclient --records aws HydroclientTestSuite.test_A_000002

To set the target against which tests will be run at runtime, add the --base argument:

$ ./hydrotest hydroclient --base https://www.hydroshare.org
$ ./hydrotest hydroclient HydroclientTestSuite.test_A_000002 --base https://www.hydroshare.org

To run headless (currently supported in Chrome or Firefox), add the --headless argument:

$ ./hydrotest hydroclient --headless
$ ./hydrotest hydroclient HydroclientTestSuite.test_A_000002 --headless

Jenkins Deployments

After following the README in docker/continuous-integration/ to set up a QA automation server, the tests to run can be configured using the following configuration files:

  1. hydroclient.conf
  2. hydroshare.conf
  3. dsp.conf

Flake8 Compliance

To confirm Flake8 compliance, use:

$ ./hydrotest check

Combinatorial Design of Experiments

Calls to the combinatorial design of experiments utility require a specification of the number of experiments to generate (--experiments), the number of factors for the combinatorial design (--factors), and the number of possible values for each independent variable (--specification). For example, consider the following design of experiments problem:

  1. Pairwise (2-way) combinatorial approach
  2. One independent variable has three possible values, while three independent variables have two possible values
  3. The desired number of experiments is eight (six is the feasible minimum, but specifying the minimum may result in a long solution convergence time)

This problem could be solved with the following call:

$ python3 combinatorial.py --experiments 8 --factors 2 --specification 3 2 2 2

Note: This utility is for very simple DOE-based test case generation only. Further, the number of experiments should be well above the feasible minimum in order to generate a solution quickly. The brute force approach quickly becomes impractical for large and close-to-optimal DOE problems.
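As an illustration of what pairwise coverage means - this is only a sketch of the concept, not the implementation in combinatorial.py - the following checks whether a candidate set of experiments covers every value pair for every pair of factors:

import itertools

def covers_all_pairs(experiments, specification):
    """Return True if every value pair for every pair of factors appears in
    at least one experiment. experiments is a list of tuples, one value index
    per factor; specification gives the number of possible values per factor."""
    for i, j in itertools.combinations(range(len(specification)), 2):
        required = set(itertools.product(range(specification[i]),
                                         range(specification[j])))
        covered = {(exp[i], exp[j]) for exp in experiments}
        if not required <= covered:
            return False
    return True

# A hand-built candidate for the 3x2x2x2 example above (eight experiments)
candidate = [
    (0, 0, 0, 0), (0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 0),
    (2, 0, 0, 1), (2, 1, 1, 0), (1, 0, 0, 1), (0, 1, 0, 0),
]
print(covers_all_pairs(candidate, [3, 2, 2, 2]))  # prints True for this candidate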

Test Assets Generation

CUAHSI software systems sometimes require test files in order to verify and validate the system. The utilities that automatically generate these files are deliberately independent of the automated testing system, so they can be used not only by the automated test system but also by the broader team. The assets folder contains the test file generation utilities.

For HydroServer test file generation, a few environment variables must be defined:

  1. GENSERVER is the Azure SQL database server name
  2. GENDB is the database name
  3. GENUSER is the username for SQL server access - the account must have the authorization to run ad hoc SELECT queries
  4. GENPASSWD is the password for SQL server access

Then, the user must specify the number of data value sets, methods, sites, sources, and variables. An example script call is provided below.

$ python3 gen_all.py --sets 3 --methods 4 --sites 5 --sources 6 --variables 7

As one would expect, this generates a methods.csv file with 4 records, a sites.csv file with 5 records, a sources.csv file with 6 records, and a variables.csv file with 7 records. The number of data value records will typically far exceed the number of records for the metadata csv files previously described. As a result of this size, the data value records must sometimes be uploaded in chunks (multiple files). The "sets" argument dictates how many data value files are generated, with each file having 250k records. This number of records per file strikes a good balance between upload speed and upload size.
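The gen_all.py utility handles this chunking itself; the following is only a rough sketch of the idea, with placeholder column names and values rather than the actual HydroServer schema:

import csv
import random

RECORDS_PER_FILE = 250_000  # one data value file ("set") per 250k records

def write_data_value_sets(num_sets):
    """Write num_sets CSV files of synthetic data value records."""
    for set_index in range(num_sets):
        filename = f"datavalues_{set_index + 1}.csv"
        with open(filename, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["DataValue", "LocalDateTime", "SiteCode", "VariableCode"])
            for _ in range(RECORDS_PER_FILE):
                # Placeholder values; the real generator draws from the
                # generated sites, variables, methods, and sources
                writer.writerow([round(random.uniform(0, 100), 2),
                                 "2018-01-01 00:00:00", "SITE-1", "VAR-1"])

write_data_value_sets(3)  # analogous to --sets 3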

Creating Test Cases

The PDF documentation contained in this repository provides a deeper explanation of the test suite framework. However, the general idea is to write test cases at the most abstract level, then support that with new classes, attributes, and methods in the lower abstraction levels as needed.

In the case of the HydroClient software system, test cases should be created in hydroclient.py. Consider a test case which involves running a map search, filtering by data service, then exporting to the workspace. The top level test case script in hydroclient.py would likely only need four lines - one for each of the three steps above and one assert statement to confirm expected test results. This test case is supported at the lower levels by the following layers, in decreasing levels of abstraction (a sketch of these layers appears after the list):

  1. HydroClient macros, which capture common series of actions performed while working with HydroClient. In our example, the map search step would be defined here and would include clicking the location search box, clearing the box of any existing text, injecting the desired text, then clicking the location search button.
  2. HydroClient elements, which contain attributes and methods for identifying specific elements on the page. In our example, the location search field and the location search button would both be defined at this level.
  3. Site elements, which handle all the low level interactions with off-the-shelf Selenium. In our example, the click, clear_all_text, and inject_text methods would be defined at this level. These methods may involve a large number of Selenium commands and should use best practices in simulating real user behavior. For instance, non-CUAHSI test scripts commonly inject large strings of text into fields instantaneously - the inject_text method within site_element.py adds a small pause between simulated key presses to better mimic real user behavior. The site elements module is common to all CUAHSI automated testing suites.
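A minimal sketch of how these layers might fit together is shown below. Apart from the method names mentioned above (click, inject_text), the class names, locators, and search text are illustrative placeholders, not the actual identifiers used in this repository:

import time

from selenium.webdriver.common.by import By


class SiteElement:
    """site_element.py level: low level, off-the-shelf Selenium interactions."""

    def __init__(self, driver, locator):
        self.driver = driver
        self.locator = locator

    def click(self):
        self.driver.find_element(*self.locator).click()

    def inject_text(self, text):
        field = self.driver.find_element(*self.locator)
        for char in text:  # pause between key presses to mimic a real user
            field.send_keys(char)
            time.sleep(0.1)


class SearchPage:
    """HydroClient elements level: identifies specific elements on the page."""

    def __init__(self, driver):
        self.location_search_box = SiteElement(driver, (By.ID, "location-search"))
        self.location_search_button = SiteElement(driver, (By.ID, "location-search-btn"))


class SearchMacros:
    """HydroClient macros level: common series of actions."""

    def __init__(self, driver):
        self.page = SearchPage(driver)

    def run_map_search(self, location):
        self.page.location_search_box.click()
        self.page.location_search_box.inject_text(location)
        self.page.location_search_button.click()


def test_map_search_to_workspace(driver):
    """hydroclient.py level: the test case itself stays short and readable."""
    search = SearchMacros(driver)
    search.run_map_search("Example City")
    # ...filter by data service and export to the workspace via other macros...
    # assert <expected workspace state>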

The utilities are also common to all CUAHSI test suites. These utilities support those rare actions which do not involve page element interaction, and therefore cannot be handled through the framework above.

[suite-design diagram]

Maintainers

@cuahsi, @hydroshare, @CZNet, @ndebuhr

Additional documentation

MkDocs is used to generate test-level documentation. The configuration lives in mkdocs.yml and /docs. You can serve a local instance of the static documentation site with mkdocs serve, or deploy it to https://cuahsi.github.io/QA-AutomationEngine/ with mkdocs gh-deploy:
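$ mkdocs serve
$ mkdocs gh-deploy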

Contribute

Please feel free to contribute. Open an issue or submit PRs.

License

The CUAHSI QA Automation Engine is released under the BSD 2-Clause License.

©2018 CUAHSI. This material is based upon work supported by the National Science Foundation (NSF) under awards 1148453 and 1148090. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
