
Milan snapshot test #56

Merged: 17 commits into ansible:devel from milan_snapshot_test on Feb 5, 2025
Conversation

@MilanPospisil MilanPospisil (Contributor) commented Jan 31, 2025

[AAP-39142]

Description

How the test works

This PR has dozens of changed files, but only 3 are relevant; the rest are generated testing data.

This test generates many xlsx files that can be used as snapshots for further testing.
The idea is:

  1. Many xlsx files will live in the repo, along with the stored command arguments and env variables that were used to generate them.

  2. The test will run with the same params and env variables as the original xlsx file (read from its respective json file), run the command, generate new xlsx files and compare them to the old ones. The comparison is done on the textual content inside the spreadsheet, not as an exact binary comparison, which would not work, since each xlsx file contains its own timestamp and possibly other differing metadata like unique GUIDs.

  3. The PR also extends the tests (this was not in the issue description) with non-snapshot variants that compare newly generated xlsx files from two different commands that should produce the same spreadsheet, e.g. the command pair:

build_report --months=2024-02
build_report --since=2024-02-01 --until=2024-02-29

The range spans the whole month, so both commands should generate the same spreadsheet.

Note that the spreadsheet comparison only compares the text inside individual cells, with exceptions: the created field, which contains a date that will always differ, and the date range field, which is either a particular month or a since/until range. Those 2 cells are ignored in the snapshot test.
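The comparison idea can be sketched in plain Python over {coordinate: text} mappings (a simplified illustration; the real test reads the cells with openpyxl, and the ignored coordinates below are made up, not the actual ones):

```python
# Cells expected to differ between runs, e.g. the "created" timestamp
# and the date-range cell. Coordinates here are illustrative only.
IGNORED_CELLS = {"B2", "B3"}

def sheets_equal(sheet_a, sheet_b, ignored=IGNORED_CELLS):
    """Compare two sheets given as {coordinate: text} mappings,
    skipping cells known to contain run-specific data."""
    keys = set(sheet_a) | set(sheet_b)
    return all(
        sheet_a.get(k) == sheet_b.get(k)
        for k in keys
        if k not in ignored
    )

old = {"A1": "Host", "B1": "Jobs", "B2": "2024-02-05"}  # B2 = created date
new = {"A1": "Host", "B1": "Jobs", "B2": "2025-01-31"}
assert sheets_equal(old, new)  # timestamp difference is ignored
assert not sheets_equal(old, {"A1": "Host", "B1": "Errors", "B2": "x"})
```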

Why is the test important

This test will be useful during development, and mainly during refactoring, to ensure the code still produces the same results as before. When introducing a change such as a new feature or a bug fix, the snapshot test will fail. We can then inspect what failed and whether the failure was expected: for example, if we are repairing or extending some part of the data and only that part changed, that's fine; if something unexpected changed as well, we may have altered something unwanted by mistake. For every intentional change that alters how a command works, we will have to regenerate the historical test files.

Description of source code files


metrics_utility/test/snapshot_tests/CCSP/CCSP_snapshot_generator.py

Run this script to generate reports for further tests:
python -m metrics_utility.test.snapshot_tests.CCSP.CCSP_snapshot_generator

This script generates reports with different parameters and env variables. It generates both CCSPv2 and CCSP reports.
For each report, it first writes a <file_name>.json file with the settings, which can look like this:

{
  "env_vars": {
    "METRICS_UTILITY_SHIP_TARGET": "directory",
    "METRICS_UTILITY_SHIP_PATH": "/awx_devel/awx-dev/metrics-utility/metrics_utility/test/test_data",
    "METRICS_UTILITY_PRICE_PER_NODE": "15.01",
    "METRICS_UTILITY_REPORT_SKU": "CSP8794CE",
    "METRICS_UTILITY_REPORT_SKU_DESCRIPTION": "EX: Red Hat OpenShift Container Platform, Premium Support (1 Node, Monthly)",
    "METRICS_UTILITY_REPORT_H1_HEADING": "CCSP EU Direct Reporting Template",
    "METRICS_UTILITY_REPORT_COMPANY_NAME": "Partner B",
    "METRICS_UTILITY_REPORT_EMAIL": "[email protected]",
    "METRICS_UTILITY_REPORT_RHN_LOGIN": "admin_user",
    "METRICS_UTILITY_REPORT_PO_NUMBER": "456",
    "METRICS_UTILITY_REPORT_END_USER_COMPANY_NAME": "Customer B",
    "METRICS_UTILITY_REPORT_END_USER_CITY": "Rivertown",
    "METRICS_UTILITY_REPORT_END_USER_STATE": "CA",
    "METRICS_UTILITY_REPORT_END_USER_COUNTRY": "Canada",
    "METRICS_UTILITY_REPORT_TYPE": "CCSPv2"
  },
  "params": [
    "manage.py",
    "build_report",
    "--since=2024-02-05",
    "--until=2024-04-30",
    "--force"
  ],
  "custom_params": {
    "run_command": "Yes",
    "generated": "2025-01-31"
  }
}

Next to each json file there is a folder with the same name, containing the report that was generated by a subprocess call using these env variables and command params.

For example:
snapshot_def_2024-02.json
snapshot_def_2024-02/result.xlsx

While env_vars are used to set env vars for the subprocess, and params is the direct array of command params, custom_params can contain additional parameters for the tests:

  1. "run_command": if Yes, the command is run as a subprocess. In the future, we may add the ability to run the command as a direct function call (without a subprocess), so we can mock datetime.now to also test relative time parameters like --since=5months, which differ based on the current date.

  2. "generated": the date when the snapshot was generated, in case we need it later.

The generator produces many of those files with different ranges and different env variables (except the report type, which is fixed to either CCSPv2 or CCSP, and the ship path and ship target, which are always the same).
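The subprocess invocation driven by such a definition file might look like this (a hypothetical, simplified helper; the real logic presumably lives in snapshot_utils.py and may differ):

```python
import json
import os
import subprocess
import sys

def run_report_from_definition(json_path):
    """Read a snapshot definition file and run the stored command as a
    subprocess with the stored env vars (simplified sketch)."""
    with open(json_path) as f:
        definition = json.load(f)

    # Skip definitions that are not meant to be run as a subprocess.
    if definition.get("custom_params", {}).get("run_command") != "Yes":
        return

    env = os.environ.copy()
    env.update(definition["env_vars"])
    # "params" is the argv list, e.g. ["manage.py", "build_report", ...]
    subprocess.run([sys.executable] + definition["params"], env=env, check=True)
```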

metrics_utility/test/snapshot_tests/CCSP/CCSP_snapshot_test.py

This test scans all *.json test definitions and, for each definition:

  1. Finds the xlsx report that was generated in the past by the generator.
  2. Generates a new xlsx file using the same definition found in the *.json.
  3. Compares the two files; if they are not equal, the test fails.

Additionally, it runs a comparison for the months 2024-02 and 2024-03, e.g.:
build_report --months=2024-02
build_report --since=2024-02-01 --until=2024-02-29

The two reports are then compared for equality.
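The month/range equivalence above hinges on computing the last day of the month correctly, including leap years (note 2024-02-29). A small sketch with a hypothetical helper name:

```python
import calendar
from datetime import date

def month_to_range(month_str):
    """Translate 'YYYY-MM' into the --since/--until pair covering the
    whole month; calendar.monthrange handles leap years."""
    year, month = map(int, month_str.split("-"))
    last_day = calendar.monthrange(year, month)[1]
    return (
        f"--since={date(year, month, 1).isoformat()}",
        f"--until={date(year, month, last_day).isoformat()}",
    )

# 2024 is a leap year, so February ends on the 29th.
assert month_to_range("2024-02") == ("--since=2024-02-01", "--until=2024-02-29")
```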

metrics_utility/test/snapshot_tests/snapshot_utils.py

This file contains support utilities for the files above. Many of these functions can be reused in later snapshot tests for other commands.

Ignoring resource warning failures

openpyxl has some trouble closing files: even when the xlsx files are closed, pytest throws an error about unclosed files.

However, adding the marker:
@pytest.mark.filterwarnings('ignore::ResourceWarning')

to the tests that use this library solves the issue.
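In context, the marker might be applied like this (the test name and body are illustrative, not the real test):

```python
import pytest

@pytest.mark.filterwarnings("ignore::ResourceWarning")
def test_snapshot_reports_match():
    # Body is illustrative; the real test compares generated and stored
    # xlsx files. The marker above keeps pytest from failing on
    # openpyxl's unclosed-file ResourceWarning.
    pass
```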

Testing

Prerequisites

Steps to Test

This runs the generator (you must be in the root folder of the metrics-utility project):

python -m metrics_utility.test.snapshot_tests.CCSP.CCSP_snapshot_generator

Once the testing files are generated, run the tests:

SETUPTOOLS_USE_DISTUTILS=true pytest -s

Expected Results

The tests should pass.

If you want to see the test fail, open some xlsx file, for example:
/metrics-utility/metrics_utility/test/snapshot_tests/CCSP/data/CCSPv2/snapshot_def_2024-02/report.xlsx

Enter the text Error into some random empty cell.

You should then see an assertion error saying that the empty cell does not equal Error.

Required Actions

  • Requires documentation updates
  • Requires downstream repository changes
  • Requires infrastructure/deployment changes
  • Requires coordination with other teams
  • Blocked by PR/MR: #XXX

Self-Review Checklist

Code Quality

  • Code is properly linted and formatted.
  • All tests (existing and new) pass successfully.
  • Tests are added or updated as needed.
  • Code includes relevant comments for complex sections.
  • Changes are reviewed and approved by at least two team members.
  • Documentation is updated where applicable.
    - If updates are required in the handbook, create a separate PR for the handbook repo.

Notes for Reviewers

Add any additional context or instructions for reviewers here - for example screenshots if needed.

@MilanPospisil MilanPospisil force-pushed the milan_snapshot_test branch 2 times, most recently from a776617 to c0d631a on February 3, 2025 10:29
@MilanPospisil MilanPospisil marked this pull request as ready for review on February 3, 2025 12:00
@MilanPospisil MilanPospisil force-pushed the milan_snapshot_test branch 4 times, most recently from 54d494e to 90fbbed on February 4, 2025 11:48
@Ladas Ladas (Collaborator) left a comment

👍 looks great

@ShaiahWren ShaiahWren (Contributor) left a comment

Looks good. Needs rebase

@Ladas Ladas force-pushed the milan_snapshot_test branch from 90fbbed to c47af35 on February 4, 2025 16:10

sonarqubecloud bot commented Feb 5, 2025

@MilanPospisil MilanPospisil merged commit cb614cf into ansible:devel Feb 5, 2025
5 checks passed