Milan snapshot test #56
Conversation
👍 looks great
Looks good. Needs rebase
[AAP-39142]
Description
How the test works
This PR changes dozens of files, but only 3 are relevant; the rest are generated test data.
This test generates many xlsx files that can be used as snapshots for further testing.
The idea is:
- Many xlsx files are stored in the repo, along with the command arguments and env variables that were used to generate them.
- The test reads the same params and env variables as the original xlsx file (from its respective json file), runs the command, generates new xlsx files and compares them to the old ones. The comparison is done on the textual content inside the spreadsheet, not by an exact binary comparison, which would not work since each xlsx file contains its own timestamp and possibly other differing metadata like unique GUIDs.
The PR also extends the tests (this was not in the issue description) with non-snapshot variants that compare freshly generated xlsx files of two different commands that should produce the same spreadsheet, e.g. the command pair:
build_report --months=2024-02
build_report --since=2024-02-01 --until=2024-02-29
The range spans the whole month, so both should generate the same spreadsheet.
Note that the spreadsheet comparison compares only the text inside particular cells, with two exceptions: the created field contains a date that will always differ, and the date range cell is either a particular month or a date range. Those 2 cells are ignored in the snapshot test (see the sketch below).
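A minimal sketch of that textual comparison, assuming openpyxl is used to read both workbooks; the helper name and the coordinates of the ignored cells are placeholders for illustration, not the actual values used in the PR:

```python
import openpyxl

# Hypothetical coordinates standing in for the "created" and date-range cells
# mentioned above; the real positions may differ.
IGNORED_CELLS = {"B3", "B4"}

def assert_workbooks_equal(old_path, new_path, ignored=IGNORED_CELLS):
    old_wb = openpyxl.load_workbook(old_path)
    new_wb = openpyxl.load_workbook(new_path)
    try:
        assert old_wb.sheetnames == new_wb.sheetnames
        for name in old_wb.sheetnames:
            old_ws, new_ws = old_wb[name], new_wb[name]
            for old_row, new_row in zip(old_ws.iter_rows(), new_ws.iter_rows()):
                for old_cell, new_cell in zip(old_row, new_row):
                    if old_cell.coordinate in ignored:
                        continue  # timestamps / date ranges legitimately differ
                    assert old_cell.value == new_cell.value, (
                        f"{name}!{old_cell.coordinate}: "
                        f"{old_cell.value!r} != {new_cell.value!r}"
                    )
    finally:
        old_wb.close()
        new_wb.close()
```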
Why is the test important
This test will be useful during development and mainly during refactoring, ensuring that the code still produces the same results as before. When introducing a change such as a new feature or bug fix, the snapshot test will fail. We can then see what failed and whether the failure was expected: for example, if we are fixing or extending some part of the data and only that part changed, that is fine; if something unexpected changed as well, we may have altered something unintended by mistake. For every intentional change that alters the command's output, we will have to regenerate the historical test files.
Description of source code files
Each relevant source code file is described below.
metrics_utility/test/snapshot_tests/CCSP/CCSP_snapshot_generator.py
Run this script to generate reports for further tests:
python -m metrics_utility.test.snapshot_tests.CCSP.CCSP_snapshot_generator
This script generates reports with different parameters and env variables, for both the CCSPv2 and CCSP report types.
For each report, it first generates a <file_name>.json file with the settings, which can look like this:
{
"env_vars": {
"METRICS_UTILITY_SHIP_TARGET": "directory",
"METRICS_UTILITY_SHIP_PATH": "/awx_devel/awx-dev/metrics-utility/metrics_utility/test/test_data",
"METRICS_UTILITY_PRICE_PER_NODE": "15.01",
"METRICS_UTILITY_REPORT_SKU": "CSP8794CE",
"METRICS_UTILITY_REPORT_SKU_DESCRIPTION": "EX: Red Hat OpenShift Container Platform, Premium Support (1 Node, Monthly)",
"METRICS_UTILITY_REPORT_H1_HEADING": "CCSP EU Direct Reporting Template",
"METRICS_UTILITY_REPORT_COMPANY_NAME": "Partner B",
"METRICS_UTILITY_REPORT_EMAIL": "[email protected]",
"METRICS_UTILITY_REPORT_RHN_LOGIN": "admin_user",
"METRICS_UTILITY_REPORT_PO_NUMBER": "456",
"METRICS_UTILITY_REPORT_END_USER_COMPANY_NAME": "Customer B",
"METRICS_UTILITY_REPORT_END_USER_CITY": "Rivertown",
"METRICS_UTILITY_REPORT_END_USER_STATE": "CA",
"METRICS_UTILITY_REPORT_END_USER_COUNTRY": "Canada",
"METRICS_UTILITY_REPORT_TYPE": "CCSPv2"
},
"params": [
"manage.py",
"build_report",
"--since=2024-02-05",
"--until=2024-04-30",
"--force"
],
"custom_params": {
"run_command": "Yes",
"generated": "2025-01-31"
}
}
Next to the json file there is a folder with the same name, containing the report that was generated via a subprocess call using these env variables and input command params.
For example:
snapshot_def_2024-02.json
snapshot_def_2024-02/result.xlsx
While env_vars set the environment variables for the subprocess and params are the command's argument array, custom_params can contain additional parameters for the tests:
2) "generated" - the date when the snapshot was generated, in case we need it
The generator produces many of those files with different ranges and different env variables (except the report type, which is fixed to either CCSPv2 or CCSP, and the ship path and ship target, which are always the same). A rough sketch of one generator iteration follows.
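A minimal sketch of one generator iteration, assuming the report is built by running manage.py in a subprocess with the stored env variables; the function name and directory handling are illustrative, not the actual generator code:

```python
import json
import os
import subprocess
from pathlib import Path

# Hypothetical sketch: "definition" mirrors the json shown above, and the
# output layout mirrors snapshot_def_2024-02.json / snapshot_def_2024-02/.
def generate_snapshot(definition: dict, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    # 1. Store the settings next to the snapshot folder (snapshot_def_<range>.json)
    out_dir.with_suffix(".json").write_text(json.dumps(definition, indent=4))
    # 2. Run the command with the stored env vars layered over the current
    #    environment; params already start with "manage.py"
    env = {**os.environ, **definition["env_vars"]}
    if definition["custom_params"].get("run_command") == "Yes":
        subprocess.run(["python", *definition["params"]], env=env, check=True)
    # The xlsx lands under METRICS_UTILITY_SHIP_PATH; the generator would then
    # copy it into out_dir so the snapshot test can find it later.
```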
metrics_utility/test/snapshot_tests/CCSP/CCSP_snapshot_test.py
This scans all *.json test definition files; for each definition, it reruns the command with the stored params and env variables and compares the newly generated report to the stored one.
Additionally, it runs a comparison for the months 2024-02 and 2024-03, like:
build_report --months=2024-02
build_report --since=2024-02-01 --until=2024-02-29
Both reports are compared to verify that they are equal. A rough shape of these tests is sketched below.
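A hypothetical shape of the two kinds of tests; the data path, the run_build_report helper and the report.xlsx file name are assumptions for illustration, and assert_workbooks_equal is the comparison helper sketched earlier:

```python
import glob
import json
import os
import subprocess
from pathlib import Path

import pytest

# Assumed location of the generated test definitions
DEFINITIONS = sorted(
    glob.glob("metrics_utility/test/snapshot_tests/CCSP/data/**/*.json", recursive=True)
)

def run_build_report(params, env_vars, out_dir):
    """Assumed helper: run the command via subprocess and return the produced xlsx."""
    env = {**os.environ, **env_vars,
           "METRICS_UTILITY_SHIP_TARGET": "directory",
           "METRICS_UTILITY_SHIP_PATH": str(out_dir)}
    subprocess.run(["python", "manage.py", *params], env=env, check=True)
    return next(Path(out_dir).rglob("*.xlsx"))

@pytest.mark.filterwarnings("ignore::ResourceWarning")
@pytest.mark.parametrize("definition_path", DEFINITIONS)
def test_snapshot(definition_path, tmp_path):
    with open(definition_path) as f:
        definition = json.load(f)
    new_report = run_build_report(definition["params"][1:],  # drop "manage.py"
                                  definition["env_vars"], tmp_path)
    old_report = definition_path.replace(".json", "/report.xlsx")  # assumed name
    assert_workbooks_equal(old_report, new_report)

@pytest.mark.filterwarnings("ignore::ResourceWarning")
def test_months_equals_since_until(tmp_path):
    # Both commands cover the same period, so the spreadsheets should match.
    by_month = run_build_report(["build_report", "--months=2024-02", "--force"],
                                {}, tmp_path / "by_month")
    by_range = run_build_report(["build_report", "--since=2024-02-01",
                                 "--until=2024-02-29", "--force"],
                                {}, tmp_path / "by_range")
    assert_workbooks_equal(by_month, by_range)
```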
metrics_utility/test/snapshot_tests/snapshot_utils.py
This file contains support utilities for the files above. Many of these functions can be reused for later snapshot tests of other commands.
Ignoring resource warning failures
openpyxl has some trouble closing files; even though the xlsx files are closed, pytest raises warnings about unclosed files.
However, adding the marker:
@pytest.mark.filterwarnings('ignore::ResourceWarning')
to the tests that use this library solves the issue.
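For example (the test name is illustrative):

```python
import pytest

# Illustrative placement of the marker on a test that reads xlsx files via openpyxl.
@pytest.mark.filterwarnings("ignore::ResourceWarning")
def test_ccsp_snapshot_example():
    ...
```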
Testing
Prerequisites
Steps to Test
This runs the generator (you must be in the root folder of the metrics-utility project):
python -m metrics_utility.test.snapshot_tests.CCSP.CCSP_snapshot_generator
Once the test files are generated, run the tests:
SETUPTOOLS_USE_DISTUTILS=true pytest -s
Expected Results
The tests should pass.
If you want to see the test fail, open some xlsx file, for example:
/metrics-utility/metrics_utility/test/snapshot_tests/CCSP/data/CCSPv2/snapshot_def_2024-02/report.xlsx
Enter the text Error into some random empty cell.
Then you should see an assertion error saying that the empty cell does not equal Error.
Required Actions
Self-Review Checklist
Code Quality
- If updates are required in the handbook, create a separate PR for the handbook repo.
Notes for Reviewers
Add any additional context or instructions for reviewers here - for example screenshots if needed.