QA for development and releases of the application is largely dependent on manual approaches. While manual testing can have a place in QA procedures, it has downsides such as higher labor costs and a lack of consistency (which can miss things like #704, #663, #658, etc.). We should strive to implement a repeatable test suite that can be run during development and prior to releases, and then supplement it with additional manual QA.
I'll stop short of setting any specific coverage goals, but I think a good first step is just to develop a test suite that confirms all endpoints are returning the expected responses. From there, we can work on extending coverage to other parts of the code. I'm a big fan of the pytest library, but we should discuss options with the team.
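As a rough sketch of what that could look like (the base URL, endpoint paths, and expected status codes below are placeholders, not the application's actual routes):

```python
# Minimal pytest sketch for endpoint checks. BASE_URL and ENDPOINTS
# are assumptions for illustration, not the real API surface.
import pytest
import requests

BASE_URL = "http://localhost:8000"  # assumed local dev server

# (path, expected HTTP status) pairs -- hypothetical examples
ENDPOINTS = [
    ("/", 200),
    ("/api/health", 200),
    ("/does-not-exist", 404),
]


@pytest.mark.parametrize("path,expected_status", ENDPOINTS)
def test_endpoint_returns_expected_status(path, expected_status):
    """Each endpoint should return the status code we expect."""
    response = requests.get(f"{BASE_URL}{path}", timeout=10)
    assert response.status_code == expected_status
```

Running `pytest` against a local dev instance would then give a quick, repeatable pass/fail signal during development and before each release.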
I could do some basics with a Python script, looking at grey-box testing (somewhere between black-box testing and exercising white-box architectural elements).
I guess the concept is really regression testing, since it was all working before; the most challenging part of working code is to keep proving it still works after each release.
I've developed some external tests at the system level, as if from an internet user: a broad-brush system-verification view of the processes a ModularSensors user follows.
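For example, a minimal sketch of that kind of external check (the URL and the assertions are illustrative, not the actual script):

```python
# Illustrative external "grey box" smoke test run against a deployed
# instance, as if from an internet user. The URL is a placeholder.
import requests

SITE_URL = "https://example.org"  # placeholder for the deployed site


def check_site_reachable():
    """Smoke-test that the public landing page responds with an HTML page."""
    response = requests.get(SITE_URL, timeout=30)
    assert response.status_code == 200, f"Unexpected status: {response.status_code}"
    assert "html" in response.headers.get("Content-Type", ""), "Expected an HTML page"
    print("Landing page OK")


if __name__ == "__main__":
    check_site_reachable()
```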
It seems like this may not be what's being thought of as individual pytest modules, though.
Should I make it a separate issue for a "Soak Test" to track it?