Add tests #239

Open
dpascualhe opened this issue Dec 10, 2024 · 7 comments

Comments

@dpascualhe
Collaborator

Add automatic tests using pytest + GitHub Actions. A good starting point: metrics.py.
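As a rough illustration of the kind of test this calls for, here is a minimal pytest sketch; `compute_accuracy` is a hypothetical stand-in for whatever metrics.py actually exposes:

```python
# test_metrics.py -- minimal pytest sketch; compute_accuracy is a placeholder,
# not the real API exposed by metrics.py.
import numpy as np


def compute_accuracy(pred, gt):
    """Stand-in metric: fraction of pixels where prediction matches ground truth."""
    return float(np.mean(pred == gt))


def test_accuracy_on_known_arrays():
    gt = np.array([[0, 0], [1, 1]])
    pred = np.array([[0, 1], [1, 1]])  # 3 of 4 pixels correct
    assert compute_accuracy(pred, gt) == 0.75
```

A GitHub Actions workflow that installs the project and runs `pytest` on pull requests would complete the setup.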

@mansipatil12345

mansipatil12345 commented Dec 17, 2024

Hello sir,

I noticed this issue regarding eval bugs and the accuracy metric. I’ve reviewed the problem and have a clear approach to address it:

- Improving shape checks,
- Handling division by zero safely, and
- Refining per-class metric calculations.

Before I proceed, I’d like to get your guidance:

- Does this approach align with your expectations?
- Are there any specific considerations or requirements I should keep in mind while implementing the fixes?

I can also add tests to validate the changes to ensure everything works as expected. Please let me know your thoughts or suggestions before I move forward!

@dpascualhe
Collaborator Author

Hello @mansipatil12345 👋

Thanks for your interest in the project! The currently implemented metrics are working as intended, but adding safeguards like the ones you propose seems like a good idea. My only doubt: what do you mean by "refining per-class metric calculations"?

If you plan to work on said safeguards, please open another issue, as this one is specific to adding tests.

@mansipatil12345

Thank you for reviewing my request and providing guidance. Based on your feedback, I will proceed with the following steps to address the issue:

Shape Checks:

Add robust checks to ensure predictions and ground truth tensors match in dimensions. This will prevent shape mismatches during metric computation.
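A minimal sketch of such a check, assuming NumPy arrays (the helper name is illustrative, not existing code):

```python
import numpy as np


def check_shapes(pred: np.ndarray, gt: np.ndarray) -> None:
    """Fail fast with a clear message instead of erroring deep inside the metric code."""
    if pred.shape != gt.shape:
        raise ValueError(
            f"Shape mismatch: prediction {pred.shape} vs. ground truth {gt.shape}"
        )
```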

Handling Division by Zero:

Introduce safeguards (e.g., conditional checks or np.where) to handle cases where a class has no ground truth samples (e.g., zero denominator). Metrics for such classes will return NaN to avoid errors and maintain clarity.
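For example, a NaN-safe division helper along these lines (a sketch, not the project's current code):

```python
import numpy as np


def safe_divide(numerator, denominator):
    """Element-wise division that yields NaN wherever the denominator is zero."""
    numerator = np.asarray(numerator, dtype=float)
    denominator = np.asarray(denominator, dtype=float)
    result = np.full_like(numerator, np.nan)
    np.divide(numerator, denominator, out=result, where=denominator != 0)
    return result
```

Per-class scores computed this way stay NaN for classes with no ground-truth samples, so they can later be skipped with np.nanmean.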

Refining Per-Class Metric Calculations:

Ensure that calculations for metrics like IoU and accuracy are correctly handled on a class-by-class basis.
This includes managing edge cases like rare or missing classes, where no pixels exist in the ground truth for a given class.
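For instance, per-class IoU could be guarded like this (a sketch assuming integer class IDs; not the MetricsFactory implementation):

```python
import numpy as np


def per_class_iou(pred, gt, num_classes):
    """IoU per class; classes absent from both prediction and ground truth stay NaN."""
    pred, gt = pred.ravel(), gt.ravel()
    iou = np.full(num_classes, np.nan)
    for c in range(num_classes):
        intersection = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:
            iou[c] = intersection / union
    return iou
```

A mean IoU computed with np.nanmean over this vector then ignores classes that never appear.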

Validation Tests:

Write unit tests to validate the fixes, ensuring all edge cases (e.g., missing classes, shape mismatches) are covered.

I believe this approach aligns with the issue’s requirements, but I’d appreciate your confirmation or any additional suggestions before I proceed with the implementation.

Looking forward to your thoughts!

@rudrakatkar

Hello sir,

I noticed this issue about adding tests using pytest and GitHub Actions. I’ve reviewed the MetricsFactory implementation and have a clear approach to ensure robust validation:

- Writing unit tests with pytest to validate all metric calculations (precision, recall, accuracy, etc.).
- Adding edge-case tests (e.g., division by zero, all zeros, class imbalance) to improve reliability; a sketch follows at the end of this comment.
- Setting up GitHub Actions for automated testing on every commit and pull request.

Before proceeding, I’d like to confirm:

- Does this approach align with your expectations for this issue?
- Are there any specific scenarios or edge cases you’d like me to focus on?

Looking forward to your guidance. Thanks!
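The edge-case tests mentioned above could be parametrized roughly like this; `iou_foreground` is a placeholder for the actual MetricsFactory call:

```python
import numpy as np
import pytest


def iou_foreground(pred, gt):
    """Illustrative binary IoU; in real tests this would call MetricsFactory."""
    intersection = np.sum((pred == 1) & (gt == 1))
    union = np.sum((pred == 1) | (gt == 1))
    return np.nan if union == 0 else intersection / union


@pytest.mark.parametrize(
    "pred, gt, expected",
    [
        (np.ones((2, 2)), np.ones((2, 2)), 1.0),       # perfect prediction
        (np.zeros((2, 2)), np.ones((2, 2)), 0.0),      # all-zero prediction
        (np.zeros((2, 2)), np.zeros((2, 2)), np.nan),  # class missing everywhere
    ],
)
def test_iou_edge_cases(pred, gt, expected):
    result = iou_foreground(pred, gt)
    if np.isnan(expected):
        assert np.isnan(result)
    else:
        assert result == pytest.approx(expected)
```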

@dpascualhe
Collaborator Author

Hello @rudrakatkar 👋

Thanks for your interest in the project! At first glance, it sounds good to me. Before proceeding, could you elaborate a bit on how you are planning to implement said tests? Maybe mocking up some label and prediction arrays with known results?

Also, if you want to play around with the codebase to better understand the MetricsFactory module, you can try out our new notebook tutorial: tutorial_image_segmentation.ipynb. AFAIK it hasn't been tested out by any user yet, and it would be great if we could have some feedback in that regard 😄.

@rudrakatkar

Hello @dpascualhe 👋

Thanks for your feedback! Here’s how I plan to implement the tests:

1. Mocking label and prediction arrays with known expected results to validate each metric computation (e.g., precision, recall, IoU); see the sketch after this list.
2. Testing edge cases, such as handling empty inputs, division by zero, and extreme class imbalances.
3. Automating test execution using pytest, ensuring correctness across multiple scenarios.
4. Integrating with GitHub Actions to run tests on each commit/pull request, ensuring continuous validation.
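As an illustration of point 1, a hand-checkable fixture could look like this (the `precision` function is a placeholder for whatever MetricsFactory exposes):

```python
import numpy as np
import pytest

# 2x2 binary masks chosen so that TP = FP = FN = TN = 1,
# which gives precision = 0.5, recall = 0.5 and IoU = 1/3 by hand.
GT = np.array([[1, 0],
               [1, 0]])
PRED = np.array([[1, 1],
                 [0, 0]])


def precision(pred, gt):
    """Placeholder for the MetricsFactory call under test."""
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    return tp / (tp + fp)


def test_precision_matches_hand_computed_value():
    assert precision(PRED, GT) == pytest.approx(0.5)
```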

I’ll also check out the tutorial_image_segmentation.ipynb notebook to better understand the MetricsFactory module and provide feedback on it. That sounds like a great way to explore the codebase! 😃

Additionally, I’m planning to apply for GSoC 2025, and I’d love to contribute more to this project. Do you think this issue is a good starting point for a potential GSoC proposal? If there are other areas where I could contribute in a more significant way, I’d love to hear your thoughts!

Looking forward to your guidance.

@dpascualhe
Collaborator Author

Hello again @rudrakatkar,

> Mocking label and prediction arrays with known expected results to validate each metric computation (e.g., precision, recall, IoU).
> Testing edge cases, such as handling empty inputs, division by zero, and extreme class imbalances.
> Automating test execution using pytest, ensuring correctness across multiple scenarios.
> Integrating with GitHub Actions to run tests on each commit/pull request, ensuring continuous validation.

Sounds reasonable to me (the only note here is that running tests after each commit might be a bit overkill for this project; running them just for the PRs should be enough).

> I’ll also check out the tutorial_image_segmentation.ipynb notebook to better understand the MetricsFactory module and provide feedback on it. That sounds like a great way to explore the codebase! 😃

Great! Let me know if you have any doubts.

> Additionally, I’m planning to apply for GSoC 2025, and I’d love to contribute more to this project. Do you think this issue is a good starting point for a potential GSoC proposal? If there are other areas where I could contribute in a more significant way, I’d love to hear your thoughts!

As you know, the orgs for GSoC '25 have not yet been announced 🤞. In any case, working on entry-level issues like this one and testing / playing around with any of the JdeRobot projects (e.g. RoboticsAcademy, BehaviorMetrics) is the best way to get to know each other better before thinking about a GSoC proposal.
