Add tests #239
Comments
Hello sir, I noticed this issue regarding eval bugs and the accuracy metric. I’ve reviewed the problem and have a clear approach to address it: improving shape checks. Does this approach align with your expectations?
Hello @mansipatil12345 👋 Thanks for your interest in the project! The current metrics are working as intended, but adding safeguards like the ones proposed seems like a good idea. My only doubt: what do you mean by “refining per-class metric calculations”? If you plan to work on these safeguards, please open another issue, as this one is specifically for adding tests.
Thank you for reviewing my request and providing guidance. Based on your feedback, I will proceed with the following steps to address the issue:
- Shape checks: add robust checks to ensure predictions and ground-truth tensors match in dimensions, preventing shape mismatches during metric computation.
- Handling division by zero: introduce safeguards (e.g., conditional checks or np.where) for cases where a class has no ground-truth samples (i.e., a zero denominator). Metrics for such classes will return NaN to avoid errors and maintain clarity.
- Refining per-class metric calculations: ensure that metrics like IoU and accuracy are correctly computed on a class-by-class basis.
- Validation tests: write unit tests to validate the fixes, covering edge cases such as missing classes and shape mismatches.

Looking forward to your thoughts!
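A minimal sketch of the safeguards described above, using plain NumPy; the function name `per_class_iou` and its signature are illustrative assumptions, not the project’s actual MetricsFactory API:

```python
import numpy as np

def per_class_iou(y_true, y_pred, num_classes):
    """Per-class IoU with an explicit shape check and NaN for absent classes.
    Illustrative stand-in; not the project's actual implementation."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Shape check: fail early instead of relying on silent broadcasting.
    if y_true.shape != y_pred.shape:
        raise ValueError(
            f"Shape mismatch: labels {y_true.shape} vs predictions {y_pred.shape}"
        )
    classes = np.arange(num_classes).reshape(-1, 1)
    t = y_true.ravel() == classes  # (num_classes, n_pixels) ground-truth masks
    p = y_pred.ravel() == classes  # (num_classes, n_pixels) prediction masks
    intersection = np.logical_and(t, p).sum(axis=1)
    union = np.logical_or(t, p).sum(axis=1)
    # np.where safeguard: classes with an empty union return NaN instead of
    # triggering a division-by-zero warning.
    return np.where(union > 0, intersection / np.maximum(union, 1), np.nan)
```

For example, `per_class_iou([0, 1, 1], [0, 1, 0], num_classes=3)` returns `[0.5, 0.5, nan]`: class 2 never appears in either array, so its IoU is NaN rather than an error.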
Hello sir, I noticed this issue about adding tests using pytest and GitHub Actions. I’ve reviewed the MetricsFactory implementation and have a clear approach to ensure robust validation: writing unit tests with pytest to validate all metric calculations (precision, recall, accuracy, etc.). Does this approach align with your expectations for this issue?
Hello @rudrakatkar 👋 Thanks for your interest in the project! At first glance, sounds good to me. Before proceeding, could you elaborate a bit on how you are planning to implement said tests? Maybe mocking up some label and prediction arrays with known results? Also, if you want to play around with the codebase to better understand the MetricsFactory module, the tutorial_image_segmentation.ipynb notebook is a good place to start.
Hello @dpascualhe 👋 Thanks for your feedback! Here’s how I plan to implement the tests: writing pytest unit tests that mock up small label and prediction arrays with known expected results, and running them automatically with GitHub Actions on each commit (see the sketch after this comment).
I’ll also check out the tutorial_image_segmentation.ipynb notebook to better understand the MetricsFactory module and provide feedback on it. That sounds like a great way to explore the codebase! 😃 Additionally, I’m planning to apply for GSoC 2025, and I’d love to contribute more to this project. Do you think this issue is a good starting point for a potential GSoC proposal? If there are other areas where I could contribute in a more significant way, I’d love to hear your thoughts! Looking forward to your guidance.
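A minimal sketch of the mocked-array approach discussed above; the `accuracy` function here is a local stand-in (a real test would import the corresponding metric from the project’s metrics module, whose exact API is not shown in this thread):

```python
import numpy as np
import pytest

def accuracy(y_true, y_pred):
    """Local stand-in for the project's accuracy metric, used only to
    illustrate the test pattern."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    if y_true.shape != y_pred.shape:
        raise ValueError("labels and predictions must share a shape")
    return (y_true == y_pred).mean()

def test_accuracy_matches_hand_computed_result():
    # Mocked label/prediction arrays with a result we can verify on paper:
    # 3 of the 4 pixels match, so accuracy must be 0.75.
    labels = np.array([[0, 1], [1, 2]])
    preds = np.array([[0, 1], [2, 2]])
    assert accuracy(labels, preds) == pytest.approx(0.75)

def test_accuracy_rejects_shape_mismatch():
    # Edge case from the discussion above: mismatched shapes should raise.
    with pytest.raises(ValueError):
        accuracy(np.zeros((2, 2)), np.zeros((2, 3)))
```

The same pattern extends to precision, recall, and per-class IoU: hand-compute the expected value from a tiny array pair and assert against it, so a regression in metrics.py is caught by CI.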
Hello again @rudrakatkar, sounds reasonable to me (the only note here is that running tests after each commit for this project might be a bit overkill; it should be enough to run the tests just for PRs).
Great! Let me know if you have any doubts.
As you know, the orgs for GSoC '25 have not yet been announced 🤞. In any case, working on entry-level issues like this one and testing / playing around with any of the JdeRobot projects (e.g. RoboticsAcademy, BehaviorMetrics) is the best way to get to know each other better before thinking about a GSoC proposal.
Add automatic tests using pytest + GitHub Actions. A good starting point: metrics.py.