
[Feature] Unit testing tolerances #75

Open · oliverchampion opened this issue on Aug 14, 2024 · 2 comments

oliverchampion (Collaborator) commented on Aug 14, 2024:

Feature description

Currently, the unit tests use very generous tolerance bounds. I would think it is nicer to have more realistic/strict bounds.
However, what counts as a realistic bound will probably depend on:

  • SNR
  • f (for the boundaries of D and D*)

Moreover, the bounds are currently defined per algorithm in the JSON file, which creates a lot of redundancy. The flexibility of changing bounds per algorithm seems unneeded to me (it makes for an unfair comparison).

Describe the solution

My initial suggestion would be something like:
rTol(f) = Tol * SNR
rTol(D) = Tol * SNR / (1 - f + epsilon)
rTol(D*) = Tol * SNR / (f + epsilon)

This is up for discussion, though.
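
As a concrete illustration, here is a minimal Python sketch of those formulas. The function name, the epsilon default, the example numbers, and the use of numpy's assert_allclose are my assumptions for illustration, not existing repository code:

```python
import numpy as np

def suggested_rtol(tol, snr, f, epsilon=1e-6):
    """Hypothetical sketch of the proposed relative tolerances.

    tol:     central base tolerance (algorithm-independent)
    snr:     SNR of the simulated signal
    f:       ground-truth perfusion fraction
    epsilon: small constant guarding against division by zero at f = 0 or f = 1
    """
    return {
        "f": tol * snr,
        "D": tol * snr / (1 - f + epsilon),
        "D*": tol * snr / (f + epsilon),
    }

# Example use in a test (all values purely illustrative):
rtol = suggested_rtol(tol=0.001, snr=20, f=0.1)
np.testing.assert_allclose(actual=0.00101, desired=0.001, rtol=rtol["D"])
```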

Moreover, I think the Tol values could be stored centrally (not per algorithm).

Describe alternatives

No response

Additional context

Are you working on this?

None

oliverchampion (Collaborator, Author) commented:
@etpeterson do you have thoughts on this? :)

oliverchampion changed the title to "[Feature] Unit testing tolerances" on Aug 14, 2024
etpeterson (Contributor) commented:
Tolerances are difficult. To get them tight, I think there would have to be slight differences for each algorithm; that's more of a testing viewpoint. A more clinical view would ask what parameter SNR is necessary. Maybe that's where you are going with your suggestion? I don't really have great answers for this, but just trying it with some plots might be helpful.

In terms of code, it should be really easy. Right now the tolerances are read from the algorithm section, data_ivim_fit_saved. We could include another global section that provides defaults for all of them, or even something in the testing code that overrides those with other values.
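
A minimal sketch of that idea, assuming a hypothetical JSON layout (the real file's section is data_ivim_fit_saved; the key names below are illustrative, not the repository's actual schema): global defaults that each algorithm entry may optionally override.

```python
import json

def load_tolerances(config_path, algorithm):
    """Merge global tolerance defaults with an optional per-algorithm override.

    Assumed (illustrative) JSON layout:
      {
        "tolerances": {"rtol": 0.01, "atol": 1e-4},
        "algorithms": {
          "SomeAlgorithm": {"tolerances": {"rtol": 0.05}}
        }
      }
    """
    with open(config_path) as fh:
        config = json.load(fh)
    tolerances = dict(config.get("tolerances", {}))         # global defaults
    override = config.get("algorithms", {}).get(algorithm, {})
    tolerances.update(override.get("tolerances", {}))       # per-algorithm override wins
    return tolerances
```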

oliverchampion self-assigned this on Aug 20, 2024