
C14 new content #604

Merged
merged 17 commits into from
Feb 22, 2025
Conversation

jrosen48
Collaborator

initial working code and draft


netlify bot commented Jan 10, 2025

Deploy Preview for datascienceineducation-1ed failed.

Name | Link
🔨 Latest commit | 9fcf7ac
🔍 Latest deploy log | https://app.netlify.com/sites/datascienceineducation-1ed/deploys/67ba19571301ae0008b57423

@restrellado restrellado changed the title from "C10 new content" to "C14 new content" Jan 26, 2025
@restrellado restrellado added the `writing` (writing new content) label Jan 26, 2025
@restrellado restrellado self-requested a review January 26, 2025 17:48
@jrosen48
Collaborator Author

jrosen48 commented Feb 4, 2025

This one is ready for your review @restrellado

@jrosen48
Collaborator Author

jrosen48 commented Feb 4, 2025

@ivelasq I know you are familiar with tidymodels, so would welcome a look at some point.

Collaborator

@restrellado restrellado left a comment


Looks great @jrosen48. Can you please make the one edit I commented on and then merge? After that, I'll begin a more thorough style review, pushing to this same branch.


A quick statistical note: above, we selected our variable importance method to be "permutation" for our demonstrative example. There are other options available in the {caret} package if you would like to explore those in your analyses.
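As a hedged illustration of what permutation importance does (here in Python with scikit-learn on synthetic data, rather than the chapter's R/{caret} code): each predictor is shuffled in turn, and the resulting drop in model score is taken as that predictor's importance.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in data (not the chapter's dataset).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature n_repeats times and record the mean drop in score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {imp:.3f}")
```

The same idea underlies the "permutation" option in {caret}: a feature whose shuffling barely changes performance contributes little to the model.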
But the model seemed to perform differently for students who passed versus those who did not. We can see this by looking at the precision and recall metrics: the recall value of .925 tells us that when students passed the call, the model correctly predicted they did so around 92% of the time. The precision, though, tells us that when the model predicted a student passed the course, it was correct around 65% of the time, meaning that the model regularly made _false positive_ predictions. Herein lies the value of metrics other than accuracy: they can help us understand how the model is performing for different outcomes. False positives or false negatives may matter more or less depending on the context, and your knowledge as the analyst and researcher is critical here for determining whether the model is "good enough" for your purposes.
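A quick numeric sketch of how those two metrics are computed (in Python, using hypothetical confusion-matrix counts chosen only to roughly match the values above, not the chapter's actual model results):

```python
# Hypothetical counts, NOT the chapter's actual confusion matrix:
tp = 37  # passed, and the model predicted they passed (true positives)
fn = 3   # passed, but the model predicted they did not (false negatives)
fp = 20  # model predicted they passed, but they did not (false positives)

# Recall: of the students who actually passed, how many did we catch?
recall = tp / (tp + fn)      # 37/40 = 0.925

# Precision: of the students we predicted would pass, how many did?
precision = tp / (tp + fp)   # 37/57 ≈ 0.649

print(f"recall = {recall:.3f}, precision = {precision:.3f}")
```

The gap between the two numbers is exactly the false-positive problem described above: high recall with much lower precision means the model over-predicts passing.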
Collaborator


Should "passed the call" be "passed the course?"

jrosen48 and others added 2 commits February 16, 2025 19:26
Collaborator

@restrellado restrellado left a comment


Ready to merge.

@restrellado restrellado merged commit e553182 into main Feb 22, 2025
0 of 5 checks passed
@restrellado restrellado deleted the c10-new-content branch February 22, 2025 18:38
2 participants