
Use cfg hyperparams #629

Merged 9 commits into master on Mar 24, 2021
Conversation

@Flova (Collaborator) commented Feb 12, 2021

This includes using SGD instead of Adam, removing `batch_size` and `gradient_accumulations` from the CLI, and using nearly all of the parameters described in the .cfg for training.
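For context, Darknet-style .cfg files carry the training hyperparameters in their `[net]` block (`learning_rate`, `momentum`, `decay`, `batch`, `subdivisions`, ...). Below is a minimal sketch of the idea, not the PR's actual code: `parse_net_block`, the example path, and the stand-in model are all illustrative.

```python
import torch.nn as nn
import torch.optim as optim

def parse_net_block(cfg_path):
    """Return the key/value pairs of the first [net] block in a Darknet .cfg."""
    hypers = {}
    in_net = False
    with open(cfg_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if line.startswith("["):
                if in_net:
                    break  # reached the next block; [net] is done
                in_net = line == "[net]"
                continue
            if in_net and "=" in line:
                key, value = (s.strip() for s in line.split("=", 1))
                hypers[key] = value
    return hypers

hypers = parse_net_block("config/yolov3.cfg")  # example path
model = nn.Linear(4, 2)                        # stand-in for the YOLO model

# Build SGD from the .cfg values instead of CLI flags.
optimizer = optim.SGD(
    model.parameters(),
    lr=float(hypers["learning_rate"]),
    momentum=float(hypers["momentum"]),
    weight_decay=float(hypers["decay"]),
)

# In Darknet, batch / subdivisions gives the mini-batch actually pushed
# through the network per forward pass, with subdivisions acting like
# gradient accumulation steps.
mini_batch_size = int(hypers["batch"]) // int(hypers["subdivisions"])
```

Since `batch` and `subdivisions` already determine the effective mini-batch and accumulation behavior, separate `--batch_size` / `--gradient_accumulations` CLI flags become redundant once these values come from the .cfg.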

@Flova mentioned this pull request Mar 19, 2021
@Flova merged commit 6975ccd into master Mar 24, 2021
@Flova deleted the feature/use_cfg_hyperparams branch Apr 9, 2021
@jaagut (Contributor) commented Apr 11, 2021

Addresses #616

colmanAstrobotic pushed a commit to colmanAstrobotic/PyTorch-YOLOv3 that referenced this pull request Jul 2, 2021