
Question on fine-tuning time #72

Open
JunseokLee42 opened this issue Jan 19, 2024 · 1 comment
JunseokLee42 commented Jan 19, 2024

Thank you for sharing the paper and code.
While reading the Experimental Settings in Section 5.2 (Implementation), a question came up about fine-tuning time.

Could you please let me know the approximate fine-tuning time for Multimodal-CoT, if you remember it?

I am trying to understand the paper and code in order to re-implement them.
However, due to limited computing resources (no multi-GPU machine), I have to use cloud services.
This means I need to estimate the approximate fine-tuning time in advance, since cloud providers charge by the hour.
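For what it's worth, the budgeting arithmetic itself is simple; a minimal sketch (the hourly rate below is a hypothetical placeholder, not an actual cloud price):

```python
# Rough cloud-cost projection for a fine-tuning run.
# The rate is an illustrative placeholder, not a real provider's price.
def estimate_cost(train_hours: float, hourly_rate_usd: float) -> float:
    """Return the projected cost of renting a GPU instance for the full run."""
    return train_hours * hourly_rate_usd

# e.g. a 24-hour run on an instance billed at a hypothetical $2.50/hour
print(estimate_cost(24, 2.50))  # → 60.0
```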

Contributor

cooelf commented May 19, 2024

Hi, it takes roughly 8 hours to train the base model and 24 hours for the large model on a single A100 GPU. The exact time also depends on the specific GPU. Since a long time has passed since training, I cannot guarantee that I remember it accurately. An efficient way to find out is to run the code; the log will show the approximate fine-tuning time.
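One way to get that estimate early, rather than waiting for a full run, is to time a few training steps and extrapolate; a minimal sketch, where the timed loop is a stand-in for real optimizer steps and the step count is hypothetical:

```python
import time

def extrapolate_hours(seconds_per_step: float, total_steps: int) -> float:
    """Project total fine-tuning time from a measured per-step duration."""
    return seconds_per_step * total_steps / 3600.0

# Time a handful of (simulated) steps, then extrapolate to the full run.
n_warmup = 5
t0 = time.perf_counter()
for _ in range(n_warmup):
    time.sleep(0.01)  # stand-in for one real training step
per_step = (time.perf_counter() - t0) / n_warmup

# 20000 is a hypothetical total step count for the whole run.
print(f"projected: {extrapolate_hours(per_step, 20000):.1f} h")
```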
