> We trained our model using AnghaBench compilation results across four optimization levels (O0–O3), selecting samples under 1,024 tokens. That gave us a total of 534,564 samples per level, and we trained for 2 epochs on a cluster of 8 Nvidia A100 GPUs.
> As for the training times, they were 10 hours for the 1.3B model, 85 hours for the 6.7B model, and 440 hours for the 33B model.
> Let me know if you need more info!

_Originally posted by @rocky-lq in #3 (comment)_

Hi @rocky-lq @albertan017,

We are estimating the training budget for reproducing LLM4Decompile. In your previous issue response, you mentioned that, _given 534,564 samples per level and a cluster of 8 Nvidia A100 GPUs, training took 10 hours for the 1.3B model, 85 hours for the 6.7B model, and 440 hours for the 33B model_.

In the paper updated on 19 June, however, fine-tuning the 1.3B and 6.7B LLM4Decompile-End models takes 12 and 61 days on 8×A100, respectively, given 7.2 million compilable samples and 1.6 million executable samples. There is some confusion about the training budget estimation.

Would you please provide more information about the training budget? Also, is all of the training fully supervised fine-tuning?
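For reference, here is a back-of-envelope sketch (not from the authors) that estimates GPU-hours from model size and token count using the common ≈6·N·D training-FLOPs heuristic. The average sample length, peak throughput, and MFU below are assumptions, so it only gives an order-of-magnitude figure.

```python
# Rough GPU-hour estimate for full fine-tuning, using the common
# FLOPs ~= 6 * params * tokens heuristic. All constants below are
# assumptions for illustration, not numbers from the authors.

A100_PEAK_FLOPS = 312e12       # BF16 peak of one A100 (dense), FLOP/s
MFU = 0.35                     # assumed model FLOPs utilization
AVG_TOKENS_PER_SAMPLE = 800    # assumed; samples were filtered to < 1,024 tokens

def gpu_hours(params, samples, epochs=2, num_gpus=8):
    """Order-of-magnitude training-time estimate on `num_gpus` A100s."""
    tokens = samples * AVG_TOKENS_PER_SAMPLE * epochs
    flops = 6 * params * tokens
    seconds = flops / (num_gpus * A100_PEAK_FLOPS * MFU)
    return seconds / 3600

# V1 setting from this thread: 534,564 samples per level x 4 levels, 2 epochs.
v1_samples = 534_564 * 4
for name, params in [("1.3B", 1.3e9), ("6.7B", 6.7e9), ("33B", 33e9)]:
    print(f"{name}: ~{gpu_hours(params, v1_samples):.0f} h (reported: 10 / 85 / 440 h)")
```

Under these assumptions the 1.3B estimate lands near the reported 10 hours, and the larger models come out within roughly a factor of two, which is plausible given lower effective utilization (gradient checkpointing, communication overhead) at larger scale.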
In V1, the maximum sequence length is set to 1,024, whereas in V1.5 it is increased to 4,096. The computational cost rises quadratically with sequence length (theoretically, for the attention computation; in practice, with kernel-level accelerations, the increase may not be that large). V2 also uses a larger dataset (which has undergone significant deduplication). These factors collectively lead to a roughly 30x increase in training cost.
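To make the arithmetic concrete, here is an illustrative decomposition using only the numbers quoted in this thread; the split between the sequence-length factor and the data factor is my own assumption for intuition, not the authors' accounting.

```python
# Observed cost ratios between the two reported budgets (8xA100 in both cases).
v1_hours_1_3b, v2_hours_1_3b = 10, 12 * 24     # 10 h vs 12 days
v1_hours_6_7b, v2_hours_6_7b = 85, 61 * 24     # 85 h vs 61 days
print(f"observed ratio, 1.3B: {v2_hours_1_3b / v1_hours_1_3b:.1f}x")   # ~28.8x
print(f"observed ratio, 6.7B: {v2_hours_6_7b / v1_hours_6_7b:.1f}x")   # ~17.2x

# Sequence length: 1,024 -> 4,096. Attention alone scales ~quadratically
# (16x worst case); with fused kernels the end-to-end slowdown is usually
# much closer to the ~4x token-count factor.
seq_factor_range = (4096 / 1024, (4096 / 1024) ** 2)   # (4x, 16x)

# Data: ~2.1M samples (534,564 x 4 levels) -> 7.2M compilable + 1.6M executable.
data_factor = (7.2e6 + 1.6e6) / (534_564 * 4)          # ~4.1x in sample count
print(f"seq-length factor: {seq_factor_range[0]:.0f}x-{seq_factor_range[1]:.0f}x, "
      f"data factor: ~{data_factor:.1f}x")
```

Multiplying the 4x–16x sequence-length range by the ~4x data factor brackets the observed 17x–29x; the exact split also depends on epochs, average sample length, and the attention kernels used.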
Out of interest, are you training on a single node or on multiple nodes?
For the 1B model, we use a single node. Larger models are typically trained across multiple nodes (the 6B can still be trained on a single node, depending on the budget).
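For anyone reproducing this, the single-node vs multi-node distinction usually only affects the launcher, not the training script itself. Below is a generic sketch (not the repo's actual training code; the script name and rendezvous endpoint are placeholders) of a torchrun-compatible setup that works the same on one node or several.

```python
# A minimal sketch of distributed initialization compatible with torchrun.
# Hypothetical example launches (placeholders, not the authors' commands):
#   single node : torchrun --nproc_per_node=8 train.py
#   multi-node  : torchrun --nnodes=4 --nproc_per_node=8 \
#                          --rdzv_backend=c10d --rdzv_endpoint=<host>:29500 train.py
import os
import torch
import torch.distributed as dist

def init_distributed():
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank

if __name__ == "__main__":
    local_rank = init_distributed()
    print(f"rank {dist.get_rank()}/{dist.get_world_size()} on cuda:{local_rank}")
    dist.destroy_process_group()
```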