Hi @gufengzhou ,
I am trying to train the model with data spread across ~17 states; the total record count is roughly 40k.
For each state, Robyn takes around 21 minutes to run with 2000 iterations and 5 trials.
Is there any way to reduce the run time while keeping the parameters mentioned above the same?
I have a requirement to run this across a few hundred regions, hence the question.
Any help would be appreciated.
Thank you
You could speed things up by using more cores in robyn_run. Otherwise, a dedicated effort focused on speed improvement would be needed, which unfortunately isn't on the current plan due to resource constraints.
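Beyond giving a single robyn_run more cores, each state's model here is independent, so several states can be fitted concurrently while each individual run gets a smaller core budget. A minimal sketch of that pattern, assuming a placeholder `fit_state` function (not part of Robyn; in practice it would invoke robyn_run for one state's data, e.g. via an Rscript subprocess):

```python
from concurrent.futures import ThreadPoolExecutor

def fit_state(state):
    # Placeholder: in a real setup this would launch one Robyn run for
    # the given state's data and return, say, the output folder path.
    return f"model_{state}"

# Illustrative subset of the ~17 states mentioned above.
STATES = ["CA", "TX", "NY"]

# Threads are enough when fit_state just waits on a subprocess;
# the R processes themselves do the heavy lifting in parallel.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(zip(STATES, pool.map(fit_state, STATES)))

print(results)  # one entry per state
```

The trade-off to tune is how many concurrent runs to allow versus how many cores each robyn_run gets, so the machine isn't oversubscribed.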
Hey @OmkarEllicium , @gufengzhou
I just submitted PR #1217, which optimizes _geometric_adstock, reducing overall model training time by ~40% (tested on tutorial1.ipynb with trials=1, iterations=2000).
Since _geometric_adstock was a major bottleneck, this should help speed up training, especially for large datasets. Would love your thoughts!
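For readers unfamiliar with why this function is a bottleneck: geometric adstock is a sequential recursion, and replacing the per-timestep Python loop with a vectorized formulation is the usual way to speed it up. The PR's actual implementation isn't shown here; the sketch below only illustrates the general idea, expressing the same transform as a convolution with the decay kernel:

```python
import numpy as np

def geometric_adstock_loop(x, theta):
    # Naive recursion: out[t] = x[t] + theta * out[t-1]
    out = np.empty(len(x), dtype=float)
    carry = 0.0
    for t, v in enumerate(x):
        carry = v + theta * carry
        out[t] = carry
    return out

def geometric_adstock_vectorized(x, theta):
    # Same transform written in closed form:
    # out[t] = sum_{k<=t} theta**(t-k) * x[k],
    # i.e. a convolution with the kernel [1, theta, theta**2, ...],
    # which removes the Python-level loop entirely.
    n = len(x)
    kernel = theta ** np.arange(n)
    return np.convolve(x, kernel)[:n]

spend = np.array([100.0, 0.0, 50.0, 0.0, 0.0])
assert np.allclose(geometric_adstock_loop(spend, 0.5),
                   geometric_adstock_vectorized(spend, 0.5))
```

Inside an optimizer loop running thousands of iterations per trial, shaving this inner transform adds up quickly, which is consistent with the ~40% figure reported above.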
@gufengzhou , since you’re assigned to this issue, I wanted to bring this to your attention in case it's relevant.