I’ve noticed that for large values of `pmap`’s `batch_size` parameter, the progress meter overhead becomes large. This is especially significant because the cases where you’d want to increase `batch_size` are exactly those where individual items are cheap to compute, so the overhead ends up dominating the actual computation.
Example:
```julia
using Distributed
addprocs(6)
@everywhere using ProgressMeter

for batch_size in round.(Int, 10 .^ range(0, 4, length=20))
    @show batch_size
    @showprogress pmap(i -> nothing, 1:200_000; batch_size)
end
```
So it looks like (in this case?) that there’s a “sweet spot” around batch_size=20. I found this surprising, so I’m not sure what the reason is! But it would be good if this behaved better for large batch_size.
Socob changed the title from “Overhead scales non-linearly with `pmap` `batch_size`” to “Overhead scales non-monotonically with `pmap` `batch_size`” on Apr 26, 2024.
What happens in the loop is only `put!(channel, true)`, and I see the same behavior when doing only that (I only tested with 20_000 because I don’t have that much patience).
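To make that observation concrete, here is a minimal sketch that isolates just the per-item `put!` cost: a `RemoteChannel` drained by an async task on the main process, with each mapped item doing nothing but one progress update. This is an assumption-laden stand-in, not ProgressMeter’s exact internals (the channel setup and consumer loop here are hypothetical):

```julia
using Distributed
addprocs(2)

# Hypothetical reproduction of the progress-update path: one put! per item
# on a RemoteChannel owned by the main process.
channel = RemoteChannel(() -> Channel{Bool}(Inf))

# Drain updates as the progress meter would; a `false` value stops the loop.
consumer = @async while take!(channel) end

@time pmap(1:20_000; batch_size=1000) do i
    put!(channel, true)  # the per-item progress update
    nothing
end

put!(channel, false)  # shut down the consumer task
```

Varying `batch_size` in this stripped-down version should show whether the non-monotonic overhead comes from the channel traffic itself rather than from `pmap`’s batching logic.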