In the calculation of node aggregation, do the model parameters of the lead node participate in the aggregation and sharing calculation?
And how does each node's local model update and iterate within a local epoch? Is gradient information used?

Yes, the leader's weights also participate in the merge algorithm that produces the merged weights, and the merged weights are shared with all nodes. Swarm merging happens whenever the sync interval is met: each epoch is divided into batches, and once the number of trained batches matches the sync interval, the Swarm network runs to merge the weights. The merged weights are then loaded into each local model and training continues. This repeats throughout training, epoch after epoch. Gradient information is used to generate the local weights as part of the normal training process.
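A minimal sketch of that flow, simulating a few nodes in one process with PyTorch. The merge here is a plain unweighted parameter average (federated-averaging style); the real Swarm Learning merge runs over the network and may weight nodes differently. All names (`merge_weights`, `sync_interval`, `num_nodes`, the toy model and data) are illustrative assumptions, not the actual Swarm Learning API:

```python
import torch
import torch.nn as nn

def merge_weights(state_dicts):
    """Average parameters across all nodes, leader included.
    Assumption: a plain unweighted mean, as in federated averaging."""
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Simulate a small swarm in one process (illustrative only).
torch.manual_seed(0)
num_nodes = 3          # node 0 acts as the leader; its weights merge like any other's
sync_interval = 4      # merge after this many trained batches
epochs = 2
batches_per_epoch = 8

models = [nn.Linear(10, 1) for _ in range(num_nodes)]
optims = [torch.optim.SGD(m.parameters(), lr=0.01) for m in models]
loss_fn = nn.MSELoss()

batches_trained = 0
for epoch in range(epochs):
    for _ in range(batches_per_epoch):
        # Each node computes gradients on its own local batch;
        # gradient information generates the local weights.
        for model, opt in zip(models, optims):
            x, y = torch.randn(32, 10), torch.randn(32, 1)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        batches_trained += 1

        # Once the trained-batch count hits the sync interval, merge.
        if batches_trained % sync_interval == 0:
            merged = merge_weights([m.state_dict() for m in models])
            for model in models:
                # Merged weights are loaded into every local model,
                # and training then resumes from the merged state.
                model.load_state_dict(merged)
```

Note that the leader is just `models[0]` here: its state dict enters `merge_weights` exactly like the others, matching the point above that the leader's weights also participate in the merge.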