A big advantage of StyleGAN is the seamless fine-tuning process, where a previous checkpoint can be used as a starting point for training on a new dataset (say for example fine-tune the FFHQ model on paintings).
Is this possible for ALAE too? Do you have any pointers or feedback on how to approach it?
@smthomas-sci thanks, but my question was more about the fine-tuning process than simple dataset conversion.
However, I just tried it myself, and simply pointing the training config at the existing ALAE checkpoints and passing a new dataset works. Two caveats:
since training has already reached the top resolution, only one checkpoint is created and it is overwritten every time, so you can't roll back to previous fine-tuned versions. The fix is to edit the code to save uniquely named checkpoints more often.
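The overwriting issue can be avoided with a small helper that tags each checkpoint file with the training step and a timestamp. This is a minimal sketch, not code from the ALAE repo; the function name and filename pattern are my own invention:

```python
import os
import time

def unique_checkpoint_path(out_dir, step):
    """Build a checkpoint filename that embeds the training step and a
    timestamp, so earlier fine-tuned versions are never overwritten.
    (Hypothetical helper -- not part of the ALAE codebase.)"""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    return os.path.join(out_dir, f"model_step{step:07d}_{stamp}.pth")
```

You would then call this in place of the fixed checkpoint path wherever the training loop saves the model.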
the process doesn't seem as stable as for StyleGAN. As you can see from my results, it diverges quickly and abruptly.
I see what you mean now. I can't offer much advice here, beyond noting that it looks like mode collapse. Decrease your learning rate and increase your batch size? If memory is an issue, you could try adding gradient accumulation to the training script to raise the effective batch size.
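The gradient-accumulation idea can be sketched framework-agnostically: sum the gradients from several micro-batches and apply a single averaged update, which mimics a larger batch without extra memory. This is an illustrative toy for a single scalar parameter, not the ALAE training script; the function name and signature are assumptions:

```python
def accumulate_and_step(param, lr, micro_grads, accum_steps):
    """Average gradients over `accum_steps` micro-batches, then take one
    SGD step. Behaves like training with a batch `accum_steps` times
    larger, at the memory cost of a single micro-batch.
    (Toy sketch -- a real framework would accumulate into param.grad.)"""
    total = 0.0
    for g in micro_grads[:accum_steps]:
        total += g          # gradients accumulate instead of stepping
    avg_grad = total / accum_steps
    return param - lr * avg_grad  # single optimizer step
```

In PyTorch the same effect comes from calling `loss.backward()` on each micro-batch and only calling `optimizer.step()` / `optimizer.zero_grad()` every `accum_steps` iterations (scaling the loss by `1 / accum_steps`).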