Hello,
When using slow style, I get really nice results after 100-200 iterations, but if I train a model with the same style image, I never get results that look anything like slow style's. Even after 40k iterations, my outputs look like random garbage that uses only the style image's colors and bears no resemblance to the content image.
How can I get similar results by training a model?
Did the Starry Night example train properly? Keep in mind that slow_style and the model produced by train won't give identical results. slow_style produces the best output but is slow, which makes it useful for prototyping a bit before committing to the roughly 8-hour wait for training.
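For intuition on why the two can't match exactly: slow style optimizes the output image's pixels directly against a VGG perceptual loss, while train fits a feed-forward network that must work for any content image. Here is a minimal sketch of the direct-optimization idea; the function names, layer indices, and weights are illustrative placeholders, not this repo's code:

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen VGG16 feature extractor (convolutional trunk only).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Layer indices into vgg16.features whose activations feed the loss.
CONTENT_LAYERS = {15}           # relu3_3
STYLE_LAYERS = {3, 8, 15, 22}   # relu1_2, relu2_2, relu3_3, relu4_3

def extract(x):
    """Return {layer_index: activation} for the layers used in the loss."""
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS | STYLE_LAYERS:
            feats[i] = x
    return feats

def gram(f):
    """Size-normalized Gram matrix of a (b, c, h, w) feature map."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def slow_style(content, style, steps=200, content_weight=1.0, style_weight=1e5):
    """Optimize the image's pixels directly; no model is trained or saved.

    `content` and `style` are assumed to be (b, 3, h, w) tensors on
    `device`, normalized with ImageNet statistics.
    """
    target_c = {i: f.detach() for i, f in extract(content).items() if i in CONTENT_LAYERS}
    target_s = {i: gram(f).detach() for i, f in extract(style).items() if i in STYLE_LAYERS}
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract(img)
        c_loss = sum(F.mse_loss(feats[i], target_c[i]) for i in CONTENT_LAYERS)
        s_loss = sum(F.mse_loss(gram(feats[i]), target_s[i]) for i in STYLE_LAYERS)
        (content_weight * c_loss + style_weight * s_loss).backward()
        opt.step()
    return img.detach()
```

Because the trained model has to generalize across every content image it sees, it will typically trade off some fidelity on any single image compared to this per-image optimization.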
Also, note that this repo is pretty old and there has been a lot of work in the area since: for example, methods that accept an arbitrary style image at test time, and GANs that can perform stylization.
If you're certain you're feeding in your reference style image and training correctly, you can try playing with the various hyperparameters exposed as Python arguments (e.g., the weight per VGG layer in the perceptual loss, or the relative weight between content and style).
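If those knobs are exposed, the style term is typically a weighted sum of Gram-matrix losses over several VGG layers. A sketch of what per-layer weighting means in that setup; the layer names and weight values are illustrative, not this repo's defaults:

```python
import torch.nn.functional as F

# Illustrative per-layer style weights -- not this repo's defaults.
# Upweighting shallow layers pushes the result toward fine texture from
# the style image; upweighting deep layers favors its larger structures.
STYLE_LAYER_WEIGHTS = {"relu1_2": 1.0, "relu2_2": 1.0, "relu3_3": 1.0, "relu4_3": 1.0}

def gram(f):
    """Size-normalized Gram matrix of a (b, c, h, w) feature map."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def weighted_style_loss(feats, target_grams):
    """feats and target_grams are dicts keyed by layer name."""
    return sum(
        w * F.mse_loss(gram(feats[name]), target_grams[name])
        for name, w in STYLE_LAYER_WEIGHTS.items()
    )

# The total objective is content_weight * content_loss +
# style_weight * weighted_style_loss(...); raising content_weight
# relative to style_weight preserves more of the content image.
```

If your outputs keep only the style image's colors and none of the content, raising the content weight relative to the style weight is usually the first thing to try.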