
Strange results from training #9

Open
kthordarson opened this issue Apr 25, 2018 · 1 comment
@kthordarson
Hello,
When using slow_style, I get really nice results after 100-200 iterations, but when I train a model with the same style image, I never get results that look anything like the slow_style output. Even after 40k iterations my pictures look like random garbage: they use only the colors from the style image and bear no resemblance to the content image.
How can I get similar results by training a model?

@ghwatson
Owner

Did the starry night example train properly? Keep in mind that slow_style and the model produced by train won't give the same results. slow_style produces the best output but is slow; it's useful for prototyping a bit before committing to the roughly 8-hour wait for training.

Also, note this repo is pretty old and there has been a lot of work in the area since; for example, newer methods can accept an arbitrary style image at test time, and GANs can perform stylization.

If you're certain you're feeding in your reference style image and training correctly, you can try playing with the various hyperparameters exposed as Python arguments (e.g. the weight per VGG layer in the perceptual loss, or the relative weight between content and style).
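To make the suggestion concrete, here is a minimal NumPy sketch of how those hyperparameters interact in a Johnson-style perceptual loss: per-layer weights scale each layer's Gram-matrix style term, and `content_weight` / `style_weight` set the content-vs-style trade-off. The layer names and default values are illustrative assumptions, not the repo's actual settings.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from one VGG layer.
    # The Gram matrix captures channel-to-channel correlations (style).
    c, n = features.shape
    return features @ features.T / (c * n)

def perceptual_loss(content_feats, style_feats, generated_feats,
                    content_weight=1.0, style_weight=5.0,
                    layer_weights=None):
    """Weighted sum of a content term (feature MSE at one layer) and a
    style term (Gram-matrix MSE summed over layers).

    Each *_feats argument is a dict mapping layer name -> (C, H*W) array.
    layer_weights weights each layer's style term (uniform by default).
    Layer name "relu2_2" for the content term is a hypothetical choice.
    """
    layers = list(style_feats)
    if layer_weights is None:
        layer_weights = {k: 1.0 / len(layers) for k in layers}

    content_loss = np.mean(
        (generated_feats["relu2_2"] - content_feats["relu2_2"]) ** 2)
    style_loss = sum(
        layer_weights[k] * np.mean(
            (gram_matrix(generated_feats[k]) - gram_matrix(style_feats[k])) ** 2)
        for k in layers)
    return content_weight * content_loss + style_weight * style_loss
```

If the trained output keeps only the style image's colors, the style term is likely dominating: lowering `style_weight` relative to `content_weight` (or down-weighting the deeper layers) is the usual first thing to try.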
