Using pictures of a jacket taken from different angles, in different poses, and under different lighting, I trained a Stable Diffusion model with Google's DreamBooth. Thanks to Shelbatra for modeling it (left).
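For anyone who wants to try the training step themselves, here is a minimal sketch using the Hugging Face diffusers DreamBooth example script. The post doesn't name the exact implementation, so the script, the base checkpoint, the directories, and the `sks` identifier token are all assumptions on my part:

```python
# Minimal sketch, assuming Hugging Face diffusers' example script
# train_dreambooth.py, a jacket photo set in ./jacket_photos, and the
# rare token "sks" as the subject identifier (all hypothetical choices).
import subprocess

subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "./jacket_photos",
    "--instance_prompt", "a photo of sks jacket",
    # Prior preservation keeps the model from forgetting what generic
    # jackets look like while it learns this specific one.
    "--class_data_dir", "./class_jackets",
    "--class_prompt", "a photo of a jacket",
    "--with_prior_preservation", "--prior_loss_weight", "1.0",
    "--resolution", "512",
    "--train_batch_size", "1",
    "--learning_rate", "5e-6",
    "--max_train_steps", "800",
    "--output_dir", "./dreambooth-jacket",
], check=True)
```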
I also 3D scanned the jacket using photogrammetry, applying a matting powder coating to remedy the shiny surface reflections. I then fit the model to an avatar so it can be rendered in different poses and under different lighting.
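The post doesn't say which tool was used for the rendering step; as one hedged example, here is a minimal Blender (bpy) sketch that re-renders an already-fitted jacket scene under a few lighting angles. The scene file, the light object named "Sun", and the output paths are all hypothetical:

```python
# Minimal Blender sketch, assuming the fitted jacket/avatar scene is already
# set up in jacket_scene.blend with a sun lamp named "Sun".
# Run with: blender jacket_scene.blend -b -P render.py
import math
import bpy

sun = bpy.data.objects["Sun"]   # assumed key light in the scene
scene = bpy.context.scene

# Render the same fitted model under a few lighting angles.
for i, angle in enumerate((15, 45, 90)):
    sun.rotation_euler[0] = math.radians(angle)
    scene.render.filepath = f"//render_light_{i:02d}.png"
    bpy.ops.render.render(write_still=True)
```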
The photogrammetry capture, processing, and rendering all take considerable time, but the resulting model is accurate and can be used commercially.
AI training takes time as well (roughly one hour), and only about 1 in 10 images coming out of Stable Diffusion is actually usable.
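Given that hit rate, a practical workflow is to over-generate and curate by hand. A minimal sketch with diffusers, assuming the fine-tuned weights from the training sketch above and a CUDA GPU; the prompt wording and file names are illustrative:

```python
# Minimal sketch, assuming the DreamBooth output directory from the training
# step and the same "sks jacket" identifier token (both assumptions).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-jacket", torch_dtype=torch.float16
).to("cuda")

# Only ~1 in 10 outputs is usable, so generate a batch and hand-pick.
prompt = "a photo of sks jacket on a mannequin, studio lighting"
for i in range(20):
    image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
    image.save(f"jacket_{i:02d}.png")
```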
The jacket can be combined with other models, lighting setups, and styles. Is this the future of product visualization?