Hey all,
I’ve been playing with Google’s Gemini native image-to-image model and it’s really good (despite being a bit rough).
Without even needing an image-to-image or inpainting workflow, Gemini can just do it with natural language commands.
With the tech advancing so fast, it feels pointless to learn inpainting or image-to-image workflows when, with larger models, we’ll eventually be able to just prompt what we want into the image.
Thoughts?
submitted by /u/Glittering-Bag-4662