Agreeable-Run-9152
Agreeable-Run-9152 t1_j9c4naa wrote
Yeah, I actually agree with your rant. However, there is a small chance they acted in good faith and did not see that the randomness in the GAN won't do anything.
Agreeable-Run-9152 t1_j3dbcyl wrote
Reply to comment by fakesoicansayshit in [Discussion] Given the right seed (or input noise) and prompt, is it theoretically possible to exactly recreate an image that a latent diffusion model was trained on? by [deleted]
Yeah, that's true. My comment relates to unconditional diffusion models a la Song, not Stable Diffusion. The argument might be adapted for conditional generation.
Agreeable-Run-9152 t1_j33xcmm wrote
Reply to comment by Agreeable-Run-9152 in [Discussion] Given the right seed (or input noise) and prompt, is it theoretically possible to exactly recreate an image that a latent diffusion model was trained on? by [deleted]
Note that this argument really isn't about diffusion or generative models but about optimization. I know my fair share of generative modelling, but this idea is a lot more general and might have popped up somewhere else in optimization/inverse problems?
Agreeable-Run-9152 t1_j33wpfm wrote
Reply to comment by sjd96 in [Discussion] Given the right seed (or input noise) and prompt, is it theoretically possible to exactly recreate an image that a latent diffusion model was trained on? by [deleted]
I thought it wasn't about the latent code but the training set?
Agreeable-Run-9152 t1_j33wlnt wrote
Reply to [Discussion] Given the right seed (or input noise) and prompt, is it theoretically possible to exactly recreate an image that a latent diffusion model was trained on? by [deleted]
Let's think about a dataset consisting of only one image x, and assume the optimization process is known and deterministic.
Then, given the weights of the diffusion model and the optimization procedure P(theta_0, t, x), which maps the initial weights theta_0 to theta_t after t steps of training on image x, the problem would be:
Find x such that |theta_t - P(theta_0, t, x)| = 0 for all times t.
I would IMAGINE (I am not sure) that for enough times t, we get a unique solution x.
This argument should even hold for datasets consisting of more images.
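A toy sketch of this trajectory-matching idea, under strong simplifying assumptions I'm adding (not from the comment): the "model" is a parameter vector trained by deterministic gradient descent on loss(theta) = 0.5*||theta - x||^2, and we observe checkpoints theta_t at a few times t. The function names `P` and `x_hat` are illustrative, not real APIs.

```python
import numpy as np

# Toy deterministic "training": gradient steps on loss(theta) = 0.5*||theta - x||^2,
# i.e. theta <- theta - lr * (theta - x).  P(theta0, t, x) returns theta after t steps.
def P(theta0, t, x, lr=0.1):
    theta = theta0.copy()
    for _ in range(t):
        theta = theta - lr * (theta - x)
    return theta

rng = np.random.default_rng(0)
x_true = rng.normal(size=4)          # the single "training image"
theta0 = np.zeros(4)
observed = {t: P(theta0, t, x_true) for t in (1, 5, 20)}  # released checkpoints

# Recover x by gradient descent on sum_t ||theta_t - P(theta0, t, x_hat)||^2.
# For this linear dynamic, theta_t = (1-lr)^t * theta0 + (1 - (1-lr)^t) * x,
# so dP/dx_hat = (1 - (1-lr)^t) * I and the gradient is available in closed form.
x_hat = np.zeros(4)
lr_outer, lr_inner = 0.5, 0.1
for _ in range(200):
    grad = np.zeros(4)
    for t, theta_t in observed.items():
        coeff = 1.0 - (1.0 - lr_inner) ** t
        residual = coeff * x_hat + (1.0 - coeff) * theta0 - theta_t
        grad += coeff * residual
    x_hat -= lr_outer * grad

print(np.allclose(x_hat, x_true, atol=1e-4))
```

In this toy setting a single checkpoint already pins down x; for a nonconvex model, matching the whole trajectory over many times t is what (plausibly) makes the solution unique, which is the comment's point.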
Agreeable-Run-9152 t1_j2s81ce wrote
All of my papers
Agreeable-Run-9152 OP t1_j2pz999 wrote
Reply to comment by bloc97 in [R] On Time Embeddings in Diffusion models by Agreeable-Run-9152
Yep
Agreeable-Run-9152 OP t1_j2px8mz wrote
Reply to comment by bloc97 in [R] On Time Embeddings in Diffusion models by Agreeable-Run-9152
Okay, yeah, that makes sense. I am currently working in the context of FNOs. How would you do it there?
Submitted by Agreeable-Run-9152 t3_101s5kj in MachineLearning
Agreeable-Run-9152 t1_j9dh1pu wrote
Reply to comment by Mefaso in [D] On papers forcing the use of GANs where it is not relevant by AlmightySnoo
I would assume that someone who is capable of programming a GAN and going through all the steps of parameter tuning should, at some point, realize that the randomness shouldn't do anything.