-xXpurplypunkXx-
-xXpurplypunkXx- t1_j6v3fab wrote
Reply to comment by LetterRip in [R] Extracting Training Data from Diffusion Models by pm_me_your_pay_slips
Thanks for context. Maybe a little too much woo in my post.
For me, the ability to determine which images are stored verbatim is either an interesting artifact or an interesting property of the model.
But regardless, it's very unintuitive to me given how diffusion models train and behave, due to both the perturbation of training images during training and the foreseeable lack of capacity to encode that much information in a single model state. Admittedly, I don't have much working experience with this sort of model.
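The capacity intuition can be made concrete with a back-of-envelope calculation. The figures below are rough assumptions, not values from the paper: Stable Diffusion v1's UNet has on the order of 860M parameters, and its LAION training subset is on the order of 2 billion images.

```python
# Rough capacity estimate: how many bits of weight storage exist
# per training image, assuming the whole model were devoted to
# memorization (it isn't, so this is an upper bound).
params = 860_000_000          # assumed UNet parameter count
bits_per_param = 32           # float32 weights
num_images = 2_000_000_000    # assumed training-set size

total_bits = params * bits_per_param
bits_per_image = total_bits / num_images
print(f"{bits_per_image:.1f} bits available per training image")
```

At roughly 14 bits per image, there is nowhere near enough room to store every image, which is why wholesale memorization seems implausible; the paper's finding is that a small fraction of heavily duplicated images get memorized anyway.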
-xXpurplypunkXx- t1_j6ulhcj wrote
Reply to comment by koolaidman123 in [R] Extracting Training Data from Diffusion Models by pm_me_your_pay_slips
I can't tell which is crazier: that it memorizes images at all, or that memorization is such a small fraction of its overall outputs.
Very interesting. I'm wondering how sensitive this methodology is to finding instances of memorization though; maybe this is the tip of the iceberg.
-xXpurplypunkXx- t1_j1bjgqd wrote
Reply to COVID-19 Booster Increases Durability of Antibody Response, Research Shows by Additional-Two-7312
Do we know to what extent antibody durability accounts for patient outcomes, compared with other forms of immunological memory, across the various SARS-CoV-2 strains? It's a shame such important work hasn't been conducted widely and thoroughly (as far as I can see from the outside googling in), but I hope to understand it better in the future.
-xXpurplypunkXx- t1_je315e6 wrote
Reply to comment by ghostfaceschiller in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
In my experience, GPT tends to hallucinate the same incorrect response repeatedly and refuses to make the directed corrections to code.