Optimal-Asshole
Optimal-Asshole t1_jc07v6l wrote
Reply to comment by bpw1009 in [D] What's the mathematical notation for "top k argmax"? by fullgoopy_alchemist
It’s worth noting that the notation they give makes no sense: where does k appear on the left-hand side?
Optimal-Asshole t1_j9ugfhx wrote
Reply to [D] What is the correct term for a non-GAN system where two or more networks compete as part of training? by mosquitoLad
Adversarial is still the right term; it just has another definition in the context of security. You could also try min-max or zero-sum game.
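A minimal sketch of that min-max/zero-sum idea (hypothetical toy example, not from the thread): two "players" with opposing objectives on f(x, y) = x * y, updated by simultaneous gradient descent-ascent — the same structure GAN training uses.

```python
# Toy zero-sum game: f(x, y) = x * y.
# The x-player minimizes f; the y-player maximizes it.

def minimax_step(x, y, lr=0.1):
    grad_x = y                 # df/dx
    grad_y = x                 # df/dy
    x_new = x - lr * grad_x    # descent for the minimizing player
    y_new = y + lr * grad_y    # ascent for the maximizing player
    return x_new, y_new

x, y = 1.0, 1.0
for _ in range(100):
    x, y = minimax_step(x, y)
```

Note that simultaneous updates on this bilinear game actually spiral outward rather than converge, which is one reason adversarial training is notoriously unstable.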
Optimal-Asshole t1_j9jo26z wrote
Reply to comment by buyIdris666 in [D] Bottleneck Layers: What's your intuition? by _Arsenie_Boca_
Residual refers to the fact that the NN/bottleneck learns the residual left over after the input itself (the identity) is accounted for, i.e. F(x) = H(x) - x. Anyone calling the skip connections “residual connections” should stop though lol
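A hypothetical sketch of that residual idea (names and shapes are made up for illustration): the bottleneck only has to learn the residual F(x), and the skip connection adds the input back to form x + F(x).

```python
import numpy as np

rng = np.random.default_rng(0)
d, bottleneck = 8, 2
W_down = rng.normal(size=(d, bottleneck)) * 0.1  # compress to bottleneck width
W_up = rng.normal(size=(bottleneck, d)) * 0.1    # expand back to full width

def residual_block(x):
    f = np.maximum(x @ W_down, 0.0) @ W_up  # F(x): the learned residual
    return x + f                            # skip adds the input back: H(x) = x + F(x)

x = rng.normal(size=(1, d))
y = residual_block(x)
```

If F outputs zero, the block is exactly the identity — which is why the *function* F is the residual, while the skip itself is just an addition.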
Optimal-Asshole t1_j9fktzg wrote
Reply to comment by vikumwijekoon97 in [D] On papers forcing the use of GANs where it is not relevant by AlmightySnoo
> Are there actual NN methods that can solve PDEs without depending on the initial conditions?
The initial condition does need to be known (though it can be noisy, e.g. measurements corrupted by noise [1]), and NN-based models can solve some parametric PDEs faster than traditional solvers. [2]
There is also a lot of work in training NNs on data generated from traditional methods, and this can be combined jointly with the above method to solve a whole class of problems at once. [3]
Solving a whole parametric family of PDEs (i.e. a parameterized family of initial conditions) and handling complicated geometries will be the next avenue of this specific field IMO. Actually, it is already being actively worked on.
[1] https://arxiv.org/abs/2205.07331
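The least-squares PDE loss these solvers minimize can be sketched on a toy problem (hypothetical example, not from the cited papers): for u''(x) = -sin(x), score a candidate solution by its mean squared PDE residual at collocation points, with the second derivative approximated by central finite differences. An NN-based solver would minimize this kind of loss over network parameters.

```python
import numpy as np

def pde_loss(u, f, xs, h=1e-3):
    # Central-difference approximation of u''(x) at the collocation points.
    u_xx = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h**2
    # Least-squares PDE residual: how far u'' is from the forcing term f.
    return np.mean((u_xx - f(xs)) ** 2)

xs = np.linspace(0.1, np.pi - 0.1, 50)   # interior collocation points
f = lambda x: -np.sin(x)                 # forcing term of u'' = f

exact = lambda x: np.sin(x)              # true solution of u'' = -sin(x)
wrong = lambda x: x * (np.pi - x)        # plausible-looking but wrong candidate

loss_exact = pde_loss(exact, f, xs)
loss_wrong = pde_loss(wrong, f, xs)
```

The exact solution drives the residual to (near) zero, while the wrong candidate gets a large loss — that separation is what training on the residual exploits.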
Optimal-Asshole t1_j9c4h8d wrote
Reply to comment by AlmightySnoo in [D] On papers forcing the use of GANs where it is not relevant by AlmightySnoo
Okay lol so I’m actually researching kinda similar things and I assumed this paper was related because it used similar tools but upon a closer look, nope nvm. It’s not even using the generative model for anything useful.
So their paper just shows that the basic idea of least-squares PDE solving can be used for generative models. Okay, now it’s average class-project tier. I guess this demonstrates that, yes, these workshops accept literally anything.
Edit: it’s still not plagiarism, just not very novel. Plagiarism is stealing ideas without credit; what they did was discuss an existing idea and extend it in a very small way, experimentally only. Not plagiarism.
Optimal-Asshole t1_j9c20cy wrote
I think these workshops accept every submission that isn’t incoherent or desk-rejected.
From my quick glance, it doesn’t seem like plagiarism, since they cite amply. As far as the justification goes, there are some generative approaches for solving parametric PDEs even now. It doesn’t seem like the best paper ever, but I don’t think it’s that bad.
Optimal-Asshole t1_j93j6ez wrote
Reply to comment by bloodmummy in [D] Please stop by [deleted]
I wonder if this is how people with PhDs in virology or climate science feel
Optimal-Asshole t1_j91boue wrote
Reply to [D] Please stop by [deleted]
Be the change you want to see in the subreddit. Avoid making low-quality posts yourself. Actually post your own high-quality research discussions before you complain.
"No one with working brain will design an ai that is self aware.(use common sense)" CITATION NEEDED. Some people would do it on purpose, and it could happen by accident.
Optimal-Asshole t1_j41nlj5 wrote
Reply to [D] Are there any papers on optimization-based approaches which combine learned parameter initializations with learned optimisers? by Decadz
Here’s this paper, which uses gradient descent to train the meta layer, gradient descent to train the hyperparameters of that gradient descent, and so forth. The hyperparameters of the topmost meta layer matter less and less as you add meta-depth, i.e. more meta-“layers”.
Optimal-Asshole t1_j3w5td4 wrote
Reply to comment by thehodlingcompany in [Discussion] Given the right seed (or input noise) and prompt, is it theoretically possible to exactly recreate an image that a latent diffusion model was trained on? by [deleted]
I am a ML researcher, and you are right. You described it in a simpler/better way than I could.
Optimal-Asshole t1_j0m2pg0 wrote
Reply to comment by Acceptable-Cress-374 in [D] AI Isn’t Artificial or Intelligent by Tintin_Quarentino
This article is artificial but certainly not intelligent
Optimal-Asshole t1_iyqtp3l wrote
Reply to comment by Superschlenz in [D] In an optimal world, how would you wish variance between runs based on different random seeds was reported in papers? by optimized-adam
No, the reason for hyperparameter optimization isn’t job security. It’s that choosing better hyperparameters produces better results, which means more success in applications. There are people working on automatic hyperparameter optimization.
But let’s not act like it’s solely due to some community-caused phenomenon and engineers putting on a show. Honestly, your message comes off as a little bitter.
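Automatic hyperparameter optimization can be as simple as random search (a hypothetical minimal sketch; real systems like Bayesian optimization are smarter): sample learning rates, "train" a toy model with each, keep the best.

```python
import random

def train_and_score(lr, steps=20):
    # Toy training run: minimize L(w) = 0.5 * w**2 by gradient descent.
    w = 5.0
    for _ in range(steps):
        w -= lr * w             # gradient step with the candidate learning rate
    return 0.5 * w * w          # final loss (lower is better)

random.seed(0)
trials = [10 ** random.uniform(-4, 0) for _ in range(30)]  # log-uniform samples
best_lr = min(trials, key=train_and_score)
```

Sampling on a log scale is the standard trick here, since learning rates that matter span several orders of magnitude.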
Optimal-Asshole t1_jd58x06 wrote
Reply to comment by Gody_ in [D] Simple Questions Thread by AutoModerator
Since you are training the LSTM using labels, it is supervised, or perhaps self-supervised, depending on the specifics.