Optimal-Asshole t1_j9fktzg wrote

> Are there actual NN methods that can solve PDEs without depending on the initial conditions?

The initial condition needs to be known (though it can be noisy, e.g. measurements corrupted by noise [1]), but NN-based models can solve some parametric PDEs much faster than traditional solvers. [2]

There is also a lot of work on training NNs with data generated by traditional methods, and this can be combined with the approach above to solve a whole class of problems at once. [3]

Solving a whole parametric family of PDEs (e.g. a parameterized family of initial conditions) and handling complicated geometries will be the next avenue for this field IMO; in fact, it's already being actively worked on.

[1] https://arxiv.org/abs/2205.07331

[2] https://arxiv.org/abs/2110.13361

[3] https://arxiv.org/abs/2111.03794
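To make the parametric-family idea concrete, here's a toy sketch (my own example, not from any of the papers above): generate solutions with a traditional finite-difference solver for a few parameter values, fit a surrogate, then predict solutions for unseen parameters without ever re-solving the PDE. A real model would be a neural operator; plain least squares suffices here because the toy problem is linear in the parameter.

```python
import numpy as np

# Toy parametric PDE: -u''(x) = a*sin(pi*x), u(0)=u(1)=0, parameter `a`.
n = 49                                  # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# Traditional solver: tridiagonal finite-difference Laplacian.
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def solve_traditional(a):
    return np.linalg.solve(A, a * np.sin(np.pi * x))

# Training data from the traditional solver, over a few parameter values.
params = np.array([0.5, 1.0, 1.5, 2.0])
solutions = np.stack([solve_traditional(a) for a in params])   # (4, n)

# "Surrogate": least-squares fit u(x; a) ≈ a * w(x).
w = params @ solutions / (params @ params)

def solve_surrogate(a):
    return a * w                        # instant, no linear solve

# The surrogate handles an unseen parameter without re-solving the PDE.
err = np.max(np.abs(solve_surrogate(3.0) - solve_traditional(3.0)))
```

The speedup story is the same as in the real methods: all the expensive solves happen once, offline, and evaluation at a new parameter is nearly free.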

1

Optimal-Asshole t1_j9c4h8d wrote

Okay lol so I'm actually researching kinda similar things, and I assumed this paper was related because it uses similar tools, but on a closer look, nope, nvm. It's not even using the generative model for anything useful.

So their paper just shows that the basic idea of least-squares PDE solving can be used for generative models. Okay, now it's average class-project tier. I guess this demonstrates that yes, these workshops accept literally anything.
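For anyone unfamiliar, "least-squares PDE solving" just means: parameterize a candidate solution, then minimize the squared PDE residual by gradient descent. A minimal sketch (my own toy, not the paper's setup; the grid values stand in for what would be a neural network):

```python
import numpy as np

# Toy problem: -u''(x) = f(x) on (0,1), u(0)=u(1)=0, finite differences.
# The residual is scaled by h^2 so gradient descent stays well-behaved.
n = 7
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
b = h**2 * np.pi**2 * np.sin(np.pi * x)            # h^2 * f
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

u = np.zeros(n)                                    # "trainable parameters"
lr = 0.1
for _ in range(20000):
    r = A @ u - b                                  # PDE residual
    u -= lr * (A.T @ r)                            # gradient of 0.5*||r||^2

# Gradient descent drives the residual to zero, recovering the same
# answer a direct solver gives.
direct = np.linalg.solve(A, b)
err = np.max(np.abs(u - direct))
```

Swap the grid values for a network and the squared residual for a sampled integral and you get the physics-informed flavor everyone builds on.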

Edit: it's still not plagiarism, just not very novel. Plagiarism is stealing ideas without credit; what they did was discuss an existing idea and extend it experimentally in a very small way. Not plagiarism.

14

Optimal-Asshole t1_j9c20cy wrote

I think these workshops accept every submission that is not incoherent or desk rejected.

From my quick glance, it doesn't seem like plagiarism, since they cite prior work amply. As far as the justification goes, there are already some generative approaches for solving parametric PDEs. It doesn't seem like the best paper ever, but I don't think it's that bad.

14

Optimal-Asshole t1_j91boue wrote

Be the change you want to see in the subreddit. Avoid making low-quality posts yourself, and actually post your own high-quality research discussions before you complain.

"No one with working brain will design an ai that is self aware.(use common sense)" CITATION NEEDED. Some people would do it on purpose, and it can happen by accident.

62

Optimal-Asshole t1_j41nlj5 wrote

Here's a paper that uses gradient descent to train the meta-layer, gradient descent to train the hyperparameters of that gradient descent, and so forth. The hyperparameters of the topmost meta-layer matter less and less as you add meta-depth, i.e. add more meta-"layers".

https://arxiv.org/abs/1909.13371
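The core trick is the hypergradient: differentiate the post-update loss with respect to the learning rate, then do gradient descent on the learning rate itself. A toy sketch (my own, not the paper's algorithm; the paper uses autodiff, here the hypergradient is worked out by hand):

```python
# Inner problem: minimize f(w) = 0.5 * w^2, so grad f(w) = w.
# One inner step: w1 = w0 - lr * w0 = w0 * (1 - lr).
# Meta-loss: L(lr) = 0.5 * w1^2 = 0.5 * w0^2 * (1 - lr)^2.
# Hypergradient by hand: dL/dlr = -w0^2 * (1 - lr).

w0 = 5.0
lr = 0.01          # inner learning rate, now itself a trainable parameter
meta_lr = 0.02     # learning rate of the meta-level (and so on, recursively)

for _ in range(1000):
    hypergrad = -w0**2 * (1 - lr)
    lr -= meta_lr * hypergrad

# The meta-level drives lr toward 1.0, the step size that solves this
# quadratic in a single inner step.
```

The paper's point is what happens when you stack this: the meta-meta learning rate (here `meta_lr`) matters less and less the deeper the meta-stack goes.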

3

Optimal-Asshole t1_iyqtp3l wrote

No, the reason for hyperparameter optimization isn't job security. It's that choosing better hyperparameters produces better results, which means more success in applications. There are people working on automatic hyperparameter optimization.
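Even the simplest automated approach, random search, is a few lines. A minimal sketch (names and the toy objective are illustrative; in practice `validation_loss` would train a model and score it on held-out data):

```python
import random

def validation_loss(lr, reg):
    # Stand-in for "train a model, return validation loss". A toy
    # function with a known minimum keeps the sketch self-contained.
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

random.seed(0)
best = None
for _ in range(200):
    trial = {"lr": random.uniform(0.0, 1.0),
             "reg": random.uniform(0.0, 0.1)}
    loss = validation_loss(**trial)
    if best is None or loss < best[0]:
        best = (loss, trial)

# best[1] now holds the best hyperparameters found over 200 trials.
```

Fancier tools (Bayesian optimization, successive halving) just spend the trial budget more cleverly than uniform sampling.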

But let's not act like it's solely due to some community-caused phenomenon and engineers putting on a show. Honestly, your message comes off as a little bitter.

2