bernhard-lehner t1_j8r9z7j wrote
Reply to comment by Competitive_Dog_6639 in [D] Lion , An Optimizer That Outperforms Adam - Symbolic Discovery of Optimization Algorithms by ExponentialCookie
I would have named it "Eve", as she came after Adam (if you are into these stories)
bernhard-lehner t1_j7jgi4y wrote
Reply to High-speed cameras and deep learning [Research] by A15L
One practical issue with high-speed cameras is the lighting required to still get enough exposure. Depending on your setup, the lights might attract a lot of insects, which can then interfere with your system.
bernhard-lehner t1_j3bddsb wrote
Reply to [D] I recently quit my job to start a ML company. Would really appreciate feedback on what we're working on. by jrmylee
You might want to consider a different name, as there is already a library with a very similar name: https://pypi.org/project/rubberband/
bernhard-lehner t1_j1lresi wrote
Reply to [D] What are some applied domains where academic ML researchers are hoping to produce impressive results soon? by [deleted]
Uncertainty estimation and anomaly/novelty detection are far from solved for real-world problems, despite numerous papers presenting impressive but mostly cherry-picked results.
bernhard-lehner t1_iywb1n5 wrote
Reply to [D] Best object detection architecture out there in terms of accuracy alone by somebodyenjoy
If compute doesn't seem to be an issue, why not just try what works best on your data?
bernhard-lehner t1_iy44yqs wrote
Don't underestimate the usefulness of a simple random projection
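To illustrate the point: a plain Gaussian random projection approximately preserves pairwise distances (the Johnson-Lindenstrauss idea), with no training at all. A minimal sketch with NumPy, using made-up toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 100 samples in 10,000 dimensions
X = rng.normal(size=(100, 10_000))

# Project down to k dimensions with a random Gaussian matrix;
# scaling by 1/sqrt(k) roughly preserves pairwise distances
k = 256
P = rng.normal(size=(10_000, k)) / np.sqrt(k)
X_proj = X @ P

print(X_proj.shape)  # (100, 256)
```

scikit-learn also ships this as `sklearn.random_projection.GaussianRandomProjection` if you'd rather not roll it yourself.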
bernhard-lehner t1_iuv03xs wrote
Reply to comment by ReginaldIII in [R] Is there any work being done on reduction of training weight vector size but not reducing computational overhead (eg pruning)? by Moose_a_Lini
These are exactly the questions one needs to ask before even starting. I have seen it numerous times: people working on something that might be interesting, but is utterly useless at the end of the day.
bernhard-lehner t1_iuqkh9g wrote
Reply to comment by Ulfgardleo in [R] Is there any work being done on reduction of training weight vector size but not reducing computational overhead (eg pruning)? by Moose_a_Lini
"not reducing computational overhead" is not the same as not reducing performance
bernhard-lehner t1_iuqc992 wrote
Reply to [R] Is there any work being done on reduction of training weight vector size but not reducing computational overhead (eg pruning)? by Moose_a_Lini
It would help if you explain what exactly you want to transmit, the model, results, gradients,...? Btw, how would pruning not reduce the computational demand?
bernhard-lehner t1_irvpvmq wrote
Reply to [D] Looking for some critiques on recent development of machine learning by fromnighttilldawn
Not recent, but still interesting:
The Mythos of Model Interpretability: https://arxiv.org/abs/1606.03490
bernhard-lehner t1_ir43bht wrote
Reply to comment by IdentifiableParam in [R] Stop Wasting My Time! Saving Days of ImageNet and BERT Training with Latest Weight Averaging by rlresearcher
Yeah, that's hardly a novel approach... but I have to admit that I could also spend more time checking whether anyone else has had the same idea I'm trying at the moment. We really need "Schmidhuber as a Service" :)
bernhard-lehner t1_ir40v91 wrote
Reply to comment by twocupv60 in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
I would recommend subsampling in a way that keeps the important characteristics of your data; just sampling randomly might not be good enough.
bernhard-lehner t1_iqpxkj7 wrote
"Document, document, document,...". Lol, and who the hell is going to find the time to ever go back and read the documentation? I'm not sure this post was written by somebody who actually works at the level of research and coding...
bernhard-lehner t1_iqpupcp wrote
Reply to [R] An easy-to-read preprint on Fake News Detection during US 2016 elections - Accuracy of 95%+ by loosefer2905
I wouldn't consider accuracy an adequate metric for this kind of task, though; with imbalanced classes it can look great while the model misses exactly what it is supposed to catch...
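A quick sketch of why, with made-up class proportions: if only 5% of articles are fake, a "detector" that never flags anything already scores 95% accuracy while catching zero fakes.

```python
import numpy as np

# Hypothetical labels: 95 genuine articles (0), 5 fake ones (1)
y_true = np.array([0] * 95 + [1] * 5)

# A useless "detector" that always predicts "genuine"
y_pred = np.zeros_like(y_true)

accuracy = (y_true == y_pred).mean()
print(accuracy)  # 0.95 -- looks impressive, detects nothing

# Recall on the fake class exposes the failure
fake = y_true == 1
recall = (y_pred[fake] == 1).mean()
print(recall)  # 0.0
```

Precision/recall, F1, or AUROC would tell you far more here than a single accuracy number.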
bernhard-lehner t1_jalb613 wrote
Reply to [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
I don't think he actually "worked on Google's AI", as in being involved in the research and development part.