ml-research t1_j90suwh wrote
Finding open problems
ml-research t1_j4fpav0 wrote
Thanks for sharing!
> The website works by fetching new papers daily from arxiv.org, using PapersWithCode to filter out the most relevant ones.
What do you mean by "relevant"? What kinds of papers do you fetch?
ml-research t1_j45nvno wrote
Reply to [D] Bitter lesson 2.0? by Tea_Pearce
Yes, I guess feeding more data to larger models will be better in general.
But what should those of us who don't have access to large computing resources do while waiting for compute to get cheaper? Maybe trade off the amount of inductive bias against raw scaling, so we can realize some of the predicted improvements a bit earlier?
ml-research t1_j3l36cj wrote
Reply to [P] searchthearxiv.com: Semantic search across more than 250,000 ML papers on arXiv by universal_explainer
Does it omit some papers if it fails to parse them? Because I cannot find some arXiv papers.
ml-research t1_j3l1fg3 wrote
Probably look for something Jürgen Schmidhuber wrote or presented.
ml-research t1_iv4nhbh wrote
Reply to comment by lewtun in [P] Learn diffusion models with Hugging Face course 🧨 by lewtun
> Good skills in Python 🐍
For a moment Python 🐍 looked like Python 2 lol.
ml-research t1_itxtcod wrote
Reply to [D]Cheating in AAAI 2023 rebuttal by [deleted]
Suddenly got deleted 🤔
ml-research t1_jb1nlad wrote
Reply to To RL or Not to RL? [D] by vidul7498
People said similar things about deep learning a long time ago.
If you can use supervised learning, you should, since it means you have tons of data with a ground-truth label for each decision. But many real-world problems are not like that; even humans don't know whether each of their decisions is optimal.
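The distinction can be sketched in a toy example (my own illustration, not from the thread): in supervised learning every decision comes with its own label, while in RL a whole trajectory of decisions gets only a scalar return, so credit must be assigned across steps. The environment and actions below are made up for illustration.

```python
# Supervised learning: each (state, action) pair carries a ground-truth label,
# so the correct choice at every single decision point is known.
supervised_data = [
    ({"obs": 0}, "right"),  # label says the correct action here is "right"
    ({"obs": 1}, "left"),
]

# RL: we only observe the total return of a whole episode; no step is labeled.
def play_episode(policy, steps=3):
    """Run a toy episode; only the summed reward is observed at the end."""
    state = 0
    total_reward = 0.0
    for _ in range(steps):
        action = policy(state)
        # Hypothetical dynamics: "right" moves forward and earns reward 1.
        state += 1 if action == "right" else -1
        total_reward += 1.0 if action == "right" else 0.0
    return total_reward  # which individual steps deserve credit is unknown

always_right = lambda s: "right"
print(play_episode(always_right))  # 3.0
```

The point of the sketch: `supervised_data` tells you the right answer per decision, while `play_episode` only tells you how the whole sequence went, which is exactly the setting where supervised learning isn't available and RL-style credit assignment is needed.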