ACH-S

ACH-S t1_jawnajq wrote

I'm not sure whether you mean genetic algorithms or evolutionary algorithms, or if those terms are interchangeable for you (often, they are not). Anyway, a field that heavily relies on them is Quality-Diversity (https://quality-diversity.github.io/, there is a nice list of papers there). I would also recommend having a look at the proceedings of the GECCO conference (e.g. https://dl.acm.org/doi/proceedings/10.1145/3512290). The conference is much smaller than NeurIPS/ICML/etc. and the research quality tends to be a bit more variable, but you'll see that evolutionary algorithms, and genetic ones in particular, are far from dead.

The idea that "designing an experiment for a genetic algorithm requires sufficient prior" doesn't sound correct to me. Generally you turn to these methods when you *don't* have any reliable priors on the search space (as other comments have pointed out, see CMA-ES as an example; I'll add ES https://arxiv.org/abs/1703.03864 as another useful one that I've personally often used to simplify meta-learning problems).
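To make the "no priors needed" point concrete, here is a minimal sketch in the spirit of the ES paper linked above (Salimans et al.): estimate a search direction purely from Gaussian perturbations of the parameters and their fitness scores, with no analytic gradient or structural assumptions about the objective. The toy fitness function and all hyperparameters are illustrative choices of mine, not from the paper.

```python
import random

def fitness(theta):
    # Toy black-box objective: maximize -(sum of squares), optimum at 0.
    return -sum(x * x for x in theta)

def es_step(theta, sigma=0.1, alpha=0.02, pop=50, rng=random):
    """One evolution-strategies update: sample Gaussian perturbations,
    score them, and move along the fitness-weighted noise directions."""
    dim = len(theta)
    grad = [0.0] * dim
    for _ in range(pop):
        eps = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        f = fitness([t + sigma * e for t, e in zip(theta, eps)])
        for i in range(dim):
            grad[i] += f * eps[i]
    # Stochastic ascent on the Gaussian-smoothed objective.
    return [t + alpha * g / (pop * sigma) for t, g in zip(theta, grad)]

random.seed(0)
theta = [1.0, -2.0, 0.5]
for _ in range(300):
    theta = es_step(theta)
print(fitness(theta))  # should end up close to 0
```

The only thing the loop ever touches is fitness evaluations, which is exactly why these methods are the tool of choice when you can't write down a gradient or a prior.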

1

ACH-S t1_j10lgug wrote

It's not really a useful visualisation though, and the title of the submission is scarier than it should be. Some mismatch between what happens in industry and what is covered in the news is expected, as "miscellaneous errors" is probably not as exciting for most readers as system intrusion. If you look at the mismatch with academia, things get worse: it's not at all clear whether those keywords were cited as examples in the academic papers, were the principal topic the papers were addressing, or were just used as the easiest benchmark/baseline to show an idea works, etc.

Without accounting for factors like these, the figure doesn't really teach us anything, and given the title, it looks like they just want to clickbait you into going to their website.

7