mocny-chlapik t1_j8z3vox wrote
How should we control the exposure for people with low cognitive capabilities who might not understand what they are interacting with?
mocny-chlapik t1_j69h5ud wrote
More and more information is emerging about the huge human annotation efforts going on at OpenAI. It seems the missing secret ingredient was money, which could buy lots of relevant data. This has several implications: (1) it might be impossible to replicate some of these models without millions of dollars invested in similar data collection efforts; (2) the range of applications may actually be broader than previously thought, if we are willing to pay people to generate the data; (3) they were apparently no longer able to find significant improvements from scaling alone. The scaling era might be nearly over.
mocny-chlapik t1_j08w90k wrote
Can airplanes fly? They clearly do not flap their wings, so we shouldn't say they fly. In nature, we can see that flying is based on flapping wings, not on jet engines. Thus we shouldn't say that airplanes fly, since jet engines are clearly not capable of flight; they are merely moving air with their turbines. Even though we can see that airplanes are in the air, it is only a trick, and they are not actually flying in the philosophical sense of the word.
mocny-chlapik t1_iqlt8jo wrote
Reply to comment by cthorrez in [D] Why is the machine learning community obsessed with the logistic distribution? by cthorrez
It's about the speed of computation, not the complexity of the definition. If you need to evaluate the function millions or even billions of times for each sample, it makes sense to optimize it.
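To illustrate the point, here is a minimal Python sketch (function names are my own, for illustration only) of why the logistic CDF is cheap to evaluate: it has an elementary closed form (a single exponential and a division), whereas the Gaussian CDF requires the error function, a special function with no elementary closed form.

```python
import math

def logistic_cdf(x: float) -> float:
    # Logistic CDF: one exp, one add, one division.
    return 1.0 / (1.0 + math.exp(-x))

def normal_cdf(x: float) -> float:
    # Gaussian CDF: needs erf, which is typically evaluated
    # by a polynomial/rational approximation under the hood.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Both are symmetric sigmoids centered at 0.5, but in a hot loop run billions of times the cheaper closed form adds up.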
mocny-chlapik t1_j91uejr wrote
Reply to comment by BronzeArcher in [D] What are the worst ethical considerations of large language models? by BronzeArcher
Yeah, I mean people with mental illness (e.g. schizophrenia), people with debilitatingly low intelligence, and similar cases. Who knows how they would interact with seemingly intelligent LMs.