Grouchy-Friend4235
Grouchy-Friend4235 t1_j9ojfjs wrote
Reply to comment by shwerkyoyoayo in What. The. ***k. [less than 1B parameter model outperforms GPT 3.5 in science multiple choice questions] by Destiny_Knight
All LLM of the ChatGPT kind are essentially trained on the test set 😉
Grouchy-Friend4235 t1_j6kopd9 wrote
Reply to comment by CHARRO-NEGRO in What jobs will be one of the last remaining ones? by MrCensoredFace
You are sadly mistaken on that one. AI will make far fewer errors, and those that do happen will be an easy win in court. MDs showed during the pandemic that they have literally no clue.
Grouchy-Friend4235 t1_j6kogon wrote
Reply to comment by ihateshadylandlords in What jobs will be one of the last remaining ones? by MrCensoredFace
Also hairdressers
Grouchy-Friend4235 t1_iu0ozx1 wrote
Reply to comment by 4e_65_6f in Large Language Models Can Self-Improve by xutw21
That's pretty close to the textbook definition of "repeating what others (would have) said".
Grouchy-Friend4235 t1_itz20o2 wrote
Reply to comment by kaityl3 in Large Language Models Can Self-Improve by xutw21
Absolutely parroting. See this example. A three year old would have a more accurate answer. https://imgbox.com/I1l6BNEP
These models don't work the way you think they do. It's just math. There is nothing in these models that could even begin to "choose words". All there is is a large set of formulae with parameters tuned so that there is an optimal response to most inputs. Within the model everything is just numbers. The model never even sees words, not ever(!). All it sees are bare numbers that someone has picked for it (someone being the humans who built the mappers from words to numbers and vice versa).
There is no thinking going on in these models, not even a little, and most certainly there is no intelligence. Just repetition.
All intelligence that is needed to build and use these models is entirely human.
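The word-to-number mapping described above can be sketched as a toy tokenizer. This is a made-up vocabulary purely for illustration; real LLMs use subword tokenizers (e.g. BPE) with vocabularies of tens of thousands of entries, and none of the names or mappings below come from any actual model:

```python
# Toy illustration of the word <-> number mapping: the model only ever
# sees the numbers, never the words themselves.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}  # hypothetical vocabulary
inverse = {i: w for w, i in vocab.items()}

def encode(text):
    """Map words to the numbers the model actually operates on."""
    return [vocab[w] for w in text.split()]

def decode(ids):
    """Map the model's output numbers back to human-readable words."""
    return " ".join(inverse[i] for i in ids)

ids = encode("the cat sat on the mat")
print(ids)          # [0, 1, 2, 3, 0, 4]
print(decode(ids))  # the cat sat on the mat
```

Everything between `encode` and `decode` in a real system is arithmetic on those integers (and the vectors they index), which is the point being made above.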
Grouchy-Friend4235 t1_itwn1m9 wrote
Reply to comment by kaityl3 in Large Language Models Can Self-Improve by xutw21
It's the same algorithm over and over again. It works like this:
1. Tell me something.
2. I will add a word (the one that seems most fitting, based on what I have been trained on).
3. I will look at what you said and what I said.
4. Repeat from 2 until there are no more "good" words to add, or the length is at maximum.
That's all these models do. Not intelligent. Just fast.
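The loop above can be sketched as greedy next-word generation. The `most_fitting_word` function here is a stand-in for the trained network (a made-up lookup table, not a real model); all names and the toy data are illustrative assumptions:

```python
def most_fitting_word(context):
    """Stand-in for the trained model: given the context so far, return
    the word that "seems most fitting". A real LLM computes this with
    billions of learned parameters; this toy table is made up."""
    table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    return table.get(context[-1])  # None means no more "good" words

def generate(prompt, max_len=8):
    """Steps 2-4 above: keep appending the best next word until
    nothing good remains or the length limit is hit."""
    words = prompt.split()                  # step 1: tell me something
    while len(words) < max_len:
        nxt = most_fitting_word(words)      # step 2: add the most fitting word
        if nxt is None:                     # step 4: stop when no "good" word remains
            break
        words.append(nxt)                   # step 3: context now includes both sides
    return " ".join(words)

print(generate("the"))  # the cat sat on the cat sat on
```

With a deterministic table the loop just cycles until `max_len`, which makes the "repeat from 2" structure easy to see.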
Grouchy-Friend4235 t1_ittye65 wrote
Reply to comment by kaityl3 in Large Language Models Can Self-Improve by xutw21
> how incredibly impressive it is that these models can interpret and communicate using it.
Impressive, yes, but it's a parrot made in software. The fact that it uses language does not mean it communicates. It is just uttering words that it has seen used previously, given its current state. That's all there is.
Grouchy-Friend4235 t1_itpfsgd wrote
Reply to comment by 4e_65_6f in Large Language Models Can Self-Improve by xutw21
Repeating what others said is not particularly intelligent.
Grouchy-Friend4235 t1_itn55n6 wrote
Reply to comment by 4e_65_6f in Large Language Models Can Self-Improve by xutw21
Not gonna happen. My dog is more generally intelligent than any of these models and he does not speak a language.
Grouchy-Friend4235 t1_je1ffq7 wrote
Reply to Chat-GPT 4 is here, one theory of the Singularity is things will accelerate exponentially, are there any signs of this yet and what should we be watching? by Arowx
Stupidity and ignorance seem to be growing exponentially. Does that count?