Grouchy-Friend4235 t1_itz20o2 wrote

Absolutely parroting. See this example; a three-year-old would give a more accurate answer. https://imgbox.com/I1l6BNEP

These models don't work the way you think they do. It's just math. There is nothing in these models that could even begin to "choose words". All there is is a large set of formulae with parameters tuned so that most inputs produce a near-optimal response. Within the model everything is just numbers. The model never sees words, not ever(!). All it sees are bare numbers that someone has picked for it, "someone" being the humans who built mappers from words to numbers and vice versa.
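To make the word-to-number mapping concrete, here is a minimal sketch with a hypothetical toy vocabulary. Real tokenizers (BPE and the like) split text into subword pieces rather than whole words, but the principle is the same: the model only ever sees the integer IDs.

```python
# Toy word <-> number mappers. Everything below is illustrative,
# not how any particular production tokenizer is implemented.

vocab = ["the", "cat", "sat", "on", "mat"]          # hypothetical vocabulary
word_to_id = {w: i for i, w in enumerate(vocab)}    # words -> numbers
id_to_word = {i: w for w, i in word_to_id.items()}  # numbers -> words (the reverse mapper)

def encode(text):
    """Turn words into the bare numbers the model actually sees."""
    return [word_to_id[w] for w in text.split()]

def decode(ids):
    """Map the model's output numbers back to words for the human reader."""
    return " ".join(id_to_word[i] for i in ids)

print(encode("the cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
print(decode([0, 1, 2, 3, 0, 4]))        # "the cat sat on the mat"
```

Inside the model there are only the ID sequences; the words exist solely at the boundary where humans read the output.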

There is no thinking going on in these models, not even a little, and most certainly there is no intelligence. Just repetition.

All intelligence that is needed to build and use these models is entirely human.

1

Grouchy-Friend4235 t1_itwn1m9 wrote

It's the same algorithm over and over again. It works like this:

  1. Tell me something
  2. I will add a word (the one that seems most fitting, based on what I have been trained on)
  3. I will look at what you said and what I said.
  4. Repeat from 2 until there are no more "good" words to add, or the maximum length is reached.

That's all these models do. Not intelligent. Just fast.
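The four steps above can be sketched as a toy greedy loop. This is a deliberately crude stand-in: the bigram counts and the tiny "training text" are hypothetical, whereas a real model scores continuations with a neural network over its whole context, but the repeat-until-done structure is the same.

```python
from collections import Counter, defaultdict

# Hypothetical training data for the toy model.
training_text = "the cat sat on the mat the cat ran"

# "Training": count which word follows which.
follows = defaultdict(Counter)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a][b] += 1

def generate(prompt, max_len=8):
    out = prompt.split()                    # 1. tell me something
    while len(out) < max_len:               # 4. ...or the length is at maximum
        candidates = follows[out[-1]]
        if not candidates:                  # 4. no more "good" words to add
            break
        # 2. add the word that seems most fitting given the training counts
        out.append(candidates.most_common(1)[0][0])
        # 3. the whole sequence so far becomes the context for the next step
    return " ".join(out)

print(generate("the"))
```

The loop never deliberates; it just looks up what tended to come next and appends it, which is the mechanical repetition being described.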

0

Grouchy-Friend4235 t1_ittye65 wrote

> how incredibly impressive it is that these models can interpret and communicate using it.

Impressive, yes, but it's a parrot made of software. The fact that it uses language does not mean it communicates. It is just uttering words it has seen used before, conditioned on its current state. That's all there is.

0