sext-scientist t1_iw77ylt wrote
Reply to comment by Ducky181 in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Size is almost certainly the entire problem with these models. Recent research into how human brains process information has confirmed that current-generation language models have 6-9 orders of magnitude less compute than the human brain.
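How big the gap looks depends heavily on which brain-compute estimate you use, and the comment doesn't cite one. A minimal back-of-envelope sketch, where every number is an illustrative assumption rather than a figure from the comment or a specific paper:

```python
import math

# Rough continuous inference compute for a GPT-3-sized model:
# ~2 FLOPs per parameter per token, 175e9 parameters, ~20 tokens/s served.
model_flops_per_s = 2 * 175e9 * 20  # ~7e12 FLOP/s

# Commonly cited orders of magnitude for the brain's equivalent compute
# vary enormously; these three tiers are assumptions for illustration.
brain_estimates = {
    "low-end estimate": 1e15,
    "mid-range estimate": 1e18,
    "high-end estimate": 1e21,
}

for label, brain_ops in brain_estimates.items():
    gap = math.log10(brain_ops / model_flops_per_s)
    print(f"{label}: ~{gap:.1f} orders of magnitude")
```

Under these assumptions the gap spans roughly 2 to 8 orders of magnitude, so the 6-9 figure corresponds to the higher brain-compute estimates.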
Hardware-wise, hopefully 3D silicon and smaller process nodes will narrow that gap in the next few years.