Accomplished-Bill-45

Accomplished-Bill-45 OP t1_iz8u38x wrote

Yeah,

I have tested it, including:

Social reasoning (it does a good job)

Psychological reasoning (bad)

Solving math questions (it's OK, better than Minerva)

Asking LSAT logic-game questions (it gives its thought process, but failed to give correct answers)

I also wrote a short mystery story (about 200 words, with context) and asked if it could tell whether the victim was murdered or committed suicide. It actually did an OK job on this one, provided the context is clear enough that anyone could deduce a conclusion using common sense.
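A probe like that mystery test can be scripted; here is a minimal sketch, assuming the OpenAI Python SDK. The story text, the `build_probe` helper, and the model name are all illustrative, not from the original comment.

```python
# Hypothetical sketch of the murder-vs-suicide probe; the story and
# helper names are illustrative placeholders.

STORY = (
    "Detective notes: The study door was locked from the inside. "
    "The victim was found with a revolver in his right hand, "
    "yet he had lost the use of that hand years ago."
)

def build_probe(story: str) -> str:
    """Wrap a short mystery in a deduction prompt asking for a verdict."""
    return (
        "Read the following short mystery and decide, using common sense, "
        "whether the victim was murdered or committed suicide. "
        "Explain your reasoning step by step.\n\n" + story
    )

# The actual call (requires an API key and network access), assuming
# the OpenAI Python SDK:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": build_probe(STORY)}],
# )
# print(reply.choices[0].message.content)

print(build_probe(STORY))
```

Keeping the deducible clue (the unusable hand) in the context is the point: the test checks common-sense deduction, not recall.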

1

Accomplished-Bill-45 OP t1_iz8tp4b wrote

So I just found out that people tend to categorize reasoning into:

logical reasoning

common-sense reasoning

knowledge-based reasoning

social reasoning

psychological reasoning

quantitative reasoning (solving math problems)

So do you mean that if someone needs to build a generalized model that can do all of the above without task-specific fine-tuning, an LLM might be the most straightforward way? We can expect it to do some simple reasoning, like GPT does.

But for further improvement, can we use GPT as a pre-trained model and add domain-specific models on top (most likely using symbolic representations) for training?

But can symbolic AI alone perform all of the above kinds of reasoning? Can graphical models (which my intuition says are in some way a representation of a logical thought process) be incorporated into a symbolic representation?

2

Accomplished-Bill-45 t1_iz7dgwm wrote

Are the current state-of-the-art models for logical/common-sense reasoning all based on NLP (LLMs)?

I'm not very familiar with NLP, but I'm playing around with OpenAI's ChatGPT and am particularly impressed by its reasoning and its thought process. Are all good reasoning models at the moment derived from NLP (LLM) models trained with RL methods? What are some papers/research teams to read/follow to understand this area better and stay up to date?

1