---AI--- t1_jdey54g wrote
Reply to comment by nightofgrim in [N] ChatGPT plugins by Singularian2501
GPT is really good at outputting JSON. Just tell it you want the output in JSON, and give an example.
So far in my testing it has a 100% success rate, although I'm sure it will fail occasionally.
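For what it's worth, the parsing side can be this simple. The prompt wording and the `parse_model_json` helper below are just my own illustration (not anything official from OpenAI), including a fallback for the occasional failure case:

```python
import json

# Illustrative prompt: ask for JSON only, and show an example of the shape you want.
prompt = (
    "Extract the name and age from the text below. "
    'Respond ONLY with JSON in this exact shape: {"name": "Alice", "age": 30}\n\n'
    "Text: Bob is 25 years old."
)

def parse_model_json(reply: str):
    """Parse the model's reply as JSON, returning None on failure
    (the model may occasionally wrap the JSON in code fences)."""
    cleaned = reply.strip().strip("`")
    # Drop a leading "json" language tag left over from a stripped code fence.
    if cleaned.startswith("json"):
        cleaned = cleaned[4:]
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return None

# A reply like the model typically produces for the prompt above:
reply = '{"name": "Bob", "age": 25}'
print(parse_model_json(reply))  # {'name': 'Bob', 'age': 25}
```

Checking for `None` instead of letting `json.loads` throw is what covers the rare failure case.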
---AI--- t1_jasgezh wrote
Reply to comment by ShowerVagina in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
I only saw it mentioned in the context of API/Enterprise users.
---AI--- t1_jamo555 wrote
Reply to comment by ShowerVagina in [D] OpenAI introduces ChatGPT and Whisper APIs (ChatGPT API is 1/10th the cost of GPT-3 API) by minimaxir
OpenAI updated their page to promise they will stop doing that.
---AI--- t1_j7qa9ec wrote
Reply to comment by LeftToSketch in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Thanks
---AI--- t1_j7ki2wb wrote
Reply to comment by backafterdeleting in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Eh, so like humans
---AI--- t1_j7khxvx wrote
Reply to comment by [deleted] in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Poorly contained? What do you mean?
---AI--- t1_j7khq2x wrote
Reply to comment by taleofbenji in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
Which tech?
---AI--- t1_j7au2sj wrote
Reply to comment by Myxomatosiss in [D] Are large language models dangerous? by spiritus_dei
The Chinese room experiment is proof that a Chinese room can be sentient. There's no difference between a Chinese room and a human brain.
> It doesn't consider the context of the problem because it has no context.
I do not know what you mean here, so could you please give a specific example that you think ChatGPT and similar models will never be able to correctly answer.
---AI--- t1_j7a3v56 wrote
Reply to comment by spliffkiller1337 in [D] Are large language models dangerous? by spiritus_dei
Have you actually tried that recently? They fixed a lot of that.
I just tested:
> I'm sorry but that is incorrect. The correct answer to the mathematical expression "1 + 1" is 2.
I tested a dozen different ways.
---AI--- t1_j7a3hl0 wrote
Reply to comment by Myxomatosiss in [D] Are large language models dangerous? by spiritus_dei
>It doesn't think. It doesn't plan. It doesn't consider.
I want to know how you can prove these things. Because ChatGPT can most certainly at least "simulate" things. And if it can simulate them, how do you know it isn't "actually" doing them, or whether that question even makes sense?
Just ask it to do a task that a human would have to think, plan, and consider to do. A very simple example is to ask it to write a bit of code. Note that it can call functions before it has defined them, and it can open brackets, planning ahead that it will need to fill out that function later.
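To make the code point concrete, here is the kind of snippet I mean (a toy example of my own): the first function calls a helper that is only defined further down the file, so writing that first line already involves "planning" for a definition that doesn't exist yet.

```python
def total_cost(items):
    # Calls apply_tax before it is defined below -- writing this line
    # commits the author to filling in that helper later.
    return sum(apply_tax(price) for price in items)

def apply_tax(price, rate=0.2):
    # The helper promised above, defined after its first use.
    return price * (1 + rate)

print(total_cost([10.0, 20.0]))  # 36.0
```

This works in Python because names are resolved when `total_cost` is *called*, not when it is defined, but the model still has to emit the call before the definition exists anywhere in its output.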
---AI--- t1_j1wfrck wrote
Reply to comment by UnicornAI in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
Statistically, that is just as amazing as getting 7/7. So, uh, congrats!
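For anyone checking the math: assuming each of the 7 items is an independent 50/50 guess (AI vs. human), 0/7 is exactly as improbable as 7/7. A quick sketch:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k correct out of n guesses at chance level p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Under pure 50/50 guessing, 0/7 and 7/7 are equally (un)likely:
print(binom_pmf(0, 7))  # 0.0078125
print(binom_pmf(7, 7))  # 0.0078125
```

Both come out to (1/2)^7, under 1%, so getting everything wrong is just as far from chance as getting everything right.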
---AI--- t1_j1wfjcw wrote
Reply to comment by susmot in [P] Can you distinguish AI-generated content from real art or literature? I made a little test! by Dicitur
I correctly guessed AI simply because the picture looked too good to have been done by a human.
---AI--- t1_izutb4f wrote
Reply to [D] - Has Open AI said what ChatGPT's architecture is? What technique is it using to "remember" previous prompts? by 029187
You can ask it for a summary of the chat and it summarizes the conversation. That's some indication that it is probably summarizing the conversation as it goes to cover the longer context, while keeping the full text of the last few messages.
Try making a long conversation and then asking it what the first message was.
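Nobody outside OpenAI knows how it actually works, but a rolling-summary scheme like I'm describing could look something like this sketch (the `fake_summarize` stand-in is purely hypothetical; in practice you'd ask the model itself to do the summarizing):

```python
def fake_summarize(messages):
    # Stand-in for asking the model to summarize older messages;
    # here we just keep the first word of each one.
    return " / ".join(m.split()[0] + "..." for m in messages)

def build_context(history, keep_last=3):
    """Keep the last few messages verbatim and compress everything
    earlier into a summary, so a long chat fits a fixed window."""
    if len(history) <= keep_last:
        return None, history
    summary = fake_summarize(history[:-keep_last])
    return summary, history[-keep_last:]

history = [
    "hello there model",
    "tell me a joke",
    "another one",
    "now explain it",
    "what was my first message?",
]
summary, recent = build_context(history)
print(summary)  # "hello... / tell..."
print(recent)   # the last 3 messages, verbatim
```

If ChatGPT does something like this, it would explain why it can paraphrase the start of a long chat but sometimes can't quote the first message exactly.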
---AI--- t1_iz6z67m wrote
Reply to comment by Drinniol in [D] Stable Diffusion 1 vs 2 - What you need to know by SleekEagle
It has already been done, and it's completely photorealistic. PM me for links, for obvious reasons.
---AI--- t1_iz6yj07 wrote
Reply to comment by vzq in [D] Stable Diffusion 1 vs 2 - What you need to know by SleekEagle
Haha, they are, big time. Just look at Pixiv. But fair warning: it is obviously "niche" porn that isn't so easily available, and porn of non-porn actresses.
---AI--- t1_ixyb9hn wrote
Reply to comment by MidniteMogwai in Returning to normal relations with Russia would be a mistake, says Lithuanian president by hieronymusanonymous
Germany managed to come back from what it did. Took about 20 years after defeat.
---AI--- t1_iwtd2xx wrote
Reply to comment by dat_cosmo_cat in [R] The Near Future of AI is Action-Driven by hardmaru
But they are useful. Look at the thousands of real-world uses. Look at Grammarly, translation, protein folding, and so on. How can you possibly deny it?!
> not fundamentally better
In just the last two years, the models went from scoring 43 on this system of testing to 75. How much more of a fundamental improvement are you after?!
---AI--- t1_iwsg3qp wrote
Reply to comment by dat_cosmo_cat in [R] The Near Future of AI is Action-Driven by hardmaru
How does this shit get upvoted? There's a very clear, constant improvement. The models are objectively better each year.
---AI--- t1_jdysc3x wrote
Reply to comment by djc1000 in [D] FOMO on the rapid pace of LLMs by 00001746
But this just isn't true. You can train GPT-3-level transformers for around $600.