Eggy-Toast
Eggy-Toast t1_jd18vf2 wrote
Reply to [P] TherapistGPT by SmackMyPitchHup
Probably good to display some sort of prominent disclaimer: * TherapistGPT and its creators do not practice medicine; TherapistGPT is not an alternative to actual therapeutic care administered by professionals; etc.
Eggy-Toast t1_jcdgpig wrote
Reply to comment by suflaj in [Discussion] What happened to r/NaturalLanguageProcessing ? by MadNietzsche
GPT-4 would understand the point he’s making:
“The point being made here is twofold:
The user is praising ChatGPT for its effectiveness in writing sales emails for difficult clients, highlighting how it has streamlined their workload by replacing the need for an additional staff member and allowing them to multitask.
The user is also critiquing the choice of words used by someone else ("humiliated" and "jailbroken") in the given context, suggesting that the person may not have a proper understanding of the situation.
The logical conclusion drawn from these points is that ChatGPT is a valuable tool that can significantly improve efficiency in handling tasks like sales emails, while also implying that it is important to use appropriate terminology and demonstrate a clear understanding of a situation when discussing or debating any issue.”
L+ratio+maidenless
Eggy-Toast t1_j91zl3f wrote
Reply to comment by [deleted] in [D] Please stop by [deleted]
RemindMe! 10 hours
Eggy-Toast t1_j7noya9 wrote
Reply to comment by blablanonymous in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
Yeah, I completely agree. There are a few different ideas floating around here; specifically, I was referencing the original comment saying it may make open-source AI all but impossible to create and maintain in compliance with EU regulations.
AI can be such a great tool, and it certainly needs regulation. But regulation which would serve to consolidate AI into the hands of the wealthy/powerful would be an absolute travesty.
Eggy-Toast t1_j7bvpp1 wrote
Reply to comment by blablanonymous in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
I’ve thought about it. I do not believe AI is going anywhere, or that it will stop taking jobs. We could slow it down, but I don’t see it stopping without running the risk of falling behind technologically as a country. There are a lot of dying industries; we need ways to keep food on those tables regardless of whether the jobs were lost to AI. Protections for the worker, not sanctions on AI.
Eggy-Toast t1_j7agw75 wrote
Reply to comment by blablanonymous in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
There is not a positive spin to this. The downstream pipeline is ultimately what makes AI beneficial to the common workforce. Complicating it runs the risk of creating another bureaucratic gauntlet that’s all but impossible for the average startup to navigate.
Eggy-Toast t1_j1vhtgg wrote
Reply to comment by Featureless_Bug in [Discussion] 2 discrimination mechanisms that should be provided with powerful generative models e.g. ChatGPT or DALL-E by Exnur0
“This is already wrong — it might work.” Disingenuous much?
The point of that proposed watermark is that it can be imperceptible to the human eye yet still detectable by some algorithm or model. It would only add value to the product, though perhaps not enough to justify the cost of implementing it.
I think your comments also entirely overlook the fact that DALL-E 2 already implements a watermark, and it is in no way subtle, yet it can simply be cropped out.
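Just to illustrate the "imperceptible to the eye, obvious to a detector" idea, here's a toy least-significant-bit watermark sketch. This is purely an illustration of the concept, not how DALL-E 2 or any real system does it, and the function names are made up for the example:

```python
def embed_watermark(pixels, bits):
    """Hide a 0/1 bit pattern in the LSBs of the first len(bits) pixels."""
    marked = list(pixels)
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & ~1) | b  # clear LSB, set it to the watermark bit
    return marked

def extract_watermark(pixels, n_bits):
    """Read the hidden bits back out of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

if __name__ == "__main__":
    image = [200, 123, 45, 67, 89, 210, 33, 140]  # toy grayscale pixel values
    mark = [1, 0, 1, 1, 0, 1, 0, 0]
    stamped = embed_watermark(image, mark)
    assert extract_watermark(stamped, len(mark)) == mark
    # Each pixel changes by at most 1 -- invisible to a viewer,
    # but trivially recoverable by the detector.
    assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
```

A real deployment would need something far more robust (e.g. surviving crops and re-encodes, which is exactly where DALL-E 2's visible corner watermark fails), but the asymmetry between human and algorithmic perception is the whole point.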
Eggy-Toast t1_j1vg5ly wrote
Reply to [Discussion] 2 discrimination mechanisms that should be provided with powerful generative models e.g. ChatGPT or DALL-E by Exnur0
I agree that something like this would be very useful. I tested what ChatGPT could do about detecting cheating in a very minimal way, and it relates closely to your point 2. The idea was to have it rate how likely an answer was to be AI-generated, similar to how plagiarism checkers rate how likely an answer was copied from the internet.
I took a teacher’s refined response to AP test questions and a generated response from ChatGPT, both very good. If I simply asked whether each was written by a student or by ChatGPT (with the note that some students would be very intelligent, or would be well-educated teachers acting like students), it would say that both were generated by a student, or that both were generated by ChatGPT. However, if I asked it to rate the likelihood that each came from ChatGPT on a scale from 1 to 100, I got good results: it ranked the teacher’s response lower than ChatGPT’s. The score for ChatGPT’s response was typically in the 90s, while the teacher’s was around the 70s.
I think refining this functionality would be one way to provide a solution which scales, at least theoretically, infinitely.
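For what it's worth, the approach sketches out like this. The actual model call is stubbed out (`score_with_llm` would be whatever API you use; the prompt wording and the threshold of 85 are my own assumptions, chosen because the scores I saw clustered in the 90s for ChatGPT and around the 70s for the teacher):

```python
def build_prompt(answer: str) -> str:
    """Ask for a 1-100 likelihood score instead of a yes/no judgment."""
    return (
        "On a scale from 1 to 100, rate the likelihood that the following "
        "answer was generated by ChatGPT. Reply with only the number.\n\n"
        f"Answer:\n{answer}"
    )

def classify(score: int, threshold: int = 85) -> str:
    """Threshold the model's score; 85 is an assumed cutoff between the
    ~90s scores seen for ChatGPT text and ~70s for human text."""
    return "likely AI" if score >= threshold else "likely human"

# Usage sketch (score_with_llm is hypothetical):
#   score = int(score_with_llm(build_prompt(student_answer)))
#   verdict = classify(score)
```

The binary question collapses to one answer for both texts, while the graded question preserves a usable signal, which is why the scale matters more than the cutoff here.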
Eggy-Toast t1_jdn7cki wrote
Reply to comment by Fit-Recognition9795 in [D] What happens if you give as input to bard or GPT4 an ASCII version of a screenshot of a video game and ask it from what game it has been taken or to describe the next likely action or the input? by Periplokos
ChatGPT doesn’t know that GPT-4 has multimodal input though, right? Based on the phrase “not [designed] to analyze images or visual data,” I assume that’s the case.