Eggy-Toast

Eggy-Toast t1_jd18vf2 wrote

Probably good to display some sort of prominent disclaimer: “TherapistGPT and its creators do not practice medicine; TherapistGPT is not a substitute for actual therapeutic care administered by professionals,” etc.

1

Eggy-Toast t1_jcdgpig wrote

GPT-4 would understand the point he’s making

“The point being made here is twofold:

The user is praising ChatGPT for its effectiveness in writing sales emails for difficult clients, highlighting how it has streamlined their workload by replacing the need for an additional staff member and allowing them to multitask.

The user is also critiquing the choice of words used by someone else ("humiliated" and "jailbroken") in the given context, suggesting that the person may not have a proper understanding of the situation.

The logical conclusion drawn from these points is that ChatGPT is a valuable tool that can significantly improve efficiency in handling tasks like sales emails, while also implying that it is important to use appropriate terminology and demonstrate a clear understanding of a situation when discussing or debating any issue.”

L+ratio+maidenless

7

Eggy-Toast t1_j7noya9 wrote

Yeah, I completely agree. There are a few different ideas floating around here. Specifically, I was referencing the original comment, which said the regulation may make open source AI all but impossible to create or maintain in compliance with the EU.

AI can be such a great tool, and it certainly needs regulation. But regulation which would serve to consolidate AI into the hands of the wealthy/powerful would be an absolute travesty.

1

Eggy-Toast t1_j7bvpp1 wrote

I’ve thought about it. I do not believe AI is going anywhere or will stop taking jobs. We could slow it down, but I don’t see it stopping without running the risk of falling behind as a technological country. There are a lot of dying industries, and we need ways to keep food on those tables regardless of whether the jobs were lost to AI or not. Protections for the worker, not sanctions on AI.

1

Eggy-Toast t1_j1vhtgg wrote

“This is already wrong — it might work.” Disingenuous much?

The point of that proposed watermark is that it can be imperceptible to the human eye yet still detectable by an algorithm or model. It would only add value to the product, though perhaps not enough to justify the implementation cost.

I think in your comments, though, you entirely overlooked the fact that DALL·E 2 already implements a watermark; it is in no way subtle, and it can simply be cropped out.
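The “imperceptible to the eye, detectable by software” idea above can be sketched with a least-significant-bit scheme. This is a minimal illustration, not DALL·E 2’s actual watermark (which is visible), and the helper names are hypothetical:

```python
def embed_bits(pixels, bits):
    """Hide a bit string in the least significant bit of each pixel value.

    Flipping the LSB shifts a 0-255 channel value by at most 1, which is
    invisible to the eye but trivially recoverable by software.
    """
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_bits(pixels, n):
    """Read the first n hidden bits back out."""
    return [p & 1 for p in pixels[:n]]

# usage: stamp 8 watermark bits into 8 pixel values, then recover them
pixels = [200, 13, 77, 42, 99, 160, 5, 230]
mark = [1, 0, 1, 1, 0, 1, 0, 0]
stamped = embed_bits(pixels, mark)
recovered = extract_bits(stamped, len(mark))  # equals mark
```

Note this also shows the weakness raised in the thread: the mark lives in specific pixels, so cropping (or re-encoding) those pixels destroys it.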

1

Eggy-Toast t1_j1vg5ly wrote

I agree that something like this would be very useful. I ran a very minimal test of what ChatGPT could do regarding cheating; it relates closely to your point 2. The idea was to have it rate how likely an answer was to be AI-generated, similar to how plagiarism checkers rate how likely an answer was copied from the internet.

I took a teacher’s refined response to AP test questions and a generated response from ChatGPT, both very good. If I simply asked whether each was written by a student or by ChatGPT (noting that some students would be very intelligent, or might be well-educated teachers writing as students), it would say both were generated by a student, or both by ChatGPT. However, when I asked it to rate the likelihood each came from ChatGPT on a scale from 1 to 100, I got good results: it ranked the teacher’s response lower than ChatGPT’s. The score for ChatGPT’s response was typically in the 90s, while the teacher’s was around 70.

I think refining this functionality would be one way to provide a solution which scales, at least theoretically, infinitely.

1