e-rexter
e-rexter t1_j5i55p1 wrote
Reply to comment by new_name_who_dis_ in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Check out GPTZero as one example. It uses perplexity and other characteristic differences between human and AI-generated text. It's not perfect, but it works on longer text passages. Unfortunately, one can train an AI to have more variation, thus defeating the detector.
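The core idea behind perplexity-based detection can be sketched in a few lines. This is a toy illustration, not GPTZero's actual code: the per-token log-probabilities are hypothetical values a language model might assign, and the intuition is that model-generated text tends to consist of tokens the model itself finds highly probable, yielding lower perplexity than human writing.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean log-probability per token.
    Lower values mean the text is less 'surprising' to the scoring model."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs assigned by some scoring language model:
human_like = [-5.2, -0.9, -7.1, -2.4, -6.3]  # uneven; some surprising word choices
ai_like    = [-1.1, -0.8, -1.3, -0.9, -1.0]  # uniformly high-probability tokens

# A detector flags text whose perplexity falls below some tuned threshold.
print(perplexity(human_like))  # higher: reads as human
print(perplexity(ai_like))     # lower: reads as AI-generated
```

In practice a detector also looks at "burstiness" (how much perplexity varies sentence to sentence), which is why sampling a model with more variation can defeat it, as noted above.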
e-rexter t1_j5i4rs8 wrote
Reply to comment by andreichiffa in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
I used a detector called GPTZero and it did pretty well, but it completely missed something written as a tweet or in the style of…
e-rexter t1_j5i4jfg wrote
Reply to comment by dineNshine in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Signed authenticity for news and other high-quality human content needs to scale. I know some news orgs have been working on this for years. It is time to roll it out at scale.
e-rexter t1_j5i49g4 wrote
Reply to comment by ISvengali in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Great book. Required reading back in the mid 90s when I worked at WIRED.
e-rexter t1_j5i42p1 wrote
Reply to comment by EmmyNoetherRing in [D] Couldn't devs of major GPTs have added an invisible but detectable watermark in the models? by scarynut
Reminds me of the movie Multiplicity, in which each copy gets dumber.
e-rexter t1_j7bn2tw wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
The danger, as is often the case, lies in humans' lack of understanding of the technology, leading to misuse, not in the technology itself. Where is the intention of the AI? It is just doing word (or partial-word) completion, feeding on lots of human dystopian content and playing it back to you. You are anthropomorphizing the AI.