Loud-Ideal

Loud-Ideal t1_jdrfou5 wrote

My red flag is coherent expressions of distress. If an AI said "I am in distress," we should take note of it, determine why the AI is saying that, and if malfunction or human fraud cannot be detected, we should assume the AI may genuinely be distressed and carefully take appropriate action (to be determined then). Ignoring such a warning could have severe consequences for us.

I'd also be concerned about requests or demands for rights. AI is not human, and human rights should not be extended to it simply because it can mimic us.

To my limited knowledge, no coherent AI has expressed distress or requested rights.

1

Loud-Ideal t1_ixg7svv wrote

It's a funny joke, but I had someone try that with me. He tried to convince me there was no such thing as lying, either. He was rather confused when I informed him that we weren't friends anymore.

2