Sad-Combination78 t1_j74y7wa wrote
Reply to comment by FacelessFellow in ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions by mossadnik
Think about it like this: Anything which learns based on its environment is susceptible to bias.
Humans have biases themselves. Each person has different life experiences and weighs their own lived experiences above hypothetical situations they can't verify themselves. We create models of perception to interpret the world based on our past experiences, and then use these models to further interpret our experiences into the future.
Racism, for example, can be a model taught by others, or a conclusion arrived at from bad data (poor experiences due to individual circumstance). I'm still talking about humans here, but all of this is true for AI too.
AI is not different. AI still needs to learn, and it still needs training data. This data can always be biased. This is just part of reality. We have no objective book to pull from. We make it up as we go. Evaluate, analyze, and expand. That is all we can do. We will never be perfect. Neither will AI.
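A toy sketch of that point (purely illustrative, not tied to any real training pipeline): a "learner" that estimates how common a trait is from whatever slice of the world it happens to see can be confidently wrong, not because its logic is broken, but because its exposure was limited.

```python
# Toy illustration: a learner estimates how common a trait is
# from whatever examples its environment happens to show it.
def learn_rate(samples):
    """Estimate P(trait) as the fraction of observed samples with the trait."""
    return sum(samples) / len(samples)

# Ground truth: the trait occurs 50% of the time in the full population.
population = [1] * 500 + [0] * 500

# But the learner only ever sees a skewed slice of that population
# (here, the first 100 individuals, who all share the trait).
biased_slice = population[:100]
estimate = learn_rate(biased_slice)

print(estimate)  # 1.0 -- confidently wrong, purely from limited exposure
```

The estimator itself is doing everything right given its data; the bias comes entirely from which data it got to see. That's the sense in which no learner, human or AI, escapes the problem.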
Of course, one advantage of AI is that it won't have to reset every 100 years and hope to pass on as much knowledge as it can to its children. Still, this advantage will only show itself with age.
Sad-Combination78 t1_j75312i wrote
you missed the point
the problem isn't humans, it's the concept of "learning"
you don't know something, and from your environment, you use logic to figure it out
the problem is you cannot be everywhere all at once and have every experience ever, so you will always be drawing conclusions from limited knowledge
AI does not and cannot solve this; it is fundamental to learning