I just hope that whatever regulations Congress chooses to implement actually end up being effective at promoting ethics without crushing the field. After seeing Congress question Zuckerberg, I can’t say I have 100% faith in them. But I’m willing to be optimistic that they’ll be able to do a good job, especially since I believe that regulating AI has largely bipartisan support.
I agree that AGI is an important concern. However, my main concern is whether or not AI ethics teams will be effective at promoting ethical practices. For one thing, if a company can just fire the ethics team whenever it doesn’t like what they’re saying, how would they actually be able to make any difference when it comes to AGI? In addition, I’ve heard anecdotes that some people in AI ethics are somewhat out of touch with actual ML engineering/research, which makes some of their suggestions inapplicable (admittedly they’re just anecdotes, so I take them with a grain of salt and this may not be generally true, but I think it’s a concern worth considering). Is there any way that AI ethics teams can overcome these hurdles to help make AGI safe?
Edit: also wanted to note that I don’t work in the field, so if I got anything wrong, please let me know!
namey-name-name OP t1_jcf601z wrote
Reply to comment by Remarkable_Ad9528 in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name