Exactly! The issue is not that people think AI models are deliberately biased, it's that they inherently are when humans choose the code and data behind them. As stated in the article, the model will only be as good as the data you feed it, so if the data is biased (for example, resume samples from only white men in a certain state), the model will be biased. This law will force companies wanting to use automated hiring tools to audit them first and ensure bias is eliminated at the model-creation point.
So from what I understand, the models can be biased if they're created by humans with particular biases. It's hard to measure exactly how this happens, which is why, when this law comes in, companies using automated systems will have to have them audited by independent organizations. The goal, of course, is for the models to be as unbiased as possible, but what happens today (in some cases, not all) is that the AI model will have inherent biases against certain profiles.
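To make the "biased data in, biased model out" point concrete, here's a minimal sketch with entirely made-up data: a naive screening "model" that scores candidates by how closely they resemble past hires. Because the historical sample is skewed (all hires from one state), the model penalizes an otherwise identical candidate from elsewhere. The data and scoring rule are hypothetical illustrations, not anything from the article or the law.

```python
# Toy illustration (hypothetical data): a naive screening "model" that
# scores candidates by similarity to past hires. If the historical hires
# all share one attribute, the model inherits that skew.
from collections import Counter

# Deliberately skewed historical sample -- every past hire is from NY.
past_hires = [
    {"state": "NY", "degree": "BS"},
    {"state": "NY", "degree": "MS"},
    {"state": "NY", "degree": "BS"},
]

def score(candidate, history):
    """Fraction of the candidate's attribute values seen among past hires."""
    counts = Counter((k, v) for h in history for k, v in h.items())
    return sum(counts[(k, v)] for k, v in candidate.items()) / (
        len(history) * len(candidate)
    )

a = score({"state": "NY", "degree": "BS"}, past_hires)
b = score({"state": "CA", "degree": "BS"}, past_hires)  # same degree, other state
print(a, b)  # the CA candidate scores lower purely because of state
```

Nothing in the scoring rule mentions state explicitly; the disparity comes entirely from the skewed history, which is exactly why this kind of bias is hard to spot without an audit.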
Yes definitely - that's part of what's causing the delay. Companies will need to explain what goes into their algorithms, what decisions the model makes, what its accuracy rate is, who built it, and so on. It seems complicated, but I guess that's because it's the first law of its kind.
Now scheduled to come into force in April, Local Law 144 of 2021 would bar companies from using any “automated employment decision tools” unless they pass an audit for bias. Any machine learning, statistical modeling, data analytics, or artificial intelligence-based software used to evaluate employees or potential recruits would have to undergo the scrutiny of an impartial, independent auditor to make sure it isn't producing discriminatory results.
The results of the audit would then have to be made publicly available on the company’s website. What’s more, the law mandates that both job candidates and employees be notified that the tool will be used to assess them, and gives them the right to request an alternative option.
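For a sense of what such an audit might report, here's a sketch of one common disparate-impact check: comparing each group's selection rate to the highest group's rate (the "four-fifths rule" is a widely used benchmark from US employment guidance, not necessarily the exact test the law prescribes). All the numbers and group names below are invented for illustration.

```python
# Sketch of a disparate-impact check an auditor might run.
# Hypothetical screening outcomes: (selected, total applicants) per group.
outcomes = {"group_a": (50, 100), "group_b": (25, 100)}

# Selection rate for each group.
rates = {g: selected / total for g, (selected, total) in outcomes.items()}

# Impact ratio: each group's rate relative to the best-performing group's.
best = max(rates.values())
ratios = {g: rate / best for g, rate in rates.items()}

for g, ratio in ratios.items():
    flag = "OK" if ratio >= 0.8 else "possible adverse impact"
    print(g, round(ratio, 2), flag)
```

Here group_b's impact ratio falls below the 0.8 benchmark, which is the kind of finding an independent auditor would flag before the tool could be used.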
Employers may set the minimum and maximum of a posted salary range at 75% and 125% of its midpoint. If you feel fully capable of taking on the advertised role and hitting the ground running, you should calculate the midpoint figure and ask for that, or slightly more.
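The midpoint arithmetic above is just the average of the posted minimum and maximum. A quick worked example with hypothetical figures:

```python
# If a posted range runs 75%-125% of the midpoint, the midpoint is the
# simple average of the posted min and max. Numbers here are hypothetical.
def midpoint(low, high):
    return (low + high) / 2

low, high = 75_000, 125_000  # posted salary range
mid = midpoint(low, high)
print(mid)  # 100000.0

# Sanity check: the posted bounds are 75% and 125% of this midpoint.
assert low == 0.75 * mid and high == 1.25 * mid
```

So for a $75k-$125k posting, you'd anchor your ask at $100k or slightly above.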
Reply to comment by Titan_Astraeus in NYC's AI bias law is delayed until April 2023, but when it comes into effect, NYC will be the first jurisdiction mandating an AI bias order in the world, revolutionizing the use of AI tools in recruiting by Background-Net-4715