4 Questions to Ask When Evaluating an AI Prototype for Discrimination


It’s true that progress has been made on US data protection, with the passage of several laws such as the California Consumer Privacy Act (CCPA) and nonbinding documents such as the AI Bill of Rights. However, there are currently no formal guidelines for how tech companies should mitigate AI bias and discrimination.

As a result, many companies are lagging behind in building ethical, privacy-first tools. Nearly 80% of data scientists in the US are male and 66% are white, highlighting the lack of diversity and demographic representation in the development of automated decision-making tools, which often leads to skewed outputs.

Significant improvements in design review processes are needed to ensure that technology companies consider all people when creating and modifying their products. Otherwise, firms risk losing customers to competitors, damaging their reputations, and facing serious lawsuits. According to IBM, 85% of IT professionals believe that consumers prefer companies that are transparent about how their AI algorithms are created, managed, and used. We can expect this number to increase as more users continue to stand up to harmful and biased technology.

So what should companies keep in mind when evaluating their prototypes? Here are four questions development teams should ask themselves:

Have we eliminated all bias in our prototype?

Technology has the potential to revolutionize society as we know it, but if it doesn’t benefit everyone equally, it will ultimately fail.

To build effective, unbiased technology, AI teams should develop a list of questions to ask during the evaluation process, allowing them to identify potential issues in their models.

There are many methods AI teams can use to evaluate their models, but before doing that, it’s important to assess the end goal and whether any groups will be disproportionately affected by the results of the AI’s use.
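As a concrete illustration (a minimal sketch added here, not a method from the article), one of the simplest such checks is to compare a model’s positive-decision rate across demographic groups. The snippet below assumes a hypothetical audit table with `group` and `prediction` columns:

```python
import pandas as pd

# Hypothetical audit data: one row per individual, with the model's
# binary decision (1 = approved) and a demographic group label.
audit = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Positive-decision (selection) rate within each group.
rates = audit.groupby("group")["prediction"].mean()
print(rates)  # group A: 0.75, group B: 0.25

# Demographic parity gap: the spread between the most- and
# least-favored groups. A gap near 0 suggests parity; a large
# gap is a signal to investigate further, not proof of bias.
gap = rates.max() - rates.min()
print(f"demographic parity gap: {gap:.2f}")  # 0.50
```

A gap this large would not settle the question on its own, but it tells a team where to look before the prototype ships.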

For example, AI teams need to consider that the use of facial recognition technologies can inadvertently discriminate against people of color – something that happens all too often in AI algorithms. A 2018 study by the American Civil Liberties Union found that Amazon’s facial recognition software incorrectly matched 28 members of the US Congress with mugshots. Forty percent of those false matches were people of color, even though they make up only 20% of Congress.
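One lightweight sanity check this statistic suggests (a sketch added here, not the ACLU’s methodology) is comparing a group’s share of a system’s errors to its share of the underlying population:

```python
# Shares taken from the ACLU figures cited above: people of color were
# 40% of the false matches but only about 20% of Congress.
share_of_errors = 0.40
share_of_population = 0.20

# A ratio of 1.0 means errors fall proportionately across groups;
# anything well above 1.0 means the group is overrepresented
# among the system's mistakes.
disparity_ratio = share_of_errors / share_of_population
print(disparity_ratio)  # 2.0: twice the error share their numbers would predict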

By asking challenging questions, AI teams can find new ways to improve their models and prevent these situations from occurring. For example, a closer inspection can help them determine whether they need to examine additional data or whether they need a third party, such as a privacy expert, to review their product.

Plot4AI, a library of threat-modeling questions for privacy and AI, is a great resource for those just getting started.
