Unethical AI unfairly impacts protected classes – and everybody else as well

BEGIN ARTICLE PREVIEW:

There are well-documented examples of AI systems making decisions that affect protected classes in areas such as housing assistance and unemployment benefits. AI is used to screen resumes, and banks apply AI models to decide which consumers get credit and at what interest rates.
Many small decisions, taken together, can have large effects: AI-driven price discrimination, for example, could lead to certain groups in a society consistently paying more. But are there AI applications today that affect everyone, no matter their “class”?
Let’s start with deepfakes:

Deepfakes are videos or other digital representations manipulated or generated by AI to present fabricated images and sounds as real.
Personal reputations, relationships and livelihoods can be destroyed by deepfakes.
Gender discrimination cuts across age, income and education brackets. 

As I mentioned earlier, we are shifting our AI Ethics courses toward more practical, useful techniques. In the process, we found a way to spot deepfakes: Benford’s Law.
Benford’s Law of anomalous numbers, or the first-digit law, is an observation about the frequency distribution of leading digits in many real-life numerical data sets. A picture or video is, at bottom, just a set of numbers. Find the distribution of the leading digits 1-9, and if …

END ARTICLE PREVIEW
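
The preview cuts off before the method is spelled out, but the core idea can be illustrated. Benford’s Law says a leading digit d (1-9) should appear with frequency log10(1 + 1/d). Below is a minimal sketch, assuming (as is common in the image-forensics literature, though not stated in the excerpt) that the numbers tested are the DCT coefficients of a grayscale image and that deviation from Benford’s distribution is measured with a chi-square-style statistic. The function names and the threshold are illustrative only, not the article’s method.

```python
# Sketch: score how closely a set of numbers follows Benford's Law,
# then apply that score to an image's DCT coefficients.
# Assumptions (not from the article): DCT coefficients as the data,
# chi-square distance as the score, illustrative threshold.

import numpy as np
from scipy.fftpack import dct


def leading_digits(values):
    """Leading (most significant) digit, 1-9, of each nonzero value."""
    values = np.abs(values[values != 0])
    exponents = np.floor(np.log10(values))
    return (values / 10.0 ** exponents).astype(int)


def benford_score(values):
    """Chi-square distance between observed leading-digit frequencies
    and Benford's expected distribution P(d) = log10(1 + 1/d)."""
    digits = leading_digits(values)
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    expected = np.log10(1.0 + 1.0 / np.arange(1, 10))
    return float(np.sum((observed - expected) ** 2 / expected))


def looks_suspicious(gray_image, threshold=0.05):
    """Rough flag: a large deviation from Benford's Law in the 2-D DCT
    coefficients *may* indicate synthetic or heavily re-encoded content."""
    coeffs = dct(dct(gray_image.astype(float), axis=0, norm="ortho"),
                 axis=1, norm="ortho")
    return benford_score(coeffs.ravel()) > threshold


if __name__ == "__main__":
    # Random noise standing in for a decoded grayscale frame.
    frame = np.random.rand(256, 256) * 255
    print("Benford score:", benford_score(frame.ravel()))
    print("Flagged:", looks_suspicious(frame))
```

In practice the threshold would have to be calibrated against known-real and known-fake material; a single Benford score is a crude screening signal, not a verdict.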
