A key promise of algorithms, particularly those supported by artificial intelligence, is that they take some of the guesswork and subjectivity out of decision making. They apply the same rules every time, without emotion, so the outcomes they produce arguably should be fairer, more accurate, or more relevant. But those claims ignore a basic reality of even the most advanced applications available today: the algorithms are still written by humans, whose inherent, unconscious biases can influence every line of code they write.

In this sense, algorithms themselves can be inherently biased, but identifying, confirming, and addressing that bias is particularly difficult. A recently reported example makes the difficulty clear: A married couple, with the same shared household assets and debts, each applied for an Apple Card when the new product was introduced. Although the wife had earned a higher credit rating over time, the credit limit that the bank's algorithm offered her was less than one-twentieth of the limit granted her husband, despite his somewhat lower credit score. According to their accounts of the situation on Twitter, when they called Apple and Goldman Sachs, the bank that underwrites the credit accounts, customer service representatives repeatedly claimed that because the algorithm assigned the credit limit, there was nothing they could do.

After the couple's posts went viral, an Apple spokesperson released an assurance that credit decisions were not based on gender, race, or other such categorizations, which would be illegal. Government agencies nevertheless have announced that they are investigating the matter. In addition, other consumers shared similar experiences, including Apple co-founder Steve Wozniak, who reported receiving a credit limit 10 times higher than his wife's.

These incidents may seem comparatively minor; the applicants were not refused credit, nor were these wealthy consumers at risk of lasting damage to their buying power or creditworthiness. But their shared experiences offer evidence that such biases are widespread and therefore influence a range of marketing offers. Even if the discrimination created by algorithmic biases is unintentional, it can mean that certain segments of consumers are prevented from accessing appealing purchasing and employment options. If, for example, an algorithm determines that it should not show real estate ads to a potential buyer because of that person's individual characteristics, it represents a form of discrimination that is both unethical and contrary to societal norms.
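One way to see how such unintentional bias can arise, even when a lender truthfully excludes gender from its model, is through proxy variables. The short Python sketch below is purely illustrative; the simulated data, the solo_credit_years feature, and the scoring rule are all hypothetical assumptions for this example, not the actual method used by Apple or Goldman Sachs. It shows a rule that never sees gender yet still produces gender-correlated credit limits, because one of its inputs is itself correlated with gender in the underlying data.

    # Illustrative sketch of "proxy discrimination" (all data and
    # feature names here are hypothetical, not any real lender's model).
    import random

    random.seed(42)

    # Simulate applicants. Assume, for illustration only, that years of
    # individually held credit history skew by gender in historical data.
    applicants = []
    for _ in range(10_000):
        gender = random.choice(["F", "M"])
        solo_credit_years = max(0.0, random.gauss(12 if gender == "M" else 7, 3))
        applicants.append({"gender": gender, "solo_credit_years": solo_credit_years})

    def credit_limit(applicant):
        # The scoring rule uses ONLY the proxy feature; gender is never an input.
        return 1_000 + 2_000 * applicant["solo_credit_years"]

    # Average assigned limit by gender: roughly 15,000 for F versus
    # 25,000 for M, despite a "gender-blind" rule.
    for g in ("F", "M"):
        limits = [credit_limit(a) for a in applicants if a["gender"] == g]
        print(g, round(sum(limits) / len(limits)))

This is why an assurance that an algorithm does not use gender as an input cannot, on its own, settle whether its outcomes discriminate by gender.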

Furthermore, unless they happen to be married to someone receiving a parallel offer at the same time, consumers are unlikely to know whether they have been victims of such bias. Although it may be impossible to eliminate all such biases, the goal needs to be to recognize that they exist, put policies in place to minimize and remediate them, and continue seeking solutions to reduce their effects overall.

Discussion Questions

  1. Why did the women in these examples receive lower credit limits, despite their excellent credit ratings? That is, what kind of bias is at play, and why does it exist?
  2. Are biases ever acceptable, such as when targeting different customers using different prices for the same product, in an attempt to segment the market?

Source: Neil Vigdor, “Apple Card Investigated After Gender Discrimination Claims,” The New York Times, November 11, 2019; Jaime Condliffe, “The Week in Tech: Algorithmic Bias Is Bad. Uncovering It Is Good,” The New York Times, November 15, 2019.