The sophisticated algorithms that Facebook and other digital platforms use to segment consumers are widely touted as benefits for advertisers and users of those platforms. By identifying consumers who are interested in particular ideas, products, or services, companies can ensure that their marketing reaches an interested audience of likely buyers. But the flip side of this capability is less rosy: if the algorithms can specify a target market, they also can enable discriminatory practices by advertisers that seek to exclude certain populations from accessing their products.

According to a warning issued by the U.S. Department of Housing and Urban Development (HUD), Facebook’s segmentation capabilities enable unethical housing providers to prevent their ads from being seen by people whose prior activities on the social media platform suggest they are disabled or members of protected racial or religious minorities. For example, if a user has liked posts by service animal organizations or searched for disability service providers, some rental housing companies might exclude that user to avoid having to provide appropriate access to potential renters with disabilities. Realtors and rental providers also might exclude people who signal their racial or religious identity through their searches.

Such modern versions of redlining—the historical practice by which housing providers have prevented certain protected classes of citizens from moving into an area, using subtle and difficult-to-prove methods of discrimination—raise serious concerns. According to HUD, these forms of discrimination are the responsibility of the digital platforms to address.

But the government warning and its related demands create several problems for Facebook and other platforms. First, the platforms work hard to keep their proprietary algorithms private and protected from the risk of being stolen by competitors. If the government agency ultimately demands that they disclose those practices to determine whether they are discriminatory, the platforms would have to make key intellectual property and sources of competitive advantage public and available to anyone. Second, the platforms note that providing information about how their targeting works also would expose consumers’ data, creating an array of ethical and privacy issues.

In some cases, the discrimination appears less intentional and more a function of how the targeting tactics have developed over time. For example, a study of Facebook advertising showed that advertisements for jobs in the logging industry were delivered overwhelmingly to white men, whereas ads for secretarial jobs were routed mostly toward black women. Even when the test ads featured pictures of people of other races and genders, the automated algorithms continued to display the stereotypical targeting. The simple addition of a picture of a football versus a flower also led to increased gender-biased targeting of the advertising, even though the images had nothing to do with the actual product being advertised. Thus, even if a company is not actively attempting to discriminate against some class of consumers, the digital platforms might be leading it into discriminatory behaviors and limiting any good-faith efforts to seek out and pursue more diverse pools of potential employees or purchasers.

Discussion Questions:

  1. How should digital platforms respond to the HUD caution at this point, before being required to open their algorithms to review?
  2. What options do digital platform users have to avoid biased targeting that limits their access to marketing offers?


Source: John D. McKinnon and Jeff Horwitz, “HUD Action Against Facebook Signals Trouble for Other Platforms,” The Wall Street Journal, March 28, 2019; Paresh Dave, “Facebook’s Ad System Leans on Stereotypes for Housing, Job Ads: Study,” Reuters, April 3, 2019.