Despite the widespread media coverage and controversy surrounding Facebook’s access to and exploitation of users’ data, most people appear to remain unaware of exactly how the social media site gathers and uses their personal information. According to a recent survey of a representative sample of Facebook users, approximately three-quarters of respondents did not know that advertisers could access their personal traits and preferences by requesting information gathered by Facebook.
In actuality, Facebook aggregates information across its various sites, including Instagram and WhatsApp, to develop a profile of every user. The profiles reflect how each user behaves on these sites, including when and how often they visit, what they click on or like, and which information they share. The company also integrates nonbehavioral details, such as where users live, how old they are, and who their family members are. From these data, Facebook creates descriptions of users that indicate not only which products they might be interested in but also what their political leanings likely are and which “multicultural affinities” they exhibit.
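To make the mechanics of such profiling concrete, the following minimal Python sketch shows how behavioral events and static attributes might be combined into an interest profile for ad targeting. It is purely illustrative: the field names, event weights, and scoring rule are assumptions for the example, not Facebook’s actual system.

```python
# Hypothetical sketch: combining behavioral events and nonbehavioral
# attributes into an advertising profile. Field names and the simple
# weighting rule are illustrative assumptions only.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    location: str                      # nonbehavioral detail
    age: int                           # nonbehavioral detail
    interest_scores: Counter = field(default_factory=Counter)

    def record_event(self, action: str, topic: str) -> None:
        """Update interest scores from a behavioral event (click, like, share)."""
        weights = {"click": 1, "like": 2, "share": 3}   # assumed weights
        self.interest_scores[topic] += weights.get(action, 1)

    def top_interests(self, n: int = 3) -> list[str]:
        """Return the topics an advertiser would most likely target."""
        return [topic for topic, _ in self.interest_scores.most_common(n)]

# Example: a user who repeatedly likes dog photos ends up profiled
# as a promising target for pet-related ads.
profile = UserProfile(user_id="u123", location="Chicago", age=34)
for _ in range(5):
    profile.record_event("like", "dogs")
profile.record_event("click", "running shoes")
print(profile.top_interests())   # ['dogs', 'running shoes']
```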
For advertisers, such information is priceless, because it supports improved personalization and targeting. Knowing that a Facebooker consistently likes pictures of dogs tells a pet food seller that ads for treats, balls, and high-quality food are likely to pay off when shown to this particular consumer.
But in a more nefarious sense, the resulting profiles could be (and allegedly have been) used to manipulate people in ways that matter far more. Reports indicate that unethical actors gathered political and racial profiles to target certain groups of citizens with influence tactics designed to discourage them from voting. Even Facebook’s own experiments, in which it sought to determine whether showing more positively or more negatively oriented feeds to people altered their behavior, raise questions about what right it has to define and influence users’ behavior.
A counterargument holds that the type of experiment Facebook conducts, often referred to as an A/B test because it compares two versions of a similar experience (e.g., a more positive versus a more negative news feed), is both legitimate and necessary. According to this argument, marketers run such experiments all the time to determine which product display, pricing strategy, or color combination works best to prompt customers to buy. In this sense, Facebook’s gathering of users’ behavioral information is just one more contributing element of any good marketing strategy.
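For readers unfamiliar with the mechanics, the sketch below illustrates the basic structure of an A/B test: users are randomly assigned to one of two variants, and a behavioral outcome is compared across the groups. The data here are simulated and the engagement rates are assumed values; this shows the general method, not Facebook’s actual experiment.

```python
# Hypothetical A/B test sketch: random assignment to two feed variants
# and a simple comparison of an engagement metric. The data are simulated;
# the assumed rates below are illustrative, not real results.
import random

random.seed(42)

def assign_variant() -> str:
    """Randomly assign a user to variant A (more positive feed)
    or variant B (more negative feed)."""
    return "A" if random.random() < 0.5 else "B"

def simulated_engagement(variant: str) -> int:
    """Simulate whether a user engages; variant A is assumed slightly higher."""
    base_rate = 0.62 if variant == "A" else 0.55   # assumed rates
    return 1 if random.random() < base_rate else 0

results = {"A": [], "B": []}
for _ in range(10_000):
    variant = assign_variant()
    results[variant].append(simulated_engagement(variant))

for variant, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"Variant {variant}: engagement rate = {rate:.3f}")
```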
Such questions become particularly pertinent, though, in light of additional findings from the survey cited earlier. When shown their profiles, about one-quarter of the people surveyed indicated that the description of their political leanings was inaccurate. More than one-third objected that their assigned multicultural affinity was incorrect. Beyond questions of whether Facebook should develop such profiles and how it may use them, the company thus confronts the issue of whether it is even able to establish accurate depictions of users, based on their behaviors on its sites. If it cannot, then what’s the point?
Discussion Questions:
- Outline the ethical concerns associated with (a) user profiles based on Facebook activity, (b) user profiles that include political or cultural preferences, (c) users’ lack of awareness of the existence of these profiles, and (d) inaccurate profiles.
- Are there any other ethical considerations that arise from your reading of this information?
Sources: Sapna Maheshwari, “Facebook Advertising Profiles Are a Mystery to Most Users, Survey Says,” The New York Times, January 16, 2019; Hal Conick, “Are A/B Tests Ethical?” Marketing News, October 31, 2018.