Google is great at a lot of things, often due to its superlative technological prowess and creative innovations. But some of those advances can also create unforeseen problems, especially as their use expands. In particular, Google uses an automated advertising function to determine where to place marketers’ messages. On YouTube, for example, the program determines that advertisements for products that appeal to young men should go next to content featuring gun-wielding heroes in video games.

What the system cannot determine, though, is whether the content really features fictional heroes or actual villains. To a computer, a person waving a gun looks like a person waving a gun, whether the image appears in popular entertainment or, more worryingly, in a recruitment video for a terrorist organization. For advertisers, this inability to distinguish is a huge problem; no one wants their brand name on the same screen as a horrific ISIS video.

Not only do such placements risk damage to brands’ reputations, but they also inadvertently fund such groups through automated advertising payments. Advertisers might simply have complained if their marketing appeared next to a racy or NSFW video. But they are threatening to boycott YouTube altogether if Google cannot guarantee that their dollars will not go to terrorists.

For Google, the problem is teaching computers context. That is, the computers must learn whether the context implies appropriate images or not. To do so, Google is leveraging the machine-learning techniques that it developed to assign ratings to videos. With frame-by-frame analyses, the system assesses the images and words, as well as descriptions by video creators. This information provides the context, and Google continues to train the program to get better, such as by feeding incorrect assessments back into the system. For now, though, human reviewers remain constantly available.
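The feedback loop described above — a classifier makes an assessment, a human flags the errors, and the correction is fed back in — can be sketched in miniature. This is a purely illustrative toy (a keyword matcher standing in for Google's actual machine-learning system, whose details are not public): the class name, labels, and seed terms are all hypothetical.

```python
# Toy sketch of a train-by-feedback content classifier.
# NOT Google's system; a keyword matcher stands in for the real model.

SAFE = "safe"
UNSAFE = "unsafe"

class ContextClassifier:
    def __init__(self):
        # Seed vocabulary the model initially associates with unsafe context.
        self.unsafe_terms = {"recruitment", "execution"}

    def assess(self, description: str) -> str:
        """Label a video from its creator-supplied description."""
        words = set(description.lower().split())
        return UNSAFE if words & self.unsafe_terms else SAFE

    def feed_back(self, description: str, correct_label: str) -> None:
        """A human reviewer corrects a wrong assessment; the correction
        is fed back into the system so similar context is caught later."""
        if self.assess(description) != correct_label and correct_label == UNSAFE:
            self.unsafe_terms.update(description.lower().split())

clf = ContextClassifier()
print(clf.assess("propaganda video with gun-wielding militants"))  # missed: "safe"
clf.feed_back("propaganda video with gun-wielding militants", UNSAFE)
print(clf.assess("militants parade through city"))                 # now: "unsafe"
```

The point of the sketch is the loop, not the matcher: each human correction expands what the automated system can recognize, which is why the article notes that human handlers remain on call while the program improves.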

In the meantime, Google has also made it easier for advertisers to select what kind of content should be excluded from their packages. For example, they can insist that their ads never appear next to salacious material. In addition, if an advertiser wants to be next to controversial videos, it must actively opt in to that choice. Google has also banned the use of any hate speech in paid advertising.
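The two controls described here — blanket category exclusions plus an explicit opt-in for controversial content — amount to a simple placement filter. A minimal sketch, assuming a hypothetical category taxonomy (the names `"salacious"`, `"controversial"`, etc. are illustrative, not Google's actual labels):

```python
# Hypothetical sketch of advertiser placement controls:
# excluded categories never match, and "controversial" content
# requires an active opt-in rather than being on by default.

def eligible_placements(videos, excluded_categories, opt_in_controversial=False):
    """Return titles of videos an ad may appear next to under these settings."""
    results = []
    for video in videos:
        category = video["category"]
        if category in excluded_categories:
            continue  # advertiser insisted these never appear next to its ads
        if category == "controversial" and not opt_in_controversial:
            continue  # controversial placements require an explicit opt-in
        results.append(video["title"])
    return results

videos = [
    {"title": "Game review", "category": "gaming"},
    {"title": "Edgy comedy", "category": "controversial"},
    {"title": "Racy clip",   "category": "salacious"},
]

# Default settings: salacious excluded, no controversial opt-in.
print(eligible_placements(videos, {"salacious"}))  # ['Game review']
```

Making controversial placements opt-in rather than opt-out shifts the default toward brand safety, which is the design choice the article credits Google with.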

The problems are not very widespread: of the thousands of advertisements Unilever posts on YouTube, it has found only three that appeared next to questionable content. But unless Google can reassure advertisers that their brands will not be put at risk by content beyond their control, the problem will remain relevant.

Discussion Questions:

  1. What other options might advertisers have to limit the risk of their advertising appearing next to controversial content?
  2. Do you think Google can train machines well enough that they can assess content, similar to the way humans do?

Source: Daisuke Wakabayashi, “Google Training Ad Placement Computers to Be Offended,” The New York Times, April 3, 2017; Jim Kerstetter, “Google’s Ad Issues Expose a Vulnerability,” The New York Times, April 3, 2017.