For many working parents, the most important and most difficult purchase choice they face is finding a provider to care for their children. Especially for first-time parents, determining who is trustworthy, responsible, and loving enough to trust with their precious babies is an incredibly daunting task. To address this service challenge, another service has arisen, promising to minimize the uncertainty by leveraging modern technological tools.
The need for this new service exists because of the inherent elements of service provision. That is, parents hiring a caretaker for their children cannot know about the quality of the service in advance, and that quality can change in each service interaction. Such uncertainty may be inherent, but it is unacceptable to worried, nervous parents who want some sort of guarantee that their children will be safe and well cared for.
Predictim promises at least to reduce that uncertainty. It scans job candidates’ social media posts, analyzing them for characteristics such as bad attitudes, disrespectfulness, or likely drug use. Combining all the data, it produces “risk ratings” on five-point scales. Thus, potential employers might learn that a candidate is unlikely to abuse drugs but might be more likely to exhibit disrespect.
The algorithm that Predictim uses is, of course, proprietary. In scanning social media feeds, it likely looks for explicit content, such as obscene language or depictions of risky behaviors. But the methods it uses to measure positive or negative attitudes are harder to pin down, and neither the candidate nor the parent receives any justification or explanation for the scoring provided. That is, a potential babysitter might earn a poor score on ability to work with others, and thus lose out on a job, without knowing what led to that score.
Indeed, many job candidates never even learn their scores. The company only shares the information with its clients, that is, the parents who solicit the reviews. It requires the job candidates to give their permission to search their social media posts, but this option is not really much of a choice. Potential hires who refuse to allow access likely get excluded right away or accused of having something to hide.
This combination leaves the service open to some criticisms. In particular, computer algorithms are still notoriously bad at detecting sarcasm or satire. A user who posts an absurd or silly movie quote thus might be flagged for abusive language, when in reality, the post hints at her or his great sense of humor. Furthermore, many service providers in the childcare sector are young, and these digital natives have spent most of their lives on social media. If they posted something mildly inappropriate as adolescents, should that poor decision define their job prospects as young adults?
Predictim brushes off such complaints, arguing that its algorithms are accurate and insightful. It also asserts that the trade-off of risk and reward swings clearly in the direction of using its site: If families can prevent harm to their children, some minor misclassifications of job candidates are a cost worth bearing. In addition, it cites some future advances it plans, such as integrating personality test questions into its assessments, which it promises will give parents new and deeper insights into the applicants.
But it cannot brush them off completely. Facebook and Twitter recently banned Predictim from their sites (including Facebook-owned Instagram), citing their rules against using the sites for “surveillance purposes, including background checks.” Even faced with this hurdle, though, Predictim vowed to continue its efforts. In particular, it noted that because it does not use automated scraping technologies, it is not in violation of their policies. Thus it has contested its ban and promised that it will keep offering parents the information they seek.
Discussion Questions:
1. If asked, would you open your social media to a service like Predictim and allow a potential employer to review the results?
2. Can, and should, such review services expand into other uncertain service provision markets, like health care? Why or why not?
Source: Drew Harwell, “Wanted: The ‘Perfect Babysitter.’ Must Pass AI Scan for Respect and Attitude,” The Washington Post, November 23, 2018; Dan Patterson, “AI Babysitting Service Predictim Vows to Stay Online After Being Blocked by Facebook and Twitter,” CBS News, November 29, 2018.