Is AI that screens potential babysitters a good idea?
Experts skeptical about AI that screens potential babysitters for risk
When economist Avi Goldfarb first learned about AI-hiring startup 'Predictim,' he was both intrigued and skeptical.
Started by three MBA graduates from the University of California, Berkeley's Haas School of Business, Predictim claims it can parse through a potential caregiver's social media profiles — Facebook, Instagram and Twitter specifically — to generate a four-part risk assessment report.
According to the company's website, the platform uses "advanced artificial intelligence" to grade caregivers on their "propensity towards Bullying/Harassment, Disrespectfulness/Bad Attitude, Explicit Content, and Drug Abuse."
All Predictim needs is a name, an email address, the consent of someone over the age of 18, and a fee: $24.99 US for one scan, $39.99 US for a two-pack or $49.99 US for a three-pack.
Goldfarb — the Ellison Professor of Marketing and the Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto's Rotman School of Management — told Spark host Nora Young that he'd like to use AI to screen babysitters as a parent, but that he's uncertain about Predictim's measurement metrics.
For him, the issue isn't whether AI can be used to screen babysitters and other caregivers, but whether the technology powering Predictim's platform is properly trained to handle that task.
After all, artificial intelligence needs to be trained on comprehensive datasets before it can function accurately. If it's trained on problematic data — as in the case of the now-scrapped hiring tool used by U.S. e-commerce giant Amazon — then the AI will most likely produce faulty results too.
"So the key challenge is going to be is this data ... the right data to tell us what trustworthy looks like?" said Goldfarb.
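The problem Goldfarb describes can be seen in a toy example. The sketch below uses invented data and a hypothetical "majority label" predictor to show how a model trained on skewed human judgments simply reproduces that skew; it is an illustration of the general pitfall, not a description of Predictim's or Amazon's actual systems.

```python
# Toy sketch of how biased training data produces biased predictions.
# The data, features and labels are invented for illustration only.
from collections import Counter

# Hypothetical labelled examples: (feature, label). The labels encode
# past human judgments, which happen to be skewed: nearly every post
# using slang was marked "risky", regardless of actual behaviour.
training = [
    ("slang", "risky"), ("slang", "risky"), ("slang", "risky"),
    ("formal", "safe"), ("formal", "safe"), ("formal", "risky"),
]

def majority_label(feature):
    """Predict the most common label seen for this feature in training."""
    labels = Counter(lbl for feat, lbl in training if feat == feature)
    return labels.most_common(1)[0][0]

# The "model" faithfully reproduces the skew in its training labels.
print(majority_label("slang"))   # -> risky
print(majority_label("formal"))  # -> safe
```

A real system would use far richer features, but the failure mode is the same: if the labels encode a prejudice, the model learns the prejudice.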
Tracey Lauriault, an assistant professor of critical media and big data at Carleton University's School of Journalism and Communication, told Spark that she's concerned Predictim might prey on people's fears.
"When I went through their website, it was [a] kind of manufacturing of fear," said Lauriault, during a Skype interview.
She added that users "don't know what the science is, and I'm not sure there's any science that predicts the likelihood that your babysitter is going to bully your child."
Lauriault expressed additional concerns about overall privacy, pointing out that users don't know what the company will do with the data it collects.
"What does it mean to start having people under this kind of corporate surveillance, and what does it mean to not only have these people on a kind of corporate surveillance kind of system, but what does it mean to put them on a list?" said Lauriault.
Natural language processing doesn't always come naturally
Daniel Lizotte — an assistant professor in the Department of Computer Science and the Department of Epidemiology & Biostatistics at Western University — was also skeptical about Predictim's effectiveness.
He pointed out that it can be quite difficult for machine-learning algorithms to parse the complex manipulations of language — like sarcasm — that are often part of social media posts.
"I would [wager] that probably what Predictim is doing is just keyword searches based on stuff that they think is indicative of bad behaviour, and then displaying the posts that contain that questionable content," said Lizotte.
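A minimal sketch of the kind of naive keyword matching Lizotte is wagering on makes his sarcasm point concrete. The word list and posts below are invented; Predictim's actual method is not public.

```python
# Illustrative sketch of naive keyword flagging -- an assumption about
# how such a screen might work, NOT Predictim's actual algorithm.
FLAGGED_KEYWORDS = {"fight", "hate", "drunk"}  # hypothetical word list

def flag_posts(posts):
    """Return the posts containing any flagged keyword (case-insensitive)."""
    flagged = []
    for post in posts:
        words = set(post.lower().split())
        if words & FLAGGED_KEYWORDS:
            flagged.append(post)
    return flagged

posts = [
    "Had a great day at the park with the kids",
    "Ugh, I hate Mondays",  # harmless hyperbole still trips the filter
]
print(flag_posts(posts))  # -> ['Ugh, I hate Mondays']
```

The second post is flagged even though no reasonable reader would see it as evidence of bullying — exactly the gap between keyword matching and understanding that Lizotte describes.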
Echoing Goldfarb's concerns, Lizotte speculated about possible bias from Predictim's human handlers that might seep into potential results.
"I would be very concerned that what this would end up doing would be to just reinforce stereotypes about who's a risky hire that really aren't warranted," said Lizotte.
Social media giants weigh in
While experts might be skeptical about Predictim, social media giants Facebook and Twitter have already taken action to limit Predictim's access to their users and their platforms.
Cameron Gordon, the head of communications for Twitter Canada, told Spark that the company conducted an investigation into Predictim, and "revoked their access to Twitter's public [application programming interfaces (API)]."
"Per our guidelines, we strictly prohibit the use of Twitter data and APIs for surveillance purposes, including performing background checks," wrote Gordon, in an email.
According to a Nov. 27, 2018 BBC News report, Facebook is currently investigating Predictim for scraping publicly available user data, and may ban Predictim altogether.
According to a Facebook spokesperson, the company has already revoked most of Predictim's access to the social network's users.
Predictim now has access only to users' names, profile photos and email addresses, and users must consent even to sharing those details.
Additionally, Facebook confirmed that Predictim is currently under investigation for web scraping — a form of data extraction in which a program is able to quickly gather data from websites.
"Scraping people's information on Facebook is against our terms of service," said Facebook, in an email to Spark.
"We are investigating Predictim for violations of our terms, including to see if they are engaging in scraping."
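Web scraping, the practice Facebook is investigating, is straightforward to illustrate. The sketch below uses Python's standard-library HTML parser to pull data out of a page's markup; it is a generic example against made-up HTML, not code targeting any real site.

```python
# Minimal illustration of the extraction half of web scraping:
# parsing fetched HTML and collecting structured data from it.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkExtractor()
# Invented HTML standing in for a page a scraper might have downloaded.
parser.feed('<html><body><a href="/profile/1">Jo</a>'
            '<a href="/profile/2">Sam</a></body></html>')
print(parser.links)  # -> ['/profile/1', '/profile/2']
```

A full scraper would pair this with automated downloading of many pages in quick succession, which is what lets a program "quickly gather data from websites" at a scale terms of service typically prohibit.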
Spark contacted Predictim CEO and co-founder Sal Parsa for comment, but Parsa was not available for an interview before publication.