Spark

Your social media data could be used to set your insurance rates

AIs use Instagram too

A computer screen shows a man holding binoculars with the Facebook logo on the lenses. (Glen Carrie/Unsplash)

Back in the day, insurance companies used physical exams, medical tests and questionnaires to determine rates and premiums. But more and more, the information that we post about ourselves online and on social media is being taken into account — and it could have far-reaching impacts on who gets coverage and how.

Earlier this year, regulators for the insurance industry in New York State released a series of new guidelines, telling insurance companies that they would now be allowed to use people's online and social media data to investigate claims and set premiums.

The practice of insurers digging into our digital lives isn't new — insurance companies have long used social media to investigate claims. Last year, a professor in Montreal was denied disability insurance for his depression, then later learned the insurance company had been tracking his social media posts.

What was new in New York State's guidance letter was that it allowed the insurance companies to use online data to set insurance rates.

Using algorithms a cause for concern

To deal with all of this additional information, AI algorithms are being deployed to make connections and decide how risky someone is.

According to Rick Swedloff, a professor of law at Rutgers University, this reliance on algorithms can potentially lead to problems.

"Because these algorithms are so complicated, it's hard or even impossible to figure out why they're making the decisions they're making," Swedloff told Spark host Nora Young.

"I think, secondly, that these algorithms are going to come up with prices that end up being discriminatory against, for instance, people of color in a way that is already prohibited by law."

The use of 'proxies' could lead to discrimination

Rick Swedloff is a professor of law at Rutgers University. (Bob Laramie/Rutgers)
When assessing risk, companies are not allowed to consider certain protected categories, things like race and religion. But they also aren't allowed to consider "obvious proxies" — certain pieces of information that would help reveal that a person belongs to a protected category.

One example of an obvious proxy being used is redlining: dividing a city into zones along demographic lines, so that some neighbourhoods are home to more people of a protected category. Decisions are then made based on neighbourhood rather than on the protected category itself — and those residents are discriminated against indirectly.

The new insurance industry guidelines in New York do stipulate that the collected data can't be used for discriminatory practices. But while companies aren't allowed to use obvious proxies in their decisions, these algorithms might discover other indicators — "non-obvious proxies" — that the average person wouldn't connect to a protected category.

"A non-obvious proxy," Swedloff explained, "might be something like searching for sunset on the internet." While nothing in that search explicitly denotes race or religion, a practising Jewish person might be more likely to search for the time of sunset to know when the Sabbath begins.

One way to prevent this potential for discrimination, Swedloff said, is to have more information about what insurance companies are doing. "Simply by having more information, we would have better data about what is really happening. And I think simply doing that would jumpstart a series of conversations about when and where we should price certain kinds of insurance," he said.

"Part of the problem, of course, is that these algorithms are doing things that we don't know and we don't know why."

In reporting this story, we contacted several experts on insurance law in Canada, but none were able to comment before our deadline. If we hear more, we will update this post.
