Spark

How designing better algorithms can help us design better, more just societies

There's been a lot of discussion about algorithmic bias, but the focus has been on bias in historical data. We take a look at why it's so difficult to encode fairness, and why a rising computer science star still believes we can use machine learning for social good.

Computer scientists are looking for ways to add a human element to algorithm design

Computer scientist Rediet Abebe co-founded Mechanism Design for Social Good (MD4SG), an interdisciplinary initiative that uses algorithm design techniques to improve access for underserved communities. (Jennifer Leahy)

Computer algorithms play a role in many parts of our lives, both big and small: from content that shows up on our social media feeds and search engine results, to the quality of our education and our chances of getting a job interview. 

Lately, a kind of reckoning has begun, as people realize these algorithms are not the perfectly rational and impartial alternatives to flawed human reasoning we once thought them to be. Now, computer scientists are looking for ways to fix algorithms by adding the human element back into their design.

Computer scientist Rediet Abebe believes that better algorithm design principles can be used to diagnose some of the deep-seated societal issues embedded in their results.

"A lot of times when we think about algorithmic fairness or algorithmic justice or algorithmic discrimination, we focus on one particular use of an algorithm and we try to improve that," she told Spark's Nora Young. "And I think we lose sight of the broader issue, that there's these issues further upstream."


In 2009, Abebe, then an undergraduate student at Harvard University, worked on a project that investigated the system that matched students in Cambridge, Mass., with public schools. An algorithm assigned students to public schools, giving priority based on proximity and whether the student had siblings already enrolled at the school — though it was supposed to take into account the students' stated top-three school choices, too.

But what Abebe and her team discovered is that the system failed to consider the segregated nature of the city's neighbourhoods. As a result, students from higher-income neighbourhoods were more likely to be assigned to the top public schools in the city, while students from racialized and lower-income households were often matched with schools they didn't want.
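The kind of mechanism the Cambridge study examined can be sketched as a toy priority-based matcher. Everything below is a hypothetical illustration, not the district's actual system: the school and student names are invented, and the priority rule (siblings first, then proximity) is simplified from the article's description.

```python
def assign(students, capacity):
    """Toy 'immediate acceptance' matcher: in each round, every school
    admits that round's applicants in priority order until it is full.
    Priority: having a sibling at the school, then living nearby."""
    seats = dict(capacity)
    assignment = {}
    for rnd in range(max(len(s["choices"]) for s in students)):
        for school in capacity:
            # Unassigned students whose rnd-th choice is this school.
            applicants = [s for s in students
                          if s["name"] not in assignment
                          and rnd < len(s["choices"])
                          and s["choices"][rnd] == school]
            applicants.sort(key=lambda s: (s.get("sibling") == school,
                                           school in s.get("nearby", ())),
                            reverse=True)
            admitted = applicants[:seats[school]]
            for s in admitted:
                assignment[s["name"]] = school
            seats[school] -= len(admitted)
    return assignment

# Both students rank Elm first, but only one lives near it.
students = [
    {"name": "A", "nearby": {"Elm"}, "choices": ["Elm", "Oak"]},
    {"name": "B", "nearby": {"Oak"}, "choices": ["Elm", "Oak"]},
]
assignment = assign(students, {"Elm": 1, "Oak": 1})
```

Because proximity outranks stated preference here, student B is pushed to Oak even though both students listed Elm first. When neighbourhoods are segregated by income, a proximity priority alone can quietly reproduce that segregation in school assignments.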

"In a sense, it's sort of like an instance of algorithmic discrimination. But ... it was also an opportunity for us to think about what was going on in education here in Cambridge," Abebe explained.

WIRED journalist Sidney Fussell identified two main issues with algorithm use in public domains like education: the process of selecting a dataset to "train" the algorithm, and the transparency of this selection process. 

He used the example of this summer's "A-level fiasco" in the U.K. — where an algorithm was used to calculate students' exam marks, and thousands saw their grades drop below their university admissions requirements — to illustrate the two issues. 

Students protested over the U.K. government's handling of A-level results, which involved using an algorithm to work out marks. (Jacob King/PA/The Associated Press)

Fussell said to figure out what went wrong with the grading algorithm, "the whole supply chain" of that algorithm should be examined.

He said the algorithms served as a way to standardize predictions of students' test results based on past performance data: not just their own, but that of other students from their schools. The problem with this approach is that students could end up held back by these data points, unable to access opportunities earned through their own hard work studying for a final exam.
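The general approach Fussell describes can be sketched in a few lines. This is a hypothetical simplification, not Ofqual's actual model: it simply caps each student's teacher-predicted grade at the rank-matched grade from the school's historical results.

```python
def standardize(predicted, school_history):
    """Map this year's i-th strongest student onto the i-th best grade
    the school earned in past years, and never let a student exceed it."""
    ranked = sorted(predicted, key=predicted.get, reverse=True)
    history = sorted(school_history, reverse=True)
    moderated = {}
    for rank, student in enumerate(ranked):
        ceiling = history[min(rank, len(history) - 1)]
        # A strong individual prediction is pulled down to the ceiling
        # set by the school's past cohorts.
        moderated[student] = min(predicted[student], ceiling)
    return moderated

# A student predicted 90 at a school whose best historical grade was 75
# is marked down to 75, regardless of their own work.
moderated = standardize({"Ana": 90, "Ben": 80, "Cal": 70}, [75, 65, 60])
```

The sketch makes the failure mode concrete: no amount of individual achievement can lift a grade above what earlier cohorts at the same school achieved.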

Then there's a wider issue of consent in using algorithms to make decisions in public spaces like education. "With algorithms that are designed to predict things, you end up in situations where the people that are going to be judged have the least amount of say in every other step of the process," Fussell said.

Inclusive algorithm design

One way to achieve transparency is to involve community members in the process of algorithm design — something Rediet Abebe thinks is vital in her work.

It's one of the pillars of Mechanism Design for Social Good (MD4SG), an interdisciplinary initiative co-founded by Abebe that uses algorithm design techniques to improve access for underserved communities.

"We are really interested in building these trust- and respect-based collaborations with other domains, and also really working towards making sure that our little community is as representative of the people that we're working for as possible," Abebe said.  

MD4SG's interdisciplinary team helps avoid treating technological solutions as the only option. "We have to think critically about how we see computer science as being a part of a broader solution," she said.

Fussell said that algorithm use can often signal a lack of investment in other social resources in an area. "We're sort of shortcutting [with] these algorithms, which is leading us to realize in all these different spaces, 'Oh, we don't actually know how a lot of this stuff works. We don't actually know what human to talk to, to hold accountable when things go wrong.'"

WIRED tech journalist Sidney Fussell says we are developing algorithm ethics "in real time." (Submitted by Sidney Fussell )

The alignment problem 

Another challenge Fussell identified in using algorithms to solve social problems is the selection of training data.

Part of the issue is something called an alignment problem, or the discrepancy between what the algorithm is intended to do and what it actually does.

"Machine learning [systems], rather than being explicitly programmed, are trained by examples, in the hope that with enough repetition they get the pattern," author Brian Christian explained. "And the question is: is the pattern that they're getting the pattern that you intend for them to get?"

Christian, who is a visiting scholar at the University of California, Berkeley, discusses this issue in his new book, The Alignment Problem: Machine Learning and Human Values. He said that while aligning the intention of algorithms with their output is a technological problem, ensuring that the design is ethical and fair is "more of a sociological or political problem."
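A minimal way to see the alignment problem (a toy example of my own, not one from the book): two rules can fit the same training examples perfectly while disagreeing about what the pattern actually is.

```python
def intended(n):
    """The pattern the designer meant: 'n is even'."""
    return n % 2 == 0

def learned(n):
    """A rule a learner could just as well extract from the data."""
    return n % 4 == 0

# Training examples chosen with 'is even' in mind; by bad luck, every
# positive example also happens to be a multiple of 4.
training = {4: True, 8: True, 3: False, 5: False}

fits_training = all(learned(n) == label for n, label in training.items())
misaligned = learned(6) != intended(6)  # the rules diverge on unseen input 6
```

Nothing in the training data distinguishes the two rules; the mismatch only surfaces once the system meets inputs it was never shown, which is exactly when decisions about real people are being made.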

Author Brian Christian says there's an 'alignment problem' between what algorithms are intended to do and what they actually do. (©Eileen Meny Photography)

"I think we still have a huge set of questions to deal with that await us on the other side of that, around what is to be aligned with whom." 

Journalist Sidney Fussell believes recent cases of algorithmic discrimination and misuse, like the Cambridge Analytica scandal, have generated enough public pressure to start thinking about an ethical framework.

"I think that we are in real time developing … ethics around algorithms and technology," he said. "We can develop ethical systems for the things we consume. It just takes a while."


Written by Olsy Sorokina. Interviews produced by Nora Young, Adam Killick, Sameer Chhabra and Michelle Parise.

 


