Ishtiaque Ahmed is turning to the power of AI to help communities tackle online hate.
His research project, “Making the Internet Safer through Community-Powered Artificial Intelligence,” is the recipient of a 2023–2024 award from the Connaught Community Partnership Research Program.
The Connaught Community Partnership Research Program encourages collaborative research partnerships in which researchers and community partners gain access to each other’s unique knowledge, expertise and capabilities on issues of shared interest.
In the proposed project, Ahmed will co-design, develop, deploy and evaluate a community-powered AI system to improve upon existing content moderation processes on social media and online discussion forums.
Ahmed, who is an assistant professor in U of T’s Department of Computer Science and a faculty affiliate of the Schwartz Reisman Institute for Technology and Society, will use the award to support partnerships with the Chinese-Canadian National Council for Social Justice and the Foundation for a Path Forward (FFPF), two non-profit organizations, to better address online hate speech aimed at Chinese and Muslim communities in Canada.
He will be working alongside two U of T faculty members: Shohini Bhattasali, an assistant professor in the Department of Language Studies at U of T Scarborough, and Shion Guha, an assistant professor in the Faculty of Information who is cross-appointed to the Department of Computer Science.
The researchers will collect and analyze online posts that target Muslim and Chinese communities on Facebook, Twitter and Reddit, and interview community members to understand their experiences and perspectives on online hatred. They will use the gathered input to label posts as harmful or not harmful and create two open datasets for research and awareness.
Leveraging a deep-learning model, their proposed tool will take the form of keyboard software offering users an interface similar to the AI-based writing assistant Grammarly. As a user writes in their web browser, it will highlight sentences that may contain misinformation or hate speech and provide users with context that explains why the text as written could be considered harmful or offensive.
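To make that flow concrete, the sketch below shows one way such a Grammarly-style checker could be structured. It is purely illustrative and not the team’s implementation: the stub classifier, the trigger phrase, and the canned explanation are hypothetical placeholders standing in for the deep-learning model and the community-sourced context described above.

```python
# Minimal, hypothetical sketch of the highlighting flow described above.
# The classifier is a stub standing in for the team's deep-learning model;
# the explanation lookup stands in for community-provided context.

import re
from dataclasses import dataclass


@dataclass
class Flag:
    sentence: str
    label: str          # e.g. "hate_speech" or "misinformation"
    explanation: str    # context shown to the writer


def classify(sentence: str) -> str | None:
    """Placeholder: a real system would call a fine-tuned model here."""
    if "hateful-term" in sentence.lower():   # hypothetical trigger for the sketch
        return "hate_speech"
    return None


# Placeholder explanations; in the project these come from interviews and
# labeling work with the partner communities.
EXPLANATIONS = {
    "hate_speech": "Community members describe this phrasing as targeting them; consider rewording.",
}


def review_draft(text: str) -> list[Flag]:
    """Split a draft post into sentences and flag ones that may be harmful."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    flags = []
    for sentence in sentences:
        label = classify(sentence)
        if label is not None:
            flags.append(Flag(sentence, label, EXPLANATIONS[label]))
    return flags


if __name__ == "__main__":
    for flag in review_draft("This is fine. This sentence uses a hateful-term."):
        print(f"[{flag.label}] {flag.sentence}\n  -> {flag.explanation}")
```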
In the case of debated issues, instead of taking a specific side, the software tool will show all possible explanations. Additionally, users will be able to contest a flag by providing feedback to a team of human moderators recruited from within the respective communities. The moderators will review these challenges on a separate interface in a web portal, where they can deliver a final verdict based on their community values. If the user’s contestation is valid, the tool will present it as a new interpretation of the text to the next user who writes the same or similar content.
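The contestation loop could be organized along similar lines. The sketch below is again only an illustration under stated assumptions: the class names, fields, and exact-match lookup for “the same or similar content” are placeholders, and a real system would presumably use text similarity and a persistent store rather than in-memory lists.

```python
# Hypothetical sketch of the contestation loop: a writer challenges a flag,
# a moderator from the affected community records a verdict, and accepted
# challenges become new interpretations shown for similar text.

from dataclasses import dataclass, field


@dataclass
class Contestation:
    text: str                          # the flagged sentence being challenged
    writer_feedback: str               # why the writer thinks the flag is wrong
    verdict: str | None = None         # set by a community moderator
    moderator_note: str | None = None  # interpretation shown to future writers


@dataclass
class ModerationPortal:
    queue: list[Contestation] = field(default_factory=list)
    accepted: list[Contestation] = field(default_factory=list)

    def submit(self, contestation: Contestation) -> None:
        """Writer side: send a challenge to the community moderators."""
        self.queue.append(contestation)

    def decide(self, contestation: Contestation, verdict: str, note: str) -> None:
        """Moderator side: record the final verdict based on community values."""
        contestation.verdict = verdict
        contestation.moderator_note = note
        self.queue.remove(contestation)
        if verdict == "accepted":
            self.accepted.append(contestation)

    def interpretations_for(self, text: str) -> list[str]:
        """Extra interpretations shown to the next writer of the same text."""
        return [c.moderator_note for c in self.accepted
                if c.text == text and c.moderator_note]
```

In this sketch, the moderator-facing methods would sit behind the web portal the article mentions, while the writer-facing keyboard would only query the accepted interpretations when it flags a sentence.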
This project will be carried out over a two-year timeline, with software deployment scheduled for next summer. From there, in the next academic year, the team will collect data and evaluate both the tool’s technical functionality and how well it helps users within and beyond these communities.
We spoke with Assistant Professor Ishtiaque Ahmed for details on the project and his hopes for how this approach can make the internet safer for marginalized communities.
What sparked your interest in researching how AI could help tackle harmful social media content?
The reason why we need AI is mostly because of the scale of the online posts. Right now, if a post is reported, human moderators check whether it is hateful and then make the decisions, but this process doesn’t work that well for the thousands and thousands of posts being reported every single day. Especially when there is a conflict somewhere, you will find a lot of hate speech coming up. Also, the knowledge of a human moderator is often bounded by their background, and they cannot make good judgments about posts coming from a different context. An intelligent system can help the human moderators do this more accurately and quickly. In fact, in many cases, human moderators are already being helped by various AI tools. However, those AI tools are not trained with data from historically underrepresented groups. Hence, the results are often biased against those groups. This is why I felt the need to develop a responsible, fair, and accountable AI system to support the social media moderation system.
How did you stumble upon this specific area of research?
The main focus of my research is to support marginalized communities with the help of computing. One problem that I was facing while doing this research is that AI algorithms, or computing in general, often operate on the logic of “modern” scientific data, and not on cultural knowledge from these communities. A lot of my work is in Global South countries with different cultural norms, and there is a clear conflict between what they consider believable and what AI systems qualify as valid data. Their judgment and rationality are not being implemented in AI technologies, including the automated moderation systems. As a result, the existing tools are flagging posts from those communities that are based on their traditional faith, religion, myth, and folklore. So, I felt we need to train an AI algorithm with something that is aligned with the community’s cultural values. In coming up with this model, we thought about a community-powered AI system, where community values are taught to the AI system so that it can train itself to help moderators work better.
What is the goal of this project?
We are trying to build a software keyboard, similar to Grammarly. But instead of highlighting sentences that are grammatically wrong, we are highlighting sentences that may contain misinformation or hate speech. Now, we don’t stop at the highlighting; we also need to tell people why it is problematic. These explanations are not always readily available on the internet. So oftentimes, we need to work with these communities to find out what they find offensive, so that we can tell people, “If you write this, it may hurt these communities in this way, maybe reframe it this way” or “you may want to read this and give it a second thought before posting.”
Right now, we are focusing on Islamophobia and Sinophobia, but if it works well, we’re happy to work with other communities and expand this tool for others. It’s a free tool that we’re building called ‘Compassionately,’ and it will work as an add-on to a browser or a mobile phone application that helps you refine your text before you post it. Grammarly fixes your grammar and Compassionately tells you how you can be more compassionate.
Why is a partnership approach best suited for this work?
I have been working with these two non-profit organizations for more than three years now. They have a long history of handling hateful activities in real life. With social media and the internet, hate can scale and spread more easily. So, we understand their need for this automated tool. We also need them to tell us what is right or wrong in order for us to build this kind of community-powered AI. So, in that way, we believe it is helping both of us.
Our project is not purely technical. A lot of our work is very ‘social sciencey’: qualitative research where we collaborate closely with the community, hear from them, and take their history and their memories, bringing that in as knowledge we can feed to our AI system.
Why is it important in this project to involve historically marginalized groups in content moderation on social media?
Historically marginalized communities are very poorly represented in the computing world. Many of them do not use computers or the internet, so their data is not available. AI algorithms can only learn from information that is available digitally. So, if the algorithms don’t know what could offend a particular community at the language level, they cannot take action based on that. If there is no one to report it, and the AI algorithm doesn’t understand whether it’s offensive or not, the internet becomes a toxic and unsafe space for them. So, our broader goal is to make the internet safer for everyone. It’s also aligned with a lot of the ongoing work to make AI technologies more ethical. I think our work on this project will contribute to that bigger initiative of ethical AI by making it more participatory and bringing in the voices of marginalized people who weren’t represented there before.
How do you feel about winning a Connaught Community Partnership Research Program award?
I’m super happy and excited, mostly because we have been working on this particular problem for a while now. This is a problem we found while working with the communities in the field. It came from the community, and then we realized that there is a fundamental problem in the way we do machine learning that should be addressed, and we cannot address it without the community’s help. It’s definitely exciting news for me and my team and the community partners we’re going to work with.
This project wouldn’t be possible without the help of our different community partners, my graduate students and researchers in my Third Space research group. They are awesome. I would also like to mention the contribution of my U of T colleagues, Profs. Shion Guha and Shohini Bhattasali, and my external collaborators, Prof. Rumee Ahmed (UBC), Sarah Masud Preum (Dartmouth), and Daphne Ippolito (UPenn).
What does receiving this award mean to you?
Receiving this award means a lot. I see its impact in two ways. First, it will help me enormously to advance my research program and to strengthen my relationships with the community partners here in Canada. But I’m also excited about the real-life impact of this work, because we are going to help these communities handle online hate speech. To be able to make such an impact is most rewarding for me.
This interview has been edited for clarity and length.