Yubo, a video-based streaming and chat platform aimed at Generation Z, has introduced real-time content moderation based on AI analysis of short snippets of audio.
Environments like Yubo that let strangers interact by audio and video have historically been marred by trolls and bigoted users, whether they’re harassing fellow players on gaming platforms or shouting vulgarities on chat services. By automatically detecting offensive words and phrases, such as homophobic and racist comments, Yubo should be able to quickly respond to this kind of behavior, says Chief Operating Officer Marc-Antoine Durand.
“It’s one of the biggest missing pieces of the moderation puzzle,” he says. “There are a lot of use cases that we are able to catch with this technology.”
When the AI system, created in collaboration with cloud-based content moderation company Hive, detects offensive language in the 10-second snippets of video it automatically transcribes, the material is flagged for speedy review by one of Yubo's human moderators. A moderator can then rule out false positives and take appropriate action, whether that means suspending the offending user or, in the case of threats, notifying law enforcement.
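The flow described above — transcribe a short snippet, scan for listed terms, and queue hits for a human decision rather than acting automatically — can be sketched roughly as follows. This is an illustrative mock-up, not Yubo's or Hive's actual system; the term list, function names, and data structures are all assumptions.

```python
# Hypothetical transcribe-then-flag moderation pipeline: flagged snippets go
# to a human review queue instead of triggering automatic enforcement.
from dataclasses import dataclass, field

# Placeholder terms standing in for a real offensive-language lexicon.
FLAGGED_TERMS = {"slur_a", "slur_b"}

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def submit(self, snippet_id: str, transcript: str, hits: set) -> None:
        # A human moderator confirms or dismisses each flag here; false
        # positives (e.g. song lyrics) are dismissed by people, not the model.
        self.items.append({
            "snippet": snippet_id,
            "transcript": transcript,
            "hits": sorted(hits),
        })

def moderate_snippet(snippet_id: str, transcript: str,
                     queue: ReviewQueue) -> bool:
    """Flag a 10-second transcript for review if it contains listed terms."""
    words = set(transcript.lower().split())
    hits = words & FLAGGED_TERMS
    if hits:
        queue.submit(snippet_id, transcript, hits)
        return True
    return False

queue = ReviewQueue()
moderate_snippet("clip-1", "hello everyone welcome to the stream", queue)
moderate_snippet("clip-2", "that was slur_a again", queue)
print(len(queue.items))  # → 1
```

The key design point mirrored here is that detection and enforcement are decoupled: the model only populates a queue, and every consequence for a user passes through human judgment.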
“If you are saying racist stuff or doing bullying, we take action against the user,” Durand says.
Using particular keywords or phrases doesn’t necessarily mean a user is doing something wrong, Durand says, emphasizing that’s where the human moderators come in. False positives are also occasionally triggered by music playing in the background, or users themselves singing, he says.
“The user can use some keywords in a different way with a good intention,” he says.
So far, the automated moderation has been rolled out for English-language content in the United States, United Kingdom, Australia, and Canada, after an initial U.S. rollout in late May. Durand says the company expects to extend the feature to additional countries and languages, including Spanish and French. Users can also report offensive content themselves, and Yubo will use those reports over time to find material the AI system overlooks and to improve the automated moderation.
The mini-transcripts automatically generated by Yubo's software are typically deleted within 24 hours, including those that human review determines to be false positives. Transcripts kept for internal investigations or law enforcement use can be retained for up to a year, according to the company. Durand says the company, which is based in France, is in compliance with Europe's data protection regulations.
The company also recently announced that it has rolled out AI-based facial analysis to roughly verify user ages, enforcing the age separation rules that keep adults and teens apart on the platform.
“Our top priority is to protect our users,” Durand says.