YUBO: “We set limits and make users accountable for their behaviour”

Rude Baguette sat down with Annie Mullins OBE, founder of the Trust + Safety Group and user protection adviser to social media platform Yubo, to talk about Yubo’s bespoke algorithms, child protection policies, and challenges in shielding children from harm on the internet.

Rude Baguette: Tell us a bit about yourself and your role at Yubo.

Annie Mullins: I’m an independent online safety expert who has been involved in safeguarding children for all of my career, as a qualified social worker and child protection expert in the offline world. I came to work in the online industry around 2001, when I worked with Yahoo on child protection issues. Then I worked for Vodafone for 10 years in a global role as they launched the internet on mobile phones.

For the past 6 years I’ve been working independently with social media companies on implementing policies and formulating product safeguards. I have a lot of experience in leading self-regulation for the industry and agreeing good-practice guidelines for social media with the UK government and the EU. So, I’ve been quite a figurehead of leadership in that respect.

I came to Yubo about two and a half years ago, following criticism they’d received in the press about being similar to Tinder. I have been helping them to innovate and deploy a child safety and user protection strategy, so that if things go wrong we can take action immediately to protect users and assist law enforcement quickly. We set about immediately putting the right policies in place to establish community rules on which to take action.

RB: On that point, what concrete changes were you able to implement to address the negative comments you received?

AM: We first set ethical and responsible user guidelines and policies. There are a lot of teenagers live-streaming from their bedrooms, bursting with hormones, keen to experiment and test boundaries with each other. Were we going to allow young people to sit in their underwear, young men to be topless showing off their bodies, or young girls to be topless? This was the first major policy to be put in place: no, that is not acceptable, and we will not have that on the service.

After that, Yubo developed algorithms that can detect people who may be nude or presenting only in their underwear. They intervene live and directly – depending on the gravity of what the algorithm believes it has found, the intervention can sometimes kill the user’s stream outright. If it’s people in their underwear, the moderators will send them a warning to dress appropriately. They have one minute before we close their stream.

While we don’t automatically close their accounts, we give them some breathing space to think about what they’ve done. We want to educate young people, not punish them. You’re helped to think about what happened, why you broke the rules, and the potential consequences of this behaviour, including putting yourself at risk.
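
To make that escalation concrete, here is a minimal illustrative sketch of a tiered warn-then-close policy of the kind described above. The class names, severity levels and `Stream` stub are assumptions for illustration, not Yubo’s actual implementation.

```python
import time

# Hypothetical severity levels a live-image classifier might report.
SEVERITY_UNDERWEAR = 1     # warn first, then allow a grace period
SEVERITY_NUDITY = 2        # gravest finding: close the stream directly
GRACE_PERIOD_SECONDS = 60  # "one minute" to comply with the warning

class Stream:
    """Stub standing in for a real live-stream handle."""
    def __init__(self, user):
        self.user = user
        self.open = True

    def send_warning(self, message):
        print(f"warning to {self.user}: {message}")

    def close(self, reason):
        self.open = False
        print(f"stream of {self.user} closed: {reason}")

class LiveStreamModerator:
    """Tiered intervention: educate first, close the stream only if needed."""
    def __init__(self, stream):
        self.stream = stream
        self.warned_at = None

    def on_detection(self, severity):
        if not self.stream.open:
            return
        if severity >= SEVERITY_NUDITY:
            self.stream.close(reason="nudity policy violation")
        elif severity == SEVERITY_UNDERWEAR:
            now = time.monotonic()
            if self.warned_at is None:
                # The first intervention is a warning, not a punishment.
                self.warned_at = now
                self.stream.send_warning("Please dress appropriately.")
            elif now - self.warned_at > GRACE_PERIOD_SECONDS:
                # The grace period expired without compliance.
                self.stream.close(reason="warning ignored")
        # Note: the user's account itself is never closed automatically here.

moderator = LiveStreamModerator(Stream("user123"))
moderator.on_detection(SEVERITY_UNDERWEAR)  # -> sends a warning
```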

People don’t read terms of service or community rules, so having a live intervention in the middle of chatting and communicating with others is an innovative approach. Yubo has taken a very responsible approach to the fact that many young people are using the service: We set limits and make users accountable for their behaviour.

RB: Is the algorithm that was put in place proprietary or is it linked to Yoti?

AM: For the live-streaming recognition of nudity or young people in their underwear it is proprietary and has nothing to do with Yoti.

But the Yoti solution helps us check user profiles carefully to see if there are fakes. A lot of that is to keep out the under-thirteens, particularly in markets like Scandinavia, where penetration of mobile phones and digital devices starts at a very young age. Yoti age recognition has really played a huge part in trying to keep them off the service, which is great. In the first few months, about 20,000 profiles were removed.

We’ve also been able to age-gate the community: there are the under-eighteens, and the eighteens and over. And if we find a photograph on a profile that doesn’t seem to fit, we use the Yoti verify system, where users have to give their ID credentials through the Yoti system, which is separate from Yubo.
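
As a rough illustration of that age-gating logic (the function names and thresholds below are assumptions, not Yoti’s or Yubo’s actual API), a profile whose photo-based age estimate contradicts its declared age gets escalated to document-based ID verification:

```python
from datetime import date

MINIMUM_AGE = 13  # under-thirteens are removed from the service

def community_for(birthdate, today=None):
    """Route a user into the under-18 or 18-and-over community."""
    today = today or date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if age < MINIMUM_AGE:
        return "remove"  # profile removed from the service
    return "under_18" if age < 18 else "18_plus"

def review_profile(declared_birthdate, estimated_age_from_photo):
    """Escalate to ID verification when the two signals disagree."""
    declared = community_for(declared_birthdate)
    estimated = "under_18" if estimated_age_from_photo < 18 else "18_plus"
    if declared == "remove" or estimated != declared:
        return "verify_id"  # hand off to a separate ID check (e.g. Yoti)
    return declared

# Declared an adult, but the photo-based estimate says mid-teens:
print(review_profile(date(1990, 1, 1), estimated_age_from_photo=15))  # verify_id
```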

RB: Do you think that other social networks could learn from Yubo’s experience?

AM: Certainly! I think they’ll have to look at it. Everybody from governments and regulators to NGOs and advocates knows that digital identity is critical – for the future, for the integrity of services like banking, local services, shopping…

And it’s the same with social media. There’s much more accountability now due to terrorism and other things that are happening in the world. I think there is a lot of pressure for those solutions to be developed. Yubo is a forerunner in that.

RB: Are there ways to further improve the system?

AM: Of course! We want to make sure that users upload real photos of themselves – not their cats or their dogs – so that’s something we’re working on. You can never sit back and think you’ve achieved everything you need to in user safety. You need to keep challenging yourself, your product and what it is you’re doing to protect users. There’s always going to be more to do.

RB: Do you think the educational role these algorithms have played with teenagers has helped them improve their online behaviour in general?

AM: Yes. I think the whole thing is about behaviour. We have teenagers bursting with hormones, and people such as parents, teachers and Yubo have to play a role in setting boundaries and giving them clues about where they may be putting themselves at risk. Like I said – it’s not about punishing but about helping and supporting young people to set their own boundaries and learn as they use social media services such as Yubo.

When we send users a warning, they comply pretty quickly and the community starts telling each other because other users can see when someone gets a warning or their stream shuts down. They start to self-police and set the boundaries themselves, telling each other “don’t do that, you’ll get into trouble”. That’s a really positive indication.

And it really is about behaviour as much as it is about technology: holding people accountable for their behaviour and nudging them in a more positive way to keep themselves safe. Education is critical because they are just young people, still developing and learning, and we need to foster that. Platforms have a lot of responsibility to ensure that the environment young people are operating, chatting, communicating and having fun in is actually positive and as safe as possible.

RB: How would you rate Facebook’s efforts to adapt to these particular challenges?

AM: They’ve made a lot of efforts; it would be untrue to say otherwise. And they’ve got huge scale, which brings a lot of challenges. They’ve set up initiatives around terrorism, and they’re using algorithms and all sorts of processes to identify harmful content and protect users.

What they haven’t done, I think, is be more proactive within their products and services. Facebook has done a lot reactively, such as using hash databases to identify known child abuse images. There’s no doubt they put in a lot of work. But where they are perhaps lacking is on the prevention side: protecting users from making mistakes and coming to harm, whether to themselves or to others.
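
For context, hash-database matching works roughly like the sketch below. This is a simplified stand-in: production systems use perceptual hashes such as Microsoft’s PhotoDNA, which survive resizing and re-encoding, whereas the plain SHA-256 used here only catches byte-identical copies.

```python
import hashlib

# Placeholder entry; a real database holds hashes of known illegal images.
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def screen_upload(data: bytes) -> str:
    """Block an upload whose hash matches the known-content database."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        return "block_and_report"  # matched known illegal content
    return "allow"

print(screen_upload(b"some uploaded image bytes"))  # -> allow
```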

RB: In closing, would you like to add anything?

AM: Safety is part of Yubo’s DNA now. It has to be – the company wouldn’t survive if it were neglectful. People will not accept lots of teens being at risk of hurting themselves or being hurt by others. That’s why it’s so important.