Unitary, a startup that's developing AI to automate content moderation for "harmful content" so that humans don't have to, has picked up £1.35 million in funding. The company is still in development mode but launched a trial of its technology in September.
Led by Rocket Internet's GFC, the seed round also includes backing from Jane VC (the cold-email-friendly firm backing female-led startups), SGH Capital and a number of unnamed angel investors. Unitary had previously raised pre-seed funding from Entrepreneur First, as an alumnus of the company builder programme.
"Every minute, over 500 hours of new video footage are uploaded to the internet, and the volume of disturbing, abusive and violent content that is put online is quite astonishing," Unitary CEO and co-founder Sasha Haco, who previously worked with Stephen Hawking on black holes, tells me. "Currently, the safety of the internet relies on armies of human moderators who have to watch and take down inappropriate material. But humans cannot possibly keep up."
Not only is the volume of content uploaded increasing, but the people employed to moderate the content on platforms like Facebook can suffer greatly. "Repeated exposure to such disturbing footage is leaving many moderators with PTSD," says Haco. "Regulations are responding to this crisis and putting increasing pressure on platforms to deal with harmful content and protect our children from the worst of the internet. But currently, there is no adequate solution."
Which, of course, is where Unitary wants to step in, with a stated mission to "make the internet a safer place" by automatically detecting harmful content. The company says its proprietary AI, which uses "state-of-the-art" computer vision and graph-based techniques, can recognise harmful content at the point of upload, including "interpreting context to tackle even the more nuanced videos," explains Haco.
Meanwhile, although several developer-facing solutions already exist for detecting the more obvious kinds of restricted content, such as explicit nudity or extreme violence (AWS, for example, offers one such API), the Unitary CEO argues that none of them is remotely good enough to "truly displace human involvement".
"These systems fail to understand more subtle behaviours or signs, especially on video," she says. "While current AI can deal well with short video clips, longer videos still require humans in order to understand them. On top of this, it is often the context of the upload that makes all the difference to its meaning, and it is the ability to incorporate contextual understanding that is both extremely challenging and fundamental to moderation. We are tackling each of these core issues in order to achieve a technology that will, even in the near term, massively cut down on the level of human involvement required and one day achieve a much safer internet."