
SafeKidsPro uses artificial intelligence to identify and alert parents to potential cyberbullying on social media

As far as misbehaviours go, bullying is a classic. As Richard Donegan (2012) notes, “[b]red from a capitalistic economy and competitive social hierarchy, bullying has remained a relevant issue through the years”, with technology only scaling the problem. Bullies are now using social media platforms like Facebook, Twitter and Instagram to abuse, intimidate and aggressively dominate others. The same goes for sexual predators. Although there are commercial solutions available that warn parents of potential threats to their children’s safety, they fall short in many ways. Specifically, existing solutions like NetNanny and MinorMonitor rely on the matching of key words and phrases and end up reporting on innocent activity or being unnecessarily disruptive.

SafeKidsPro’s algorithm, on the other hand, emulates brain function and identifies the potential intent of someone sending multiple instances of concerning content over a set time period, while also taking into consideration the severity of the language used and how someone may feel as the recipient of the information. SafeKidsPro doesn’t just alert parents of instances where their children are the victims or perpetrators of cyberbullying, but also of potential predatory interactions such as grooming, stalking and sextortion that take place on their social media accounts.

SafeKidsPro is a product of Sydney-based software development company KevTech Apps, founded by Anil Chatterjee (CEO) and David Meldrum (CTO). Chatterjee said their solution, which was launched in April this year after three and a half years of development, “enables responsible parenting in the digital age.”

“Most parents wouldn’t dream of watching their kids walk out of their front door at any hour of the day to go and play with complete strangers. However, they do the equivalent of this by allowing their kids to roam unsupervised online,” Chatterjee said. “SafeKidsPro allows parents to watch out for their kids on social media, today’s virtual playground, as they would in the physical world.”

Chatterjee said prior to embarking on this venture, he was shocked at some of the content a younger relative had posted on Facebook. “It concerned me that they could damage their reputation.”

A couple of days later, Meldrum, whose background includes leading the development of commercially successful detection software for PC Tools (later acquired by Symantec), suggested that they create technology to detect potentially damaging content, as well as other inappropriate, antisocial and potentially risky behaviours on social media.

“That was the spark moment that led us to investigate the problem further,” said Chatterjee.

“The more we dug into the issues that exist, the more alarmed and saddened we became by the negative impact of bad social media behaviours, conscious of the lives they can damage or even destroy in the short and long term. It struck us that while the problems are extremely well known, there was no obvious way of protecting kids from them. It was then that we resolved to develop a solution to plug this gap.”

In 2011/12, KevTech commissioned a joint research thesis with the University of Queensland; the objective was to determine if and how artificial intelligence could analyse messages in digital contexts and detect cyberbullying on social networks. The research was then used to improve the efficacy of existing solutions.

For the uninitiated, the way SafeKidsPro works is technologically complex. But to simplify the process as much as possible: data is collected via SafeKidsPro’s input collectors (such as the real-time APIs of Facebook, Instagram and Twitter), then pushed through an event processor to determine its ‘type’ – that is, whether it is a comment, a direct message, an image with a comment attached, a location-sharing event, a new ‘friend’ invite, and so forth. Comments and messages are then passed through pre-processing engines to strip out ‘junk’ data – removing words like ‘is’, ‘a’ and ‘of’, as well as URLs; converting slang to plain English; correcting misspelled words; and adding identifiers to emoticons (e.g. 🙂 means ‘happy’). Once stripped, the comments and messages are passed through pattern detection engines to identify credit card usage, user-defined keywords and phishing.
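To give a rough, purely illustrative idea of what that pre-processing step involves – the word lists and function names below are placeholders, not KevTech’s actual engines – a minimal sketch in Python might look like this:

```python
import re

# Illustrative pre-processing only: strip URLs and 'junk' words, expand
# slang, and tag emoticons. The word lists are placeholders, not the
# proprietary SafeKidsPro engines described in the article.
STOP_WORDS = {"is", "a", "of", "the", "to"}
SLANG = {"gr8": "great", "u": "you", "l8r": "later"}
EMOTICONS = {":)": "<happy>", ":(": "<sad>"}
URL_PATTERN = re.compile(r"https?://\S+")

def preprocess(message: str) -> list[str]:
    """Return a cleaned list of tokens ready for pattern detection."""
    text = URL_PATTERN.sub("", message.lower())   # drop URLs
    tokens = []
    for token in text.split():
        token = EMOTICONS.get(token, token)       # ':)' -> '<happy>'
        token = SLANG.get(token, token)           # 'gr8' -> 'great'
        if token in STOP_WORDS:                   # remove 'junk' words
            continue
        tokens.append(token)
    return tokens

print(preprocess("u are gr8 :) see https://example.com"))
# -> ['you', 'are', 'great', '<happy>', 'see']
```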

The comments and messages are also passed through SafeKidsPro’s contextual detection engines to identify those which contain ‘bad content’ – that is, content that is threatening, sexual, offensive, derogatory or contains profanities.
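Again as a hedged illustration rather than SafeKidsPro’s real engine, contextual categorisation of this kind could be approximated with a small keyword lexicon per category:

```python
# Toy categorisation with an assumed keyword lexicon per category; the real
# contextual detection engines are proprietary and far richer than this.
CATEGORY_LEXICON = {
    "threatening": {"hurt", "beat", "kill"},
    "derogatory": {"loser", "idiot"},
    "profanity": {"damn"},
}

def detect_categories(tokens: list[str]) -> set[str]:
    """Return the 'bad content' categories a cleaned message falls into."""
    return {category for category, words in CATEGORY_LEXICON.items()
            if any(token in words for token in tokens)}

print(detect_categories(["you", "loser", "i", "will", "hurt", "you"]))
# -> {'threatening', 'derogatory'} (set order may vary)
```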

The ‘bad content’ is then passed through SafeKidsPro’s artificial intelligence detection technology. One of KevTech’s contributors, Dr Adrian Colimitchi, designed an algorithm which emulates brain function and identifies the potential intent of someone sending ‘bad content’ multiple times over a set period. The algorithm also takes into consideration how frequently that content is shared and the severity of the language used, as well as how someone would feel as the recipient of this information.
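A simplified sketch of that idea – with assumed names, weights and a threshold that are not KevTech’s – might score a sender’s recent flagged messages over a sliding time window like this:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed data model and scoring; not KevTech's actual algorithm.
@dataclass
class FlaggedMessage:
    sender: str
    timestamp: datetime
    severity: float                  # 0.0 (mild) to 1.0 (extreme)

def intent_score(messages: list[FlaggedMessage], sender: str,
                 window: timedelta = timedelta(days=7)) -> float:
    """Score a sender's flagged messages inside a sliding window:
    repeated, severe content from the same person scores highest."""
    cutoff = datetime.now() - window
    recent = [m for m in messages
              if m.sender == sender and m.timestamp >= cutoff]
    if not recent:
        return 0.0
    average_severity = sum(m.severity for m in recent) / len(recent)
    return len(recent) * average_severity

ALERT_THRESHOLD = 2.0                # assumed cut-off for raising an alert
```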

Image: SafeKidsPro Dashboard
Image: SafeKidsPro Desktop

Through this process, SafeKidsPro is able to identify potential cyberbullying (whether the child is the victim or the perpetrator) and predatory interactions. The data is turned into events, and once events have been classified, alerts are raised on the user’s dashboard and email alerts are sent to parents. From the dashboard, the parent can view the content or event that triggered the alert, when it happened and who was involved. For events which contain potential risks, SafeKidsPro provides links to pages where parents can receive further information and advice.
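As a final illustrative sketch – the event fields and email helper below are assumptions, not SafeKidsPro’s internals – the step from classified event to dashboard entry and parent email could look like this:

```python
from dataclasses import dataclass
from datetime import datetime

# Assumed event fields and a stand-in email helper, for illustration only.
@dataclass
class Event:
    child: str
    other_party: str
    kind: str                        # e.g. "cyberbullying" or "grooming"
    occurred_at: datetime
    advice_url: str = ""

dashboard: list[Event] = []          # what the parent sees on logging in

def send_email(to: str, subject: str, body: str) -> None:
    print(f"EMAIL to {to}: {subject}\n{body}")   # placeholder mail transport

def raise_alert(event: Event, parent_email: str) -> None:
    """Record a classified risk event on the dashboard and email the parent."""
    dashboard.append(event)
    send_email(
        parent_email,
        f"SafeKidsPro alert: possible {event.kind}",
        f"Involving {event.child} and {event.other_party} at "
        f"{event.occurred_at:%d %b %Y %H:%M}. "
        f"Further advice: {event.advice_url or 'see your dashboard'}",
    )
```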

From one angle, SafeKidsPro appears to be ‘moral panic’-induced innovation. Children have historically been seen as innocent, lacking agency, less capable of making informed decisions, and vulnerable to exploitation and abuse. As more of life moves into the digital world, it has become increasingly difficult for parents to keep tabs on their children – specifically, their online behaviour and the content they are being exposed to or choosing to access. It doesn’t help that many children, especially adolescents, don’t want their parents looking over their shoulders as they participate online (or in the physical world, for that matter).

Chatterjee admitted that many parents using SafeKidsPro have been surprised by the number of strangers their children are connected to online, and the number of friend/connection requests their children receive from unknown people.

“For too long children have roamed unsupervised on social media without having to adhere to what is deemed as acceptable by their parents, their neighbours or their community at large. Commonly, children’s behaviour is informed by their peers and, in some cases, by those posing as their peers,” said Chatterjee.

“This has seen children acting in a way they would never dream of in the real world whether that is simply using foul or offensive language, taking overtly sexual selfies and sending them to parties known and unknown, or even being engaged in cyberbullying.”

While SafeKidsPro can certainly help parents intervene when their children are in danger, these technologies should not be seen as an alternative to educating children about online safety. As Kerry H. Robinson writes in her book Innocence, Knowledge and the Construction of Childhood, “Denying children access to knowledge – and to frank and open discussions around the questions they have in relation to their bodies, sexuality and relationships – leave children ‘to sort out their scripts with peers, media or alone in secretive and dark corners’” (Plummer in Robinson, 2013, p. 11).

Chatterjee does acknowledge the importance of education and guidance, but awareness on the part of parents is the first step: “It is a fact that most children, on some occasions, make bad decisions. SafeKidsPro provides parents with the information they need to provide the guidance needed to save their kids from damaging their reputations, being involved in cyberbullying, or interacting with people who might do them harm.”

SafeKidsPro specifically targets the English-speaking parents of children aged between 10 and 16 who are active on social media.

“We are aware that the minimum age to be on Facebook, Instagram and Twitter is 13; however, there are many kids on there who are younger, including the children of some of our users,” said Chatterjee.

“SafeKidsPro is particularly effective when used to monitor children who are just starting out on social networks. These younger kids have not necessarily been exposed to the dangers and pitfalls that exist and can more easily fall foul of them. SafeKidsPro gives parents a platform to discuss what is and is not acceptable online behaviour, [while also giving] parents the information needed to intervene early and minimise the damage caused if their kids are involved with something they shouldn’t be.”

Although SafeKidsPro has been self-funded to date, Chatterjee said they’re looking to raise capital to fund the development of a mobile app and to properly market the product locally and in the US and UK markets.

SafeKidsPro currently operates on a subscription model; users are charged $49.95 per year. This allows parents to concurrently monitor up to eight Facebook, Instagram and Twitter accounts.

Although SafeKidsPro is only beginning to be commercialised, Chatterjee said he is already proud of the feedback they’ve received from parents about how it has given them the opportunity to look out for their kids in digital environments.

“As a parent myself, this is really satisfying,” said Chatterjee.

The founders are also in discussions with law enforcement agencies about how SafeKidsPro could be used to identify crime.

Featured image (L to R): Anil Chatterjee & David Meldrum, Co-Founders, KevTech. Source: Provided.




