AI project App Danger: a blacklist of apps that are harmful to young people
Brian Levine, a computer scientist at the University of Massachusetts Amherst, was tired of vetting every smartphone app his 14-year-old daughter asked permission to install. To spare himself the constant checks, and to help other parents, he built a system that automatically evaluates user reviews of social apps. An artificial intelligence (AI) scans reviews in the Apple and Google app stores for keywords such as “child porn” or “pedophilia”, then checks the surrounding context to weed out obvious false positives.
App Danger Project searches child abuse reviews
On this basis, combined with additional manual checks, Levine and colleagues have spent the past two years building the App Danger Project: a website with a blacklist of applications that are comparatively likely to be used to exchange or distribute content dangerous to children and young people. “To increase the visibility of reports of child sexual abuse, we collected app reviews that raise concerns about child exploitation,” the team explains on the homepage. According to the site, the listed applications are “sorted by the number of ratings from the app stores in which users have reported dangerous situations for children”.
To discover relevant reviews, “we use a machine learning algorithm,” the scientists explain. The team of around a dozen computer scientists does not contact the authors of the reviews, Levine told the New York Times. However, every user report the machine flags is checked by a human; reviews that do not substantiate a child-safety concern are discarded.
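The pipeline described above — keyword scan, context check, then human review — can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the keyword list, the sample reviews, and the placeholder context check are all invented for the example, and the real system replaces the context check with a trained machine-learning model.

```python
# Illustrative sketch of the described pipeline:
# 1) scan app-store reviews for alarm keywords,
# 2) apply a (placeholder) context check to drop obvious false positives,
# 3) queue whatever remains for manual human review.

KEYWORDS = {"child porn", "pedophilia", "predator"}  # illustrative list only


def keyword_hits(review: str) -> set[str]:
    """Return the alarm keywords that appear in a review (case-insensitive)."""
    text = review.lower()
    return {kw for kw in KEYWORDS if kw in text}


def looks_like_false_positive(review: str) -> bool:
    """Placeholder context check; the real project uses an ML model here.
    This toy version only discards reviews that explicitly negate the concern."""
    text = review.lower()
    return "no pedophilia" in text or "not a predator" in text


def flag_for_manual_review(reviews: list[str]) -> list[str]:
    """Machine pre-filter: every review it keeps still goes to a human."""
    return [
        r for r in reviews
        if keyword_hits(r) and not looks_like_false_positive(r)
    ]


reviews = [
    "Great app, my kids love it!",
    "Reported a predator messaging minors here, beware.",
    "Despite the rumors there is no pedophilia problem on this app.",
]
print(flag_for_manual_review(reviews))
# only the second review survives the pre-filter
```

The third sample review shows why the context step matters: it contains an alarm keyword but negates the concern, so a pure keyword match would misfire.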
Over 550 apps screened for child sexual abuse
Levine complains that Apple’s and Google’s app stores offer no keyword search of reviews, which makes it hard for parents to find warnings about inappropriate sexual behavior. Based on the relevant reviews, the team has so far examined over 550 apps, focusing on social media and messaging. A fifth of them had two or more complaints about child sexual abuse content, and 81 apps had seven or more reviews of this kind. A total of 182 apps are currently on the list; some appear twice because they are listed in both Google’s and Apple’s stores.
In addition to popular services such as WhatsApp, Snapchat, Reddit, Discord, Tumblr and various dating apps, the directory also includes three applications with numerous negative mentions: Hoop, MeetMe and Whisper. According to the market research firm Sensor Tower, the three together generated around US$30 million in app-store revenue last year. They also figure in several criminal cases that, according to the US Department of Justice, involve child sexual abuse.
Researchers see Apple and Google as having an obligation
The researchers believe Apple and Google have a duty to give parents more information about the risks some apps pose, and to monitor more closely those applications that have already been implicated in abuse. Not every app whose reviews point to predators should be removed, Hany Farid, a computer scientist at the University of California, Berkeley, who works on the project, told the Times. But Apple and Google should at least examine why some of the problematic applications are still available.
The two tech companies say they regularly scan user reviews with their own computer models and investigate allegations of child sexual abuse; apps that violate their policies are removed. Age ratings are meant to guide parents and children, and parental-control software lets legal guardians block downloads. The companies also offer app developers tools to detect depictions of child abuse. Apple told the newspaper it had removed ten programs from the list after a thorough examination; Google said it had found no evidence of relevant material.
According to the report, Hoop’s new management assured that it had tightened its rules for deleting content. The Meet Group, which owns MeetMe, said it has zero tolerance for abuse imagery, uses AI to search for it, and reports inappropriate and suspicious behavior to the authorities. Whisper did not respond to a query from the Times. Levine hopes that the project, alongside similar services such as Common Sense Media, will continue to help identify providers that fail to give users adequate protection. In Germany, youth protection officers have had to take interaction risks into account when assigning age ratings for computer games since 2021.