Like many social media platforms currently under fire for ethical lapses, Instagram has long wanted to be associated with online positivity. As cyberbullying and harassment take on many shapes and forms, Instagram has taken active steps to combat this kind of dangerous online behavior. The platform’s latest update addresses a wider swath of negative interactions by hiding hurtful comments and warning the users who post them.
Cyberbullying is one of the hardest behaviors to define and detect on social media. Even Instagram’s parent company, Facebook, struggles to estimate how much of it occurs on its own platform. With a core user base of teenagers, a group that is especially impressionable and has grown up immersed in social media, it’s now more important than ever for platforms to recognize this kind of behavior and put their best efforts into stopping it.
To that end, Instagram has added new tools to its repertoire: the platform will automatically hide comments perceived to be bullying, and it will send warning messages to users whose comments are flagged as toxic. This new feature uses machine learning to find comments whose language resembles comments reported for bullying in the past.
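Instagram’s actual classifier is proprietary, but the core idea of flagging a new comment because it resembles previously reported ones can be sketched with a toy similarity check. Everything here (the function names, the token-overlap metric, the 0.6 threshold) is an invented illustration, not Instagram’s method:

```python
# Toy sketch of similarity-based comment flagging.
# All names and thresholds are hypothetical; real systems use
# trained classifiers, not simple token overlap.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity between two comments, from 0.0 to 1.0."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

def should_flag(comment: str, reported_comments: list[str],
                threshold: float = 0.6) -> bool:
    """Flag a comment if it closely resembles one already reported for bullying."""
    return any(jaccard_similarity(comment, r) >= threshold
               for r in reported_comments)

reported = ["you are so ugly", "nobody likes you"]
print(should_flag("you are really ugly", reported))   # resembles a reported comment -> True
print(should_flag("great photo, love it", reported))  # no overlap -> False
```

A production system would replace the overlap metric with a model trained on labeled reports, but the pipeline shape (compare incoming comments against known-bad examples, hide or warn above a threshold) is the same.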
For once, technology and social media are being proactive about the mental health of their users, and we’re very excited about it.