How the world is stepping up to combat hateful, harmful content online
Although condemning hate speech, including online hate speech, is relatively common in mainstream society, closer study shows that this consensus is not well enforced.
Starting with the very definition of hate speech, and the balance to be struck between freedom of speech, freedom from hatred or fear, and the protection of children and young people, there is currently little clarity on how the mainstream can counteract hate speech.
Young people, who increasingly weave multiple forms of social media into their intimate, social and political lives, both produce and confront hate speech online. They do so in a context where what counts as hate speech, and what is recognized as racism, are key dimensions of online engagement and everyday conversation.
Let’s have a look at how researchers around the world are working to combat online hate.
Twitter has closed many accounts for abusing, bullying and intimidating others, and a parliamentary committee has summoned the company's CEO.
Twitter says it is dealing strictly with those who spread hatred and use abusive language in the online world.
It wants to protect its users from online bullying, of which a large number of people around the world are victims.
An estimated 20 to 33 percent of children are bullied at school, and a similar share of working people are bullied in their offices.
Bullying, intimidation and abuse leave a deep mark on people's minds and can damage their later lives.
Bullying also takes a toll on relationships, and in the online world it is becoming commonplace.
On Instagram, people's pictures are mocked: some are likened to animals, others called fat or ugly.
Similarly, on Twitter and Facebook, people who disagree with your views often turn abusive. This online bullying is deeply harmful to mental health.
Its victims sometimes go as far as harming themselves.
Bullying in the Online World
The scale of the problem is clear from one figure: 59% of American teenagers have been bullied in the online world.
This challenge has arisen from new technology, and the same technology is now being used to tackle it.
With the help of artificial intelligence, efforts are under way to identify and deal with such offenders in the online world.
It is more or less impossible for humans to read every online post, so artificial intelligence has become a necessity for dealing with online bullying and trolling.
One system developed in Belgium identifies trolls by the particular words used in online posts.
As an experiment, this artificial-intelligence system was set to read 114,000 posts from the social media site ASKfm.
Using certain signal words, it identified the authors of bullying posts. However, it failed to recognize sarcastic comments.
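The Belgian system itself is not public, but the core idea described here, flagging posts that contain known abusive signal words, can be sketched in a few lines of Python. The word list, function name and threshold below are illustrative assumptions, not the real system's vocabulary:

```python
# Minimal sketch of keyword-based bullying detection.
# ABUSIVE_WORDS and the threshold are illustrative assumptions.

ABUSIVE_WORDS = {"idiot", "loser", "ugly", "stupid", "fat"}

def is_bullying(post: str, threshold: int = 1) -> bool:
    """Flag a post if it contains at least `threshold` abusive words."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return len(words & ABUSIVE_WORDS) >= threshold

print(is_bullying("You are such a loser, just quit"))    # True: flagged
print(is_bullying("Great game last night!"))             # False: clean
print(is_bullying("Oh yes, you're a *genius*, clearly")) # False: sarcasm slips through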
Abusive language is used widely in the online world, and people resort to it for many reasons.
At the same time, many ugly comments contain no explicit abuse at all, which makes such accounts difficult to flag.
But researchers around the world are training algorithms to find such offenders in online posts.
On the social media website Reddit, researchers have identified users targeting women, Black people and overweight people.
For this, the artificial intelligence was taught to scan posts using certain words as keywords.
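Going beyond a fixed word list, "training an algorithm" on labeled examples can be sketched as a tiny word-frequency scorer in the spirit of naive Bayes. The toy training data and names below are illustrative assumptions, not the Reddit study's actual corpus or model:

```python
from collections import Counter
import math

# Toy labeled posts (illustrative assumption): 1 = hateful, 0 = normal.
TRAIN = [
    ("go back to the kitchen you cow", 1),
    ("nobody wants to see a whale like you", 1),
    ("great analysis thanks for sharing", 0),
    ("interesting point I had not considered", 0),
]

def train(posts):
    """Count how often each word appears in hateful vs. normal posts."""
    hate, ok = Counter(), Counter()
    for text, label in posts:
        (hate if label else ok).update(text.lower().split())
    return hate, ok

def hate_score(text, hate, ok):
    """Sum of smoothed log-odds that each word comes from a hateful post."""
    return sum(math.log((hate[w] + 1) / (ok[w] + 1))
               for w in text.lower().split())

hate, ok = train(TRAIN)
print(hate_score("you are a cow", hate, ok) > 0)      # learned 'cow' as a keyword
print(hate_score("thanks for sharing", hate, ok) < 0) # benign words score negative
```

Words that appear mostly in hateful posts acquire positive log-odds, so new posts containing them score above zero; the keywords are learned from the data rather than listed by hand.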
The results of this research show that hate speech on social media can be curbed, though only to an extent.
Artificial intelligence can be of great use here: even offenders who liken people to animals can be identified.
Companies like Instagram have started monitoring posts using AI.
A 2017 survey found that 42% of young people on Instagram had been bullied or mocked. Among the victims is Queen guitarist Brian May.
After being targeted, May said: "From my own experience, I now understand very well what children go through when they fall victim to such people in the online world and their friends turn into enemies."
Instagram now monitors its platform with artificial intelligence.
People who post intimidating or abusive content are identified and their accounts are closed.
The videos and pictures such people post are tracked as well.
Instagram's system also watches for mocking jokes made by juxtaposing two pictures.
Instagram says victims of online bullying often do not complain on their own, so it is necessary to take such steps in their defense.
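Instagram has not published how it compares pictures, but one standard technique for spotting near-duplicate or edited images is perceptual hashing. The sketch below implements a minimal average hash in pure Python; the tiny hand-written pixel grids stand in for real images, which would first be resized to a small grayscale grid (commonly 8x8):

```python
# Minimal average-hash sketch for comparing two images.
# The 2x2 "images" below are illustrative assumptions.

def average_hash(pixels):
    """Bit list: 1 where a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

img_a = [[10, 200], [220, 30]]   # toy grayscale "image"
img_b = [[12, 198], [210, 35]]   # slightly edited copy
img_c = [[250, 20], [15, 240]]   # unrelated image

h_a, h_b, h_c = map(average_hash, (img_a, img_b, img_c))
print(hamming(h_a, h_b))  # 0 -> near-duplicate
print(hamming(h_a, h_c))  # 4 -> different
```

A small Hamming distance between hashes means the two pictures are essentially the same image, even after light edits, which is how a reposted or mocked-up copy can be matched to the original.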
Bullying, of course, does not happen only in the online world. Incidents of sexual harassment have recently come to light in many companies.
Women are often the victims of discrimination. Technology can help these people too.
Scientists at University College London have designed a chatbot named Spot.
It talks to victims of workplace bullying or discrimination and records their experiences; Spot stores each account as a memory that can later be used as evidence.
A chatbot named Botler, developed in the US, goes even further than Spot: it is designed to help victims of sexual abuse.
It was built by feeding in data from 300,000 court cases in the US and Canada.
By listening to what people tell it, it works out whether they are victims of sexual abuse, and then tries to help them.
So far it has given correct results 89% of the time. But artificial intelligence does not only protect people from intimidation and exploitation.
It is also being used to save lives. Around the world, a person dies by suicide every 40 seconds, which amounts to more than 2,000 deaths every day.
If someone's behavior reveals that they are contemplating suicide, many of these lives could be saved.
Predicting a person's mental health, however, is a huge challenge. Artificial intelligence can collect and review vast amounts of information and warn us of many dangers. Researchers are training intelligent systems to examine patients' health records and flag those likely to harm themselves.
So far such systems have predicted accurately in 92 percent of cases. These results can help those who care for mental health: therapists will be able to assist their patients with these systems. And if machine-analyzed data is used to help people in trouble, what is wrong with that?
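A record-based risk flag of the kind described above can be sketched as a simple logistic model over hand-picked features. The features, weights and threshold here are illustrative assumptions, not any published clinical model:

```python
import math

# Illustrative features and weights; a real system would learn these
# from thousands of health records, not hard-code them.
WEIGHTS = {"prior_self_harm": 2.0, "recent_er_visit": 1.0,
           "missed_appointments": 0.5}
BIAS = -2.5

def risk_probability(record):
    """Logistic (sigmoid) score in [0, 1] from a patient's record."""
    z = BIAS + sum(WEIGHTS[k] * record.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

patient = {"prior_self_harm": 1, "recent_er_visit": 1, "missed_appointments": 2}
print(risk_probability(patient) > 0.5)  # True: flag for clinician review
print(risk_probability({}) > 0.5)       # False: no risk signals present
```

The model only flags records for review; as the article notes, the final judgment and the care itself still rest with human clinicians.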
Apps like Woebot and Vice try to solve people's problems by conversing with them. Similar apps are also proving helpful in dealing with online bullying.
Even so, no system is yet good enough to eliminate such cases entirely.
Some of the work will still have to be done by humans; machines cannot take on every responsibility.