via https://www.AiUpNow.com
Terri Coles, Khareem Sudlow
The human cost of monitoring social networks and other sites for offensive content can be high, but advances in artificial intelligence and machine learning are making it possible to offload some of that work, benefiting individuals and organizations alike.
In 2014, Wired ran an article about the emotional and psychological toll on content moderators in the Philippines who remove offensive material for social media platforms like Facebook. In March of this year, the BBC's Storyville ran an episode called "The Internet's Dirtiest Secrets: The Cleaners." Like the Wired article, the BBC documentary focused on the terrible toll that filtering the worst of the internet takes on humans.
LinkedIn and other companies (and countries) may have hit on a way to mitigate that cost: machine learning. Indeed, ML and AI have become valuable tools for purposes not previously predicted, such as filtering certain content from professional platforms, says Joana Gutierrez, CEO and founder of the virtual assistant Meethappy.
The task of keeping nudity and profanity--and much worse--off LinkedIn and other internet platforms isn't an easy one. LinkedIn has well over half a billion members posting in roughly two dozen languages. If the social network is going to rely on machine learning to keep the site clean, that system has to work extremely well.
Fortunately for LinkedIn, it does, thanks to an algorithm built by Rushi Bhatt at the company's development center in Bengaluru, India. Factor Daily reported on Bhatt's work, which is tied to the site's ongoing focus on user-generated content. In the Factor Daily article, Bhatt described how machine learning is used to assess content quality for LinkedIn, including categorizing content and pre-processing images. The ability to rely on machine learning for this work has expanded since Wired reported on Filipino content moderators, thanks to advances in neural network technology.
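LinkedIn hasn't published the details of its models, but the general shape of such a system is well understood: a neural network pretrained on general images, fine-tuned to flag unsafe content. The Python sketch below illustrates that pattern with PyTorch and torchvision; the model choice, class labels, and file name are assumptions for illustration, not LinkedIn's actual pipeline.

```python
# A minimal sketch of the kind of classifier described above: a neural
# network pretrained on general images, with a small two-class head
# ("safe" vs. "unsafe") bolted on. This is NOT LinkedIn's actual model;
# the labels, classes, and file name are assumptions for illustration.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

# Standard image pre-processing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Start from an ImageNet-pretrained backbone and swap in a binary head.
# In a real system, the head would be fine-tuned on labeled moderation data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

def unsafe_probability(path: str) -> float:
    """Return the model's estimated probability that an image is unsafe."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()  # index 1 = "unsafe"

if __name__ == "__main__":
    print(f"unsafe probability: {unsafe_probability('upload.jpg'):.2f}")
```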
The benefits aren't limited to content created outside a company. Automated filtering can also save a firm from an HR (and PR) nightmare if an employee posts inappropriate material to a company feed, whether intentionally or accidentally, Gutierrez pointed out.
There are more advancements to come. For example, IBM's Watson Tone Analyzer examines the tone and intent behind content as well as what it literally says--an important capability for sites trying to balance freedom of speech with protecting users and removing illegal or harmful content. Amazon announced in 2017 that Rekognition could detect explicit or adult content in images, and Google promotes its Cloud Vision API as being able to do the same. (It should be noted that Rekognition has come under fire both for the accuracy of its technology and for the organizations to which Amazon sells the platform.)
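For teams that would rather not build their own models, these hosted services expose moderation as a single API call. As an illustration, here is roughly how Rekognition's image-moderation endpoint is invoked from Python with boto3 (the bucket and object names are placeholders):

```python
# A hedged example of calling a hosted moderation API -- here, Amazon
# Rekognition's DetectModerationLabels via boto3. The bucket and object
# names are placeholders; credentials and region come from your AWS config.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "upload.jpg"}},
    MinConfidence=60,  # only return labels scored at 60% confidence or higher
)

for label in response["ModerationLabels"]:
    # Each label carries a name (e.g. "Explicit Nudity"), a parent
    # category, and a confidence score between 0 and 100.
    print(f'{label["Name"]} ({label["ParentName"]}): {label["Confidence"]:.1f}%')
```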
Challenges remain--for free speech, but also for artistic expression, as Gutierrez pointed out. One person's offensive nude may be another's artistic photograph, filtered off a gallery site that shares images to promote new work. The line between NSFW or offensive content and content a particular audience actually wants is not always obvious.
Make no mistake: There are still plenty of human beings involved in the process, both on Bhatt's team at LinkedIn and at other big tech platforms.
“AI has not reached the point where it can function accurately without human interception,” Gutierrez said.
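In practice, that human involvement often takes the form of confidence-based routing: the software handles the clear-cut cases and queues everything ambiguous for a reviewer. A minimal sketch, with thresholds chosen purely for illustration:

```python
# A sketch of the human-in-the-loop pattern Gutierrez describes: the model
# acts on its own only at the extremes, and the gray area goes to a person.
# Both thresholds are illustrative assumptions, not published values.
AUTO_REMOVE_THRESHOLD = 0.95  # confident enough to filter without review
AUTO_ALLOW_THRESHOLD = 0.05   # confident enough to publish without review

def route(unsafe_probability: float) -> str:
    """Decide what happens to a post given the model's unsafe score."""
    if unsafe_probability >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # high confidence it is unsafe: filter it
    if unsafe_probability <= AUTO_ALLOW_THRESHOLD:
        return "publish"       # high confidence it is fine: let it through
    return "human_review"      # everything in between still needs a person

print(route(0.99))  # -> remove
print(route(0.50))  # -> human_review
print(route(0.01))  # -> publish
```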
Also, as we’ve seen from Facebook’s much-maligned newsfeed, predictive content is not perfect. But if machine learning can take over the potentially damaging work of keeping the worst of the internet off our newsfeeds, perhaps it will soon also make those feeds better with what it adds, not only what it removes.