January 8, 2020 at 07:23AM
via https://www.aiupnow.com
Richard Adhikari, Khareem Sudlow
Facebook on Monday promised to remove certain "misleading manipulated media": videos edited or synthesized in ways that are not apparent to an average person and that likely would mislead viewers into believing a video's subject said words they did not say, as well as products of artificial intelligence or machine learning that merge, combine, replace, or superimpose content onto a video, creating a fake video that appears authentic.
The new policy does not apply to content that is parody or satire, or videos edited to omit words or change the order of spoken words.
Videos that do not fit Facebook's criteria for removal still may be reviewed by Facebook's independent third-party fact checkers -- more than 50 partners worldwide that conduct fact-checking in more than 40 languages.
Facebook will "significantly reduce" the News Feed distribution of photos or videos flagged by fact checkers. Such photos or videos will be rejected if they are being run as an ad. People who see, try to share, or already have shared such photos or videos will be alerted that they are false.
"If we simply removed all manipulated videos flagged by fact checkers as false, the videos would still be available elsewhere on the Internet or social media ecosystem," Monika Bickert, Facebook's vice president, global policy management, pointed out. "By leaving them up and labeling them as false, we're providing people with important information and context."
However, the fact-checking program does not apply to posts construed as political speech.
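Taken together, those criteria and exemptions amount to a fairly small decision tree. The sketch below is purely illustrative, with hypothetical field names rather than anything drawn from Facebook's actual moderation systems, but it shows how narrow the removal rule is compared with the broader fact-checking path.

```python
# Illustrative sketch only: the removal criteria and exemptions described above,
# expressed as a simple decision rule. Field names are hypothetical and do not
# reflect Facebook's actual moderation systems.
from dataclasses import dataclass

@dataclass
class VideoPost:
    ai_synthesized: bool            # AI/ML merged, replaced, or superimposed content
    misleads_about_speech: bool     # subject appears to say words they did not say
    edit_apparent_to_viewer: bool   # the manipulation is obvious to an average person
    is_parody_or_satire: bool
    only_omits_or_reorders_words: bool

def moderation_action(post: VideoPost, flagged_false_by_fact_checkers: bool) -> str:
    # Stated exemptions: parody/satire and edits that merely omit or reorder words.
    exempt = post.is_parody_or_satire or post.only_omits_or_reorders_words

    # Removal criteria: AI-generated fakery, or non-obvious edits that put words
    # in a subject's mouth.
    if not exempt and (post.ai_synthesized or
                       (post.misleads_about_speech and not post.edit_apparent_to_viewer)):
        return "remove"

    # Other content may still go to third-party fact checkers; if flagged false,
    # distribution is reduced, ads are rejected, and viewers see a warning.
    if flagged_false_by_fact_checkers:
        return "reduce_distribution_label_and_reject_ads"

    return "leave_up"
```

Read this way, the widely shared slowed-down Pelosi video discussed below would escape removal, since it was not AI-generated and put no new words in her mouth, but it could still be flagged, down-ranked and labeled by fact checkers.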
"Any change other than removing background noise and artifacts outside of the speaker's control effectively changes the record and may change how people view the speaker or interpret what they say," he told the E-Commerce Times.
As it stands, the ban "implies you could alter a video using a video editor and it would then be OK," Enderle said. "Given we have state players trying to change perceptions and alter elections, that isn't enough of a hurdle to overcome to protect against manipulation and fraud."
For example, Twitter declined to remove a video of presidential candidate Joe Biden manipulated to appear that he supports white nationalism, noted Liz Miller, principal analyst at Constellation Research.
"Because the video itself was not a deepfake and was a case of editing away context, they felt it did not go against their policy," she told the E-Commerce Times.
When is a video that's edited to remove context and thus become misleading actually a fake or a fraud? she wondered. When is a video that retains words and context but has the audio slowed to make a subject appear to be drunk a deepfake that's purpose-built to harm versus a satire?
Notably, the ban will not apply to a widely circulated video of House Speaker Nancy Pelosi with the audio slowed down to artificially slur her speech, because it was not generated by AI.
However, Twitter has proposed plans to label content that is fake, misleading, or artificially manipulated or created using artificial intelligence, Miller said.
Facebook has painted its battle against deepfakes as part of a larger effort against fake news and misinformation that includes teaming up with various organizations and institutions of higher learning as well as third-party fact checkers.
The company last year launched the Deep Fake Detection Challenge to encourage research and the creation of open source tools to detect deepfakes.
"Facebook is becoming the land of the also-ran when it comes to ethics, privacy and security," she observed. "If they really wanted to battle fake news and misinformation, they would have a third-party analyst or research group review every fact-checking source to give an unvarnished assessment of veracity and ability to deliver factual response."
Facebook's deepfakes ban "will only be as effective as the tools they apply to enforce it. If they apply a rock-solid algorithm and have truly neutral assessors checking accuracy, then they have a shot at being effective," Miller remarked.
"I think they don't know what the right call is between free speech and censorship," said Ray Wang, principal analyst at Constellation Research.
"The selective approach keeps it confusing," he told the E-Commerce Times.
Facebook has the technology to make its strategy work, Wang said. "I think they will get better over time."
The new policy does not apply to content that is parody or satire, or videos edited to omit words or change the order of spoken words.
Videos that do not fit Facebook's criteria for removal still may be reviewed by Facebook's independent third-party fact checkers -- more than 50 partners worldwide that conduct fact-checking in more than 40 languages.
Facebook will "significantly reduce" the News Feed distribution of photos or videos flagged by fact checkers. Such photos or videos will be rejected if they are being run as an ad. People who see, try to share, or already have shared such photos or videos will be alerted that they are false.
"If we simply removed all manipulated videos flagged by fact checkers as false, the videos would still be available elsewhere on the Internet or social media ecosystem," Monika Bickert, Facebook's vice president, global policy management, pointed out. "By leaving them up and labeling them as false, we're providing people with important information and context."
However, the fact-checking program does not apply to posts construed as political speech.
Major Loopholes
The ban also should apply to videos edited to omit words or change the order in which they are spoken, maintained Rob Enderle, principal analyst at the Enderle Group.

"Any change other than removing background noise and artifacts outside of the speaker's control effectively changes the record and may change how people view the speaker or interpret what they say," he told the E-Commerce Times.
As it stands, the ban "implies you could alter a video using a video editor and it would then be OK," Enderle said. "Given we have state players trying to change perceptions and alter elections, that isn't enough of a hurdle to overcome to protect against manipulation and fraud."
For example, Twitter declined to remove a video of presidential candidate Joe Biden manipulated to appear that he supports white nationalism, noted Liz Miller, principal analyst at Constellation Research.
"Because the video itself was not a deepfake and was a case of editing away context, they felt it did not go against their policy," she told the E-Commerce Times.
Deep Confusion
"Everyone is trying to figure this out," Miller said. "This is getting to be a complex issue made even more chaotic by a lack of understanding or real awareness around what people are talking about."When is a video that's edited to remove context and thus become misleading actually a fake or a fraud? she wondered. When is a video that retains words and context but has the audio slowed to make a subject appear to be drunk a deepfake that's purpose-built to harm versus a satire?
Notably, the ban will not apply to a widely circulated video of House Speaker Nancy Pelosi with the audio slowed down to artificially slur her speech, because it was not generated by AI.
However, Twitter has proposed plans to label content that is fake or misleading, or that has been artificially manipulated or created using artificial intelligence, Miller said.
Facebook has painted its battle against deepfakes as part of a larger effort against fake news and misinformation that includes teaming up with various organizations and institutions of higher learning as well as third-party fact checkers.
The company last year launched the Deepfake Detection Challenge to encourage research and the creation of open source tools to detect deepfakes.
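That challenge targets exactly the kind of frame-level detection tooling researchers have been building. The snippet below is only a generic illustration of the approach, assuming OpenCV and NumPy are available; frame_classifier is a stand-in for any per-frame detector and is not part of the challenge's or Facebook's actual code.

```python
# Generic sketch of frame-level deepfake screening. The classifier itself is
# assumed to be supplied by the caller; nothing here is Facebook's tooling.
import cv2
import numpy as np

def sample_frames(video_path, every_n=30):
    """Yield every n-th frame of a video as an RGB array."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        idx += 1
    cap.release()

def score_video(video_path, frame_classifier, every_n=30):
    """Average per-frame 'synthetic' probabilities from any frame classifier."""
    scores = [frame_classifier(f) for f in sample_frames(video_path, every_n)]
    return float(np.mean(scores)) if scores else 0.0
```

Averaging per-frame scores is the simplest possible aggregation; production detectors typically also examine face crops and temporal consistency across frames.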
A Question of Ethics
"There is a question of ethical production and clear-cut fraud. In a world that is unable to self-regulate ethical behavior, guide rails must be erected," Miller said."Facebook is becoming the land of the also-ran when it comes to ethics, privacy and security," she observed. "If they really wanted to battle fake news and misinformation, they would have a third-party analyst or research group review every fact-checking source to give an unvarnished assessment of veracity and ability to deliver factual response."
Facebook's deepfakes ban "will only be as effective as the tools they apply to enforce it. If they apply a rock-solid algorithm and have truly neutral assessors checking accuracy, then they have a shot at being effective," Miller remarked.
"I think they don't know what the right call is between free speech and censorship," said Ray Wang, principal analyst at Constellation Research.
"The selective approach keeps it confusing," he told the E-Commerce Times.
Facebook has the technology to make its strategy work, Wang said. "I think they will get better over time."