Facebook Has a New Platform Policy Surrounding Deepfake Content
Daniel Johnson — January 8, 2020 — Tech
Facebook has introduced a new platform policy that bans deepfake content from the social media site. The policy targets video that has been edited or synthesized using AI or machine learning in ways that would mislead an average viewer into thinking it is authentic. The goal is to keep users from confusing deepfake content with real video; however, Facebook will not remove all fake video content from the platform.
The social media giant appears to be rolling out the new policy to guard against deepfake content that could have implications for the 2020 presidential election. Facebook explained its reasoning in a statement: "If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labeling them as false, we're providing people with important information and context."
Image Credit: Shutterstock