Photo: The logo signs of Google and YouTube at their stand ahead of the World Economic Forum annual meeting in Davos in 2022. (Fabrice Coffrini/AFP via Getty Images)
YouTube will soon begin alerting viewers when they’re watching a video made with artificial intelligence.
The Google-owned video platform says creators must disclose when they use AI or other digital tools to make realistic-looking altered or synthetic videos, or risk having their accounts removed or suspended from earning advertising revenue on YouTube. The new policy will go into effect in the coming months.
Under its privacy tools, YouTube will also allow people to request the removal of videos that use AI to simulate an identifiable person.

The proliferation of generative AI technology, which can create lifelike images, video and audio sometimes known as “deepfakes,” has raised concerns over how it could be used to mislead people, for example by depicting events that never happened or by making a real person appear to say or do something they didn’t.
That worry has spurred online platforms to create new rules meant to balance the creative possibilities of AI against its potential pitfalls.

Beginning next year, Meta, the owner of Facebook and Instagram, will require advertisers to disclose the use of AI in ads about elections, politics and social issues. The company has also barred political advertisers from using Meta’s own generative AI tools to make ads.
TikTok requires that AI-generated content depicting “realistic” scenes be labeled, and prohibits AI-generated…