Plans are underway to announce new regulations in response to social media sites' indifferent handling of the urgent warning to act against doctored videos. Criminal complaints may be filed both by victims and by anyone who knew the content was a deep fake.
Sites found to be violating the guidelines may be blocked, with penalties determined under the criminal code. In recent months there have been numerous deep fake incidents targeting well-known people. The most recent victim, cricket legend Sachin Tendulkar, had to denounce as fake an AI-generated video that showed him endorsing a gaming app.
Source: Live Law
Concern is mounting over the possible abuse of deep fakes and of chatbots posing as candidates during the upcoming Lok Sabha elections. Establishing a strict regulatory framework is crucial.
With new regulations treating the circulation of such content as akin to forgery, social media companies now have a responsibility to take the threat of deep fake content with the seriousness it merits. It is a timely alert about the legal ramifications of doing nothing, or of taking a careless approach to preventing harm.
Source: WION
"Deep fakes" use AI algorithms to create realistic-looking and realistic-sounding videos, audio recordings, or photographs. They produce highly convincing fake content, blurring the line between fact and fiction.
Deep fakes also threaten privacy. To properly address these concerns, industry participants and regulators must work together. The task is challenging, ranging from building robust detection methods to encouraging proactive reporting and raising public awareness. Holding both the creators and the distributors of such content accountable is crucial.
What do you think about this? Comment below.