Congress launches a new act to tackle AI’s threat to music
Politicians look to tackle deepfakes and cloning with their newly proposed bill, protecting artists and public figures.
US politicians have introduced the No AI Fraud Act to the US House of Representatives, proposing the bill as an effort to protect people from unauthorised AI replicas of their image and voice.
The act follows the recently proposed No Fakes Act, which also looks to tackle AI. The No AI Fraud Act is an acronym for No AI Fake Replicas and Unauthorized Duplications, and its sponsors say it targets “abusive AI deepfakes, voice clones, and exploitive digital human impersonations”.
The act would establish a “right of publicity”, protecting against the use of someone’s likeness, voice, or other personal attributes without permission. It would also allow people to “seek monetary damages for harmful, unauthorized uses of their likeness or voice”.
Mitch Glazier, Chairman and CEO of the Recording Industry Association of America (RIAA), said: “Putting in place guardrails like the No AI Fraud Act is a necessary step to protect individual rights, preserve and promote the creative arts, and ensure the integrity and trustworthiness of AI.”
He continued: “The No AI Fraud Act is a meaningful step towards building a safe, responsible and ethical AI ecosystem, and the RIAA applauds Representatives Salazar, Dean, Moran, Morelle, and Wittman for leading in this important area.”
Industry response across the board has been positive. Discussions have heated up over the last year about AI’s place in music as its use spreads and its capabilities grow. Whilst it is being introduced to enhance streaming services for listeners and power unique tools like YouTube’s Dream Track, it also poses a threat.
Recreations of artists’ voices and styles have proliferated across the internet with no repercussions, and AI has been used in scams where supposed leaks of unreleased tracks were sold for excessive sums.
Governments are quickly trying to catch up with the fast-moving technology in legal terms. The No AI Fraud Act would be federal and therefore provide legal protection across the country.
Dr. Moiya McTier, senior advisor of the Human Artistry Campaign, one of the most vocal campaign groups on the topic, said: “Timely action is critical as irresponsible AI platforms are being used to launch deepfake and voice impersonation models depicting individuals doing and saying things they never have or would. This not only has the potential to harm these artists, their livelihoods and reputations, but also degrades societal trust.”