Enact New Federal Deepfake Fraud Law, Says Microsoft
This is one of several recommendations made by the IT major to address the challenges arising from abusive AI-generated content.
IT major Microsoft has urged the US Government to enact a comprehensive new law to combat the misuse of AI-generated synthetic content, more commonly known as “deepfakes”.
In an extensive 42-page white paper titled “Protecting the Public from Abusive AI-Generated Content”, released on July 30, 2024, Microsoft Vice Chair and President Brad Smith said that while current US federal fraud statutes could be revised and enhanced to address synthetic content, the most comprehensive solution would be to enact a new federal synthetic content fraud statute. Such a statute, he added, should include both civil and criminal provisions, offering criminal penalties, civil seizure and forfeiture, and injunctive and other equitable relief.
The report aims to address the challenges arising from abusive AI-generated content and provide policy recommendations to combat these problems. At the same time, it has also spelled out the steps initiated by Microsoft itself to combat online abusive AI-generated content.
Other Remedial Steps
In addition to suggesting a brand-new law against deepfake makers, the report lays out two more ideas that could have a significant impact in combating deceptive and abusive AI-generated content: requiring AI system providers to label synthetic content using provenance tooling, and expanding collective abilities to promote content authenticity and detect abusive deepfakes.
Further, the Microsoft President has emphasized the need for collaboration between the public and private sectors in establishing regulatory frameworks and policies for responsible AI development and usage.
According to the report, enacting a new federal "deepfake fraud statute" would involve the following:
Comprehensive approach: The statute would encompass both civil and criminal provisions and could provide for criminal penalties, civil seizure and forfeiture, as well as injunctive and other equitable relief.
Template for consideration: The US Congress can consider the “Truth in Caller ID Act of 2010” as a useful template. This act criminalizes the transmission of misleading or inaccurate caller identification information with the intent to defraud, cause harm, or wrongfully obtain anything of value. It includes civil forfeitures, criminal fines, imprisonment, and enforcement by state attorneys general.
The report pointed out that financial fraud scams, too, have been increasing, particularly those targeting older Americans, who are seen as vulnerable. With the advancement of technologies such as generative AI, these numbers are expected to grow.
In addition to enacting a new federal statute, other actions that can be taken include:
Revising sentencing guidelines: The United States Sentencing Commission can revise the federal sentencing guidelines for fraud-related offenses to include sentencing enhancements for the fraudulent use of synthetic content during the commission of a crime. This would allow federal judges to consider the use of synthetic content as an aggravating factor in sentencing.
Prioritizing enforcement: The United States Deputy Attorney General (DAG) can issue a memorandum prioritizing synthetic content fraud enforcement by US Attorneys. This would provide guidance and direction for the investigation and prosecution of unlawful conduct related to deepfake fraud.
FTC penalties: The Federal Trade Commission (FTC) is authorized to seek penalties from perpetrators of unfair and deceptive practices, including those involved in deepfake fraud, providing an additional avenue for holding fraudsters accountable.
What Microsoft Has Done So Far To Prevent Abusive Use Of Tech
According to the white paper, Microsoft has initiated the following steps:
Implementing a safety architecture that includes red team analysis, preemptive classifiers, blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system.
Automatically attaching provenance metadata to images generated with OpenAI’s DALL-E 3 model in Azure OpenAI Service, Microsoft Designer, and Microsoft Paint.
Developing standards for content provenance and authentication through the Coalition for Content Provenance and Authenticity (C2PA) and implementing the C2PA standard so that content carrying the technology is automatically labeled on LinkedIn.
Taking continued steps to protect users from online harms, including by joining the Tech Coalition’s Lantern program and expanding PhotoDNA’s availability.
Launching new detection tools like Azure Operator Call Protection, which uses AI to help its customers detect potential phone scams.
Executing its commitments under the new Tech Accord to combat deceptive use of AI in elections.