New Law in New Hampshire Creates a Private Right of Action for Victims of Deepfakes
This post is part of a series sponsored by IAT Insurance Group.
There is no shortage of examples in recent years of how advanced technology can be used in shocking ways:
- Fraudsters recently impersonated the CFO of a multinational company on a video call, convincing an employee to transfer $25 million of company funds to the fraudsters.
- A disgruntled athletic director at a high school in Maryland is accused of creating and distributing a fake audio recording of the school’s principal that contained racist and other offensive remarks.
- Reports are emerging across the country of fake intimate images – created with face-swapping and “undressing” apps – being used as tools for cyberbullying.
These are clear examples of deepfakes, which span three main content types: video, audio, and image.
As the technology has improved and harm to victims has mounted, concern about deepfakes has continued to grow. Recently, that concern culminated in a new law in New Hampshire that could influence the rest of the country.
New Hampshire: Deepfake generation can lead to civil and criminal actions against the perpetrator
Not mentioned above – but perhaps the deepfake tipping point – came at the beginning of 2024, when a fake audio recording of Joe Biden was distributed to New Hampshire residents via robocalls, urging the state’s voters not to participate in the presidential primary.
This prompted a civil lawsuit against the creator of the audio, as well as the telecom companies that distributed the calls. The New Hampshire Attorney General has also charged the person who created the deepfake with multiple counts.
A few months later, the Governor of New Hampshire signed into law HB 1432, the first state law enacted that specifically allows for a private right of action from victims of deepfakes. From the law:
A person may bring an action against any person who knowingly uses any likeness in video, audio, or any other media of that person to create a deepfake for the purpose of embarrassing, harassing, stalking, defaming, extorting, or causing any financial or reputational harm to that person, for damages resulting from such use.
The law also states that the producer of a deepfake is guilty of a class B felony “if a person creates, distributes, or knowingly presents any likeness in video, audio, or any other media of an identifiable person that creates a deepfake for the purpose of embarrassing, harassing, stalking, defaming, extorting, or causing financial or reputational harm to the identifiable person.”
The law takes effect January 1, 2025.
New Hampshire Law May Provide a Playbook for Other States
Even in polarized times, it makes sense that there would be broad bipartisan support for more substantive legislation. No politician is immune to the dangers posed by deepfakes, and their constituents are likely to be equally concerned about the harm deepfakes can cause.
As of June, according to the Voting Rights Lab, there were 118 bills pending in 42 state legislatures containing provisions aimed at regulating election misinformation generated by AI.
What will be worth watching is whether the laws that end up being passed are written broadly enough to capture deepfakes produced outside the political arena, and whether they follow New Hampshire in allowing a private right of action for those harmed by deepfakes. Legislation proposed by New York Governor Kathy Hochul this past spring would provide such a private right of action.
Risk and Insurance Impacts
“Private right of action” are four words that will always catch the attention of liability insurance professionals. General Liability and Homeowners policies – as well as other specialty lines of business – may be impacted if and when civil actions involving deepfakes increase.
General Liability
Regarding General Liability insurance, claims involving deepfakes should primarily be considered in the context of Coverage B – Personal And Advertising Injury – of the ISO Commercial General Liability policy. The definition of “personal and advertising injury” in the ISO CG 00 01 base policy includes the following two offenses:
d. Oral or written publication, in any manner, of material that slanders or libels a person or organization or disparages a person’s or organization’s goods, products or services;
e. Oral or written publication, in any manner, of material that violates a person’s right of privacy.
It is certainly possible that harms involving deepfakes could support claims brought under this coverage section. Coverage B differs from Coverage A in that, depending on the applicable exclusions, there may be some degree of coverage for intentional acts. If a business defames another party and/or violates that party’s right of privacy with a deepfake, claims may reach that business’s GL carrier.
Homeowners
Cyberbullying, which can result in civil claims alleging invasion of privacy, intentional infliction of emotional distress, and negligent entrustment, has been discussed as a Homeowners insurance exposure since the early days of the Internet. Most US states have laws in place establishing parental responsibility for a child’s misdeeds.
With deepfake apps (and other AI tools) readily available for young people to misuse, this exposure has grown as more applications incorporate the technology. Ultimately, whether Homeowners coverage will come into play depends on the applicable policy language – and the jurisdiction of the case.
Specialty Lines
In addition to General Liability and Homeowners insurance, specialty lines of business may also be impacted, including Crime, Cyber, and D&O policies. Excess policies may also be involved if verdicts track recent social inflation trends and result in seven- or eight-figure payouts.
Finally, as deepfake technology continues to improve, the barrier to entry is dropping: anyone with an internet connection can create a deepfake and expose themselves to liability. Given these dynamics, it will be important for risk and insurance professionals to do the following:
- Understand how the use cases for deepfakes – and artificial intelligence technology in general – continue to evolve.
- Track how regulations and laws – both at the federal and state level – are being implemented to deal with deepfakes.
- Consider how policy language may respond in the event of a claim.