AI, deepfakes and covert recordings in family proceedings

The rise of AI-generated evidence and the legal system’s response

30.09.2025 · 4 mins read

Key takeaways

AI-generated evidence in family law cases poses major risks

Deepfakes can fabricate events and mislead courts.

Covert recordings raise fairness concerns

Courts weigh motives, authenticity, and child impact.

Can we show the Court videos?

You can sometimes rely on video recordings in Children Act proceedings.

Artificial Intelligence (AI) is reshaping the evidential landscape in family law. Deepfakes (AI-generated audio, video or images) pose unique challenges in proceedings where credibility and child welfare are paramount. Alongside this, the increasing use of covert recordings, often captured via smartphones or home security devices such as video doorbells, adds complexity to questions of admissibility and fairness.

AI-generated evidence

Deepfakes use machine learning to create convincing but false representations of individuals. Unlike traditional editing, these technologies can fabricate entire conversations or events. In family disputes, such evidence could be used to falsely suggest abuse, neglect, or inappropriate conduct, potentially influencing decisions on child arrangements or financial settlements. Rapid advancements in AI are making it increasingly difficult to verify whether evidence is authentic or artificially generated, posing significant challenges for family law proceedings.

Covert recordings in family proceedings

Covert recordings have long been contentious in family law. On 15 May 2025, the Family Justice Council issued guidance on the topic, which addresses issues such as:

  • Motivation - Why was the recording made? To protect a child or gain tactical advantage?

  • Impact on the Child - Does the recording harm trust or relationships?

  • Authenticity and Context - Is the recording complete and accurate?

Modern technology complicates this further. Devices like video doorbells, nanny cams and smart speakers can capture audio or video without explicit consent. While these recordings may provide genuine evidence of events (e.g. exchanges at handovers), they also raise privacy concerns and questions about selective editing.

Legal framework and judicial response

Currently, there is no specific legislation addressing deepfakes or synthetic media in family law. Courts rely on general principles of authenticity and reliability. AI-generated evidence that misrepresents reality cannot satisfy these standards, and such material should be treated as inadmissible unless its authenticity is verified by independent forensic experts. The Judiciary's AI Guidance urges caution but lacks detailed protocols, creating a pressing need for bespoke rules similar to those governing covert recordings.

For covert recordings, the Family Justice Council guidance emphasises:

  • Best interests of the child.

  • Full disclosure and transparency.

  • Judicial discretion on admissibility.

Similar principles should apply to AI-generated evidence, but additional safeguards, such as expert verification, are clearly needed. Deepfakes and covert recordings both challenge evidential integrity, but the risks posed by synthetic media are unprecedented. While existing guidance on covert recordings offers a starting point, family law urgently requires clear protocols for AI-generated evidence to safeguard fairness and protect vulnerable parties.

This article was co-authored by Laura Clarke, Trainee Solicitor at Hill Dickinson.
