LawFlash

The Risks of Hallucinations and Misuse of Generative Artificial Intelligence Before French Courts

March 18, 2026

As in many other jurisdictions worldwide, French courts are beginning to confront hallucinations created by artificial intelligence (AI), which appear as erroneous case law references in the parties’ pleadings or as misuse of AI in support of their claims. While no sanctions have yet been imposed in France, in contrast with numerous rulings rendered in the United States, the irresponsible or unreasonable use of AI can nonetheless have significant consequences for lawyers and their clients.

In an era in which the use and functionalities of AI are constantly evolving, the transformation resulting from these technological advances is notably visible among those involved in the legal system. There are now many legal AI tools designed for drafting contractual or procedural documents, analyzing and summarizing documents, and researching case law: indeed, Wolters Kluwer found in its 2026 survey conducted across 11 countries (France, the United States, China, Germany, the Netherlands, the United Kingdom, Belgium, Italy, Spain, Poland, and Hungary) that over 90% of legal professionals use at least one AI tool as part of their activity.[1]

Nevertheless, caution is warranted, as no one is immune to AI hallucinations. Generative AI tools[2] are often programmed to confidently and articulately present the response that seems statistically most likely, without any subsequent verification, which remains the prerogative of humans. The resulting erroneous or misleading content, such as false case law references, is abundant.

Following the example of the rich case law on AI hallucinations in the United States, French courts have also begun to highlight the misuse of generative AI in their rulings.

INACCURATE OR NONEXISTENT CASE LAW REFERENCES INVENTED BY AI

The emergence of such case law stems from judges’ clear desire to warn against the use of inaccurate or nonexistent case law references in the parties’ submissions.

Several scenarios may arise:

  • Where no case law exists with the reference number indicated
  • Where the ruling was not rendered on the date indicated
  • Where the scope of the case law is not related to the argument in support of which it is invoked[3]

In this context, the judge may address the party involved and/or their counsel, inviting them to “verify in the future that the references they have found on search engines or using artificial intelligence are not ‘hallucinations’.”[4]

French courts may sometimes hold the misuse of AI against the lawyers alone, rather than their clients, as evidenced by a ruling of the Orléans Administrative Tribunal: “It should be pointed out to Mr. B’s counsel that it is necessary to verify the cited case law, which has not been produced, before bringing the matter before the judge. . . . The applicant’s counsel should therefore be asked to verify in the future that the references found by any means whatsoever do not constitute a ‘hallucination’ or ‘confabulation.’”[5]

It should be noted that, two months after this ruling, the Tribunal’s wording was repeated verbatim by an administrative court of appeal in another case.[6]

DOCUMENTS DRAFTED BY A GENERATIVE AI TOOL

The courts are also faced with motions and submissions drafted by generative AI tools. Administrative courts were the first to be affected by this phenomenon, since representation by a lawyer is not always mandatory before administrative tribunals, and claimants are sometimes unaware of the risks inherent in these tools.[7]

However, judges have at times appeared less strict with lay claimants who misuse AI when they are not represented by a legal professional capable of verifying the legal content of the documents produced.

In this regard, the Grenoble Administrative Tribunal noted a “lack of clarity [in the submissions of the motion], likely resulting from the fact that it was clearly drafted using a so-called generative artificial intelligence tool, which is totally unsuitable for this purpose,”[8] or “a motion and submissions generated using a so-called artificial intelligence tool, the content of which is anything but ‘legally sound’, contrary to the claims of the tool used,”[9] without this influencing the Tribunal’s ruling.

THE CONSEQUENCES OF GENERATIVE AI USE ON THE COURT’S REASONING

The use of a generative AI tool is not in itself punishable, since there is no rule of law prohibiting the use of such tools to support legal arguments. However, AI hallucinations can lead claimants to present erroneous or unfounded arguments, which will therefore be rejected by the judge.

This has been the case on several occasions in disputes before the Rennes Administrative Tribunal, which has rejected claims generated by AI.

In a first ruling, this Tribunal rejected a motion which was “clearly . . . drafted using a generative artificial intelligence tool” because it was based on grounds which did not provide the “necessary details to enable the judge to assess its merits.”[10]

In another, it rejected submissions “clearly drafted using an artificial intelligence tool” because they were “brought before a court that did not have jurisdiction to hear them,”[11] as the AI had committed an error of law.

It is therefore imperative that a human being, and especially a legal professional, verify the legal reasoning generated by AI in order to prevent errors of fact or of law in the claimants’ arguments.

EVIDENCE GENERATED BY GENERATIVE AI

Although French case law has not yet had to deal with cases of fake evidence generated by AI, the risk increases as generative AI tools become more sophisticated.

The landmark Dawes case,[12] in which a client deceived his lawyers by providing them with a fake document in order to obtain his acquittal, may foreshadow many new lawsuits against lawyers or parties who recklessly produce evidence generated by AI.

Indeed, when it is not case law references or submissions that are generated by AI, but the evidence itself, the consequences become much more serious, since the sanction incurred is a criminal conviction for forgery[13] and attempted fraud upon the court.[14]

While judges may be lenient toward a party’s clumsy use of AI tools, this cannot be the case in the presence of such offenses.

SANCTIONS INCURRED BY LAWYERS

To date, French courts have not imposed any sanctions on lawyers who have relied on AI hallucinations in their arguments, but have simply asked them to verify the references cited in their submissions. At most, this has amounted to a simple warning, with no financial or professional consequences.

In this respect, French case law differs from US case law, which relies on Rule 11 of the Federal Rules of Civil Procedure[15] to sanction lawyers, sometimes heavily. Sanctions range from a monetary penalty[16] to disqualification from further participation in the case for the remainder of the proceedings,[17] and the law firm in which the offending lawyer practices may be held jointly liable.[18] Furthermore, lawyers may be subject to additional sanctions under state statutes or professional rules (e.g., Rules of Professional Conduct).

However, in France, despite the absence of specific sanctions, lawyers remain subject to the National Regulations of the lawyers’ profession. Among the essential principles are the duties of competence, diligence, and prudence.[19] These principles imply that lawyers using AI systems must necessarily verify the reliability of the results obtained.

Furthermore, the Paris Bar Association calls for caution in its White Paper on Artificial Intelligence published in October 2025, presented as a practical guide setting out the first drafts of an ethical framework in this field. It states that “a lawyer’s professional civil liability may be engaged as a result of erroneous information from artificial intelligence systems” and expresses skepticism as to the validity of clauses limiting liability for errors made by AI that could be inserted in lawyers’ fee agreements.[20]

In a recent development, the French National Bar Association adopted a guide on Ethics and Artificial Intelligence on March 13, 2026, in which it confirms that a lawyer “using content generated by artificial intelligence without proper verification . . . is likely to be subject to disciplinary proceedings.”[21]

Lawyers therefore remain solely liable to their clients for any legal work generated with the help of AI. In this regard, as early as 1979, IBM pointed out in a training course for its employees that “[a] computer can never be held accountable. Therefore, a computer must never make a management decision.”

Lawyers who use AI tools must therefore, to remain in compliance with their ethical and professional duties and avoid liability, remember that while AI is an effective tool, it can never completely replace human beings.

Law clerk Lizy Kim contributed to this LawFlash.

Contacts

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:

Authors
Xavier Haranger (Paris)
Ari M. Selman (New York)
Scott A. Milner (Philadelphia)

[1] Wolters Kluwer, Survey Report – 2026 Future Ready Lawyer, at 4.

[2] Commonly referred to as large language models (LLMs).

[3] Orléans Administrative Tribunal, December 29, 2025, No. 2506461.

[4] Périgueux Judicial Tribunal, December 18, 2025, No. 23/00452.

[5] Orléans Administrative Tribunal, December 29, 2025, No. 2506461.

[6] Bordeaux Administrative Court of Appeal, February 26, 2026, No. 25BX02906.

[7] Article R. 431-2 of the French Code of Administrative Justice.

[8] Grenoble Administrative Tribunal, December 3, 2025, No. 2509827.

[9] Grenoble Administrative Tribunal, December 9, 2025, No. 2512468.

[10] Rennes Administrative Tribunal, January 28, 2026, No. 2506364.

[11] Rennes Administrative Tribunal, January 30, 2026, No. 2600610.

[12] Paris Judicial Tribunal, April 18, 2023, and Paris Court of Appeal, July 8, 2025: both courts acquitted the lawyers involved of complicity in attempted fraud upon the court.

[13] Articles 441-4, 441-9, 441-10, and 441-11 of the French Criminal Code.

[14] Articles 121-5, 313-1, 313-7, and 313-8 of the French Criminal Code.

[15] Rule 11(b) of the Federal Rules of Civil Procedure.

[16] United States District Court for the Southern District of New York, Roberto Mata v. Avianca Inc., June 22, 2023.

[17] United States District Court for the Northern District of Alabama, Frankie Johnson v. Jefferson S. Dunn et al., July 23, 2025.

[18] Rule 11(c) of the Federal Rules of Civil Procedure.

[19] Article 1.3 of the National Regulations of the lawyers’ profession.

[20] Paris Bar Association, White Paper on Artificial Intelligence, October 2025, page 19.

[21] French National Bar Association, Ethics and Artificial Intelligence, March 13, 2026, page 17.