Cayman Court issues warning on AI use in legal filings
Update: 2025-10-02
Description
In its recent decision in Samuel Johnson v Cayman Islands Health Services Authority [2025] CICA (Civ) 15 (Johnson v HSA), the Cayman Islands Court of Appeal has issued a strong warning about the risks of using generative artificial intelligence in court proceedings.
The judgment is the first in the jurisdiction to directly address the use of AI in legal submissions and highlights the duty of candour owed by litigants, whether they are represented by counsel or appearing on their own.
Background
The Appellant, a self-represented litigant, brought an appeal against the Cayman Islands Health Services Authority. Johnson filed a skeleton argument that included references to two legal cases but was unable to produce copies of those cases when requested by the Court.
The Appellant admitted that the cases had been generated using a generative AI tool and confirmed that at least one of them did not exist. At first, the Appellant did not disclose that he had relied on AI at all, acknowledging it only when questioned by the Court. This raised a serious and unprecedented issue for the Court regarding the reliability of AI-generated legal research and the potential for misleading submissions to undermine the administration of justice.
The Court's response
The Court of Appeal, led by Justice Clare Montgomery KC, was unequivocal in its criticism. It emphasised the following:
litigants-in-person are subject to the same duty as lawyers not to mislead the court, intentionally or otherwise;
the Appellant's actions amounted to a breach of this duty, particularly because he initially failed to disclose the use of AI and only admitted it when pressed;
while it chose not to impose sanctions in this instance, it issued a stern warning about the future consequences of such conduct. These could include contempt of court proceedings, referral for criminal investigation, costs orders, and/or the case being stayed or struck out; and
the Court made clear that any future use of AI in preparing court documents must be disclosed, and the party submitting those documents remains personally responsible for ensuring their accuracy.
The risks of generative AI
Citing the English case of R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 (Admin), which similarly explored the pitfalls of relying on generative AI tools such as ChatGPT for legal research, the Court highlighted that, while AI can be a helpful tool, it cannot replace human judgment or legal expertise.
The English court likened the use of AI to relying on the work of a trainee solicitor or pupil barrister: just as a lawyer remains fully responsible for checking the accuracy of their junior's work before it goes before the court, so too must anyone using AI take personal responsibility for verifying the results.
A regional perspective: Turks & Caicos
The Cayman decision in Johnson v HSA comes at a time when other Caribbean jurisdictions are beginning to grapple with how generative AI should fit within court processes. Just weeks before this judgment, the Turks & Caicos Islands Judiciary issued Practice Direction 1 of 2025, a landmark step in setting clear boundaries for AI use in litigation.
The Practice Direction applies to all courts in the Turks & Caicos Islands and introduces a structured framework for managing the risks of AI-generated content. It requires parties to explicitly disclose when AI has been used in preparing submissions, skeleton arguments, or other documents, and to independently verify all legal authorities and factual content before filing.
It also prohibits the use of AI for creating evidentiary material, such as affidavits and witness statements, recognising that these must reflect first-hand human knowledge and accountability.
Perhaps most importantly, the Practice Direction empowers judges to sanction non-compliance, including striking out improperly prepared documents, imposing adverse costs orders, and, in serious cases, referring matters for further investigation. This signals a strong judicial commitment to safeguarding the integrity of court proceedings.