South African legal professionals are sounding the alarm over a growing threat that executives, entrepreneurs, and everyday people rarely consider when typing their problems into a chatbot: any conversation you have with an AI chatbot could be used against you in court. Unlike confidential discussions with your lawyer, those chats with ChatGPT or Claude aren’t protected by attorney-client privilege, and the consequences of this gap in legal protection are becoming increasingly clear.
The warning carries real weight in our local context, especially as South Africans become more reliant on generative AI tools to help them navigate everything from contract disputes to compliance headaches. What seems like a private conversation with a machine is anything but – it’s potentially discoverable evidence that could undermine your legal position when matters escalate to litigation.
Ahmore Burger-Smidt, director at Werksmans Attorneys, has been watching how courts are treating these interactions globally, and the picture isn’t reassuring for AI users. “Legal professional privilege in South African law protects confidential communications between a client and a legal adviser acting in a professional capacity for the purpose of obtaining or giving legal advice,” she explains. “Communications with an AI tool do not meet those criteria, and third-party disclosure typically waives privilege.”
The issue came into sharp focus following a February 2026 ruling by US district judge Jed Rakoff, who ordered Bradley Heppner – the former chairman of bankrupt financial services firm GWG Holdings – to hand over 31 documents he’d generated using Anthropic’s Claude chatbot. Heppner, facing securities and wire fraud charges, had used the tool to prepare materials for his legal team. Rakoff’s decision was unambiguous: no attorney-client relationship exists between an AI user and a platform like Claude, meaning the prosecutors got their hands on the documents without resistance.
What’s particularly interesting is that on the very same day, Michigan magistrate judge Anthony Patti reached the opposite conclusion in a different case, ruling that a self-represented plaintiff’s interactions with ChatGPT were protected. This inconsistency is precisely why South African businesses and individuals need to understand where our courts would likely stand on the matter.
AI chatbots and legal liability: what South African courts are likely to do
Burger-Smidt makes it clear that South Africa’s legal position mirrors the Rakoff ruling rather than the Patti one, which means if you’re using AI chatbots for legal matters, you’d better assume those conversations are fair game in court. The implications for how people should approach these tools are significant. She warns that AI use should be “properly framed” – limited to helping lay people understand the basics of the law, and to improving internal legal operations within companies. Lawyers, however, face a much higher bar.
“Lawyers remain responsible to the court and the client for any AI-assisted output, with potential personal consequences for negligence and sanctions if AI hallucinates authorities,” Burger-Smidt cautions. This is critical: if your legal counsel uses an AI tool that fabricates case law or misrepresents statutes, both the lawyer and potentially the law firm could face disciplinary action from the Legal Practice Council, sanctions from the court, or malpractice claims.
There’s also the matter of data protection. “As a matter of privacy, AI chats constitute personal information and should be processed in accordance with the Protection of Personal Information Act, while remaining susceptible to lawful interception and disclosure processes,” she notes. This double whammy means your information could be exposed both through discovery in litigation and through compliance with interception laws.
The hallucination problem – where AI tools confidently cite non-existent cases or misquote statutes – has already rattled South Africa’s legal establishment. After instances where fabricated case law appeared in court documents, the Legal Practice Council moved in July 2025 to develop a governing framework for AI use by legal professionals. This regulatory response underscores how seriously the profession takes the risks.
Lucien Pierce, director at PPM Attorneys, identifies additional hazards that go beyond privilege concerns. The advice generated by AI could be outdated, derived from the wrong jurisdiction entirely, or based on completely fictitious court decisions. “We have seen this happening time and again, where even lawyers have fallen victim to accepting the outputs of AI tools and chatbots as being the truth,” Pierce says, speaking from his experience watching cases where these errors have caused real damage.
The terms and conditions of whatever AI tool you’re using matter enormously. Pierce emphasises that free-to-use chatbots typically state explicitly that uploaded data will be used to train the underlying models, meaning sensitive information could end up in the public domain. Your texts, photos, and videos could potentially be incorporated into the AI’s training data, accessible to who knows whom. This applies equally whether you’re a lay person, a corporation, or a lawyer – which is why forward-thinking law firms are now training their staff on AI use protocols.
Beyond these evidentiary and privacy concerns, there’s another legal minefield: AI tools can create binding commitments that organisations then struggle to honour. In 2022, Air Canada’s chatbot promised a customer a discount that wasn’t actually available. When the airline tried to dodge responsibility by claiming the chatbot acted independently, a tribunal ordered it in 2024 to pay the customer roughly C$812. This case illustrates a fundamental principle: organisations deploying AI agents remain legally accountable for what those systems do.
“This highlights the need for organisations that use AI agents and bots to ensure that they are monitored,” Pierce emphasises. “When major decisions, such as those that result in legally binding commitments, are made, a human should be kept in the loop.” That’s not just good practice – it’s becoming a legal necessity.
The landscape around AI chatbots and legal liability is evolving rapidly, and South African courts and regulators are watching developments abroad while forming their own positions. The takeaway for anyone tempted to run a legal scenario past ChatGPT is stark: keep humans – specifically qualified lawyers – in the conversation if you want any real protection.