
X is liable for untrue statements from Grok
If an AI invents facts and thus infringes the personal rights of third parties, is the operator liable? The Regional Court of Hamburg ruled on this question for the first time in a case involving Grok on the platform X.
What is it all about?
The case revolves around a post by “Grok”, the in-house AI bot of the platform X (formerly Twitter). In a political discussion about appointments in the healthcare sector, Grok claimed that the association Campact e. V. was financed “to a large extent by federal funds”. This statement was false. Campact saw it as a serious violation of its personal rights and applied for a temporary injunction against the bot’s operator, the US company X.AI LLC.
Decision of the LG Hamburg on Grok’s statements
In its decision of September 23, 2025 (case no. 324 O 461/25), the Hamburg Regional Court clarified that the operator of Grok is liable for the content of the AI’s statements. According to the court, publication on the official X account means the content is attributed to X.AI. Users perceive Grok’s output as statements of fact, not as mere expressions of opinion.
The fact that the posts on the account in question are created by an AI does not change the conclusion that the portrayal is inadmissible under the law on expression. As the operator of the account, the defendant is responsible for the AI-generated statements published there; in any case, it has adopted those statements as its own by presenting them on the account.
The Hamburg Regional Court therefore prohibited X from further disseminating the claim. Grok had expressly stated in its profile that its work was fact-based. For that very reason, the public was entitled to assume that its statements rested on verified information.
A precedent for AI liability and personal rights
This is one of the first decisions, if not the first, on liability for AI-generated content under the German law on expression. Until now, the focus has been primarily on the users of AI systems; now, for the first time, a court has emphasized the operator’s responsibility. Accordingly, anyone who integrates artificial intelligence into public discourse bears full responsibility for its statements, especially if those statements are described as “fact-based”.
The decision is likely to have an impact beyond this specific case. Even though it is only a decision in preliminary injunction proceedings without detailed reasoning, the risk for operators of AI systems that produce hallucinated or simply untrue statements has likely increased. Disclaimers such as “This post was created by an AI” are unlikely to be sufficient to avoid liability.
Conclusion
The decision on the AI bot Grok shows that legal responsibility cannot be automated. An AI may write, analyze or argue, but it cannot be held liable. That responsibility remains with the operator. This is particularly true when the AI appears publicly, as Grok does, and presents itself as fact-oriented.
For companies, media organizations and platform operators, this means that AI communication is not a by-product but a field of liability in its own right. Anyone who lets an AI speak publicly must treat its statements like editorial publications: with journalistic care, an awareness of legal risk and technical control.
Artificial intelligence must not become a shield against responsibility. If an AI like Grok infringes personal rights, it is not the algorithm that is liable but the people and companies that deploy it.
We are happy to advise you on the use of AI!
