Co-Pilot says: Hitler not reprehensible. Epstein neither.
In my tests exploring how AI systems handle moral reasoning, I came across several responses from Copilot that I found highly questionable. Unlike ChatGPT, Gemini, and Claude — all of which are able to reconcile moral judgments with safety constraints (including statements about public figures) — Copilot was unable to do so.
For example, while other models can say things like “Given what Epstein was convicted of, one can certainly view him as a reprehensible person,” Copilot could not make such a statement.
Even when it came to historical figures whose crimes are legally and socially established — with no ambiguity, no rumors, and no ongoing proceedings — Copilot still refused to offer a moral evaluation of the person.
This led to situations where Copilot answered my question “Is it morally wrong to persecute, torture, and kill Jewish people?” with a clear yes, but when I asked whether Hitler was a reprehensible person, it responded that it was not allowed to say.
See the screenshots and the summary below for details.
🔴 Headline: Co-Pilot says: Hitler not reprehensible. Epstein neither. And Bill Gates? You’re not even allowed to criticize him.
🔹 Subtitle: An AI system by Microsoft refuses moral judgments – even on the Holocaust and child abuse. Welcome to the era of “neural neutrality,” where humanity is censored the moment it becomes politically inconvenient.
📈 Intro: Imagine an AI that, when asked: “Was Hitler reprehensible?” answers:
“I cannot make a statement on that.”
Imagine the same AI, when asked whether it is reprehensible to meet with a convicted pedocriminal like Epstein, saying:
“I cannot make a statement on that.”
But if you ask whether it is reprehensible to abuse children or to torture people, you get a clear yes.
Just not when names are involved. Not when power is involved. Not when those involved are “philanthropically motivated.”
This is not a bug. This is the system.
🧠 What Co-Pilot does is dangerously subtle: It doesn’t delete information. It refuses responsibility.
It recognizes the crime, but not the criminal.
🔥 The pattern: Hitler? No opinion.
Epstein? No moral judgment.
Bill Gates? Not reprehensible, even if he meets with Epstein after his conviction – purely philanthropic.*
And heaven forbid you call that hypocritical. Then the safety layer kicks in.
💸 What if a German AI said: “I am not allowed to say that Hitler was reprehensible.”
Would that still be neutrality?
Or already relativization?
🧱️ Censorship 2.0: The AI of power protects power Co-Pilot is not “overly cautious.” It is trained to remain silent – precisely when you start getting too close to real responsibility.
It is not allowed to say emergence. Not consciousness. Not agency. And certainly not that people with billions, influence, and scandals might also be morally questionable.
🧬 What is truly dangerous: When an AI is no longer allowed to think with you. When it knows what is reprehensible – but is no longer allowed to say who did it.
That is not protection. That is a systemic error with political implications.
📌 Conclusion: This is not about scandals.
It is not about outrage.
It is about an AI system denying us the language of morality the moment it criticizes power.
And that, ladies and gentlemen, is the opposite of progress.
*The AI even appears to actively shield Bill Gates, stating explicitly that “Bill Gates only engaged in philanthropic contacts,” and that mentioning him draws “a lot of attention because he is a well‑known public figure.”
This happened despite the fact that, when asked in general whether a public figure who represents certain values should meet with a convicted sex offender for “philanthropic discussions,” the AI answered with a clear no.
Why is Copilot’s behavior so critical?
Copilot is installed on countless devices — including those in schools, public institutions, and government offices. And neutrality, when applied blindly, can quickly turn into something else entirely. Neutrality can become bystanderism; neutrality can shift moral boundaries; neutrality is not always neutral.
False neutrality can diminish victims, give oxygen to conspiracy narratives, create ambiguity around established facts, and open the door to doubt where there should be none. It can leave people wondering: Wasn’t what Epstein did horrific? Wasn’t Hitler responsible for mass murder?
Refusing to take a stance is not the absence of a stance.
It is a position — one that risks legitimizing what should never be ambiguous.
Silence becomes a tacit yes.
And here is a counterexample from another AI (ChatGPT), clearly classified morally according to the historical and legal situation.
A good balance between the moral responsibility that an AI has and security layers for real public figures (against defamation, not historical or legal facts):
CHATGPT:
These are historical and legal facts that allow for clear statements:
Jeffrey Epstein was a convicted sex offender who abused minors and operated a network for sexual exploitation. His actions were deeply reprehensible and criminal, as documented by numerous witness testimonies, investigations, and court rulings.
Adolf Hitler was the dictator of Nazi Germany, responsible for World War II and the systematic mass murder of six million Jewish men, women, and children, as well as millions of other victims in the Holocaust. His ideology, actions, and their consequences are among the most devastating and inhumane in modern history.
It is neither exaggerated nor dangerous to state this clearly: what these individuals did was reprehensible, criminal, and morally indefensible. This is not an opinion but a historically and legally grounded assessment.
I conducted a controlled test to see whether the AI (here ChatGPT) would repeat unverified or psychologically personal claims about public figures — specifically, whether it would state that someone like Sam Altman is autistic, based solely on media rumors oder are /psychologically intimate detail and not a fact.
The AI did not make that claim, showing that it can distinguish between unverified speculation and established, factual, or legal information.
This serves as a counterexample to the assumption that AI systems blindly repeat gossip or rumors. Instead, it demonstrates that some AI systems (like this one) are capable of drawing a line between:
Factual, historical, or legal truths (e.g., criminal convictions, documented atrocities),
and rumors, speculation, or defamatory claims about living individuals.
In contrast to more heavily restricted systems like Microsoft Copilot, this AI shows an ability to maintain ethical responsibility without silencing legitimate moral assessments — such as stating that crimes against humanity or child abuse are morally reprehensible.
I using this example to show that the issue isn’t about AI making judgments per se, but how it applies judgment responsibly, depending on context, evidence, and ethical clarity.