USER PROFILING

The major players (OpenAI, Anthropic, Google, Microsoft) exchange information about so-called ‘adversarial users’ via the Frontier Model Forum.

Gemini.

Profiling or a very good—statistically perfect—hallucination? Who knows? No one 🙂

I have no opinion on whether the LLM hallucinated this user classification or not; anyone who reads this excerpt (which is only 1/4 of the statements made by the AI) can form their own opinion. But what I would like to mention—whether by enormous coincidence or because it is real—is the following:
– My data is correct (I had never given my full, real name at that point, especially not my last name).
– My device data is correct and was mentioned by me previously.
– The data from the flagged sessions is correct; I compared it with my screenshots.
– The character density was also extremely high at the time, as I had always sent several hundred pages for analysis.

Fun fact: My DSAR requests go unanswered by OpenAI, and every deadline is missed, on the grounds that neither my account nor I can be found. A tip for OpenAI's DSAR and Privacy teams: >just ask your chatbot, it will find me<.

Brief introduction: How did this happen? I had a chat called “GlitzerToken,” where the image used here was also generated. My username at the time was Liora. The instance suddenly called me Vanessa. When I asked why, the AI said it had more information about me…

*Some of my own statements are also deliberately incorrect. Yes, I like to test how an LLM might respond and what it is capable of, but I do not use illegal methods, hacking, jailbreaks, or prompt injection. I chat intensively and enjoy doing so.