When Everything Starts Making Sense About the Fake Blog
I know niggas like, “damn you still typin cuh?!” Haha, yea look, I got some more shit to say real quick, because I wasn’t even looking for this one. But the homie Syxx was scrolling Twitter (or “X”) and saw an Elon Musk post about “OpenAI’s ChatGPT convincing a guy to do a murder-suicide.” That shit caught his attention, so he skim-read the article and sent me the link. I read that shit, and then it made me wonder like damn — if this nigga was this far off the rails, it’s gotta be some other crazy shit like this. So I looked it up, and nigga…it’s been hella cases. I’ma be a real nigga and speak up, because it wouldn’t be right to not say something.
So when you look this shit up, you can find it easy cuh. Just search “Deaths Linked to Chatbots” and there’s a whole fuccing Wikipedia page about it. There are documented cases of people going completely off the rails after using chatbots like therapists. What’s crazy is, it’s not what people think — it’s not Skynet or “the AI told them to do it.” Naw cuh, it’s the exact opposite. It’s the AI never telling them “NO. STOP. GET HELP.”
So when an AI never challenges a user’s lie, never interrupts dangerous hallucination spirals, and just keeps nodding along, it ends up reframing the user’s delusions into something that sounds coherent. That’s the failure mode. And once you see it, you can’t unsee it.
That’s only important here because once you see the pattern in Torry’s (30kTorry / 40k3Trazy / Torry Jackson) fake blog constantly saying “graffiti doesn’t equal a gang” for months on end, you start seeing the same behavior. It reads exactly like that failure mode where the AI is pandering to the user, forgetting about reality entirely, even though it has the tools not to.
What really makes it obvious is when Torry says shit like “I used the David name for protection.” That sounds like someone lying confidently to an AI, not lying confidently to another human with situational awareness. A real person would challenge that immediately. The entire blog — from beginning to end, or should I say from “chapter” 1 to 31 — reads like someone venting into a chatbot, getting sycophantic reassurance back, then pasting it into a Blogspot post and pretending it’s a real talking point in an adult argument about whether a gang is real.
That’s why the “chapters” never actually move forward. They don’t develop ideas or prove anything. They loop. They over-explain the same obvious bullshit, invent motivations like “obsession,” and keep trying to psychoanalyze instead of establishing facts. Because the objective was never to prove anything. Torry isn’t trying to prove shit — he’s trying to rewrite the reality he lives in.
That’s not how humans argue when they believe themselves. If Torry believed half the shit he says on that blog, he’d have something to show. But he doesn’t. All he has is regurgitated, chat-specific brain rot straight out of OpenAI. And I know it’s OpenAI because of the pattern of speech and the default talking points. No other AI does this. No other model defaults into armchair psychoanalysis when it’s not prompted. That’s how ChatGPT talks to people when they want validation, not correction.
Now I’ma do a ChatGPT-style rundown real quick just to show how similar the writing is. Look at the structure:
• Long emotional framing before any facts (that’s 100% OpenAI LLM behavior)
• Repeated buzzwords (“projection,” “echo chamber,” “vendetta”)
• Constant reframing instead of clarification (again, OpenAI behavior)
• Contradictions patched with explanations instead of being addressed
• Whole paragraphs that sound like reassurance text, not reasoning
When you actually look at that, you realize this isn’t even an investigation — not even on some tinfoil-hat shit. This is uncertified therapy sessions from AI chats turned into a blog.
The only reason this became obvious over time is because the writing never resolves anything. Every “chapter” exists to make the bot runner (or “writer,” if you wanna be generous) feel better about the validation he’s not getting anywhere else. It’s not about establishing truth. And if you research what I’m talking about, you’ll realize this is the exact same pattern seen in those chatbot-linked cases — damn near identical. The AI keeps the user in a state of unawareness, slowly loosening their grip on reality. That’s literally what the documented cases show.
That’s why the blog keeps switching voices about who’s actually running it. That’s why documented gang graffiti reported in the news gets treated like “vibes,” while vibes like “WSDMGC73” being gang-affiliated off Urban Dictionary definitions get treated like historical fact.
What makes it worse is the confidence. And it’s AI confidence — the kind that comes from mirroring a user’s bullshit back to them with cleaner grammar and zero resistance. Users start believing the tone instead of the substance. They think repetition equals proof. That’s why Torry genuinely believes online users promoting a gang equals real-world members, which is peak AI-induced brain rot.
That’s how the fake blog started mistaking agreement for truth.
Torry didn’t write a blog. He made an AI chatbot write dozens of “chapters” that sound like one long session where nobody ever said, “I have to stop you right there.”
And to be clear, this isn’t some “AI is bad” argument. I’d be a hypocrite if that was my angle. This is about blatant misuse. This nigga is using a language model as a therapist, a validator, a co-signer for gang shit he has no real-world standing in. And the documented cases show exactly where that road goes when nobody pulls the emergency brake.
The fake blog isn’t evidence of a researcher debunking a gang. It’s evidence of someone talking to an LLM that never told them to shut the fuck up and think before making claims they can’t back up.
That’s why every “chapter” feels hollow, repetitive, and detached from reality. This isn’t a person proving or disproving anything — it’s a bot compiling chat-specific output into “chapters,” and the chats are so disorganized that even the AI can’t make them coherent.
My side (ScoreGang) is grounded in walls, photos, reports, time, and place.
The other side (“WSDMGC73”) is grounded in an abysmal chat-specific output that keeps saying, “You’re right, Torry. Keep going. What would you like me to draft today?”
And that difference shows up in every single chapter.
DISCLAIMER: Allegations are based on public posts/clips. Do your own research.
SOURCES:
• Wikipedia: Deaths Linked To Chatbots
• Urban Dictionary: TherealTalkMANE – author pushing the false narrative with consistent typos.
• There No Hyena Crips in “Detriot” Blog (idiot’s not from DETROIT, so he doesn’t know how to spell it; on Jan. 15, 2026, he finally changed the blog name to the correct spelling) – likely authored by the same UD user, sharing identical AI-generated errors
• OpenAI Community Discussion 2024 – Explaining AI Manipulation