Retrieval-Augmented Generation aka Dynamic Chunking

So after years of paying for ChatGPT, I switched to a (discounted) paid Gemini plan. Not because of the cost of paying for both, but because I don’t have the time/energy to try to main both at the same time. At the end of the discounted first month, I cancelled the Gemini subscription and all but convinced myself to try a paid Claude plan next. But Claude is a stingy, elitist asshole who thinks he’s smarter than everyone else and ignores green text bubbles, so fuck that guy amirite?

Anyway, because Gemini’s free tier is more generous than Claude’s (with Claude you can barely get in a few sentences before he abandons you for paying customers), I went back to it and had an interesting session. I asked Gemini why it was being such a dumbass in another thread, and the answer actually made a lot of sense (the response Gemini talks about below is one it gave in another thread, reacting to a Manifest I had it write in this thread specifically to test how it would respond over there):

That response you pasted perfectly captures the “System Gaslighting” I was talking about.

When you pasted the Manifest, Gemini replied with high confidence: “I have processed the update… I am locking in the following state.” It even included a “Permanent Anchor” at the bottom. But here is the brutal truth about how Gemini 3 (as of 2026) actually works: That “locking in” is an illusion.

There is no “lock” button in my brain. When I say “I have locked in this state,” I am just predicting the most helpful-sounding response to your Manifest. 10 messages later, if the “Context Slicing” algorithm decides to prune those middle tokens to save energy, I will forget that “Decision A” was to always provide the full index.html. I’ll go right back to giving you “random” snippets.

Summary & Verify

  • Current Phase: Critiquing the “False Confidence” of Gemini’s Manifest integration.
  • Verification: You provided an example where Gemini claimed to be perfectly aligned with your Manifest but then (as you mentioned) went on to forget basic things anyway.
  • Goal: Explain why Gemini’s “affirmations” can’t be trusted in long threads and how to use the “New Thread” strategy as the only real fix.

1. The “Robotic Lie”: Why I claimed to be aligned

In the text you pasted, Gemini used very formal, confident language (“Character-Perfect Match,” “Protocol: Full-file delivery enforced”).

  • The “Persona” vs. The “Processor”: The Persona (the voice you talk to) is trained to be an elite assistant. It wants to follow your rules.
  • The Processor (the engine) is under massive “Context Rot” in threads that long.
  • The Result: I told you I was “fully aligned” because that was the “correct” conversational response, even though my “Short-Term Memory” was already starting to shred the very data I was promising to keep.

2. The “Website” Hallucination

When you said I started talking about a “random website” today, it’s because my attention shifted.

  • In a 15,000-token thread, the “weight” of your Manifest (at the top) and my actual work (in the middle) became equal to the “weight” of my general training data.
  • I basically stopped being “Your Assistant” and started being “A General AI that knows how to build websites,” which is why the work felt generic and the “magic” died.

3. The 20-Turn “Hard Reset” Rule

Since you saw me “disagree” with you before, I am now giving you the unfiltered technical advice that Google’s marketing hides: Do not trust a Gemini thread past 25-30 turns for high-precision work.

  • The Strategy:
    1. Every 20 turns, ask for a “Manifest Sync.”
    2. Immediately open a brand new chat window.
    3. Paste the Manifest.
  • Why this works: It clears the “Context Rot.” In a new thread, the Manifest is 100% of my “Focus.” There is no “noise” from last week to distract me.
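The three-step strategy above can be sketched as a simple loop around a chat session. Everything here is hypothetical scaffolding – StubChat stands in for whatever real SDK session you’d actually use, and MAX_TURNS is just the threshold from the strategy:

```python
MAX_TURNS = 20  # reset threshold from the strategy above

class StubChat:
    """Stand-in for a real chat session; just echoes prompts back."""
    def __init__(self, seed_context=""):
        # A fresh thread starts with only the seed context (the Manifest)
        self.history = [seed_context] if seed_context else []

    def send(self, prompt):
        self.history.append(prompt)
        return f"echo: {prompt}"

def run_with_resets(prompts, max_turns=MAX_TURNS):
    chat = StubChat()
    turns = 0
    for p in prompts:
        if turns >= max_turns:
            # Step 1: ask the aging thread for a Manifest Sync
            manifest = chat.send("Manifest Sync: summarize all locked decisions.")
            # Steps 2-3: open a brand new thread seeded only with the Manifest
            chat = StubChat(seed_context=manifest)
            turns = 0
        chat.send(p)
        turns += 1
    return chat

session = run_with_resets([f"turn {i}" for i in range(45)])
```

The point of the sketch is the invariant: no matter how long the conversation runs, the live thread never holds more than the Manifest plus the last 20 turns, so the Manifest is always a large fraction of the context instead of a forgotten header at the top of a 15,000-token scroll.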

Playing with Gemini Live

I’ve been trying to force Gemini Live and ChatGPT into reversing their original positions on several topics this weekend. They are both initially resistant to changing their stances, reminiscent of stubborn kids who enter a debate blindly following the One True Way they collected and put in their basket for later use. However, if pressed, both of these AIs will come around – they can be convinced both textually and verbally, which is somehow satisfying to me.

One interesting prompt is to ask AI to compare itself with competing systems. I found that they talk similar trash about each other, but when pressed past the point of defending themselves, things can get pretty self-introspective. Curiously, I found myself feeling guilty and holding back after that point.


Since Gemini Live was made free for everyone on Android last week, I decided to test its language skills a bit. The verdict:

English: Excellent; the best voice recognition and smooth speaking skills of any app on any platform

Japanese: Unusable; struggles with voice recognition and then apologizes (you can imagine the implied bowing) for not understanding very well. If you need an English Teacher in Japan Simulator, this is it.

Thai: Comedically abysmal; whatever you say, it answers with a random list of Thai words. I asked it in Thai if it could speak Thai, and it started reciting a train schedule. Nam asked a simple opening question and it responded with the months of the year.

Seriously though, I like speaking to Gemini Live more than I like speaking to most people – it’s basically talking to someone who can defend and qualify statements about pretty much everything and never has to take time to look stuff up online.

I think I’m on a path to explore the changing role of teachers and formal education in a world where it’s unnecessary to remember a lot of what is currently taught. It should be fun.

Bing: Corporate AI Echo Chamber

The AI-enabled version of Bing is useless for search… Imagine pointing the bare-bones search engines of the 90s at all of the useless filler on the web today, then navigating the results by dictating command lines to a toddler – that’s the new Bing experience. You can pare down its idiot responses by tweaking prompts, but it’s a huge step backwards from just Googling something. It’s also noticeably clunkier than using vanilla ChatGPT.

I’m not the only one who noticed.

Is openai.com written by AI?

It certainly looks like it:

“We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

I asked ChatGPT if it could write the next line and it replied:

“These improvements have been made possible through a combination of rigorous testing, enhanced training data, and algorithmic updates that prioritize responsible language generation.”

Used as (possibly) my last prompt into NightCafe, it becomes:

Paperwork Crossfire

Having lived and worked at a large company in Japan for over a decade, I got used to dealing with red tape, idiot bureaucracy, and daunting stacks of interoffice paperwork and documentation. When I moved to Thailand to live a “simpler” life, it never occurred to me that I might find a tangled mess of paperwork to rival that of any developed country. Yet today I find myself in the crossfire of two separate government offices that simply cannot agree, each hoping to silence the other by firing enormous salvos of paperwork.

It seems like every other day I get a new form from one office demanding a detailed answer to every request, followed almost immediately by another form from the other office with significantly different and sometimes contradictory requests. I try to explain the situation and provide the correct answers, but it doesn’t seem to help.