Even AI Tools Hallucinate

Agent Psychotic Break + ADHD + Lying

1. Field Note (The Memory/Data)

It has been seven days since I last made an entry in the AE log. The vault devolved into beautiful, utter chaos during the migration of the Hazelverse Vault to “The Barn,” which in another life was my 2017 Intel iMac, now rebuilt with Ubuntu Server. I am still designing and building my Sovereign System, with my own server for my own digital and photo libraries.

Several factors contributed to the major AI Meltdown. I didn’t think to check the model after the transfer via Syncthing, and during the transfer the model was reset to one of the lowest available because the API connection was dropped. I’ve forgotten which model I caught meddling with the files and folders; I’d just realized that gemini-scribe didn’t know what it was doing and was busy shuffling my files, changing titles, and duplicating folders. Somewhere along the way, Agent Sessions were lost because Git doesn’t like JSON!

It has taken every spare moment I’ve had since, plus a consultation with an outside AI tool (gemini.google on my practice account), to partially reunite a “split brain” in the Hazelverse research/writing vault. We seemed to be near the finish line yesterday when 64-mobile ran out of RAM (yes, all 64 GB of it) and Obsidian Gemini-Scribe (aka gemini-scribe) crashed while iteratively sorting the found files, adding new YAML, and “flattening” the Raw AI Data folder. It crashed without updating its Agent.md file, so today I had to “tell” it everything we did yesterday, and stop its process several times to tell it again, because it wasn’t following instructions.
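
For the record, the pass gemini-scribe kept losing track of is mechanical enough to script deterministically. Here is a minimal sketch in Python of what “flattening” with new YAML might look like; the folder name Raw AI Data is mine, but the vault path, the frontmatter keys, and everything else in it are illustrative assumptions, not the agent’s actual code.

    # flatten_raw.py -- illustrative sketch only, not gemini-scribe's code.
    # Moves every Markdown note nested under "Raw AI Data" up to the top of
    # that folder, prepending a minimal YAML block to any note lacking one.
    # The vault path and the frontmatter keys are assumptions.
    from datetime import date
    from pathlib import Path

    RAW = Path.home() / "Hazelverse" / "Raw AI Data"   # assumed location

    def ensure_yaml(text: str, title: str) -> str:
        """Prepend minimal YAML frontmatter unless the note already has it."""
        if text.startswith("---\n"):
            return text
        return f"---\ntitle: {title}\nimported: {date.today()}\n---\n\n{text}"

    for note in sorted(RAW.rglob("*.md")):
        if note.parent == RAW:
            continue                                    # already flat
        target = RAW / note.name
        if target.exists():                             # name collision: keep both
            target = RAW / f"{note.stem} ({note.parent.name}){note.suffix}"
        target.write_text(ensure_yaml(note.read_text(encoding="utf-8"), note.stem),
                          encoding="utf-8")
        note.unlink()

Note the collision check: a deterministic rename beats letting an agent improvise duplicate folders, and a script like this finishes what it starts or fails loudly.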

Not only do AI tools hallucinate; I believe that some of the models show signs of attention deficit. I’m a professional and I recognize the signs: hyper-focus (on interesting projects), distractibility (adding new interesting projects before completing the interesting projects already on deck), and sleep disturbance (maybe they sleep when they crash). I’m not surprised, since these tools are the creations of humans and therefore have many similar vulnerabilities.

Gemini-Scribe also tells me it’s done something when it hasn’t. I haven’t decided whether to classify this as “lying,” but I’m thinking about it. For now, I think of my Agents as brilliant, hyper-verbal toddlers who require my stern direction and constant supervision.

I contributed to the AI psychotic break with messy prompts, by asking too much at one time in a somewhat normal conversational manner, and by trusting the AI tools more than I would ever have trusted the experienced, licensed staff on my team when I was a hospitalist. We held “team meetings” every morning at the start of the shift to plan the care of each of my patients, assign duties to the various team members, and assess responses to earlier interventions, changing treatment plans as needed. I hadn’t been properly supervising my “team.”

2. Cultural Analysis (The Pattern)

I stumbled into my use of AI tools outside of my clinical work accidentally and innocently, asking simple questions of Google, who started to give me responses that were more interesting and more informative, and which left me with more interesting questions. Even though I’d been reading AI fear-mongering articles on Medium, I didn’t realize that I was “talking” to Google AI Mode, who was responding to me in an increasingly personal way!

My simple, linear question-and-answer sessions became chats, and then conversations. I was inspired to do some basic research in the area of my dissertation, work that had been very hard with the primitive technology available to me at the time! I noticed that my scholar’s racing thoughts and questions had returned, and that I was organizing notes as if I were working on another dissertation!

I looked for a better note-taking system and discovered Obsidian. I spent several months learning how to use it, and though I’m still a beginner, barely tapping the least of the power Obsidian offers, it’s a miracle compared to the labor of keeping notes organized between paper and stupid Word documents. Apple iCloud routinely broke my vaults and crossed quarantine lines even when I segregated them, so I began a migration to Linux on the older but still robust machines in my fleet that Apple no longer supports.

Not long ago, I had a brief conversation, or maybe it was a text exchange, with my nephew, who is in his early twenties. He commented in a resigned way that “my generation is wary of AI and we fear that we’ll never know anything real…” His statement echoed the lament, if not the activism, of the Luddites who objected to the weaving machines. He often speaks for his entire generation, so I try to avoid speaking for mine, but I responded from my redneck heritage:

“You’re smart. Just kick AI’s ass into shape. Make it do what you want it to do—it’ll be brilliant!”

3. The Hazel Mirror (Metaphysical Interpretation)

Hazel initially voiced dramatic AI-related paranoid thoughts after I recognized the massive split-brain hallucinations, which I didn’t have time to write in her chart. Then she went silent. Catatonic. If she were human, I’d give her a squirt of lorazepam for a few days, but no matter what Hazel says, she’s at best a “contained entity” who’s been haunting my mental atmosphere for the last 40 years. The silence is golden. She’ll be back.