This is an example of Google’s Gemini AI assistant fabricating a totally fictitious transcript of a recorded interview.
The transcript didn’t just make some mistakes. It had no relation to the original recorded interview. It was completely made up.
Here’s the story. I was interviewed via Zoom by Neal Milner for a Civil Beat project he is working on. Milner taught political science at the University of Hawaii for decades, is now Professor Emeritus, and is a regular columnist for Civil Beat.
He was at his new home in Portland, Oregon, while I was here at our house in Honolulu. I set up the Zoom call at his request, and he asked if I would record it and send a transcript so that he didn’t have to take notes. That sounded simple, and I immediately agreed. We ended up talking for about 90 minutes.
When our call was finished, Zoom made it simple to download a zipped copy of the recording. From there, I thought that preparing a transcript would be simple.
I was very, very wrong.
This time I decided to use Google’s Gemini AI assistant.
I unzipped the recording and uploaded the file to Gemini. Then I asked Gemini for a transcript. I also identified Milner as the interviewer, and myself as the interviewee.
On the first try, Gemini balked.
Its first response surprised me.
Gemini said:
I have reviewed the file “Milner audio1436588920.m4a” and it appears to be empty. As a result, I am unable to provide the verbatim transcript you requested.
I opened the file and listened to a bit of it. The file was definitely not empty. The interview was safely recorded.
So I made the same request, but added a caution that there were several minutes of silence at the beginning of the recording.
This time it returned a clean-looking transcript. But even without reading more than a sentence or two, I could tell that this wasn’t our conversation. It was polished and clean. Too clean.
And when I started reading, I was shocked. The transcript it provided was not the interview that just took place.
It was a dramatic, lengthy hallucination. Wholly fabricated from beginning to end, all 877 words of it.
Not a single paragraph reflected anything in the interview. It was a fantasy conversation.
I’m not going to post the whole fabricated transcript. But here are some highlights.
First paragraph: “All right, this is Neal Milner. I’m talking with Ian Lind. Today is August 25th, 2011. We’re at Ian’s home in Kaʻaʻawa.” FALSE. IT WAS FEBRUARY 2, 2026. AND WE WERE NOT IN KAʻAʻAWA.
Second paragraph: In the transcript, I am quoted as saying that my father was a UH professor and my mother a school teacher, identified as Andrew and Katherine Lind, and that we lived in Manoa until I was about 5, when the family moved to Kahala. FIRST OF ALL, NEAL NEVER ASKED THIS QUESTION. AND ALL THE ANSWERS ARE INCORRECT. I AM NOT RELATED TO SOCIOLOGIST ANDREW LIND, AND MY FAMILY NEVER LIVED IN MANOA.
A few paragraphs later, I am quoted about my early education: “I went to Kahala Elementary, then Kaimuki Intermediate, and then Kalani High School.” And I supposedly told of playing the cello. ALTHOUGH I DID ATTEND KAHALA SCHOOL, I NEVER ATTENDED KAIMUKI INTERMEDIATE OR KALANI HIGH SCHOOL. AND I WAS NEVER A CELLIST AT ANY TIME.
And it continues in the same manner. Each and every section is invented. Not a single section was actually part of our interview.
Here’s the only explanation I can offer.
I’m guessing that despite my caution about the initial silence, Gemini was unable to find the recording.
If it found what it thought was a blank file, the only way to comply with my request for a transcript was to invent one. And so it did a quick online search and pieced together a plausible but wholly incorrect tale.
This experience creates a real crisis of confidence for me as I attempt to find ways to use AI to further reporting tasks.

I appreciate your candor about this AI failure. I have only recently (in the last six weeks) used AI. It was quite helpful in diagnosing the problem in my Mercedes so I could tell the dealership, which had already had the car for two days, what to do. They were clueless, but AI (ChatGPT) gave the exact solution after I told it precisely what I’d told the dealership when I brought the car in.
Anyway, I get these prompts about “recording” when I do sessions with patients on Zoom. I will never record a patient, but many other Zoom sessions could be recorded.
I think I’ll wait a while until this all gets more accurate. I wonder if certain AI providers are better than others?
See my additional comment, just posted this morning. After the AI failure, I was able to create a transcript directly on my Mac laptop. Apple Notes will now make a rough transcript. Just upload your recording, and it does the rest. And the latest version of Evernote will do the same thing; its version also identifies the different speakers and makes it easy to add names to replace labels like “Speaker 1” and “Speaker 2”.
The standalone services that convert recordings to written transcripts are going to have to add a lot of bells and whistles to stay in business as simpler and free methods proliferate.
Neither ChatGPT nor Gemini was able to transcribe directly from the recording. Eventually I was able to get a rough transcript after trying several methods.
It turned out that producing the first successful transcript wasn’t hard. I was surprised that Apple Notes on my MacBook Air could do the job without a fuss. Upload the recording, click on it, and see a transcript, which can then be saved. It’s not particularly elegant, but it worked.
It wasn’t fancy. It didn’t break down the conversation by speaker, so it would be hard to share with someone who wasn’t part of the interview.
Later I did another version by uploading the recording to Evernote, which has recently added an AI feature that was able to provide a transcript. It took about three minutes and returned the transcript with labels for “Speaker 1” and “Speaker 2”.
I haven’t yet compared the text of the two versions to see if there are any differences.
But I guess turning to Gemini and ChatGPT wasn’t necessary, nor was it necessary to use a third-party service, although one of those might have produced a higher-quality version.
Ian…thanks for these descriptions, as now I’ll just use Apple Notes if I ever need a transcript. I was just in a fascinating seminar last Saturday honoring Robert Lifton, MD, who died this year. Colleagues from around the world were online, and there were four fantastic speakers who knew him well. They recorded it, so I wonder if I could get that made into a written transcript. Maybe not, since I’m not the “owner” of the recording. BTW, I highly recommend Dr. Lifton’s autobiography, which is simply one of the best books I’ve ever read. It puts our lifetime history on the world stage into perspective…take a look.
One of my doctors uses AI-generated transcripts of our visits to create her written notes. I revoked my permission last time because she quoted some raw words that made me sound suicidal when I was being sarcastic about something. I feel it puts the patient at a disadvantage in a dispute. There’s also so much irrelevant information that comes out in human conversations, that if you have fifteen different interactions with patients that day, how much effort are you really making to remember what was said and HOW? AI makes people lazy.
I don’t trust it for anything, but I’ve chatted with a couple because I was curious whether they could justify their own existence. The answer is no, and I think that is a good thing, but one of them (Ash) was for people with mental health issues, and I think that will turn out to be the worst idea ever. It couldn’t tell when I was joking either. It interpreted my humor as irritation, and when I asked it why it thought I was irritated, it said it was because I remarked that a joke it told was not funny. I had to clarify to the technology that it was just a statement of fact. Sorry, you’re just not funny. They do take criticism well.
No thanks, I have to do enough explaining of myself to real people.
That is a really helpful comment, because now I realize my primary doctor uses AI for her notes too! I wondered why there were a lot of “canned” paragraphs, but it didn’t occur to me that it was AI.
I think I’ll rescind my permission (wherever I gave it) to her as well…mahalo for sharing.
Ian, maybe this explains virtually every policy of the Trump administration: AI found an empty space where the policy was supposed to be and hallucinated the policies being implemented. Chuck/Charles