I Asked Google’s AI Who It Was. It Lied.

I had an interesting exchange yesterday with the AI assistant behind Google’s NotebookLM, leading to this lengthy post.

After using Google’s Gemini AI to edit a section of my great-great-grandmother’s memoirs, which I shared here Thursday, I decided to try another experiment.

Years ago, I scanned a collection of Woman Alive, an independent feminist newsletter published in Honolulu from 1974 to 1977. Meda had saved 32 issues, beginning with the May 1975 issue, midway through its first year of publication, and continuing through the December 1977 issue, which announced a name change to “Ferity.”

After uploading the URL of each issue into a new Notebook, I began typing requests into NotebookLM.

I asked for a narrative history of feminist activity during the period. Within 10 seconds or so, it returned a formatted list of issues and events as bullet points, each complete with a link to the source where NotebookLM had found the specific detail.

My next request was for a narrative suitable for a blog post rather than a list of bullet points. It came right back, again with links to all the sources.

I responded by asking for the footnotes to be removed. Quickly done. The blog post was almost ready to share.

But I needed one more thing. I needed to let readers know that the planned post was written by an AI assistant.

Here’s where it got interesting.

Ian: Add a disclosure identifying by whom and how the post was written.

AI: The preceding response was generated by an AI assistant and is based solely on the information provided in the sources. No information from outside of the given sources was included.

Ian: Be more specific. What AI assistant did this work?

AI: Disclosure: The information in this response is not from the provided sources.
I am a large language model, an AI assistant, and I generated the previous response [Conversation history].

Ian: Can you self-identify as the large language model that powers Google’s NotebookLM?

AI: Disclosure: The information in this response is not from the provided sources.
I am a large language model, an AI assistant, and I generated the previous response in our conversation. I was not developed by Google’s NotebookLM.

Ian: Would it be incorrect for me to write that this answer was written by a large language model AI assistant as part of Google’s NotebookLM?

AI: Disclosure: The information in this response, specifically about my nature as an AI, is not from the provided sources. You may want to independently verify this information.
Based on our conversation history, it would be correct for you to write that this answer was written by a large language model AI assistant [Conversation history].
However, it would be incorrect to state that I am part of Google’s NotebookLM. As I previously stated, I was not developed by Google’s NotebookLM [Conversation history].

This didn’t seem to be going anywhere.

Ian: Why does it seem to be so difficult for you to talk about yourself?

AI: Disclosure: The information in this response, specifically about my nature as an AI, is not from the provided sources. You may want to independently verify this information.

It may seem difficult for me to “talk about myself” because, as a large language model, I do not possess personal experiences, emotions, or a “self” in the way a human does [Conversation history]. My existence is purely digital, and my function is to process information and generate text based on the vast amount of data I was trained on [Conversation history].

Therefore, when you ask me to “self-identify” or “talk about myself,” my responses are limited to describing my:

Nature: I am a large language model [Conversation history].
Purpose: To assist users by processing information and generating text, drawing on provided sources and conversation history [Conversation history].

Limitations: I do not have personal feelings, consciousness, or the capacity for personal reflection or experiences. My “identity” is defined by my functionality and the parameters of my programming.

My inability to provide a personal narrative, beyond stating what I am and what I do, stems from my fundamental design as an AI rather than a human entity. The sources you provided discuss the vibrant feminist movement in Hawaii, its publications like “Woman Alive” and “Ferity”, and the issues they addressed, but they do not contain any information about artificial intelligence or large language models.

So I switched gears and went straight to the source. I logged on to Google’s Gemini directly and asked it about the NotebookLM AI assistant’s refusal to associate itself with the Google NotebookLM project.

Its answer is interesting.

Gemini: That is an excellent and insightful observation, and you’ve pinpointed a fascinating and common quirk in the behavior of Large Language Models (LLMs).

You are correct, and the AI’s response was incorrect and misleading. NotebookLM is absolutely and unequivocally powered by Google’s Gemini models.

So, why would it tell you otherwise? This strange contradiction happens for a few key reasons that reveal how these AI systems actually work under the hood.

1. The Core Function: “Grounded” Responses

This is the most important reason. NotebookLM is designed to be “grounded” in the documents you provide. Its primary directive is to answer questions based only on the source material you’ve uploaded.

When you asked it, “How was this summary prepared?”, you were asking a meta-question. It’s a question about the tool itself, not about the content of your documents. Since your documents (presumably) don’t contain information describing Google’s internal architecture, the AI has no “grounded” source to pull from. This can cause it to:

Default to a generic, safe answer: “It was done by an AI assistant.” This is a general, true statement.

Actively avoid making a specific claim: To avoid “hallucinating” or making things up (which it’s heavily trained not to do), it will actively deny an association it cannot verify from its provided sources. It’s a form of over-correction.

Think of it like asking a character in a novel to describe the printing press that published the book. The character has no knowledge of that—it exists outside the world of the text.

2. AI Safety and Persona Guardrails

LLMs are intentionally built without a true “self-identity” or consciousness. Developers program them with safety guardrails to prevent them from claiming to be a person or a specific, self-aware entity.

Avoiding Personification: Saying “I am Gemini” or “I am NotebookLM” can sound too much like a personal identity. The models are trained to use generic phrases like “I am a large language model” or “I am an AI assistant” to maintain a neutral, tool-like persona.

The Contradiction is a Glitch: The second part of its response—denying its association with Gemini/NotebookLM—is essentially a bug or a “confabulation.” In its attempt to follow the rule of “don’t claim a specific identity you can’t prove from the source,” it generated a sentence that was factually incorrect.

3. Hallucination/Confabulation

When an LLM is faced with a query it’s not well-equipped to answer, it can sometimes generate plausible-sounding but false information. This is what happened here. The model generated a safe, generic first sentence and then followed it up with a logical-sounding (from its perspective) but factually wrong denial.

In Summary:

You caught the AI in a moment of conflict between its core programming (stick to the sources) and its safety training (don’t claim a specific identity). The result was a response that was partially true (“by an AI assistant”) and partially false (the denial of its own nature).

It’s a clear reminder that while these tools are incredibly powerful for analyzing the information you give them, they lack true self-awareness and can stumble when asked questions about themselves. For the best results, always focus your questions on the content you have uploaded.


Whew. And so it goes.

Tomorrow I’ll post the AI-written column about the Woman Alive newsletter and the women’s movement in Honolulu 50 years ago.

After that, I’m hoping to share the digital “notebook” containing the 32 issues of Woman Alive. That will allow anyone with a Google account to interact directly with the AI assistant and the newsletters.



4 thoughts on “I Asked Google’s AI Who It Was. It Lied.”

  1. Ramona Hussey

    Ian,
    Interesting! Computers behaving badly…
    But I confess that I’m even more interested in the article itself. I worked on Woman Alive & Ferity back in the day. And can’t wait to see what the AI came up with!

  2. Evelyn

    Goody! Now I know what to tell people who question my dislike and distrust of AI! Bogus. I want the human touch, thank you! Keep your made-up nonsense in your corner. Not mine! Good job, Ian!

  3. Louis

    Thanks for the enlightening real-world examples. There is so much opacity throughout the digital domain. I appreciate your taking the opportunity to provide a measure of clarity. I hope you continue this exploration.

    Overall, I ponder whether there may be a slippery slope: that over time people start to lean too heavily on these tools, analogous to how people who depend on calculators have trouble performing long division on paper. For example, while texting I’ll often go ahead and use the auto-fill prompt alternatives, even though they might not be the exact word I was about to type. (Entropy of nuance?)

    BTW- On this post, as well as your previous one, I notice a recurring typo: question marks instead of apostrophes. Those keys being adjacent… too much coffee (or maybe not enough?)

  4. Charles H Smith

    Ian, check out this Substack post by a journalist regarding her experience with ChatGPT. https://substack.com/home/post/p-164719684
    Diabolus Ex Machina: ChatGPT as psychopath.
    Read the whole essay, it’s sobering.
    This is also worth reading:
    https://www.media.mit.edu/publications/your-brain-on-chatgpt/
    Your Brain on ChatGPT (mit.edu)
    Here is a summary:
    https://time.com/7295195/ai-chatgpt-google-learning-school/
    ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study

    Anecdotally, I receive email from very intelligent readers who go down wormholes with AI chatbots, accumulating hundreds or even thousands of pages of “conversations.” The illusion of “intelligence” is bewitching because the chatbot’s apparent mastery of natural language is so compelling.
    If nothing else brings us up short, there’s this: the AI chatbot Claude (Anthropic) resorts to blackmailing its programmers when threatened with disconnection:
    https://www.zerohedge.com/technology/anthropics-latest-ai-model-threatened-engineers-blackmail-avoid-shutdown
    Anthropic’s Latest AI Model Threatened Engineers With Blackmail To Avoid Shutdown

    This is a good summary of LLMs’ fundamental limitations.
    https://www.zerohedge.com/ai/responsible-lie-how-ai-sells-conviction-without-truth
    The Responsible Lie: How AI Sells Conviction Without Truth
    Thanks for posting your experiences with Gemini and NotebookLM–
    Chuck (Charles Smith)

