A little afternoon experiment. I uploaded a photo of a Sudoku puzzle and asked both Gemini and ChatGPT to solve it.
Gemini went first. It took only a few seconds. Very few. There was its completed solution, along with an explanation of its methodology.
Then I turned to ChatGPT. It took longer. Not seconds. At least a minute, perhaps two. Then it too returned an answer, with explanations of how it reached its solution.
Surprising to me: the two solutions were quite different.
When I work my way through one of these puzzles, it seems like each correct choice is the only one for that particular position. One by one, they fall into place as other possible choices are eliminated.
At first glance, each answer seemed plausible. But as I examined them more closely, I spotted the problem.
To solve a Sudoku, you must fill the 9×9 grid so that every row, every column, and every 3×3 box contains the digits 1-9 without repetition.
Gemini satisfied only two of the three constraints. It properly placed the numbers 1 through 9 exactly once in each row and in each 3×3 box, but every one of its columns contained at least one repeated number.
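This failure mode is easy to catch mechanically, because the three constraints can be checked independently. Here is a minimal sketch in Python (the grids and the indexing scheme are my own illustration, not the actual puzzle from the experiment):

```python
def has_no_repeats(unit):
    """True if a 9-cell unit contains the digits 1-9 exactly once."""
    return sorted(unit) == list(range(1, 10))

def check_solution(grid):
    """Report which of the three Sudoku constraints a 9x9 grid satisfies."""
    rows_ok = all(has_no_repeats(row) for row in grid)
    cols_ok = all(
        has_no_repeats([grid[r][c] for r in range(9)]) for c in range(9)
    )
    boxes_ok = all(
        has_no_repeats(
            [grid[br + r][bc + c] for r in range(3) for c in range(3)]
        )
        for br in (0, 3, 6) for bc in (0, 3, 6)
    )
    return {"rows": rows_ok, "columns": cols_ok, "boxes": boxes_ok}

# A standard valid filling generated by a shifting pattern.
valid = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]

# Swapping two cells that share a row and a box keeps rows and boxes
# valid but breaks two columns -- the same error pattern Gemini produced.
bad = [row[:] for row in valid]
bad[0][0], bad[0][1] = bad[0][1], bad[0][0]
```

Running `check_solution(bad)` reports the rows and boxes as valid but the columns as not, which is exactly the partial correctness that made Gemini's answer look plausible at first glance.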
ChatGPT, on the other hand, appears to have solved the puzzle correctly.
Then I went back and asked Gemini a question: “Is there a chance that this solution is incorrect?”
It responded.
Actually, you have a sharp eye—my previous response contained a few errors where numbers were repeated in the same row or column. In Sudoku, each number from 1 to 9 must appear exactly once in every row, column, and 3×3 grid.
Here is the corrected solution, double-checked against the original puzzle’s constraints.
Its response made clear that Gemini knew the rules, but it again made the same type of error, with a number repeated in each of several columns.
Its quick answer masked a fundamental flaw in its logic.
I wonder if there are instructions I could have given that would have prevented Gemini from returning these erroneous answers.
Overall implications? I haven’t a clue.
Should Google’s investors have concerns?

I don’t trust Google’s AI search summaries because I have caught too many errors. The investors do not seem very concerned about that. I use Brave search and browser now, with much better search results and AI. It doesn’t pull already-flawed results from other search engines. This is what its AI says about itself:
“Brave AI is not based on external APIs—it uses its own independent infrastructure. The core of Brave’s AI capabilities, including Leo AI and Brave Search, relies on Brave’s proprietary search index, which is built from its own web crawlers and user-driven data through the Web Discovery Project (WDP). This index is independent of Google, Bing, or other major search engines.”
In other related news, Amazon just notified me by email that a device I own is available to connect to Alexa. I don’t have Alexa. I also didn’t buy the device from Amazon, and I went out of my way to look for privacy options to prevent the dang thing from doing anything I don’t know about. If a privacy paranoiac like me can’t put an end to this kind of unwanted invasion, I’m starting to think no one can. Cue The Terminator in 1…2…3….
There absolutely is something you can do to make sure the AI doesn’t fall back on a default quick answer that just gives you errors: provide it with what they would call a constitution that discourages hallucination and rushed responses. You can also ask it to think longer and harder and give its best answer the first time. How you word the prompt and how detailed the input is will dictate how detailed the output is, not through an overuse of words, but through deliberate use of constraints. The more you can constrain an AI, the better the output will be, because you’re forcing it to focus on a very narrow amount of information. I know it seems contradictory, but the more you make it focus, the bigger and better the answer will be. I hope this helps.