I find it worrying that this was upvoted so much so quickly, and HN users are apparently unable to spot the glaring red flags about this article.
1. Let's start with where the post was published. Check what kind of content this blog publishes - huge volumes of random low-effort AI-boosting posts with AI-generated images. This isn't a blog about history or linguistics.
2. The author is anonymous.
3. The contents of the post itself: it's just raw AI output. There's no expert commentary. It just mentions that unnamed experts were unable to do the job.
This isn't to say that LLMs aren't useful for science; on the contrary. See for example Terence Tao's blog. Notice how different his work is from whatever this post is.
I'm especially suspicious of the handwriting analysis. It seems like the kind of thing a vision LLM would be pretty bad at doing and very good at convincingly faking for non-experts.
Gemini 3 Pro, e.g., fails very badly at reading the Braille in this image, confusing the English-language text with the actual Braille. When you give it just the Braille, it still fails and confidently hallucinates a transcription, badly enough that you don't even have to know Braille (I don't!) to see it's wrong.
https://m.facebook.com/groups/dullmensclub/posts/18885933484...
As far as I can tell, Gemini 3 Pro is still completely out of its depth and incapable of understanding Braille at all, and doesn't realize this.
I find this a little hard to believe.
This is literally a "my two cents' worth" answer from Gemini Pro. It's a straightforward inference from the fact that "Anno Mundi" means "in the year of the world", i.e., essentially the year since creation, and that the main text references Abraham's birth with conflicting dates. It's nifty that we now have automated means of extracting a sensible scholarly consensus of "what could this possibly mean", but there's absolutely no mystery here.
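To make the "conflicting dates" arithmetic concrete, here is a rough sketch (mine, not from the article or the comment above) of why the same year can carry very different Anno Mundi figures. The Byzantine and Hebrew creation epochs below are the standard conventional offsets; which reckoning, if any, the note's writer used is purely an assumption, and the example year is arbitrary.

  # Illustrative only: conventional epoch offsets, ignoring the
  # September/Tishrei new-year complications of both calendars.
  EPOCHS = {
      "Byzantine (creation 5509 BC)": 5508,  # AM ~= CE year + 5508
      "Hebrew (creation 3761 BC)": 3760,     # AM ~= CE year + 3760
  }

  def anno_mundi(ce_year: int) -> dict:
      """Anno Mundi year for a given CE year under each epoch."""
      return {name: ce_year + offset for name, offset in EPOCHS.items()}

  for name, am in anno_mundi(1742).items():  # e.g. a note written in 1742 CE
      print(name, "->", "AM", am)
  # The two reckonings disagree by ~1748 years, so "conflicting"
  # dates for the same event are exactly what you'd expect.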
I think there's something very interesting here and would be interested in hearing more about the date discrepancies. It's a shame the article is mostly just the raw output of Gemini instead of more commentary.
I make no judgement on this particular claim; I have not checked it out.
But what immediately comes to mind from reading the title are all the "AI solutions" for the as-yet-undecoded Voynich manuscript that are posted with surprising (and increasing) frequency to at least one forum. They're all incompatible and fall apart on closer inspection.
A collection of them can be found at https://www.voynich.ninja/forum-59.html .
One probably important distinction is that the Voynich manuscript was deliberately obfuscated. Puzzling it out requires context that may not even exist anymore (consider discovering an intact TLS log a thousand years in the future: without the private key you'd never know it was just someone posting to HN!). A sketch of that point follows below.
The notes in the linked article are presumptively legible notes made in good faith, just not written with enough detail for someone who is not the author to understand. AI training sets are much broader than mere human intuition now.
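To make the TLS analogy above concrete, here is a minimal sketch (my own, assuming the third-party `cryptography` package) using an AEAD cipher of the kind TLS 1.3 carries application data with: the captured bytes are indistinguishable from noise unless you hold the key.

  import os
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

  # AES-256-GCM, one of the AEAD ciphers TLS 1.3 uses for application data.
  key = AESGCM.generate_key(bit_length=256)
  nonce = os.urandom(12)

  plaintext = b"just someone posting to HN"
  ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

  print(ciphertext.hex())  # opaque bytes: without `key`, there is nothing to decipher

A future reader with only the log is staring at exactly that hex, which is a very different problem from terse but honestly written marginalia.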
My colleagues do this as well with AI and it fucks me right off.
They just present the raw output, in its long form, and expect everyone to follow the flow. Context is everything, damn it.
Looking into it further, there isn't really a mystery as to what they are, or at least I couldn't find anything suggesting it's unknown, especially given the context of the page.
It's great that Gemini can do this; it's a shame that lots of the ancillary "analysis" about the writing doesn't appear to be correct (humanist minuscule, I would suggest, is too new, too heathen and too Italian for a German manuscript of the time: https://medievalwritings.atillo.com.au/whyread/paleographysu...)
There's no verification, and it rests on the assertion that this marginalia is a mystery, none of which appears to be backed up.
The post then doesn't actually do any analysis of the output or any verification; it just pastes the dumps at the end, with no attempt to make them readable.