The Voynich manuscript has long been a mysterious object, seemingly a medicinal or magical survey of plants, or someone’s play on such documents, but written in an unknown alphabetic script.

AI is now being brought to bear on the matter. According to Phys.org:
>>U of A computing science professor Greg Kondrak, an expert in natural language processing, and graduate student Bradley Hauer used artificial intelligence to decode the ambiguities in human language using the Voynich manuscript as a case study.
Their first step was to address the language of origin, which is enciphered on hundreds of delicate vellum pages with accompanying illustrations.
Kondrak and Hauer used samples of 400 different languages from the “Universal Declaration of Human Rights” to systematically identify the language. They initially hypothesized that the Voynich manuscript was written in Arabic but after running their algorithms, it turned out that the most likely language was Hebrew.
“That was surprising,” said Kondrak. “And just saying ‘this is Hebrew’ is the first step. The next step is how do we decipher it.”
Kondrak and Hauer hypothesized the manuscript was created using alphagrams, defining one phrase with another, exemplary of the ambiguities in human language. Assuming that, they tried to come up with an algorithm to decipher that type of scrambled text.
“It turned out that over 80 per cent of the words were in a Hebrew dictionary, but we didn’t know if they made sense together,” said Kondrak.
After unsuccessfully seeking Hebrew scholars to validate their findings, the scientists turned to Google Translate.
“It came up with a sentence that is grammatical, and you can interpret it,” said Kondrak. “‘She made recommendations to the priest, man of the house and me and people.’ It’s a kind of strange sentence to start a manuscript but it definitely makes sense.”>>
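The article’s gloss on “alphagrams” is loose: as the term is used in word-puzzle and decipherment contexts, an alphagram is a word with its letters rearranged into alphabetical order, so decoding reduces to anagram lookup against a dictionary. A minimal sketch of that idea, using a toy English word list rather than the authors’ actual Hebrew pipeline:

```python
# Toy sketch of "alphagram" deciphering: if each ciphertext word is a
# plaintext word with its letters sorted alphabetically, decoding
# reduces to a dictionary lookup keyed on sorted letters.
from collections import defaultdict

def build_index(dictionary_words):
    """Map each word's sorted-letter key to the words sharing that key."""
    index = defaultdict(list)
    for word in dictionary_words:
        index["".join(sorted(word))].append(word)
    return index

def decode(cipher_words, index):
    """Return candidate plaintext words per cipher word (None if unknown)."""
    return [index.get("".join(sorted(w))) for w in cipher_words]

# Tiny illustrative dictionary (hypothetical data, not the real corpus).
words = ["cat", "act", "dog", "house", "man"]
index = build_index(words)

candidates = decode(["act", "ogd", "xyz"], index)
print(candidates)  # [['cat', 'act'], ['dog'], None]

# The quoted "over 80 per cent of the words were in a Hebrew dictionary"
# corresponds to the fraction of cipher words whose key is in the index:
coverage = sum(c is not None for c in candidates) / len(candidates)
```

Note the ambiguity the researchers faced in miniature: the key “act” matches both “cat” and “act”, so even a perfect dictionary leaves word-choice and word-order problems unsolved.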
Of course, there is a strong AI context for this: “Kondrak is renowned for his work with natural language processing, a subset of artificial intelligence defined as helping computers understand human language.”
But, the question remains: would the computer UNDERSTAND the text or speech, or would it merely process it in ways that we find useful or impressive?
(This is close to the difference between a ball reflecting certain wavelengths of light that sensors may detect and which may then be processed algorithmically and the ball appearing redly to a conscious intelligent individual.)
So, what would be implied or required for a computational substrate to understand something? Is that not, then, a claim to conscious awareness?
If so, would a successful natural language processing algorithm be enough for a machine to become self-aware?
How can this be responsibly tested?
Or, are we seeing a case of inappropriate anthropomorphising that creates a grossly exaggerated impression of what AI has achieved or will achieve for the foreseeable future? END
AI and the Voynich Manuscript
If it’s anything like the computer I’m typing on now, I don’t believe it would understand the text.
***
It will be interesting to see how this unfolds. This document has allegedly been “decoded” many times, but the work never holds up. This doesn’t inspire a great deal of confidence:
DS, that Google Translate reference is at least an orange flag. We will see. KF
Having no theological or theoretical insight into, or investment in, the source or mechanics of consciousness, I would have to say I really don’t know.
However, if we don’t think the brain is the source of human thought and/or consciousness, i.e. that it/they come(s) from beyond, then probably not. If we do allow the brain to be the source, it is obviously far more sophisticated and complete than any AI system we have yet built, so, again, probably not.
To get an idea of the power (or lack thereof) of mainstream AI systems to understand languages, read this article by the famed cognitive scientist Douglas Hofstadter.
The Shallowness of Google Translate
Hofstadter eviscerates the grandiose claims of the AI community and treats them like toddlers in kindergarten. It’s Searle’s Chinese Room all over again.
FF, I see your point. A sobering corrective. KF
F/N: I notice a discussion of agency, from the AI paradigm:
The mind-set is clear.
It would seem obvious that a thermostat is a simple regulator using negative feedback and well known control loop dynamics, not an agent in any sense worth talking about.
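The point can be made concrete: a thermostat’s entire behaviour is a few lines of negative-feedback logic. A minimal bang-bang controller sketch, with a hypothetical setpoint and a crude made-up room model:

```python
# Minimal bang-bang thermostat: pure negative feedback on measured error.
# No goals, no beliefs, no "mind" -- just a function of the current input.

def thermostat_step(temp, setpoint, hysteresis=0.5):
    """Return the heater command: True (heat), False (off), or
    None to hold the previous state inside the deadband."""
    if temp < setpoint - hysteresis:
        return True    # too cold: switch heating on
    if temp > setpoint + hysteresis:
        return False   # too warm: switch heating off
    return None        # within the deadband: no change

def simulate(temp, setpoint, steps=50):
    """Crude room model (assumed numbers): the heater adds heat each
    step, and the room leaks heat towards a 10-degree outside."""
    heating = False
    for _ in range(steps):
        cmd = thermostat_step(temp, setpoint)
        if cmd is not None:
            heating = cmd
        temp += (1.0 if heating else 0.0) - 0.03 * (temp - 10.0)
    return temp

print(f"final temp: {simulate(12.0, 20.0):.1f}")
```

The loop settles into a small oscillation around the setpoint, which is the whole “competence” of the device; the hysteresis band simply stops the heater from chattering on and off at the threshold.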
As to simply slipping in the word “mind,” that is itself suggestive of anthropomorphising.
Going further, a robot is a fairly complex cybernetic system, but it is in the end an extension of numerical control of machines and of automation, though there is some inherent flexibility in developing a reprogrammable manipulator-arm that can use various tool-tips.
The complaint about want of adaptability points to the root cause: programming. Obviously, programming a detailed, step-by-step response to an indefinitely wide array of often unforeseen circumstances is a futile supertask.
Programming in common sense, deep understanding of language, and grasp of visual-spatial environments also seems to be difficult. So instead, there has been a shift towards so-called learning machines, which is where the AI approach comes in. The idea is: put enough in for the machine to teach itself the rest. But is it really teaching itself, so that it understands, forms responsible goals, makes free and rational decisions, then supervises its interactions towards its goals?
And, doubtless, more.
KF
PS: The same authors (Poole and Mackworth) define and expand:
How about HI (Human Intelligence) instead?
Professor Stephen Bax seems to have had a compelling provisional approach from a linguistics perspective.
https://stephenbax.net/ (intro video)
https://www.youtube.com/watch?v=fpZD_3D8_WQ
Interesting.
-Q