Uncommon Descent Serving The Intelligent Design Community

AI and the Voynich Manuscript


The Voynich manuscript has long been a mysterious object, seemingly a medicinal or magical survey of plants, or someone’s play on such documents, but written in an unknown alphabetic script:

Pages from the Voynich Manuscript, with an unknown, apparently alphabetic script

AI is now being brought to bear on the matter.  According to phys dot org:

>>U of A computing science professor Greg Kondrak, an expert in natural language processing, and graduate student Bradley Hauer used computational techniques to decode the ambiguities in human language, using the Voynich manuscript as a case study.

Their first step was to address the language of origin, which is enciphered on hundreds of delicate vellum pages with accompanying illustrations.

Kondrak and Hauer used samples of 400 different languages from the “Universal Declaration of Human Rights” to systematically identify the language. They initially hypothesized that the Voynich manuscript was written in Arabic but after running their algorithms, it turned out that the most likely language was Hebrew.

“That was surprising,” said Kondrak. “And just saying ‘this is Hebrew’ is the first step. The next step is how do we decipher it.”

Kondrak and Hauer hypothesized the manuscript was created using alphagrams, defining one phrase with another, exemplary of the ambiguities in human language. Assuming that, they tried to come up with an algorithm to decipher that type of scrambled text.

“It turned out that over 80 per cent of the words were in a Hebrew dictionary, but we didn’t know if they made sense together,” said Kondrak.
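The alphagram hypothesis and the 80-per-cent dictionary figure can be sketched together. An alphagram is a word's letters sorted alphabetically, so decipherment under that assumption reduces to indexing a dictionary by sorted letters and measuring how many ciphertext words hit the index. This is an illustrative toy, not the authors' code; the tiny dictionary stands in for a real Hebrew lexicon:

```python
# Illustrative sketch of alphagram-based decipherment: index a dictionary
# by each word's alphabetically sorted letters, then measure what fraction
# of ciphertext words can be matched. Toy data throughout.
from collections import defaultdict

def build_alphagram_index(dictionary):
    """Map each word's sorted letters to all matching dictionary words."""
    index = defaultdict(list)
    for word in dictionary:
        index["".join(sorted(word))].append(word)
    return index

def dictionary_coverage(cipher_words, index):
    """Fraction of ciphertext words whose alphagram appears in the index."""
    hits = sum(1 for w in cipher_words if "".join(sorted(w)) in index)
    return hits / len(cipher_words)

toy_dictionary = ["cat", "act", "dog", "god", "tide", "edit"]
index = build_alphagram_index(toy_dictionary)

cipher = ["act", "dgo", "deit", "zzq"]     # letters already in sorted order
print(dictionary_coverage(cipher, index))  # 3 of 4 words match: 0.75
print(index["act"])                        # ambiguity: ['cat', 'act']
```

Note the second print: one alphagram can map to several dictionary words, which is exactly the ambiguity Kondrak alludes to when he says the matched words may not "make sense together."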

After unsuccessfully seeking Hebrew scholars to validate their findings, the scientists turned to Google Translate.

“It came up with a sentence that is grammatical, and you can interpret it,” said Kondrak. “‘She made recommendations to the priest, man of the house and me and people.’ It’s a kind of strange sentence to start a manuscript but it definitely makes sense.”>>

Of course, there is a strong AI context for this: “Kondrak is renowned for his work with natural language processing, a subset of artificial intelligence defined as helping computers understand human language.”

But, the question remains: would the computer UNDERSTAND the text or speech, or would it merely process it in ways that we find useful or impressive?

(This is close to the difference between a ball reflecting certain wavelengths of light, which sensors may detect and algorithms may then process, and the ball appearing redly to a conscious, intelligent individual.)

So, what would be implied or required for a computational substrate to understand something? Is that not, then, a claim to conscious awareness?

If so, would a successful natural language processing algorithm be enough for a machine to become self-aware?

How can this be responsibly tested?

Or, are we seeing a case of inappropriate anthropomorphising that creates a grossly exaggerated impression of what AI has achieved or will achieve for the foreseeable future? END

How about HI (Human Intelligence) instead? Professor Stephen Bax seems to have had a compelling provisional approach from a linguistics perspective. https://stephenbax.net/ (intro video) https://www.youtube.com/watch?v=fpZD_3D8_WQ Interesting. -Q Querius
F/N: I notice a discussion of agency, from the AI paradigm:
It is important to distinguish between the knowledge in the mind of the designer and the knowledge in the mind of the agent. Consider the extreme cases:

* At one extreme is a highly specialized agent that works well in the environment for which it was designed, but is helpless outside of this niche. The designer may have done considerable work in building the agent, but the agent may not need to do very much to operate well. An example is a thermostat. It may be difficult to design a thermostat so that it turns on and off at exactly the right temperatures, but the thermostat itself does not have to do much computation. Another example is a car painting robot that always paints the same parts in an automobile factory. There may be much design time or offline computation to get it to work perfectly, but the painting robot can paint parts with little online computation; it senses that there is a part in position, but then it carries out its predefined actions. These very specialized agents do not adapt well to different environments or to changing goals. The painting robot would not notice if a different sort of part were present and, even if it did, it would not know what to do with it. It would have to be redesigned or reprogrammed to paint different parts or to change into a sanding machine or a dog washing machine.

* At the other extreme is a very flexible agent that can survive in arbitrary environments and accept new tasks at run time. Simple biological agents such as insects can adapt to complex changing environments, but they cannot carry out arbitrary tasks. Designing an agent that can adapt to complex environments and changing goals is a major challenge. The agent will know much more about the particulars of a situation than the designer. Even biology has not produced many such agents. Humans may be the only extant example, but even humans need time to adapt to new environments.
Even if the flexible agent is our ultimate dream, researchers have to reach this goal via more mundane goals. Rather than building a universal agent, which can adapt to any environment and solve any task, they have built particular agents for particular environmental niches.
The mind-set is clear. It would seem obvious that a thermostat is a simple regulator using negative feedback and well known control loop dynamics, not an agent in any sense worth talking about. As to simply slipping in the word "mind," that is itself suggestive of anthropomorphising.

Going further, a robot is a fairly complex cybernetic system, but it is in the end an extension of numerical control of machines and of automation, though there is some inherent flexibility in developing a reprogrammable manipulator-arm that can use various tool-tips. The complaint on want of adaptability points to the root cause of performance: programming. Obviously, programming a detailed step by step response to an indefinitely wide array of often unforeseen circumstances is a futile supertask. Programming in common sense, deep understanding of language and of visual-spatial environments also seems to be difficult.

So instead, there has been a shift towards so-called learning machines, which is where the AI approach comes in. The idea is, put enough in for the machine to teach itself the rest. But is it really teaching itself, so that it understands, forms responsible goals, makes free and rational decisions, then supervises its interactions towards its goal? And, doubtless, more. KF

PS: The same authors (Poole and Mackworth) define and expand:
Artificial intelligence, or AI, is the field that studies the synthesis and analysis of computational agents that act intelligently. [--> instantly, of high relevance to ID] Let us examine each part of this definition. An agent is something that acts in an environment; it does something. [--> far too broad] Agents include worms, dogs, thermostats [--> that's a negative f/b loop regulator not a self-moved initiating causal entity], airplanes, robots, humans, companies, and countries. We are interested in what an agent does; that is, how it acts. We judge an agent by its actions. An agent acts intelligently when

* what it does is appropriate for its circumstances and its goals, taking into account the short-term and long-term consequences of its actions [--> for agency, goals must be freely chosen, not preprogrammed or controlled]
* it is flexible to changing environments and changing goals
* it learns from experience [--> what is learning without understanding?]
* it makes appropriate choices given its perceptual and computational limitations

A computational agent is an agent whose decisions about its actions can be explained in terms of computation. [--> is computation equivalent to rational contemplation?] That is, the decision can be broken down into primitive operations that can be implemented in a physical device. [--> stepwise signal processing based action per functional organisation and algorithm-driven programming] This computation can take many forms. In humans this computation is carried out in “wetware”; [--> huge assumption just put down as though it were established fact] in computers it is carried out in “hardware.” Although there are some agents that are arguably not computational, such as the wind and rain eroding a landscape [--> agency just lost any definite meaning if this is taken literally: agent = entity, structure or phenomenon with dynamic processes], it is an open question whether all intelligent agents are computational.
All agents are limited. No agents are omniscient or omnipotent. [--> huge worldview level questions not needed for an AI course] Agents can only observe everything about the world in very specialized domains, where “the world” is very constrained. Agents have finite memory. Agents in the real world do not have unlimited time to act. [--> implicit physicalism] The central scientific goal of AI is to understand the principles that make intelligent behavior possible in natural or artificial systems [--> so, a necessary intersection with ID] . . .
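The thermostat point raised above, that the device is a negative-feedback regulator rather than an agent, can be made concrete in a few lines. This is a minimal bang-bang (hysteresis) controller with arbitrary illustrative setpoints, not any particular product's logic:

```python
# Minimal thermostat model: the device's entire "behaviour" is a fixed
# negative-feedback rule with a dead band. No goals, learning, or
# understanding are involved. Setpoint and band values are illustrative.
def thermostat_step(temperature, heater_on, setpoint=20.0, band=0.5):
    """Bang-bang control: heat on below the band, off above it."""
    if temperature < setpoint - band:
        return True        # too cold: switch heater on
    if temperature > setpoint + band:
        return False       # too warm: switch heater off
    return heater_on       # inside the dead band: keep current state

# Simulate a room: the heater adds heat, the room leaks heat outside.
temp, heater = 15.0, False
for _ in range(50):
    heater = thermostat_step(temp, heater)
    temp += (1.0 if heater else 0.0) - 0.1 * (temp - 10.0)
print(round(temp, 1))  # settles near the 20.0 setpoint
```

The whole behaviour is determined by the comparison rule and the room's dynamics; nothing in the loop selects or revises a goal, which is the contrast with agency being drawn above.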
FF, I see your point. A sobering corrective. KF kairosfocus
To get an idea of the power (or lack thereof) of mainstream AI systems to understand languages, read this article by famed cognitive scientist Douglas Hofstadter: The Shallowness of Google Translate. Hofstadter eviscerates the grandiose claims of the AI community and treats them like toddlers in kindergarten. It's Searle's Chinese Room all over again. FourFaces
would the computer UNDERSTAND the text or speech, or would it merely process it in ways that we find useful or impressive?
Having no theological or theoretical insight into, or investment in, the source or mechanics of consciousness, I would have to say I really don't know. However, if we don't take the brain to be the source of human thought and/or consciousness, that it/they come(s) from beyond... probably not. If we do allow it to be, it's obviously far more sophisticated and complete than any AI system we have yet built... so, again, probably not. LocalMinimum
DS, that Google Translate reference is at least an orange flag. We will see. KF kairosfocus
But, the question remains: would the computer UNDERSTAND the text or speech, or would it merely process it in ways that we find useful or impressive?
If it's anything like the computer I'm typing on now, I don't believe it would understand the text. *** It will be interesting to see how this unfolds. This document has allegedly been "decoded" many times, but the work never holds up. This doesn't inspire a great deal of confidence:
After unsuccessfully seeking Hebrew scholars to validate their findings, the scientists turned to Google Translate.
