Courtesy of Wikipedia’s entry on steganography, this photo has a hidden image embedded within it:
The hidden image is revealed by retaining only the two least significant bits of each color component and then normalizing the result; the recovered image is shown below.
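The extraction step just described can be sketched in a few lines. This is a minimal illustration, assuming NumPy and a small made-up pixel array in place of the actual Wikipedia image; the function name and values are invented for the example:

```python
import numpy as np

def extract_lsb_plane(pixels: np.ndarray, bits: int = 2) -> np.ndarray:
    """Keep only the `bits` least significant bits of each color
    component, then normalize so the hidden image becomes visible."""
    mask = (1 << bits) - 1              # bits=2 -> 0b11
    plane = pixels & mask               # strip all but the low bits
    # Rescale 0..mask up to the full 0..255 range ("normalization")
    return (plane * (255 // mask)).astype(np.uint8)

# Tiny 2x2 grayscale "image": only the two low bits carry the payload
cover = np.array([[0b10110011, 0b01100000],
                  [0b11111110, 0b00000001]], dtype=np.uint8)
hidden = extract_lsb_plane(cover)       # [[255, 0], [170, 85]]
```

To the eye the cover image looks unchanged, because the two low bits contribute at most 3/255 of each component’s brightness; only after the rescaling does the embedded content become visible.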
Finally, we come to the research theme that I find most intriguing. Steganography, if you look in the dictionary, is an archaism that was subsequently replaced by the term “cryptography.” Steganography literally means “covered writing.” With the rise of digital computing, however, the term has taken on a new life. Steganography belongs to the field of digital data embedding technologies (DDET), which also include information hiding, steganalysis, watermarking, embedded data extraction, and digital data forensics. Steganography seeks efficient (that is, high data rate) and robust (that is, insensitive to common distortions) algorithms that can embed a high volume of hidden message bits within a cover message (typically imagery, video, or audio) without their presence being detected. Conversely, steganalysis seeks statistical tests that will detect the presence of steganography in a cover message.
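The embedding side of the picture can likewise be sketched in toy form. This is a hedged illustration of classic LSB replacement, the simplest scheme in the family described above; the cover bytes and message bits are invented, and real steganographic algorithms are considerably more sophisticated precisely so that steganalysis cannot detect them:

```python
def embed_bits(cover: list[int], message_bits: list[int]) -> list[int]:
    """Replace the least significant bit of each cover byte with one
    message bit (LSB-replacement steganography)."""
    assert len(message_bits) <= len(cover), "cover too small for message"
    stego = cover[:]
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite the low bit
    return stego

def recover_bits(stego: list[int], n: int) -> list[int]:
    """Read the low bit of the first n bytes back out."""
    return [b & 1 for b in stego[:n]]

cover = [200, 113, 54, 255, 96, 17]
stego = embed_bits(cover, [1, 0, 1, 1])    # [201, 112, 55, 255, 96, 17]
assert recover_bits(stego, 4) == [1, 0, 1, 1]
```

Note how small the perturbation is: each byte changes by at most 1. A steganalyst cannot compare against the original cover, so detection must rest on statistical regularities that LSB replacement disturbs, which is exactly the kind of test the paragraph above describes.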
Consider now the following possibility: What if organisms instantiate designs that have no functional significance but that nonetheless give biological investigators insight into functional aspects of organisms? Such second-order designs would serve essentially as an “operating manual,” of no use to the organism as such but of use to scientists investigating the organism. Granted, this is a speculative possibility, but there are some preliminary results from the bioinformatics literature that bear it out in relation to the protein-folding problem (such second-order designs appear to be embedded not in a single genome but in a database of homologous genomes from related organisms).
While it makes perfect sense for a designer to throw in an “operating manual” (much as automobile manufacturers include operating manuals with the cars they make), this possibility makes no sense for blind material mechanisms, which cannot anticipate scientific investigators. Research in this area would consist in constructing statistical tests to detect such second-order designs (in other words, steganalysis). Should such second-order designs be discovered, the next step would be to seek algorithms for embedding them in organisms. My suspicion is that biological systems do steganography much better than we do, and that steganographers will learn a thing or two from biology, though not because natural selection is so clever, but because the designer of these systems is so adept at steganography.
Such second-order steganography would, in my view, provide decisive confirmation for ID. Yet even if it doesn’t pan out, first-order steganography (i.e., the embedding of functional information useful to the organism rather than to a scientific investigator) could also provide strong evidence for ID. For years now evolutionary biologists have told us that the bulk of genomes is junk and that this is due to the sloppiness of the evolutionary process. That is now changing. For instance, Amy Pasquinelli at UCSD, in commenting on long stretches of seemingly barren DNA sequences, asks us to reconsider the contents of such junk DNA sequences in the light of recent reports that a new class of non-coding RNA genes is scattered, perhaps densely, throughout these animal genomes. (“MicroRNAs: Deviants No Longer,” Trends in Genetics 18(4) (April 2002): 171-173.) ID theorists should be at the forefront in unpacking the information contained within biological systems. If these systems are designed, we can expect the information to be densely packed and multi-layered (save where natural forces have attenuated the information). Dense, multi-layered embedding of information is a prediction of ID.
It’s time to bring this talk to an end. I close with two images (both from biology) and a final quote. The images describe two perspectives on how the scientific debate over intelligent design is likely to play out in the coming years. From the vantage of the scientific establishment, intelligent design is in the position of a mouse trying to move an elephant by nibbling at its toes. From time to time the elephant may shift its feet, but nothing like real movement or a fundamental change is about to happen. Let me emphasize that this is the perspective of the scientific establishment. Yet even adopting this perspective, the scientific establishment seems strangely uncomfortable. The mouse has yet to be squashed, and the elephant (as in the cartoons) has become frightened and seems ready to stampede in a panic.
The image that I think more accurately captures how the debate will play out is, ironically, an evolutionary competition where two organisms vie to dominate an ecological niche (think of mammals displacing the dinosaurs). At some point, one of the organisms gains a crucial advantage. This enables it to outcompete the other. The one thrives, the other dwindles. However wrong Darwin might have been about selection and competition being the driving force behind biological evolution, these factors certainly play a crucial role in scientific progress. It’s up to ID proponents to demonstrate a few incontrovertible instances where design is uniquely fruitful for biology. Scientists without an inordinate attachment to Darwinian evolution (and there are many, though this fact is not widely advertised) will be only too happy to shift their allegiance if they think that intelligent design is where the interesting problems in biology lie.