
Zettabytes – by Chance or Design?


A new measure of information has been invented – the Zettabyte = 1,000,000,000,000,000,000,000 bytes, or 10^21 bytes.
Zettabytes overtake petabytes as largest unit of digital measurement Heidi Blake, 4 May 2010, The Telegraph UK “The size of the “digital universe” will swell so rapidly this year that a new unit – the zettabyte – has been invented to measure it.”

Humanity’s total digital output currently stands at 8,000,000 petabytes – which each represent a million gigabytes – but is expected to pass 1.2 zettabytes this year.
One zettabyte is equal to one million petabytes, or 1,000,000,000,000,000,000,000 individual bytes. . . .
“A huge increase in video and digital photography – in the old days people would take one photograph, now they can knock off 20 photos and rather than store just one, people store all 20. Then there is the fact that the number of devices where information can be generated and stored has also increased.”
As a result the digital universe is forecast to expand by a factor of 44 over the next decade, according to the survey.

Werner Gitt observes that the storage capacity of

“1 cubic cm of DNA is 10^21 bits. (DNA – deoxyribonucleic acid.)”

Comparison of different quantities of information. p 188, In The Beginning was Information. Werner Gitt. 2000 CLV ISBN 3-89397-255-2. See also 2006 edition ISBN 0890514615

In today’s terminology, DNA has a storage capacity of 0.1 zettabyte/ml.
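As a quick cross-check of that conversion, a minimal sketch (the 10^21 bits per cubic cm density is taken from Gitt as quoted above, not independently verified):

```python
# Sketch: convert Gitt's DNA packing density into zettabyte terms.
# The 10**21 bits/cm^3 figure is quoted from Gitt, not re-measured.
bits_per_cm3 = 10**21
bytes_per_cm3 = bits_per_cm3 // 8   # 1.25e20 bytes per cubic centimetre
ZETTABYTE = 10**21                  # bytes
density_zb = bytes_per_cm3 / ZETTABYTE
print(density_zb)                   # 0.125, i.e. roughly 0.1 zettabyte per cm^3 (= per mL)
```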

By comparison, Hitachi considers “very large capacity drives” to be those holding 2 TB (2 * 10^12 bytes).

Werner Gitt further estimated the:

“Daily information flows in the human body” as about 3.4 * 10^24 bits per day, or 3.9 * 10^19 bits/s.

Figure 33, The Information Spiral, ibid p 189. i.e. about 430 zettabytes/day.
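The unit conversions behind those figures can be sketched in a few lines (using the post’s numbers as given):

```python
# Sketch of the conversions behind Gitt's "daily information flow" figure.
bits_per_day = 3.4e24                    # Gitt's estimate, bits per day
bits_per_second = bits_per_day / 86_400
zettabytes_per_day = (bits_per_day / 8) / 1e21
print(bits_per_second)                   # ~3.9e19 bits/s
print(zettabytes_per_day)                # ~425, i.e. "about 430 zettabytes/day"
```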

Now what are the probabilities of zettabytes of complex specified information (CSI) being generated by natural stochastic processes (“chance”) as compared to Design?

For extra credit, how would one objectively evaluate the probabilities of Daniel’s prediction being fulfilled? See:
“many will go here and there to increase knowledge.” Daniel 12:4
Corrected “zetabyte” to “zettabyte”. See: Zettabyte

20 Replies to “Zettabytes – by Chance or Design?”

  1. 1
    bornagain77 says:

    DLH, that reminds me of this article:

    A fun talk on the teleportation (of a single human) – Braunstein
    Excerpt: Just how much information are we talking about anyway? Well the visible human project by the American National Institute of Health requires about 10 Gigabytes (that’s about 10^11 = 100,000,000,000 “bits,” or yes/no answers, this is about ten CD ROMs) to give the full three dimensional details of a human down to one millimeter resolution in each direction. If we forget about recognizing atoms and measuring their velocities and just scale that to a resolution of one-atomic length in each direction that’s about 10^32 bits (a one followed by thirty two zeros). This is so much information that even with the best optical fibers conceivable it would take over one hundred million centuries to transmit all that information! It would be easier to walk! If we packed all that information into CD ROMs it would fit into a cube almost 1000 kilometers on a side!

    Thus it seems the entire digital output of the world doesn’t even equal the base-level measure of location information in one human body.
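As a rough sanity check of the quoted transmission time: the ~3 * 10^14 bits/s link rate below is my assumption for the “best optical fibers conceivable” (it is not stated in the excerpt), chosen only to illustrate the scale:

```python
# Rough sanity check of Braunstein's "hundred million centuries" figure.
# The ~3e14 bits/s rate is an assumption, not stated in the excerpt.
bits_to_send = 10**32
assumed_rate = 3e14                          # bits per second (assumption)
seconds = bits_to_send / assumed_rate
centuries = seconds / (100 * 365.25 * 24 * 3600)
print(centuries)                             # on the order of 1e8 centuries
```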

  2. 2
    andrewjg says:


    A couple of observations relating to your post follow. But I am not trying to argue it. Just thought you may be interested.

    1. Sun Microsystems had already defined what a zettabyte is. When they developed their 128-bit file system they invented the term. A 128-bit file system can reference 3.4×10^38 bytes, so it can actually reference a lot more than a zettabyte. They wanted to make their new file system absolutely future-proof. They calculated that to flip each bit on such a system would require enough energy to boil all the water on the planet.

    2. Data can be transmitted much faster in 2010 than in 1995. Although I imagine it would still take a long time.

    3. The imagery is likely highly compressed; I doubt it is stored in an uncompressed format, so actually it is much worse than you imply.

    I would imagine the data on the internet is made up of text, imagery, video and audio. Again, I imagine video is likely the bulk of all data on the internet. I believe video on average has a much higher compression ratio than imagery, so the information content would be understated. Having said that, I am very certain that there is a lot of duplication on the internet, which would mean the information content on the internet is vastly overstated. Maybe by as much as an order of magnitude.

    4. But since imagery, i.e. the colour spectrum, is continuous, there are actually an infinite number of colours. It is likely that the imagery referenced is either 16- or 24-bit, which will have an impact on storage size.

    5. Bitmap-based imagery is likely a very poor information store for the human body. Just as vector-based imagery is far more suitable than bitmaps, there could be better ways of storing information about biological life that can be used to generate accurate imagery. In fact you could probably just use vectors rather than bitmaps and store the data on a personal computer – although your computer would die a horrible death trying to render it.

  3. 3
    bornagain77 says:

    andrewjg, not to argue, in fact I agree with much of what you said, but Samuel Braunstein was talking about actually physically teleporting a human being to another spot entirely, not just representing a human as information, and was also talking of the best optical fiber conceivable, to the limits of physics.

    Professor Samuel L. Braunstein
    Computer Science, University of York
    Quantum Information & Computation
    Quantum Computing;
    Scalable Quantum Computers;
    Quantum Information with Continuous Variables

  4. 4
    bornagain77 says:

    andrewjg, just to clarify my last post:

    Samuel Braunstein was talking about actually changing a human being into pure transcendent information so as to teleport them to another physical spot. The 10^32 bits roughly represents only the amount of information needed to “decode” the teleportation event, not the actual amount of information involved in the teleportation itself. The actual amount involved would be “infinite specified information” for each quantum wave/particle teleported. Also of note, the 10^32 “decoding” bits must travel at or below the speed of light so as to respect the No-communication theorem, which simply means it is impossible for two “3-D material” entities in this universe to communicate faster than the speed of light; this speed-of-light “caveat” is the “price” we pay for living in the ordered structure of this space-time continuum.

  5. 5
    DLH says:

    Thanks for the info.

    Solaris ZFS — A better, safer way to manage your data
    “Mind-boggling scalability: 128-bit file system, 16 billion billion times the capacity of 32- or 64-bit file systems”.

    The future-proof file system

    Solaris ZFS offers a dramatic advance in data management with an innovative approach to data integrity, near zero administration, and a welcome integration of file system and volume management capabilities. The centerpiece of this new architecture is the concept of a virtual storage pool which decouples the file system from physical storage in the same way that virtual memory abstracts the address space from physical memory, allowing for much more efficient use of storage devices. In Solaris ZFS, space is shared dynamically between multiple file systems from a single storage pool, and is parceled out of the pool as file systems request it. Physical storage can be added to or removed from storage pools dynamically, without interrupting services, providing new levels of flexibility, availability, and performance. And in terms of scalability, Solaris ZFS is a 128-bit file system. Its theoretical limits are truly mind-boggling — 2^128 bytes of storage, and 2^64 for everything else such as file systems, snapshots, directory entries, devices, and more. To make it possible to recover corrupted data, ZFS implements RAID-Z, an implementation of RAID designed for ZFS. RAID-Z improves upon previous RAID schemes with parity or even double-parity, striping, and atomic operations. These features make ZFS ideally suited for managing industry standard storage servers like the Sun Fire 4500.
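A few lines of arithmetic confirm the scalability figures quoted above (a sketch; Python’s arbitrary-precision integers handle 2^128 exactly):

```python
# Sketch: ZFS's theoretical limits, in the units used elsewhere in this thread.
zfs_bytes = 2**128          # address space of a 128-bit file system
fs64_bytes = 2**64          # a 64-bit file system, for comparison
ZETTABYTE = 10**21
print(zfs_bytes // fs64_bytes)   # 2**64 ~ 1.8e19 times the 64-bit capacity
print(zfs_bytes / ZETTABYTE)     # ~3.4e17 zettabytes
```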

  6. 6
    kai5263499 says:

    Great post!

    Point of fact, however: the largest unit of data so far is the yottabyte, which is 10^24 bytes.

    Additionally, the highest link connection in use today is 100 Gigabytes per second.

    However, as big as those numbers are, the biggest problem posed by large amounts of data is the processing power required to actually do anything with it, like error correction, sorting, and looking up the data you need.

    Only very recently was the terabyte sort extended to a 100 terabyte sort. The current winner can crunch 100 terabytes in 173 minutes (which is blazingly fast).

    I can only imagine the processing power it would take to crunch 430 zettabytes (or 0.43 yottabytes) per day.

  7. 7
    bornagain77 says:

    andrewjg, I love the boil-the-ocean quote with 2^128 = 3.4 x 10^38 bytes, but I can’t seem to find the maximum speed at which it can operate for data transfer.

    just for fun, there are 4.3 x 10^46 molecules of water in the ocean.

  8. 8
    DLH says:

    Delving further into the archives, it appears zetta, as (1000)^7 or 10^21, was established by the 19th Conférence Générale des Poids et Mesures (CGPM).
    The BIPM also established yotta (Y) as the prefix for 10^24.
    See NIST on SI prefixes.

  9. 9
    andrewjg says:


    I heard the “boiling the oceans” line when watching a presentation by Bill Moore, one of the primary architects behind ZFS. I also found some interesting but useless “facts” regarding ZFS’s theoretical storage capacity.

  10. 10
    bornagain77 says:

    Thanks for the link. I must have a special place in my heart for useless information such as boiling the ocean and consuming 1000 moons because I will definitely try to find a place to use this “useless” knowledge somehow.

  11. 11
    andrewjg says:


    Me too. I am just not very good at cataloging it. I often find your links interesting, e.g. the Euler one recently.

  12. 12
    bornagain77 says:

    andrewjg, you’ve probably seen this number before, yet if you haven’t the largest “material” number I’ve ever seen semi-directly associated with “information” is this one:

    According to esteemed British mathematical physicist Roger Penrose (1931-present), the odds of one particular individual constant, the “original phase-space volume” of the universe, required such precision that the “Creator’s aim must have been to an accuracy of 1 part in 10^10^123”. This number is gargantuan. If this number were written out in its entirety, 1 with 10^123 zeros to the right, it could not be written on a piece of paper the size of the entire visible universe, even if a number were written down on each sub-atomic particle in the entire universe, since the universe only has 10^80 sub-atomic particles in it.
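The “could not be written on a piece of paper the size of the entire visible universe” claim reduces to simple arithmetic (a sketch using the figures quoted above):

```python
# Sketch: why 10^(10^123) cannot be written out digit by digit.
zeros_needed = 10**123      # a "1" followed by this many zeros
particles = 10**80          # rough particle count of the visible universe
zeros_per_particle = zeros_needed // particles
print(zeros_per_particle)   # 10**43 zeros would have to fit on each particle
```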

    Roger Penrose discusses initial entropy of the universe. – video

    The Physics of the Small and Large: What is the Bridge Between Them? Roger Penrose
    Excerpt: “The time-asymmetry is fundamentally connected with the Second Law of Thermodynamics: indeed, the extraordinarily special nature of the big bang (to a greater precision than about 1 in 10^10^123, in terms of phase-space volume) can be identified as the “source” of the Second Law (Entropy).”

    How special was the big bang? – Roger Penrose
    Excerpt: This now tells us how precise the Creator’s aim must have been: namely to an accuracy of one part in 10^10^123. (from the Emperor’s New Mind, Penrose, pp 339-345 – 1989)

    This 1 in 10^10^123 number, for the time-asymmetry of the initial state of entropy for the universe, also lends strong support for “highly specified infinite information” creating the universe since;

    “Gain in entropy always means loss of information, and nothing more.”
    Gilbert Newton Lewis – 1923 Authored Thermodynamics and the Free Energy of Chemical Substances with M. Randall.

  13. 13
    William Dembski says:

    Zettabytes appear explicitly in my 2004 THE DESIGN REVOLUTION (p. 120):

  14. 14
    bornagain77 says:

    DLH, I just stumble across a couple of Gitt’s quotes:

    DNA molecules contain the highest known packing density of information. This exceedingly brilliant storage method reaches the limit of the physically possible, namely down to the level of single molecules. At this level the information density is more than 10^21 bits/cm3.
    W. Gitt, In The Beginning Was Information, pg 195.

    Man is undoubtedly the most complex information processing system existing on the earth. The total number of bits handled daily in all information processing events occurring in the human body is 3 x 10^24. The number of bits being processed daily in the human body is more than a million times the total amount of human knowledge stored in all the libraries of the world, which is about 10^18 bits.
    W. Gitt, In The Beginning Was Information, pg 88.
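The “more than a million times” comparison in the second quote can be checked directly (a sketch using Gitt’s figures as quoted):

```python
# Sketch: Gitt's "more than a million times" comparison.
daily_bits = 3e24           # bits processed daily in the human body (Gitt)
library_bits = 1e18         # all the world's libraries (Gitt)
ratio = daily_bits / library_bits
print(ratio)                # ~3e6, i.e. several million times
```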

  15. 15
    bornagain77 says:

    100 gigabytes per second = 1 x 10^11 bytes per second

    1 x 10^38 bytes / 1 x 10^11 bytes per second = 1 x 10^27 seconds

    Did anybody bring a lunch? I think this is going to take a while.
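Spelled out (a sketch using the commenter’s round figures from above):

```python
# Sketch of the transfer-time arithmetic, using the round figures above.
data_bytes = 1e38                    # ~2**128 bytes, rounded down
rate = 1e11                          # "100 gigabytes per second"
seconds = data_bytes / rate
years = seconds / (365.25 * 24 * 3600)
print(seconds)                       # ~1e27 seconds
print(years)                         # ~3e19 years; bring a lunch indeed
```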

  16. 16
    bornagain77 says:

    andrewjg: some more “useless” knowledge:

    MapReduce: simplified data processing on large clusters
    Excerpt: … an average of one hundred thousand MapReduce jobs are executed on Google’s clusters every day, processing a total of more than twenty petabytes of data per day.

    i.e. Google processes 20 petabytes of data per day. That is, 20,000,000,000,000,000 bytes every day. This works out to around 231,000,000,000 bytes per second!!!
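The conversion can be verified in a couple of lines (figures as quoted from the paper):

```python
# Sketch: Google's quoted 20 petabytes/day as a per-second rate.
PETABYTE = 10**15
bytes_per_day = 20 * PETABYTE
bytes_per_second = bytes_per_day // 86_400
print(bytes_per_second)     # 231,481,481,481 -- around 231 billion bytes/s
```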

    which reminds me of these articles:

    Cells Are Like Robust Computational Systems, – June 2009
    Excerpt: Gene regulatory networks in cell nuclei are similar to cloud computing networks, such as Google or Yahoo!, researchers report today in the online journal Molecular Systems Biology. The similarity is that each system keeps working despite the failure of individual components, whether they are master genes or computer processors. … “We now have reason to think of cells as robust computational devices, employing redundancy in the same way that enables large computing systems, such as Amazon, to keep operating despite the fact that servers routinely fail.”

    Nanoelectronic Transistor Combined With Biological Machine Could Lead To Better Electronics: – Aug. 2009
    Excerpt: While modern communication devices rely on electric fields and currents to carry the flow of information, biological systems are much more complex. They use an arsenal of membrane receptors, channels and pumps to control signal transduction that is unmatched by even the most powerful computers.

  17. 17
    DLH says:

    William Dembski re

    Zettabytes appear explicitly in my 2004 THE DESIGN REVOLUTION (p. 120):

    Now why didn’t I remember that?
    Good reminder to dig deeper before going along with reporter enthusiasm.

  18. 18
    bornagain77 says:

    Off topic:

    Immense murmuration of Spectacular Starlings

  19. 19
    bornagain77 says:

    Much better Off topic:
    The hand of God… or Darwin. HDV starling footage cut to Pachelbel

  20. 20
    bornagain77 says:

    Just when you think you’ve seen it all:

    Wingsuit flying with rocketboots – CRAZY!


    BASE wingsuit mountain swooping

Leave a Reply