
Recall the genome mapping mess that Kas Thomas alerted us all to? Physicist Rob Sheldon writes to say that we should pay attention to Thomas’s point here:
What it tells us is that modern “annotation” tools for finding genes in the database of transcribed codons use iterative methods that have to be “trained,” rather like the way speech recognition programs have to be trained. Nothing wrong with that; there are all sorts of reasons why iterative processes are efficient and effective. However, they are not deterministic. There is no fixed “rule” that determines the output. Instead, the “rules” are fuzzy logic derived from a set of examples, a “training set.”
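To make the distinction concrete, here is a minimal sketch of “annotation by training set.” Nothing in it is a real pipeline; the function names, sequences, and labels are all hypothetical, chosen only to show that the output is set by the examples, not by a fixed rule.

```python
# A minimal, hypothetical sketch: a new sequence gets the label of its most
# similar training example. Change the training set and the "rule" changes.

def similarity(a: str, b: str) -> int:
    """Count matching positions between two equal-length sequences."""
    return sum(x == y for x, y in zip(a, b))

def annotate(seq: str, training_set: list[tuple[str, str]]) -> str:
    """Label `seq` from its most similar training example; no fixed rule."""
    _, best_label = max(training_set, key=lambda ex: similarity(seq, ex[0]))
    return best_label

training_set = [("ATGGCC", "kinase"), ("TTTAAA", "non-coding")]
print(annotate("ATGGCA", training_set))  # -> kinase

# In practice, newly annotated sequences are often added back into the
# training set: the feedback loop Sheldon turns to next.
```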
Okay, now we get to the nub of the problem. If errors creep into the database, then they get used in the training set, and then even more errors are made, until the percentage of errors grows to some limiting value. However, if the database itself is being expanded at an exponential rate, then early errors propagate exponentially, so that there is no limit to the percent corruption of the database. From the charts in Thomas’s post, the annotations in the genome database are now over 50% wrong; they are more often wrong than right.
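A toy simulation can show both regimes Sheldon describes; the numbers below are assumptions for illustration, not Thomas’s data. With weak feedback the error fraction settles at a limit, while with strong feedback on an exponentially growing database it keeps climbing.

```python
# Toy model (all parameters are illustrative assumptions): each round the
# database grows by a factor of `growth`, and the chance a new annotation is
# wrong rises with the error fraction already present in the training set.

def simulate(rounds: int, growth: float, base_error: float, feedback: float) -> None:
    correct, wrong = 1000.0, 10.0  # hypothetical starting database
    for _ in range(rounds):
        total = correct + wrong
        error_frac = wrong / total
        new = total * (growth - 1.0)  # exponential growth: total -> growth * total
        p_wrong = min(1.0, base_error + feedback * error_frac)
        wrong += new * p_wrong
        correct += new * (1.0 - p_wrong)
        print(f"size={correct + wrong:12.0f}  errors={wrong / (correct + wrong):6.2%}")

# Weak feedback: the error fraction grows toward a limiting value (~4% here).
simulate(rounds=10, growth=2.0, base_error=0.02, feedback=0.5)
# Strong feedback: early errors compound and the fraction keeps climbing.
simulate(rounds=10, growth=2.0, base_error=0.02, feedback=1.5)
```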
And this is exactly what is happening across the sciences. Materialism is a bad philosophy that allows “what is observed” to replace “what exists,” because it has no correction algorithm: nothing, for example, to tell evolutionary biology that “what is observed” is fragmentary and unusual, not a rule of nature that makes competition between close cousins inevitable.
Instead, science makes a model, baptizes it as “reality,” and then bases models on the models, all the while “tuning” the models to match the observations. These enormously complicated multi-dimensional curve fits are then equated with reality because they can be made to fit the observations, just as 11 epicycles were used before Copernicus to explain the motion of Jupiter.
How did we get into this mess? By making careers dependent on “research,” by tying funding to the acceptance of “standard models,” by refusing to allow teleology into our observations, by claiming that a 5-parameter fit to the data was a “better model” than a 1-parameter fit that was “fine-tuned.” And like the genome database, the models become the “training set” for the next set of models, until we have an elaborate epicyclic universe with boundless error.
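The parameter-counting complaint can be seen in a toy curve fit (the example is mine, not Sheldon’s): the model with more free parameters matches the data in hand better, but only because it tracks the noise, and it typically falls apart just outside the fitted range.

```python
# Toy illustration of "5-parameter vs. 1-parameter" fits; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)  # underlying "law": y = 2x, plus noise

simple = np.polyfit(x, y, deg=1)  # 2 coefficients
wiggly = np.polyfit(x, y, deg=5)  # 6 coefficients: hugs the noise

x_out = np.array([1.1, 1.3, 1.5])  # just beyond the fitted data
print(np.polyval(simple, x_out))   # stays close to 2x
print(np.polyval(wiggly, x_out))   # usually drifts far from 2x
```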
How do we escape this mess? By going back to basics, to deterministic models not based on other models, e.g., philosophically based foundations. By not tying funding to acceptance by peers. By not tying careers to research dollars received. By recognizing all the ways in which positive feedback encourages the exponential explosion of error, and making institutional and foundational changes to prevent it. Sort of like the unwritten rule that professors who get a PhD at an institution are barred from teaching there.
But that goes against the entire superstructure of post-WWII science that gave us all the marvels of the 21st century!
Yes it does. Some positive feedback parasites are decadal, some generational. This is a generational one. It is time for us to address it so that, like marrying your cousin, it doesn’t come back and kill us in 30 years. Science and careers are 2nd cousins. Science and government are 1st cousins. It’s time for some consanguinity laws.
The end of everything is in its beginning, they say. The science that put people on the moon has degenerated to speculation about why ET isn’t returning our calls and how many multiverses can be crammed into a black hole.
Readers, thoughts?