Uncommon Descent: Serving the Intelligent Design Community

Native Intelligence Metrics

Exercising a Native Intelligence Metric on an Autonomous On-Road Driving System

The intelligence of artificial systems is well quantified by the amount of specified complexity inherent in the system representation, provided we have tools to measure it. Some may generally agree with this claim, but argue that it is simply intractable to successfully and accurately measure the specified complexity of any system, no matter how it was represented. We respond to this important and substantive criticism by performing a computation required by our intelligence metric on an example problem. We have chosen autonomous on-road driving, a problem that has already been solved by “systems” that are known to be both complex and specified, namely, humans. We will begin with a concise statement of the scope of the problem and a summary description of an appropriate system representation approach. We describe generally how to apply a previously published Native Intelligence Metric (NIM) to measure the specification inherent in that representation. We claim that with an appropriate intelligence metric and an appropriate system representation, we can establish an equivalency between 1) the “state of the world” conditions, forming the input to the system, that the system can respond to successfully, 2) the system representation, and 3) the system performance. This equivalency is a potentially powerful result of both the intelligence metric and the system representation approach described in this paper.

http://www.isd.mel.nist.gov/documents/horst/PerMIS_2003.pdf
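
The paper itself is not excerpted in code form here, but the "complexity" half of a specified-complexity calculation is easy to illustrate. The toy Python sketch below is entirely our own illustration, not Horst's method: the rule vocabulary, the uniform null hypothesis, and every name are assumptions. It assigns -log2 P bits to a rule-based driving representation drawn from a finite rule space, and it deliberately omits the "specified" half, which would require showing that the rules match an independently given functional pattern such as safe driving.

```python
import math

# Hypothetical toy representation (our assumption): a driving system
# expressed as IF-THEN rules over a small, finite vocabulary.
CONDITIONS = ["obstacle_ahead", "lane_clear", "light_red", "light_green",
              "pedestrian_near", "speed_over_limit"]
ACTIONS = ["brake", "accelerate", "steer_left", "steer_right", "hold_course"]

# An example rule set standing in for the "system representation".
rules = [
    ("obstacle_ahead", "brake"),
    ("light_red", "brake"),
    ("lane_clear", "accelerate"),
    ("pedestrian_near", "brake"),
]

def complexity_bits(rule_set):
    """-log2 of the probability of drawing this exact rule set uniformly
    at random from all (condition, action) pairs: a crude complexity proxy."""
    p_single_rule = 1.0 / (len(CONDITIONS) * len(ACTIONS))
    return -len(rule_set) * math.log2(p_single_rule)

print(f"{complexity_bits(rules):.1f} bits of complexity")  # ~19.6 bits
```

Under these assumptions the four-rule set scores about 19.6 bits; a real NIM computation over a full driving representation would involve a vastly larger rule space and, crucially, the specification test the sketch leaves out.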

Comments
Chunkdz, you beat me to it. I couldn't remember whether I saw a post here about DARPA last year. The autonomous vehicle contest was cool: five teams finished the course last year, compared to none in 2004. It's quite a breakthrough. The systems employed are much like what John Horst outlined. His papers from 2002-2003 predate the 2004 contest, and DARPA is listed under Research at his site. Here's his link: http://www.isd.cme.nist.gov/personnel/horst/ For those interested, see http://www.grandchallenge.org/ and click on the different teams. It seemed to be mostly off-the-shelf products, with the exception of the imaging systems (maybe DOD-sponsored), which is why I remember being so impressed by the results, since in 2004 no one finished the race.

Having worked with rules-based detached systems since '87, I recognize why Horst would interpret 'detachment' as a key to the CSI quotient of an intelligent system. I assume that for Dembski, Dave, and others this is quite obvious stuff, but it's been a while since I last looked at rules-based systems. Is the following simple statement the correct interpretation of Horst's use of CSI within the NIM: the more external rules interpreted by the hierarchical access methods of an intelligent system, the more indicative the case for intelligent design? (Re: his recognition of externally stored rules-based systems as a key to complexity.) The obvious consequence, then, is our own use of libraries, network databases, and expert systems for storage and retrieval by the human mind.

This points to a gigantic gap between human and chimp performance metrics, and it can also explain the exponential rise of information exchanged between humans. It would severely discredit the chimps' 'closeness' by most standard QA measurements. Recognizing such a large gap naturally leads one to conclude that we're missing something when comparing our own genome to the chimps' via DNA and other molecular components, which leads once again to ID's insistence that the non-coding regions are significant. It's no longer about a 2-4.8% difference in DNA, but a new way of looking at intelligence levels based upon "usage of external rules-based systems." After all, books are storage units for external rules of science, God, and any other information.

I'd say time spent on macro-evolution yields a near-zero return on investment, and performance metrics would confirm this observation. Dr. Skell would be justified in his observation that evolution, specifically macro-evolution, is useless to today's scientific progress. Dave is correct: the macro-evolution pieces of the puzzle are much like a book of collected stamps. While I'm sure we can learn much in the way of forensics about historical causes of extinction and many other areas, it offers little, I think, in the way of mechanisms for future applications and breakthrough science. The design paradigm takes over. That is why, as posted previously, Intel's CEO and others recognize how little macro-evolution means to their corporations' bottom line.

Michaels7
April 20, 2006 at 1:27 PM PDT
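
To make the commenter's picture of "external rules interpreted by hierarchical access methods" concrete, here is a minimal sketch (again our own illustration, not code from Horst's paper; the store layout and every name are hypothetical) of a "detached" system: the knowledge lives in an external, hierarchically keyed rule store, and a thin interpreter merely walks the hierarchy and applies whatever it finds.

```python
# Hypothetical detached rule store: the rules live outside the interpreter,
# organized hierarchically by driving context (all names are made up).
RULE_STORE = {
    "highway": {
        "merging":  [("gap_available", "accelerate"), ("no_gap", "hold_course")],
        "cruising": [("car_braking_ahead", "brake")],
    },
    "urban": {
        "intersection": [("light_red", "brake"), ("light_green", "proceed")],
    },
}

def lookup(context_path):
    """Hierarchical access: walk keys like ['urban', 'intersection']."""
    node = RULE_STORE
    for key in context_path:
        node = node[key]
    return node

def act(context_path, observation):
    """Thin interpreter: fire the first matching externally stored rule."""
    for condition, action in lookup(context_path):
        if condition == observation:
            return action
    return "hold_course"  # default when no rule matches

print(act(["urban", "intersection"], "light_red"))  # -> brake
```

The point of the sketch is the separation: growing the system means adding entries to RULE_STORE, not rewriting the interpreter, which is the sense of "detachment" the comment attributes to Horst.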
DARPA has been doing something like this for years. Last year they actually succeeded. I'd like to see a comparison of the theoretical to the actual based upon the NIM. The data should be quantifiable in both cases.

chunkdz
April 20, 2006 at 10:08 AM PDT
Hot diggity. Someone actually criticized specified complexity to me by saying it goes unused outside of design-theorist circles. Are there any other substantive uses of specified complexity out there?

jaredl
April 20, 2006 at 8:46 AM PDT
