Uncommon Descent Serving The Intelligent Design Community

At Mind Matters News: Iron law of complexity: Complexity adds but its problems multiply


(Now how does this affect Darwinism? Any guesses?)

Robert J. Marks, Justin Bui, and Sam Haug discuss how programmers can use domain expertise to reduce the number of errors and false starts:

In “Bad news for artificial general intelligence” (podcast Episode 160), Justin Bui and Sam Haug from Robert J. Marks’s research group at Baylor University joined him for a look at how AI can go wrong — whether it’s an inconsequential hot weather story or imminent nuclear doom. Now, in Episode 161, they start by unpacking the significance of an ominous fact: When we increase complexity by adding things, we multiply the chances of things going wrong. Never mind getting an advanced machine to solve all our problems; it can’t solve its own:

News, “Iron law of complexity: Complexity adds but its problems multiply” at Mind Matters News

Sam Haug: Looking at a little bit more complex system — image recognition software, for example — one of them would be the wolf and dog classification that we talked about last time, where you feed a neural network a picture of either a dog or wolf and it tells you which it is. If you wanted to fully characterize the performance of this system, you would have to test every single combination of pixels in the image size that it’s going to be fed.

So for a small 100 by 100 pixel image, that’s 10,000 pixels that you need to test. And each of those pixels has 256 gray levels and three color choices, which is the RGB — the red, green, and blue values for each pixel. In this still relatively small design example, if you wanted to fully test the performance of any image classification software you’re designing, you would have to test it 10^29,000 times. That number is so large, it’s difficult to imagine.

As a bit of a ballpark estimate here, the number of atoms in the known universe is estimated to be around 10^80, which is an incredibly large number. But the number of contingencies with this small 100 by 100 image is just unfathomably larger than that: 10^29,000, which is just bigger than anything we could probably imagine.
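The combinatorics behind Haug’s point can be sketched in a few lines. This is an illustrative calculation only: it assumes 256 levels per color channel and three channels per pixel, and a straightforward full enumeration under those assumptions actually yields an even larger exponent than the 10^29,000 quoted above (the exact figure depends on how the per-pixel contingencies are counted). Either way, the count dwarfs the roughly 10^80 atoms in the observable universe.

```python
import math

pixels = 100 * 100   # 10,000 pixels in a 100 x 100 image
levels = 256         # assumed intensity levels per channel
channels = 3         # red, green, blue

# Number of distinct images = levels ** (pixels * channels).
# Work with the base-10 exponent to avoid building a gigantic integer.
exponent = pixels * channels * math.log10(levels)
print(f"distinct images ~ 10^{exponent:,.0f}")

# Compare against the ~10^80 atoms in the observable universe:
# exhaustive testing is hopeless even for this tiny image size.
print(exponent > 80)
```

Running this shows the exponent alone is in the tens of thousands, so exhaustively testing every input image is out of the question — which is why, as the next line notes, shortcuts (and the intelligence to choose them) are required.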

Obviously, shortcuts are used but that requires intelligence.

See the paper where Haug and colleagues unpack the problem: Haug, Samuel, Robert J. Marks, and William A. Dembski. “Exponential Contingency Explosion: Implications for Artificial General Intelligence.” IEEE Transactions on Systems, Man, and Cybernetics: Systems (2021).

Here are Parts 1 and 2 of Episode 159, featuring Robert J. Marks and Justin Bui

If not Hal or Skynet, what’s really happening in AI today? Justin Bui talks with Robert J. Marks about the remarkable AI software resources that are free to download and use. Free AI software means that much more innovation now depends on who gets to the finish line first. Marks and Bui think that will spark creative competition.

Have a software design idea? Kaggle could help it happen for free. Okay, not exactly. You have to do the work. But maybe you don’t have to invent the software. Computer engineer Justin Bui discourages “keyboard engineering” (trying to do it all yourself). Chances are, many solutions already exist at open source venues.

In Episode 160, Sam Haug joined Dr. Marks and Dr. Bui for a look at what happens when AI fails. Sometimes the results are amusing. Sometimes not. They look at five instances, from famous but trivial right up to one that nearly ended the world as we know it. As AI grows more complex, risks grow too.
