Navigating the knowns and the unknowns, computer engineers must choose between levels of cost and risk against a background of uncertainty:
Robert J. Marks: … I’m thinking of the design of human beings. We’re still not perfect. I don’t know if there are unintended contingencies or not, but things like COVID, for example. We weren’t designed to handle COVID, especially old people like me, or even something similar, like eating hemlock, the way that Socrates was killed. We also see birth defects, diseases such as cancer, and things of that sort. Isn’t this an example of contingencies which we would prefer not to see in the design of humans?
Note: The great philosopher Socrates (470–399 BC) drank hemlock after being condemned for corrupting young people by encouraging them to ask too many questions.
Sam Haug: The way I like to think about how human beings fail in certain circumstances falls into two categories. The first category is that our creator intentionally did not design us to withstand this particular contingency. When designing a human being or any incredibly complex system, there are some design trade-offs. You can design a human being to be able to resist the effects of eating hemlock, for example, but the cost for doing that may be large.
For example, you would need to include an entirely new metabolic pathway to account for that particular poison. And doing that for any number of poisons may just not be feasible in the size of the human body. I don’t claim to know about all the design implications of making a human being, but I’m sure that there was some level of intentionality in not designing human beings to withstand some things for trade-off reasons…
News, “The Pareto tradeoff — choosing the best of a mixed lot” at Mind Matters News (December 3, 2021)
Takehome: Computer engineers Robert J. Marks, Sam Haug, and Justin Bui look at the constraints that underlie any engineering design — even the human body.
Here are Parts 1 and 2 of Episode 159, featuring Robert J. Marks and Justin Bui:
If not Hal or Skynet, what’s really happening in AI today? Justin Bui talks with Robert J. Marks about the remarkable AI software resources that are free to download and use. Free AI software means that much more innovation now depends on who gets to the finish line first. Marks and Bui think that will spark creative competition.
Have a software design idea? Kaggle could help it happen for free. Okay, not exactly. You have to do the work. But maybe you don’t have to invent the software. Computer engineer Justin Bui discourages “keyboard engineering” (trying to do it all yourself). Chances are, many solutions already exist at open source venues.
In Episode 160, Sam Haug joined Dr. Marks and Dr. Bui for a look at what happens when AI fails. Sometimes the results are amusing. Sometimes not. They look at five instances, from famous but trivial right up to one that nearly ended the world as we know it. As AI grows more complex, risks grow too.
In Episode 161, Part 1, Marks, Haug, and Bui discuss the Iron Law of Complexity: Complexity adds but its problems multiply. That’s why more complexity doesn’t mean more things will go right; without planning, it means the exact opposite. They discuss how programmers can use domain expertise to reduce the number of errors and false starts.
In Part 2 of Episode 161, they look at the Pareto tradeoff and the knowns and unknowns:
Navigating the knowns and the unknowns, computer engineers must choose between levels of cost and risk against a background of uncertainty. Constraints underlie any engineering design — even the human body.