Uncommon Descent Serving The Intelligent Design Community

Applied Intelligent Design, Part 2: ID and Software Engineering


In part 1 of this series, I discussed one application of ID concepts to the biological sciences. In this post, I'm going to turn to two areas where I apply Intelligent Design in my own field of software engineering.

In part 1, I referred to a recent paper of mine in which I extended the concept of Irreducible Complexity by founding it on computation theory. There, I used this extended concept to argue that while open-ended evolution cannot occur, parameterized evolution can. In this post we are going to relate these findings to the field of software engineering and see how they apply there.

When relating evolutionary concepts to software design, it makes for a usable analogy to consider the evolutionary process as a designer, but one without much skill or foresight. That is, evolution can tinker, but it cannot plan a design ahead of time and then execute it. Interestingly, this parallels quite well the way that ordinary users interact with software. Developers and power users, of course, interact with software at a level that looks more like design. Regular users, however, interact with software at a level that resembles evolution: they try a bunch of stuff, see what works, and then keep doing that.

It is a temptation in software development to try to build systems that accommodate every possible scenario. In order to appeal to the widest possible market, managers tend to want every possible feature and every possible mode of interaction encoded in the software, and to consider any omission a bug. What they want is for the software to be so flexible that it can fit into any workflow. In other words, since we can't know ahead of time all the scenarios in which someone may want to use the software, meeting this requirement means the software must be open-ended.

However, as I noted in my paper on Irreducible Complexity, open-ended systems are, by computational necessity, chaotic spaces – meaning that it is not possible for an unintelligent process to navigate them successfully. That isn't to say that users are unintelligent, but rather that their intelligence should be focused on solving their own problems, not on making software do what they want it to do.

My research indicates that there is a hard limit on what unintelligent processes should be expected to do. Anything requiring an open-ended loop should be considered far outside the boundary of what an unintelligent process can be expected to accomplish – it is Irreducibly Complex. Below that limit, there are varying levels of possibility. Users can take advantage of close-ended loops, functions, and other similarly parameterized abilities. In fact, good software design can be considered a good parameterization of the tasks that you need to do. In the paper Turing Completeness Considered Harmful, McDirmid made the following observation about Turing-complete (i.e., open-ended) software systems: in them, "programmers can express arbitrarily complicated forms of program control flow that are difficult to write down, debug, and otherwise reason about." That is, the chaotic space introduced by an open-ended system makes it difficult to think about what the software is doing.
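
The contrast between open-ended and close-ended loops can be sketched in code. The example below is a hypothetical illustration (not from the referenced paper): both functions compute the same sum, but the first exposes arbitrary control flow that the reader must reason about, while the second is bounded by construction.

```python
# Hypothetical illustration: the same task written two ways.

def total_open_ended(items):
    """Open-ended: arbitrary control flow whose termination the reader
    must reason about (nothing in the construct prevents an infinite loop)."""
    total = 0
    i = 0
    while True:
        if i >= len(items):
            break
        total += items[i]
        i += 1
    return total

def total_parameterized(items):
    """Close-ended: a bounded, parameterized operation that cannot fail
    to terminate; the user supplies data, not control flow."""
    return sum(items)

assert total_open_ended([1, 2, 3]) == total_parameterized([1, 2, 3]) == 6
```

The parameterized form is the one a well-designed interface would expose: the user never needs to verify that the loop terminates, because no loop is visible to them.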

What is needed is a system that is simplified enough that the users don’t need to reason about it to know how it works – they can simply see the effects. In addition, it needs to be simplified along the lines of tasks that they are likely to do. Another term for simplifying something along certain demarcated lines is parameterization.

The problem with parameterization is that it is necessarily close-ended. That is, decisions have to be made up front about what sorts of solutions people can use your new software product for, and what sorts of solutions they can't (or can only with custom development).
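
As a sketch of what such up-front parameterization looks like in practice (all names and options here are invented for illustration, not taken from any real system):

```python
# Hypothetical sketch: a close-ended, parameterized interface.
# Users pick from demarcated options; arbitrary logic is deliberately
# out of scope and would require custom development.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReportOptions:
    sort_key: str = "name"       # which column to sort by
    descending: bool = False     # sort direction
    limit: Optional[int] = None  # cap on the number of rows returned

def render_report(rows, opts: ReportOptions):
    """Every reachable behavior is spanned by the parameters above."""
    out = sorted(rows, key=lambda r: r[opts.sort_key], reverse=opts.descending)
    return out if opts.limit is None else out[:opts.limit]

rows = [{"name": "b", "n": 2}, {"name": "a", "n": 1}]
report = render_report(rows, ReportOptions(sort_key="n", descending=True))
```

The up-front decision is visible in the dataclass: a user can sort, reverse, and truncate, but cannot, say, group or join – those solutions are outside the parameterization.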

Thus, appealing to everyone actually causes problems in software design, because you are making your users reason too hard about how their actions affect the final state of the program. By making a well-parameterized system, you limit your audience, but you also set them free, and make exploration of the software’s potential easy. When the system is not well-parameterized, even basic usage is clunky, even if it can do everything.

ID, therefore, has a lot to do with software engineering. By having a theory of design, and what sort of things require design, and even metrics on how to calculate how much design something requires, a software engineer can deduce how much work they are forcing onto their users. Software engineers (myself included) are notoriously bad about this. By looking at the measurements defined by the ID community, we can estimate the amount of unnecessary work software systems are imposing on their users.

Along the same lines, these concepts can be used to extend traditional notions of complexity for software lifecycle management and testing. Cyclomatic complexity, for instance, measures how many linearly independent paths exist through a program, and therefore how many test cases are needed to exercise each of them. ID, with my extended concept of Irreducible Complexity, can also identify parts of a program which are especially sensitive to error and should receive priority in testing.
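
Cyclomatic complexity can be approximated by counting decision points and adding one. Below is a minimal sketch (my own, not from the post or any paper) that counts a few common branching constructs in a Python AST, so it understates the true figure for some programs.

```python
# Minimal sketch: approximate McCabe's cyclomatic complexity of Python
# source by counting decision points in its AST (complexity = decisions + 1).
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "neg"
    elif x == 0:
        return "zero"
    return "pos"
"""
print(cyclomatic_complexity(src))  # two branch points -> 3
```

A function scoring above some chosen threshold would then be a candidate for the priority testing described above.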

Therefore, ID, by providing a theory of design and ways of measuring/determining the types of systems which require a designer (and to what degree), gives software engineers both a way of determining whether their interfaces are unintuitive and a way of identifying sections of code which are especially prone to bugs.

Comments
@scordova,
Pattern recognition and reverse engineering (the heart of ID, since “ID is the search for patterns which signify intelligence”) …
Do you claim that the activity IDists call design detection would be called reverse engineering by engineers? If so, is this notion getting any traction within the ID movement?

Freelurker
March 15, 2010, 03:52 PM PST
Software development is an activity of almost constant design.
God: "I'm going to need to refactor that last bit of doggie genome I just wrote, but for now it works."

Mung
March 14, 2010, 08:57 PM PST
Every small change must prove itself immediately and carry its own weight, otherwise it will be selected away.
Significantly, 94% of the time it won't even be statistically noticed ;).

Mung
March 14, 2010, 08:53 PM PST
johnnyb @7,
Therefore, evolution cannot occur along lines that require forward reasoning.
Agreed, as evolution has no future goal. Evolution is simply the reward a mutation gets for surviving another generation.

Toronto
March 14, 2010, 08:12 PM PST
osteonectin – "1. Which of the tools ID provides are you using for such calculations?"

Irreducible Complexity. See, specifically, my paper which uses computability theory as a theoretical basis for IC.

"Could you provide an example of software you've treated that way?"

What we do is a slight modification of this. We have a custom content management system that we use for advanced modes of content management. There are parts of it that are useful to us, the developers, and parts that are useful to the users. If a part of the interface requires knowledge of loops to implement, we hide it from the user; if it doesn't, we generally show it. For instance, custom HTML is within the scope of things that advanced customers can do, but custom layouts that require iterating through design elements usually are not.

"How much unnecessary work is life/DNA imposing on us and how is it calculated?"

I think you've misunderstood a little of what I'm saying. Life/DNA isn't imposing unnecessary work. Evolution is what is doing the work in the "life" scenario, and it is incapable of performing what I am classifying as "unnecessary work" (i.e., forward reasoning). Therefore, evolution cannot occur along lines that require forward reasoning. Applications of this are available in my referenced paper.

johnnyb
March 14, 2010, 07:32 PM PST
I've pointed out before why survival and natural selection are poor ways to elucidate the existence of a functional design. There are functional designs which will by and large be invisible to selection.

Pattern recognition and reverse engineering (the heart of ID, since "ID is the search for patterns which signify intelligence") will be a superior method for elucidating functional designs versus Darwinism. I laid out here the reasons why natural selection is the wrong heuristic for evaluating the existence of functional architectures, and why it will fail to detect function when it would otherwise be evident to an engineer: Airplane Magnetos, Contingency Designs, and Reasons ID will Prevail.

Thus, it stands to reason, the design paradigm is scientifically superior to natural selection in the identification of designs in nature!

scordova
March 14, 2010, 12:19 PM PST
Therefore, ID, by providing a theory of design, and ways of measuring/determining the types of systems which require a designer (and to what degree), give software engineers both a way of determining whether their interfaces are unintuitive, and also give them a way of identifying sections of code which are especially prone for the introduction of bugs.
A very easy way is sequence comparison to establish linguistic patterns, and thus reverse-engineer the syntax and grammar of systems. Violations of linguistic patterns suggest bugs. There may be grammars which can be elucidated via DNA comparisons across all species. John Sanford showed me some of the work of ENCODE. My jaw dropped! There will be much fruitful research, maybe even $$$, here.

scordova
March 14, 2010, 12:14 PM PST
Hello software developers,

I was recently a participant in a several-days-long design session for work we will be starting imminently. Since this work has an initial delivery deadline, a constant theme of our discussions was how much to do "now" (before the deadline) vs. how much to put off until "later" (after the deadline). Some of us grumbled that we really needed more time to be able to implement a better/proper design. As it is, we have to implement our changes in "design increments" which match our delivery schedule. From a design perspective, using design increments is definitely the harder approach.

It occurred to me that Darwinism doesn't have the opportunity to implement a better design through either complete designs or intermediate design increments. Every small change must prove itself immediately and carry its own weight; otherwise it will be selected away. This is definitely the hardest approach – by a long shot. Thank heavens that we software designers do not have to work with such Darwinian constraints! Large architectural changes would be virtually impossible to make. At best, making major architectural changes one small self-supporting step at a time would take an immensely long time!

A software development approach known as "Agile" (a lighter-weight process which is presumably more effective and more efficient) encourages a test-driven approach where every corner of the system is constantly being tested. If any of these tests identifies an error, it can quickly be "selected" out (by applying the effort of an intelligent agent to fix the problem). Even in an environment where intelligent agents are constantly designing and implementing changes, natural (OK, maybe artificial) selection via test-driven development still has its place.

Software development is an activity of almost constant design. Implementation of software (known as "programming"), although normally not considered part of the "software design" process, is really just design at a very fine level. Software developers are therefore designing constantly! I think it would be fascinating to compare the constant-design activity of the software development process with its Darwinian counterpart, which, ostensibly, produces output of comparable (actually, substantially greater) complexity and quality.

Interestingly enough, the creator of Linux once compared the Linux open-source development process of many contributors adding to the software baseline with the process of evolution (Linus Says, Linux Not Designed; It Never Was). That comparison, though pretty loose, does capture the idea of having many programmers trying out many solutions to a programming problem, with the "fittest" one surviving. Maybe evolution did happen that way: many individual designers competing for the best solution.

I think there is food for thought here. Software engineers are among those professions which are constantly engaged in design. One would think their experience of what activities can produce "the appearance of design" (and the difficulty with which a "quality" design can be produced) would shed some light on the feasibility of the Darwinian model.

EndoplasmicMessenger
March 14, 2010, 12:07 PM PST
Have you explored whether software design patterns can be found in living organisms?

Mung
March 14, 2010, 09:46 AM PST
johnnyb, let's get together with Nakashima and create a simulation of evolution and put all this to the test.

Mung
March 14, 2010, 09:45 AM PST
ID, therefore, has a lot to do with software engineering. By having a theory of design, and what sort of things require design, and even metrics on how to calculate how much design something requires, a software engineer can deduce how much work they are forcing onto their users. Software engineers (myself included) are notoriously bad about this. By looking at the measurements defined by the ID community, we can estimate the amount of unnecessary work software systems are imposing on their users.
1. Which of the tools ID provides are you using for such calculations?
2. Could you provide an example of software you've treated that way?
3. How much unnecessary work is life/DNA imposing on us and how is it calculated?

osteonectin
March 13, 2010, 10:19 PM PST
