In part 1 of this series, I discussed one application of ID concepts to the biological sciences. In this post, I'm going to shift my focus to two areas where I apply Intelligent Design to my own field of software engineering.
In part 1, I referred to a recent paper of mine in which I extended the concept of Irreducible Complexity by grounding it in computation theory. In that post, I used the extended concept to note that while open-ended evolution cannot occur, parameterized evolution can. In this post we are going to relate those findings to the field of software engineering, and see how they apply.
When relating evolutionary concepts to software design, a usable analogy is to consider the evolutionary process as a designer, but one without much skill or foresight. That is, evolution can tinker, but it cannot plan its designs ahead of time and then execute them. Interestingly, this parallels quite well the way that normal users interact with software. Developers and power users, of course, interact with software at a level that looks more like design. Regular users, however, interact with software at a level that resembles evolution: they try a bunch of things, see what works, and then keep doing that.
It is tempting in software development to try to build systems that accommodate every possible scenario. In order to appeal to the widest possible market, managers tend to want every possible feature and every possible mode of interaction encoded in the software, and to consider it a bug if one is not. What they want is for the software to be so flexible that it can fit into any workflow. In other words, since we can't know ahead of time all the scenarios in which someone may want to use the software, meeting this requirement means the software must be open-ended.
However, as I noted in my paper on Irreducible Complexity, open-ended systems are, by computational necessity, chaotic spaces – meaning that it is not possible for an unintelligent process to navigate them successfully. That isn't to say that users are unintelligent, but rather that their intelligence should be focused on solving their own problems, not on making software do what they want it to do.
My research indicates that there is a hard limit on what unintelligent processes should be expected to do. Anything requiring an open-ended loop lies far outside that boundary – it is Irreducibly Complex. Below that limit, there are varying levels of possibility: users can take advantage of close-ended loops, functions, and other similar parameterized abilities. In fact, good software design can be considered a good parameterization of the tasks that users need to do. In the paper Turing Completeness Considered Harmful, McDirmid made the following comment about Turing-complete (i.e., open-ended) software systems: in them, "programmers can express arbitrarily complicated forms of program control flow that are difficult to write down, debug, and otherwise reason about." That is, the chaotic space introduced by an open-ended system makes it difficult to think about what the software is doing.
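To make the distinction concrete, here is a minimal sketch of the difference between a parameterized (close-ended) loop and an open-ended one. The function names and scenario are my own illustration, not taken from the paper:

```python
# A close-ended, parameterized task: the loop is bounded by the length
# of its input, so it always terminates and its behavior can be read
# directly off the parameters.
def apply_discount(prices, percent):
    return [p * (1 - percent / 100) for p in prices]

# An open-ended loop: termination depends on arbitrary runtime state.
# This is the kind of Turing-complete control flow that is hard to
# reason about -- settle(10, 0), for example, would never terminate.
def settle(balance, payment):
    steps = 0
    while balance > 0:   # no fixed bound on iterations
        balance -= payment
        steps += 1
    return steps

print(apply_discount([10.0, 20.0], 50))  # [5.0, 10.0]
print(settle(10, 3))                     # 4
```

The first function's behavior is fully determined by its parameters; the second requires reasoning about how state evolves across an unbounded number of iterations.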
What is needed is a system that is simplified enough that the users don’t need to reason about it to know how it works – they can simply see the effects. In addition, it needs to be simplified along the lines of tasks that they are likely to do. Another term for simplifying something along certain demarcated lines is parameterization.
The problem with parameterization is that it is necessarily close-ended. That is, decisions have to be made up front about what sorts of solutions people can use your new software product for, and what sorts they can't (or at least can't without custom development).
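A small sketch of what this looks like in an API. The export function below is a hypothetical example of my own: the designer enumerates the supported cases up front, and anything outside that parameterization explicitly requires custom development rather than being expressible by the user:

```python
# Hypothetical example: a close-ended, parameterized export function.
# The designer decided up front which formats are supported; users pick
# a parameter value instead of writing open-ended formatting logic.
def export(records, fmt="csv"):
    if fmt == "csv":
        return "\n".join(",".join(map(str, r)) for r in records)
    if fmt == "tsv":
        return "\n".join("\t".join(map(str, r)) for r in records)
    # Anything else falls outside the parameterization.
    raise ValueError(f"unsupported format: {fmt}")

print(export([(1, "a"), (2, "b")]))  # 1,a
                                     # 2,b
```

The user's reasoning is reduced to choosing a parameter value; the trade-off is that a format the designer never anticipated simply cannot be produced.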
Thus, appealing to everyone actually causes problems in software design, because you are making your users reason too hard about how their actions affect the final state of the program. By building a well-parameterized system, you limit your audience, but you also set those users free and make exploration of the software's potential easy. When the system is not well-parameterized, even basic usage is clunky – even if the system can, in principle, do everything.
ID, therefore, has a lot to do with software engineering. By having a theory of design, of what sorts of things require design, and even metrics for calculating how much design something requires, a software engineer can deduce how much work they are forcing onto their users. Software engineers (myself included) are notoriously bad at this. By looking at the measurements defined by the ID community, we can estimate the amount of unnecessary work software systems impose on their users.
Along the same lines, these concepts can be used to extend traditional notions of complexity in software lifecycle management and testing. Cyclomatic complexity, for instance, is a measure of how many independent pathways exist through a computer program, and therefore how many independent tests are required to exercise them all. ID, with my extended concept of Irreducible Complexity, can also identify parts of a program which are especially sensitive to error and should receive priority in testing.
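The idea behind cyclomatic complexity can be sketched in a few lines: count the branching constructs and add one. This is a simplification of McCabe's metric (production tools also count boolean operators, comprehensions, and so on), offered only to show what "counting pathways" means:

```python
import ast

def cyclomatic_complexity(source):
    """Simplified McCabe metric: branch nodes in the AST, plus one."""
    tree = ast.parse(source)
    branches = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler))
        for node in ast.walk(tree)
    )
    return branches + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
# The if/elif pair contributes two If nodes, so classify scores 3 --
# matching its three independent paths, each needing its own test.
print(cyclomatic_complexity(code))  # 3
```

A straight-line function scores 1; each added branch raises the count, and with it the number of test cases needed for full path coverage.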
Therefore, ID, by providing a theory of design and ways of measuring which types of systems require a designer (and to what degree), gives software engineers both a way of determining whether their interfaces are unintuitive and a way of identifying sections of code which are especially prone to the introduction of bugs.