One of the central requirements of design arguments is to evaluate the probability of patterns emerging through undirected processes. Examples of such evaluation schemes include Behe’s irreducible complexity, Ewert et al.’s algorithmic specified complexity (ASC), and Hazen et al.’s functional information. In my previous article, I focused on the last measure. All of these approaches attempt to quantify what is termed specified complexity, which characterizes complex patterns containing meaningful (i.e., specified) information. The various approaches have been generalized by computer scientist George Montañez (see here and here). He enumerated the core steps for constructing and evaluating any measure of specified complexity:

1. Determine the probability distribution for observed events based on assumed mechanisms. In other words, identify for each possible event the probability for it to occur.

2. Define a function that assigns to each event a specificity value.

3. Calculate the canonical specified complexity of an outcome by taking the negative log (base 2) of its specified complexity kardis, which is the event’s probability divided by its specificity value and multiplied by a scaling factor.

4. Determine the likelihood that an event occurred through any proposed mechanism with the assumed probability distribution. That probability is bounded above by the kardis. If the bound is exceedingly small, the claim that the outcome occurred through the proposed mechanism can be rejected with high confidence.
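The steps above can be sketched numerically. The following is a minimal illustration, not Montañez’s own code; the function names are my own, and it assumes the kardis takes the form kappa = r · p(x) / nu(x), where p is the event’s probability, nu its specificity value, and r the scaling factor:

```python
import math

def kardis(p, nu, r):
    # Specified complexity kardis: kappa = r * p(x) / nu(x)
    # p:  probability of the event under the assumed mechanism (step 1)
    # nu: specificity value assigned to the event (step 2)
    # r:  scaling factor for the specificity function
    return r * p / nu

def canonical_specified_complexity(p, nu, r):
    # Step 3: negative log (base 2) of the kardis, in bits
    return -math.log2(kardis(p, nu, r))

# Step 4 (illustrative values): an event with probability 2^-100,
# specificity value 1, and scaling factor 1 has a kardis of 2^-100,
# i.e., 100 bits of canonical specified complexity.
sc = canonical_specified_complexity(2**-100, 1, 1)
print(sc)  # 100.0 bits
```

Since the kardis upper-bounds the probability of observing that much specified complexity under the assumed mechanism, an outcome scoring 100 bits would have probability at most 2^-100 of arising by that mechanism, grounds for rejecting the chance hypothesis with high confidence.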

Brian Miller, “The Information Enigma: Going Deeper” at Evolution News and Science Today:

Provided we are still allowed to have the discussion, of course.