After all, he argues, random processes are used all the time to model things in science:
When we test a sequence of numbers for randomness, we are essentially testing how easy it is to predict the sequence. One of the simplest tests is to measure how frequently heads and tails occur during a series of coin flips. If the distribution is heavily skewed one way or the other after a large number of flips, then we can be pretty certain the coin is not fair. We cannot be absolutely certain, since there is always a small probability of a really long run of heads, but as the run lengthens, the probability of achieving it with a fair coin drops exponentially. If we cannot find any predictable patterns in a series of numbers, then we say the series is at least pseudorandom.
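The frequency test described above can be sketched in a few lines. This is a minimal illustration, not the article's own code: it assumes the standard normal approximation to the binomial distribution, and the function name `frequency_test` is my own.

```python
import random
from math import sqrt, erfc

def frequency_test(bits):
    """Two-sided z-test: how surprising is the head/tail imbalance
    for a fair coin? Returns a p-value (normal approximation)."""
    n = len(bits)
    heads = sum(bits)
    # Standardize the head count against the fair-coin expectation n/2,
    # whose standard deviation is sqrt(n/4).
    z = (heads - n / 2) / sqrt(n / 4)
    return erfc(abs(z) / sqrt(2))  # two-sided tail probability

random.seed(0)

# A fair coin: the imbalance is usually unremarkable.
fair = [random.randint(0, 1) for _ in range(10_000)]
print(frequency_test(fair))

# A heavily skewed coin (70% heads) is flagged decisively.
biased = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]
print(frequency_test(biased))

# The chance that a fair coin opens with a run of k heads is 0.5**k,
# so it halves with every extra flip -- an exponential drop-off.
for k in (10, 20, 30):
    print(k, 0.5 ** k)
```

A small p-value says only that a fair coin would very rarely produce this much imbalance; it never delivers absolute certainty, exactly as the passage notes.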
However – and this is the really important point, so pay attention – we can never say a series is truly random just by examining it, since we would have to run an infinite number of randomness tests to look for all conceivable patterns. Thus, without actually knowing the original cause of a number sequence, the best we can ever say is that the sequence is pseudorandom with respect to the set of randomness tests we have run. This conclusion is mathematically provable with Kolmogorov complexity.
Now we come to the second really important point, so don’t switch to YouTube just yet! Observe that the reverse is not true. Once we have detected a predictable pattern in a number sequence, we are able to say, at least with some confidence, that the sequence is not random. And the longer the sequence and the higher the predictability, the greater our confidence grows.
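One hedged way to make this one-directional inference concrete is compression: a compressor that shrinks a sequence has, in effect, found a predictable pattern in it, and compressed size is a crude, computable stand-in for the Kolmogorov complexity the article mentions. This sketch (the helper `compressed_ratio` is my own, not the article's) uses Python's `zlib`; failing to compress proves nothing, but compressing well is positive evidence of non-randomness.

```python
import os
import zlib

def compressed_ratio(bits):
    """Compressed size divided by raw size. Ratios well below 1.0
    mean zlib found a predictable pattern in the data."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

# A repeating pattern (highly predictable) compresses dramatically...
patterned = [0, 1] * 5_000
print(compressed_ratio(patterned))

# ...while typical random bytes barely compress at all.
random_bytes = list(os.urandom(10_000))
print(compressed_ratio(random_bytes))

# The longer the predictable sequence, the smaller the ratio -- the
# compression-based analogue of confidence growing with length.
for n in (100, 1_000, 10_000):
    print(n, compressed_ratio([0, 1] * (n // 2)))
```

Note the asymmetry the passage describes: a ratio near 1.0 does not certify randomness (zlib is just one test among infinitely many conceivable ones), but a tiny ratio is strong evidence against it.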
Eric Holloway, “Why is randomness a good model, but not a good explanation?” at Mind Matters News