Uncommon Descent Serving The Intelligent Design Community

Some people hope we can just “evolve” inventions like self-driving cars


The other day at the DeepMind blog, someone came up with an idea for improving Waymo’s self-driving cars: evolution.

Waymo’s self-driving vehicles employ neural networks to perform many driving tasks, from detecting objects and predicting how others will behave, to planning a car’s next moves. Training an individual neural net has traditionally required weeks of fine-tuning and experimentation, as well as enormous amounts of computational power. Now, Waymo, in a research collaboration with DeepMind, has taken inspiration from Darwin’s insights into evolution to make this training more effective and efficient.

YU-HSIN CHEN, “HOW EVOLUTIONARY SELECTION CAN TRAIN MORE CAPABLE SELF-DRIVING CARS” AT DEEPMIND BLOG

If the researchers have specific goals in mind, they are acting as goal-directed intelligent designers. They are using a mechanism (which they call Darwinian natural selection) to produce a specific outcome. It may work; we shall see. But it is not natural evolution in general.

One reason for confusion is that, quite often, school systems have tended to teach only a dumbed-down version of evolution: the natural selection to which the researchers refer.

Another reason to revamp Darwin-only curricula.

Comments
So-called "deep learning" is indeed a form of evolution. The neural network starts out incapable and then, as it is exposed to examples, adjusts itself (pseudo-random mutation) based on feedback (natural selection) to improve its behaviour. However, it is not Darwinian, since it has a goal, albeit one unknown to the neural network itself.

The network eventually (after thousands of trials) "learns" to do the specific task it is trained to do (e.g., detect photos of cats). But it can do no other task and, in particular, cannot evolve from one task to another. Also, once it has gotten as good as it can at its task, it cannot get any better, and it cannot "explain" how it does its task.

There may be an analogy here to how genes work, and why evolution cannot generate a novel, functional gene. Once the neural network starts being able to do its task, it can get better with more trials and feedback. Similarly, a weakly performing gene can improve its function via Darwinian means (hill climbing) but can never change into a different gene for another function. This has to do with the sparseness of functional genes in the enormity of genetic space. Darwinian mechanisms can climb hills with positive slopes, but cannot jump to new hills.

Fasteddious
September 9, 2019 at 9:10 AM PDT
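The hill-climbing picture in the comment above can be sketched as a toy mutation-plus-selection loop. This is a minimal illustration only, not Waymo's or DeepMind's actual training method; the two-hill fitness landscape and every name here are invented for the example. The loop readily climbs the hill it starts on, but small mutations never carry it across the flat valley to the much higher hill:

```python
import random

def fitness(x):
    """Invented landscape: a low hill peaking at x = 0 (height 5) and a
    much higher hill peaking at x = 20 (height 15), flat valley between."""
    near_hill = max(0.0, 5.0 - abs(x))
    far_hill = max(0.0, 15.0 - 3.0 * abs(x - 20))
    return max(near_hill, far_hill)

def evolve(x, steps=5000, step_size=0.5, seed=0):
    """Mutation-plus-selection: keep a small random change only when
    feedback says it improves fitness (hill climbing)."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)  # pseudo-random mutation
        if fitness(candidate) > fitness(x):                 # selection by feedback
            x = candidate
    return x

x = evolve(2.0)
print(x, fitness(x))  # settles on the low hill near x = 0, never reaching the higher one
```

Starting at x = 2, the loop converges to the nearby peak (fitness about 5) and stays there: every step toward the distant, higher peak is rejected because it first reduces fitness, which is the "cannot jump to new hills" limitation.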
Even with a restricted set of goals within the self-driving domain, this approach has zero chance of success. The curse of dimensionality kills it dead before it's even born. There's a reason that genetic algorithms have only succeeded in creating toy applications.

FourFaces
September 7, 2019 at 11:44 AM PDT
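The "curse of dimensionality" the commenter invokes can be illustrated with a quick sketch. This is an invented toy experiment, not a claim about Waymo's system: the fraction of random samples that land near a target point in a unit cube shrinks exponentially as the number of dimensions grows.

```python
import random

def hit_rate(dims, trials=100_000, tol=0.25, seed=0):
    """Fraction of uniform random points in the unit cube whose every
    coordinate lies within `tol` of the centre (0.5)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if all(abs(rng.random() - 0.5) < tol for _ in range(dims)):
            hits += 1
    return hits / trials

# Each coordinate succeeds with probability 0.5, so the rate is ~0.5**dims:
for d in (1, 5, 10):
    print(d, hit_rate(d))
```

With tol = 0.25 each coordinate lands in range with probability 0.5, so the hit rate falls roughly as 0.5 to the power of the dimension: about half the samples in one dimension, about one in a thousand in ten, and vanishingly few in the dimensionalities typical of neural-network parameter spaces.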
