When neural nets [computer programs that mimic the human brain] were all the rage in physics, some 25 years ago, I spoke with the author of a paper who was using neural nets to predict space weather. After a year of experimenting with the predictive abilities of various one-layer, two-layer, and deeper networks, he confided that they reached a certain level of ability and then failed to improve. What made them better, he told me, was having more physics inserted into the model.
That is, the nets couldn’t recreate Newton’s Laws, and if presented with just raw data, would perform very poorly because they didn’t conserve energy, didn’t conserve momentum, and generally predicted worse than a college freshman. The real power of nets was to extend a physical model past the limits of our physical understanding. In the future, a better physical understanding will make the models better, and once again the nets will underperform. Therefore the nets are not a substitute for understanding; they are a substitute for ignorance.
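One way to read “inserting physics into the model” is what is now called physics-informed training: alongside the usual data-fit error, the loss penalizes predictions that violate a conservation law. A minimal sketch for a projectile trajectory, where every name, weight, and constant is illustrative rather than drawn from the paper discussed above:

```python
import numpy as np

def physics_informed_loss(pred_pos, pred_vel, true_pos, m=1.0, g=9.81, w=10.0):
    """Data misfit plus a penalty on drift in total mechanical energy.

    pred_pos, true_pos: (N, 2) arrays of [x, y] positions over time.
    pred_vel: (N, 2) array of [vx, vy] velocities over time.
    """
    data_term = np.mean((pred_pos - true_pos) ** 2)
    # Total mechanical energy at each time step: kinetic + potential.
    energy = 0.5 * m * np.sum(pred_vel ** 2, axis=1) + m * g * pred_pos[:, 1]
    # For a frictionless trajectory, energy should be constant in time,
    # so any variance in it signals an unphysical prediction.
    physics_term = np.var(energy)
    return data_term + w * physics_term

# Exact projectile trajectory (which conserves energy) vs a prediction
# whose velocities are inflated by 20% (so its energy drifts over time).
t = np.linspace(0.0, 1.0, 50)
v0 = np.array([3.0, 5.0])
true_pos = np.stack([v0[0] * t, v0[1] * t - 0.5 * 9.81 * t**2], axis=1)
true_vel = np.stack([np.full_like(t, v0[0]), v0[1] - 9.81 * t], axis=1)

good = physics_informed_loss(true_pos, true_vel, true_pos)
bad = physics_informed_loss(true_pos, 1.2 * true_vel, true_pos)
print(good < bad)  # the energy-conserving prediction scores strictly lower
```

The point of the sketch is only the structure of the loss: a net trained on raw positions alone sees no cost for violating conservation, while the extra term steers it back toward the physics.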
See also: AI and pop music: Can simple probabilities outperform deep learning?
Rob Sheldon comments on the “dirtiest fight in physics”