Above, I posted a recent interview with Vladimir Vapnik, one of the most important authorities on machine learning. He makes the interesting point that for the most part, science has proceeded since the 17th century by searching for simplicity. The underlying assumption has always been that we will ultimately find some small number of natural laws that can be written down and understood.
Vapnik thinks that this is a pipe dream. The world is much more complicated than that. So complicated, he thinks, that we should give up hope of *understanding* it and instead focus on *predicting* its behavior.
It’s interesting that he draws such a distinction between understanding and predicting. My (probably naive) view of the philosophy of science is that the purpose was always prediction. The importance of Newton’s law of gravitation was that it allowed us to predict how long it would take before a dropped object would hit the ground, not just that it would hit the ground, which we already knew. It was an accident that Newton’s law is easy to understand. It was predictive, and that was the important thing.
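To make the Newtonian example concrete, here is a minimal sketch of the kind of quantitative prediction the law licenses (the 20 m drop height is a hypothetical value chosen for illustration):

```python
import math

h = 20.0   # drop height in meters (hypothetical example value)
g = 9.81   # gravitational acceleration near Earth's surface, m/s^2

# From h = (1/2) * g * t^2, solving for the fall time gives t = sqrt(2h / g)
t = math.sqrt(2 * h / g)
print(f"A {h} m drop takes about {t:.2f} s")  # roughly 2.02 s
```

The point is that the formula earns its keep by producing a number we can check against a stopwatch, independently of whether we find the inverse-square picture behind it intuitive.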
Vapnik points out that understanding and predicting are not required to go hand in hand, and, even more, that we are being silly to expect that they would. In learning theory we speak of generalization from training data to new data, that is, prediction. We don’t speak of extracting “understanding” from data, whatever that may mean.
Disclaimer: Of course I don’t mean to suggest that I don’t view conceptual understanding as important in science. It’s just that I don’t think it’s the main goal. Certainly a scientist’s conceptual understanding of his field is his most important tool, and the purpose of this tool is to generate predictive theories.