Machine learning: faithful servant of climate models
In recent decades, climate models have taken a central place in climatology. Today, as their growing accuracy demands ever more processing power, artificial intelligence offers an alternative that remains to be explored.
How can we represent a world of infinite complexity with necessarily limited tools? Climate modellers face this question constantly, bound to choose what to represent and how accurately. “Otherwise, there would be no limit to the detail we could add to a model; it could go down to the scale of a millimetre… Every scale has an influence on climate,” explains Venkatramani Balaji, researcher at Princeton University and long-term visiting scholar at the Institut Pierre-Simon Laplace. The challenge is to strike the right balance between necessary simplification and an accurate mirroring of reality. To ensure that simplifying assumptions do not degrade a model's quality, researchers must test and refine them until the model is reliable.
Nonetheless, as the power of supercomputers rises year after year, the subtlety and resolution of models improve significantly, and with them, the cost of computation. “There are more and more high-resolution simulations of our planet, but the cost in computation, energy, carbon and time gets heavier,” notes Venkatramani Balaji. In this context, machine learning programs offer a way to streamline these processes.
A promising lead
“Now the question is whether machine learning programs can find simplifying assumptions that let a model represent small-scale phenomena without simulating everything in great detail,” states the researcher. One way to get there is to train the program by confronting it with known data. For instance, to teach it to tell a photograph of a dog from that of a cat, it is shown a vast number of pictures already labelled “dog” or “cat”. As it processes the data, it identifies characteristics specific to each category. When it then faces an unknown image, it looks for these characteristics to determine the matching category.
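The dog-or-cat training loop described above can be sketched in miniature. This is not the software the researchers use; it is a toy nearest-centroid classifier on made-up numeric features (standing in for image pixels), just to show the two phases: learning average characteristics per labelled category, then matching an unknown example to the closest category.

```python
# Toy supervised learning: a nearest-centroid classifier.
# Training summarises each labelled category by its average features;
# prediction assigns a new example to the closest average.

def train(examples):
    """examples: list of (features, label). Returns a centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the new example."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical labelled data: (ear_pointiness, snout_length) -> species.
training = [
    ([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),
    ([0.3, 0.9], "dog"), ([0.2, 0.8], "dog"),
]
model = train(training)
print(predict(model, [0.85, 0.25]))  # near the "cat" examples
```

Real image classifiers use far richer models (typically neural networks), but the workflow — labelled examples in, category characteristics out — is the same.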
Likewise, training a program to recognise certain climate phenomena in high-resolution models could enable it to extract the key characteristics of those phenomena. “Having learned from highly accurate data, the computer program becomes able to detect characteristics at a larger scale,” says Venkatramani Balaji. Researchers would then know how to simplify their models without affecting their reliability. But this technology offers further possibilities: “Through methods called unsupervised learning, we tell the program that categories exist among the data, and it has to suggest the ones it can distinguish.” If today this tool can consolidate existing theories and help in building models, for Venkatramani Balaji its applications could be wider in the future: “We could imagine that when a theory is missing in a field of research, the program could find one by itself. We are not there yet and might never be; or maybe machine learning will only give us clues, in a future ‘collaboration’ between ‘human’ and ‘artificial’ learning.”
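The unsupervised learning idea quoted above — telling the program only that categories exist and letting it propose them — can be illustrated with k-means clustering, one standard method of this kind (the article does not say which methods the researchers use). The data here are invented sea-surface temperatures mixing two regimes; the program is given only the number of groups, not their labels.

```python
# Minimal unsupervised learning sketch: 1-D k-means clustering.
# We state only that k categories exist; the program finds them by
# alternating between assigning points to the nearest centre and
# re-averaging each group.

def kmeans_1d(values, k, iterations=20):
    """Cluster 1-D values into k groups; returns the sorted group centres."""
    # Spread the initial centres across the sorted data.
    centres = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in values:                       # assignment step
            nearest = min(range(k), key=lambda i: abs(v - centres[i]))
            groups[nearest].append(v)
        centres = [sum(g) / len(g) if g else centres[i]   # update step
                   for i, g in enumerate(groups)]
    return sorted(centres)

# Hypothetical sea-surface temperatures (°C): two regimes mixed together.
temps = [14.8, 15.1, 15.3, 14.9, 27.9, 28.2, 28.0, 28.3]
print(kmeans_1d(temps, 2))  # two centres, one per temperature regime
```

Run on the mixed readings, the program separates them into a cool regime (around 15 °C) and a warm one (around 28 °C) without ever being told those regimes exist — the kind of category-suggesting behaviour the quote describes, in its simplest form.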
By Marion Barbé for IPSL