Guest contribution by Eric Worrall
According to the big computer, we are doomed to suffer more and more harmful weather extremes. But the researchers can’t tell us exactly why, because their black box neural network won’t explain its predictions.
Human activity influencing global rainfall, study finds
Anthropogenic global warming has contributed to extreme rainfall events around the world, researchers say
Wed July 7, 2021 3 p.m. AEST
Although there are regional variations and some locations are becoming drier, Met Office data shows that overall, heavy rainfall is increasing worldwide, meaning the rainiest days of the year are getting wetter. Changes in precipitation extremes – the number of days with very heavy rainfall – are also a problem. These short, intense periods of rain can lead to flash floods with devastating effects on infrastructure and the environment.
“We are already observing a temperature increase of 1.2 degrees Celsius compared to the pre-industrial level,” emphasized Dr. Sihan Li, a senior research fellow at Oxford University who was not involved in the study. She said: “If the warming continues to increase, we will have more intense episodes of extreme precipitation, but also extreme drought events.”
Li said that the machine learning method used in the study is state-of-the-art, but does not currently allow individual factors that can influence precipitation extremes – such as anthropogenic aerosols, land use changes or volcanic eruptions – to be disentangled.
The machine learning method used in the study learned from data alone. Madakumbura pointed out that in the future this learning could be guided by building climate physics into the algorithm, so that it learns not only whether extreme rainfall has changed, but also the mechanisms behind those changes. “That’s the next step,” he said.
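The idea of building climate physics into the algorithm is typically implemented as a physics-guided loss term. Below is a minimal, purely illustrative sketch in Python – not the study’s method – in which a trend is fitted to synthetic “observations”, with an optional penalty nudging the fit toward the Clausius–Clapeyron expectation of roughly 7% more extreme precipitation per degree of warming. All data and numbers here are assumptions for illustration.

```python
import numpy as np

# Purely illustrative sketch of "physics-guided" learning: add a penalty
# that nudges a data-driven fit toward a known physical relation. The
# prior here is the Clausius-Clapeyron expectation of roughly 7% more
# extreme precipitation per degree of warming; the data are synthetic.

rng = np.random.default_rng(0)
temps = rng.uniform(0.0, 1.5, 200)                  # warming above baseline (K)
obs = 1.0 + 0.07 * temps + rng.normal(0.0, 0.05, 200)

def loss(rate, lam):
    data_term = np.mean((1.0 + rate * temps - obs) ** 2)
    physics_term = (rate - 0.07) ** 2               # prior toward 7%/K
    return data_term + lam * physics_term

# Grid search instead of gradient descent, to keep the sketch transparent.
grid = np.linspace(-0.1, 0.2, 3001)
data_only = grid[np.argmin([loss(r, 0.0) for r in grid])]
physics_guided = grid[np.argmin([loss(r, 10.0) for r in grid])]

print(f"data-only slope:      {data_only:.3f}")
print(f"physics-guided slope: {physics_guided:.3f}")
```

The physics term pulls the fitted sensitivity toward the physically expected value, which matters most when the data are noisy or sparse – that is the whole appeal of the approach, and also why the choice of prior needs scrutiny.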
Read more: https://www.theguardian.com/environment/2021/jul/07/human-activity-influencing-global-rainfall-study-finds
The abstract of the study;
Anthropogenic influence on extreme precipitation over global land areas detectable in multiple observational datasets
Gavin D. Madakumbura, Chad W. Thackeray, Jesse Norris, Naomi Goldenson & Alex Hall
The intensification of extreme precipitation under anthropogenic forcing is robustly projected by global climate models, but highly challenging to detect in the observational record. Large internal variability obscures this anthropogenic signal. Models produce different magnitudes of precipitation response to anthropogenic forcing, largely owing to different schemes for parameterizing subgrid-scale processes. Meanwhile, multiple global observational datasets of daily precipitation exist, developed using different techniques and inhomogeneously sampled data in space and time. Previous attempts to detect the human influence on extreme precipitation did not account for model uncertainty and were limited to specific regions and observational datasets. Using machine learning methods that account for these uncertainties and capture the temporal evolution of the spatial patterns, we find a physically interpretable anthropogenic signal that is detectable in all the global observational datasets. Machine learning efficiently generates multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation.
Read more: https://www.nature.com/articles/s41467-021-24262-x
As an IT professional who has developed commercial AI systems, I find it incredible that the researchers appear naive enough to believe their AI machine’s output has value without corroborating evidence. They admit they will try to understand how their AI works – but in my opinion they have jumped the gun, making big claims on the basis of a black box result.
Consider the following;
Amazon ditched AI recruiting tool that favored men for technical jobs
Specialists had been building computer programs since 2014 to review résumés in an effort to automate the search process
Amazon’s machine learning specialists discovered a big problem: their new recruiting engine didn’t like women.
But by 2015, the company had found that its new system was not rating candidates for software developer jobs and other technical positions in a gender-neutral way.
This is because Amazon’s computer models were trained to screen applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized résumés containing the word “women’s”, as in “women’s chess club captain”. And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.
Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
Read more: https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine
In retrospect, it is clear what happened. The Amazon AI was told to select the most suitable candidates; it observed that more male candidates were being accepted for technical jobs, likely because more male candidates were applying, and concluded that men are more suitable for technical roles.
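This failure mode is easy to reproduce with a toy model. The sketch below – hypothetical synthetic data, not Amazon’s actual system – trains a plain logistic regression on “historical hiring decisions” that were biased against résumés containing a flag token. The model dutifully learns a negative weight on that token, even though skill is distributed identically across both groups.

```python
import numpy as np

# Toy reconstruction (hypothetical data, not Amazon's actual system) of
# how a model trained on biased historical decisions inherits the bias.
# Skill is distributed identically for both groups; only the historical
# hiring labels are biased against the "women" token.

rng = np.random.default_rng(1)
n = 5000
skill = rng.normal(0.0, 1.0, n)
women_token = rng.integers(0, 2, n).astype(float)  # 1 if résumé mentions "women"

# Historical decisions: driven by skill, but penalizing the token.
logits = 1.5 * skill - 1.0 * women_token
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Plain logistic regression fitted by gradient descent (features + bias).
X = np.column_stack([skill, women_token, np.ones(n)])
w = np.zeros(3)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - hired) / n

print(f"weight on skill:         {w[0]:+.2f}")     # positive, as expected
print(f"weight on 'women' token: {w[1]:+.2f}")     # negative: bias inherited
```

And as Amazon found, scrubbing the offending token does not solve the problem: if other features correlate with it, the model simply finds proxies.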
It is important to note that this male bias in engineering professions is a purely Western cultural problem. When I visited a software development shop in Taipei, there were as many women as men developing software. The women I have met in Western IT shops and in that Taipei shop were just as smart and tech-savvy as any man. Somehow we talk our women out of taking up technical professions.
My point is, when scientists unleash a black box AI on a dataset, they have no way of knowing whether that AI’s output is what they think it is until they carefully tear the AI apart to find out exactly how it formed its conclusions.
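One standard way to start tearing a black box apart is permutation importance: shuffle each input in turn and measure how much the model’s output quality degrades. A minimal sketch, with a stand-in function in place of any real trained model:

```python
import numpy as np

# Permutation importance sketch. The "black box" below is a stand-in
# function, not any real climate model: we only assume we can call it,
# not read it. Shuffling a feature the model relies on should cause a
# large accuracy drop; shuffling an ignored feature should cause none.

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, (2000, 3))             # 3 candidate predictors
y = X[:, 0] + 0.1 * X[:, 2] > 0                 # ground truth labels

def black_box(X):
    # Opaque model: here it happens to match the truth exactly.
    return X[:, 0] + 0.1 * X[:, 2] > 0

baseline = np.mean(black_box(X) == y)           # 1.0 by construction
drops = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])        # break feature j's link
    drops.append(baseline - np.mean(black_box(Xp) == y))
    print(f"feature {j}: accuracy drop {drops[j]:.3f}")
```

Probes like this tell you which inputs the model leans on, but not why – which is exactly the gap between detecting a signal and explaining it.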
Climatologists believe they have discovered a significant hidden anthropogenic influence. Or they may have discovered a large hidden bias in their data or models. To be fair, they admit there could be problems with their training data and with the climate models they use to reconstruct what conditions would have been like without anthropogenic influence: “… Additionally, the training GCMs may underestimate the low frequency natural variability such as the Atlantic multidecadal variability and the Pacific decadal oscillation. …“. That admission should have been their headline.
Until they break down their black box system, work out exactly how their AI reaches its conclusions, and present that actual method – currently hidden inside their AI – for review, it seems remarkably premature to make a big announcement just because they like the look of their result.