Categories
Science

Zhurong is Rolling on Mars

On May 22nd, 2021, the Zhurong rover – part of Tianwen-1, China’s first mission to Mars – descended from its lander and drove on the Martian surface for the first time. According to the mission’s official social media account, the rover drove down its descent ramp from the Tianwen-1 lander at 10:40 a.m. Beijing time (7:40 p.m. PDT; 10:40 p.m. EDT) and placed its wheels upon the surface of Mars.

Mission controllers were treated to a video taken by the rover shortly thereafter, which showed the empty landing platform with its descent ramp extended. This comes about a week after the first images taken by the rover (released on May 19th), which showed the surface ahead of the lander and the descent ramp deployed in front of it. The rover has now commenced science operations, which currently involve exploring its landing site.

This is the second milestone achieved by the China National Space Administration (CNSA) in recent weeks, the first being the successful landing of the Tianwen-1 lander on May 14th. This made China the third nation to land a robotic mission on the surface of Mars, the others being the United States and the former Soviet Union. The Soviets were the first to land with the Mars 2 mission (1971), but communications with the lander were lost seconds later.

Zhurong Images released by the Tianwen-1 mission team on May 19th and May 22nd, before and after it disembarked from the lander. Credit: CNSA

On top of that, China is now the first nation to orbit, land, and deploy a rover as part of its first mission to Mars. Whereas all other nations – the US, Russia, the EU, and India – began by sending orbiters first, followed later by landers and rovers, China has pulled off all three with its very first mission. Equipped with a suite of six scientific instruments, Zhurong will spend a total of 90 days gathering data on the Martian surface. These include:

  • Multi-Spectrum Camera (MSCam) – a radiometer that will capture different wavelengths of radiation on the surface
  • Navigation and Topography Cameras (NaTeCam) – high-resolution cameras for mapping out the Martian surface
  • Rover ground-Penetrating Radar (RoPeR) – for imaging features about 100 m (330 ft) beneath the Martian surface
  • Mars Surface Magnetic Field Detector (RoMAG) for surveying Mars’ variable magnetic field
  • Mars Meteorological Measurement Instrument (MMMI) – aka. Mars Climate Station (MCS), this instrument includes a thermometer, anemometer, and pressure sensor
  • Mars Surface Compound Detector (MarsSCoDe) – a spectrometer capable of conducting infrared and laser-induced breakdown spectroscopy

The objectives of the Tianwen-1 mission include characterizing the internal structure of Mars, the composition of its surface material, its climate and environment, the distribution of water ice, the planet’s morphology and geology, the planet’s variable magnetic field, ionosphere, and other key characteristics. In essence, Zhurong will be joining the three NASA surface missions to learn more about what Mars once looked like.

This includes studying features that formed in the presence of water and searching for possible indications of past life. It is for these very reasons that the Zhurong and its lander set down in Utopia Planitia, a plains region in the Northern Lowlands that was once covered by an ocean that enclosed much of the northern hemisphere. Utopia Planitia is also where NASA’s Viking 2 lander set down on September 3rd, 1976, to search for biosignatures.

The rover will also be looking for indications of what happened to Mars’ surface water, which scientists now theorize may have escaped underground. Finding existing caches of water and ice underground will also help pave the way for human exploration, as well as the creation of long-term habitats on the surface. The orbiter will monitor Zhurong and operate as a relay to provide a steady information conduit to the mission controllers back on Earth.

According to China Space News (quoted by Reuters), Zhurong has spent its first three days away from the lander exploring the surface in slow and small intervals – never venturing more than 10 m (33 ft) at a time. “The slow progress of the rover was due to the limited understanding of the Martian environment, so a relatively conservative working mode was specially designed,” said Jia Yang, an engineer and member of the mission team. Jia added that the pace may increase as the mission continues.

Zhurong is currently one of four missions exploring the Martian surface, the others being NASA’s Perseverance rover, Curiosity rover, and InSight lander. Next year, they will be joined by the ExoMars 2022 mission that will consist of Roscosmos’ Kazachok lander and the ESA’s Rosalind Franklin rover. By 2027-8, the elements that make up the Mars Sample Return are scheduled to arrive (a lander, rover, ascent vehicle, and Earth-return orbiter).

Back in February, the Emirates Mars Mission (aka. Hope) probe arrived in orbit, becoming the first mission sent by an Arab (or Muslim majority) nation to the Red Planet. It is now one of six orbiter missions, which include NASA’s 2001 Mars Odyssey, Mars Reconnaissance Orbiter (MRO), MAVEN, and the ESA’s Mars Express and ExoMars 2016 Trace Gas Orbiter (TGO).

These missions will carry on in the quest to learn more about Mars’ past and potential habitability. They will also help pave the way for crewed missions to the Red Planet, which are expected to begin sometime in the 2030s. The data obtained from all surface, orbiter, robotic, and crewed missions to Mars will also contribute to our overall understanding of how the rocky planets of our Solar System formed and evolved over the course of billions of years.

With any luck, we might even learn a thing or two about when and how life first emerged in our little corner of the cosmos. That slice of knowledge could also go a long way toward helping us find life beyond the Solar System someday.

Further Reading: Reuters, Parabolic Arc


Categories
Entertainment

Friends Reunion director speaks on “rude” Matthew Perry comments

The director of the Friends Reunion special on HBO Max has only kind words to say about Matthew Perry.

Ben Winston, who brought the show’s six stars back together, recently spoke about comments fans have made about the actor’s appearance in the special.

Matthew, who portrayed Chandler Bing on the long-running sitcom, has been open about his battle with substance abuse over the years, which was at times reflected in his appearance on Friends.

However, Ben has no reason to believe that Matthew is currently struggling with any health issues.

“He was great,” Ben said on The Hollywood Reporter’s TV’s Top 5 podcast on Friday, May 28th. “People can just be unkind at times. I wish they weren’t. I loved working with him. He’s a brilliantly funny man, and I just felt happy to be around him and to direct him in something like that.”

Categories
Sport

Helio Castroneves wins record-tying fourth Indianapolis 500

Helio Castroneves has joined the most exclusive club in Indianapolis 500 history, becoming just the fourth four-time winner of “The Greatest Spectacle in Racing”.

After joining Meyer Shank Racing for this year’s race following two decades with Team Penske, Castroneves overtook Alex Palou with two laps to go and held him off for the victory.

Castroneves joins AJ Foyt, Rick Mears and Al Unser Sr. on the four-time winner list.

After the win, Castroneves took his victory lap, stopped his car just past the famed yard of bricks at the start/finish line, and performed his trademark fence climb with his crew.

Castroneves previously won the Indy 500 in 2001, 2002 and 2009.

Categories
Science

The function of CO2 in paleoclimate – Watts Up With That?

Reposted from Dr. Judith Curry’s Climate Etc.

Posted on May 29, 2021 by curryja 

by Thomas Anderl

Simple models are formulated to identify the essentials of the natural climate variabilities, concentrating on the readily observable and simplest description. The results will be presented in a series of five articles. This first part is an attempt to determine the climate role of CO2 from the past. Observations spanning 400 million years of paleoclimate are found to well constrain the compound universal climate role of CO2, represented by a simple formula.

1. Introduction

Earth presently receives on average 240 W/m2 of insolation (planetary albedo taken into account) [1]. In equilibrium, Earth radiates the same amount back to space, corresponding to -18 °C in the blackbody approximation. The actual surface temperature is far higher with an average of about +15 °C. Therefore, something must be delivering heat to the surface in addition to insolation. When looking for the sources, a hint comes from a well-known experience: clear-sky nights exhibit relatively low Earth surface temperatures while cloudy nights remain relatively warm. Thus, the atmosphere is contributing to the heat variability at the surface, with water molecules as the dominant components.

However, during the current geologic eon, the water content in the atmosphere is a passive reactant to otherwise driven temperatures, acting as an amplifier. When looking for the temperature-driving processes, key candidates are the insolation (in particular the varying solar activity and modulation by the planetary albedo), tectonic movements (e.g. with their impact on ocean and wind currents), large volcanic activities, forms of life, extra-terrestrial events (bolide impacts, cosmic rays), and atmospheric composition beyond water content. Apparently, throughout history, all these components have played their role in driving Earth’s near-surface atmospheric temperature.

Regarding the atmospheric composition, CO2 is recognized as a temperature-driving agent. A clear sign comes from the well-known transmission spectrum of infrared radiation from Earth’s surface into space: it reveals strong absorption by atmospheric CO2 which, to all existing knowledge, is contributing to the atmospheric heat.

The present analysis is devoted to the search for the empirically obvious related to the climate role of CO2, including its relation to the further driving forces. Starting points are the paleo-reconstructions on surface temperature and atmospheric CO2 concentration, with focus on the period 50-35 Mio. years before present (Ma BP) [2, 3], 400 ka BP (Vostok ice core data [4]), and the entire past 400 Ma BP [5,6]. These measurement data are found to be well reproduced by a simple model concentrating on the climate driving forces, basically identified as modulated insolation and CO2. From this observation-based approach, the CO2 contribution to equilibrium climate is judged universally well constrained in its compound effect, i.e. with all related effects taken into account, and is clearly disentangled from the opposite causation, the CO2 concentration following temperature variabilities.

2. The climate contribution of CO2

2.1. Eocene, 50-35 Ma BP

First, let us think of designing an experiment to measure the impact of the atmospheric CO2 concentration on the surface-air temperature. The CO2 concentration would need to be changed and, for each change, its value and the corresponding temperature recorded. Other temperature influences would need to be negligible or well controlled. It turns out that Earth has performed such an experiment in the past: during the Eocene, in the period 50-35 Ma BP, atmospheric CO2 was steadily removed by sequestration while its concentration and the corresponding temperature were recorded via proxies. Other temperature influences are judged negligible. This assumption is considered a first-order approximation, subject to potential amendment as the time horizon and the data base widen in the course of the further analysis. The CO2 concentration spanned 1600 to 500 ppmv in the considered period, the temperature about 28 to 20 °C.

An interpretation of the ‘measurement’ data (i.e. the proxy reconstructions) has previously been presented [2, 3]. In the present studies, these reconstruction data are found to follow a simple relationship between the CO2 concentration (hereafter 𝑝CO2 in the unit ppmv) and the entailed temperature (TCO2), in the further course referred to as the Eocene (CO2-temperature) relationship:

TCO2 = ln(𝑝CO2/22) * 6.68 °C. (1)

From the historical CO2 concentrations of [3] (here used in coarse representation), the related temperatures are determined according to the preceding Eocene relationship. A slight correction is applied to account for the steady solar luminosity increase with time (ΔTsol) by approximating [5] via

ΔTsol = -0.01514 * t °C, (2)

with t the time from present into the past in million years, and by applying 0.75 °C/(W/m2) for the radiative forcing-to-temperature sensitivity (see e.g. [3]).
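As a quick sanity check, equations (1) and (2) can be coded directly. This is a minimal sketch (the function names are mine, not the paper's); it reproduces the Eocene span quoted above, where 1600 and 500 ppmv correspond to roughly 28 and 20 °C.

```python
import math

def t_co2(p_co2_ppmv):
    """Eocene CO2-temperature relationship, equation (1): TCO2 = ln(pCO2/22) * 6.68 degC."""
    return math.log(p_co2_ppmv / 22.0) * 6.68

def dt_sol(t_ma_bp):
    """Solar luminosity correction, equation (2): dTsol = -0.01514 * t degC,
    with t the time before present in Ma."""
    return -0.01514 * t_ma_bp

# Endpoints of the Eocene 'experiment' described in the text:
print(round(t_co2(1600), 1))  # 28.6 degC
print(round(t_co2(500), 1))   # 20.9 degC
# Solar correction at 50 Ma BP:
print(round(dt_sol(50), 3))   # -0.757 degC
```

The corrected temperature compared with the proxies in Figure 1 is then simply `t_co2(p) + dt_sol(t)`.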

In Figure 1, the resulting T = TCO2 + ΔTsol (smooth blue line) is compared with the ‘measured’ data given in [2] (orange wiggly line). The simple logarithmic function (equation 1) for the temperature impact from the atmospheric CO2 concentration is well able to reproduce the temperatures of the considered period 50-35 Ma BP and beyond, extending to 60 Ma BP. As a sensitivity test, the two coefficients in TCO2 (equation 1) are changed by ±1 % and the resulting temperature boundaries depicted in Figure 1 by the dotted bright-blue lines.

Figure 1. Mean global annual near-surface air temperature trend for the Eocene as published by [2] (wiggly orange line) and computed from the Eocene CO2-temperature relationship, T = TCO2 + ΔTsol, of the present work (smooth blue line); dotted bright-blue lines: boundaries for changes of coefficients in TCO2 by ±1 %

Conclusion from the Eocene: As the primary change process, the atmospheric CO2 concentration was steadily reduced in the period of 50 to 35 Ma BP. Roughly, a difference of 1100 ppmv in the CO2 concentration is followed by a temperature difference of 8 °C. This causal relationship is well explained by simulation programs [2, 3]. At the same time, the simple 2-parameter logarithmic function of equation (1), the Eocene relationship, is able to reflect the compound effect of all underlying processes.

2.2. Late Quaternary, 420 ka BP until present

To explore the general applicability of the simple Eocene relationship, it is examined for a period with heavy disturbances to the pure CO2 influence: the Late Quaternary with its dominant waxing and waning ice sheets, which in consequence alter the surface albedo and thus the absorbed surface insolation. The present study is based on the Vostok ice core data [4]. The reported CO2 concentrations are used to derive the CO2-effected temperature contributions according to the Eocene relationship (TCO2). The albedo effect (ΔTice-Quaternary) is approximated with the help of the likewise reported proxy-determined temperature variabilities (ΔTVostok) of [4] by adapting the linear δ18O-sea level-albedo relationship of [3] via:

ΔTice-Quaternary = (0.2 * ΔTVostok – 2.5) °C. (3)

The factor 0.2 has the meaning of αT/αp, where αp is the polar amplification (taken as 2 in this work) and αT the proportionality factor for the global mean surface temperature, hence 0.4.
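The Late Quaternary model compared with the proxies in Figure 2 is then the sum of the two terms. A minimal sketch under the stated parameters (function names are mine):

```python
import math

def t_co2(p_co2_ppmv):
    # Eocene relationship, equation (1)
    return math.log(p_co2_ppmv / 22.0) * 6.68

def dt_ice_quaternary(dt_vostok):
    # Equation (3): albedo term derived from the Vostok temperature anomaly (degC)
    return 0.2 * dt_vostok - 2.5

def t_quaternary(p_co2_ppmv, dt_vostok):
    # Total temperature T = TCO2 + dT_ice-Quaternary, as plotted in Figure 2
    return t_co2(p_co2_ppmv) + dt_ice_quaternary(dt_vostok)

# At zero Vostok anomaly the albedo term is the constant offset:
print(dt_ice_quaternary(0.0))  # -2.5
```

Note that the albedo term vanishes only for ΔTVostok = 12.5 °C; for interglacial anomalies near zero it still contributes about -2.5 °C, which is absorbed by the offsets discussed in the Figure 2 caption.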

In Figure 2, the resulting temperatures T = TCO2 + ΔTice-Quaternary are compared with the proxy-measured temperatures. The computed temperatures T (orange solid curve) are in good accordance with the measured temperatures (long-dashed dark blue from [4] and short-dashed bright blue from [2]).

Figure 2. Surface temperatures for the Late Quaternary; ‘T (CO2, albedo)’: computed as T = TCO2 + ΔTice-Quaternary in the present work (orange solid line); ‘T Petit’ (long-dashed dark blue line): coarse representation of [4] as derived from the Vostok ice core proxies, multiplied by 0.5 to transform local temperature anomalies into mean global values (as in [3]), plus a 14 °C offset to translate from anomalies into absolute temperature (treated as fit parameter to match the computed temperatures, and being approximately the pre-industrial surface temperature); ‘Ts (Hansen)’ (short-dashed bright blue line): temperature values of [2]

The two contributions to the computed temperature T, originating from CO2 and predominantly ice albedo, are depicted in Figure 3. Each, CO2 and ice albedo, influence the surface temperature at similar size. In a more general (and correct) view, ΔTice-Quaternary represents all terms not covered by TCO2. From Figure 2, it is inferred that the aggregate non-CO2 temperature contribution largely follows a linear relationship to the global mean surface temperature.

Figure 3. Surface temperature contributions to ‘T(CO2, albedo)’ of Figure 2; from CO2: TCO2 according to the Eocene relationship (dashed blue line, with an arbitrary offset for presentation purposes); from ice albedo: ΔTice-Quaternary (solid grey line)

Conclusion from the Late Quaternary, part 1: By switching on ice albedo as a massive second temperature determinant in addition to CO2, the observed temperatures are also well reproduced with help of the Eocene CO2-temperature relationship. The Eocene relationship is indicated as independent of other temperature-driving forces.

This raises the question about the CO2-temperature relationship in the other direction: it is well known that temperature, in turn, measurably directs the atmospheric CO2 concentration. On the sceptics’ side, there is a notable supposition that the CO2 concentration is predominantly driven by temperature rather than by human emissions during the industrial age. For an examination, let us think of an experiment to measure the CO2 concentration entailed by different temperatures. Again, nature has done such an experiment: in the Late Quaternary. By increasing and reducing ice coverage, the albedo is varied, and with it the absorbed surface insolation and, in turn, the surface temperature. Temperature and CO2 concentration have been recorded via proxies educed from ice cores (see above), and the associated time via the ice core depth. During the Late Quaternary, temperature is considered the predominant CO2 change agent, with other CO2-determining processes judged negligible.

Looking at the Vostok ice core data [4], the local temperature has varied by about 10 °C between glacial and inter-glacial maxima, and the CO2 concentration by 100 ppmv. 10 °C temperature difference in the Vostok ice core data roughly relates to 5 °C in the global average temperatures (see factor of 0.5 in Figure 2). Thus, a change of 1 °C of the global annual mean temperature is followed by a change of 20 ppmv in CO2 concentration. This is a factor of 2 higher than that resulting from theoretical research [7], where the CO2 concentration (pCO2) varies per 1 °C of temperature change according to pCO2/27 (ppmv). For pre-industrial pCO2, this roughly results in 10 ppmv CO2 concentration change caused by a 1 °C temperature change.
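The theoretical temperature-to-CO2 response of [7] is a one-liner; coded as a sketch (function name mine, pre-industrial level taken as ~280 ppmv), it reproduces the "roughly 10 ppmv per 1 °C" figure quoted above:

```python
def dpco2_per_degc_theory(p_co2_ppmv):
    """CO2 concentration change (ppmv) per 1 degC of temperature change,
    per the theoretical relationship of [7]: pCO2 / 27."""
    return p_co2_ppmv / 27.0

# Pre-industrial level, here assumed ~280 ppmv:
print(round(dpco2_per_degc_theory(280), 1))  # 10.4 ppmv per degC
```

The Vostok-derived empirical value of 20 ppmv per °C is then a factor of about 2 above this theoretical estimate, as the text notes.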

Application of this theoretical relationship to the temperature variabilities in the Vostok ice core data results in the CO2 concentrations as depicted by the dashed orange and dotted gray lines of Figure 4, for Vostok temperatures times 0.5 and raw Vostok temperatures, respectively; the solid blue line shows the CO2 concentrations as reported from the ice cores.

Figure 4. Atmospheric CO2 concentration in the Late Quaternary; solid blue line: coarse representation of proxy reconstruction [4]; dashed orange line: computed as caused by the temperature variabilities (proxy data of [4] times 0.5) according to theory [7]; dotted gray line: as before, temperature variabilities of proxy data without factor for translation from local to global mean temperature

Conclusion from the Late Quaternary, part 2: Nature reveals different CO2-temperature relationships for either direction: (a) temperature driving CO2, (b) CO2 driving temperature. In direction (a), the atmospheric CO2 concentration follows temperature changes by 10-20 ppmv per 1 °C temperature change. In direction (b), a change of 10 ppmv in CO2 concentration causes a temperature change of about 0.07 °C. Regarding for instance a CO2 concentration increase of 100 ppmv, the Eocene relationship indicates an induced temperature increase of 0.7 °C. Since this temperature increase, in turn, causes a concentration change of 7-14 ppmv, about 7-14 % of the 100 ppmv-increase is to be attributed to the entailed temperature increase.
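The bookkeeping in this conclusion can be spelled out numerically, using only the figures stated in the text (0.07 °C per 10 ppmv in direction (b), 10-20 ppmv per °C in direction (a)):

```python
# Direction (b): 10 ppmv -> ~0.07 degC, so a 100 ppmv increase induces:
dT = (100 / 10) * 0.07            # ~0.7 degC

# Direction (a): each degC of warming feeds back 10-20 ppmv of CO2:
feedback_low = dT * 10            # ~7 ppmv
feedback_high = dT * 20           # ~14 ppmv

# Share of the original 100 ppmv increase attributable to the entailed warming:
print(round(feedback_low), round(feedback_high))  # 7 14  (i.e. 7-14 %)
```

This is only the first step of the feedback loop; the subsequent steps (14 % of 14 ppmv, etc.) shrink geometrically and do not change the order of magnitude.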

2.3. PETM, 56 Ma BP, and Devonian to Triassic, 400-200 Ma BP

So far, the Eocene CO2-temperature relationship has proven applicable for two geological ages, the Eocene and the Late Quaternary. The next sections shall turn to other periods with yet different conditions. The first is the time of the Paleocene-Eocene Thermal Maximum (PETM), circa 56 Ma BP. In a previous computer-simulation study [8], temperature and CO2 conditions were analyzed by varying the CO2 concentration up to 9 times pre-industrial levels. In Figure 5, the results of the simulation study (blue dots connected by the solid line) are compared with the Eocene relationship results, corrected by ΔTsol (equation 2) for 56 Ma (orange dots connected by the dashed line); the black circle depicts the PETM condition according to [8].

Conclusion from the PETM-study: The simple Eocene CO2-temperature relationship is well able to reflect the comprehensive understanding of nature as implemented in simulation programs.

Figure 5. Surface temperature for PETM in dependence upon the atmospheric CO2 concentration, computation results as dots connected by straight lines; blue (solid connection): simulation results of [8]; black open circle: PETM condition [8]; orange (dashed connection): temperature following the CO2 concentration according to the Eocene relationship, corrected by ΔTsol for 56 Ma (this work)

In a further earlier study [9], the period of 400 to 200 Ma BP has been analyzed. Based on observed CO2 concentrations [5], the related radiative forcings have been determined. In Figure 6, these forcings (solid blue line) are compared to those given by the Eocene relationship (dashed orange line) by applying a sensitivity of 1.2 °C/(W/m2).

Figure 6. CO2 radiative forcing in the period 400-200 Ma BP; solid blue line: radiative forcing from [9] in coarse representation; dashed orange line: radiative forcing from the Eocene CO2-temperature relationship (this work) with 1.2 °C/(W/m2) as sensitivity

Conclusion from the 400-200 Ma period: The pattern of the radiative forcing from earlier computer studies is well reproduced by the simple Eocene relationship. It is noted that a sensitivity of 1.2 °C/(W/m2) is required for the agreement, whereas 0.75 °C/(W/m2) is perceived as a generally applicable standard. At this point, no interpretation can be given on the sensitivity specifics of this case; as a hypothesis, the difference may predominantly be attributed to water vapor.

2.4. Late Paleozoic, 420 Ma BP until present

So far, the considerations have each focused on rather specific periods, in which the Eocene CO2-temperature relationship has proven a viable tool to quantify the CO2-induced temperature variabilities. In this paragraph, the entire span from 400 Ma BP to the present will be analyzed utilizing the Eocene relationship. The CO2 data are now taken from [5] (as in the previous 400-200 Ma study, context of Figure 6), and the temperature data from [6]. Both data sets are judged coherent, state-of-the-art reconstructions for the considered period. They are shown together in Figure 7, the blue (mostly upper) line for the temperature and the orange line for the CO2 concentration.

Figure 7. Reconstructed surface temperatures (coarse reconstruction of [6]) and CO2 concentrations [5] for the Late Paleozoic; blue (mostly upper) line: temperature, left scale; orange line: CO2 concentration, right scale

From visual impression, the extremes exhibit rather consistent patterns: nearly the same CO2 concentrations correspond to the respective temperatures at the minima and maxima (except at the maxima of 90 and 55 Ma BP). In between, CO2 may lead temperature by circa 20 Ma (400-320 Ma BP) or lag by 20 Ma (280-220 Ma BP). From this, it appears improbable that a statistically significant correlation between the two variables can be extracted – unless artificially adapted for the 20 Ma time shifts. Since no explanation is in sight for a potential time lead/lag of this order, such statistical analysis is disregarded.

Instead, the Eocene relationship is applied to the CO2 concentrations. The resulting temperatures are depicted in Figure 8 (dashed orange line) with a constant subtraction of 3 °C, and compared to the reconstructed (measured) temperatures (solid blue line). Besides the artificial 3 °C-offset, the agreement between the two curves is perceived remarkably good. One may infer that the Eocene relationship represents the major temperature driving force.

However, it is known that the absorbed insolation is subject to modulations with time. Significant variability is to be expected from the constantly increasing solar luminosity (see ΔTsol of equation 2), from surface albedo via snow and ice coverage (e.g. regarding the Late Paleozoic icehouse at around 300 Ma), and, as proposed, from the cyclic cosmic ray intensities [10]. Further significant temperature influence is expected from tectonic changes (the entire considered period spans the assembly and break-up of the supercontinent Pangea).

Figure 8. Surface temperatures; solid blue line: geologic reconstruction, as in Figure 7; dashed orange line: temperature determined from the CO2 concentrations [5] via the Eocene CO2-temperature relationship minus 3 °C (this work)

The influence of the cosmic ray flux is approximated as proportional to the (normalized) cosmic ray intensity φ:

ΔTcrf = -4 * φ °C. (4)

The resulting variability of ~ 3 °C is found in consistency with [10].

The tectonic changes are apparent in the paleogeographic evolvement; Figure 9 shows a coarse reconstruction of [11]. The temperature impact is approximated via multiplying the coverages (in percent) of landmass, mountains, and ice sheets by -0.2 °C/%, and the coverages of water (shallow waters and deep ocean) by +0.2 °C/%, and applying a constant offset of -7 °C:

ΔTtec = (Σi fi * Ci - 7) °C, (5)

with i indicating the tectonic types, fi the coverage-temperature impact described before, and Ci the respective coverages (Figure 9).

Figure 9. Paleogeographic evolvement with time; Earth coverages in % from top to bottom: deep ocean (dashed blue), landmass (solid brown), shallow waters (dashed bright blue), mountains (solid ochre), ice sheets (dotted violet)

This approach means for instance: if land gives 1 % to water, then 0.2 °C is contributed by the reduction of land coverage and another 0.2 °C by the simultaneous increase of the water area, in total 0.4 °C. Originally introduced to explore the tectonic influences, ΔTtec in its given form is interpreted as predominantly reflecting albedo variabilities and in addition, overall land/water-driven climate variabilities (shift in the coverage ratio of continental vs. warm-humid climates).
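The bookkeeping of equation (5) and the worked example above can be sketched directly (dictionary keys and function name are mine; only the coverage types of Figure 9 vary, the other terms cancel in a difference):

```python
# Equation (5): dT_tec = (sum_i f_i * C_i - 7) degC, coverages C_i in percent.
# Sign convention from the text: land-like coverages cool, water-like coverages warm.
F = {"landmass": -0.2, "mountains": -0.2, "ice_sheets": -0.2,
     "shallow_waters": 0.2, "deep_ocean": 0.2}

def dt_tec(coverages):
    return sum(F[k] * c for k, c in coverages.items()) - 7.0

# Worked example from the text: land gives 1 % to water, using today's rough
# land/ocean split (29 % / 71 %) for illustration:
before = {"landmass": 29.0, "deep_ocean": 71.0}
after = {"landmass": 28.0, "deep_ocean": 72.0}
print(round(dt_tec(after) - dt_tec(before), 2))  # 0.4 degC, as stated
```

The 0.4 °C splits as described: 0.2 °C from the loss of land coverage plus 0.2 °C from the gain of water area.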

To put this into perspective, a 1 % land increase from today’s tectonics – with ocean and land coverages 0.71 and 0.29, respectively, the ocean and land solar surface absorptions of [1], and a sensitivity of 0.75 °C/(W/m2) – results in a temperature reduction of 0.26 °C. More qualitatively, the albedo of water clouds is about 10 % higher over land than over oceans, 0.46 versus 0.42 [12], contributing to higher surface insolation at oceans than at land. In conclusion, the albedo interpretation of ΔTtec and the chosen parameter set are viewed as principally supported by separate studies. For further instance, in the Late Paleozoic icehouse at around 300 Ma BP, the ice sheet contribution to ΔTtec is -2.9 °C if the ice area is recruited from water areas.

In summary, the total temperature is determined by

T = TCO2 + ΔTsol + ΔTcrf + ΔTtec. (6)

The result is depicted in Figure 10 by the dashed orange line and compared to the reconstructed (measured) temperatures (solid blue line). The agreement is perceived fair, particularly regarding the extensive period of about 400 Ma covering a large variety of disparate conditions. The pattern of the agreement remains principally unchanged (not shown) if considering the 68 % confidence boundaries for the CO2 concentrations of [5], the temperature discussion of [6], and a potential sensitivity dependency on the climate state by varying the non-CO2-terms in equation (6) by ±1⁄3. The agreement of the present high-level consideration with observations is seen as confirmation that the major temperature-determining components have been identified and that their respective contributions can be quantified by simple approximations.

Figure 10. Surface temperatures; solid blue line: geologic reconstruction, as in Figure 7 and Figure 8; dashed orange line: determined by equation (6) of this work based on the Eocene CO2-temperature relationship; dotted gray line: as before, with cosmic ray influence switched off and ΔTtec adapted; dot-dashed green line: as before (no cosmic ray influence), with ΔTtec replaced by a snow/ice albedo approximation and continental coverage (sea level)-to-temperature proportionality (see text)

By nature of the approximations, the regarded contributions subsume all relevant underlying processes. This particularly applies to the Eocene CO2-temperature relationship comprising e.g. atmospheric water vapor variations with temperature, changing ocean-atmosphere interaction with varying atmospheric CO2 concentration and temperature, and the temperature influence on the CO2 concentration (see above, Late Quaternary). TCO2 in equation (6) gives the near-surface temperature if CO2 was the only forcing. The further components of equation (6) act as correction terms, each again subsuming all underlying processes. These are explicitly incorporated in ΔTsol (equation 2) by applying the sensitivity of 0.75 °C/(W/m2) and implicitly incorporated via the factors -4 and fi in ΔTcrf (equation 4) and ΔTtec, (equation 5), respectively. Dependency of the sensitivity on the climate state is approximated as zero, cross-terms and higher-order terms in the forcing-to-temperature relationship are interpreted to be partly contained as averages in the insolation components of equation (6) (i.e. ΔTsol, ΔTcrf, ΔTtec) and to be partly attributed to the residuals.

To examine model alternatives, variations have been applied to equation (6). (A) First, the contribution from the cosmic ray flux is set to zero. With the parameters of ΔTtec changing from -0.2 to -0.3 °C/%, from +0.2 to +0.3 °C/%, and the constant to -15 °C, the temperatures are given as depicted by the dotted gray line in Figure 10. (B) From here, ΔTtec is replaced by two components. (i) Snow/ice albedo is approximated by a linear relationship to temperature: for TCO2 + ΔTsol > 17 °C, the relative albedo contribution is +3 °C; for lower temperatures, the contribution is (TCO2 + ΔTsol – 11.5) ∙ 0.545 °C. (ii) A temperature contribution is introduced proportional to the ocean continental coverage [13], which is a measure for the eustatic sea level; this temperature contribution is taken proportional as 0.2 °C per 1 % continental coverage difference with a constant offset of -6 °C. This temperature contribution is interpreted to originate from albedo variabilities. The resulting temperatures are shown in Figure 10 by the dot-dashed green line. (C) Introduction of effects from atmospheric oxygen variabilities leads to temperatures within the ranges exhibited in Figure 10 (therefore not shown).
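The piecewise snow/ice albedo term of variant (B)(i) can be sketched as follows (function name mine); note that the two branches nearly join at the 17 °C threshold, since (17 − 11.5) · 0.545 = 2.9975 ≈ 3:

```python
def dt_snow_ice(t_base):
    """Variant (B)(i): snow/ice albedo contribution (degC) as a linear
    function of T = TCO2 + dTsol, capped at +3 degC above 17 degC."""
    if t_base > 17.0:
        return 3.0
    return (t_base - 11.5) * 0.545

print(dt_snow_ice(20.0))           # 3.0 (warm branch, constant)
print(round(dt_snow_ice(17.0), 2)) # ~3.0 (cold branch, at the threshold)
```

Below 11.5 °C the contribution changes sign and cools, reinforcing the ice-albedo feedback that the term is meant to represent.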

In general, the pursued selective and simple driving-force consideration cannot cater for the entirety of all related processes. Major contributions to the temperature variabilities are expected from strong volcanic activities (beyond the CO2 effects) as well as from wind and ocean currents. The latter may be the cause for the deviations between about 50 and 30 Ma BP in Figure 10 which decrease by circa -4 °C during this period (differences between solid blue and dashed orange lines in Figure 10). Such progressive cooling may well be ascribed to changes in the ocean currents [14]. Also the model-to-reconstruction deviations before and after the center of the late Paleozoic icehouse (at about 300 Ma BP) are proposed to be predominantly attributed to warming contributions from – tectonically determined – ocean current specifics, these being largely reduced in the presence of wide-spread glaciation (i.e. at the center of the icehouse).

The proxy reconstructions used for the Late Paleozoic in this section deviate from those used for the derivation of the Eocene relationship in § 2.1. Nevertheless, the original relationship of equation (1) proves to be the best fit in the Late Paleozoic analysis.

Comparing Figure 10 (dashed orange line) with Figure 8 shows that the summed effect of the insolation variabilities (particularly from solar luminosity (ΔTsol) and albedo) roughly acts as a constant temperature reduction of 3 °C. As an example of detailed insight, the individual temperature contributions to T (equation (6), dashed orange line in Figure 10) are depicted in Figure 11.

Figure 11. Surface temperature contributions to the dashed orange line of Figure 10: TCO2 (solid blue) with 14 °C subtracted for presentation purposes, ΔTsol (dotted gray), ΔTcrf (dash-dotted green), ΔTtec (dashed orange)

For an illustration of the effects of reconstruction uncertainty, the 68 %-pCO2 confidence envelope is used for TCO2 of the dotted gray line in Figure 10, and the results are depicted by the dotted gray lines of Figure 12. The relative temperature uncertainties are emulated as 0.3 times the relative pCO2 uncertainties (68 % confidence). By this, the increase of uncertainty with depth into the past is accounted for; the absolute magnitude (the factor 0.3) is an intuitive choice. It is concluded that a detailed error treatment could not substantially alter the preceding considerations.
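The emulation described here amounts to scaling each reconstructed temperature by 0.3 times the relative pCO2 uncertainty. A minimal sketch, assuming the scaling is applied symmetrically and multiplicatively (the function name and example numbers are invented for illustration):

```python
def temperature_envelope(t, rel_pco2_unc, factor=0.3):
    """Emulate a temperature uncertainty band as `factor` times the
    relative 68 %-confidence pCO2 deviation from the maximum-probability
    value, applied symmetrically around the temperature estimate t."""
    rel = factor * rel_pco2_unc
    return t * (1.0 - rel), t * (1.0 + rel)

# Example: a 25 % relative pCO2 uncertainty around a 20 degC estimate
low, high = temperature_envelope(20.0, 0.25)
print(round(low, 3), round(high, 3))
```

Because the relative pCO2 uncertainty grows with depth into the past, the resulting temperature band widens toward older ages, as intended.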

Figure 12. Uncertainty consideration for reconstructed temperature and dotted gray model of Figure 10; gray: TCO2 computed with 68%-low/high confidence envelope for pCO2 instead of maximum probability pCO2; blue: temperature envelope by emulating uncertainties from the pCO2 data via 0.3 times their relative 68%-confidence deviation from the maximum probability value

Conclusion: The attempt to describe the fundamental climate determinants by simple means is considered successful. The Eocene CO2-temperature relationship is revealed to be applicable throughout (at least) the past 400 Ma, as results from comparisons with paleo-reconstructions (Eocene, Late Quaternary, Late Paleozoic) together with plausibility considerations on the further major climate determinants. CO2 delivers the major contribution to the climate variabilities. The second major influence stems from the modulation of the absorbed insolation by the sun’s luminosity, the planetary albedo (via paleogeography/tectonics, or snow/ice and sea level), and potentially cosmic rays. The Milankovitch cycles turn out to play a subordinate role for understanding the climate variabilities on the high level pursued in this study. However, there is room for other important contributions, particularly from ocean currents. At the very least, the benefit of the present analysis is a handy tool for estimates, particularly for quickly sizing the risk implied by the CO2-temperature relationship.

3. Interpretation

Methodologically, the present study is based on the principle that the determining forces of a natural phenomenon are (1) few and (2) clearly visible. The focus has been the search for the clearly visible in nature’s interplay between CO2 concentration and temperature.

With this focus, a sophisticated error calculation is regarded as subordinate. Remarks on error consideration are included (Late Paleozoic) and sensitivity studies performed (Eocene relationship, Late Paleozoic). In general, the presented studies are based on long-term trends. The approach presumes that the degree of agreement between approximation and observation is clearly visible in the long-term patterns. It is expected that a sophisticated error analysis would leave the degree of conclusiveness essentially unchanged.

The major goal, uncovering reproducibility from the abundant scientific results in an 80:20 approach, is considered achieved – strongly observation-based (Eocene, Late Quaternary, Late Paleozoic) and extracting simple descriptions. The analysis adopts a single value from previous modelling: Earth’s climate sensitivity for its response to the steadily increasing solar luminosity (sensitivity defined here as the transformation of a radiation change into a surface temperature change).

Due to the long time span considered in the initial derivation (15 million years), the Eocene CO2-temperature relationship reflects equilibrium climate states. Beyond conformance with measurements, the simple relationship agrees well with sophisticated simulation results (Eocene, PETM, Devonian to Triassic), offering itself as a handy tool for further analysis and attesting to the reproducibility of the complex models.

The interdependency between CO2 and Earth’s climate is clearly crystallized. Either direction in the temperature relationship – CO2 or temperature in the driver’s seat – is quantified by simple means. From this analysis, the sceptics’ argument that the CO2-temperature relationship reflects a spurious correlation seems difficult to maintain. At the very least, with societal responsibility, the risk must be assumed that nature treats any atmospheric CO2 concentration change according to the Eocene relationship.

Furthermore, the role of CO2 is put into perspective with the other major climate determinants, mainly those causing insolation variabilities (particularly solar luminosity and planetary albedo), with a note on the anticipated role of the ocean currents. The hope is that this will facilitate differentiation in the discussions.

Supplementary Material: All data and code are available: Simplified climate modelling.

References

  1. Wild M., Folini D., Hakuba M.Z., Schär C., Seneviratne S.I., Kato S., Rutan D., Ammann C., Wood E.F., König-Langlo G. The energy balance over land and oceans: an assessment based on direct observations and CMIP5 climate models. Clim Dyn 2015, 44, 3393–3429. https://doi.org/10.1007/s00382-014-2430-z.
  2. Hansen J., Sato M., Russell G., Kharecha P. Climate sensitivity, sea level and atmospheric carbon dioxide. Phil. Trans. R. Soc. A 2013, 371, 20120294. https://doi.org/10.1098/rsta.2012.0294.
  3. Hansen J., Sato M., Kharecha P., Beerling D., Berner R., Masson-Delmotte V., Pagani M., Raymo M., Royer D.L., Zachos J.C. Target atmospheric CO2: Where should humanity aim? The Open Atmospheric Science Journal 2008, 2. https://doi.org/10.2174/1874282300802010217.
  4. Petit J.R., Jouzel J., Raynaud D., Barkov N.I., Barnola J.-M., Basile I., Bender M., Chappellaz J., Davis M., Delaygue G., Delmotte M., Kotlyakov V.M., Legrand M., Lipenkov V.Y., Lorius C., Pépin L., Ritz C., Saltzman E., Stievenard M. Climate and atmospheric history of the past 420,000 years from the Vostok ice core, Antarctica. Nature 1999, 399, 429–436. https://doi.org/10.1038/20859.
  5. Foster G.L., Royer D.L., Lunt D.J. Future climate forcing potentially without precedent in the last 420 million years. Nat Commun 2017, 8, 14845. https://doi.org/10.1038/ncomms14845.
  6. Scotese C. A new global temperature curve for the Phanerozoic. 2016. https://doi.org/10.1130/abs/2016AM-287167. Herein: Scotese, Christopher. PhanerozoicGlobalTemperatureCurve_Small. 2016.
  7. Omta A.W., Dutkiewicz S., Follows M.J. Dependence of the ocean-atmosphere partitioning of carbon on temperature and alkalinity. Global Biogeochem. Cycles 2011, 25, GB1003. https://doi.org/10.1029/2010GB003839.
  8. Zhu J., Poulsen C.J., Tierney J.E. Simulation of Eocene extreme warmth and high climate sensitivity through cloud feedbacks. Sci. Adv. 2019, 5, eaax1874. https://doi.org/10.1126/sciadv.aax1874.
  9. Soreghan G.S., Soreghan M.J., Heavens N.G. Explosive volcanism as a key driver of the late Paleozoic ice age. Geology 2019, 47, 600–604. https://doi.org/10.1130/G46349.1.
  10. Shaviv N.J., Veizer J. Celestial driver of Phanerozoic climate? GSA Today 2003, 13(7), 4. https://doi.org/10.1130/1052-5173(2003)013<0004:CDOPC>2.0.CO;2.
  11. Cao W., Zahirovic S., Flament N., Williams S., Golonka J., Müller R.D. Improving global paleogeography since the late Paleozoic using paleobiology. Biogeosciences 2017, 14, 5425–5439. https://doi.org/10.5194/bg-14-5425-2017.
  12. Han Q., Rossow W.B., Chou J., Welch R.M. Global survey of the relationships of cloud albedo and liquid water path with droplet size using ISCCP. J. Climate 1998, 11, 1516–1528. https://doi.org/10.1175/1520-0442(1998)011<1516:GSOTRO>2.0.CO;2.
  13. Keller C.B., Husson J.M., Mitchell R.N., Bottke W.F., Gernon T.M., Boehnke P., Bell E.A., Swanson-Hysell N.L., Peters S.E. Neoproterozoic glacial origin of the Great Unconformity. Proceedings of the National Academy of Sciences 2018, 116, 201804350. https://doi.org/10.1073/pnas.1804350116.
  14. Yang S., Galbraith E., Palter J. Coupled climate impacts of the Drake Passage and the Panama Seaway. Clim Dyn 2014, 43, 37–52. https://doi.org/10.1007/s00382-013-1809-6.


Categories
Health

U.S. Covid cases are the lowest in a year as Memorial Day travel increases

A crowd of travelers check in for their flights at LAX on Friday, May 28, 2021.

Allen J. Schaben | Los Angeles Times | Getty Images

The U.S. has reported the lowest number of Covid-19 cases in more than a year as the country’s airports recorded the highest number of travelers since the pandemic began over the weekend of Memorial Day.

The 11,976 new cases reported on May 29 were the lowest since March 23, 2020, when 11,238 new cases were reported, according to Johns Hopkins University.

The seven-day average of 21,007 is the lowest since March 31 last year when it was 19,363.

On Friday, the TSA also reported the highest number of travelers since the pandemic began. More than 1.9 million people took to the skies for the long weekend. At the same time last year, the TSA counted only 327,000 passengers at its checkpoints.

The World Health Organization officially declared Covid-19 a global pandemic on March 11, 2020. The US reported 1,147 Covid cases that day. The pandemic would infect more than 33 million people in the United States and kill nearly 600,000 people.

Within a week of the WHO’s declaration, the number of daily TSA travelers dropped from 1.7 million to 620,000. As of March 25, the number was 203,000. Since March 11, 2021, the daily number of fliers has remained above 1 million.

More than 60% of adults in the US have received at least one dose of a Covid vaccine, while 40.5% of adults are fully vaccinated, according to data from the Centers for Disease Control and Prevention. President Biden announced earlier this month that his administration plans to increase the share of adults with at least one dose to 70% by July 4th. He also said he plans to fully vaccinate 160 million American adults by the same date.

“If we succeed in these efforts,” said Biden during his announcement, “then Americans will have taken a serious step towards a return to normal.”

The CDC recently said that fully vaccinated people do not need to wear masks in most settings, although masks are still required on airplanes, buses, trains and public transit. Cities across the country are lifting restrictions on indoor dining and gatherings as cases fall and vaccinations increase.

The White House chief medical advisor, Dr. Anthony Fauci, has repeatedly stated that he wants daily case numbers to fall below 10,000 before a major relaxation of safety measures takes place.

Categories
Entertainment

Queen Naija and her sister exchange words about CJ on Instagram

There’s only so much people can ignore when their name is tossed around on social media. All week long, Queen Naija has defended her actions after being called out by Chris Sails, the father of her eldest son, CJ. Chris shared on social media that he was upset that Queen hadn’t informed him about their son’s graduation from kindergarten. Both parents then traded graduation and parenting remarks online.

Just when it looked like things were finally settling down after the back and forth, Queen’s sister Tina posted a long YouTube video calling out her sister and her sister’s boyfriend, Clarence. In the video, she vented about how upset she was that her mom and Chris weren’t notified of CJ’s graduation so they could be in attendance. Tina admitted, however, that she had not spoken to Queen directly. Tina expressed that she felt the singer had failed to defend her in the media, particularly when her fans accused Tina of stealing Queen and Clarence’s credit card.

Hours later, Queen broke her silence and responded to her sister’s claims on the Gram, and chile, she said a mouthful! Queen said she didn’t want to do a video, so she opted for a long message. She wrote: “Nobody knows how much unnecessary disrespect I had to take privately for not exposing them. Even during my pregnancy. I took care of my family because I really care for them and love them.”

Tina quickly got wind of her sister’s response, got a few more things off her chest and called Queen out again.

Roommates, drop a comment and let us know what you think!

Would you like updates directly in your text inbox? Hit us at 917-722-8057 or https://my.community.com/theshaderoom

Categories
Sport

Liga MX last 2021: Date, time, odds, TV schedule & location for Cruz Azul vs. Santos Laguna

They are one of the biggest soccer clubs in Mexico and they have one of the largest fan bases. But they haven’t won a Mexican league title in 23 years.

But Cruz Azul, also known as “La Maquina,” are just 90 minutes away from finally ending the drought as they take a narrow 1-0 aggregate-goal lead into Sunday’s second leg of their Liga MX final series against Santos Laguna.

What time is the Liga MX final?

The Liga MX final is played over two legs. The team that scores the most total goals in both matches will be crowned Clausura 2021 champions. If the teams are tied on total goals after the second leg, they go to a 30-minute extra time. If the draw persists, then a penalty kick shootout will determine the champion.

Liga MX plays split seasons: two seasons per calendar year. The Clausura 2021 champion is recognized as the best team from January to May.

1st Leg Final Score: Santos Laguna 0, Cruz Azul 1

2nd Leg: Cruz Azul vs. Santos Laguna

  • Date: Sunday, May 30
  • Time: 9 p.m. ET / 6 p.m. PT
  • Stadium: Estadio Azteca

How to watch the Liga MX final in the USA

2nd Leg: Cruz Azul vs. Santos Laguna

Univision / TUDN will carry the 2nd Leg of the Liga MX final. Both outlets are available via fuboTV’s 7-day free trial.

Getty Images

Cruz Azul, Santos Laguna in Liga MX final

Since their last Mexican title in 1997, Cruz Azul have been to the Liga MX final six times, but they’ve come up short in every instance. These failures have haunted the team over 23 years, but they seem to be close to exorcising those demons once and for all come Sunday’s second leg at home against Santos Laguna.

Fans of La Maquina believe the team is jinxed. They are constantly waiting for the other shoe to drop, but they also feel optimistic about the 1-0 aggregate-goal lead their team secured in their first-leg victory on Thursday. They are so confident that they serenaded the team at the hotel they’re using as the base for Sunday’s second leg. What jinx? The scenes were incredible: 

The fans can feel how close it is. A Luis Romo goal in the 71st minute (video below) was all Cruz Azul needed on Thursday in a professional performance in enemy territory where they showed composure and personality. 

Santos came at them in waves to start the first leg, but Cruz Azul were unflappable and slowly took control of the game. La Maquina players revealed that they were extra motivated after the tragedy that struck the family of starting winger Roberto “Piojo” Alvarado. The player and his pregnant wife lost the baby they had been expecting and he was not with the team for the first leg in Torreon, Mexico. 

But Alvarado will be back for Cruz Azul in the second leg, as will Ecuadorian forward Bryan Angulo, who has recovered from injury and is available off the bench, according to reports.

Cruz Azul, the No. 1 seed in the Liga MX playoffs and the best team in the Liga MX regular season, will be favorites to hoist the trophy at home in the Estadio Azteca. A sold-out crowd of 21,000 fans is expected (25 percent capacity) with tickets selling out within 15 minutes of going on sale last week.

But success hasn’t come easy for Cruz Azul during these Liga MX playoffs. They’ve had to sweat out tough playoff series victories against Toluca in the quarterfinals (advancing 4-3 on aggregate goals) and Pachuca in the semis (1-0 on aggregate goals).

No. 5-seeded Santos and their young squad were never expected to make it this far and they had to win a play-in game just to make the postseason. But they’ll give Cruz Azul all they can handle after coming close to scoring on several occasions in the first leg. Santos were also unlucky to have a goal called back for offside when the score was still deadlocked at 0-0. 

Santos, who last won a Mexican league title in 2018, will need a comeback and they’ll hope their 22-year-old Mexican forward Eduardo Aguirre can return to his goal-scoring ways after he was held scoreless in the first leg. Entering the first leg, Aguirre had scored five goals in the five previous playoff matches.

Getty Images

How Liga MX tiebreaker works in the final

Unlike the quarterfinal and semifinal stage of the Liga MX playoffs, the final will NOT employ away goals as a tiebreaker in case teams are tied on goals scored.

If Cruz Azul and Santos are even on aggregate goals at the end of the second leg in Mexico City, 30 minutes of extra time and a penalty-kick shootout will determine which team will hoist the trophy at the Estadio Azteca.

Liga MX Final: Leg 2 Prediction

While fans and media wonder aloud whether another heartbreaking collapse is in the cards for Cruz Azul, this time there’s a different feel about this team. Judging by the first leg, they don’t seem overawed at all by the magnitude of the moment.

So expect Cruz Azul to make their narrow advantage hold up at home to win the trophy. Santos Laguna and their electric front line may very well find a goal to make things interesting (Both Teams To Score is at +100), but this Cruz Azul team will overcome. They won’t let this opportunity pass them by. 

Odds courtesy of DraftKings

  • Cruz Azul (Moneyline, 90-minute regulation): -106
  • Draw (Moneyline, 90-minute regulation): +240
  • Santos Laguna (Moneyline, 90-minute regulation): +290
  • Cruz Azul to win the trophy: -625
  • Santos Laguna to win the trophy: +400
  • Both teams to score: +100
  • Roberto Alvarado (Cruz Azul) to score: +310
Categories
Science

A dark matter map of our local cosmic neighborhood

Since it was first theorized in the 1970s, astrophysicists and cosmologists have done their best to solve the mystery of dark matter. This invisible mass is believed to make up 85% of the matter in the universe and 27% of its mass-energy density. In addition, it provides the large-scale skeletal structure of the universe (the cosmic web), which, through its gravitational influence, determines the movements of galaxies and material.

Unfortunately, the mysterious nature of dark matter means that astronomers cannot study it directly, which has prevented them from measuring its distribution. However, it is possible to infer its distribution from the observable influence of its gravity on local galaxies and other celestial objects. Using state-of-the-art machine learning techniques, a team of Korean-American astrophysicists has created the most detailed map of the local universe to date, showing what the “cosmic web” looks like.

The team responsible for this breakthrough was led by senior researcher Sungwook E. Hong of the University of Seoul and the Korea Astronomy and Space Science Institute (KASI). He was joined by associate professor Donghui Jeong of the Institute for Gravitation and the Cosmos at Penn State and researchers Ho Seong Hwang and Juhan Kim of Seoul National University and the Korea Institute for Advanced Study (KIAS).

Previous attempts to map the cosmic web began with a model of the early universe and then simulated its evolution over billions of years. However, this method has had limited success because of the enormous amount of computing power required. Taking a different approach, the team created a model that used machine learning to predict the distribution of dark matter from the known distribution and movement of galaxies.

The team built and trained this model using Illustris-TNG, a cosmological project that has performed multiple simulations using galaxies, gases, other forms of baryonic (also known as visible) matter, as well as dark matter. The team selected simulated galaxies from Illustris-TNG that were comparable to the Milky Way and identified the properties needed to predict the distribution of dark matter. Said Jeong:

“Ironically, it is easier to study the dark matter distribution much further away because it reflects the very distant past, which is much less complex. Over time, as the large-scale structure of the universe has grown, the complexity of the universe has increased, making it inherently more difficult to make local measurements of dark matter.”

“Once the model receives certain information, it can essentially fill in the gaps based on what it has seen previously. The map from our models does not perfectly match the simulation data, but we can still reconstruct very detailed structures. We found that including the motion of galaxies – their radial velocities – in addition to their distribution dramatically improved the quality of the map and allowed us to see these details.”
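The idea Jeong describes – inferring the unseen density field from the positions and velocities of tracer galaxies – can be caricatured with a toy inverse-distance-weighted regressor. All data and names below are invented for illustration; the actual study uses a deep network trained on Illustris-TNG simulations.

```python
import math

# Toy stand-in for the paper's machine-learning step: estimate an unseen
# "density" value at a target point from nearby tracer galaxies,
# weighting each tracer by inverse distance. Data are synthetic.
galaxies = [
    # (x, y, radial_velocity, local_density) -- synthetic tracers
    (0.0, 0.0, 120.0, 1.0),
    (1.0, 0.0, 80.0, 0.6),
    (0.0, 1.0, 200.0, 1.4),
    (1.0, 1.0, 60.0, 0.5),
]

def predict_density(x, y, tracers, eps=1e-6):
    """Inverse-distance-weighted estimate of density at (x, y)."""
    num = den = 0.0
    for gx, gy, _v, rho in tracers:
        w = 1.0 / (math.hypot(x - gx, y - gy) + eps)
        num += w * rho
        den += w
    return num / den

print(round(predict_density(0.5, 0.5, galaxies), 3))  # -> 0.875
```

A real emulator would learn a nonlinear mapping (including the velocity information) from thousands of simulated galaxy catalogs rather than interpolate; this sketch only conveys the "fill in the gaps between tracers" intuition.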

Map of the distribution of dark matter within the local universe, using a model to infer its location from its gravitational influence on galaxies. Credit: Hong et al., Astrophysical Journal

The next step was to apply this model to real data from the local universe, which the team obtained from the Cosmicflows-3 database. This astronomical catalog contains extensive data on the distribution and movement of over 17,000 galaxies within a 650 million light-year (200 megaparsec) region around the Milky Way. The resulting map successfully reproduced well-known prominent structures in the local universe.

These included the “Local Sheet,” a region of space that contains the Milky Way, Andromeda (and other members of the Local Group), and the galaxies of the Virgo Cluster. Another prominent structure was the “Local Void,” a relatively empty region of space next to the Local Group. In addition, several new structures were identified on the map, such as smaller filament structures that act as hidden connections between galaxies.

As can be seen from the cross-sections of the map (see above), large concentrations of luminescent material are shown in red, while largely empty sections are shown in blue. Galaxies are shown as small black dots, the Milky Way is denoted by the black X in the center, and the arrows represent the movement of these large-scale structures. The connecting filaments, which appear as wispy yellow threads, will need to be reexamined to learn more about these previously unknown features. Said Jeong:

“A local map of the cosmic web opens a new chapter in cosmological investigation. We can study how the distribution of dark matter affects other emission data, which helps us understand the nature of dark matter. And we can examine these filament structures directly, these hidden bridges between galaxies. “

“Because dark matter dominates the dynamics of the universe, it basically determines our fate. So we can ask a computer to evolve the map for billions of years to see what will happen in the local universe. And we can evolve the model backward in time to understand the history of our cosmic neighborhood.”

Illustris simulation showing the distribution of dark matter (left, as high-density white regions) and normal baryonic matter (right) across a region 350 million by 300,000 light-years. Credit: Markus Haider / Illustris

For example, scientists have known for some time that the Milky Way and Andromeda galaxies are slowly approaching each other. However, whether or not they will collide to form a supergalaxy (unimaginatively nicknamed Milkomeda) in an estimated 4.5 billion years remains unclear. By studying the filaments of dark matter that connect the two galaxies, astrophysicists could gain valuable insights into their future.

Hong and his colleagues also plan to improve the accuracy of their map by adding more galaxies. This will be possible thanks to next-generation missions like the James Webb Space Telescope (JWST), which will finally launch into space on October 31, 2021. With its advanced suite of instruments, the JWST will examine the universe at longer wavelengths, from the visible and near-infrared to the mid-infrared.

In this way, astronomers can identify galaxies that are smaller, fainter, and farther from our solar system. Improvements in computing and machine learning will also yield bigger and better simulations that can resolve more galaxies over longer periods of time. Similarly, missions such as ESA’s Gaia Observatory provide more accurate data on the proper motions and velocities of galaxies (astrometry).

ESA’s upcoming Euclid Observatory, scheduled to launch in 2022, will collect data on two billion galaxies across 10 billion light-years of space. This will be used to create the most detailed 3D map of the local universe to date, which is intended to provide important clues as to the role of dark matter (and dark energy) in cosmic evolution. Such maps give astronomers a means of comparison for testing whether their physics models are accurate.

The study describing these results, “Revealing the Local Cosmic Web from Galaxies by Deep Learning,” recently appeared in the Astrophysical Journal. The research was made possible with support from the National Research Foundation of Korea (funded by the Korean Ministry of Education and the Ministry of Science), the US National Science Foundation (NSF), the NASA Astrophysics Theory Program, and the KIAS Center for Advanced Computation.

Further reading: PSU, The Astrophysical Journal


Categories
Health

Summer is a low risk for Covid, but winter could be troublesome

The coronavirus threat in the US is likely to be on the low side this summer, but there is no guarantee that it will stay that way later this year, Dr. Scott Gottlieb told CNBC on Friday.

“I don’t think we should declare the mission accomplished. I think we should declare a short-term victory,” the former commissioner of the Food and Drug Administration said on “Squawk Box.”

Coronavirus cases in the country have fallen as more Americans are vaccinated against Covid. According to a CNBC analysis of Johns Hopkins University data, the 7-day average of new infections a day is 23,000. That has fallen by more than 50% since the beginning of May alone.

“I think we’ve done enough to give ourselves the opportunity to enjoy the summer and take a low risk this summer,” said Gottlieb, who headed the FDA from 2017 to 2019 and is now on the board of directors at vaccine maker Pfizer. However, he added, “I think this will be a risk when we get into autumn and probably earlier into winter.”

Later on CNBC, Gottlieb stated that he believed the risk was likely to increase in December and January.

“I think there are pockets all over the country that have low vaccination rates, that have people who haven’t been infected, so you’re going to see outbreaks. I don’t think we’re going to see anything on the scale of what we’ve seen in the past,” Gottlieb said on “Closing Bell.” “I think the public health steps we are going to take will be reactive, not proactive,” he added.

One reason for the cautious outlook for the colder months is that “we could see new variants,” said Gottlieb, who noted that respiratory pathogens such as the coronavirus generally spread more easily in winter. “I think we need to get better monitoring and sequencing of the strains so we can spot these variants faster,” he said.

The US cannot relax its efforts to have more people vaccinated either, said Gottlieb. This is a key factor in reducing risk across the country.

Around 50% of the country’s population had received at least one dose by Thursday, according to the Centers for Disease Control and Prevention. Gottlieb suggested that around 75% of the country could be vaccinated by the fall.

“So there is still a lot to be done. Right now we are on a pretty good path to doing the right things,” he said.

Disclosure: Scott Gottlieb is a CNBC contributor and a member of the boards of directors of Pfizer, genetic testing startup Tempus, healthcare technology company Aetion, and biotech company Illumina. He is also co-chair of the Healthy Sail Panel for Norwegian Cruise Line Holdings and Royal Caribbean.

Categories
Science

Social Value (Profit) of Carbon Dioxide from FUND with Corrected Temperatures, Power and CO2 Fertilization – Watts Up With That?

By Ken Gregory, P.Eng.     May 26, 2021

Climate policies such as carbon taxes are set by governments using social cost of carbon (SCC) values calculated by a set of economic computer programs called integrated assessment models (IAMs). The US government used modified versions of three IAMs, called PAGE, DICE and FUND. Neither PAGE nor DICE includes significant CO2 fertilization benefits. Dr. Pat Michaels wrote “By including the results of IAMs that do not include known processes that have a significant impact on the end product must disqualify them from contributing to the final result” and “The sea level rise module used by the IWG2013/2015 in the DICE model produces future sea level rise values that far exceed mainstream projections and are unsupported by the best available science.” Therefore, this article discusses the FUND model.

FUND is the most complex of the IAMs, linking scenarios and simple models of population, technology, economics, emissions, atmospheric chemistry, climate, sea level, and impacts. FUND distinguishes 16 major world regions. It is the only model used by the US government that includes the benefits of warming and CO2 fertilization. Unfortunately, the climate component of FUND that determines temperature is flawed, as it assumes that the deep oceans are instantly in temperature equilibrium with the atmosphere, without any time delay, when the equilibrium climate sensitivity (ECS) is 1.5 °C or less. The transient climate response (TCR) is defined as the temperature change, starting from equilibrium, of a 1% per year increase of CO2 concentration at the time when it doubles. If CO2 concentrations are then held constant, temperatures would continue to increase to the ECS as the oceans reach temperature equilibrium with the surface, which can take hundreds to more than a thousand years depending on the value of the ECS. The FUND temperature response at an ECS of 1.5 °C shows a TCR equal to the ECS, also 1.5 °C, which is impossible. Compared to the average of two climate models which each have an ECS equal to 2.1 °C, the FUND model runs 0.43 °C too warm in 2100 using the RCP4.5 emissions scenario.
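Why TCR must fall below ECS whenever the deep ocean takes up heat can be illustrated with a minimal two-box (surface plus deep-ocean) energy-balance sketch. All parameter values here are illustrative choices, not FUND's or any published model's:

```python
# Minimal two-box energy-balance sketch: deep-ocean heat uptake makes
# the transient response (TCR) smaller than the equilibrium response
# (ECS). All parameter values are illustrative.
F2X = 3.7              # W/m^2 forcing per CO2 doubling
ECS = 1.5              # degC, assumed equilibrium sensitivity
LAM = F2X / ECS        # feedback parameter, W/m^2/K
GAMMA = 0.7            # surface-to-deep heat exchange, W/m^2/K
C_S, C_D = 8.0, 100.0  # heat capacities, W*yr/m^2/K

def transient_response(years=70, dt=0.05):
    """Integrate a 1 %/yr CO2 ramp (forcing grows linearly in time,
    since forcing is logarithmic in concentration) with simple Euler
    steps; returns the surface temperature at doubling, i.e. this toy
    model's TCR."""
    t_s = t_d = 0.0
    for i in range(int(years / dt)):
        forcing = F2X * (i * dt) / years  # linear ramp to F2X at doubling
        d_ts = (forcing - LAM * t_s - GAMMA * (t_s - t_d)) / C_S
        d_td = GAMMA * (t_s - t_d) / C_D
        t_s += d_ts * dt
        t_d += d_td * dt
    return t_s

tcr = transient_response()
print(f"TCR = {tcr:.2f} degC vs ECS = {ECS:.2f} degC")
```

With any positive heat exchange GAMMA and a sluggish deep ocean, the surface warms less than its equilibrium value at the moment of doubling; setting the ocean instantly to equilibrium (as FUND effectively does below 1.5 °C ECS) is the only way to get TCR = ECS.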

The FUND model uses a default ECS of 3.0 °C based on the average of climate models that overwarm the lower atmosphere by a factor of two compared to global temperature measurements, as shown by this graph. This article shows that the climate models warm the sea surface at twice the rate of the measured temperatures. The models on average overwarm the tropical bulk atmosphere by a factor of 2.7. The models produce too much warming because they attribute natural warming caused by high solar activity and ocean cycles to greenhouse gas warming, and they fail to account for the urban heat island effect (UHIE) that contaminates the government temperature datasets.

The ECS can only be estimated using the energy balance method, which compares the climate forcings to historical temperature records. The paper Lewis & Curry 2018 presents estimates of ECS with uncertainty analysis. The authors estimated the median ECS at 1.50 °C with a likely (17%–83%) range of 1.20 – 1.95 °C using the HadCRUT4.5 temperature dataset. The probability distribution is shown as the blue curve of figure 1. The analysis was deficient in that the natural climate change from the base to the final period was not considered and no correction was applied to remove the UHIE from the temperature record. A large body of literature shows that the UHIE is a large part of the warming in government datasets and that the natural millennium cycle of warming from the Little Ice Age affects current temperatures, so it is incorrect to ignore these effects. Making these adjustments, the likely range of ECS based on energy balance calculations using actual historical temperatures is 0.76 – 1.39 °C with a best estimate of 1.04 °C. The red line of figure 1 is the corrected ECS probability distribution used to calculate the SCC.
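The energy balance method boils down to a simple ratio: the warming between a base and a final period, scaled by the forcing of a CO2 doubling, divided by the change in forcing net of ocean heat uptake. The sketch below uses illustrative round numbers, not the actual Lewis & Curry inputs.

```python
# Energy-balance ECS estimate: ECS = F_2x * dT / (dF - dQ).
# All input values here are illustrative placeholders, not the
# carefully derived period averages used in Lewis & Curry 2018.

F_2x = 3.80   # forcing from doubled CO2, W/m^2 (illustrative)
dT = 0.80     # warming between base and final periods, degC (illustrative)
dF = 2.50     # change in total forcing, W/m^2 (illustrative)
dQ = 0.50     # change in ocean heat uptake, W/m^2 (illustrative)

ecs = F_2x * dT / (dF - dQ)
print(f"ECS estimate: {ecs:.2f} degC")  # 1.52 degC with these inputs
```

Subtracting the ocean heat uptake dQ is the key step: heat flowing into the deep ocean has not yet produced surface warming, so ignoring it would bias the estimate low.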

The energy impact components of FUND are deeply flawed. The energy impacts are for space heating and cooling expenditures. In FUND, the expenditures depend on temperature anomalies relative to 1900, but expenditures actually depend on the actual temperatures where people live. The change of expenditures with temperature does not correspond to expenditure data published for the US states. A paper by Peter Lang and me, based on extensive energy consumption surveys in the USA, shows that a 3 °C temperature rise would decrease energy expenditures in the USA by 0.07% of gross domestic product (GDP), whereas FUND projects an increase of expenditures of 0.80% of GDP with non-temperature drivers held constant.
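A toy degree-day model illustrates the critique that conditioning demand depends on absolute local temperature, not on the anomaly relative to 1900. The 18 °C balance point is a common degree-day convention; the regional temperatures are hypothetical round numbers.

```python
# Toy illustration: space heating/cooling demand as degree-days around a
# balance-point temperature. A 1 degC warming then has opposite effects
# in cold and hot regions, which an anomaly-only formulation cannot capture.
COMFORT = 18.0  # degC balance point (common degree-day base; an assumption here)

def heating_demand(t):   # grows the colder the climate is below the balance point
    return max(COMFORT - t, 0.0)

def cooling_demand(t):   # grows the warmer the climate is above the balance point
    return max(t - COMFORT, 0.0)

# A 1 degC rise cuts heating demand in a cold region (e.g. -5 degC average)
# and raises cooling demand in a hot one (e.g. 26 degC average):
print(heating_demand(-5) - heating_demand(-4))  # -> 1.0 (heating falls)
print(cooling_demand(27) - cooling_demand(26))  # -> 1.0 (cooling rises)
```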

The FUND energy cost projections show very bizarre results. For example, when average temperatures in China reach 12.5 °C, China is forecast to spend over 38% of its GDP on space cooling with non-temperature drivers held constant at 2010 values, whereas when the USA reaches the same temperature it is forecast to spend less than 0.5% of its GDP on space cooling. Figure 2 shows the impacts, in percent of GDP, of heating expenditure changes due to temperature change. In China, when average temperatures are 5 °C, space heating expenditures decrease by 1.8% of GDP per °C of temperature rise, again with non-temperature drivers held constant at 2010 values, whereas in Canada, with temperatures less than 5 °C, space heating expenditures decrease by only 0.006% of GDP per °C of temperature rise.

A study by Dayaratna, McKitrick and Michaels (D, M & M 2020) of the CO2 fertilization effect and the FUND agricultural component shows that the FUND CO2 fertilization effect should be increased by 30%. The study says “New compilations of satellite and experimental evidence suggest larger agricultural productivity gains due to CO2 growth are being experienced than are reflected in FUND parameterization. … For numerous crop types around the world, CO2 fertilization more than offsets negative effects of climate change on crop water productivity, with some of the largest gains likely in arid and tropical regions”.

I have created a modified version of FUND that incorporates a 2-box ocean climate model tuned to closely match the temperature profile of climate models. A 2-box ocean energy balance model can replicate the temperature rise of climate models very well. A blog post by Dr. Isaac Held provides a set of equations and information about this model. The top 70 m of the oceans is well mixed and in near temperature equilibrium with the surface. Heat flow from this layer to the deeper ocean acts as a negative feedback, inhibiting the surface temperature rise. The results are shown in figure 3. The global temperature profiles of two climate models that each have an ECS of 2.1 °C are shown. The blue line is their average. The purple line is the FUND temperature profile with ECS set at 2.1 °C. The 2-box energy balance model is the orange line, which matches the model-average blue line well. All models use the RCP4.5 emissions scenario. Nic Lewis published an article showing that both the FUND and DICE climate modules are mis-specified. He calls the DICE module a "trillion dollar error".
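The 2-box structure can be sketched in a few lines: a shallow mixed layer exchanging heat with a large deep-ocean reservoir. The parameter values below are illustrative stand-ins, not the tuned values used in the modified FUND; the point of the sketch is that the deep-ocean heat uptake keeps the transient response well below the ECS, unlike FUND's instant-equilibrium assumption.

```python
import math

# Minimal 2-box ocean energy-balance model (after the equations in Isaac
# Held's blog post), driven by the 1%/yr CO2 TCR scenario.
F2x = 3.7            # W/m^2 forcing per CO2 doubling
ecs = 2.1            # degC target equilibrium climate sensitivity
lam = F2x / ecs      # feedback parameter, W/m^2 per degC
C, C_d = 8.0, 100.0  # mixed-layer / deep-ocean heat capacities (illustrative)
gamma = 0.7          # mixed-layer to deep-ocean exchange coefficient (illustrative)

T, T_d, dt = 0.0, 0.0, 0.05   # surface temp, deep temp, time step (years)
for step in range(int(70 / dt)):           # ~70 yr to doubling at 1%/yr
    t = step * dt
    F = F2x * t * math.log(1.01) / math.log(2.0)  # forcing grows linearly
    dT = (F - lam * T - gamma * (T - T_d)) / C    # mixed-layer energy balance
    dT_d = gamma * (T - T_d) / C_d                # slow deep-ocean uptake
    T, T_d = T + dT * dt, T_d + dT_d * dt

print(f"TCR ~ {T:.2f} degC vs ECS = {ecs} degC")  # TCR comes out below the ECS
```

With the deep-ocean exchange term removed (gamma = 0 and a small C), the surface tracks equilibrium and the TCR collapses onto the ECS, reproducing the FUND behaviour criticized above.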

I have replaced the flawed space heating and cooling components with new components that match the empirical USA heating and cooling data. The model assumes that when other regions reach the wealth per person of the USA in 2010, adjusted for the same energy efficiency and temperature, they will have similar space heating and cooling costs per capita as the USA. I also increased the FUND CO2 fertilization effect by 30% as recommended by D, M & M 2020. This allows me to calculate a realistic social net benefit of CO2 emissions using all impact sectors, weighted by the energy-balance-based ECS probability distribution.

The table below shows the SCC (negative means CO2 emissions are net beneficial) for emissions in 2020 in US and Canadian 2020 dollars, using 3% and 5% discount rates, with and without the CO2 fertilization update using the modified FUND. The Can$ to US$ exchange rate of 0.83 was used. The results show the net benefits of CO2 emissions range from 8 to 12 US$/tCO2 (10 to 14 Can$/tCO2) depending on the discount rate used.

The data show that climate change with CO2 fertilization effect is quite beneficial, so policies costing trillions of dollars to reduce CO2 emissions are misguided. Bjorn Lomborg estimates reducing global temperatures by 0.35 °C in 2100 would cost US$18 trillion. At the 3% discount rate, the 30% increase of the CO2 fertilization effect increases the benefits of emissions by US$3.32/tCO2.

The social cost (benefit) of CO2 is a marginal concept. It is the difference in forecast global wealth between a base case with CO2 emissions and no emissions control policies and a case with a pulse of CO2 emissions added in the year 2020, discounted to the year of the pulse and divided by the pulse size, giving the wealth impact in dollars per tonne of CO2. In FUND, the pulse size is 10 megatonnes (Mt) of CO2. If the SCC is positive, a tax equal to or less than the SCC may be imposed on CO2 emissions, but only after all other non-tax policies designed to reduce fossil fuel use are removed, along with any taxes on fossil fuels that exceed those imposed on other factors of production. Since this study shows that the SCC is negative, the optimum policy would be to subsidize CO2 emissions by an amount equal to the calculated net benefits.
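The marginal calculation described above can be sketched directly: difference the two damage paths, discount back to the pulse year, divide by the pulse in tonnes. The damage paths below are toy numbers standing in for FUND's sector impacts.

```python
# Sketch of the marginal SCC calculation: (damages with pulse - damages
# without), discounted to the pulse year, per tonne of the pulse.
def scc(damages_base, damages_pulse, pulse_mt, rate, pulse_year, years):
    """Discounted damage difference per tonne of CO2 (US$/tCO2)."""
    total = 0.0
    for yr, d0, d1 in zip(years, damages_base, damages_pulse):
        total += (d1 - d0) / (1 + rate) ** (yr - pulse_year)
    return total / (pulse_mt * 1e6)  # convert Mt to tonnes

# Toy damage paths in dollars; the pulse slightly *reduces* damages each
# year, so the SCC comes out negative (a net benefit), as in this study.
years = range(2020, 2120)
base = [1e9 for _ in years]
pulse = [1e9 - 4e5 for _ in years]
result = scc(base, pulse, 10, 0.03, 2020, years)  # 10 Mt pulse, 3% rate
print(round(result, 2))
```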

Figure 4 compares the temperature forecasts by FUND and the 2-box climate model, both using FUND’s default emissions scenario with ECS = 1.1 °C. FUND’s climate component causes too much warming.

Figures 5, 6 and 7 below show the empirical space heating and cooling impacts for 7 selected regions versus the regional temperatures, from 2000 to 2200, with non-temperature drivers held constant at 2010 values. I do this to show only the impacts of the temperature change. The regions are Canada, USA, Australia & New Zealand, North Africa, South America, China & nearby countries, and Small Island States. The ECS probability distribution gives a mean SCC equal to that calculated using an ECS of 1.13 °C, so the ECS is set to 1.1 °C for the following graphs and discussion.

Figure 5 shows the energy impacts which are the sum of the space heating and cooling impacts. A decrease in space heating cost due to a temperature rise results in an increase in GDP as people are left with more cash to spend on other things.

The impacts are positive for cold countries and negative for warm regions. Canada’s temperature in 2000 is much warmer than that shown in the FUND graph, figure 2, because I use the temperature at the population centroid latitude, not the geographical center of the country as used by FUND. Figure 6 shows the heating impacts. Small Island States (SIS) have no impact because their average temperature is above 26 °C so no heating is required.

Figure 7 shows the cooling impacts. An increase of cooling costs with temperatures decreases wealth.

Figure 8 shows the global energy, heating and cooling impact, again with non-temperature drivers held constant at 2010 values. Note that the temperature impacts on space energy (heating plus cooling) reduce expenditures and increase global wealth. The blue line shows that 2 °C of global warming would increase global wealth by 0.029%. By contrast, the default FUND parameters forecast that 2 °C of global warming would decrease global wealth by 0.37%.

Figure 9 shows the global impacts per GDP of seven impact sectors and the total impacts, with non-temperature drivers changing with time.

The non-temperature drivers of energy, including population and GDP per capita growth, have a large effect on the forecast. The large income growth causes forecast energy (mostly heating) expenditures to increase from 2000 to 2040 despite rising temperatures, resulting in a reduction of wealth per GDP. Figure 8, by contrast, shows that global energy impacts are always positive with non-temperature drivers held constant.

To get a better understanding of how temperatures affect the seven impact sectors, the calculated SCC values can be parsed by impact sector. Figures 10 and 11 show the percent contribution of each impact sector at 3% and 5% discount rates, respectively. Agriculture dominates the SCC values. At the 3% discount rate, agriculture represents 115.3% of the US$11.74/tCO2 net benefit. Water resources is the next largest at −6.0%. The mainstream media is fixated on storms and sea level rise, which are insignificant. Sea level rise damages are kept in check by protection expenditures, which are included via cost-benefit optimization. At the 5% discount rate, agriculture increases to 123.0% and ecosystems is the next largest at −11.2% of the US$8.41/tCO2 net benefit.
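Converting the quoted sector percentages into dollar contributions is simple arithmetic; the snippet below does it for the 3% discount-rate case, writing the net benefit as a negative SCC so each sector contribution carries the same sign convention (negative = benefit).

```python
# Parse the 3% discount-rate SCC by sector from the percentages quoted above.
# Sign convention: a negative SCC is a net benefit, so a sector with a
# positive percentage share contributes a negative (beneficial) amount.
net_scc = -11.74                                           # US$/tCO2 at 3%
shares = {"agriculture": 115.3, "water resources": -6.0}   # % of net benefit
contributions = {s: round(net_scc * pct / 100, 2) for s, pct in shares.items()}
print(contributions)  # agriculture -13.54 (benefit), water resources 0.70 (cost)
```

Sector shares can exceed 100% because the costly sectors (with negative shares) partially offset agriculture's benefit, leaving the smaller net total.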

_________________

An Excel file with all the data and calculations is here. [2,853 KB]

The FUND model can be downloaded and installed from here.

The IJulia notebook used to modify and run the FUND model in the html format is here. [1,754 KB]
