
Why are there no exposure-age data from glaciated Mexican volcanoes?

November 8, 2019

In a previous post a while ago, I used this diagram, which is a fairly famous (at least among glacial geologists) representation of the changes in the extent of mountain glaciers worldwide between the last glacial maximum (LGM) about 20,000 years ago and the present time, and is generally thought to have first been put together by the late glacial geologist Steve Porter.

This particular version appears in an article in Scientific American by Wally Broecker and George Denton. It shows a topographic profile down the North American Cordillera that forms the eastern edge of the Pacific Ocean, and highlights the difference between the equilibrium line altitudes of modern glaciers (red) and glaciers at the LGM (blue). I was using the diagram to highlight the impressive amount of cosmogenic-nuclide exposure age data that have been collected from glacial deposits worldwide and are tabulated in the ICE-D:ALPINE database; the blue dots added below show the locations of exposure-dated alpine glacial landforms in the mountains of North and South America as well as the same sector of Antarctica.

There are a lot of measurements here. Researchers who collect these data have focused for many years on visiting glaciers across the entire range of latitude, with the goal of learning about latitudinal variations in temperature and heat transport during the LGM and major late-glacial climate changes like the Younger Dryas and the Antarctic Cold Reversal. Nearly all the areas that Denton and Broecker specifically highlighted in this figure, as areas where prominent LGM-to-present glacial deposits could be found, have been visited and thoroughly studied.

With one exception. The “Mexican Volcanoes.” In fact, there is a pretty big gap between southern California (the southern end of the Sierra Nevada in the image) and Colombia (beneath the “America” in “Central America”). There are at least a few measurements from this range of latitude that I know about but that are not at the moment included in the database (mainly because of growing pains having to do with dynamically calculating chlorine-36 exposure ages); they are from Costa Rica and described in this dissertation. However, the reason that “Mexican Volcanoes” appears as a specific callout in this figure is that there are glacial deposits there, and they were fairly well studied in the 1980s by Klaus Heine and others, as described, for example, in this paper. But no one seems to have been there for exposure-dating purposes. Get to it, people! This is the only significant gap left in the Porter transect. It is easy to see why there are very few exposure ages on glacial deposits from, say, trans-Baikalia or interior Yukon, but the Mexican volcano sites are near major cities and only a few days’ drive, in your own car, from major cosmogenic-nuclide labs in the US. According to the Heine papers, there are enough radiocarbon dates on some of the moraines that they could potentially be used as production rate calibration sites, which could be useful. And if you are interested in tropical glaciers, the radiocarbon dates in that paper indicate that there are late-glacial moraines and ~30-35 ka moraines, but possibly not any standard-issue 18-25 ka LGM moraines. Is this really true? Are there boulders enough to figure it out? You could find out.


ccSTP is not an SI unit for a reason. Use moles instead.

October 31, 2019

This post is about the unit of “ccSTP,” or cubic-centimeters-at-standard-temperature-and-pressure, that is used to measure amounts of gas. This subject has been on my list to write about for a while, but I am ashamed to say that even though this posting has been sitting around in draft form for a couple of years, I couldn’t be bothered to actually finish it and push the ‘publish’ button until Marissa Tremblay brought up the subject. Really, there was no collusion:  convergent evolution actually happens.

The important difference is, I get more than 280 characters to talk about it, so I can begin at the beginning. If you are not like me and you began your career as a noble gas geochemist when you were young and impressionable in graduate school, instead of backing into it later as an already-opinionated mid-career cosmogenic-nuclide geochemist, you have absorbed the fact that these units are used in a major fraction of the scientific literature having to do with noble gas geochemistry (which is voluminous, that is, the literature is, not the gas). Your brain is hard-wired to think in ccSTP. Or, even worse, “ncc” or “pcc”, which are hopelessly uninformative shorthand in which the nano and pico are added and the STP left off. OK, I get this — as gas measurement technology initially developed, measurements of pressure, volume, and temperature were the data actually used to determine how much of a gas was present or to prepare a standard for mass-spectrometric measurements, and folks wanted to keep the units of subsequent measurements linked to the source data. So, being able to use this unit fluently indicates that you are well-grounded in the history and fundamentals of your craft. There is some analogy to the secret handshake.

If you are like me, however, you think these units are terrible, for the following reasons.

  1. I have no idea what “standard temperature and pressure” actually is. Believe it or not, Wikipedia lists eighteen different definitions of standard reference conditions used by various entities and organizations. Before sitting down to write this blog entry, I was pretty sure that most uses of STP in the noble gas literature implied something resembling 1 atmosphere at either 0° or 25° C, probably 0°. But now, having read the Wikipedia entry, I am no longer so sure. In addition, Wikipedia adds the rather disturbing information that in 1982, IUPAC changed their definition of the “P” part of STP from 101.325 kPa (1 atm) to 100 kPa (almost 1 atm). Did post-1982 geochemists know this? I probably wouldn’t have noticed for years.
  2. In cosmogenic-nuclide geochemistry, we think about pretty much everything in units of atoms. Concentrations in atoms per gram, production rates in atoms per gram per year, etc. So, in general, any measurements in the cosmogenic-noble-gas literature that are in ccSTP have to be converted to atoms or moles before one can do anything useful with them. However, the conversion factor to do this is deeply obscure. In fact, it is a testament to the profound obscurity of this unit that Google does not know how to do this. If I type “1 dram in jiggers” into a Google search box, I get the correct answer. “One furlong in cubits.” No problem. But “1 ccSTP in mol?” Total blank. It does not make me feel good about the societal relevance of noble gas geochemistry if no one at Google has even bothered to figure this out. On the other hand, perhaps they read the Wikipedia entry and they were too confused to proceed.

Hence, the rest of this entry will do two things. First, try to figure out what the ccSTP/mol/atom conversion factors are, so that even if we don’t really know what T and P were used in a particular paper, we ought to be able to get fairly close to the correct number of atoms. Second, encourage people to stop using these units. Just say no. Use moles or atoms, which at least have only one agreed-upon definition, instead.

So, conversion factors. First, to help future victims of typing ‘1 ccSTP in mol’ into Google, let’s include all the possible phrases one might want to search for. ccSTP in mol. ccSTP in atoms. nccSTP in mol. Convert ncc to moles. Convert ccSTP to moles. Convert ccSTP to atoms. That should give the Google index-bot a bit of a head start in finding this posting.

Even if Google doesn’t know about them, some references do give a conversion factor from ccSTP to mol. For example, we can consult page 6 of a fairly standard reference, the 2002 edition of Noble Gas Geochemistry by Ozima and Podosek. This page contains a table of constants that gives the conversion as 1 ccSTP = 4.465e-5 mol. OK, that is some progress, but what is STP? There are some clues in the table on this page, although it is not clear that they make things any clearer. First of all, the table reminds us that 1 atmosphere is 101.325 kPa, which makes one think that even though we are writing in 2002, we might be thinking about the pre-1982 IUPAC STP that used 1 atm instead of 100 kPa. Then the table also tabulates RT (the ideal gas constant times the temperature) in units of cm3 atm mol-1, which also makes one think that we are using 1 atm as standard P. But the table gives RT for both 0° C and 25° C, which is rather disturbing. So, OK, if PV = nRT, P = 1 atm, V = 1 cm3, and RT = 2.24141e4 cm3 atm mol-1 (this is the value tabulated for 0° C), then n = 4.4615e-5 mol, which is close to, but not exactly, what is in the table. The difference between 4.465e-5 and 4.4615e-5 could be rounding error or a typo, but then there have to be at least two typos, because it is further stated to be equal to 2.688e19 molecules, which is, in fact, 4.465e-5 * 6.022e23. However, it does appear that these authors were, in fact, assuming 0° C and 1 atm; if we use the 25° C value in the table (2.44655e4 cm3 atm mol-1), we get n = 4.0874e-5 mol, which differs much more from the tabulated value. To the extent that this book is a good representation of the noble gas literature generally, at least we can be reasonably confident that most folks think STP is 0° C at somewhere near 1 atmosphere (even if it remains rather mysterious what the value for 25° C is doing in this table at all).

Of course, lacking an early indoctrination in the arcana of noble gas geochemistry, I may just never have been informed of a global conspiracy to stick with the pre-1982 definition, which would be almost exactly analogous to the global conspiracy among radiocarbon-dating specialists to retain the obsolete Libby half-life for data reporting purposes. However, one would think that Ozima and Podosek would have mentioned it. Regardless, let’s assume, even though we are probably wrong for a lot of papers in the existing literature, that we are talking about the current IUPAC STP, which is 100 kPa and 0° C, or 273.15 K. We obtain the ideal gas constant from a different source (Wikipedia again) as 8.314462 m3 Pa K-1 mol-1. So, 1 cm3 is 1e-6 m3, 100 kPa is 1e5 Pa, and n = PV/RT = (1e5 * 1e-6)/(8.314462 * 273.15) = 4.4032e-5 mol, which, again, is fairly close to but a couple of percent different from the value tabulated in Ozima and Podosek. Apparently this unit is not unlike the biblical cubit.
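For anyone who wants to check this arithmetic, or rerun it the next time another definition of STP turns up, here is a minimal Python sketch that just evaluates n = PV/RT for one cubic centimeter of ideal gas under the three candidate STP definitions discussed above. It reproduces the numbers worked out in the text and does nothing more.

```python
# Moles of ideal gas in 1 cm^3 under various candidate "STP" definitions.
# n = PV/(RT), with R = 8.314462 m^3 Pa K^-1 mol^-1.

R = 8.314462          # m^3 Pa K^-1 mol^-1
V = 1e-6              # 1 cm^3 expressed in m^3
AVOGADRO = 6.022e23   # molecules per mol

candidate_stp = {
    "1 atm, 0 C (pre-1982 IUPAC)":  (101325.0, 273.15),
    "1 atm, 25 C":                  (101325.0, 298.15),
    "100 kPa, 0 C (current IUPAC)": (100000.0, 273.15),
}

for name, (P, T) in candidate_stp.items():
    n = P * V / (R * T)   # mol per ccSTP under this definition
    print(f"{name}: 1 ccSTP = {n:.4e} mol = {n * AVOGADRO:.4e} molecules")
```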

Although the assumption that most authors were paying attention to the IUPAC in 1982 may be too optimistic, I would guess that this value is probably the most sensible thing to assume, so:

1 ccSTP = 4.4032e-5 mol = 2.6516e19 atoms

1 nccSTP = 4.4032e-14 mol = 2.6516e10 atoms

1 pccSTP = 4.4032e-17 mol = 2.6516e7 atoms = 26.516 million atoms (this is actually a useful amount for cosmogenic He-3 measurements)
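If you would rather not retype these factors, here is a small Python helper that applies them. The function names are my own invention and not part of any standard package, and the conversion assumes the current IUPAC STP as above, so it carries the same couple-of-percent ambiguity as everything else in this post.

```python
# Convert gas amounts reported in ccSTP, nccSTP, or pccSTP to moles and atoms,
# assuming STP = 100 kPa and 273.15 K (the current IUPAC definition).
MOL_PER_CCSTP = 4.4032e-5
AVOGADRO = 6.022e23

PREFIX = {"cc": 1.0, "ncc": 1e-9, "pcc": 1e-12}

def ccstp_to_mol(amount, unit="cc"):
    """Amount of gas in moles, given an amount in ccSTP, nccSTP, or pccSTP."""
    return amount * PREFIX[unit] * MOL_PER_CCSTP

def ccstp_to_atoms(amount, unit="cc"):
    """The same conversion, reported in atoms (or molecules)."""
    return ccstp_to_mol(amount, unit) * AVOGADRO

# Example: 1 pccSTP of He-3 is about 2.65e7 atoms.
print(ccstp_to_atoms(1, "pcc"))
```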

These should at least get you within a few percent of the true value in most papers in the literature. If someone disagrees with this, or notices that I have made an error in the above, please let me know. And for heaven’s sake, if you are talking or writing about cosmogenic noble gases, use units of atoms or moles. You know what the pressure and temperature were when you made your gas standards, and we don’t, so please use this information to do the conversion to moles or atoms, yourself, now, before it becomes impossible later.


Isostatic rebound corrections are still on a squishy footing

September 18, 2019

This blog post covers the subject of whether or not you should account for isostatic elevation changes in production rate calculations, comes to the conclusion that we don’t really know the answer, highlights one cautionary tale that is probably relevant, and suggests some ways to fix the situation.

This subject is important because production rates vary with atmospheric pressure. Higher  elevation, less atmosphere, higher production rate. In exposure-dating, because it’s not possible to measure the mean atmospheric pressure over the entire exposure duration of a sample, what we actually do in nearly all cases is (i) measure the elevation, (ii) convert to atmospheric pressure using a relationship based on the modern atmosphere, and (iii) assume that the resulting pressure applies for the entire exposure duration. Unfortunately, for sites that are more than a few thousand years old, and especially for sites that are, or were, marginal to large ice sheets, this usual procedure has to be at least partly wrong on two counts. First, as ice sheets, sea level, and global temperature have changed in the past, the mass of the atmosphere has moved around: the atmosphere has to get out of the way of ice sheets, fill up space created by eustatic sea-level fall, and shrink and expand as global temperature cools and warms. The pressure-elevation relationship in many places was probably quite different in the past. Second, because of glacioisostatic depression and rebound associated with the advance and retreat of large ice sheets, many potential exposure-dating sites have not only had the atmosphere move around on them, but have themselves moved up and down in the atmosphere. These observations would tend to indicate that maybe we shouldn’t just compute the production rate at the present elevation — we should account for the fact that all these things are changing.
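For reference, the elevation-to-pressure conversion in step (ii) is usually some variant of the standard-atmosphere equation. The sketch below uses the ICAO standard atmosphere with its default sea-level pressure, temperature, and lapse rate; real exposure-age calculators generally substitute geographically variable sea-level values from a reanalysis climatology, so treat this as an illustration of the form of the relationship rather than the exact recipe used by any particular calculator.

```python
import math

def pressure_hpa(elevation_m, p_sl=1013.25, t_sl=288.15, lapse=0.0065):
    """
    Air pressure (hPa) at a given elevation from the standard-atmosphere
    relationship p = p_sl * (1 - L*z/T_sl)^(g*M/(R*L)).  Defaults are the
    ICAO standard atmosphere; substitute local sea-level pressure and
    temperature if you have them.
    """
    exponent = 9.80665 * 0.0289644 / (8.31447 * lapse)   # g*M/(R*L), ~5.256
    return p_sl * (1.0 - lapse * elevation_m / t_sl) ** exponent

# Example: at ~2000 m there is roughly 20% less atmosphere overhead than at
# sea level, hence a substantially higher production rate.
print(pressure_hpa(2000.0))   # ~795 hPa
```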

Several folks have tried to do this. Or, at least, part of it. Specifically, a number of exposure-dating papers have extracted elevation time series for their sample sites from glacioisostasy models (such as the ICE-XG series of global models generated by Richard Peltier and colleagues), computed production rates corresponding to those elevations, and then, basically, used a time-averaged production rate instead of an instantaneous modern production rate to compute the exposure ages. In effect this approach assumes that the atmosphere is stationary and the sites are moving up and down in it, and in all cases where I can remember it being applied, it results in lower calculated production rates (because the sites were ice-marginal, so at lower elevation in the past than they are now) and therefore older calculated exposure ages.
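Mechanically, the time-averaging part is straightforward. Here is a minimal sketch of the bookkeeping, assuming you already have an elevation time series for the site from a glacioisostasy model and some function that maps elevation to an instantaneous production rate; both the elevation history and the production-rate function in the example are invented for illustration.

```python
import numpy as np

def time_averaged_production_rate(t_yr, elev_m, prod_rate_of_elev):
    """
    Time-averaged surface production rate over an exposure interval, given a
    time series of site elevation (e.g., extracted from a glacioisostasy
    model) and a function mapping elevation to an instantaneous production
    rate.  This is only the bookkeeping; the production-rate function itself
    would come from a scaling model.
    """
    p = np.array([prod_rate_of_elev(z) for z in elev_m])
    dt = np.diff(t_yr)
    # trapezoidal integration of P(t), divided by the total exposure time
    return np.sum(0.5 * (p[:-1] + p[1:]) * dt) / (t_yr[-1] - t_yr[0])

# Toy example: a site now at 300 m that was isostatically depressed by 150 m
# at 15 ka and has recovered linearly since, with a crude exponential
# dependence of production rate on elevation.
t = np.linspace(0.0, 15000.0, 151)             # years before present
z = 300.0 - 150.0 * (t / 15000.0)              # site elevation through time (m)
P = lambda elev: 4.0 * np.exp(elev / 7600.0)   # made-up scaling function
print(time_averaged_production_rate(t, z, P))  # lower than P(300), so ages get older
```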

OK, this seems like a good idea. And it is possible to do this for nearly any exposure-dating study that deals with LGM-to-present events, because there exist global models for isostatic depression and relative sea-level change for the last glacial cycle. Of course, it’s not easily possible for pre-LGM exposure-dating applications.

However, there are several problems with this otherwise good idea.

One, as noted above, it’s not just the sites that are moving up and down in the atmosphere, the atmosphere is moving around as well. This aspect of the problem is the subject of a very good paper from quite a long time ago by Jane Willenbring (Staiger). This paper looked mainly at the atmosphere effects and pointed out, among other things, that they may act to offset isostatic depression near ice sheet margins, resulting in less variation in production rates than one would expect from each process individually.

Two, the production rate calibration sites are moving up and down too. Suppose that your production rate calibration sites experienced an increase in the surface production rate over time, as expected for glacioisostatic rebound in the absence of any offsetting atmospheric effects. If you don’t account for this — that is, if you just use the modern elevations of the calibration sites when calibrating — you will underestimate the production rate. If you then use this calibrated production rate, but make an isostatic rebound correction when computing ages for unknown-age sites, you will make the production rate estimate at your site, which is already too low everywhere because isostasy was left out of the calibration, even lower locally. Thus, the resulting exposure ages will be too old, twice over. Instead, to do this correctly, you must include the isostatic rebound correction in both the production rate calibration phase and the exposure-age calculation phase.
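A toy calculation makes the bookkeeping obvious. Suppose, purely for illustration, that isostatic depression reduced the time-averaged production rate at both the calibration site and the unknown-age site to 95% of what their modern elevations imply; all the other numbers are made up, and radioactive decay is ignored.

```python
# Toy illustration of the double-counting problem described above.
iso = 0.95                 # time-averaged / modern production-rate ratio (made up)
P_ref_true = 4.0           # "true" reference production rate, atoms/g/yr
S_cal, S_unk = 2.0, 3.0    # modern scaling factors at the two sites
T_cal, T_unk = 12000.0, 15000.0   # true ages (yr)

N_cal = iso * S_cal * P_ref_true * T_cal   # measured concentration, calibration site
N_unk = iso * S_unk * P_ref_true * T_unk   # measured concentration, unknown-age site

# Calibration WITHOUT the isostatic correction:
P_ref_nocorr = N_cal / (S_cal * T_cal)

# (1) No correction anywhere: the errors cancel and the age is right.
print(N_unk / (S_unk * P_ref_nocorr))            # 15000

# (2) Correction only at the unknown-age site: the production rate is now too
#     low twice over, so the age comes out too old.
print(N_unk / (iso * S_unk * P_ref_nocorr))      # ~15789

# (3) Correction in both calibration and age calculation: right again.
P_ref_corr = N_cal / (iso * S_cal * T_cal)
print(N_unk / (iso * S_unk * P_ref_corr))        # 15000
```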

There exist several papers that appear, on a reasonably close reading, to have made this mistake and compounded the error, although in many cases it is hard to tell because the details have disappeared into a methods supplement. Because this is a blog entry and not an actual journal article, I don’t have to name them all in detail. But the point is that ages calculated in this way are unambiguously wrong. If you want to use an isostatic rebound correction in your exposure-age calculations, you must also use it in your production rate calibration. Or you can do neither: if you have ice-marginal calibration sites and ice-marginal unknown-age sites with similar rebound histories, and you don’t make the correction at either one, you will get close to the right answer, because the error you introduce by skipping the correction in part 1 will be canceled by the offsetting error in part 2. Thus, no matter what, it is unquestionably better not to do this correction at all than to do it halfway.

The third problem is that the existing calibration data set does not show any evidence that a correction based on elevation change alone is either necessary or desirable. This is kind of a complicated claim, and it’s important to clarify the difference between (i) testing a particular model, that is, asking “does a particular glacioisostasy model, ICE-5G or whatever, improve the performance of production rate scaling models when measured against calibration data” and (ii) asking whether the calibration data themselves provide any evidence that production rates are affected by glacioisostatic elevation change considered alone. I would argue that we have to do (ii) first. Basically, if we can’t establish (ii) from the calibration data, then it is very difficult to determine what the answer to (i) would mean, or if a “yes” answer was meaningful at all.

So, here I will consider item (ii). The basic principle here is that the existing set of calibration data includes both calibration sites that are marginal to large ice sheets, and those that are not. If glacioisostatic elevation change by itself is an important control on production rates, then ice-sheet-marginal calibration sites will yield lower production rates than far-field ones.

This seems like a pretty simple question, but it is not obvious that anyone has actually answered it. A recent paper by Richard Jones and others gets tantalizingly close to at least asking the question, but then disappointingly stops at the last minute. The bulk of this paper is a quite thorough global calculation of the effect on time-integrated production rates expected from glacioisostatic elevation change. In the course of discussing whether or not this should be routinely taken into account in exposure-dating studies, the authors point out that the four (or six, depending on how you group the Scottish data) calibration sites for beryllium-10 data in the widely used “CRONUS-Earth primary calibration data set” are all in areas that experienced very small glacioisostatic elevation changes, so if you use this calibration data set, including or not including a glacioisostatic elevation change correction in the production rate calibration phase doesn’t really make any difference. This is true and, of course, represents a loophole in the absolutely strict rule described above that if you want to use an isostatic correction in exposure-age calculations, you must also use it in production rate calibration. After this, however, the paper stops thinking about calibration data and goes on to other things. Stopping here is extremely disappointing in the context of the otherwise very clear and comprehensive paper, because there are not just four Be-10 production rate calibration sites. In fact, there are 38, seventeen of which are from ice-sheet-marginal features. Although this is the subject of a whole different discussion, there is no particular reason that most of these sites are more or less reliable than the “CRONUS primary” calibration data, which, IMHO, were somewhat arbitrarily selected. Thus, it is possible to test production rate predictions based on glacioisostatic elevation change against a fairly large data set of calibration measurements from both ice-sheet-marginal and far-field sites, and as I read this paper I was very surprised that the authors did not do it. Or, at least, I could not find it in the paper. Why not?

So, let’s try it, at least to the extent feasible without making this blog posting any longer than its current 14,000 words. Specifically, we are asking the question: are production rates inferred from ice-sheet-proximal calibration sites lower than those inferred from other calibration sites? If glacioisostatic elevation change by itself is an important control on production rates, we expect the answer to be yes. However…

…the answer is no. The above histograms show Be-10 production rate scaling parameters for LSDn scaling inferred from all Be-10 calibration sites in the ICE-D:CALIBRATION database (top panel) as well as ice-sheet-marginal and far-field subsets (lower two panels). The blue histograms are for all samples; the red histograms are for averages from each site. Remember, the LSDn scaling algorithm directly predicts production rates with units of atoms/g/yr instead of nondimensional scaling factors, so the calibrated parameter is accordingly a nondimensional correction factor rather than a “reference production rate” with physical units. Regardless, the mean and standard deviation of correction factors inferred from ice-sheet-marginal and far-field calibration sites are, basically, the same. Almost exactly the same.
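For anyone who wants to repeat this comparison, the calculation is nothing more than averaging the calibrated correction factors by site and then comparing the two groups of sites. The few data rows in the sketch below are placeholders rather than actual ICE-D:CALIBRATION values; the real exercise just applies the same bookkeeping to the exported database.

```python
import numpy as np
from collections import defaultdict

# Each record: (site name, ice-sheet-marginal flag, calibrated LSDn correction
# factor for one sample).  These rows are placeholders, not real database values.
records = [
    ("site_A", True, 0.81), ("site_A", True, 0.84),
    ("site_B", False, 0.79), ("site_B", False, 0.83),
    ("site_C", True, 0.86), ("site_D", False, 0.85),
]

# Average the per-sample factors by site, then summarize each group of sites.
by_site = defaultdict(list)
is_marginal = {}
for site, marginal, factor in records:
    by_site[site].append(factor)
    is_marginal[site] = marginal

groups = {True: [], False: []}
for site, factors in by_site.items():
    groups[is_marginal[site]].append(np.mean(factors))

for marginal, vals in groups.items():
    label = "ice-sheet-marginal" if marginal else "far-field"
    print(f"{label}: mean = {np.mean(vals):.3f}, std = {np.std(vals, ddof=1):.3f}")
```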

This result, therefore, fails to provide any evidence that production rates are significantly affected by glacioisostatic elevation change alone. Instead, it tends to suggest that elevation-change effects at ice-marginal sites could very well be substantially offset by atmospheric effects. This, in turn, implies that one should probably not include isostatic effects in a production rate scaling model without also including atmospheric mass redistribution effects.

There is no point in giving up too soon, though, so let’s try to come up with some other first-principles predictions that we can test with the calibration data. The simplest prediction was just that production rates from ice-marginal calibration sites should be lower. A slightly more complex prediction is that we expect that this effect should be more pronounced for younger calibration sites. Suppose an ice sheet margin retreats between the LGM and the mid-Holocene, and we have ice-marginal calibration sites that deglaciated at a range of times during this retreat. In general terms, the sites that are closer to the center of the ice sheet deglaciated more recently and should be more affected by isostatic depression: not only is the magnitude of the depression typically larger at sites near the center of the ice sheet, but the period of time during which the sites are isostatically depressed makes up a larger fraction of their total exposure history. Therefore, we expect a correlation between the age of ice-marginal calibration sites and the inferred production rates: all sites should give lower production rates than non-ice-marginal sites, but the younger ice-marginal sites should yield lower production rates than the older ones. Is this true?

The answer: well, sort of. The plot above shows values of the LSDn calibration parameter inferred from far-field sites (blue) and ice-sheet-marginal sites (red). The solid and dotted black lines give the mean and standard deviation of the entire data set. Production rates inferred from non-ice-proximal calibration data are uncorrelated with site age, as they should be, but those inferred from the ice-marginal sites are strongly correlated with site age. This correlation would be a point in favor of an elevation-change effect being important…except for one thing. As expected from the previous observation that the two groups yield the same mean production rate, half the ice-marginal sites give production rate estimates that are HIGHER, not lower, than the mean of the non-ice-marginal sites. If glacioisostatic elevation change is the primary control on production rate, we do not expect this. Forebulge effects are not expected for sites that are well within the LGM boundaries of the ice sheets, so (at least to me) there appears to be no obvious way that elevation change alone can account for both (i) the similarity of the distributions of production rate estimates inferred from ice-marginal and far-field sites, and (ii) the strong correlation between age and inferred production rate for the ice-marginal sites. Overall, this result is sort of ambiguous but, again, appears to indicate that correcting for glacioisostatic elevation changes by themselves is oversimplified.

The summary up to this point is that arguing that glacioisostatic elevation change should be an element in production rate scaling models is probably reasonable. However, both the calculations in Jane’s paper and an analysis of available calibration data indicate that it is probably oversimplified, and probably not a good idea, to correct only for the elevation changes without also considering simultaneous atmospheric mass redistribution.

On to the cautionary tale, which is that this situation closely parallels a somewhat obscure but pertinent aspect of cosmogenic-nuclide applications to erosion-rate measurements. Specifically, this is the question of whether or not to apply topographic shielding corrections to production rate estimates for basin-scale erosion-rate calculations.

In the isostasy example, we know that there are two time-dependent effects on production rates in ice-sheet-marginal areas, and we think they probably offset each other, at least to some extent. One of them (isostatic elevation change) is easy to calculate, because there exist a number of global, time-dependent, LGM-to-present glacioisostasy models that can be easily used to produce a time-dependent elevation change estimate for any site on Earth. The second one (atmospheric mass redistribution) is difficult to calculate, because there are no equivalent, easily accessible, LGM-to-present models for atmospheric pressure distribution.

In the topographic shielding example, we know that the erosion rate we infer from a nuclide concentration in sediment leaving a basin is proportional to both the mean surface production rate in the basin and the effective vertical attenuation length for subsurface production. In a steep basin where many parts of the basin are shielded by surrounding topography, shielding reduces the mean surface production rate but lengthens the effective vertical attenuation length. Because these effects act in opposite directions, corrections for them offset each other. Again, one of these corrections is easy: using digital elevation models to compute the reduction in the mean surface production rate in a basin due to topographic shielding is a satisfying and fun application of raster GIS. And one of them is hard: computing the shielding effect on the subsurface attenuation length is computationally painful, time-consuming, and really not fun at all. The result is that for quite a long time, many researchers just did the easy correction and ignored the hard correction, until a thoroughly helpful paper by Roman DiBiase did the hard part of the correction, and showed that doing only the easy correction results in an answer that is more wrong than it would have been if you had ignored both corrections. Oops.
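Some toy numbers make the offset concrete. The shielding factors below are invented for a hypothetical steep basin, and decay is ignored, so this is only an illustration of the proportionalities, not of anyone's published correction scheme.

```python
# Ignoring decay, the inferred erosion rate is E = P_mean * Lambda / (rho * N).
# Shielding lowers the basin-mean surface production rate (easy to compute) but
# lengthens the effective vertical attenuation length (hard to compute), so the
# two corrections pull in opposite directions.  All factors here are made up.

rho = 2.65        # rock density, g/cm^3
N = 1.5e5         # Be-10 concentration in stream sediment, atoms/g
P_mean = 10.0     # uncorrected basin-mean surface production rate, atoms/g/yr
Lam = 160.0       # uncorrected effective attenuation length, g/cm^2

f_P = 0.90        # shielding factor on surface production (the "easy" correction)
f_Lam = 1.10      # factor on the attenuation length (the "hard" correction)

def erosion_rate_m_per_Myr(P, L):
    return P * L / (rho * N) * 1.0e4    # cm/yr converted to m/Myr

print(erosion_rate_m_per_Myr(P_mean, Lam))                 # no corrections: ~40 m/Myr
print(erosion_rate_m_per_Myr(f_P * P_mean, Lam))           # easy correction only: biased low
print(erosion_rate_m_per_Myr(f_P * P_mean, f_Lam * Lam))   # both corrections: close to the first
```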

The analogy is pretty clear. Both cases feature two offsetting corrections, one easy and one hard, and a seductive heuristic trap created by the availability of code to do the easy correction but not the hard one. It seems like it might be a good idea to learn from history, and try not to shove our left foot in a mouth still filled with the taste of our right shoe.

So, to conclude, what should we do or not do about this? Well, for one thing, we should probably not apply a time-dependent isostatic rebound model to production rate calculations without an atmosphere model as well. The evidence seems to indicate that this is probably oversimplified and may not improve the accuracy of exposure ages. However, it is not obvious that there is a good candidate atmosphere model at the moment; this may require some work. And then, if we use both models, it is necessary to apply them to both calibration data and unknown-age sites, which requires hard-coding them into everyone’s exposure-age calculation code. We should complete the Jones paper by doing a better job than I have done above of examining whether scatter in fitting calibration data to production rate scaling models can be explained by glacioisostatic or atmospheric-mass-redistribution effects. And, finally, we will probably need to collect more calibration data. This is because applying these corrections will very likely have the largest effect on inferred exposure ages for times and places where we do not have calibration data, and it is probably not a good idea to extrapolate production rate corrections into places outside the range of calibration data, where being wrong could incur large and hard-to-identify errors. Collecting new calibration data at places and times where we expect the largest corrections, and where elevation-change and mass-redistribution effects are likely to offset or compound each other, would add a lot of credibility to the argument that these effects are important.


The DMC:ANTARCTICA project

August 29, 2019

By Perry Spector

Summary

This is the first guest post on this blog, and the point is to describe a data-model comparison project called DMC:ANTARCTICA that I have been working on for the past year. The overall objective of the project is to use the entire dataset of cosmogenic-nuclide measurements from Antarctica to provide synoptic evaluation of Antarctic ice-sheet models that simulate the evolution of the ice sheet over long time periods. This project is still under development, but there now exists a publicly available DMC:ANTARCTICA web interface that serves data-model comparisons for every site in Antarctica where data exist and for 15 different ice-sheet model simulations.

Why?

We rely on numerical ice-sheet models for establishing the climate sensitivity of the Antarctic ice sheet and for forecasting its future behavior and contribution to sea level. Although these models provide glaciologically feasible reconstructions of ice-sheet evolution, they are of limited use unless they can be shown to agree with observations of past ice-sheet change.

Considerable progress actually has been made on this front. Recent simulations by David Pollard, Pippa Whitehouse, Rob Briggs, and others that are constrained by geologic observations (e.g. relative sea-level curves from Antarctica, exposure ages, etc.) do, in fact, agree with those observations remarkably well in many cases.

Existing data-constrained models, however, are primarily restricted to the period between the LGM and the present. Longer simulations, spanning important time periods when the climate was as warm or warmer than present, have not been thoroughly tested. One limitation to doing this is that many of the constraint datasets, such as RSL curves from Antarctica, do not extend beyond the LGM. On Pleistocene-Pliocene timescales, cosmogenic-nuclide measurements from Antarctica are some of the only data available to test hypotheses of past ice-sheet behavior.

The existing set of Antarctic cosmogenic-nuclide measurements

All known published (and many unpublished) cosmogenic nuclide data from Antarctica have been compiled, mostly by Greg Balco, in the ICE-D:ANTARCTICA database. The database currently comprises 4361 measurements on 2551 samples. For this to be a useful set of ice-sheet model constraints, it should span as much of the continent as possible and also be distributed in time. Let’s look at the breakdown. The figure below shows the distribution for samples of glacial erratics and for bedrock surfaces.

Minimum and maximum circle sizes represent 1 and 88 samples, respectively.

Both populations do a pretty good job highlighting where outcropping mountains exist in Antarctica. There are large areas, however, that are entirely unrepresented, such as the East Antarctic interior and the coast between northern Victoria Land and the Lambert-Amery Basin. Fortunately these are sectors where most ice-sheet models predict relatively small changes over glacial-interglacial cycles. For glacial periods, models predict large ice thickness changes in the Ross, Weddell, and Amundsen Sea sectors, areas where the coverage is relatively good.

Interestingly, around 80% of the samples are of glacial erratics while the remaining 20% are of bedrock surfaces. This distribution is probably, in part, due to the combination of two factors. First, outcropping bedrock surfaces in Antarctica that were ice-covered during the LGM commonly have exposure ages much older than the LGM because, in many cases, past ice cover was frozen to the bed and non-erosive. Second, the objective of many exposure-dating projects has specifically been to establish the ice sheet’s configuration during or following the LGM, and so these projects likely opted to sample glacial deposits rather than bedrock. However, as has been discussed by some authors (e.g. here) and as will be discussed below, well-preserved bedrock surfaces can provide important information about exposure and ice cover over multiple glacial-interglacial cycles.

The next set of maps shows the distribution by nuclide.

Most samples in the database only have a single analysis of a single nuclide, and, unsurprisingly, that nuclide is usually Be-10. Multiple nuclides (e.g. Be-10 and Al-26) have been measured on some samples, as shown below. These samples are particularly important because paired nuclide data can provide information not just about past exposure, but also about past ice-cover and erosion.

The next figure shows histograms of exposure ages (note logarithmic axis) and gives a sense of the temporal distribution of the database.

Although a large fraction of the samples only provide information about the last ice age and the subsequent deglaciation (i.e. the focus of many exposure dating studies), a substantial population provides constraints on timescales extending as far back as the Miocene.

Therefore, to summarize, the set of Antarctic cosmogenic-nuclide measurements is reasonably well distributed in space and time, which suggests that it should be of some use for evaluating numerical ice-sheet models that run over a range of time periods. It would be useful if the database contained more samples from slowly-eroding bedrock surfaces, but the coverage isn’t terrible at least.

A data-model comparison framework

For most samples, there are an infinite number of different histories of exposure, ice-cover, and erosion that could produce the observed nuclide concentrations. Therefore, these data cannot be simply inverted to generate unique ice-thickness chronologies. However, they can be used to distinguish incorrect chronologies predicted from ice-sheet models from plausible ones. The basic idea is that, for a given rock in Antarctica, the history of exposure and ice-cover simulated by a model predicts certain concentrations of cosmogenic nuclides at the present-day that can be directly compared to concentrations that have actually been measured in the rock.

This isn’t a new idea. In a 2012 paper, Sujoy Mukhopadhyay and others reported that Be-10 and Ne-21 concentrations measured in bedrock samples from the Ohio Range were consistent with concentrations predicted by a 5 million year ice-sheet model run. Since then, the only other example that I am aware of is a paper I recently wrote with John Stone and Brent Goehring, which is under review in The Cryosphere, that evaluates several models using results from the Pirrit Hills and the Whitmore Mountains.

However, rather than doing one-off data-model comparisons for one or two sites, the objective of the DMC:ANTARCTICA project is to perform synoptic evaluation of model performance using all available cosmogenic-nuclide measurements from Antarctica. At the moment, that capability is not yet available, but it is coming. What is available is the DMC:ANTARCTICA web interface which serves comparisons between every site in Antarctica where data exist and a number of published ice-sheet model simulations. The rest of this post will give a basic description of how the web interface works, and highlight a couple interesting results.

For a given site in Antarctica and ice-sheet model of interest, the modeled ice-thickness history at the site is extracted via bi-linear interpolation. Here is an example from the model of Pollard & DeConto (2009) for the Pirrit Hills.

The next steps are to (i) generate synthetic samples of both bedrock surfaces and glacial erratics spanning a range of altitudes at the site, (ii) determine the history of exposure and ice cover experienced by each sample, (iii) use those histories to calculate the present-day concentrations of various nuclides, and then (iv) compare those predictions to concentrations that have actually been measured in samples from the site.
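To be clear, the actual concentration predictions are made with the Matlab/Octave machinery described below, not with the snippet here. But the core of step (iii) is simple enough to sketch: given an ice-thickness history converted to an exposure/ice-cover history, integrate production and decay forward to the present. The sketch assumes spallation production only, no erosion, and complete shielding under ice, and the exposure history in the example is invented.

```python
import numpy as np

def predicted_concentration(t_ka, exposed, P, half_life_ka):
    """
    Present-day nuclide concentration (atoms/g) for a sample with a known
    exposure / ice-cover history.  t_ka: time steps from the start of the model
    run to the present (ka before present, decreasing to 0); exposed: boolean
    array, True when the sample is ice-free; P: local surface production rate
    (atoms/g/yr).  Spallation only, no erosion, full shielding under ice.
    """
    lam = np.log(2.0) / (half_life_ka * 1e3)      # decay constant, 1/yr
    N = 0.0
    for i in range(len(t_ka) - 1):
        dt = (t_ka[i] - t_ka[i + 1]) * 1e3        # step length, yr
        if exposed[i]:
            # production and decay during this step
            N = N * np.exp(-lam * dt) + (P / lam) * (1.0 - np.exp(-lam * dt))
        else:
            # ice-covered: decay only
            N = N * np.exp(-lam * dt)
    return N

# Toy history: exposed 100-60 ka, ice-covered 60-15 ka, exposed again 15-0 ka.
t = np.arange(100.0, -1.0, -1.0)                  # 100 ka to present, 1 ka steps
covered = (t <= 60.0) & (t > 15.0)                # True where ice-covered
print(predicted_concentration(t, ~covered, P=5.0, half_life_ka=1387.0))  # Be-10
```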

For each DMC request, the calculations are performed in real time on a Google Cloud virtual machine. Concentrations, as well as exposure ages, are calculated using the Matlab/Octave code behind the version 3 online exposure age calculator, which was described by Balco et al. (2008) and has since been updated. Additional details about data-model comparison calculations are available here.

All samples are assumed to have zero cosmogenic nuclides at the beginning of the model simulation. Thus, running the web interface for a site where there are 12-million-year Ne-21 exposure ages, paired with an LGM-to-present ice-sheet model, will obviously produce large data-model misfits that are unrelated to the actual performance of the model. Similar misfits can be obtained by doing the reverse, that is, pairing a long-term model with a site that only has, say, Holocene exposure ages. This latter scenario can be dealt with by using the option to change the start time of the model, so you can look at, for example, only the last 20 kyr of an 800 kyr run.

The web interface can also serve model predictions for any latitude and longitude in Antarctica, even if no samples have been collected at that location. This feature could potentially be useful for planning future field projects.

Some interesting DMC results: (i) elevation transects of bedrock surfaces and glacial erratics

DMC simulates periodic deposition of erratics at a site whenever the ice sheet there is thinning. This is because most glacial erratics in Antarctica are ablation deposits, meaning that they were transported englacially from upstream erosion areas to ablation areas on the ice surface (where they were initially exposed to the cosmic-ray flux), and subsequently deposited on mountain flanks by ice thinning. Periodic deposition of erratics is simulated for all sites, regardless of whether erratics are actually known to be present.

Ice-sheet models that simulate many glacial-interglacial cycles can therefore produce erratics at similar elevations but at different times. The effect of this is shown in the upper-right panel below for the 5 Myr simulation of Pollard & DeConto (2009) at Mt. Turcotte in the Pirrit Hills (see ice-thickness time series above).

Different colors represent different nuclide-mineral pairs as follows: orange = C-14 (quartz); red = Be-10 (quartz); blue = Al-26 (quartz); neon = Ne-21 (quartz); gray = He-3 (pyroxene/olivine).

Concentrations of a given nuclide in erratics are scattered and have a relatively weak relationship with elevation. The exception is C-14 (shown in orange), which decays rapidly, with a 5730-year half-life, and therefore has no memory of the exposure and ice-cover history prior to around 30 kyr ago. In contrast, the bedrock surfaces (upper-left panel), which are predicted to have been intermittently ice covered, show concentrations that always increase monotonically with altitude. In part, this is because production rates increase with elevation, but high-elevation samples also must have spent a smaller proportion of their existence ice-covered than low-elevation samples from the same site.
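The 30 kyr figure is easy to check: with a 5730-year half-life, in-situ C-14 reaches about 97% of its production-decay equilibrium value after 30 kyr of continuous exposure, so essentially nothing of the earlier history survives. For example:

```python
import math

half_life = 5730.0                  # C-14 half-life, yr
lam = math.log(2.0) / half_life     # decay constant, 1/yr

# Fraction of the saturation (production/decay equilibrium) concentration
# reached after 30 kyr of steady exposure: 1 - exp(-lambda * t)
t = 30000.0
print(1.0 - math.exp(-lam * t))     # ~0.97, i.e. effectively saturated
```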

Note 1: Both bedrock and erratics are assumed not to erode and not to be covered by till or snow. For erratics, once they are deposited, they are assumed not to change elevation, roll over, etc.

Note 2: The four panels above have different x and y axis ranges. Sorry. Working on that.

The predictions for Mt. Turcotte bedrock and erratics (upper panels) are interesting because they are qualitatively pretty similar to what is actually observed there (lower panels), as well as at many other Antarctic sites. Observed Be-10, Al-26, and Ne-21 in bedrock samples monotonically increase with elevation. There are only minor exceptions to this, which are probably related to small amounts of erosion experienced by some samples or, in the case of Ne-21 (shown in neon), radiogenic production. In contrast to the bedrock samples, Be-10 measurements on glacial erratics (lower-right panel) are scattered and have no obvious concentration-elevation relationship. Thus, at least qualitatively, the model predictions for both bedrock and erratics are consistent with what we actually observe.

One take-home point from this comparison is that, in Antarctica, where past ice cover was commonly cold-based and non-erosive, weathered bedrock surfaces can, in some cases, be much better indicators of long-term exposure and ice cover than glacial erratics. Next time you’re in the field, collect an elevation transect of weathered bedrock surfaces.

Some interesting DMC results: (ii) LGM-to-present ice-sheet models consistently simulate premature deglaciation in the western Weddell Sea

Okay, now let’s look at an example in which we actually evaluate the performance of ice-sheet models.

Our knowledge of the post-LGM deglaciation history of the two largest embayments in Antarctica, the Ross and the Weddell, has remained very lopsided. Decades of geological (marine and terrestrial) and glaciological research in and around the Ross Embayment have generated a large network of constraints on the thickness and extent of grounded ice during the LGM and during the subsequent retreat. In contrast, relatively few field studies in the Weddell Embayment have produced commensurately few constraints there.

Recent papers by Keir Nichols, Jo Johnson, Brent Goehring, and others (published and in review here and here) are increasing our knowledge of the deglacial history of the Weddell Sea. These papers describe C-14 exposure ages from an elevation transect of 8 bedrock samples from the Lassiter Coast on the east side of the Peninsula. All samples have mid-Holocene ages and indicate abrupt thinning there at this time.

Let’s see how these results compare to predictions from state-of-the-art ice sheet models. In the figures below, orange circles show measured C-14 exposure ages plotted against sample elevation. Gray circles show model predictions. Note that the 8 bedrock samples on which C-14 concentrations were measured were collected from a few different but nearby peaks, which have been classified as different sites in ICE-D:ANTARCTICA. Thus, only 5 of the 8 samples are shown below, which does not affect the data-model comparison.

The first figure shows the predictions from the reference simulation of the model by Jonny Kingslake and others (2018).

The second figure shows predictions from the best-scoring run of the large ensemble by David Pollard and others (2016). Other recent models by David Pollard and others predict relatively similar transects.

The third figure shows predictions from the last 20 kyr of the model by Tigchelaar et al. (2018).

These three state-of-the-art ice-sheet models consistently simulate premature thinning on the Lassiter Coast by several millennia, which likely means that the grounding line in the western Weddell Sea retreats too early in the models. Note that the observation of premature thinning is not limited to this site but is also seen at the Heritage Range and at the Pirrit Hills, sites which are presumably sensitive to ice-sheet behavior in the western Weddell Sea.

The model that performs best at this site appears to be that of Pollard et al. (2016), which is perhaps not surprising because, unlike the models by Kingslake et al. (2018) and Tigchelaar et al. (2018), it was calibrated using a large network of observational constraints from various other sites in Antarctica.

Conclusions

The overall goal of the DMC:ANTARCTICA project is to systematically evaluate ice-sheet model simulations using the entire set of cosmogenic-nuclide measurements. This isn’t available yet, but it’s coming.

The DMC:ANTARCTICA web interface is publicly available for dynamically comparing observations and model predictions anywhere in Antarctica. Check it out.

The web interface helps address the issue of ice-sheet model opacity. Typically the complex behavior of an ice-sheet model is conveyed only in a few small figures in a paper. Now it’s possible for anyone to explore models and see what they predict anywhere in the continent.

The web interface still has some issues to be ironed out. Some of these are discussed here. Let me know if you find bugs or if there is a specific feature you’d like to see implemented.

Don’t see your data? The problem is probably either that (i) it’s not in ICE-D:ANTARCTICA yet, or (ii) certain information needed to calculate exposure ages (e.g. sample thickness) is missing from ICE-D. Contact me or Greg to fix that.

Have a model you would like included in this project? Get in touch.

Apparently, it takes 10 years to recruit a guest blogger. But don’t give up.

August 29, 2019

Despite the fact that the total audience of this blog is about ten cosmogenic-nuclide geochemists, one reasonable criticism of the project is that it does not do a great job of representing the diversity of opinion among these ten people — because I am the only author so far. Although, of course, many of the blog entries here make reference to work or publications by other people, so far it has been difficult to tell from the blog alone whether these are real, live, cosmogenic-nuclide geochemists, or purely characters introduced for rhetorical purposes, kind of like the obscure Greek philosophers who lob questions at Socrates, or the opponents of the Harlem Globetrotters — because none of them have ever posted blog entries themselves. A handful of readers (although a surprisingly small handful) have left comments, but all my attempts so far to recruit guest bloggers to contribute their own postings have resulted in the subject of the request immediately remembering another really important commitment and abruptly ending the conversation. I can’t imagine why.

Thus, it is with great pleasure that I can report that Perry Spector, a BGC postdoctoral researcher and specialist in cosmogenic-nuclide applications to Antarctic ice sheet change, has agreed to author at least one post — and made it sound like he meant it. Expect to see it soon. This represents major progress, and an opportunity to point out that, at least for the other nine readers, this blog is potentially a really useful forum for comments and opinions on obscure technical bits of cosmogenic-nuclide geochemistry that are not fully-baked enough, or not important or useful enough, to be worth the effort of an actual publication. I find writing blog postings to be largely painless, sometimes entertaining, a time-saver when the subject of the post is something I would otherwise have to answer a lot of emails about, and often helpful in organizing ideas. So, if you have something to say that you want to say here, let me know.

Where to publish cosmogenic-nuclide geochemistry

April 11, 2019

The overall goal of this posting is to highlight a new open-access/open-review journal, just introduced by EGU and Copernicus Publications, entitled “Geochronology.” But first, some context.

For context, consider two things.

One. The dispute between the University of California system and scientific publisher Elsevier. Basically, UC has canceled its subscriptions to all Elsevier journals after failing to come to a blanket agreement for open access publication of UC-hosted research. I care about this because I utilize the UC library journal subscriptions.

Two. The list of peer-reviewed scientific journals where technical articles about cosmogenic-nuclide geochemistry are commonly published. Quaternary Geochronology. Earth and Planetary Science Letters. Geochimica et Cosmochimica Acta. Nuclear Instruments and Methods in Physics Research. Chemical Geology. Possibly Quaternary Science Reviews. For short-lived nuclides in tracer studies, perhaps the Journal of Environmental Radioactivity.

These journals have two things in common. They publish technical research on geochemistry and geochronology that is relevant to your work, so you read them all the time. And they are published by Elsevier. Although there are many other journals where one might sensibly publish any research involving applications of cosmogenic-nuclide geochemistry to broader Earth science projects, consolidation and acquisitions by Elsevier over the past decade or so have produced a near-monopoly on journals suitable for technical articles in this field. About half the articles that I am an author of are in Elsevier journals. The only other journals I can think of where I might publish technical articles rather than applications are, maybe, Geophysical Research Letters or JGR-Earth Surface.

Now put these two things together. In future, I will not be able to legally read any articles published in Elsevier journals, which include nearly every possible outlet for basic research in cosmogenic-nuclide geochemistry. Most sensible people would conclude that in this situation, I should also stop publishing in and reviewing for these journals. While it is true that cosmogenic-nuclide geochemistry was probably not a major contributor to Elsevier’s $1.2B in profits from scientific publishing in 2017, and my decision to boycott is unlikely to significantly threaten said profits, it seems pretty dumb to continue providing free services to journals that I can’t even read.

So, OK, I can’t publish the majority of my research output, which concerns technical research in cosmogenic-nuclide geochemistry, in any of the journals that are most suitable for publishing this research. While the idea of transferring all my future research output to this blog and other non-peer-reviewed websites  is certainly appealing, it may not be a wise career move, or fair to funding agencies who expect to eventually see some results, to disengage completely from normal peer-reviewed journals.

That concludes the context. Although I thoroughly support the UC position in the Elsevier dispute because I think the Elsevier/Wiley/MacMillan business model, in which my free labor supports an enormously profitable business, is ridiculous, I am left at the moment with a lack of publication outlets. Thus, it is an extraordinary coincidence of timing that on nearly exactly the same day that UC/Elsevier negotiations failed, the European Geosciences Union (EGU) and Copernicus Publications introduced a new non-profit, open-access, open-review journal called Geochronology, which aims to focus on basic research in geochronology and associated areas of geochemistry and analytical methods.

From my perspective, this is the only such journal that is not published by Elsevier, so it looks like I have to publish in it whether I like it or not — although there is a slight complication in that I am one of the editors, so it would probably look bad if I were also the most prolific author. In the rest of this post I will explain why you — a reader of this blog who presumably carries out and publishes research in cosmogenic-nuclide geochemistry — should publish in it too.

First, what is it? In the past several years, EGU has worked with Copernicus Publications, which is a publisher and conference organizer organized as a non-profit corporation under German law, to develop a series of online journals that (i) provide open access without subscription fees, (ii) impose on authors or funding agencies only the minimal costs needed to support publication and archiving, and (iii) use an open review process. More about the open review process later. Several of these journals have become extremely successful and widely read; for example, The Cryosphere, which is rapidly becoming the de facto international journal of record for research in glaciology and cold regions climate and surface processes.

Second, what is the open review process? Basically, papers submitted to these journals, if the editors deem that they meet basic standards, are immediately posted as discussion drafts. In effect this is the same as posting to a preprint server such as arXiv. Then two things can happen. The editors solicit reviews of the draft paper; these reviewers get to choose whether or not to remain anonymous. And in addition, anyone can submit an unsolicited review under their name; unsolicited reviews may not be anonymous. Finally, the authors submit a revised paper and a response to the reviews; if acceptable, the editors post this as the final version. The important difference between this and the normal review process at most journals is that all parts of this — the reviews, the responses, and successive versions of the paper — are accessible online with the paper and become part of the permanent record.

I think this last element — the fact that all the review correspondence is an accessible part of the permanent record — is an enormous improvement on standard practice. From my perspective as a reader, I learn a lot more from reading the paper with the reviews than I would from the paper alone. It is a throwback — a good throwback — to a long-past bit of the history of science in which many papers were published as conference proceedings, and a stenographer was on hand at the conference to record and later publish the discussion that took place after the presentation. Speaking as a reviewer, I put a lot of work into reviews, sometimes including extensive calculations and model exercises, and if this work disappears into anonymity and is never seen again, it is hard to see why it wasn’t a waste of my time. And then as an author, I am pretty sure that if reviewers know that their comments will be publicly accessible and permanently on the record, I am going to get a more careful, fair, and well-thought-out review. I certainly feel that pressure when reviewing in these journals. The point is, these are all good things. They make it so I, or you, learn more from published research. They are probably more work for everyone, but I think they also produce better results.

To summarize, Geochronology (or “GChron” if you don’t have time for the extra four syllables) solves my immediate problem created by the UC-Elsevier struggle for dominance. But even if you don’t have that problem, if you publish research in this field, it corrects several other problems and improves the situation in several ways. The reason I signed on as an editor long before the Elsevier incident is that it creates an outlet for basic research in geochronology and related areas that is independent of applications, which will hopefully make it possible to focus on doing the best possible job of determining the age of geologic deposits, without having to bury all the interesting technical research in the supplementary methods to make a paper suitable for a more general journal. Then, the open review system is better than the conventional system. And finally, open access publication is better than a subscription model. Publication fees in the EGU/Copernicus journals are cheap — 77 Euros per page — and many European universities and funding agencies, as well as a couple in the US, have blanket agreements. And for GChron, fees are waived for the first two years until there is an impact factor, so it’s free. That’s free, as in free. There is no downside.

The Geochronology website is here. Click on the link and submit the paper that you are writing right now.

YD vs. ACR: Who Would Win?

February 8, 2019

Any readers with kids — of the six people in the world who read this blog, statistics suggest that at least one or two will have children — may be familiar with the “Who Would Win” series of books by prolific children’s book author Jerry Pallotta, including such titles as “Who Would Win: Komodo Dragon vs. King Cobra,” “Who Would Win: Jaguar vs. Skunk,” and many, many, many others. This posting is about an equally epic matchup that does not appear in that series, but is a good way to highlight yet another database project and some ideas about what we should be doing with it.

The database project is the ‘ICE-D:ALPINE‘ database, which is basically a copy of the now-reasonably-well-established ‘ICE-D:ANTARCTICA‘ database of all known cosmogenic-nuclide data from Antarctica. The difference, as indicated by the name, is that the ICE-D:ALPINE database is intended to contain all known cosmogenic-nuclide exposure-age data from alpine glacier systems worldwide (except for Antarctica, of course). This is a much bigger project, because alpine glacial landforms in the populated mountain ranges of the world are a lot easier to get to than glacial deposits in Antarctica (a substantial fraction of them are directly accessible by train from the Zurich airport), so the density of geologists, and therefore of exposure-dated boulders, is correspondingly higher. If we count by measurements, the alpine database is about 4 times larger than the Antarctica database at the moment; if we count by published journal articles, alpine glaciers are winning by about a factor of 3. Because it is a bigger job, more people are working to make it happen: the initial data content was primarily based on the global compilation of exposure ages on boulders put together by Jakob Heyman, and this has now been augmented by an international group of contributors whose names appear on the database front page. Even so, it's less complete than the Antarctica database. We guess that it will have to grow by 25-50% to include all published data, and of course the rate of publication of new exposure-age data for alpine glacial deposits worldwide is a lot higher. So right now it's a pretty good, but not totally comprehensive, representation of the data that are out there. The present distribution of data looks like this, where each dot signifies a glacial landform, typically an alpine glacier moraine, that's been exposure-dated at least once. There are 2163 dots.

The matchup concerns the two biggest climate events — or at least the ones with the largest media presence — from the period of global deglaciation following the Last Glacial Maximum. The Younger Dryas (“YD”) event is a 1200-year cold period between 12,900 and 11,700 years ago that is extremely prominent in climate records from Greenland and the North Atlantic region. The Antarctic Cold Reversal (“ACR”) is sort of like the opposite of the YD: it is a 1700-year cold period between 14,700 and 13,000 years ago that is prominent in Antarctic and Southern Hemisphere climate records. The idea is that during the ACR, it was warm in Europe and cold in Antarctica; during the YD it was the opposite.

Most paleoclimatologists and glacial geologists would probably expect that alpine glaciers worldwide paid attention to these climate events. Glaciers expand during cold periods, and contract during warm ones. An extraordinary amount of work in the glacial-geological and paleoclimate literature, which will not be summarized or enumerated here because this is a blog entry and not a peer-reviewed journal article, has been devoted to associating particular glacier advances in particular places with the YD or the ACR. Nearly all this work consists of single studies of single glacier systems, or a handful of glacier systems in the same general area. But if we were to do a global synthesis of all dated alpine glacier deposits, a decent initial hypothesis would be that in the Northern Hemisphere and particularly around the North Atlantic, it was cold during the YD and warm during the ACR, so we should see a lot of YD moraines and not a lot of ACR moraines. In the Southern Hemisphere, it was the opposite, so we should see a lot of ACR moraines and not a lot of YD moraines.

So let's see if this is really true, using the entire data set of all dated moraines and not just a few individual hand-picked ones that happen to be part of our own Ph.D. dissertation. To do this, I'll use what is generally thought to be more or less the present state of the art for production rate scaling and calibration, specifically (i) the time-dependent 'LSDn' scaling method developed by Nat Lifton and others in 2014-16, as implemented in version 3 of the online exposure age calculators described by Balco et al. (2008) and subsequently updated, and (ii) the CRONUS-Earth "primary" production rate calibration data sets for Be-10, Al-26, and He-3. The majority of the exposure ages in the database are derived from Be-10 measurements, but there are significant numbers of Al-26 and He-3 data in there too, and for the purposes of this exercise I'll treat them all equally. Using these tools, the procedure is as follows. Calculate exposure ages for all the samples on all 2163 moraines in the database. For each landform, apply the outlier-detection and averaging scheme used in the online exposure age calculators (which basically discards the minimum number of outliers necessary to yield a single population) to generate a summary age and uncertainty. Then assume that the summary age and uncertainty define a Gaussian distribution and compute what fraction of that distribution lies within the YD (11,700-12,900 years BP) and the ACR (13,000-14,700 years BP). Basically, this is an estimate of the probability that a particular landform was emplaced during each of these climate events.
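To make the last step concrete, here is a minimal MATLAB sketch (not the actual database code) of how one might compute the probability that a moraine belongs to each event, given a Gaussian summary age. The summary ages, uncertainties, and variable names are made-up placeholders; in the real analysis they come from the exposure-age calculation and outlier-rejection step described above.

```matlab
% Hypothetical inputs: one row per moraine, summary ages in years BP
summary_age = [11850; 14200; 21500];   % summary moraine ages (yr BP)
summary_unc = [  450;   600;   900];   % 1-sigma uncertainties (yr)

% Climate event windows (yr BP)
YD  = [11700 12900];
ACR = [13000 14700];

% Probability mass of a Gaussian N(mu, sigma) between t1 and t2,
% written with erf so no toolboxes are needed
gauss_prob = @(mu, sigma, t1, t2) 0.5 .* ( erf((t2 - mu)./(sigma.*sqrt(2))) ...
                                         - erf((t1 - mu)./(sigma.*sqrt(2))) );

% Probability that each moraine was emplaced during each event
p_YD  = gauss_prob(summary_age, summary_unc, YD(1),  YD(2));
p_ACR = gauss_prob(summary_age, summary_unc, ACR(1), ACR(2));

disp(table(summary_age, summary_unc, p_YD, p_ACR))
```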

As an aside, a measure of how useful it is to have all these data in a modern online database is that this entire analysis takes slightly less than 5 minutes to run in MATLAB, even with a slow household network connection.

Now plot the results on a map.

Obviously, lots of moraines have ages that are nowhere near either the YD or the ACR; these have zero probability of belonging to either event and are denoted by black dots. But a decent number of moraines have a significant probability of belonging to one event or the other. Note that because the typical overall uncertainty in a moraine age is similar in magnitude to the length of these climate events, this algorithm will never yield a 100% probability; given this, anything greater than approximately 50% counts as a high probability of belonging to a millennial-scale climate event. So the colored circles on both maps denote moraines that have a better than even chance of belonging to each climate event. Out of 2163 moraines, this scheme identifies 44 likely YD moraines (the green circles) and 156 likely ACR moraines (the blue circles).
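Continuing the sketch above (again with invented placeholder data rather than the real database), the classification and mapping step is just a threshold on those probabilities, along these lines; lon and lat are hypothetical per-moraine coordinates.

```matlab
% Classify each moraine as 'likely YD' or 'likely ACR' if more than half
% of its age probability falls in the corresponding window, then count.
likely_YD  = p_YD  > 0.5;
likely_ACR = p_ACR > 0.5;
fprintf('Likely YD moraines:  %d of %d\n', sum(likely_YD),  numel(p_YD));
fprintf('Likely ACR moraines: %d of %d\n', sum(likely_ACR), numel(p_ACR));

% Crude map on plain lat/lon axes (placeholder coordinates; the real
% figures use a proper map projection)
lon = [-105.6; -70.3; 7.8];
lat = [  40.1; -41.0; 46.5];
figure; hold on
plot(lon, lat, 'k.')                             % all moraines
plot(lon(likely_YD),  lat(likely_YD),  'go')     % likely YD
plot(lon(likely_ACR), lat(likely_ACR), 'bo')     % likely ACR
xlabel('Longitude'); ylabel('Latitude')
legend('All moraines', 'Likely YD', 'Likely ACR')
```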

What is interesting about these results is that they are not quite what we expect. It does seem evident that YD moraines are rare, and ACR moraines common, across all latitudes in the Pacific Cordillera. I am not sure whether we expected this: the ACR is typically explained as something having to do mostly with Atlantic heat transport, and although ACR-age glacier advances in much of South America have been explained as a response to cooling in the Southern Ocean, there is significant disagreement about whether this should be true in the tropics, and it is definitely not obvious why it should also happen in the north Pacific. What is clearly not what we expected, though, is that YD moraines are not dominant, or even very common, anywhere. This is true even in Europe, where there are certainly lots of late-glacial moraines, but most of them seem to be ACR-age, not YD-age. It is important to remember that these data pertain only to alpine glaciers, not ice sheets, so the general lack of evidence for YD glacier advances may be consistent with the idea that the seasonality of YD climate change produced different responses from alpine glaciers and ice sheets. But the main point is that this analysis does not at all show that the YD and ACR each have their own geographic sphere of influence on glacier change. In fact, nothing like it. ACR wins and YD loses globally.

We also get this result if we just look at the derived ages of all 2163 moraines as a histogram.

The green and light blue boxes indicate the YD and ACR, respectively. Viewed in this way, it is actually evident that fewer moraines date to the YD than to the rest of the late glacial: there are a lot of moraines older than 13-ish ka and a lot of moraines younger than 12-ish ka, but not a whole lot in the middle. Weird.
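For what it's worth, the histogram itself is simple to reproduce. Here is a sketch, again using the placeholder age vector from above in place of the full set of 2163 summary ages; note that xline needs a reasonably recent MATLAB release.

```matlab
% Histogram of moraine summary ages with the YD and ACR windows marked
figure; hold on
histogram(summary_age./1000, 0:0.5:30)     % ages binned in ka
xline(11.7, 'g-'); xline(12.9, 'g-')       % YD window (11.7-12.9 ka)
xline(13.0, 'b-'); xline(14.7, 'b-')       % ACR window (13.0-14.7 ka)
xlabel('Moraine age (ka)'); ylabel('Number of moraines')
```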

So, what is going on here? One thing that is interesting about this analysis is that it has reached the level of complexity where it's hard to know exactly what's going on in detail. This is potentially bad precisely because it's hard to know what's going on in detail: for example, although I think that the outlier-removal scheme mostly behaves rationally, if there are some pathological age distributions in here that caused it to do the wrong thing, I'll never know it. On the other hand, it's good because it makes the analysis unbiased: even if I have some preconception of what the results ought to be, the analysis is so complicated that it would be really hard to even figure out how to consciously or unconsciously fudge the algorithm or the data to get the desired result. But, still, there are some reasons why these results could be systematically wrong. A couple are fairly obvious. One, summary moraine ages could skew young because of postdepositional disturbance (because of how the outlier-rejection scheme works in this example, they are less likely to be skewed old due to inheritance). But this would be mitigated by the fact that production rate calibration data are largely from the same types of landforms: disturbance at the calibration sites would bias the calibrated production rates low, which pushes computed ages older and partially neutralizes the effect. And, in any case, if this were happening it would push some moraines out of the YD toward younger ages and also push other moraines into it from the old side, so it would not be expected to produce a systematic lack of YD moraines, except by accident. Two, production rate estimates could be inaccurate. If we globally lowered production rates by 5%, all the exposure ages would get about 5% older, which would push the big group of ages near 11-11.5 ka right into the end of the YD near 12 ka. But we would have to ignore a lot of presumably correct production rate calibration data to do that, so that is not very compelling either.
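A quick back-of-the-envelope check on that last point: to first order an exposure age scales inversely with the production rate, so a hypothetical global 5% decrease in the production rate makes all the ages roughly 5% older.

```matlab
% Approximate effect of a 5% decrease in production rate on an exposure
% age near the post-YD cluster (age scales roughly as 1/production rate)
age_ka      = 11.4;                    % representative age (ka)
rate_factor = 0.95;                    % hypothetical 5% lower production rate
shifted_age = age_ka ./ rate_factor;   % ~12.0 ka, i.e. the end of the YD
fprintf('%.1f ka becomes %.1f ka\n', age_ka, shifted_age)
```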

Many people would argue that this entire analysis is oversimplified, misleading, and probably wrong exactly because the data set is not highly curated and subject to detailed examination and quality control at the level of each individual landform. Many specialists in exposure-dating of glacial deposits, if you get them talking privately, will reveal that data collected by other research groups are sometimes not good. Scattered. Hopelessly contaminated by postdepositional disturbance. Not to be trusted. Garbage. Certainly not to be used in an analysis of this sort for fear of leading the results astray. Of course, this data set is so large that I have no idea who even collected most of the data, so even if I held some of these opinions myself it would be nearly impossible to act on them to influence the results. But we can make a bit of an effort to de-garbage the data set by applying some objective criteria. For example, we can accept only those exposure-age distributions for which we can't reject the hypothesis that the ages belong to a single population at the 95% confidence level (a minimal code sketch of this test appears a bit further below), which basically has the effect of further screening out landforms whose ages are more scattered than can be dealt with by discarding a couple of outliers. The effect on the global distribution of moraine ages:

is to make the scarcity of moraines of YD age, relative to older and younger time periods, even more striking. Applying this filter geographically produces this:

Again, being more strict about discarding scattered age distributions results in rejecting a whole lot of moraines that might have appeared YD-age just because of excess scatter, and makes ACR-age moraines (and “Preboreal,” i.e. post-YD, moraines) even more dominant. We are down to only a handful of YD moraines in the Alps and only one in the western US. What do we conclude? We conclude that YD-age alpine glacier moraines are actually kind of rare, which is at least not exactly what I expected, and possibly not what you did either. Perhaps the YD was cold, but also dry, so alpine glaciers shrank everywhere. Perhaps this is a manifestation of the seasonal distribution of YD cooling, something like Meredith Kelly suggested in this paper. Or perhaps Gerard Roe is right and glacier change is just an autoregressive process driven by stochastic variability in a steady climate.
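As an aside, here is roughly what that "single population at 95% confidence" screen looks like. This is a minimal sketch with invented boulder ages, not the actual calculator code, and in the real analysis the test is applied after the outlier-rejection step.

```matlab
% Hypothetical boulder ages (yr) and 1-sigma uncertainties for one moraine
ages = [12750 13100 12900 14600];
sigs = [  400   450   420   500];

w     = 1 ./ sigs.^2;                      % inverse-variance weights
wmean = sum(w .* ages) ./ sum(w);          % error-weighted mean age
chi2  = sum(((ages - wmean) ./ sigs).^2);  % chi-squared about the mean
dof   = numel(ages) - 1;                   % degrees of freedom

% p-value from the chi-squared CDF, written with gammainc so no
% Statistics Toolbox is needed
p = 1 - gammainc(chi2/2, dof/2);

if p < 0.05
    disp('Scatter exceeds measurement uncertainty at 95% confidence: drop this landform')
else
    disp('Cannot reject a single population: keep this landform')
end
```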

But in any case, ACR wins! Game over.

So, to summarize, that was kind of fun, but what is the point? It could be argued that this analysis is oversimplified, misleading, based on crappy data, and largely BS. Maybe that's why this is an unreviewed blog posting instead of a Nature article. But what is really important here is that this exercise highlights that we can move away from making large-scale inferences about glacier response to climate change from small sets of highly curated data, and move towards making them from much larger and more inclusive data sets, even if they are noisy, and, hopefully, with globally applied and less biased algorithms. We have, or almost have, a modern database infrastructure with dynamic, internally consistent, and mostly transparent exposure-age recalculation that we can use to do this. We should do it. And if, in fact, my shot at this really is oversimplified, misleading, and largely BS, then go do it better. All the data are here.