
Isostatic rebound corrections are still on a squishy footing

September 18, 2019

This blog post covers the subject of whether or not you should account for isostatic elevation changes in production rate calculations, comes to the conclusion that we don’t really know the answer, highlights one cautionary tale that is probably relevant, and suggests some ways to fix the situation.

This subject is important because production rates vary with atmospheric pressure. Higher elevation, less atmosphere, higher production rate. In exposure-dating, because it’s not possible to measure the mean atmospheric pressure over the entire exposure duration of a sample, what we actually do in nearly all cases is (i) measure the elevation, (ii) convert to atmospheric pressure using a relationship based on the modern atmosphere, and (iii) assume that the resulting pressure applies for the entire exposure duration. Unfortunately, for sites that are more than a few thousand years old, and especially for sites that are, or were, marginal to large ice sheets, this usual procedure has to be at least partly wrong on two counts. First, as ice sheets, sea level, and global temperature have changed in the past, the mass of the atmosphere has moved around: the atmosphere has to get out of the way of ice sheets, fill up space created by eustatic sea-level fall, and shrink and expand as global temperature cools and warms. The pressure-elevation relationship in many places was probably quite different in the past. Second, because of glacioisostatic depression and rebound associated with the advance and retreat of large ice sheets, many potential exposure-dating sites have not only had the atmosphere move around on them, but have themselves moved up and down in the atmosphere. These observations would tend to indicate that maybe we shouldn’t just compute the production rate at the present elevation; we should account for the fact that all these things are changing.
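To make the standard procedure concrete, here is a rough sketch of steps (i) and (ii) in Python. The ICAO standard-atmosphere formula and the single 150 g/cm2 attenuation length are illustrative simplifications only; real scaling schemes (Lal/Stone, LSDn) are latitude- and time-dependent and are not reproduced here.

```python
import math

def air_pressure_hPa(elev_m, p0=1013.25, T0=288.15, L=0.0065):
    """ICAO standard-atmosphere pressure (hPa) at elevation elev_m (m)."""
    return p0 * (1.0 - L * elev_m / T0) ** 5.25588  # exponent = g*M/(R*L)

def scaling_factor(elev_m, attlen_gcm2=150.0):
    """Pressure-only production-rate scaling relative to sea level, using
    a single exponential with attenuation length attlen_gcm2 (g/cm^2).
    This is only the atmospheric-depth part of a real scaling scheme."""
    to_gcm2 = 100.0 / 9.80665 / 10.0   # hPa -> g/cm^2 of atmospheric mass depth
    z_site = air_pressure_hPa(elev_m) * to_gcm2
    z_sl = air_pressure_hPa(0.0) * to_gcm2
    return math.exp((z_sl - z_site) / attlen_gcm2)

print(air_pressure_hPa(1000.0))   # ~899 hPa
print(scaling_factor(1000.0))     # production roughly doubles by 1000 m
```

Step (iii), assuming this single modern value holds for the whole exposure duration, is exactly the assumption the rest of this post is about.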

Several folks have tried to do this. Or, at least, part of it. Specifically, a number of exposure-dating papers have extracted elevation time series for their sample sites from glacioisostasy models (such as the ICE-XG series of global models generated by Richard Peltier and colleagues), computed production rates corresponding to those elevations, and then, basically, used a time-averaged production rate instead of an instantaneous modern production rate to compute the exposure ages. In effect this approach assumes that the atmosphere is stationary and the sites are moving up and down in it, and in all cases where I can remember it being applied, it results in lower calculated production rates (because the sites were ice-marginal, so at lower elevation in the past than they are now) and therefore older calculated exposure ages.
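In sketch form, the time-averaging looks like the following. The scaling function and the elevation history here are invented placeholders, not output from any real glacioisostasy model or scaling scheme:

```python
import math

def scaling(elev_m):
    # crude stand-in for a real scaling scheme (~exponential, 7.6 km e-folding)
    return math.exp(elev_m / 7600.0)

def time_averaged_scaling(t_ka, elev_m):
    """Trapezoidal time average of the scaling factor over an elevation
    history; t_ka and elev_m are equal-length sequences, oldest first."""
    num, dur = 0.0, 0.0
    for i in range(len(t_ka) - 1):
        dt = abs(t_ka[i] - t_ka[i + 1])
        num += 0.5 * (scaling(elev_m[i]) + scaling(elev_m[i + 1])) * dt
        dur += dt
    return num / dur

# a site now at 300 m that was isostatically depressed to 100 m at 15 ka:
t = [15, 10, 5, 0]         # ka before present
z = [100, 220, 280, 300]   # m above sea level
print(time_averaged_scaling(t, z))  # lower than the modern scaling(300)
```

Dividing the measured nuclide concentration by this time-averaged value instead of the modern one is what produces the lower production rates and older ages described above.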

OK, this seems like a good idea. And it is possible to do this for nearly any exposure-dating study that deals with LGM-to-present events, because there exist global models for isostatic depression and relative sea-level change for the last glacial cycle. Of course, it’s not easily possible for pre-LGM exposure-dating applications.

However, there are several problems with this otherwise good idea.

One, as noted above, it’s not just the sites that are moving up and down in the atmosphere, the atmosphere is moving around as well. This aspect of the problem is the subject of a very good paper from quite a long time ago by Jane Willenbring (Staiger). This paper looked mainly at the atmosphere effects and pointed out, among other things, that they may act to offset isostatic depression near ice sheet margins, resulting in less variation in production rates than one would expect from each process individually.

Two, the production rate calibration sites are moving up and down too. Suppose that your production rate calibration sites experienced an increase in the surface production rate over time, as expected for glacioisostatic rebound in the absence of any offsetting atmospheric effects. If you don’t account for this, that is, if you just use the modern elevations of the calibration sites, you will underestimate the production rate. If you then use this calibrated production rate, but apply an isostatic rebound correction when computing ages for unknown-age sites, your production rate estimate at your site, which is already too low globally because isostasy was left out of the calibration, becomes even lower locally. The resulting exposure ages are therefore too old, twice over. To do this correctly, you must include the isostatic rebound correction in both the production rate calibration phase and the exposure-age calculation phase.
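A toy calculation (all numbers invented) makes the compounding explicit. Here the calibration site and the unknown-age site are assumed to share the same rebound history:

```python
# Invented numbers illustrating the half-corrected calculation:
P_ref_true = 4.0           # atoms/g/yr, the "true" reference production rate
S_now, S_avg = 1.00, 0.95  # modern vs time-averaged scaling at a rebounding
                           # site (same history assumed for the calibration
                           # site and the unknown-age site)
t_true = 12000.0           # yr, true age of both sites

N = P_ref_true * S_avg * t_true    # what you would actually measure

# Calibration ignoring rebound: uses S_now, so P_ref comes out low
P_ref_cal = N / (S_now * t_true)           # = 3.8, too low

# Age calculation that *does* correct for rebound, using that low P_ref:
t_est = N / (P_ref_cal * S_avg)            # too old
print(t_est / t_true)                      # = S_now / S_avg, about 1.05
```

If the correction is skipped in both places instead, the S_avg factors cancel and the age comes out exactly right, at least when the rebound histories match.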

There exist several papers that appear, on a reasonably close reading, to have made this mistake and doubly compounded their errors, although in many cases it is hard to tell because the details have disappeared into a methods supplement. Because this is a blog entry and not an actual journal article, I don’t have to name them all in detail. But the point is that ages calculated in this way are unambiguously wrong. If you want to use an isostatic rebound correction in your exposure-age calculations, you must also use it in your production rate calibration. Or you can do neither: if you have ice-marginal calibration sites and ice-marginal unknown-age sites with similar rebound histories, and you don’t make the correction at either one, you will get close to the right answer, because any errors you introduced by skipping the correction in part 1 will be canceled by the offsetting error in part 2. Thus, no matter what, it is unquestionably better to not do this correction at all than to do it halfway.

The third problem is that the existing calibration data set does not show any evidence that a correction based on elevation change alone is either necessary or desirable. This is kind of a complicated claim, and it’s important to clarify the difference between (i) testing a particular model, that is, asking “does a particular glacioisostasy model, ICE-5G or whatever, improve the performance of production rate scaling models when measured against calibration data” and (ii) asking whether the calibration data themselves provide any evidence that production rates are affected by glacioisostatic elevation change considered alone. I would argue that we have to do (ii) first. Basically, if we can’t establish (ii) from the calibration data, then it is very difficult to determine what the answer to (i) would mean, or if a “yes” answer was meaningful at all.

So, here I will consider item (ii). The basic principle here is that the existing set of calibration data includes both calibration sites that are marginal to large ice sheets, and those that are not. If glacioisostatic elevation change by itself is an important control on production rates, then ice-sheet-marginal calibration sites will yield lower production rates than far-field ones.

This seems like a pretty simple question, but it is not obvious that anyone has actually answered it. A recent paper by Richard Jones and others gets tantalizingly close to at least asking the question, but then disappointingly stops at the last minute. The bulk of this paper is a quite thorough global calculation of the effect on time-integrated production rates expected from glacioisostatic elevation change. In the course of discussing whether or not this should be routinely taken into account in exposure-dating studies, the authors point out that the four (or six, depending on how you group the Scottish data) calibration sites for beryllium-10 data in the widely used “CRONUS-Earth primary calibration data set” are all in areas that experienced very small glacioisostatic elevation changes, so if you use this calibration data set, including or not including a glacioisostatic elevation change correction in the production rate calibration phase doesn’t really make any difference. This is true and, of course, represents a loophole in the absolutely strict rule described above that if you want to use an isostatic correction in exposure-age calculations, you must also use it in production rate calibration. After this, however, the paper stops thinking about calibration data and goes on to other things. Stopping here is extremely disappointing in the context of the otherwise very clear and comprehensive paper, because there are not just four Be-10 production rate calibration sites. In fact, there are 38, 17 of which are from ice-sheet-marginal features. Although this is the subject of a whole different discussion, there is no particular reason that most of these sites are any more or less reliable than the “CRONUS primary” calibration data, which, IMHO, were somewhat arbitrarily selected.
Thus, it is possible to test production rate predictions based on glacioisostatic elevation change against a fairly large data set of calibration measurements from both ice-sheet-marginal and far-field sites, and as I read this paper I was very surprised that the authors did not do it. Or, at least, I could not find it in the paper. Why not?

So, let’s try it, at least to the extent feasible without making this blog posting any longer than its current 14,000 words. Specifically, we are asking the question: are production rates inferred from ice-sheet-proximal calibration sites lower than those inferred from other calibration sites? If glacioisostatic elevation change by itself is an important control on production rates, we expect the answer to be yes. However…

…the answer is no. The above histograms show Be-10 production rate scaling parameters for LSDn scaling inferred from all Be-10 calibration sites in the ICE-D:CALIBRATION database (top panel) as well as ice-sheet-marginal and far-field subsets (lower two panels). The blue histograms are for all samples; the red histograms are for averages from each site. Remember, the LSDn scaling algorithm directly predicts production rates with units of atoms/g/yr instead of nondimensional scaling factors, so the calibrated parameter is accordingly a nondimensional correction factor rather than a “reference production rate” with physical units. Regardless, the mean and standard deviation of correction factors inferred from ice-sheet-marginal and far-field calibration sites are, basically, the same. Almost exactly the same.
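For anyone who wants to repeat this comparison with the actual calibrated values from ICE-D:CALIBRATION, the arithmetic itself is trivial; here is a sketch with invented stand-in numbers (one LSDn correction factor per site):

```python
import math
import statistics

def compare_groups(marginal, far_field):
    """Group means and Welch's t statistic for two sets of calibrated
    correction factors (one value per calibration site)."""
    m1, m2 = statistics.mean(marginal), statistics.mean(far_field)
    v1, v2 = statistics.variance(marginal), statistics.variance(far_field)
    n1, n2 = len(marginal), len(far_field)
    t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
    return m1, m2, t

# invented example values near 1.0 (nondimensional LSDn correction factors):
marginal = [0.98, 1.02, 1.01, 0.99, 1.03, 0.97]
far_field = [1.00, 1.01, 0.99, 1.02, 0.98, 1.00]
print(compare_groups(marginal, far_field))
```

With the invented values above, the two group means are identical and Welch's t is essentially zero, which is qualitatively the situation the real calibration data present.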

This result, therefore, fails to provide any evidence that production rates are significantly affected by glacioisostatic elevation change alone. Instead, it tends to suggest that elevation-change effects at ice-marginal sites could very well be substantially offset by atmospheric effects. This, in turn, implies that one should probably not include isostatic effects in a production rate scaling model without also including atmospheric mass redistribution effects.

There is no point in giving up too soon, though, so let’s try to come up with some other first-principles predictions that we can test with the calibration data. The simplest prediction was just that production rates from ice-marginal calibration sites should be lower. A slightly more complex prediction is that we expect that this effect should be more pronounced for younger calibration sites. Suppose an ice sheet margin retreats between the LGM and the mid-Holocene, and we have ice-marginal calibration sites that deglaciated at a range of times during this retreat. In general terms, the sites that are closer to the center of the ice sheet deglaciated more recently and should be more affected by isostatic depression: not only is the magnitude of the depression typically larger at sites near the center of the ice sheet, but the period of time during which the sites are isostatically depressed makes up a larger fraction of their total exposure history. Therefore, we expect a correlation between the age of ice-marginal calibration sites and the inferred production rates: all sites should give lower production rates than non-ice-marginal sites, but the younger ice-marginal sites should yield lower production rates than the older ones. Is this true?

The answer: well, sort of. The plot above shows values of the LSDn calibration parameter inferred from far-field sites (blue) and ice-sheet-marginal sites (red). The solid and dotted black lines give the mean and standard deviation of the entire data set. Production rates inferred from non-ice-proximal calibration data are uncorrelated with site age, as they should be, but those inferred from the ice-marginal sites are strongly correlated with site age. This correlation would be a point in favor of an elevation-change effect being important…except for one thing. As expected from the previous observation that the two groups yield the same mean production rate, half the ice-marginal sites give production rate estimates that are HIGHER, not lower, than the mean of the non-ice-marginal sites. If glacioisostatic elevation change is the primary control on production rate, we do not expect this. Forebulge effects are not expected for sites that are well within the LGM boundaries of the ice sheets, so (at least to me) there appears no obvious way that elevation change alone can account for both (i) the similarity in the distribution of production rate estimates between production rates inferred from ice-marginal and far-field sites, and (ii) the strong age-production rate correlation for ice-marginal sites. Overall, this result is sort of ambiguous but, again, appears to indicate that correcting for glacioisostatic elevation changes by themselves is oversimplified.

The summary up to this point is that arguing that glacioisostatic elevation change should be an element in production rate scaling models is probably reasonable. However, both the calculations in Jane’s paper and an analysis of available calibration data indicate that it is probably oversimplified, and probably not a good idea, to correct only for the elevation changes without also considering simultaneous atmospheric mass redistribution.

On to the cautionary tale. This situation is very similar to a somewhat obscure, but analogous and pertinent, aspect of cosmogenic-nuclide applications to erosion-rate measurements: specifically, the question of whether or not to apply topographic shielding corrections to production rate estimates in basin-scale erosion-rate calculations.

In the isostasy example, we know that there are two time-dependent effects on production rates in ice-sheet-marginal areas, and we think they probably offset each other, at least to some extent. One of them (isostatic elevation change) is easy to calculate, because there exist a number of global, time-dependent, LGM-to-present glacioisostasy models that can be easily used to produce a time-dependent elevation change estimate for any site on Earth. The second one (atmospheric mass redistribution) is difficult to calculate, because there are no equivalent, easily accessible, LGM-to-present models for atmospheric pressure distribution.

In the topographic shielding example, we know that the erosion rate we infer from a nuclide concentration in sediment leaving a basin is proportional both to the mean production rate in the basin and to the effective attenuation length for subsurface production. In a steep basin where many parts of the basin are shielded by surrounding topography, shielding reduces the mean surface production rate, but it also lengthens the effective attenuation length, because shielding preferentially removes the obliquely incident part of the cosmic-ray flux, and the remaining, more nearly vertical flux penetrates deeper. Because these two quantities enter the erosion-rate calculation as a product, corrections for the two effects offset each other. Again, one of these corrections is easy: using digital elevation models to compute the reduction in the mean surface production rate in a basin due to topographic shielding is a satisfying and fun application of raster GIS. And one of them is hard: computing the shielding effect on the subsurface attenuation length is computationally painful, time-consuming, and really not fun at all. The result is that for quite a long time, many researchers just did the easy correction and ignored the hard correction, until a thoroughly helpful paper by Roman DiBiase did the hard part of the correction, and showed that doing only the easy correction results in an answer that is more wrong than it would have been if you had ignored both corrections. Oops.
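In toy form (all numbers invented), with the inferred steady-state erosion rate proportional to the product of mean production rate and effective attenuation length, and with shielding lowering the production rate but lengthening the attenuation length per the DiBiase result:

```python
# Steady-state erosion rate inferred from basin-mean quantities:
#   E = P * L / (rho * N)   (P: mean production rate, L: attenuation length)
rho = 2.65            # g/cm^3
N = 1.0e5             # atoms/g measured in sediment (invented)
P_flat, L_flat = 10.0, 160.0    # unshielded values (invented)
s_P, s_L = 0.90, 1.10           # shielding: P down ~10%, L up ~10%

E_none = P_flat * L_flat / (rho * N)                  # skip both corrections
E_easy = (s_P * P_flat) * L_flat / (rho * N)          # easy correction only
E_both = (s_P * P_flat) * (s_L * L_flat) / (rho * N)  # both corrections

print(E_easy / E_none)   # 0.90 -- easy-only answer is 10% low
print(E_both / E_none)   # 0.99 -- the two corrections nearly cancel
```

Doing only the easy correction biases the answer by the full production-rate factor; doing both nearly cancels, so the easy-only answer is farther from the truth than the no-correction answer.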

The analogy is pretty clear. Both cases feature two offsetting corrections, one easy and one hard, and a seductive heuristic trap created by the availability of code to do the easy correction but not the hard one. It seems like it might be a good idea to learn from history, and try not to shove our left foot in a mouth still filled with the taste of our right shoe.

So, to conclude, what should we do or not do about this? Well, for one thing, we should probably not apply a time-dependent isostatic rebound model to production rate calculations without an atmosphere model as well. The evidence seems to indicate that this is probably oversimplified and may not improve the accuracy of exposure ages. However, it is not obvious that there is a good candidate atmosphere model at the moment; this may require some work. And then, if we use both models, it is necessary to apply them to both calibration data and unknown-age sites, which requires hard-coding them into everyone’s exposure-age calculation code. We should complete the Jones paper by examining, more carefully than I have done above, whether scatter in fitting calibration data to production rate scaling models can be explained by glacioisostatic or atmospheric-mass-redistribution effects. And, finally, we will probably need to collect more calibration data. This is because applying these corrections will very likely have the largest effect on inferred exposure ages for times and places where we do not have calibration data. It is probably not a good idea to extrapolate production rate corrections into places outside the range of calibration data where being wrong could incur large and hard-to-identify errors. Targeting places and times where we expect the largest corrections, and also places and times where elevation-change and mass-redistribution effects are likely to have offsetting or compounding effects, to collect new calibration data would add a lot of credibility to the argument that these effects are important.






August 29, 2019

By Perry Spector


This is the first guest post on this blog, and the point is to describe a data-model comparison project called DMC:ANTARCTICA that I have been working on for the past year. The overall objective of the project is to provide synoptic evaluation of Antarctic ice sheet models that simulate the evolution of the ice sheet over long time periods using the entire dataset of cosmogenic-nuclide measurements from Antarctica. This project is still under development, but there now exists a publicly available DMC:ANTARCTICA web interface that serves data-model comparisons for every site in Antarctica where data exist and for 15 different ice-sheet model simulations.


We rely on numerical ice-sheet models for establishing the climate sensitivity of the Antarctic ice sheet and for forecasting its future behavior and contribution to sea level. Although these models provide glaciologically feasible reconstructions of ice-sheet evolution, they are of limited use unless they can be shown to agree with observations of past ice-sheet change.

Considerable progress actually has been made on this front. Recent simulations by David Pollard, Pippa Whitehouse, Rob Briggs, and others that are constrained by geologic observations (e.g. relative sea-level curves from Antarctica, exposure ages, etc.) do, in fact, agree with those observations remarkably well in many cases.

Existing data-constrained models, however, are primarily restricted to the period between the LGM and the present. Longer simulations, spanning important time periods when the climate was as warm or warmer than present, have not been thoroughly tested. One limitation to doing this is that many of the constraint datasets, such as RSL curves from Antarctica, do not extend beyond the LGM. On Pleistocene-Pliocene timescales, cosmogenic-nuclide measurements from Antarctica are some of the only data available to test hypotheses of past ice-sheet behavior.

The existing set of Antarctic cosmogenic-nuclide measurements

All known published (and many unpublished) cosmogenic nuclide data from Antarctica have been compiled, mostly by Greg Balco, in the ICE-D:ANTARCTICA database. The database currently comprises 4361 measurements on 2551 samples. For this to be a useful set of ice-sheet model constraints, it should span as much of the continent as possible and also be distributed in time. Let’s look at the breakdown. The figure below shows the distribution for samples of glacial erratics and for bedrock surfaces.

Minimum and maximum circle sizes represent 1 and 88 samples, respectively.

Both populations do a pretty good job highlighting where outcropping mountains exist in Antarctica. There are large areas, however, that are entirely unrepresented, such as the East Antarctic interior and the coast between northern Victoria Land and the Lambert-Amery Basin. Fortunately these are sectors where most ice-sheet models predict relatively small changes over glacial-interglacial cycles. For glacial periods, models predict large ice thickness changes in the Ross, Weddell, and Amundsen Sea sectors, areas where the coverage is relatively good.

Interestingly, around 80% of the samples are of glacial erratics while the remaining 20% are of bedrock surfaces. This distribution is probably, in part, due to the combination of two factors. First, outcropping bedrock surfaces in Antarctica that were ice-covered during the LGM commonly have exposure ages much older than the LGM because, in many cases, past ice cover was frozen to the bed and non-erosive. Second, the objective of many exposure-dating projects has specifically been to establish the ice sheet’s configuration during or following the LGM, and so these projects likely opted to sample glacial deposits rather than bedrock. However, as has been discussed by some authors (e.g. here) and as will be discussed below, well-preserved bedrock surfaces can provide important information about exposure and ice cover over multiple glacial-interglacial cycles.

The next set of maps show the distribution by nuclide.

Most samples in the database only have a single analysis of a single nuclide, and, unsurprisingly, that nuclide is usually Be-10. Multiple nuclides (e.g. Be-10 and Al-26) have been measured on some samples, as shown below. These samples are particularly important because paired nuclide data can provide information not just about past exposure, but also about past ice-cover and erosion.

The next figure shows histograms of exposure ages (note logarithmic axis) and gives a sense of the temporal distribution of the database.

Although a large fraction of the samples only provide information about the last ice age and the subsequent deglaciation (i.e. the focus of many exposure dating studies), a substantial population provides constraints on timescales extending as far back as the Miocene.

Therefore, to summarize, the set of Antarctic cosmogenic-nuclide measurements is reasonably well distributed in space and time, which suggests that it should be of some use for evaluating numerical ice-sheet models that run over a range of time periods. It would be useful if the database contained more samples from slowly-eroding bedrock surfaces, but the coverage isn’t terrible at least.

A Data-model comparison framework

For most samples, there are an infinite number of different histories of exposure, ice-cover, and erosion that could produce the observed nuclide concentrations. Therefore, these data cannot be simply inverted to generate unique ice-thickness chronologies. However, they can be used to distinguish incorrect chronologies predicted from ice-sheet models from plausible ones. The basic idea is that, for a given rock in Antarctica, the history of exposure and ice-cover simulated by a model predicts certain concentrations of cosmogenic nuclides at the present-day that can be directly compared to concentrations that have actually been measured in the rock.
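The forward calculation at the heart of this is simple if you assume no erosion, no inheritance, and complete shielding under ice; here is a minimal sketch, not the actual DMC code:

```python
import math

def predict_conc(history, P, half_life_yr=None):
    """Present-day nuclide concentration (atoms/g) for an exposure/ice-cover
    history given as (duration_yr, exposed) tuples, oldest interval first.
    Assumes zero inheritance, no erosion, and full shielding under ice."""
    lam = math.log(2) / half_life_yr if half_life_yr else 0.0
    N = 0.0
    for dt, exposed in history:
        N *= math.exp(-lam * dt)  # decay the existing inventory through dt
        if exposed:
            # accumulation with simultaneous decay during an exposed interval
            N += (P / lam) * (1 - math.exp(-lam * dt)) if lam > 0 else P * dt
    return N

# e.g. 100 kyr exposed, 80 kyr ice-covered, then 12 kyr exposed:
hist = [(100e3, True), (80e3, False), (12e3, True)]
print(predict_conc(hist, P=5.0, half_life_yr=1.387e6))  # Be-10
```

Running the same history through several nuclides (and through many candidate ice-thickness histories) and comparing against measured concentrations is the data-model comparison in a nutshell.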

This isn’t a new idea. In a 2012 paper, Sujoy Mukhopadhyay and others reported that Be-10 and Ne-21 concentrations measured in bedrock samples from the Ohio Range were consistent with concentrations predicted by a 5 million year ice-sheet model run. Since then, the only other example that I am aware of is a paper I recently wrote with John Stone and Brent Goehring, which is under review in The Cryosphere, that evaluates several models using results from the Pirrit Hills and the Whitmore Mountains.

However, rather than doing one-off data-model comparisons for one or two sites, the objective of the DMC:ANTARCTICA project is to perform synoptic evaluation of model performance using all available cosmogenic-nuclide measurements from Antarctica. At the moment, that capability is not yet available, but it is coming. What is available is the DMC:ANTARCTICA web interface which serves comparisons between every site in Antarctica where data exist and a number of published ice-sheet model simulations. The rest of this post will give a basic description of how the web interface works, and highlight a couple interesting results.

For a given site in Antarctica and ice-sheet model of interest, the modeled ice-thickness history at the site is extracted via bi-linear interpolation. Here is an example from the model of Pollard & DeConto (2009) for the Pirrit Hills.
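For reference, bilinear interpolation within a single grid cell is just a weighted average of the four surrounding model nodes; a generic sketch, with invented ice-thickness values:

```python
def bilerp(f00, f10, f01, f11, x, y):
    """Bilinear interpolation within one grid cell; x, y in [0, 1] give the
    site's fractional position between the four corner values."""
    return (f00 * (1 - x) * (1 - y) + f10 * x * (1 - y)
            + f01 * (1 - x) * y + f11 * x * y)

# ice thickness (m) at four surrounding grid nodes, site at (0.25, 0.5):
print(bilerp(1200.0, 1400.0, 1000.0, 1300.0, 0.25, 0.5))
```

Repeating this at every model time step yields the kind of ice-thickness time series shown above.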

The next steps are to (i) generate synthetic samples of both bedrock surfaces and glacial erratics spanning a range of altitudes at the site, (ii) determine the history of exposure and ice cover experienced by each sample, (iii) use those histories to calculate the present-day concentrations of various nuclides, and then (iv) compare those predictions to concentrations that have actually been measured in samples from the site.

For each DMC request, the calculations are performed in real time on a Google Cloud virtual machine. Concentrations, as well as exposure ages, are calculated using the Matlab/Octave code behind the version 3 online exposure age calculator, which was described by Balco et al. (2008) and has since been updated. Additional details about data-model comparison calculations are available here.

All samples are assumed to have zero cosmogenic nuclides at the beginning of the model simulation. Thus, running the web interface with a site where there are 12-million-year Ne-21 exposure ages, paired with an LGM-to-present ice-sheet model, will obviously produce large data-model misfits that are unrelated to the actual performance of the model. Similar misfits can be obtained by doing the reverse, that is, pairing a long-term model with a site that only has, say, Holocene exposure ages. This latter scenario can be dealt with by employing the option to change the start time of the model, so you can look at, for example, only the last 20 kyr of an 800 kyr run.

The web interface can also serve model predictions for any latitude and longitude in Antarctica, even if no samples have been collected at that location. This feature could potentially be useful for planning future field projects.

Some interesting DMC results: (i) elevation transects of bedrock surfaces and glacial erratics

DMC simulates periodic deposition of erratics at a site whenever the ice sheet there is thinning. This is because most glacial erratics in Antarctica are ablation deposits, meaning that they were transported englacially from upstream erosion areas to ablation areas on the ice surface (where they were initially exposed to the cosmic-ray flux), and subsequently deposited on mountain flanks by ice thinning. Periodic deposition of erratics is simulated for all sites, regardless of whether erratics are actually known to be present.

Ice-sheet models that simulate many glacial-interglacial cycles can therefore produce erratics at similar elevations but at different times. The effect of this is shown in the upper-right panel below for the 5 Myr simulation of Pollard & DeConto (2009) at Mt. Turcotte in the Pirrit Hills (see ice-thickness time series above).

Different colors represent different nuclide-mineral pairs as follows: orange = C-14 (quartz); red = Be-10 (quartz); blue = Al-26 (quartz); neon = Ne-21 (quartz); gray = He-3 (pyroxene/olivine).

Concentrations for a given nuclide in erratics are scattered and have a relatively weak relationship with elevation. The exception is for C-14 (shown in orange), which decays rapidly with a 5730 year half life and therefore has no memory of the exposure and ice-cover history prior to around 30 kyr ago. In contrast, the bedrock surfaces (upper-left panel), which are predicted to have been intermittently ice covered, show concentrations that always increase monotonically with altitude. In part, this is due to the fact that production rates increase with elevation, but high elevation samples also must have spent a lesser proportion of their existence ice-covered than low elevation samples from the same site.
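The "no memory" behavior of C-14 is simple decay arithmetic: whatever inventory a surface accumulated more than about 30 kyr ago is essentially gone today.

```python
import math

lam = math.log(2) / 5730.0   # C-14 decay constant (1/yr)

# fraction of any pre-existing C-14 inventory surviving to the present:
surviving = {t: math.exp(-lam * t) for t in (10000, 20000, 30000)}
print(surviving)  # ~30% after 10 kyr, ~9% after 20 kyr, <3% after 30 kyr
```

This is why the C-14 predictions track only the most recent deglaciation, while the longer-lived nuclides integrate the whole simulated history.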

Note 1: Both bedrock and erratics are assumed not to experience erosion or to be covered by till or snow. For erratics, once they are deposited they are assumed to not change elevation, roll over, etc.

Note 2: The four panels above have different x and y axis ranges. Sorry. Working on that.

The predictions for Mt. Turcotte bedrock and erratics (upper panels) are interesting because they are qualitatively pretty similar to what is actually observed there (lower panels), as well as at many other Antarctic sites. Observed Be-10, Al-26, and Ne-21 in bedrock samples monotonically increase with elevation. There are only minor exceptions to this, which are probably related to small amounts of erosion experienced by some samples or, in the case of Ne-21 (shown in neon), radiogenic production. In contrast to the bedrock samples, Be-10 measurements on glacial erratics (lower-right panel) are scattered and have no obvious concentration-elevation relationship. Thus, at least qualitatively, the model predictions for both bedrock and erratics are consistent with what we actually observe.

One take-home point from this comparison is that, in Antarctica, where past ice cover was commonly cold-based and non-erosive, weathered bedrock surfaces can, in some cases, be much better indicators of long-term exposure and ice cover than glacial erratics. Next time you’re in the field, collect an elevation transect of weathered bedrock surfaces.

Some interesting DMC results: (ii) LGM-to-present ice-sheet models consistently simulate premature deglaciation in the western Weddell Sea

Okay, now let’s look at an example in which we actually evaluate the performance of ice-sheet models.

Our knowledge of the post-LGM deglaciation history of the two largest embayments in Antarctica, the Ross and the Weddell, has remained very lopsided. Decades of geological (marine and terrestrial) and glaciological research in and around the Ross Embayment have generated a large network of constraints on the thickness and extent of grounded ice during the LGM and during the subsequent retreat. In contrast, relatively few field studies in the Weddell Embayment have produced commensurately few constraints there.

Recent papers by Keir Nichols, Jo Johnson, Brent Goehring, and others (published and in review here and here) are increasing our knowledge of the deglacial history of the Weddell Sea. These papers describe C-14 exposure ages from an elevation transect of 8 bedrock samples from the Lassiter Coast on the east side of the Peninsula. All samples have mid-Holocene ages and indicate abrupt thinning there at this time.

Let’s see how these results compare to predictions from state-of-the-art ice sheet models. In the figures below, orange circles show measured C-14 exposure ages plotted against sample elevation. Gray circles show model predictions. Note that the 8 bedrock samples on which C-14 concentrations were measured were collected from a few different but nearby peaks, which have been classified as different sites in ICE-D:ANTARCTICA. Thus, only 5 of the 8 samples are shown below, which does not affect the data-model comparison.

The first figure shows the predictions from the reference simulation of the model by Jonny Kingslake and others (2018).

The second figure shows predictions from the best-scoring run of the large ensemble by David Pollard and others (2016). Other recent models by David Pollard and others predict relatively similar transects.

The third figure shows predictions from the last 20 kyr of the model by Tigchelaar et al. (2018).

These three state-of-the-art ice-sheet models consistently simulate thinning on the Lassiter Coast that is premature by several millennia, which likely means that the grounding line in the western Weddell Sea retreats too early in the models. Note that this mismatch is not limited to this site: premature thinning is also predicted at the Heritage Range and at the Pirrit Hills, sites that are presumably also sensitive to ice-sheet behavior in the western Weddell Sea.

The model that performs best at this site appears to be that of Pollard et al. (2016), which is perhaps not surprising because, unlike the models by Kingslake et al. (2018) and Tigchelaar et al. (2018), it was calibrated using a large network of observational constraints from various other sites in Antarctica.

The overall goal of the DMC:ANTARCTICA project is to systematically evaluate ice-sheet model simulations using the entire set of cosmogenic-nuclide measurements. This isn’t available yet, but it’s coming.

The DMC:ANTARCTICA web interface is publicly available for dynamically comparing observations and model predictions anywhere in Antarctica. Check it out.

The web interface helps address the issue of ice-sheet model opacity. Typically the complex behavior of an ice-sheet model is conveyed only in a few small figures in a paper. Now it’s possible for anyone to explore models and see what they predict anywhere in the continent.

The web interface still has some issues to be ironed out. Some of these are discussed here. Let me know if you find bugs or if there is a specific feature you’d like to see implemented.

Don’t see your data? The problem is probably either that (i) it’s not in ICE-D:ANTARCTICA yet, or (ii) certain information needed to calculate exposure ages (e.g. sample thickness) is missing from ICE-D. Contact me or Greg to fix that.

Have a model you would like included in this project? Get in touch.

Apparently, it takes 10 years to recruit a guest blogger. But don’t give up.

August 29, 2019

Despite the fact that the total audience of this blog is about ten cosmogenic-nuclide geochemists, one reasonable criticism of the project is that it does not do a great job of representing the diversity of opinion among these ten people — because I am the only author so far. Although, of course, many of the blog entries here make reference to work or publications by other people, so far it has been difficult to tell from the blog alone whether these are real, live, cosmogenic-nuclide geochemists, or purely characters introduced for rhetorical purposes, kind of like the obscure Greek philosophers who lob questions at Socrates, or the opponents of the Harlem Globetrotters — because none of them have ever posted blog entries themselves. A handful of readers (although a surprisingly small handful) have left comments, but all my attempts so far to recruit guest bloggers to contribute their own postings have resulted in the subject of the request immediately remembering another really important commitment and abruptly ending the conversation. I can’t imagine why.

Thus, it is with great pleasure that I can report that Perry Spector, a BGC postdoctoral researcher and specialist in cosmogenic-nuclide applications to Antarctic ice sheet change, has agreed to author at least one post — and made it sound like he meant it. Expect to see it soon. This represents major progress, and an opportunity to point out that, at least for the other nine readers, this blog is potentially a really useful forum for comments and opinions on obscure technical bits of cosmogenic-nuclide geochemistry that are not fully-baked enough, or not important or useful enough, to be worth the effort of an actual publication. I find writing blog postings to be largely painless, sometimes entertaining, a time-saver when the subject of the post is something I would otherwise have to answer a lot of emails about, and often helpful in organizing ideas. So, if you have something to say that you want to say here, let me know.

Where to publish cosmogenic-nuclide geochemistry

April 11, 2019

The overall goal of this posting is to highlight a new open-access/open-review journal, just introduced by EGU and Copernicus Publications, entitled “Geochronology.” But first, some context.

Consider two things.

One. The dispute between the University of California system and the scientific publisher Elsevier. Basically, UC has canceled its subscriptions to all Elsevier journals after failing to reach a blanket agreement for open-access publication of UC-hosted research. I care about this because I use the UC library’s journal subscriptions.

Two. The list of peer-reviewed scientific journals where technical articles about cosmogenic-nuclide geochemistry are commonly published. Quaternary Geochronology. Earth and Planetary Science Letters. Geochimica et Cosmochimica Acta. Nuclear Instruments and Methods in Physics Research. Chemical Geology. Possibly Quaternary Science Reviews. For short-lived nuclides in tracer studies, perhaps the Journal of Environmental Radioactivity.

These journals have two things in common. They publish technical research on geochemistry and geochronology that is relevant to your work, so you read them all the time. And they are published by Elsevier. Although there are many other journals where one might sensibly publish any research involving applications of cosmogenic-nuclide geochemistry to broader Earth science projects, consolidation and acquisitions by Elsevier over the past decade or so have produced a near-monopoly on journals suitable for technical articles in this field. About half the articles that I am an author of are in Elsevier journals. The only other journals I can think of where I might publish technical articles rather than applications are, maybe, Geophysical Research Letters or JGR-Earth Surface.

Now put these two things together. In the future, I will not be able to legally read articles published in Elsevier journals, which include nearly every possible outlet for basic research in cosmogenic-nuclide geochemistry. Most sensible people would conclude that, in this situation, I should also stop publishing in and reviewing for these journals. While it is true that cosmogenic-nuclide geochemistry was probably not a major contributor to Elsevier’s $1.2B in profits from scientific publishing in 2017, and my decision to boycott is unlikely to significantly threaten said profits, it seems pretty dumb to continue providing free services to journals that I can’t even read.

So, OK, I can’t publish the majority of my research output, which concerns technical research in cosmogenic-nuclide geochemistry, in any of the journals that are most suitable for publishing this research. While the idea of transferring all my future research output to this blog and other non-peer-reviewed websites  is certainly appealing, it may not be a wise career move, or fair to funding agencies who expect to eventually see some results, to disengage completely from normal peer-reviewed journals.

That concludes the context. Although I thoroughly support the UC position in the Elsevier dispute because I think the Elsevier/Wiley/MacMillan business model, in which my free labor supports an enormously profitable business, is ridiculous, I am left at the moment with a lack of publication outlets. Thus, it is an extraordinary coincidence of timing that, on nearly exactly the same day that the UC/Elsevier negotiations failed, the European Geosciences Union and Copernicus Publications introduced a new non-profit, open-access, open-review journal called Geochronology, which aims to focus on basic research in geochronology and associated areas of geochemistry and analytical methods.

From my perspective, this is the only such journal that is not published by Elsevier, so it looks like I have to publish in it whether I like it or not — although there is a slight complication in that I am one of the editors, so it would probably look bad if I were also the most prolific author. In the rest of this post I will explain why you — a reader of this blog who presumably carries out and publishes research in cosmogenic-nuclide geochemistry — should publish in it too.

First, what is it? In the past several years, EGU has worked with Copernicus Publications, a publisher and conference organizer organized as a non-profit corporation under German law, to develop a series of online journals that (i) provide open access without subscription fees, (ii) impose on authors or funding agencies only the minimal costs needed to support publication and archiving, and (iii) use an open review process. More about the open review process later. Several of these journals have become extremely successful and widely read; for example, The Cryosphere is rapidly becoming the de facto international journal of record for research in glaciology and cold-regions climate and surface processes.

Second, what is the open review process? Basically, papers submitted to these journals, if the editors deem that they meet basic standards, are immediately posted as discussion drafts. In effect this is the same as posting to a preprint server such as arXiv. Then two things can happen. The editors solicit reviews of the draft paper; these reviewers get to choose whether or not to remain anonymous. And in addition, anyone can submit an unsolicited review under their name; unsolicited reviews may not be anonymous. Finally, the authors submit a revised paper and a response to the reviews; if acceptable, the editors post this as the final version. The important difference between this and the normal review process at most journals is that all parts of this — the reviews, the responses, and successive versions of the paper — are accessible online with the paper and become part of the permanent record.

I think this last element — the fact that all the review correspondence is an accessible part of the permanent record — is an enormous improvement on standard practice. From my perspective as a reader, I learn a lot more from reading the paper with the reviews than I would from the paper alone. It is a throwback — a good throwback — to a long-past bit of the history of science in which many papers were published as conference proceedings, and a stenographer was on hand at the conference to record and later publish the discussion that took place after the presentation. Speaking as a reviewer, I put a lot of work into reviews, sometimes including extensive calculations and model exercises, and if this work disappears into anonymity and is never seen again, it is hard to see why it wasn’t a waste of my time. And then as an author, I am pretty sure that if reviewers know that their comments will be publicly accessible and permanently on the record, I am going to get a more careful, fair, and well-thought-out review. I certainly feel that pressure when reviewing in these journals. The point is, these are all good things. They make it so I, or you, learn more from published research. They are probably more work for everyone, but I think they also produce better results.

To summarize, Geochronology (or “GChron” if you don’t have time for the extra four syllables) solves my immediate problem created by the UC-Elsevier struggle for dominance. But even if you don’t have that problem, if you publish research in this field, it corrects several other problems and improves the situation in several ways. The reason I signed on as an editor long before the Elsevier incident is that it creates an outlet for basic research in geochronology and related areas that is independent of applications, which will hopefully make it possible to focus on doing the best possible job of determining the age of geologic deposits, without having to bury all the interesting technical research in the supplementary methods to make a paper suitable for a more general journal. Then, the open review system is better than the conventional system. And finally, open access publication is better than a subscription model. Publication fees in the EGU/Copernicus journals are cheap — 77 Euros per page — and many European universities and funding agencies, as well as a couple in the US, have blanket agreements. And for GChron, fees are waived for the first two years until there is an impact factor, so it’s free. That’s free, as in free. There is no downside.

The Geochronology website is here. Click on the link and submit the paper that you are writing right now.

YD vs. ACR: Who Would Win?

February 8, 2019

Any readers with kids — of the six people in the world who read this blog, statistics suggest that at least one or two will have children — may be familiar with the “Who Would Win” series of books by prolific children’s book author Jerry Pallotta, including such titles as “Who Would Win: Komodo Dragon vs. King Cobra,” “Who Would Win: Jaguar vs. Skunk,” and many, many, many others. This posting is about an equally epic matchup that does not appear in that series, but is a good way to highlight yet another database project and some ideas about what we should be doing with it.

The database project is the ‘ICE-D:ALPINE’ database, which is basically a copy of the now-reasonably-well-established ‘ICE-D:ANTARCTICA’ database of all known cosmogenic-nuclide data from Antarctica. The difference, as indicated by the name, is that the ICE-D:ALPINE database is intended to contain all known cosmogenic-nuclide exposure-age data from alpine glacier systems worldwide (except for Antarctica, of course). This is a much bigger project, because alpine glacial landforms in populated mountain ranges are a lot easier to get to than glacial deposits in Antarctica — a substantial fraction of them are directly accessible by train from the Zurich airport — so the density of geologists, and therefore of exposure-dated boulders, is correspondingly higher. If we count by measurements, the alpine database is about 4 times larger than the Antarctica database at the moment; if we count by published journal articles, alpine glaciers are winning by about a factor of 3. The alpine database is also a bigger team effort. The initial data content was primarily based on the global compilation of exposure ages on boulders put together by Jakob Heyman, and this has now been augmented by an international group of contributors whose names appear on the database front page. It is also less complete than the Antarctica database: we guess that it will have to grow by 25-50% to include all published data, and of course the rate of publication of new exposure-age data for alpine glacial deposits worldwide is a lot higher. So right now it’s a pretty good, but not totally comprehensive, representation of the data that are out there. The present distribution of data looks like this, where each dot signifies a glacial landform, typically an alpine glacier moraine, that’s been exposure-dated at least once. There are 2163 dots.

The matchup concerns the two biggest climate events — or at least the ones with the largest media presence — from the period of global deglaciation following the Last Glacial Maximum. The Younger Dryas (“YD”) event is a 1200-year cold period between 12,900 and 11,700 years ago that is extremely prominent in climate records from Greenland and the North Atlantic region. The Antarctic Cold Reversal (“ACR”) is sort of like the opposite of the YD: it is a 1700-year cold period between 14,700 and 13,000 years ago that is prominent in Antarctic and Southern Hemisphere climate records. The idea is that during the ACR, it was warm in Europe and cold in Antarctica; during the YD it was the opposite.

Most paleoclimatologists and glacial geologists would probably expect that alpine glaciers worldwide paid attention to these climate events. Glaciers expand during cold periods, and contract during warm ones. An extraordinary amount of work in the glacial-geological and paleoclimate literature, which will not be summarized or enumerated here because this is a blog entry and not a peer-reviewed journal article, has been devoted to associating particular glacier advances in particular places with the YD or the ACR. Nearly all this work consists of single studies of single glacier systems, or a handful of glacier systems in the same general area. But if we were to do a global synthesis of all dated alpine glacier deposits, a decent initial hypothesis would be that in the Northern Hemisphere and particularly around the North Atlantic, it was cold during the YD and warm during the ACR, so we should see a lot of YD moraines and not a lot of ACR moraines. In the Southern Hemisphere, it was the opposite, so we should see a lot of ACR moraines and not a lot of YD moraines.

So let’s see if this is really true, using the entire data set of all dated moraines and not just a few individual hand-picked ones that happen to be part of our own Ph.D. dissertation. To do this, I’ll use what is generally thought to be more or less the present state of the art for production rate scaling and calibration, specifically (i) the time-dependent ‘LSDn’ scaling method developed by Nat Lifton and others in 2014-16, as implemented in version 3 of the online exposure age calculators described by Balco et al. (2008) and subsequently updated, and (ii) the CRONUS-Earth “primary” production rate calibration data sets for Be-10, Al-26, and He-3. The majority of the exposure ages in the database are derived from Be-10 measurements, but there are significant numbers of Al-26 and He-3 data in there too, and for purposes of this exercise I’ll treat them all equally. Using these tools, calculate exposure ages for all the samples on all 2163 moraines in the database. For each landform, apply the outlier-detection and averaging scheme used in the online exposure age calculators — which basically attempts to discard the minimum number of outliers necessary to yield a single population — to generate a summary age and uncertainty for each moraine. Assume that these define a Gaussian uncertainty distribution and compute what fraction of that distribution lies within the YD (11,700-12,900 yr BP) and ACR (13,000-14,700 yr BP) windows. Basically, this is an estimate of the probability that a particular landform was emplaced during each of these climate events.
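The recipe above can be sketched in a few lines. This is a simplified Python stand-in (the actual analysis runs in MATLAB, and the full outlier-rejection scheme of the online calculators is not reproduced here; the function names and example ages are invented for illustration):

```python
# Sketch of the landform-summary and event-probability calculation
# described above. The real outlier-rejection scheme is not reproduced;
# this version just takes an error-weighted mean of the boulder ages.
import math

YD = (11700.0, 12900.0)   # yr BP
ACR = (13000.0, 14700.0)  # yr BP

def summarize_landform(ages, sigmas):
    """Error-weighted mean age and its uncertainty for one moraine."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def event_probability(mean, sigma, window):
    """Fraction of the Gaussian N(mean, sigma) lying within an age window."""
    lo, hi = window
    return normal_cdf(hi, mean, sigma) - normal_cdf(lo, mean, sigma)

# A hypothetical moraine with four boulder ages (yr BP):
m, s = summarize_landform([14100, 14400, 13900, 13700], [400, 450, 380, 420])
p_acr = event_probability(m, s, ACR)  # high: this one looks ACR-age
p_yd = event_probability(m, s, YD)    # essentially zero
```

The same two numbers, computed for each of the 2163 landforms, are what the colored dots on the maps below represent.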

As an aside, a measure of how useful it is to have all these data in a modern online database is that this entire analysis takes slightly less than 5 minutes to run in MATLAB, even with a slow household network connection.

Now plot the results on a map.

Obviously, lots of moraines have ages that are nowhere near either the YD or the ACR; these have zero probability of belonging to either event and are denoted by black dots. But a decent number of moraines have a significant probability of belonging to one event or the other. Note that because the typical overall uncertainty in a moraine age is similar in magnitude to the length of these climate events, this algorithm will never yield a 100% probability; given this, a high probability of belonging to a millennial-scale climate event will be anything greater than approximately 50%. So the colored circles on both maps denote moraines that have more than half a chance of belonging to each climate event. Out of 2163 moraines, this scheme identifies 44 likely YD moraines (the green circles) and 156 likely ACR moraines (the blue circles).

What is interesting about these results is that they are not quite what we expect. It does seem evident that YD moraines are rare, and ACR moraines are common, across all latitudes in the Pacific Cordillera. I am not sure whether we expect this: the ACR is typically explained as something having to do mostly with Atlantic heat transport, and although ACR-age glacier advances in much of South America have been explained as a response to cooling in the Southern Ocean, there is significant disagreement about whether this should be true in the tropics, and it is definitely not obvious why it should also happen in the north Pacific. What is not what we expected, though, is that YD moraines are not dominant, or even very common, anywhere, even in Europe, where there are certainly lots of late-glacial moraines but most of them seem to be ACR-age, not YD-age. It is important to remember that these data only pertain to alpine glaciers, not ice sheets, so the general lack of evidence for YD glacier advances may be in agreement with the idea that the seasonality of YD climate change resulted in different responses from alpine glaciers and ice sheets. But the main point is that this analysis does not at all show that the YD and ACR each have their own geographic sphere of influence on glacier change. In fact, nothing like it. ACR wins and YD loses globally.

We also get this result if we just look at the derived ages of all 2163 moraines as a histogram.

The green and light blue boxes indicate the YD and ACR, respectively. Viewed in this way, it is actually evident that there are fewer moraines that belong to the YD than there are for the rest of the late glacial: there are a lot of moraines older than 13-ish ka and a lot of moraines younger than 12-ish ka, but not a whole lot in the middle. Weird.

So, what is going on here? One thing that is interesting about this analysis is that it has reached the level of complexity where it’s hard to know exactly what’s going on in detail, which is potentially bad. For example, although I think that the outlier-removal scheme mostly behaves rationally, if there are some pathological age distributions in here that caused it to do the wrong thing, I’ll never know it. On the other hand, this complexity is good because it makes the analysis unbiased: even if I have some preconception of what the results ought to be, it would be really hard to even figure out how to consciously or unconsciously fudge the algorithm or the data to get the desired result. But, still, there are some reasons why these results could be systematically wrong. A couple are fairly obvious. One, summary moraine ages could skew young because of postdepositional disturbance (because of how the outlier-rejection scheme works in this example, they are less likely to be skewed old due to inheritance). But this would be mitigated by the fact that production rate calibration data are largely from the same types of landforms, which would skew production rates to the young side and neutralize this effect. And, in any case, if this were happening it would push moraines out of the YD toward younger ages and also push other moraines into it from the old side, so it would not be expected to produce a systematic lack of YD moraines, except by accident. Two, production rate estimates could be inaccurate. If we globally lowered production rates by 5%, that would push the big group of ages near 11-11.5 ka right into the end of the YD near 12 ka. But we would have to ignore a lot of presumably correct production rate calibration data to do that, so that is not very compelling either.
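The arithmetic behind that last point is simple: for Holocene-age samples, radioactive decay is nearly negligible, so the apparent exposure age scales roughly inversely with the assumed production rate. A one-line check (11.4 ka is just a representative value from the big group of ages):

```python
# Simple exposure age: t ≈ N / P when decay is negligible, so lowering
# the production rate P by 5% raises the age by about 5%.
t = 11400.0           # apparent age with the current production rate, yr
t_lower_P = t / 0.95  # age recalculated with a 5% lower production rate
print(round(t_lower_P))  # 12000 -- right at the end of the YD
```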

Many people would argue that this entire analysis is oversimplified, misleading, and probably wrong exactly because the data set is not highly curated and subject to detailed examination and quality control at the level of each individual landform. Many specialists in exposure-dating of glacial deposits, if you get them talking privately, will reveal that data collected by other research groups are sometimes not good. Scattered. Hopelessly contaminated by postdepositional disturbance. Not to be trusted. Garbage. Certainly not to be used in an analysis of this sort for fear of leading the results astray. Of course, this data set is so large that I have no idea who even collected most of the data, so even if I had some of these opinions myself it would be nearly impossible to act on them to influence the results. But we can make a bit of an effort to de-garbage the data set by applying some objective criteria. For example, we can accept only exposure-age distributions for which we cannot reject, at the 95% confidence level, the hypothesis that the ages belong to a single population; this basically has the effect of further screening out landforms whose ages are scattered more than can be dealt with by discarding a couple of outliers. The effect on the global distribution of moraine ages:

is to make the relative absence of moraines of YD age in relation to older and younger time periods even more striking. Applying this filter geographically produces this:

Again, being more strict about discarding scattered age distributions results in rejecting a whole lot of moraines that might have appeared YD-age just because of excess scatter, and makes ACR-age moraines (and “Preboreal,” i.e. post-YD, moraines) even more dominant. We are down to only a handful of YD moraines in the Alps and only one in the western US. What do we conclude? We conclude that YD-age alpine glacier moraines are actually kind of rare, which is at least not exactly what I expected, and possibly not what you did either. Perhaps the YD was cold, but also dry, so alpine glaciers shrank everywhere. Perhaps this is a manifestation of the seasonal distribution of YD cooling, something like Meredith Kelly suggested in this paper. Or perhaps Gerard Roe is right and glacier change is just an autoregressive process driven by stochastic variability in a steady climate.
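For what it’s worth, the “single population” screen can be sketched as a standard chi-squared test on the error-weighted mean. This is a generic version, not the exact code behind the figures, and the example ages are invented:

```python
# Accept a landform only if we cannot reject, at 95% confidence, the
# hypothesis that its boulder ages are drawn from a single population.
import numpy as np
from scipy import stats

def is_single_population(ages, sigmas, alpha=0.05):
    ages = np.asarray(ages, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas**2
    mean = np.sum(w * ages) / np.sum(w)
    chi2 = np.sum(((ages - mean) / sigmas)**2)
    # p-value for chi-squared with n-1 degrees of freedom
    return stats.chi2.sf(chi2, len(ages) - 1) > alpha

# A tight cluster passes; a badly scattered set does not.
print(is_single_population([12100, 12300, 12200], [300, 300, 300]))  # True
print(is_single_population([10000, 12300, 15200], [300, 300, 300]))  # False
```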

But in any case, ACR wins! Game over.

So, to summarize, that was kind of fun, but what is the point? It could be argued that this analysis is oversimplified, misleading, based on crappy data, and largely BS. Maybe that’s why this is an unreviewed blog posting instead of a Nature article. But what is really important here is that this exercise highlights that we can move away from making large-scale inferences about glacier response to climate change based on small sets of highly curated data, and move towards doing it with much larger and more inclusive data sets, even if they are noisy, and, hopefully, with globally applied and less biased algorithms. We have, or almost have, a modern database infrastructure with dynamic, internally consistent, and mostly transparent, exposure-age recalculation, that we can use to do this. We should do it. And if, in fact, my shot at this really is oversimplified, misleading, and largely BS, then go do it better. All the data are here.

Stone (2000) revisited

February 5, 2019

This is just to tie up a loose end from a previous post in which I argued that a compilation of Be-10 and Al-26 concentrations in extremely old, low-erosion-rate surfaces from Antarctica indicated that the time-dependent, paleomagnetically-aware “LSDn” production rate scaling method of Lifton and others did a better job of production rate estimation than the non-time-dependent “St” scaling method of Lal (1991) as implemented by Stone (2000). Basically, the difference is the altitude dependence of the production rate. Tying up this loose end is an interesting exercise in data archaeology, and also highlights some evidence that either the Al-26 half-life or the Al-26/Be-10 production ratio may need some attention.

The issue in the previous post is just that in the St scaling method, the production rate increases more slowly with elevation in polar latitudes, with the result that many high-elevation Antarctic samples have measured Be-10 concentrations that are higher than saturation concentrations predicted by that scaling method. With LSDn scaling, predicted production rates at high altitudes are higher, so all the observed concentrations are near or below predicted saturation.
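To make the idea of “saturation” concrete: the nuclide concentration in a steadily eroding surface approaches an equilibrium between production, radioactive decay, and removal by erosion, and measured concentrations above that equilibrium are physically impossible. A sketch with round, illustrative numbers (the production rate below is an assumed value, not a calibrated one):

```python
# Saturation concentration for a surface eroding at steady rate eps:
#   N_sat = P / (lam + rho * eps / Lam)
# where P is the production rate (atoms/g/yr), lam the decay constant
# (1/yr), rho rock density (g/cm^3), Lam the attenuation length (g/cm^2).
# With zero erosion this reduces to P / lam.
import math

lam_be10 = math.log(2) / 1.387e6  # Be-10 decay constant, 1/yr
P = 50.0                          # assumed high-elevation production rate
rho, Lam = 2.65, 160.0

def n_sat(P, lam, eps=0.0):
    return P / (lam + rho * eps / Lam)

n_zero = n_sat(P, lam_be10)           # ~1e8 atoms/g at zero erosion
n_eroding = n_sat(P, lam_be10, 1e-4)  # even 1 m/Myr lowers this a lot
```

The point of the comparison in the previous post is just that, if the scaling method predicts too low a value of P at high elevation in Antarctica, the predicted N_sat is too low as well, and measured concentrations end up in the impossible zone.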

However, as I pointed out in the previous post, the 2000 paper already looked at available near-saturation data from Antarctica, and that paper concluded the opposite: that near-saturation measurements from Antarctica were consistent with the scaling method proposed in that paper. This was an important conclusion at the time, and one that was extremely persuasive in motivating the adoption of lower production rate estimates and the abandonment of the standard atmosphere as a single global elevation-pressure conversion. However, lots of stuff has changed between the two comparisons. Errors in Be-10 measurement standardization were corrected by Nishiizumi and others (2007) and related work. The Be-10 half-life was remeasured, and then remeasured again. Large amounts of new Be-10 and Al-26 production rate calibration data have been collected. Everything is different. So figuring out whether what the 2000 paper says is still valid is really complicated. But let’s try anyway. It’s probably not worth the effort to explore the result of each individual one of those changes, but it’s possible to use the ICE-D:ANTARCTICA database and the latest version of the online exposure age calculator to look at the combined effect of the last 18 years of progress. 18 years is a while, so hopefully there has actually been some progress.

First, a review of the original discussion. Here is the relevant figure from the 2000 paper.

This shows selected Be-10 and Al-26 data from Antarctica plotted on a two-isotope diagram. The simple exposure region — that is, the part of the diagram where it is physically possible for concentrations to lie given a single period of steady exposure at some surface erosion rate — is the region outlined by solid black lines. The point marking the right-hand boundary of this region is the saturation endpoint: concentrations cannot lie to the right of this point, or above the upper boundary of the region. Potentially, concentrations could lie below the region if sites had been ice-covered for a time, but the majority of these sites are from mountain summits that are unlikely to have experienced ice cover in the past. In the original paper, the point of this figure was to show that certain combinations of Be-10 production rates and scaling methods produced either unphysical results (like in the upper left panel where many concentrations lie above the simple exposure region) or unlikely ones (as in the upper right panel where all concentrations imply significant burial). On the other hand, the scaling approach proposed in the paper for Antarctic sites, shown in the lower left panel, produced plausible and physically possible results.

Now, if you believe my previous posting, changes in Be-10 standardization and production rate scaling in the subsequent 18 years have superseded these results: when we take these into account, the “St” scaling method actually forces concentrations to lie outside the simple exposure region. So, let’s try to reconstruct this diagram with our current best estimate of all this stuff.

Step 1 is to figure out which samples are involved. The 2000 paper doesn’t list sample names, but describes the sample set only as “Antarctic Be-10/Al-26 data from Nishiizumi et al. (1991), Ivy-Ochs et al. (1995), and Brook et al. (1995), selected to show the oldest exposure ages in each data set.” We can make a decent attempt at figuring out which samples these are using the ICE-D:ANTARCTICA database; choosing the highest-concentration samples from the three 1990’s papers yields this set of samples. We can then replicate the figure from the 2000 paper using the following tools. One, the various scaling method implementations in version 3 of the online exposure age calculator described by Balco et al. (2008) and subsequently updated: specifically, the “St” scaling method, which is an implementation of the method originally described in the 2000 paper, and the “LSDn” scaling method developed by Lifton and others in 2014-16. Two, the CRONUS “primary” production rate calibration data sets for Be-10 and Al-26. Three, the Be-10 and Al-26 standardization relationships used in the online exposure age calculator and tabulated here. With these ingredients and the “St” scaling method as implemented in the online exposure age calculator, the figure from the 2000 paper now looks like this:

The important change is that at least two, and probably more, of the measurements exceed the saturation endpoint: they plot in the impossible region of the diagram, to the right of the simple exposure region. So this answers one question: yes, repeating the Stone (2000) calculation with the same data, but with revised measurement standardizations and production rate calibration, does in fact support the argument that this scaling method underpredicts elevation scaling at the poles. Primarily, what has happened here is that once everything has been corrected for measurement restandardization, new production rate calibration data imply a lower production rate than the best estimate at the time, so Be-10 concentrations are relatively higher in comparison to predicted saturation concentrations. Of course, we knew this already, but recalculating all these data is actually a fairly demanding test of whether the entire web of restandardization and recalibration is internally consistent. It seems to work, which is good.
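The arithmetic behind this shift is simple enough to sketch. The production rates below are made up for illustration; the point is just that if recalibration lowers the reference production rate, the predicted saturation concentration (P divided by the decay constant) drops with it, and a measurement that used to sit just inside the simple exposure region can end up past the saturation endpoint.

```python
import numpy as np

lam10 = np.log(2) / 1.387e6   # Be-10 decay constant (1/yr), assumed half-life

# Hypothetical old vs. recalibrated production rates (atoms/g/yr).
P_old, P_new = 5.0, 4.5

Nsat_old = P_old / lam10      # predicted saturation concentration, old calibration
Nsat_new = P_new / lam10      # lower production rate -> lower saturation concentration

N = 0.95 * Nsat_old           # a measurement just inside the old boundary
frac_old = N / Nsat_old       # 0.95 of saturation: physically possible
frac_new = N / Nsat_new       # now above 1: past the saturation endpoint
```

The measurement itself hasn't changed; only the predicted boundary has moved.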

For completeness, we should also try this with the latest “LSDn” scaling. Here is the result:

As discussed in the previous post, this scaling method predicts higher production rates, which pulls all the normalized Be-10 concentrations back to the left into the region of the possible. So from the perspective of Be-10 production rates, as discussed in the previous posting, this seems to show that LSDn scaling works: Be-10 concentrations are all at least a little bit below predicted saturation. However, both results, and in particular the LSDn result, are really interesting from the perspective of Al-26 production rates, because they imply that all these sites lie in the region of intermittent burial, below the simple exposure line, and thus have been covered by ice for significant parts of their exposure history. Of course, ice cover is not exactly unusual in Antarctica, but these samples, especially those from the Ivy-Ochs and Brook papers, are mostly from high-elevation mountaintop sites where ice cover is very unlikely to have occurred at any time. So this is consistent with the observation in the previous post that Al-26 concentrations throughout Antarctica are systematically lower with respect to predicted saturation than the corresponding Be-10 measurements, which probably indicates that something related to Al-26 is not as we expect. Note that we can’t fix this just by lowering production rates together (the difference between upper right and lower left panels in the original figure), because that would force Be-10 concentrations too far to the right — which tends to indicate that Be-10 is probably fine, and our problem likely has something to do with Al-26. Although of course it is hard to exactly reconstruct what happened to Al-26 measurements 20 years ago, there is not an obvious means by which Al-26 measurement standardization errors would cause this effect. In addition, we can look at more recently collected Al-26 data, for example recent unpublished measurements by Perry Spector at the Whitmore Mountains, and we get the same result. 
Here are some of the Whitmore Mountains data with LSDn scaling:

Same story. Be-10 concentrations at (or in this case maybe slightly beyond) the predicted saturation endpoint, but Al-26/Be-10 ratios significantly below those predicted for saturation. So if we can’t explain the problem with measurement standardization errors (and we’re pretty sure the Al-26 measurements were done right), this leaves two possibilities. First, the Al-26 half-life might be shorter than we think it is. This is possible: existing measurements of the Al-26 half-life might leave room for as much as a 3-5% inaccuracy, which could account for most of the apparent burial for these samples. If this is the answer, it would imply an Al-26 half-life near 675 ka…but although this is technically within the uncertainty of some of the direct measurements of the half-life, it’s substantially lower than any of them. Second, we might be overestimating the Al-26/Be-10 production ratio, although fully accounting for the apparent burial without changing the half-life would require a production ratio near 6.3, which is lower than the mean ratio at moderate elevations allowed by currently available data, but not totally outside the range of the data. Certainly, though, the data set of near-saturation measurements from Antarctica is not at the moment consistent with proposals that the 26/10 production ratio is significantly higher than 6.75. In any case, this seems to indicate that it’s probably worth taking another look at the highest Al-26 concentrations in Antarctica, perhaps making a systematic effort to remeasure Al-26/Be-10 ratios in the highest-exposure-age samples, and maybe even looking into the Al-26 half-life.
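To put rough numbers on these two possibilities: at the saturation endpoint, the 26/10 ratio reduces to the production ratio times lambda10/lambda26, which is just R0 multiplied by the ratio of the half-lives. So shortening the Al-26 half-life or lowering the production ratio both pull the predicted saturation ratio down toward the measurements. A sketch, with the half-lives as assumed values:

```python
T10 = 1.387e6   # Be-10 half-life (yr), assumed

def sat_ratio(R0, T26):
    """26/10 ratio at the saturation endpoint: R0 * lambda10/lambda26 = R0 * T26/T10."""
    return R0 * T26 / T10

base = sat_ratio(6.75, 0.705e6)       # current best-estimate values
short_hl = sat_ratio(6.75, 0.675e6)   # Al-26 half-life shortened to 675 ka
low_R0 = sat_ratio(6.3, 0.705e6)      # production ratio lowered to 6.3

# Each change lowers the predicted saturation ratio by a few percent,
# in the direction needed to explain the apparent burial.
drop_hl = 1 - short_hl / base
drop_R0 = 1 - low_R0 / base
```

With these numbers the half-life change lowers the predicted endpoint ratio by about 4%, and the production-ratio change by about 7%, which is the right order of magnitude for the apparent-burial offsets in the figures above.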

Summary: recalculating the results in Stone (2000) to incorporate 18 years of work on measurement standardization and production rate calibration does, in fact, support the hypothesis that LSDn scaling performs better at scaling existing calibration data to high-elevation, high-latitude sites, at least for Be-10. It’s also relevant that we seem to actually be doing a decent job of systematically applying the improvements: adjusting 20-year-old data and calculations for all the subsequent changes in standardization and scaling seems to produce internally consistent results. That’s good. However, looking carefully at the data set of near-saturation measurements from Antarctica indicates that maybe the Al-26 half-life and the Al-26/Be-10 production ratio need some attention.

The ICE-D x OCTOPUS collabo

January 7, 2019

The point of this blog post is to highlight that cosmogenic-nuclide geochemists will not be outdone by Nigerian rappers. A little late — Vogue has just declared that fashion/hip-hop collaborations denoted by “x” are completely 2018 — but not bad for a field whose conceptualization of success and promotion is still defined by the pre-industrial concept of literature citations. Sadly, online mash-ups are not peer-reviewed (which is probably a good thing, based on the results of the interactive review process linked below) and have zero h-index impact.

Seriously, this is a follow-up to a blog posting and an interactive paper review on the subject of the “OCTOPUS” database of cosmogenic-nuclide basin-scale erosion rate measurements. In these, I expressed, at length, bafflement as to why the existing “OCTOPUS” website did not allow direct, anonymous, granular access to the data in the database, instead providing data via an asynchronous and inconvenient email download scheme. I argued that not only was this very strange — why not just serve the raw data via the website — but also damaging to the overall credibility of the project, because it made it unnecessarily difficult for the casual user to examine the source data used for erosion rate calculations. I even quoted Ronald Reagan (who was quoting a Russian proverb at the time) to the effect that the project would not be credible unless researchers interested in using these data for synoptic erosion rate analyses could “trust, but verify.” Of course, it’s easy to quote Ronald Reagan — in fact, it’s terrifyingly easy to worship Reagan in the present American political environment — but not as easy to fix the problem. So here is the ICE-D x OCTOPUS collabo, which is an effort to fix the problem.

The effort to fix the problem is a mash-up of the ICE-D and OCTOPUS databases at

What it is is a text-based, non-graphical browser interface to the OCTOPUS cosmogenic-nuclide databases that makes it possible to (i) examine the source data recorded for each sample in that database, and (ii) recalculate erosion rates from these source data using other calculation methods.

Technically, what is happening is that this web server is interrogating the WFS server that serves the OCTOPUS data as if it were GIS software, but then is parsing the resulting XML feed and displaying the data directly in human-readable form instead of showing it in map form or using it for geographic analysis. You could do the same thing with some scripting and a local copy of ArcGIS or QGIS configured to talk to the WFS server, but this web server provides a simple browse interface. So (as long as the OCTOPUS providers continue to let me do this), this solves the problem of providing casual, anonymous, granular examination of the source data. It makes it easy to, for example, quickly check the OCTOPUS database to make sure that data from a particular paper are accurately recorded. Of course, it is not clear that authors of published erosion-rate studies have the same relationship to the OCTOPUS compilers as the US and Russian strategic nuclear forces had to each other in 1987, but the idea of “trust, but verify” is still not a bad thing. So now you can do this.
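To give a feel for what “interrogating the WFS server as if it were GIS software” means in practice, here is a minimal sketch of the two pieces involved: building a standard WFS GetFeature request, and flattening the returned XML feature collection into human-readable rows. The endpoint URL, layer name, and field names below are placeholders; the real OCTOPUS WFS details differ.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Placeholder endpoint -- not the real OCTOPUS server address.
WFS_BASE = "https://example.org/geoserver/wfs"

def getfeature_url(layer, max_features=10):
    """Build a standard WFS 1.1.0 GetFeature request URL for a layer."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": layer,
        "maxFeatures": max_features,
    }
    return WFS_BASE + "?" + urlencode(params)

def parse_features(xml_text):
    """Flatten a GML FeatureCollection into a list of {field: value} dicts."""
    root = ET.fromstring(xml_text)
    features = []
    for member in root:               # gml:featureMember elements
        row = {}
        for feat in member:           # the feature itself
            for field in feat:        # its attribute fields
                tag = field.tag.split("}")[-1]   # strip the XML namespace
                if field.text and field.text.strip():
                    row[tag] = field.text.strip()
        features.append(row)
    return features

# Demo on a canned response (hypothetical layer and field names):
sample = """<wfs:FeatureCollection xmlns:wfs="http://www.opengis.net/wfs"
              xmlns:oct="http://example.org/oct"
              xmlns:gml="http://www.opengis.net/gml">
  <gml:featureMember>
    <oct:basin>
      <oct:sample_id>AU-001</oct:sample_id>
      <oct:be10_conc>250000</oct:be10_conc>
    </oct:basin>
  </gml:featureMember>
</wfs:FeatureCollection>"""
rows = parse_features(sample)
```

Instead of handing the parsed rows to a map renderer, the ICE-D x OCTOPUS server just formats them as browsable tables.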

The second thing that this project enables is the ability to use the source data in the OCTOPUS database to compute basin-scale erosion rates with a different calculation framework than the one (CAIRN) used to compute the apparent erosion rates that are stored in the OCTOPUS database. The present ICE-D x OCTOPUS implementation, for example, recalculates apparent erosion rates via a highly simplified method that uses a single-point approximation of each basin and the version 3 online erosion rate calculator. It is not clear whether this method is more or less accurate than CAIRN — each involves different simplifications and approximations — but the comparison is useful in evaluating how sensitive an erosion rate estimate is to the choice of scaling method, production rate calibration data, and numerical calculation scheme. If one were to find that erosion rates are correlated with some landscape metric using one erosion-rate calculation method, but not with another, it would be time to figure out why. The current implementation doesn’t include bulk download of OCTOPUS data in v3 input form — at the moment it happens at the individual sample level — but it’s potentially feasible in future. If this is a problem, take heart from Reagan and “never let the things you can’t do stop you from doing what you can.”
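As a sketch of what a single-point approximation boils down to: for spallation only, the steady-state concentration at an eroding surface is N = P/(lambda + rho*eps/Lambda), which inverts to an apparent erosion rate eps = (Lambda/rho)(P/N - lambda). The production rate, concentration, density, and attenuation length below are hypothetical round numbers, and a real calculation (CAIRN or the v3 calculator) also has to deal with muons, scaling, and basin averaging.

```python
import numpy as np

rho, Lam = 2.65, 160.0        # rock density (g/cm3), attenuation length (g/cm2), assumed
lam10 = np.log(2) / 1.387e6   # Be-10 decay constant (1/yr)

def apparent_erosion(N, P):
    """Erosion rate (cm/yr) implied by steady-state surface concentration N
    (atoms/g) and surface production rate P (atoms/g/yr), spallation only."""
    return (Lam / rho) * (P / N - lam10)

# Hypothetical basin: mean production rate 5 atoms/g/yr, measured 1e5 atoms/g.
eps = apparent_erosion(1e5, 5.0)
m_per_Myr = eps * 1e4          # convert cm/yr to m/Myr
```

The interesting comparisons come from feeding the same N and P through different scaling methods and calculation schemes, which is exactly what the recalculation interface does.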

Reagan also said that “if you’re explaining, you’re losing.” So that’s it for the explanation. Thanks to the OCTOPUS developers for not blocking the WFS feed to this server, and remember that “there is no limit to the amount of good you can do if you don’t care who gets the credit*.”



* Yup, Ronald Reagan again. If only we could go back.