
Ne-21 production rate calibration demystified. Or not.

August 27, 2020

Neon-21 is now ‘trending’ on this website, because this is the second post about it this year. The purpose of this one is to try to unscramble the tedious and obscure subject of production rate calibration for cosmogenic neon-21 in quartz. Eventually, at the end, it describes an update to the part of the online exposure age calculator dealing with Ne-21, which doesn’t fix everything, but is a legitimate conceptual improvement and results in only small differences between Ne-21 exposure ages calculated before the update (i.e., yesterday) and after (today).

Even though Ne-21 production in quartz, considered by itself, is nearly all by neutron spallation and isn’t any more complicated than production of Be-10, Al-26, or He-3, it is a messy nuclide-mineral system to calibrate, because measurement of cosmogenic Ne-21 has a terrible signal-to-noise ratio compared to these other nuclides.

The reason for this is simply that there is a lot of non-cosmogenic Ne-21 out there, both present as a trace isotope in all sources of natural neon (like the atmosphere) and also produced in minerals indirectly from decay of natural uranium and thorium. It is easy to identify and correct for atmospheric neon based on its isotope ratio, but even after doing this there is nearly always some extra non-cosmogenic Ne-21 in quartz. Ne-21 is not radioactive, so the only way to get rid of it is to heat the quartz to high temperature. Most quartz in typical rocks at the Earth’s surface has been below the neon retention temperature for tens or hundreds of millions of years, and therefore contains non-cosmogenic (“nucleogenic”) Ne-21, produced from U and Th decay, equivalent to tens or hundreds of thousands of years of surface exposure. Suppose you go to a site where the production rate of cosmogenic Ne-21 is about 20 atoms/g/yr, pick up a rock, and observe 2 Matoms/g of Ne-21 in the quartz. This rock could really have a 100,000-year exposure age, or it could have arrived at the surface yesterday with a lot of nucleogenic Ne-21. Hard to tell.
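To put numbers on this ambiguity, here is the example above as trivial arithmetic (a sketch; the production rate and concentration are the figures from the text):

```python
# The same measured Ne-21 concentration yields very different apparent
# exposure ages depending on how much of it is assumed to be nucleogenic.
# Numbers follow the example in the text.

P21 = 20.0          # local cosmogenic Ne-21 production rate, atoms/g/yr
total_ne21 = 2.0e6  # measured Ne-21 concentration, atoms/g

for nucleogenic in (0.0, 1.0e6, 2.0e6):   # assumed nucleogenic component, atoms/g
    apparent_age = (total_ne21 - nucleogenic) / P21
    print(f"nucleogenic = {nucleogenic:9.0f} atoms/g -> apparent age {apparent_age:,.0f} yr")
```

With zero nucleogenic Ne-21 the apparent age is 100,000 years; if all of it is nucleogenic, the apparent age is zero, and the measurement alone cannot distinguish the two.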

This property of Ne-21 is a serious problem for production rate calibration because of how calibration normally works. Normally, we find a rock surface that we already know the exposure age of, usually by radiocarbon dating of related deposits. We measure the amount of, say, cosmogenic beryllium-10, divide the number of atoms per gram of Be-10 by the age, and obtain the Be-10 production rate in atoms per gram per year. This is quite straightforward if you can find rock surfaces that have been exposed for a known amount of time and also have not been disturbed or eroded since they were emplaced. Unfortunately, you can generally only do this for sites that are relatively young. Most good production rate calibration sites have exposure ages less than about 20,000 years, and the reason for this is that rock surfaces in most environments simply do not survive in pristine condition for a lot longer than this. Older surfaces get eroded, weathered, flaked, spalled, covered, uncovered, and generally messed up, which makes them hard to use for production rate calibration. If the surfaces that are good for production rate calibration have short exposure durations, they also have low concentrations of cosmogenic Ne-21. For nearly all production rate calibration sites that we have used for Be-10 and Al-26 calibration, concentrations of cosmogenic Ne-21 in quartz are lower than concentrations of non-cosmogenic Ne-21. Even if you could measure the total amount of Ne-21 very precisely, and could also estimate the amount of non-cosmogenic Ne-21 very precisely (which is usually impossible for a variety of reasons), estimating cosmogenic Ne-21 would still involve subtracting one big number from another big number to yield a small number. This results in a small number with a big uncertainty.

So, a surface that is good for direct production rate calibration has to be young enough that it is still pristine, but if this is true it is also too young to make a decent measurement of the cosmogenic Ne-21 concentration. This is extremely inconvenient. Most attempts to estimate the Ne-21 production rate, therefore, have had to use much more complicated strategies, mostly based on the idea of finding surfaces that have very high concentrations of cosmogenic Ne-21 so any correction for nucleogenic Ne-21 is negligible, but also have some kind of back story that enables you to make a believable argument about what the long-term exposure history has been, so you can assume an exposure history and back-calculate the production rate. One example, which I will focus on here because it is more polite to highlight the weaknesses of my own work than of others’, is a 2009 paper about rock surfaces in the Antarctic Dry Valleys. This involves a handful of samples from slowly eroding rock surfaces that are believed to have been ice-free and slowly weathering for millions of years, and also have Be-10 and Al-26 concentrations that are consistent with production-decay-erosion steady state. In other words, their Be-10 and Al-26 concentrations lie on the steady erosion line in a 2-nuclide diagram:

So, we assume that these samples have also reached production-erosion equilibrium for Ne-21. Then if the cosmogenic Ne-21 concentration is N21, the production rate is P21, the erosion rate inferred from the Be-10 and Al-26 concentrations is E, and we ignore muon production for the moment, P21 = N21*E/L, where L is the attenuation depth constant for spallogenic production (Lambda in most places). Basically, we are asking, OK, what Ne-21 production rate is needed to make it so that these data are also on the steady erosion line in Al-26/Ne-21 and Be-10/Ne-21 diagrams?
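As a sketch of that back-calculation (with illustrative numbers, not the actual data from the 2009 paper, and an assumed attenuation length of 160 g/cm2):

```python
# Back-calculate the spallogenic Ne-21 production rate from a surface at
# production-erosion steady state, using the relation P21 = N21 * E / L
# from the text. E here must be a mass erosion rate (g/cm^2/yr), so a
# linear erosion rate is multiplied by rock density first.

LAMBDA = 160.0   # spallogenic attenuation length, g/cm^2 (assumed typical value)
RHO = 2.65       # rock density, g/cm^3

def p21_from_steady_erosion(n21, erosion_cm_yr):
    """Ne-21 production rate (atoms/g/yr) implied by steady erosion.

    n21: cosmogenic Ne-21 concentration (atoms/g)
    erosion_cm_yr: erosion rate inferred from Be-10/Al-26 (cm/yr)
    """
    return n21 * (erosion_cm_yr * RHO) / LAMBDA

# e.g., 100 Matoms/g of Ne-21 at an erosion rate of 10 cm/Myr (1e-5 cm/yr):
print(p21_from_steady_erosion(100e6, 1e-5))  # -> about 16.6 atoms/g/yr
```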

The advantage of this approach is that these surfaces are eroding very slowly, so cosmogenic Ne-21 concentrations are about two orders of magnitude higher than nucleogenic concentrations. It is easy to accurately measure the cosmogenic Ne-21 concentration. The disadvantage is that the back story involves a lot of assumptions. As noted, we have to assume that the surfaces really are at erosional steady state. That might be true, but we also rely on accurately knowing the Al-26 and Be-10 production rates, so that we get the right erosion rate to plug into this formula. To some extent the dependence on the Be-10 production rate can be mitigated if you compute a ratio of Ne-21 to Be-10 production rates, and this paper eventually concluded that the Ne-21/Be-10 production ratio is 4.08. This more or less agrees with the conclusions of four other papers that used similarly devious and complex strategies, particularly including a very creative one by Florian Kober and Vasily Alfimov, as well as an additional paper describing an early attempt to determine the Ne-21 production rate directly from a young surface by Samuel Niedermann. Since approximately 2009, therefore, folks have mostly assumed that the Ne-21/Be-10 production ratio is close to 4, and computed Ne-21 production rates by applying this ratio to their best estimate of the Be-10 production rate.

This is what the Ne-21 production rate calibration in version 3 of the online exposure age calculators, at least as of last week, is based on. I think that’s the only online exposure age calculator that accepts Ne-21 input at all. This assumption mostly seems to work pretty well, but to get to the main point of this post, if you look at this carefully, it is not really clear that it should work at all, or if it is a good idea. For example, here are some problems.

The main problem is that the production ratio estimate from the Dry Valleys data described above is based on an impressive number of assumptions that were fine at the time, but are now known to be obsolete or incorrect. One: the erosion rates calculated from the Be-10 and Al-26 concentrations are based on obsolete production rate calibration data that have since been discovered to be inaccurate. Basically, the Be-10/Al-26 production rates were too high, which means the estimates of the erosion rates are also too high. Two: these calculations also used a Be-10 half-life that we now know is wrong. Because the erosion rates are so low, the half-life update affects the erosion rate estimate. Three: AMS standardization of Be-10 measurements had not been fully sorted out when that paper was written, which is related to the production rate and half-life revisions. Four: that paper assumes that the quartz samples had zero nucleogenic Ne-21. Later investigation has shown that they have a fairly large amount. Five: that paper concluded that muon production of Ne-21 was fairly high. It is now known from other data to be lower. Six: recent interlaboratory comparisons show that the Ne-21 concentrations in that paper need to be adjusted.

This is a huge list of things that we know are wrong and need to be updated.

Disappointingly, another recent paper (also by me) addressed items (4-6) above and concluded that they had a negligible effect on the estimate of the Ne-21/Be-10 production ratio. However, it didn’t address items (1-3), which is kind of misleading, so this conclusion might be right but possibly shouldn’t have been included in the paper.

And then there is one more thing. The ‘LSDn’ production rate scaling method of Lifton and Sato (various references), which is now in common use and believed to be the most accurate available, predicts that the Ne-21/Be-10 production ratio should be variable with elevation, and this was not taken into account at all in the 2009-2011 studies. And, finally, adding insult to injury, the 2014 implementation of the LSDn scaling method did not include any estimates of the cross-sections for Ne-21 production reactions, so the online exposure age calculator has been using a generic neutron flux scaling, which predicts a ratio-elevation relationship that is not even the correct one.

So, this is about ten reasons that the currently accepted production rate calibration that is in the online exposure age calculators for Ne-21 is probably wrong. On the other hand, the other papers that all basically agree on the 21/10 production ratio are subject to a different set of obsolete inputs and assumptions, and the fact that they all basically agree may well indicate that the various inaccuracies all cancel each other out. However, it seems like a bad idea to rely on this sort of magical thinking. So the current overall credibility of the Ne-21 production rate calibration that we have mostly been using is not ideal.

Fortunately, and in sharp contrast with the arc of American history at this precise moment, there is hope. Just as we were faced with the unpleasant and possibly impossible task of trying to recalculate all the old estimates of the 21/10 production ratio with new input parameters, a new paper by Cassie Fenton and others has saved us from despair. Almost. I’ll get to the “almost” part at the end.

Here’s how. Remember the root problem with Ne-21 production rate calibration is that we can’t do a direct calibration on surfaces of known age if all we have are young surfaces with high concentrations of nucleogenic Ne-21. So, instead of doing all this fancy stuff with elaborate back stories and steady-erosion assumptions, wouldn’t it be better to just find an old but pristine rock surface that doesn’t have any nucleogenic Ne-21 in the quartz? It is hard to argue with that reasoning. The surface that Cassie and colleagues have located is a quartz-bearing basaltic lava flow in Arizona with an unprintable name having the abbreviation “SP.” When you add some other words like “Cosmogenic” and, maybe, “Intercalibration” and “Experiment” – I am not sure – you get a project called “SPICE,” which should not be confused with either the Spice Girls or the SPICE core. But the point is that they have located a 72,000-year-old quartz-bearing lava flow in the middle of a desert where erosion and surface disturbance rates are very low. And because the amount of nucleogenic Ne-21 scales with the cooling age of the rock, if you want the minimum possible amount of non-cosmogenic Ne-21, you want a rock whose cooling age is no older than the exposure age. Like a lava flow. So we have a site that is a lot older than most calibration sites and has a lot less nucleogenic Ne-21 than most other rocks. Perfect.

This is important because it means we might not have to unscramble the mess of old production ratio estimates from ten years ago. We can forget about them and move on to a better plan. This implies that from the online exposure age calculator perspective, we should do the following:

  1. Fix the LSDn scaling code to use specific reaction cross sections for Si –> Ne-21. Also fix it to use up-to-date muon-related parameters from here.
  2. Calibrate the various production rate scaling methods using the SPICE data.
  3. Figure out if the results agree with what we were using before. Have we been doing it wrong?

We can obtain reaction cross-sections from a 2015 revision of the LSDn scaling code provided by Nat Lifton, which originate with estimates by Bob Reedy described here. Here is what they look like:

This shows the energy dependence of the main production reactions for Ne-21 and Be-10, with the X-axis in MeV. The threshold energy and peak cross-section energy for Ne-21 production is a little bit higher than for Be-10, so the Ne-21/Be-10 production ratio should go up a little bit as elevation increases. This is actually the opposite of what has been in the online exposure age calculator until now: so far, the Ne-21 scaling has just used a total-neutron-flux scaling (e.g., the ‘Sf’ method in CRONUSCalc), which has no threshold energy, so will predict a higher 21/10 ratio at low elevation.

Having fixed that, we can now take the SPICE Ne-21 data, which are here, and paste them into the production rate calibration input page here (note that this posting is slightly asynchronous with the server update, so this may not look exactly like this until such time as I update the web servers to use the new scaling code). This is the result:

Now, compare this with what we were doing before, which was computing the Be-10 production rate and multiplying by 4. Start with ‘St’ and ‘Lm’ scaling. For these scaling methods, the fitting parameter is a dimensional spallogenic production rate (atoms/g/yr) at some reference condition. If we were to take the existing default calibrated reference production rates for Be-10 for St and Lm scaling (4.09 and 4.21 atoms/g/yr, which derive from the CRONUS-Earth ‘primary’ calibration data set here) and apply the 21/10 production ratio of about 4 that is expected from the 2009 studies, we would get 16.36 and 16.84, which are within 3-4% of the results of calibrating with the SPICE data. So that’s good. The other way to say that is that if we calibrate Ne-21 using SPICE data and Be-10 using the existing default calibration, we predict Ne-21/Be-10 production ratios of (16.896/4.09 = 4.13) and (16.243/4.21 = 3.86) for ‘St’ and ‘Lm’ scaling, which are both within 3% of the previously accepted value of 4, or basically the same given the expected precision of the various measurements. Of course this only includes spallogenic production, but both nuclides have minimal muon production so that has a negligible effect on the ratio. Even though our best estimate of the reference Be-10 production rate now is very different from the 2009 estimate — it is about 10-15% lower — these results are indistinguishable from what we thought the ratio should be at the time. So this result (which, by the way, just duplicates a conclusion that Cassie already came to in the SPICE paper) is good. We were not totally wrong before.
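The ratio arithmetic in the last paragraph is easy to check (values copied from above; the Be-10 reference rates are the default calibration quoted in the text, and the Ne-21 rates are the SPICE-calibrated ones):

```python
# Compare Ne-21/Be-10 production ratios implied by calibrating each
# nuclide against a separate data set: Be-10 from the existing default
# (CRONUS-Earth 'primary') calibration, Ne-21 from the SPICE data.

p10_default = {"St": 4.09, "Lm": 4.21}     # Be-10 reference rates, atoms/g/yr
p21_spice = {"St": 16.896, "Lm": 16.243}   # Ne-21 reference rates, atoms/g/yr

for method in ("St", "Lm"):
    ratio = p21_spice[method] / p10_default[method]
    print(f"{method}: P21/P10 = {ratio:.2f}")   # 4.13 for St, 3.86 for Lm
```

Both ratios are within about 3% of the previously accepted value of 4.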

One interesting aspect of this result (which is also thoroughly discussed in the SPICE paper, although with slightly different conclusions) is that the apparent production ratios for the two scaling methods are different. This is unexpected because the ratio primarily results from the mineral chemistry and reaction cross-sections, so the true ratio should be close to independent of the scaling method.  On the other hand, the way the calculation is done is expected to lead to a difference here because the SPICE site is much older than the calibration sites used to estimate the Be-10 production rate, so the two calibration data sets are averaging over different time periods. Because most of the sites involved are at low enough latitude that magnetic field variations have an effect on production rates, the older site is sampling a period of lower average magnetic field strength, and therefore higher production rates, than the younger sites. The ‘Lm’ scaling method accounts for this, and the non-time-dependent ‘St’ method doesn’t. This means that St scaling with this particular combination of sites should overestimate the 21/10 production ratio by a few percent.

For LSDn scaling, the calibrated parameter is a nondimensional correction factor, not a reference production rate, and in addition the production ratio is predicted to vary with elevation, so we need to compute both Be-10 and Ne-21 production rates for a range of elevations instead of just dividing the fitted reference production rates. After calibrating Ne-21 production with the SPICE data set and leaving the Be-10 production rate calibration as the default, here is the predicted 21/10 production ratio for LSDn scaling as a function of elevation for polar latitudes (like Antarctica, where we care about this because there are a lot of Ne-21 data):

Basically, it is exactly what we expected it to be based on the complex and sketchy indirect calibration scheme from 2009: at the elevation of the Dry Valleys (about 1500 m) it predicts P21/P10 = 4.05. This figure also shows what I said above, that the ratio is basically the same whether or not you ignore muons. Again, this seems good. Either we got it right the first time, or we made so many offsetting errors that we still got it right by accident. It’s a few percent higher than what we get for the also-paleomagnetically-corrected Lm scaling, but this is not necessarily significant because LSDn scaling treats each production reaction separately, whereas St and Lm treat them all the same, so the two estimates are not really comparable.

So, this seems like a complete solution for the online exposure age calculator. Disregard the messy and in-need-of-revision attempts to measure the production ratio from ten years ago. Update the various background parameters. Calibrate the Ne-21 production rate directly from the SPICE data. Done.



OK, careful readers will immediately notice that the previous paragraph ended in “Done,” but this already quite tedious posting hasn’t actually ended. This is because there is a small problem with this reasoning. Here is the problem. The SPICE calibration study didn’t just measure Ne-21 in the quartz from the SP flow…it also measured Be-10. Thus, no matter what the scaling method is and regardless of whether or not we have messed up anything to do with scaling and production rate calibration, the measured ratio of Ne-21 and Be-10 concentrations should be equal to the ratio of the production rates (or almost equal…a small correction for radioactive decay of Be-10 is needed). The production ratio computed from the measured concentrations in this study is…4.35 +/- 0.23. Which is 5-10% larger than we think it should be from both the old calibration studies and the paragraphs above that compare Ne-21 and Be-10 production rates calibrated using separate data sets. Theoretically, measuring both nuclides in the same quartz at the same site should be a better estimate of the production ratio than comparing production rates calibrated from different data sets, so this is weird.
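The decay correction mentioned in passing is small but easy to write down. For a single period of exposure of duration t with negligible erosion, N10 = (P10/lambda)(1 - exp(-lambda*t)) while N21 = P21*t, so the measured concentration ratio converts to a production ratio as sketched below (assuming the currently accepted ~1.387 Myr Be-10 half-life):

```python
# Convert a measured Ne-21/Be-10 concentration ratio into a production
# ratio by correcting for Be-10 decay during exposure. For constant
# exposure of duration t with no erosion:
#   N10 = (P10 / lam) * (1 - exp(-lam * t)),   N21 = P21 * t
# so P21/P10 = (N21/N10) * (1 - exp(-lam * t)) / (lam * t).
import math

LAM_BE10 = math.log(2) / 1.387e6   # Be-10 decay constant, 1/yr

def production_ratio(conc_ratio, t_yr):
    """P21/P10 from a measured N21/N10 ratio and exposure duration t_yr."""
    x = LAM_BE10 * t_yr
    return conc_ratio * (1 - math.exp(-x)) / x

# For a 72 ka surface the correction factor is only about 2%:
print(production_ratio(1.0, 72e3))  # -> ~0.98
```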

On closer examination, what is happening here is that the Be-10 concentrations measured here are lower than we expect from other production rate calibrations. In other words, as noted at length in the SPICE paper, production rates calibrated using the SPICE Be-10 data alone are 5-10% lower than those calibrated using the rest of the global data set, depending on which scaling method is used. This is not just an artifact of the fact that most of the global calibration data are much younger, because it’s true for both paleomagnetically corrected and uncorrected scaling methods. And, in addition, Be-10 production rates that are as low as predicted by the SPICE data are inconsistent with many Be-10 data from Antarctica that are from surfaces with near-zero erosion rates and are close to production-decay saturation. If production rates were as low as predicted by the SPICE Be-10 data, predicted saturation concentrations would be 10% lower than the measured concentrations in these samples, which is impossible. To see an example of this calculation, take the Be-10 calibration data from here, put it in the production rate calibration page here, and then use the result to calculate Be-10 exposure ages for samples from this site.
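The saturation argument here can be made concrete. At production-decay-erosion steady state the Be-10 concentration cannot exceed N_sat = P10/(lambda + rho*e/Lambda), so a measured concentration above the N_sat predicted by a candidate production rate rules that production rate out. A minimal sketch, with assumed typical values for attenuation length and density:

```python
# Saturation Be-10 concentration at production-decay-erosion steady
# state: N_sat = P10 / (lam + rho * eps / Lambda). If measured
# concentrations exceed the N_sat implied by a candidate production
# rate, that production rate is too low.
import math

LAM_BE10 = math.log(2) / 1.387e6   # Be-10 decay constant, 1/yr
LAMBDA = 160.0                     # attenuation length, g/cm^2 (assumed)
RHO = 2.65                         # rock density, g/cm^3 (assumed)

def n_saturation(p10, erosion_cm_yr=0.0):
    """Saturation Be-10 concentration (atoms/g) for production rate p10."""
    return p10 / (LAM_BE10 + RHO * erosion_cm_yr / LAMBDA)

# Lowering the production rate by 10% lowers the zero-erosion saturation
# concentration by exactly 10%:
print(n_saturation(9.0) / n_saturation(10.0))  # -> 0.9
```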

So, the reason the apparent P21/P10 ratio at this site is a little anomalous is actually not because of the Ne-21 data, but because the Be-10 data are anomalous with respect to the rest of the world. There could be a couple of reasons for this.

Option 1 is the elaborate one. In this option, we solve the mismatch between these Be-10 data and others by arguing that there has been unrecognized erosion at the site. 18 cm of erosion in 72 ka would make the Be-10 data agree with other calibration data. But then we have too much Ne-21, so we need to solve that by arguing that 5-10% of the Ne-21 is actually nucleogenic and not cosmogenic. This is rather unlikely, as nucleogenic production of 10% of the observed excess Ne-21 in 72 ka would require something like 50 ppm U and Th in the quartz, which is about two orders of magnitude more than typically observed. Theoretically one could appeal to trapped magmatic neon with some strange isotope composition, but overall that is not a very good explanation.

Option 2 is that the Ar-Ar age is inaccurate and the flow is really a bit younger. However, quite a lot of work has gone into the Ar-Ar dating and there is no reason to suspect this. This option also requires that 10% of the Ne-21 is noncosmogenic, as above.

Option 3 is the simple one and involves concluding that the Be-10 measurements are systematically a little too low, and the Ar-Ar age and the Ne-21 measurements are correct. The argument against this option is that there is no reason to think that any measurement errors were made in this study. The Be-10 measurements should be totally fine. On the other hand, a ~10% difference is not too far out of the range of normal measurement scatter. So this would tend to indicate that we should calibrate the Be-10 production rate using a large data set from many sites (of which the SPICE data are a moderate, although not extreme, outlier), and the Ne-21 production rate using the SPICE data (because there are no other equivalent sites, so they are the entire global data set for Ne-21). This is probably the least unsatisfying option.

Summary: the online exposure age calculator is now updated to use this approach going forward. This is still a bit unsatisfying, because the concentration ratio measured at any one site should agree with the production ratio estimated from large calibration data sets, and the fact that it doesn’t means that we are missing something. But remember, the screwup we are arguing about now is only a 5% effect. That’s pretty good. So, we will be happy that this approach agrees with all other estimates of the P21/P10 ratio, and we will temporarily ignore the fact that it doesn’t quite agree with the measured concentration ratios. Because this site turns out to be so transformatively useful for Ne-21 production rate calibration, though, it would probably also be a good idea to take some steps to try to unscramble this, including (i) sending some SPICE quartz to additional labs to deal with the possibility of measurement error for both Be-10 and Ne-21, (ii) maybe replicating the Ar-Ar age a couple more times, and (iii) looking for a shielded sample of the SP flow to see if there is any noncosmogenic Ne-21. Unfortunately, a quick Google Maps search did not disclose any road cuts or borrow pits in the flow, so the shielded sample might be tough.

Noncosmogenic helium-3 in pyroxene and Antarctic exposure dating

August 22, 2020

This posting falls into the category of something that is a really interesting scientific result if you care about cosmogenic-nuclide arcana, but is probably too obscure to actually write a paper about. Specifically, it covers the topic of noncosmogenic helium-3 in pyroxene from the Ferrar Dolerite in Antarctica, which is obscure enough that even among the already highly select group of readers of this blog, only a few folks will be select enough to have any idea what I am talking about.

The context — there is always a bunch of context that wastes a lot of space at the beginning of these postings — is just the inconvenient property of stable cosmogenic nuclides that, because they are stable, there are always small concentrations of these nuclides in many minerals that are really hard to get rid of. Radionuclides such as Be-10 and Al-26, of course, eventually disappear due to radioactive decay, so minerals that have been in the subsurface for a few million years have negligible concentrations of these nuclides and arrive at the surface as the proverbial clean slate. Mostly this isn’t true for the stable noble gases He-3 and Ne-21, because the only way to get rid of them is to heat the mineral to fairly high temperature, and typical surface rocks haven’t been heated to high enough temperatures for tens to hundreds of millions of years, often not since the rock was formed. So nearly all rocks contain significant concentrations of noncosmogenic He-3 and Ne-21, either trapped at the time of rock formation or subsequently produced by nuclear reactions induced in various ways by radioactive decay of naturally occurring U and Th.

This may or may not be a problem, depending on the application. The next post, if I ever get it done, will explain why this is a big problem for production rate calibration for Ne-21. For He-3, depending on the rock type and the mineral being analysed, it is often possible to accurately quantify and correct for noncosmogenic He-3 by crushing to extract magmatic helium independently of cosmogenic helium, and/or measuring U and Th concentrations in the mineral, as described in detail in a paper by PH Blard. From the perspective of the present post, however, this type of approach commonly does not work in the application of exposure-dating using He-3 in pyroxene. This application is particularly important in Antarctica because a common target for exposure-dating of Antarctic bedrock and glacial deposits is pyroxene extracted from the Ferrar Dolerite, a mafic intrusive rock that is extremely widespread in the Transantarctic Mountains and therefore covers a major fraction of the total ice-free area on the continent. Fine-grained facies of the Ferrar Dolerite are also one of the most durable and weathering-resistant rocks on Earth, so the effect of millions of years of weathering in many long-exposed ice-free areas in Antarctica has been to destroy pretty much everything else and produce large areas where nearly every surface clast is Ferrar.

This is not an exaggeration. Fresh clasts of Ferrar start off greenish-gray, and develop a reddish-chocolatey weathering rind over time. In this photo of a moraine sequence at Otway Massif, in the southern Transantarctic Mountains, the entire landscape is this color. Essentially every object in this photo, with the exception of glacial geologist Gordon Bromley and some distant snowfields, is Ferrar dolerite. The example that Gordon has triumphantly slain has an exposure age of 2.3 Ma.

Thus, Ferrar dolerite is everywhere, pyroxene extraction from this rock is fairly straightforward, and He-3 measurement in the pyroxene is also fairly simple, so we would like to be able to use it widely for exposure-dating applications throughout the Transantarctic Mountains. To do this, we need an estimate of the noncosmogenic He-3 concentration. However, it’s not possible to estimate this in Ferrar pyroxene internally from a single sample. Without getting too far into the details, the rock has a cooling age of 183 Ma and reasonable U and Th concentrations, so pyroxenes have very high concentrations of He-4 that is due to U/Th decay and not to trapping of magmatic gas, and in any case the pyroxene generally doesn’t yield any significant amount of gas when crushed. In this situation the methods described by Blard and others don’t work.

So, what do we do now? The other strategy for determining the noncosmogenic He-3 concentration in a given mineral from a given lithology is just to obtain a deeply buried sample of that lithology that hasn’t been exposed to the surface cosmic-ray flux. Measure the amount of He-3 in the shielded sample and subtract it from the total He-3 measured in surface samples to arrive at an estimate of cosmogenic He-3 in the surface sample.

This is easy when you can obtain a subsurface sample of the same lithology. A good example is a recent paper by Gunnar Speth and others dealing with He-3 exposure dating of glacially transported boulders in the Oregon Cascades: they figured out the source lithologies of the boulders and obtained shielded samples from quarries and road cuts in those units. No problem.

Unfortunately, in Antarctica, problem. Road cuts, quarries, mines, and boreholes are absent in the Transantarctic Mountains. However, in the past 20 years or so, a number of people have struggled to overcome this inconvenience and find the most shielded samples of Ferrar Dolerite that they could.

Robert Ackert searched the Dry Valleys for samples from the deepest crevices beneath the largest overhanging cliffs that he could. The lowest He-3 concentrations in these samples, described in his dissertation, are in the range of 5-9 Matoms/g. This is not a huge amount, but is equivalent to 30,000-60,000 years of exposure at sea level, so is significant for exposure dating.

Helen Margerison argued in this paper that the Ferrar Dolerite was basically the same as other dolerites stranded in Tasmania by Gondwana breakup, and obtained some samples of deep drill core from the Tasmanian example. Result: 6.8 Matoms/g.

Mike Kaplan found two clasts of Ferrar Dolerite that appear to have very recently melted out of ablating blue ice at Mt. Achernar, and reported in this paper that pyroxene in these samples had 5 Matoms/g He-3.

Of course, with the exception of the Tasmanian borehole sample, we don’t really know that any of these samples have an exposure age of exactly zero — and it is quite likely that they don’t — so they are all upper limits on the noncosmogenic He-3 concentration. The Kaplan and Ackert samples have lower concentrations than the Tasmanian sample, which does indicate that Antarctic and Tasmanian dolerites are not exactly the same from this perspective, but they only show that the true noncosmogenic concentration must be less than about 5 Matoms/g. We still can’t prove it’s greater than zero.

So, lacking a drill rig or explosives, how can we do better than “somewhere between 0 and 5?” An interesting way to accomplish this relies on a study by Julia Collins, Shaun Eaves, and others that attempted to make cosmogenic Be-10 measurements on pyroxene instead of the usual quartz. There are many reasons this is difficult, and the production rate of Be-10 in pyroxene is not super well established, so this approach is not immediately very useful for exposure dating, but one result of this study was a set of Be-10 measurements in pyroxene from samples of Ferrar dolerite bedrock and erratics at a site near Mackay Glacier.

Subsequently, thanks to the greatly appreciated assistance of the authors of this paper in supplying splits of the samples, we measured He-3 concentrations in some of these samples at BGC. These data can be found in a summary table here, and they’re also in the ICE-D:ANTARCTICA database. This is what they look like.

We didn’t find any samples that have less He-3 than the previous record holders with around 5 Matoms/g. But the important thing here is that for samples that have experienced a single period of exposure — as we expect for these samples that have Be-10 ages that record Holocene deglaciation — cosmogenic Be-10 and He-3 concentrations have to be linearly related. The slope of the line is equal to the He-3/Be-10 production ratio, and the intercept  is equal to the noncosmogenic He-3 concentration in the pyroxene. These data are, in fact, linearly related as we expect, so they enable us to estimate the noncosmogenic He-3 concentration without actually finding a true zero-exposure-age sample. Drill rig not needed. Very useful.
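As a sketch of how this works (with made-up Be-10/He-3 pairs constructed to lie exactly on a line, not the actual Mackay Glacier data), the regression is just a straight-line fit:

```python
import numpy as np

# Hypothetical (Be-10, He-3) concentration pairs in Matoms/g, constructed
# to lie exactly on a line for illustration -- not the real Mackay data.
N10 = np.array([0.02, 0.05, 0.08, 0.12, 0.15])   # Be-10, Matoms/g
N3 = 150.0 * N10 + 3.3                           # He-3, Matoms/g

# For single-period exposure, N3 = R * N10 + N3_nc, where the slope R is
# the He-3/Be-10 production ratio in pyroxene and the intercept N3_nc is
# the noncosmogenic He-3 concentration.
R, N3_nc = np.polyfit(N10, N3, 1)
```

In practice you would use a fit that accounts for measurement uncertainties in both coordinates to get a defensible uncertainty on the intercept.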

The estimate from linear regression of these data is 3.3 +/- 1.1 Matoms/g, which is consistent with the maximum limiting estimates from Antarctic samples noted above. In addition, we recently measured a He-3 concentration of 2.9 +/- 0.7 Matoms/g (actually the mean of three not-super-precise measurements) from an erratic clast collected by Jamey Stutz near the ice margin at Evans Heights, adjacent to David Glacier. So that sample may be the hotly pursued true zero-age sample, or something close to it. Here is the regression above together with all of the estimates from attempts to find shielded samples shown as horizontal lines:

Blueish lines are from Ackert (5-9 Matoms/g); green, Margerison (6.8); pink, Kaplan (5); gray, Stutz (3).

All the measurements from Antarctica (although not the one from Tasmania) are consistent with the 3.3 +/- 1.1 Matoms/g estimate. If we accept that the Tasmanian and Antarctic dolerites are not equivalent from this perspective, this all seems to make sense. On the other hand, it also brings up some aspects of helium systematics in the Ferrar that are still unexplained, and tend to cast a bit of doubt on this whole story. Even though the Ferrar is a single magmatic system that was emplaced over a geologically fairly short period of time, it consists of many different intrusions, and each intrusion includes various petrographic facies with distinct crystallization histories. Suppose all the noncosmogenic He-3 we observe in Ferrar pyroxene is magmatic. It sounds somewhat plausible that the He-4/He-3 ratio of magmatic gases could be uniform and constant throughout the magma system, but it would be very surprising if the He-3 concentration in pyroxene that presumably crystallized at different times, places, and rates was also uniform and constant. However, what we observe in the regression plot above is that samples of Ferrar pyroxene from a set of glacial erratics that were presumably drawn from many different levels of the Ferrar in the Mackay Glacier area appear to have a fairly constant noncosmogenic He-3 concentration. If the noncosmogenic He-3 concentration was much more variable than the typical uncertainties in these measurements, we wouldn’t see anything resembling a linear relationship.

This leads us to believe that maybe the noncosmogenic He-3 isn’t magmatic, but was produced post-emplacement by the reaction Li-6(n,alpha)He-3 induced by neutrons derived from decay of U and Th in the rock. Helen Margerison did this calculation with a few trial measurements and came up with something of at least the right order of magnitude. For He-3 from this source to be constant throughout the Ferrar would require (i) a fairly constant Li concentration in pyroxene throughout the Ferrar, and (ii) a fairly constant neutron flux, which in turn requires fairly constant U and Th concentrations in the whole rock. These conditions seem more plausible, and could potentially be verified by measurements. Although the high variability of He-4 concentrations in Ferrar pyroxene (an order of magnitude or more) implies that U and Th concentrations in pyroxenes are also highly variable, the thermal neutron flux is controlled by the bulk composition of the whole rock and not the pyroxenes. Thus, the hypothesis that noncosmogenic He-3 in Ferrar pyroxene is mostly nucleogenic, mostly not magmatic, and fairly constant throughout the unit at least can’t be disproved by available data at the moment. However, it would probably be a good idea to try a little harder to disprove it by more systematic measurements of U, Th, and Li in pyroxene and whole rock. We could also measure Be-10 and He-3 on a lot more Holocene erratics; there are plenty near major glaciers throughout the TAM.

To summarize, we can’t disprove the hypothesis that noncosmogenic He-3 in Ferrar pyroxene is always 3.3 +/- 1.1 Matoms/g. In fact, so far it seems like a pretty good hypothesis. If we assume that it is, what does this do for us in the exposure-dating application? At sea level in Antarctica, this amount of He-3 is equivalent to 22,000 +/- 7000 years of exposure, so it is still not ideal for exposure dating of LGM-to-present glacial deposits at low elevation. Even if we measure the total amount of He-3 with perfect precision, we are still stuck with a 7 ka uncertainty after we subtract this estimate of noncosmogenic He-3. On the other hand, at 2000 m elevation, it equates to only 3800 +/- 1200 years of exposure, so in principle this allows us to exposure-date LGM-to-present deposits at high-elevation sites with reasonable precision. For example, this implies that LGM-age deposits near the top of Beardmore Glacier are near 12 ka, rather than just less than 20 ka as calculated in the original paper without a correction. This seems like progress.
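Here is the arithmetic of that correction as a sketch, with production rates back-calculated from the year-equivalences just quoted (so they are implied values, not independent estimates) and a made-up measured concentration:

```python
# Noncosmogenic He-3 estimate from the regression, atoms/g:
N_nc, sig_nc = 3.3e6, 1.1e6

# Production rates implied by the equivalences in the text, atoms/g/yr:
P_sl = 3.3e6 / 22000.0      # ~150 at sea level in Antarctica
P_2000 = 3.3e6 / 3800.0     # ~870 at 2000 m elevation

# A hypothetical perfectly measured He-3 concentration, atoms/g:
N_meas = 20.0e6

t = (N_meas - N_nc) / P_2000    # corrected exposure age, yr
sig_t = sig_nc / P_2000         # age uncertainty from the correction alone
```

Even with a perfect measurement, the ~1200 yr floor on the age uncertainty at 2000 m (and ~7000 yr at sea level) comes entirely from the noncosmogenic correction.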


Where is the production rate?

June 10, 2020

Based on recent emails, a common complaint about version 3 of the online exposure age calculator is that when you calculate an exposure age, the results page does not include the nuclide production rate at the sample location.

This was a feature of the version 2 exposure age calculator that apparently was much more widely used than I thought.

Some users then proceed to further confusion about whether or not the production rate calibration input page is intended to be used for this purpose. Of course this is not the case — production rate calibration is something quite different.

The main reason that the sample production rate has been removed from the results page in version 3 is that for a variety of reasons, it is now fairly clear that time-dependent production rate scaling methods that take account of magnetic field changes are more accurate than scaling methods that assume that the production rate is unchanged. In practice, there’s not much difference for many common applications, mainly exposure-dating glacial deposits at relatively high latitude where the unknown-age sites are similar in age to calibration data, but in general it is almost certainly better to use the time-dependent methods. With a time-dependent scaling method, of course, there is really no point in telling you what the production rate is at any one time. So, no production rate.

The purpose of this post is to point out that even though there is no box on the results that says “production rate,” once you have calculated an exposure age, it is quite easy to determine the average production rate during the period of exposure, which is probably what you are looking for if you are sending me emails asking about where the production rate is.

In the case of Be-10, you have already measured the nuclide concentration N_{10} (atoms/g). You have just calculated the exposure age t (yr). You know the decay constant for Be-10 (\lambda_{10} = 4.99 x 10^{-7} /yr). These things are related to the Be-10 production rate P_{10} (atoms/g/yr) by:


N_{10} = \frac{P_{10}}{\lambda_{10}}\left[ 1 - \exp{\left(-\lambda_{10}t \right)} \right]


This is easily solved for P_{10}.
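For example, a minimal numerical version of the inversion, with made-up values for N_{10} and t:

```python
import math

lam10 = 4.99e-7      # Be-10 decay constant, 1/yr
N10 = 150000.0       # measured concentration, atoms/g (made up)
t = 12000.0          # exposure age from the calculator, yr (made up)

# N = (P/lam)*(1 - exp(-lam*t))  =>  P = N*lam / (1 - exp(-lam*t))
P10 = N10 * lam10 / (1.0 - math.exp(-lam10 * t))

# For t much shorter than the half-life this is close to the stable-nuclide
# limit P = N/t; the decay correction makes it slightly larger.
```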

This is even simpler for stable nuclides:


N = tP


As noted above, for stable nuclides and for short exposure times relative to the half-life of a radionuclide (that is, most exposure-dating applications), this will give you the mean production rate during the period of exposure.

Of course it is the mean production rate in the sample — thickness and shielding corrections are included — not at a theoretical unshielded flat surface.

For long exposure times with radionuclides, what you get is a decay-weighted effective mean production rate, which is something like:


\frac{1}{t} \int_{0}^{t}P(\tau)\exp{\left(-\lambda \tau \right)} d\tau


where time is measured as zero now and positive for times in the past, and \tau is the variable of integration.

This is additionally simplified because zero erosion is assumed: if you had included an erosion rate, you would have to make the equation slightly more complex to include both radioactive decay and erosion. However, if you just want the production rate, it is easiest to do the calculation with zero erosion and use this equation.
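If you do want that decay-weighted average, it is easy to evaluate numerically. Here is a sketch with an invented, wiggly P(\tau):

```python
import numpy as np

lam = 4.99e-7                       # Be-10 decay constant, 1/yr
t = 500000.0                        # exposure age, yr (made up)
tau = np.linspace(0.0, t, 10001)    # time before present, yr

# An invented time-varying production rate, atoms/g/yr:
P = 4.0 + 0.5 * np.sin(2.0 * np.pi * tau / 100000.0)

# (1/t) * integral of P(tau)*exp(-lam*tau) d(tau), by the trapezoid rule:
f = P * np.exp(-lam * tau)
P_eff = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(tau)) / t
```

The decay weighting means P_eff is smaller than the simple time average of P(\tau), because nuclides produced in the distant past have partly decayed away.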

Again of course, for a time-dependent scaling method the mean production rate is not the same as the production rate in the sample right now, except by accident. For non-time-dependent scaling the present production rate and the mean production rate during the period of exposure are the same.

OK, done. The point is that if not having the local production rate reported is a problem, it is a pretty easy problem to overcome.





Some don’t like it hot: accounting for Ne diffusion during exposure in hot deserts

April 22, 2020

Another guest author! Michal Ben-Israel is a Ph.D. candidate at the Hebrew University of Jerusalem, spending her time attempting to apply cosmogenic Ne-21 to quantify rates of surface processes in the deep geological past, with variable degrees of success.

It has been pointed out before that cosmogenic Ne-21 is, simply put, a pain in the neck. As a stable cosmogenic nuclide, Ne-21 has undeniable potential, especially when it comes to exposure dating past 10^7 yr. Unfortunately, cosmogenic Ne-21 comes with an impressive list of issues that need to be considered when applying it over such timescales: interference from non-cosmogenic Ne-21 trapped in the crystal lattice (of atmospheric, nucleogenic, or other sources), inherited cosmogenic Ne-21, post-burial muon-produced Ne-21, not to mention the overall scarcity of preserved sediments and surfaces that are old enough to justify getting into this whole Ne-21 mess.

In defense of cosmogenic Ne-21, it has been shown to be useful in dating exposure of very slowly eroding surfaces that have been exposed beyond the reach of radioactive cosmogenic nuclides, as in Antarctica or the Atacama. These two cold deserts constitute the perfect case study for exposure dating with cosmogenic Ne-21: uninterruptedly exposed, slowly eroding, and with plenty of quartz to sample. But these are not the only locations that check those Ne-21 boxes; parts of Australia, the southwest US, and the Levant all make good candidates. Except that when we consider hot deserts, we find ourselves faced with one more disadvantage of Ne-21: diffusion.

Thermally activated diffusion of Ne from quartz is yet another thorn in the side of applying cosmogenic Ne-21 over geological timescales. Neon, being a noble gas, is inert, so it readily diffuses out of the quartz lattice; like all diffusive processes, this depends on temperature and time. Diffusion can be useful, and a few experimental studies of the diffusion kinetics of Ne in quartz led the way to some cool applications, such as paleothermometry. However, when it comes to extended periods of surface exposure, diffusion is a foe.

This thought experiment started with a thought-provoking discussion with Marissa Tremblay (see referee comment RC3), wherein she wondered about possible diffusion in the reported Miocene chert pebbles sampled from the Negev Desert. While diffusion from these pebbles is most likely insignificant, the question arose: what would happen to exposure dating of uninterruptedly exposed, slowly eroding, quartz-covered surfaces in hot deserts?

The first thing to note about hot deserts is that they get very, very hot. Air temperatures in the Sahara, for example, can reach 56ºC (~134ºF for the metrically challenged). Surface temperatures, and particularly temperatures of exposed dark rocks such as chert or patina-covered quartz/quartzolite, can be significantly hotter. In fact, McFadden et al. proposed solar heating as a mechanism for physical weathering of rocks in arid landscapes, leading to preferential vertical cracking in exposed boulders and pebbles. So it would be reasonable to assume that a dark rock in a hot desert during midday in the summertime would reach temperatures of up to 80ºC (this might be on the high end of the scale, but is probably still within reason). The question I'll examine next is: would such temperatures affect surface exposure dating, and by how much?

On the left are fragments of what used to be a chert cobble from the hyperarid Eilat area. On the right is a well-developed desert pavement composed of dark fragments of cherts and quartzolites located in the northern Negev Desert.

For the diffusion calculations, we can assume that rock temperatures get sufficiently hot only during Apr-Oct and only between 10 am and 4 pm. We can simplify these calculations by assuming that during [(6/12)*(6/24)] of the time each year, dark rocks at the surface hold a steady temperature of 80°C. One more assumption we need to make concerns the effective mineral radius. In the calculations presented here, I used 250 and 850 micrometer diameters (the commonly used grain-size sampling range for cosmogenic nuclide analysis), and a 1 cm mineral diameter (a typical size for a desert pavement fragment). [Sidenote: mineral radius is perhaps the most uncertain variable in the diffusion equation, as mineral radii can range from tens of microns to a few centimeters. It is also important to note that mineral diameter isn't necessarily equal to grain size, but that is a whole other question that I am not sure I know how to answer, and I definitely don't want to get into here.]
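For reference, the standard series solution for fractional gas retention in a sphere held at constant temperature looks like this. The D/a^2 value you would plug in is the big unknown, and this form strictly applies to gas already in place when heating starts (cosmogenic Ne-21 is produced continuously, which the full time-stepped calculation has to handle), so treat it purely as a sketch:

```python
import numpy as np

def fraction_retained(D_over_a2, t, nterms=500):
    """Fraction of gas retained in a sphere of radius a after time t at
    constant diffusivity D, for a uniform initial distribution. Units of
    D_over_a2 and t just need to be consistent (e.g. 1/s and s)."""
    n = np.arange(1, nterms + 1)
    terms = np.exp(-(n**2) * np.pi**2 * D_over_a2 * t) / n**2
    return (6.0 / np.pi**2) * np.sum(terms)

# Effective fraction of each year spent at the hot temperature:
hot_fraction = (6.0 / 12.0) * (6.0 / 24.0)   # = 1/8
```

Because the series argument scales as D/a^2, quadrupling the grain radius cuts the effective loss rate by a factor of 16, which is why the 1 cm pavement fragments behave so differently from sand-sized grains.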

Now we can calculate the time-integrated diffusion over 100 yr intervals for a total span of 10 Myr. For this imaginary scenario, I scaled production for a site in the Negev Desert, where some very slowly eroding surfaces have been reported. This calculation could also have been normalized to production rate or calculated for SLHL production rates, but I chose this site for no reason other than that I wanted to use data for an existing location.

This figure shows how diffusion affects cosmogenic Ne-21 calculated exposure ages. On the left, we can see how the concentration of cosmogenic Ne-21 deviates (dashed lines) from the cosmogenic Ne-21 produced (continuous line), depending on grain size radius. On the right is the actual exposure age versus the calculated exposure age, with the continuous line representing no diffusion and the dashed lines representing diffusion depending on grain size.

What we learn from this figure is that for the sand size range, diffusion becomes significant after ~2-3 Myr of exposure, and beyond 5 Myr the difference between actual exposure time and calculated exposure time gets problematic. So right around the time that exposure can no longer be quantified using Al-26 and Be-10, and Ne-21 gets useful, is when cosmogenic Ne-21 starts to falter and we need to start considering the effects of diffusion. To sum it all up, I guess we can add diffusion during exposure (in hot deserts) to the long list of nuisances that come with cosmogenic Ne-21 exposure dating.

At the base of this text hangs the question: is cosmogenic Ne-21 worth bothering with for exposure dating? There is now even more evidence to suggest the answer is no, but I don't think the answer is that clear-cut. At this point in time, Ne-21 remains the ONLY available cosmogenic nuclide that can be applied to quantify rates of surface processes throughout the geological record, at least until another stable or slowly decaying nuclide comes along (I'm looking at you, Mn-53…)

Computational infrastructure for cosmogenic-nuclide geochemistry

February 25, 2020

The point of this post is just to provide a link to an otherwise obscure document that is, nevertheless, interesting if you are (i) a cosmogenic-nuclide geochronologist, which is unusual already, and also (ii) a cosmogenic-nuclide geochronologist interested in data management and computational infrastructure, which is, to say the least, far beyond unusual. However, I am sure that if any such people do exist, they are probably readers of this blog, so this seemed like a good place to put it.

The document is here.

It is the project description section of a proposal that Ben Laabs (NDSU) and I submitted to the NSF ‘Geoinformatics’ program in August 2019.

Basically, the main idea of the proposal is to provide some support for the ICE-D database projects previously featured in this blog. Although presumably this proposal has now been reviewed, its funding status is unknown at the moment.

In any case, I think this document is interesting for several reasons, as follows:

  1. It contains the best and probably only intelligible description of what the ICE-D database project is supposed to be doing, and why it is set up the way it is. The ICE-D websites are otherwise nearly entirely without documentation. It’s not going to give you the details about exactly what is happening in the back end (and I apologize in advance for Figure 2), but it is a good overview of the purpose and context of the project that is not otherwise written down anywhere else.
  2. It contains some ideas on why the ICE-D project infrastructure is, potentially, a pretty good way to think about computational and data management infrastructure for geochemistry and geochronology.
  3. It contains some decent ideas on what science applications we could use it for in the future.
  4. It introduces the phrase “transparent middle layer” to geochronology. Although this makes the whole thing sound like a terrible Silicon Valley venture-capital pitch — synergizing the enterprise cloud data ecosystem with a transparent middle layer — and I may regret it later, I think it is a good way to think about how geochronology data management should work if it is going to be useful for anything.
  5. It also contains a compact account of the historical development of the bits of computational infrastructure used for cosmogenic-nuclide geochemistry, in particular the various online exposure age calculators. Mainly because the production rate calculations needed to get from a nuclide concentration to an exposure age are such a mess, cosmogenic-nuclide geochemists were quite early adopters of cloud computing in the form of the online exposure age calculators. I think this is interesting.

On the other hand, it is a proposal, so there are also a lot of things in there that are just proposed and may never happen, and in addition it wildly overuses the word “synoptic.” But that part is really not too bad. In any case, if you plan to be involved in Earth science applications of cosmogenic-nuclide geochemistry in future, it is a potentially useful read.

How well do we actually know sample elevations in Antarctica?

December 8, 2019

This is a guest post about verifying the elevations of samples collected in Antarctica for exposure dating purposes using a high-resolution DEM of the continent. Basically, what I’m doing is comparing the elevations reported in the literature of all samples in the ICE-D:ANTARCTICA database to the elevations of those same samples calculated from the high-resolution Reference Elevation Model of Antarctica (REMA). This is kind of a long post, but the main points are that (i) DEM-derived elevations agree reasonably well with reported elevations for most samples, but a large number of samples have misfits of 10s to 100s of meters; (ii) there is no obvious indication that folks are reporting elevations relative to the ellipsoid rather than the geoid, which is a good thing; (iii) misfits between reported and REMA-derived elevations decrease considerably starting in the early 2000s, presumably because portable GPS units had become widely available by then, and (iv)  large misfits of 10s to 100s of meters occur in both the pre-GPS era and the GPS era, and, in many cases, appear to be due to folks reporting incorrect or imprecise latitude and longitude coordinates.

I’m doing this exercise for two main reasons. First, elevation is the primary control on cosmogenic-nuclide production rates – production rates change by around 1% for every 10 m change in elevation – and so we’d really like to know whether elevations in the literature are actually correct. It’s reasonable to be a skeptic here because almost no one reports how they determined sample elevations, what their uncertainties are, and whether elevation uncertainty has been propagated into exposure-age uncertainty (probably not). In Antarctica, sample elevation is important for a second reason, which is that, provided that you also know the elevation of the local ice surface, you can often say something about how much ice thickness may have changed in the past.

I’m using REMA for this exercise, which is a DEM constructed from high-resolution stereo imagery shot by commercial satellites. Note, that I’ve re-referenced REMA from the WGS84 ellipsoid to the EGM96 geoid. Here’s how reported elevations compare to REMA elevations.

There’s actually a surprising amount of scatter. Given uncertainties in both REMA and in handheld GPS units (discussed below), I think about 10 m is as small as we can realistically hope the misfit to be for a given sample. However, of the 3300 samples in the database, only 61% have absolute misfits of 10 m or less, 24% have misfits greater than 20 m, and 5% have misfits that exceed 100 m. That’s big. An elevation error of 100 m would translate to a production rate error of roughly 10%.

At this point it’s reasonable to ask how much elevation error we expect from both field measurements and from REMA. Reported REMA elevation errors are generally less than 1 m for the open ice sheet but, not surprisingly, they are somewhat higher in rough terrain. The version of REMA that I’m using has 8 m horizontal resolution (a 2 m product is available, but the jump from 8 m to 2 m increases the hard-drive space needed for this exercise by a factor of 16). REMA errors for all samples (represented by vertical lines in the left panel above) have an average value of 2 m and a standard deviation of 7 m. So, I think we can safely say that, in almost all cases, the misfits of 10s to 100s of meters are not due to errors in REMA. For field-based elevation measurements, accuracy really depends on whether samples were collected in the pre-GPS era or the GPS era, as will be demonstrated later in this post. In the pre-GPS era, sample elevations were presumably estimated using topographic maps or barometric methods, and I would guess that these estimates have uncertainties of at least tens of meters in most cases. Since roughly the year 2000, most sample elevations have probably been determined with handheld GPS units. In this blog post, Greg compared measurements he made with an inexpensive handheld GPS unit to measurements with a fancy geodetic-grade one. He found normally distributed elevation residuals with a standard deviation of 7 m. So, to summarize, I think it’s safe to say that neither field GPS measurement error nor REMA error explains the large misfits of 10s to 100s of meters that we see.

Before moving on, it’s worth pointing out that the histogram in the right panel above is skewed significantly to the right. In other words, reported elevations are, on average, higher than REMA-derived elevations. We’ll come back to this later in the post.

Now let’s look at the spatial distribution of the misfit. In the figures below each 50 km box is colored by the average (upper panel) and standard deviation (lower panel) of the misfit of samples located in that box.

Interestingly, many of the blue squares in the upper panel are sites where either Mike Bentley, Jo Johnson, or John Stone (and academic descendants) have worked (e.g. Southern TAMs, Amundsen Sea Coast, Antarctic Peninsula, Pensacola Mtns., Lassiter Coast). You can also see that some of the largest misfits are clustered in certain areas such as the Shackleton Range, located east of the Filchner–Ronne Ice Shelf, and sites in Dronning Maud Land farther east along the coast. Note, however, that presenting the data in this way masks how many samples are actually contributing to each box. For example, in the case of the Shackleton Range, the three red boxes only contain five samples.

There are a few plausible factors that could be contributing to the large misfits. One possibility is that folks are reporting elevations relative to the ellipsoid rather than to the geoid. In Antarctica, depending on where you are, the difference between the ellipsoid and the geoid can be up to about 60 m. So, while this can’t explain the largest misfits, it is important to know whether or not sample elevations are being reported in a consistent fashion. The plot below shows the relationship between misfit and ellipsoid-geoid correction for all samples.

The distribution is skewed to the right (as noted above), but there is no obvious indication that folks are reporting elevations referenced to the ellipsoid rather than the geoid, which is a good thing.

A second possible explanation for the large misfits is that samples with large misfits were collected in the pre-GPS era. The figure below shows that this effect does, in fact, explain a lot of the misfit. In the lower panel, the misfit distribution for each year is represented by a box plot where the red line shows the median misfit, the bottom and top of the box represent the 25th and 75th percentiles, respectively, and the entire range of misfit is represented by the dashed line. We don’t actually know the collection date for all samples (or it is at least not compiled in the ICE-D:ANTARCTICA database), and, for those samples, I’m using the date of the earliest publications in which they are mentioned, which probably lags the collection date by a few years on average.

What’s cool about the lower panel is that you can actually see when folks started using precise GPS units in the field. Starting in the early 2000s the average misfit decreases to 6 m, which is about what we would expect given REMA errors and handheld GPS errors discussed above. 2019 is kind of an outlier, but as you can see in the upper panel, there are very few samples associated with that year (presumably more samples were actually collected in 2019 but have just not been published yet). Note that although GPS units were occasionally used in Antarctica to determine sample elevations at least as early as 1995, GPS signals for civilian use were intentionally degraded by the U.S. government until 2000 under a program known as “selective availability”, during which GPS-determined positions had uncertainty of 10s of meters.

It’s important to mention that the large difference between pre-GPS and GPS era misfits doesn’t necessarily mean that the elevation of samples collected in the pre-GPS era are wrong. Because REMA is queried at reported latitude and longitude coordinates, any error in these coordinates translates into elevation error in areas where there is significant vertical relief. So, it’s possible that the reported pre-GPS era elevations are actually correct, but that the samples are incorrectly located. This actually appears to be what’s going on for at least some samples.

Here’s an example of this effect from Dronning Maud Land. These samples were collected by a German expedition in 1995-1996. The samples are reported to have been collected from either the nunatak in the lower part of the image or from the nearby debris fields; however, as you can see, they appear to be located several kilometers away on the ice sheet.

Here’s another example of samples collected in 1999 from Migmatite Ridge in Marie Byrd Land. Five of the samples appear to be located hundreds of meters or more than a kilometer away from outcropping rock.

Interestingly, the problem of mis-located samples is not actually limited to samples collected in the pre-GPS era. Below is an example from the west side of the Antarctic Peninsula, where samples were collected from Overton Peak on Rothschild Island. Despite the fact that the collectors of these samples were using handheld GPS units in the field, six of their samples appear to have been collected in the middle of an ice shelf 15 km away from Overton Peak. Interestingly, these misplaced samples actually account for all of the samples with misfit in excess of 100 m from the year 2012.

Although the examples above show obviously misplaced samples, for most samples with large misfits it is not clear whether (i) the reported latitude and longitude coordinates are incorrect/imprecise, (ii) the reported elevations are incorrect, or (iii) both are wrong. Samples with large misfits commonly map to areas of outcropping rock, and, in these cases, it’s difficult to tell what the problem is. However, there is actually another piece of evidence in support of the idea that many of the large misfits are due to bad latitude and longitude coordinates. Recall the right panel in the first figure I showed. The distribution is skewed significantly to the right, which means that, on average, reported elevations are higher than REMA-derived elevations. Samples for exposure-dating purposes are more likely to be collected from topographic highs (summits, ridges, etc.) than from topographic depressions. That means that horizontal position error for a given sample will more often than not translate to a REMA-derived elevation that is lower than the sample’s true elevation, which is consistent with the skew in the distribution.

What’s interesting is that, if we assume that horizontal (rather than vertical) position error is the source of the bulk of the misfit, then samples are actually misplaced horizontally by much more than the 5-10 m typical of the accuracy of handheld GPS units. For example, if we look at all of the samples with misfits greater than 25 m, only 5% of these have reported elevations that can be found in REMA within 50 m of the reported latitude and longitude. For samples with misfits greater than 100 m, this value decreases to 0.6%.
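That “can the reported elevation be found nearby?” test is simple to implement. Here is a sketch on a synthetic grid standing in for a REMA tile; the function name and the 10 m matching tolerance are my own choices, not those of the actual analysis:

```python
import numpy as np

def elevation_found_nearby(dem, i, j, z_reported, radius_m,
                           cell_m=8.0, tol_m=10.0):
    """True if any DEM cell within radius_m of cell (i, j) matches the
    reported elevation to within tol_m. Assumes (i, j) is at least
    radius_m/cell_m cells from the grid edge."""
    r = int(np.ceil(radius_m / cell_m))
    ii, jj = np.mgrid[-r:r + 1, -r:r + 1]
    mask = (ii**2 + jj**2) * cell_m**2 <= radius_m**2
    window = dem[i - r:i + r + 1, j - r:j + r + 1]
    return bool(np.any(np.abs(window[mask] - z_reported) <= tol_m))

# Synthetic example: flat tile with one 120 m knob 40 m east of the sample.
dem = np.zeros((100, 100))
dem[50, 55] = 120.0
```

With the knob 40 m away, a 50 m search radius finds the reported 120 m elevation, but a 30 m radius does not, which is the kind of discrimination used for the 25 m and 100 m misfit statistics above.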

So, to conclude, what have we learned from this exercise? Well, for one thing, the advent of the portable GPS unit was a big gift to cosmogenic-nuclide geochemists. However, even equipped with this technology, in some cases folks have still managed to report sample locations that are demonstrably incorrect in the vertical and/or the horizontal. Now that we are also equipped with an extremely high-resolution DEM of the entire continent, it’s probably a good idea to double check your elevation, latitude, and longitude measurements once you get back from the field.

Why are there no exposure-age data from glaciated Mexican volcanoes?

November 8, 2019

In a previous post a while ago, I used this diagram, which is a fairly famous (at least among glacial geologists) representation of the changes in the extent of mountain glaciers worldwide between the last glacial maximum (LGM) about 20,000 years ago and the present time, and is generally thought to have first been put together by the late glacial geologist Steve Porter.

This particular version appears in an article in Scientific American by Wally Broecker and George Denton. It shows a topographic profile down the North American Cordillera that forms the eastern edge of the Pacific Ocean, and highlights the difference between the equilibrium line altitudes of modern glaciers (red) and glaciers at the LGM (blue). I was using the diagram to highlight the impressive amount of cosmogenic-nuclide exposure age data that have been collected from glacial deposits worldwide and are tabulated in the ICE-D:ALPINE database; the blue dots added below show the locations of exposure-dated alpine glacial landforms in the mountains of North and South America as well as the same sector of Antarctica.

There are a lot of measurements here. Researchers who collect these data have focused for many years on visiting glaciers across the entire range of latitude, with the goal of learning about latitudinal variations in temperature and heat transport during the LGM and major late-glacial climate changes like the Younger Dryas and the Antarctic Cold Reversal. Nearly all the areas that Denton and Broecker specifically highlighted in this figure, as areas where prominent LGM-to-present glacial deposits could be found, have been visited and thoroughly studied.

With one exception. The “Mexican Volcanoes.” In fact, there is a pretty big gap between southern California (the southern end of the Sierra Nevada in the image) and Colombia (beneath the “America” in “Central America”). There are at least a few measurements from this range of latitude that I know about that are not at the moment included in the database (mainly because of growing pains having to do with dynamically calculating chlorine-36 exposure ages); they are from Costa Rica and described in this dissertation. However, the reason that the “Mexican Volcanoes” appear as a specific callout in this figure is that there are glacial deposits there, and they were fairly well studied in the 1980s by Klaus Heine and others, as described, for example, in this paper. But no one seems to have been there for exposure-dating purposes. Get to it, people! This is the only significant gap left in the Porter transect. It is easy to see why there are very few exposure ages on glacial deposits from, say, trans-Baikalia or interior Yukon, but the Mexican volcano sites are near major cities that you can reach, in your own car, in only a few days’ drive from major cosmogenic-nuclide labs in the US. According to the Heine papers, there are enough radiocarbon dates on some of the moraines that they could potentially be used as production rate calibration sites, which could be useful. And if you are interested in tropical glaciers, the radiocarbon dates in that paper indicate that there are late-glacial moraines and ~30-35 ka moraines, but possibly not any standard-issue 18-25-ka-LGM moraines. Is this really true? Are there enough boulders to figure it out? You could find out.



ccSTP is not an SI unit for a reason. Use moles instead.

October 31, 2019

This post is about the unit of “ccSTP,” or cubic-centimeters-at-standard-temperature-and-pressure, that is used to measure amounts of gas. This subject has been on my list to write about for a while, but I am ashamed to say that even though this posting has been sitting around in draft form for a couple of years, I couldn’t be bothered to actually finish it and push the ‘publish’ button until Marissa Tremblay brought up the subject. Really, there was no collusion: convergent evolution actually happens.

The important difference is, I get more than 280 characters to talk about it, so I can begin at the beginning. If you are not like me and you began your career as a noble gas geochemist when you were young and impressionable in graduate school, instead of backing into it later as an already-opinionated mid-career cosmogenic-nuclide geochemist, you have absorbed the fact that these units are used in a major fraction of the scientific literature having to do with noble gas geochemistry (which is voluminous, that is, the literature is, not the gas). Your brain is hard-wired to think in ccSTP. Or, even worse, “ncc” or “pcc”, which are hopelessly uninformative shorthand in which the nano and pico are added and the STP left off. OK, I get this — as gas measurement technology initially developed, measurements of pressure, volume, and temperature were the data actually used to determine how much of a gas was present or to prepare a standard for mass-spectrometric measurements, and folks wanted to keep the units of subsequent measurements linked to the source data. So, being able to use this unit fluently indicates that you are well-grounded in the history and fundamentals of your craft. There is some analogy to the secret handshake.

If you are like me, however, you think these units are terrible, for the following reasons.

  1. I have no idea what “standard temperature and pressure” actually is. Believe it or not, Wikipedia lists eighteen different definitions of standard reference conditions used by various entities and organizations. Before sitting down to write this blog entry, I was pretty sure that most uses of STP in the noble gas literature implied something resembling 1 atmosphere at either 0° or 25° C, probably 0°. But now, having read the Wikipedia entry, I am no longer so sure. In addition, Wikipedia adds the rather disturbing information that in 1982, IUPAC changed their definition of the “P” part of STP from 101.325 kPa (1 atm) to 100 kPa (almost 1 atm). Did post-1982 geochemists know this? I probably wouldn’t have noticed for years.
  2. In cosmogenic-nuclide geochemistry, we think about pretty much everything in units of atoms. Concentrations in atoms per gram, production rates in atoms per gram per year, etc. So, in general, any measurements in the cosmogenic-noble-gas literature that are in ccSTP have to be converted to atoms or moles before one can do anything useful with them. However, the conversion factor to do this is deeply obscure. In fact, it is a testament to the profound obscurity of this unit that Google does not know how to do this. If I type “1 dram in jiggers” into a Google search box, I get the correct answer. “One furlong in cubits.” No problem. But “1 ccSTP in mol?” Total blank. It does not make me feel good about the societal relevance of noble gas geochemistry if no one at Google has even bothered to figure this out. On the other hand, perhaps they read the Wikipedia entry and they were too confused to proceed.

Hence, the rest of this entry will do two things. First, try to figure out what the ccSTP/mol/atom conversion factors are, so that even if we don’t really know what T and P were used in a particular paper, we ought to be able to get fairly close to the correct number of atoms. Second, encourage people to stop using these units. Just say no. Use moles or atoms, which at least have only one agreed-upon definition, instead.

So, conversion factors. First, to help future victims of typing ‘1 ccSTP in mol’ into Google, let’s include all the possible phrases one might want to search for. ccSTP in mol. ccSTP in atoms. nccSTP in mol. Convert ncc to moles. Convert ccSTP to moles. Convert ccSTP to atoms. That should give the Google index-bot a bit of a head start in finding this posting.

Even if Google doesn’t know about them, some references do give a conversion factor from ccSTP to mol. For example, we can consult page 6 of a fairly standard reference, the 2002 edition of Noble Gas Geochemistry by Ozima and Podosek. This page contains a table of constants that gives the conversion as 1 ccSTP = 4.465e-5 mol. OK, that is some progress, but what is STP? There are some clues in the table on this page, although it is not clear that they make things any clearer. First of all, the table reminds us that 1 atmosphere is 101.325 kPa, which makes one think that even though we are writing in 2002, we might be thinking about the pre-1982 IUPAC STP that used 1 atm instead of 100 kPa. Then the table also tabulates RT (the ideal gas constant times the temperature) in units of cm^3 atm mol^-1, which also makes one think that we are using 1 atm as standard P. But the table gives RT for both 0 °C and 25 °C, which is rather disturbing. So, OK, if PV = nRT, P = 1 atm, V = 1 cm^3, and RT = 2.24141e4 cm^3 atm mol^-1 (this is the value tabulated for 0 °C), then n = 4.4615e-5 mol, which is close to, but not exactly, what is in the table. The difference between 4.465e-5 and 4.4615e-5 could be rounding error or a typo, but then there have to be at least two typos, because the conversion is further stated to be equal to 2.688e19 molecules, which is, in fact, 4.465e-5 * 6.022e23. However, it does appear that these authors were, in fact, assuming 0 °C and 1 atm; if we use the 25 °C value in the table (2.44655e4 cm^3 atm mol^-1), we get n = 4.0874e-5 mol, which differs from the tabulated value by far too much to be rounding error. To the extent that this book is a good representation of the noble gas literature generally, at least we can be reasonably confident that most folks think STP is 0 °C at somewhere near 1 atmosphere (even if it remains rather mysterious what the value for 25 °C is doing in this table at all).

Of course, lacking an early indoctrination in the arcana of noble gas geochemistry, I may just never have been informed of a global conspiracy to stick with the pre-1982 definition, which would be almost exactly analogous to the global conspiracy among radiocarbon-dating specialists to retain the obsolete Libby half-life for data reporting purposes. However, one would think that Ozima and Podosek would have mentioned it. Regardless, let’s assume, even though we are probably wrong for a lot of papers in the existing literature, that we are talking about the current IUPAC STP, which is 100 kPa and 0 °C (273.15 K). We obtain the ideal gas constant from a different source (Wikipedia again) as 8.314462 m^3 Pa K^-1 mol^-1. So, 1 cm^3 is 1e-6 m^3, 100 kPa is 1e5 Pa, and n = PV/RT = (1e5 * 1e-6)/(8.314462 * 273.15) = 4.4032e-5 mol, which, again, is fairly close to but a couple of percent different from the value tabulated in Ozima and Podosek. Apparently this unit is not unlike the biblical cubit.

Although the assumption that most authors were paying attention to the IUPAC in 1982 may be too optimistic, I would guess that this value is probably the most sensible thing to assume, so:

1 ccSTP = 4.4032e-5 mol = 2.6516e19 atoms

1 nccSTP = 4.4032e-14 mol = 2.6516e10 atoms

1 pccSTP = 4.4032e-17 mol = 2.6516e7 atoms = 26.516 million atoms (this is actually a useful amount for cosmogenic He-3 measurements)

These should at least get you within a few percent of the true value in most papers in the literature. If someone disagrees with this, or notices that I have made an error in the above, please let me know. And for heaven’s sake, if you are talking or writing about cosmogenic noble gases, use units of atoms or moles. You know what the pressure and temperature were when you made your gas standards, and we don’t, so please use this information to do the conversion to moles or atoms yourself, now, before it becomes impossible later.
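For what it’s worth, all of the arithmetic above fits in a few lines of Python; this just re-does the PV = nRT calculation for the two most plausible definitions of STP.

```python
# Ideal-gas conversion of ccSTP to moles and atoms, for the two most
# plausible "STP" definitions discussed above.
R = 8.314462         # gas constant, m^3 Pa K^-1 mol^-1
N_A = 6.02214076e23  # Avogadro's number, atoms/mol
T = 273.15           # 0 degrees C, in K
V = 1e-6             # 1 cm^3 in m^3

for label, P in [("pre-1982 IUPAC (1 atm)", 101325.0),
                 ("current IUPAC (100 kPa)", 1e5)]:
    n = P * V / (R * T)  # mol per ccSTP
    print(f"{label}: 1 ccSTP = {n:.4e} mol = {n * N_A:.4e} atoms")
```

The two definitions differ by about 1.3%, which is the couple-of-percent ambiguity discussed above.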





Isostatic rebound corrections are still on a squishy footing

September 18, 2019

This blog post covers the subject of whether or not you should account for isostatic elevation changes in production rate calculations, comes to the conclusion that we don’t really know the answer, highlights one cautionary tale that is probably relevant, and suggests some ways to fix the situation.

This subject is important because production rates vary with atmospheric pressure. Higher elevation, less atmosphere, higher production rate. In exposure-dating, because it’s not possible to measure the mean atmospheric pressure over the entire exposure duration of a sample, what we actually do in nearly all cases is (i) measure the elevation, (ii) convert to atmospheric pressure using a relationship based on the modern atmosphere, and (iii) assume that the resulting pressure applies for the entire exposure duration. Unfortunately, for sites that are more than a few thousand years old, and especially for sites that are, or were, marginal to large ice sheets, this usual procedure has to be at least partly wrong on two counts. First, as ice sheets, sea level, and global temperature have changed in the past, the mass of the atmosphere has moved around: the atmosphere has to get out of the way of ice sheets, fill up space created by eustatic sea-level fall, and shrink and expand as global temperature cools and warms. The pressure-elevation relationship in many places was probably quite different in the past. Second, because of glacioisostatic depression and rebound associated with the advance and retreat of large ice sheets, many potential exposure-dating sites have not only had the atmosphere move around on them, but have themselves moved up and down in the atmosphere. These observations would tend to indicate that maybe we shouldn’t just compute the production rate at the present elevation: we should account for the fact that all these things are changing.
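For reference, step (ii) is typically some variant of a standard-atmosphere formula. Here is a minimal sketch using the textbook ICAO standard atmosphere; the actual exposure-age calculators use geographically variable fits to reanalysis data, so this is only an approximation.

```python
def std_atm_pressure(z):
    """Atmospheric pressure (hPa) at elevation z (m), ICAO standard atmosphere.
    Exposure-age calculators typically use a geographically variable model
    fit to reanalysis data; this is just the textbook approximation."""
    P0, T0, L = 1013.25, 288.15, 0.0065   # sea-level pressure (hPa), temp (K), lapse rate (K/m)
    gMR = 9.80665 * 0.0289644 / 8.314462  # g * molar mass of air / gas constant, K/m
    return P0 * (1 - L * z / T0) ** (gMR / L)

for z in (0, 1000, 3000, 5000):
    print(f"{z} m -> {std_atm_pressure(z):.1f} hPa")
```

The point of the rest of this post is that feeding a modern elevation into any such relationship, and assuming the answer held for the whole exposure history, is where things can go wrong.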

Several folks have tried to do this. Or, at least, part of it. Specifically, a number of exposure-dating papers have extracted elevation time series for their sample sites from glacioisostasy models (such as the ICE-XG series of global models generated by Richard Peltier and colleagues), computed production rates corresponding to those elevations, and then, basically, used a time-averaged production rate instead of an instantaneous modern production rate to compute the exposure ages. In effect this approach assumes that the atmosphere is stationary and the sites are moving up and down in it, and in all cases where I can remember it being applied, it results in lower calculated production rates (because the sites were ice-marginal, so at lower elevation in the past than they are now) and therefore older calculated exposure ages.
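The time-averaging step in this approach is simple enough to sketch in a few lines. All the numbers here are invented: a generic exponential elevation scaling stands in for a real production rate scaling model, and the elevation history is a made-up linear rebound of 300 m since deglaciation at 12 ka.

```python
import numpy as np

# Illustrative only: an exponential elevation scaling with an ~8400 m
# e-folding length stands in for a real scaling model, and the site is
# assumed to have rebounded 300 m since deglaciation at 12 ka.
t = np.linspace(0.0, 12000.0, 1201)          # time before present, yr (evenly spaced)
z = 1000.0 - 300.0 * t / 12000.0             # elevation (m): 1000 m now, 700 m at 12 ka
scale = lambda z: 4.0 * np.exp(z / 8400.0)   # made-up production rate, atoms/g/yr

P_modern = scale(z[0])     # instantaneous modern rate
P_avg = np.mean(scale(z))  # time-averaged rate over the exposure history
print(P_avg, "vs modern", P_modern)
```

Because the site was lower in the past, the time-averaged rate comes out below the modern rate, which is why this correction makes the calculated exposure ages older.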

OK, this seems like a good idea. And it is possible to do this for nearly any exposure-dating study that deals with LGM-to-present events, because there exist global models for isostatic depression and relative sea-level change for the last glacial cycle. Of course, it’s not easily possible for pre-LGM exposure-dating applications.

However, there are several problems with this otherwise good idea.

One, as noted above, it’s not just the sites that are moving up and down in the atmosphere, the atmosphere is moving around as well. This aspect of the problem is the subject of a very good paper from quite a long time ago by Jane Willenbring (Staiger). This paper looked mainly at the atmosphere effects and pointed out, among other things, that they may act to offset isostatic depression near ice sheet margins, resulting in less variation in production rates than one would expect from each process individually.

Two, the production rate calibration sites are moving up and down too. Suppose that your production rate calibration sites experienced an increase in surface production rate over time, as expected for glacioisostatic rebound in the absence of any offsetting atmospheric effects. If you don’t account for this (that is, you just use the modern elevations of the calibration sites) when using these data for production rate calibration, you will underestimate the production rate. If you then use this calibrated production rate, but make an isostatic rebound correction when computing ages for unknown-age sites, the production rate estimate at your site, which is already too low everywhere because isostasy was left out of the calibration, becomes even lower locally. Thus, the resulting exposure ages will be too old, twice. Instead, to do this correctly, you must include the isostatic rebound correction in both the production rate calibration phase and the exposure-age calculation phase.

There exist several papers that appear, on a reasonably close reading, to have made this mistake and doubly compounded the error, although in many cases it is hard to tell because the details have disappeared into a methods supplement. Because this is a blog entry and not an actual journal article, I don’t have to name them all in detail. But the point is that ages calculated in this way are unambiguously wrong. If you want to use an isostatic rebound correction in your exposure-age calculations, you must also use it in your production rate calibration. Or you can do neither: if you have ice-marginal calibration sites and ice-marginal unknown-age sites with similar rebound histories, and you don’t make the correction at either one, you will get close to the right answer, because any errors you introduced by skipping the correction in part 1 will be canceled by the offsetting error in part 2. Thus, no matter what, it is unquestionably better to not do this correction at all than to do it halfway.
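The “too old, twice” arithmetic is easy to verify with invented numbers. This sketch assumes, for simplicity, that the calibration site and the unknown-age site have identical rebound histories, with a time-averaged production rate equal to 90% of the modern rate at both.

```python
# Toy numbers (all invented): at both the calibration site and the
# unknown-age site, the time-averaged production rate was 90% of the
# modern rate because the sites were isostatically depressed early on.
f = 0.9                           # ratio of time-averaged to modern production rate
P_modern = 5.0                    # true modern production rate, atoms/g/yr
T_cal, T_true = 10000.0, 20000.0  # known calibration age and true unknown age, yr
N_cal = f * P_modern * T_cal      # atoms/g accumulated at the calibration site
N_unk = f * P_modern * T_true     # atoms/g accumulated at the unknown site

P_naive = N_cal / T_cal           # calibration ignoring isostasy (too low)
age_half = N_unk / (f * P_naive)  # rebound correction at the unknown site only
age_none = N_unk / P_naive        # correction nowhere
P_isost = N_cal / (f * T_cal)     # calibration including isostasy
age_both = N_unk / (f * P_isost)  # correction in both phases

print(age_half, age_none, age_both)
```

The half-corrected age misses the true 20 ka age by about 11%, while doing the correction everywhere, or (for sites with matching histories) nowhere, recovers it.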

The third problem is that the existing calibration data set does not show any evidence that a correction based on elevation change alone is either necessary or desirable. This is kind of a complicated claim, and it’s important to clarify the difference between (i) testing a particular model, that is, asking “does a particular glacioisostasy model, ICE-5G or whatever, improve the performance of production rate scaling models when measured against calibration data” and (ii) asking whether the calibration data themselves provide any evidence that production rates are affected by glacioisostatic elevation change considered alone. I would argue that we have to do (ii) first. Basically, if we can’t establish (ii) from the calibration data, then it is very difficult to determine what the answer to (i) would mean, or if a “yes” answer was meaningful at all.

So, here I will consider item (ii). The basic principle here is that the existing set of calibration data includes both calibration sites that are marginal to large ice sheets, and those that are not. If glacioisostatic elevation change by itself is an important control on production rates, then ice-sheet-marginal calibration sites will yield lower production rates than far-field ones.

This seems like a pretty simple question, but it is not obvious that anyone has actually answered it. A recent paper by Richard Jones and others gets tantalizingly close to at least asking the question, but then disappointingly stops at the last minute. The bulk of this paper is a quite thorough global calculation of the effect on time-integrated production rates expected from glacioisostatic elevation change. In the course of discussing whether or not this should be routinely taken into account in exposure-dating studies, the authors point out that the four (or six, depending on how you group the Scottish data) calibration sites for beryllium-10 data in the widely used “CRONUS-Earth primary calibration data set” are all in areas that experienced very small glacioisostatic elevation changes, so if you use this calibration data set, including or not including a glacioisostatic elevation change correction in the production rate calibration phase doesn’t really make any difference. This is true and, of course, represents a loophole in the absolutely strict rule described above that if you want to use an isostatic correction in exposure-age calculations, you must also use it in production rate calibration. After this, however, the paper stops thinking about calibration data and goes on to other things. Stopping here is extremely disappointing in the context of the otherwise very clear and comprehensive paper, because there are not just four Be-10 production rate calibration sites. In fact, there are 38, seventeen of which are from ice-sheet-marginal features. Although this is the subject of a whole different discussion, there is no particular reason that most of these sites are more or less reliable than the “CRONUS primary” calibration data, which, IMHO, were somewhat arbitrarily selected. 
Thus, it is possible to test production rate predictions based on glacioisostatic elevation change against a fairly large data set of calibration measurements from both ice-sheet-marginal and far-field sites, and as I read this paper I was very surprised that the authors did not do it. Or, at least, I could not find it in the paper. Why not?

So, let’s try it, at least to the extent feasible without making this blog posting any longer than its current 14,000 words. Specifically, we are asking the question: are production rates inferred from ice-sheet-proximal calibration sites lower than those inferred from other calibration sites? If glacioisostatic elevation change by itself is an important control on production rates, we expect the answer to be yes. However…

…the answer is no. The above histograms show Be-10 production rate scaling parameters for LSDn scaling inferred from all Be-10 calibration sites in the ICE-D:CALIBRATION database (top panel) as well as ice-sheet-marginal and far-field subsets (lower two panels). The blue histograms are for all samples; the red histograms are for averages from each site. Remember, the LSDn scaling algorithm directly predicts production rates with units of atoms/g/yr instead of nondimensional scaling factors, so the calibrated parameter is accordingly a nondimensional correction factor rather than a “reference production rate” with physical units. Regardless, the mean and standard deviation of correction factors inferred from ice-sheet-marginal and far-field calibration sites are, basically, the same. Almost exactly the same.

This result, therefore, fails to provide any evidence that production rates are significantly affected by glacioisostatic elevation change alone. Instead, it tends to suggest that elevation-change effects at ice-marginal sites could very well be substantially offset by atmospheric effects. This, in turn, implies that one should probably not include isostatic effects in a production rate scaling model without also including atmospheric mass redistribution effects.

There is no point in giving up too soon, though, so let’s try to come up with some other first-principles predictions that we can test with the calibration data. The simplest prediction was just that production rates from ice-marginal calibration sites should be lower. A slightly more complex prediction is that this effect should be more pronounced for younger calibration sites. Suppose an ice sheet margin retreats between the LGM and the mid-Holocene, and we have ice-marginal calibration sites that deglaciated at a range of times during this retreat. In general terms, the sites that are closer to the center of the ice sheet deglaciated more recently and should be more affected by isostatic depression: not only is the magnitude of the depression typically larger at sites near the center of the ice sheet, but the period of time during which the sites are isostatically depressed makes up a larger fraction of their total exposure history. Therefore, we expect a correlation between the age of ice-marginal calibration sites and the inferred production rates: all sites should give lower production rates than non-ice-marginal sites, but the younger ice-marginal sites should yield lower production rates than the older ones. Is this true?

The answer: well, sort of. The plot above shows values of the LSDn calibration parameter inferred from far-field sites (blue) and ice-sheet-marginal sites (red). The solid and dotted black lines give the mean and standard deviation of the entire data set. Production rates inferred from non-ice-proximal calibration data are uncorrelated with site age, as they should be, but those inferred from the ice-marginal sites are strongly correlated with site age. This correlation would be a point in favor of an elevation-change effect being important…except for one thing. As expected from the previous observation that the two groups yield the same mean production rate, half the ice-marginal sites give production rate estimates that are HIGHER, not lower, than the mean of the non-ice-marginal sites. If glacioisostatic elevation change is the primary control on production rate, we do not expect this. Forebulge effects are not expected for sites that are well within the LGM boundaries of the ice sheets, so (at least to me) there appears to be no obvious way that elevation change alone can account for both (i) the similarity in the distribution of production rate estimates between ice-marginal and far-field sites, and (ii) the strong age-production rate correlation for ice-marginal sites. Overall, this result is sort of ambiguous but, again, appears to indicate that correcting for glacioisostatic elevation changes by themselves is oversimplified.

The summary up to this point is that arguing that glacioisostatic elevation change should be an element in production rate scaling models is probably reasonable. However, both the calculations in Jane’s paper and an analysis of available calibration data indicate that it is probably oversimplified, and probably not a good idea, to correct only for the elevation changes without also considering simultaneous atmospheric mass redistribution.

On to the cautionary tale. The cautionary tale part of this posting is that this situation is very similar to a somewhat obscure, but analogous and pertinent, aspect of cosmogenic-nuclide applications to erosion-rate measurements. Specifically, this is the question of whether or not to apply topographic shielding corrections to production rate estimates for basin-scale erosion-rate calculations.

In the isostasy example, we know that there are two time-dependent effects on production rates in ice-sheet marginal areas, and we think they probably offset each other, at least to some extent. One of them (isostatic elevation change) is easy to calculate, because there exist a number of global, time-dependent, LGM-to-present glacioisostasy models that can be easily used to produce a time-dependent elevation change estimate for any site on Earth. The second one (atmospheric mass redistribution) is difficult to calculate, because there are no equivalent, easily accessible, LGM-to-present models for atmospheric pressure distribution.

In the topographic shielding example, we know that the erosion rate we infer from a nuclide concentration in sediment leaving a basin is directly proportional both to the mean production rate in the basin and to the effective attenuation length for subsurface production. In a steep basin where many parts of the basin are shielded by surrounding topography, shielding reduces the mean production rate but increases the effective attenuation length. Because the inferred erosion rate scales with the product of the two, corrections for these effects offset each other. Again, one of these corrections is easy: using digital elevation models to compute the reduction in the mean surface production rate in a basin due to topographic shielding is a satisfying and fun application of raster GIS. And one of them is hard: computing the shielding effect on the subsurface attenuation length is computationally painful, time-consuming, and really not fun at all. The result is that for quite a long time, many researchers just did the easy correction and ignored the hard correction, until a thoroughly helpful paper by Roman DiBiase did the hard part of the correction, and showed that doing only the easy correction results in an answer that is more wrong than it would have been if you had ignored both corrections. Oops.
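The offsetting arithmetic can be illustrated with toy numbers. Following the DiBiase result, this sketch assumes that shielding lowers the basin-mean production rate by about 10% while raising the effective attenuation length by a comparable amount; all of the specific values are invented.

```python
# Toy numbers (all invented). For a measured sediment concentration N,
# the inferred erosion rate is E = P * Lambda / (rho * N): proportional
# to the product of production rate and attenuation length.
rho = 2.65     # rock density, g/cm^3
N = 1.0e5      # measured nuclide concentration, atoms/g
P_flat = 10.0  # basin-mean production rate ignoring shielding, atoms/g/yr
L_flat = 160.0 # flat-surface effective attenuation length, g/cm^2

s_P, s_L = 0.90, 1.10  # shielding: production down 10%, attenuation length up 10%

E_none = P_flat * L_flat / (rho * N)                  # ignore both effects
E_half = (s_P * P_flat) * L_flat / (rho * N)          # easy correction only
E_both = (s_P * P_flat) * (s_L * L_flat) / (rho * N)  # both corrections

print(E_none, E_half, E_both)  # cm/yr
```

With these numbers the net correction is only 1%, so applying only the production-rate correction leaves the erosion rate about 10% low, which is worse than doing nothing at all.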

The analogy is pretty clear. Both cases feature two offsetting corrections, one easy and one hard, and a seductive heuristic trap created by the availability of code to do the easy correction but not the hard one. It seems like it might be a good idea to learn from history, and try not to shove our left foot in a mouth still filled with the taste of our right shoe.

So, to conclude, what should we do or not do about this? Well, for one thing, we should probably not apply a time-dependent isostatic rebound model to production rate calculations without an atmosphere model as well. The evidence seems to indicate that this is probably oversimplified and may not improve the accuracy of exposure ages. However, it is not obvious that there is a good candidate atmosphere model at the moment; this may require some work. And then, if we use both models, it is necessary to apply them to both calibration data and unknown-age sites, which requires hard-coding them into everyone’s exposure-age calculation code. We should complete the Jones paper by doing a better job than I have done above in examining more carefully whether scatter in fitting calibration data to production rate scaling models can be explained by glacioisostatic or atmospheric-mass-redistribution effects. And, finally, we will probably need to collect more calibration data. This is because applying these corrections will very likely have the largest effect on inferred exposure ages for times and places where we do not have calibration data. It is probably not a good idea to extrapolate production rate corrections into places outside the range of calibration data where being wrong could incur large and hard-to-identify errors. Collecting new calibration data from places and times where we expect the largest corrections, and also from places and times where elevation-change and mass-redistribution effects are likely to have offsetting or compounding effects, would add a lot of credibility to the argument that these effects are important.





August 29, 2019

By Perry Spector


This is the first guest post on this blog, and the point is to describe a data-model comparison project called DMC:ANTARCTICA that I have been working on for the past year. The overall objective of the project is to provide synoptic evaluation of Antarctic ice sheet models that simulate the evolution of the ice sheet over long time periods using the entire dataset of cosmogenic-nuclide measurements from Antarctica. This project is still under development, but there now exists a publicly available DMC:ANTARCTICA web interface that serves data-model comparisons for every site in Antarctica where data exist and for 15 different ice-sheet model simulations.


We rely on numerical ice-sheet models for establishing the climate sensitivity of the Antarctic ice sheet and for forecasting its future behavior and contribution to sea level. Although these models provide glaciologically feasible reconstructions of ice-sheet evolution, they are of limited use unless they can be shown to agree with observations of past ice-sheet change.

Considerable progress has actually been made on this front. Recent simulations by David Pollard, Pippa Whitehouse, Rob Briggs, and others that are constrained by geologic observations (e.g. relative sea-level curves from Antarctica, exposure ages, etc.) do, in fact, agree with those observations remarkably well in many cases.

Existing data-constrained models, however, are primarily restricted to the period between the LGM and the present. Longer simulations, spanning important time periods when the climate was as warm or warmer than present, have not been thoroughly tested. One limitation to doing this is that many of the constraint datasets, such as RSL curves from Antarctica, do not extend beyond the LGM. On Pleistocene-Pliocene timescales, cosmogenic-nuclide measurements from Antarctica are some of the only data available to test hypotheses of past ice-sheet behavior.

The existing set of Antarctic cosmogenic-nuclide measurements

All known published (and many unpublished) cosmogenic nuclide data from Antarctica have been compiled, mostly by Greg Balco, in the ICE-D:ANTARCTICA database. The database currently comprises 4361 measurements on 2551 samples. For this to be a useful set of ice-sheet model constraints, it should span as much of the continent as possible and also be distributed in time. Let’s look at the breakdown. The figure below shows the distribution for samples of glacial erratics and for bedrock surfaces.

Minimum and maximum circle sizes represent 1 and 88 samples, respectively.

Both populations do a pretty good job highlighting where outcropping mountains exist in Antarctica. There are large areas, however, that are entirely unrepresented, such as the East Antarctic interior and the coast between northern Victoria Land and the Lambert-Amery Basin. Fortunately these are sectors where most ice-sheet models predict relatively small changes over glacial-interglacial cycles. For glacial periods, models predict large ice thickness changes in the Ross, Weddell, and Amundsen Sea sectors, areas where the coverage is relatively good.

Interestingly, around 80% of the samples are of glacial erratics while the remaining 20% are of bedrock surfaces. This distribution is probably, in part, due to the combination of two factors. First, outcropping bedrock surfaces in Antarctica that were ice-covered during the LGM commonly have exposure ages much older than the LGM because, in many cases, past ice cover was frozen to the bed and non-erosive. Second, the objective of many exposure-dating projects has specifically been to establish the ice sheet’s configuration during or following the LGM, and so these projects likely opted to sample glacial deposits rather than bedrock. However, as has been discussed by some authors (e.g. here) and as will be discussed below, well-preserved bedrock surfaces can provide important information about exposure and ice cover over multiple glacial-interglacial cycles.

The next set of maps shows the distribution by nuclide.

Most samples in the database only have a single analysis of a single nuclide, and, unsurprisingly, that nuclide is usually Be-10. Multiple nuclides (e.g. Be-10 and Al-26) have been measured on some samples, as shown below. These samples are particularly important because paired nuclide data can provide information not just about past exposure, but also about past ice-cover and erosion.
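
To illustrate why paired nuclides carry ice-cover information, here is a minimal Python sketch (not part of DMC itself) of the classic Al-26/Be-10 burial signal: while a sample sits under ice, production stops and both nuclides decay, but Al-26 decays faster, so the ratio falls below the surface production ratio of roughly 6.75. The half-life values are standard ones; everything else here is illustrative.

```python
import math

# Standard half-lives (yr)
T_BE10 = 1.387e6
T_AL26 = 7.05e5
LAM_BE10 = math.log(2) / T_BE10
LAM_AL26 = math.log(2) / T_AL26

PROD_RATIO = 6.75  # approximate Al-26/Be-10 surface production ratio in quartz

def ratio_after_burial(burial_yr, initial_ratio=PROD_RATIO):
    """Al-26/Be-10 ratio after complete burial (no production, both decaying)."""
    return initial_ratio * math.exp(-(LAM_AL26 - LAM_BE10) * burial_yr)

# The ratio decays with an effective half-life of ~1.4 Myr
for t in (0.0, 0.5e6, 1e6, 2e6):
    print(f"{t / 1e6:.1f} Myr of burial -> Al-26/Be-10 ratio {ratio_after_burial(t):.2f}")
```

A measured ratio well below the production ratio therefore implies a substantial period of burial (here, ice cover) somewhere in the sample's history, which a single nuclide alone cannot reveal.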

The next figure shows histograms of exposure ages (note logarithmic axis) and gives a sense of the temporal distribution of the database.

Although a large fraction of the samples only provide information about the last ice age and the subsequent deglaciation (i.e. the focus of many exposure dating studies), a substantial population provides constraints on timescales extending as far back as the Miocene.

Therefore, to summarize, the set of Antarctic cosmogenic-nuclide measurements is reasonably well distributed in space and time, which suggests that it should be of some use for evaluating numerical ice-sheet models that run over a range of time periods. It would be useful if the database contained more samples from slowly eroding bedrock surfaces, but at least the coverage isn't terrible.

A data-model comparison framework

For most samples, there are infinitely many histories of exposure, ice cover, and erosion that could produce the observed nuclide concentrations. Therefore, these data cannot simply be inverted to generate unique ice-thickness chronologies. However, they can be used to distinguish incorrect chronologies predicted by ice-sheet models from plausible ones. The basic idea is that, for a given rock in Antarctica, the history of exposure and ice cover simulated by a model predicts certain present-day concentrations of cosmogenic nuclides, and those predictions can be directly compared to concentrations that have actually been measured in the rock.
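
The forward calculation at the heart of this idea can be sketched in a few lines. This is an illustrative Python sketch, not the actual DMC code: it assumes spallation-only production at a constant site-specific rate, complete shielding under ice, no erosion, and zero inherited nuclides at the model start.

```python
import math

def predict_concentration(history, production_rate, half_life_yr=None):
    """Integrate dN/dt = P(t) - lambda * N forward through an exposure history.

    history: list of (duration_yr, exposed) intervals, oldest first.
    production_rate: site production rate in atoms/g/yr (spallation only).
    half_life_yr: None for a stable nuclide such as Ne-21.
    """
    lam = 0.0 if half_life_yr is None else math.log(2) / half_life_yr
    n = 0.0  # zero inherited nuclides at the start of the model run
    for duration, exposed in history:
        p = production_rate if exposed else 0.0  # fully shielded under ice
        if lam == 0.0:
            n += p * duration
        else:
            # exact solution over an interval with constant production
            n = (n * math.exp(-lam * duration)
                 + (p / lam) * (1.0 - math.exp(-lam * duration)))
    return n

# e.g. 80 kyr of ice cover followed by 20 kyr of exposure at 20 atoms/g/yr
history = [(80e3, False), (20e3, True)]
print(predict_concentration(history, 20.0))          # stable Ne-21: 4.0e5 atoms/g
print(predict_concentration(history, 20.0, 5730.0))  # C-14, approaching saturation
```

Given a measured concentration and its uncertainty, the misfit between this prediction and the measurement is what tells you whether the simulated ice-thickness history is plausible for that rock.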

This isn’t a new idea. In a 2012 paper, Sujoy Mukhopadhyay and others reported that Be-10 and Ne-21 concentrations measured in bedrock samples from the Ohio Range were consistent with concentrations predicted by a 5 million year ice-sheet model run. Since then, the only other example that I am aware of is a paper I recently wrote with John Stone and Brent Goehring, which is under review in The Cryosphere, that evaluates several models using results from the Pirrit Hills and the Whitmore Mountains.

However, rather than doing one-off data-model comparisons for one or two sites, the objective of the DMC:ANTARCTICA project is to perform a synoptic evaluation of model performance using all available cosmogenic-nuclide measurements from Antarctica. At the moment, that capability is not yet available, but it is coming. What is available is the DMC:ANTARCTICA web interface, which serves comparisons between every site in Antarctica where data exist and a number of published ice-sheet model simulations. The rest of this post gives a basic description of how the web interface works and highlights a couple of interesting results.

For a given site in Antarctica and ice-sheet model of interest, the modeled ice-thickness history at the site is extracted via bilinear interpolation. Here is an example from the model of Pollard & DeConto (2009) for the Pirrit Hills.
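
Bilinear interpolation itself is simple; here is a minimal sketch for a regular model grid (the grid spacing and thickness values are invented for illustration and have nothing to do with any particular model).

```python
def bilinear(grid, x0, y0, dx, dy, x, y):
    """Bilinearly interpolate a 2-D list-of-lists `grid` (grid[j][i]) at (x, y).

    (x0, y0) is the coordinate of grid[0][0]; dx, dy are the cell sizes.
    """
    fx = (x - x0) / dx
    fy = (y - y0) / dy
    # clamp to the last cell so queries on the grid edge stay in bounds
    i = min(int(fx), len(grid[0]) - 2)
    j = min(int(fy), len(grid) - 2)
    tx, ty = fx - i, fy - j
    return ((1 - tx) * (1 - ty) * grid[j][i]
            + tx * (1 - ty) * grid[j][i + 1]
            + (1 - tx) * ty * grid[j + 1][i]
            + tx * ty * grid[j + 1][i + 1])

# ice thickness (m) on a 2x2 patch of 40-km model grid cells
thickness = [[1000.0, 1200.0],
             [1400.0, 1600.0]]
print(bilinear(thickness, 0.0, 0.0, 40e3, 40e3, 20e3, 20e3))  # cell center: 1300.0
```

Doing this at every model time step yields the ice-thickness time series for the site, like the Pirrit Hills example shown here.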

The next steps are to (i) generate synthetic samples of both bedrock surfaces and glacial erratics spanning a range of altitudes at the site, (ii) determine the history of exposure and ice cover experienced by each sample, (iii) use those histories to calculate the present-day concentrations of various nuclides, and then (iv) compare those predictions to concentrations that have actually been measured in samples from the site.

For each DMC request, the calculations are performed in real time on a Google Cloud virtual machine. Concentrations, as well as exposure ages, are calculated using the Matlab/Octave code behind the version 3 online exposure age calculator, which was described by Balco et al. (2008) and has since been updated. Additional details about data-model comparison calculations are available here.

All samples are assumed to contain zero cosmogenic nuclides at the beginning of the model simulation. Thus, pairing a site that has 12-million-year Ne-21 exposure ages with an LGM-to-present ice-sheet model will obviously produce large data-model misfits that are unrelated to the actual performance of the model. Similar misfits can be obtained by doing the reverse, that is, pairing a long-term model with a site that only has, say, Holocene exposure ages. This latter scenario can be dealt with by using the option to change the start time of the model, so you can look at, for example, only the last 20 kyr of an 800 kyr run.

The web interface can also serve model predictions for any latitude and longitude in Antarctica, even if no samples have been collected at that location. This feature could potentially be useful for planning future field projects.

Some interesting DMC results: (i) elevation transects of bedrock surfaces and glacial erratics

DMC simulates periodic deposition of erratics at a site whenever the ice sheet there is thinning. This is because most glacial erratics in Antarctica are ablation deposits, meaning that they were transported englacially from upstream erosion areas to ablation areas on the ice surface (where they were initially exposed to the cosmic-ray flux), and subsequently deposited on mountain flanks by ice thinning. Periodic deposition of erratics is simulated for all sites, regardless of whether erratics are actually known to be present.

Ice-sheet models that simulate many glacial-interglacial cycles can therefore produce erratics at similar elevations but at different times. The effect of this is shown in the upper-right panel below for the 5 Myr simulation of Pollard & DeConto (2009) at Mt. Turcotte in the Pirrit Hills (see ice-thickness time series above).

Different colors represent different nuclide-mineral pairs as follows: orange = C-14 (quartz); red = Be-10 (quartz); blue = Al-26 (quartz); neon = Ne-21 (quartz); gray = He-3 (pyroxene/olivine).

Concentrations for a given nuclide in erratics are scattered and have a relatively weak relationship with elevation. The exception is C-14 (shown in orange), which decays rapidly with a 5730-year half-life and therefore has no memory of the exposure and ice-cover history prior to around 30 kyr ago. In contrast, the bedrock surfaces (upper-left panel), which are predicted to have been intermittently ice-covered, show concentrations that always increase monotonically with altitude. In part, this is because production rates increase with elevation, but high-elevation samples must also have spent a smaller proportion of their existence ice-covered than low-elevation samples from the same site.
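
The ~30 kyr memory limit for C-14 follows directly from the decay law. A quick check, using the 5730-year half-life (the numbers here are just that arithmetic, nothing model-specific):

```python
import math

HALF_LIFE_C14 = 5730.0  # years
lam = math.log(2) / HALF_LIFE_C14

def surviving(t_yr):
    """Fraction of an initial C-14 inventory remaining after t_yr years of decay."""
    return math.exp(-lam * t_yr)

# 30 kyr is just over five half-lives, so almost nothing survives
print(f"after 30 kyr: {surviving(30e3):.1%} of the C-14 remains")  # ~2.7%
```

Equivalently, a continuously exposed surface reaches ~97% of its saturation C-14 concentration after 30 kyr, so concentrations measured today record only the most recent ~30 kyr of exposure and cover.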

Note 1: Both bedrock and erratics are assumed not to experience erosion or to be covered by till or snow. Erratics, once deposited, are assumed not to change elevation, roll over, etc.

Note 2: The four panels above have different x and y axis ranges. Sorry. Working on that.

The predictions for Mt. Turcotte bedrock and erratics (upper panels) are interesting because they are qualitatively pretty similar to what is actually observed there (lower panels), as well as at many other Antarctic sites. Observed Be-10, Al-26, and Ne-21 in bedrock samples monotonically increase with elevation. There are only minor exceptions to this, which are probably related to small amounts of erosion experienced by some samples or, in the case of Ne-21 (shown in neon), radiogenic production. In contrast to the bedrock samples, Be-10 measurements on glacial erratics (lower-right panel) are scattered and have no obvious concentration-elevation relationship. Thus, at least qualitatively, the model predictions for both bedrock and erratics are consistent with what we actually observe.

One take-home point from this comparison is that, in Antarctica, where past ice cover was commonly cold-based and non-erosive, weathered bedrock surfaces can, in some cases, be much better indicators of long-term exposure and ice cover than glacial erratics. Next time you’re in the field, collect an elevation transect of weathered bedrock surfaces.

Some interesting DMC results: (ii) LGM-to-present ice-sheet models consistently simulate premature deglaciation in the western Weddell Sea

Okay, now let’s look at an example in which we actually evaluate the performance of ice-sheet models.

Our knowledge of the post-LGM deglaciation history of the two largest embayments in Antarctica, the Ross and the Weddell, has remained very lopsided. Decades of geological (marine and terrestrial) and glaciological research in and around the Ross Embayment has generated a large network of constraints on the thickness and extent of grounded ice during the LGM and during the subsequent retreat. In contrast, relatively few field studies in the Weddell Embayment have produced commensurately few constraints there.

Recent papers by Keir Nichols, Jo Johnson, Brent Goehring, and others (published and in review here and here) are increasing our knowledge of the deglacial history of the Weddell Sea. These papers describe C-14 exposure ages from an elevation transect of 8 bedrock samples from the Lassiter Coast on the east side of the Peninsula. All samples have mid-Holocene ages and indicate abrupt thinning there at this time.

Let’s see how these results compare to predictions from state-of-the-art ice sheet models. In the figures below, orange circles show measured C-14 exposure ages plotted against sample elevation. Gray circles show model predictions. Note that the 8 bedrock samples on which C-14 concentrations were measured were collected from a few different but nearby peaks, which have been classified as different sites in ICE-D:ANTARCTICA. Thus, only 5 of the 8 samples are shown below, which does not affect the data-model comparison.

The first figure shows the predictions from the reference simulation of the model by Jonny Kingslake and others (2018).

The second figure shows predictions from the best-scoring run of the large ensemble by David Pollard and others (2016). Other recent models by David Pollard and others predict relatively similar transects.

The third figure shows predictions from the last 20 kyr of the model by Tigchelaar et al. (2018).

These three state-of-the-art ice-sheet models consistently simulate premature thinning on the Lassiter Coast by several millennia, which likely means that the grounding line in the western Weddell Sea retreats too early in the models. Note that the observation of premature thinning is not limited to this site but is also seen at the Heritage Range and at the Pirrit Hills, sites which are presumably sensitive to ice-sheet behavior in the western Weddell Sea.

The model that performs best at this site appears to be that of Pollard et al. (2016), which is perhaps not surprising because, unlike the models by Kingslake et al. (2018) and Tigchelaar et al. (2018), it was calibrated using a large network of observational constraints from various other sites in Antarctica.


The overall goal of the DMC:ANTARCTICA project is to systematically evaluate ice-sheet model simulations using the entire set of cosmogenic-nuclide measurements. This isn’t available yet, but it’s coming.

The DMC:ANTARCTICA web interface is publicly available for dynamically comparing observations and model predictions anywhere in Antarctica. Check it out.

The web interface helps address the issue of ice-sheet model opacity. Typically the complex behavior of an ice-sheet model is conveyed only in a few small figures in a paper. Now it’s possible for anyone to explore models and see what they predict anywhere in the continent.

The web interface still has some issues to be ironed out. Some of these are discussed here. Let me know if you find bugs or if there is a specific feature you’d like to see implemented.

Don’t see your data? The problem is probably either that (i) it’s not in ICE-D:ANTARCTICA yet, or (ii) certain information needed to calculate exposure ages (e.g. sample thickness) is missing from ICE-D. Contact me or Greg to fix that.

Have a model you would like included in this project? Get in touch.