
The uneven distribution of production rate calibration data

February 2, 2017

This post is about the distribution of production rate calibration data, and whether it’s good enough or not.

The basic background here is that “production rate calibration data” are measurements of cosmogenic-nuclide concentrations from sites where we already know the exposure age of a surface from some sort of independent geochronology. In the simplest possible case, the way surface exposure dating works is that we go and measure the concentration of some cosmic-ray-produced nuclide (call this N, with units of atoms/g) in a rock surface. If we know what the production rate (P, units of atoms/g/yr) is, we can just divide N by P to get the exposure age of the surface in years (call it t). This works great if you know what P is. We can use physical first principles and measurements of the modern cosmic-ray flux to put together “scaling methods” or “scaling models” that describe how P should vary with elevation and position in the Earth’s magnetic field. Then, to use a scaling model to estimate the actual value of a production rate at a particular location, we need to calibrate the scaling model to actual measurements of the production rate, and, as noted above, we do this by measuring nuclide concentrations in surfaces where we already know the exposure age: we measure N and divide by t to get P.
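In code, the simplest case above is just a pair of divisions; here is a minimal sketch with hypothetical numbers (ignoring radioactive decay, surface erosion, and muon production):

```python
# Simplest-case exposure dating: t = N / P. Calibration is the inverse, P = N / t.
# Ignores decay, erosion, and muons; the numbers below are hypothetical.

def exposure_age(N, P):
    """Exposure age (yr) from concentration N (atoms/g) and production rate P (atoms/g/yr)."""
    return N / P

def calibrated_production_rate(N, t):
    """Production rate (atoms/g/yr) from concentration N measured on a surface
    with independently known exposure age t (yr)."""
    return N / t

# Hypothetical calibration site: 65,000 atoms/g on a surface deglaciated 13,000 yr ago.
P = calibrated_production_rate(65000.0, 13000.0)   # 5.0 atoms/g/yr
# ...then date an unknown surface with 70,000 atoms/g:
print(exposure_age(70000.0, P))                    # 14000.0 yr
```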

So to summarize, if we want to estimate production rates everywhere globally, we do that by formulating a scaling model and then fitting the scaling model to measured production rate calibration data. This also allows us to evaluate scaling models by asking whether they fit the calibration data, or not. Because production rates vary by a lot (factors of 10 or more) within the part of the Earth where we’d like to be able to do exposure dating (at pretty much all latitudes from sea level up to about 4-5 km elevation), it is evident that to accurately calibrate a scaling model, and to accurately test whether a scaling model either works at all or works better than another scaling model, we want to have calibration data from the entire range of scaling space, that is, that span a large range in both latitude and elevation. We’d also like to have calibration data that span a large range of true exposure ages (in order to test time-dependent scaling models), but I’ll take that as a secondary issue for now and save it for some other time.

So the question is, do we have production rate calibration data that span the entire useful part of production rate scaling space? We certainly have a lot of production rate calibration data: the ICE-D:CALIBRATION database at this writing includes 72 distinct sites where we have Be-10, Al-26, or He-3 production rate calibration data. There are also a lot of additional sites where we have C-14 or Cl-36 calibration data; we’ll leave that for now on the basis that these nuclides are less commonly measured. Mostly this proliferation of calibration data is the result of a greatly increased rate of calibration data collection during the past few years, both as part of the CRONUS-Earth and CRONUS-EU projects and by other groups. The following image, which is taken from the front page of that database, shows the distribution of the Be-10, Al-26, and He-3 data geographically and in scaling space.

[Figure: world map of calibration sites, and latitude-elevation distributions of the Be-10, Al-26, and He-3 calibration data]

In the world map, the size of the circles indicates the number of different samples collected at each site. Although the world map reveals the interesting phenomenon that there are exactly zero production rate calibration measurements for these nuclides in the entire continent of Asia, the three lower panels are more relevant in answering the question of how well the calibration data are distributed in scaling space. From left to right, they show the distribution in latitude (a simple proxy for position in the magnetic field) and elevation for Be-10 (red), Al-26 (green), and He-3 (gray) calibration data. In these plots, the size of the circle reflects how many measurements of the respective nuclides were made at each site.

This shows that the available calibration data do span a wide range in latitude. They do span a wide range in elevation. But their distribution across scaling space is not even close to uniform. Rather, with the exception of a couple of He-3 calibration sites near sea level at low latitude, latitude and elevation are highly correlated for these sites, such that they nearly all lie in a fairly narrow zone of scaling space that is at high elevation at low latitude, and at low elevation at high latitude. The opposite corners of scaling space — high-elevation-high-latitude and low-elevation-low-latitude — are nearly empty.

To figure out why, consult the following figure from a well-known 1990 article in Scientific American by George Denton and Wally Broecker.

[Figure: latitudinal distribution of modern (red) and last-glacial-maximum (blue) snowlines, from Denton and Broecker (1990)]

This shows the latitudinal distribution of the global snowline, which essentially controls where glaciers exist. Naturally, it’s lower at the poles and higher in the tropics, where it’s warmer. The highest snowlines are actually in the subtropics due to the relative aridity there compared to the equator, but that’s beside the point for now. The red line is the modern snowline; the blue line is the (lower) snowline during the last glacial maximum.

The distribution of global snowlines is a thoroughly well-discussed phenomenon and this diagram, or something like it, appears lots of other places. Here is a nice example from a Russian glacier atlas that shows the same thing for South America (north is to left):

Porter diagram of Andean snowline elevations from Russian atlas of global snow and ice resources. MZS library.

The point here is that the majority of production rate calibration data are from glacial deposits — they’re sites where glacial moraines with boulders on them have been independently radiocarbon-dated. Thus, the very non-uniform distribution of calibration data in scaling space just reflects the fact that most calibration data are from glacial deposits formed during or after the LGM, and these deposits, by nature, are lower in polar regions and higher in tropical regions. Here are the locations of Be-10 and Al-26 calibration data pasted onto the Denton-Broecker diagram:

[Figure: Be-10 and Al-26 calibration site locations plotted on the Denton-Broecker snowline diagram]

Although a lot of the Northern Hemisphere calibration sites are well below mountain snowlines because they are associated with ice-marginal deposits of the Laurentide and Greenland ice sheets (and there are a few sites that aren’t associated with glaciers or ice sheets at all), the calibration data basically follow the LGM snowline, because that’s where you find LGM and late-glacial moraines. Again, the main exceptions are in the He-3 data set (not shown here; see above), where there are a few sites that consist of dated lava flows or other volcanics at low elevation and low latitude.

So the production rate calibration data that we have, although fairly abundant now, are not in fact very evenly distributed across scaling space. The next question is, do we care? For us to care, two things need to be true.

First, we have to want to exposure-date something that is not near the Pleistocene snowline, e.g., something that is at high latitude and high elevation, or low latitude and low elevation. Do we want to do this? Consider what is by far the most common application of exposure-dating, which is, of course, dating glacial deposits. Like calibration data, glacial deposits tend to be near the global snowline, so we  might not be worried about this. To find out whether we are worried, let’s look at the distribution of where glacial geochronologists actually collect exposure-age data.

[Figure: latitude-elevation distribution of exposure-age data for glacial deposits (ICE-D:ANTARCTICA in green; alpine-glacier prototype database in gray)]

This shows the latitude-elevation distribution of exposure-age data associated with glacial deposits from both the ICE-D:ANTARCTICA database (green dots; includes measurements of many different nuclides) and a prototype of a similar database aimed at global exposure-age data for alpine glacier moraines (gray dots; includes Be-10 and Al-26 data from a future ICE-D:ALPINE database). As expected, most of these data that aren’t in Antarctica are clustered around and a bit below Pleistocene snowlines:

[Figure: the same exposure-age data plotted on the Denton-Broecker snowline diagram]

Now add Be-10 and Al-26 calibration data to these data (without the Denton/Broecker diagram):

[Figure: exposure-age data together with Be-10 and Al-26 calibration data in latitude-elevation space]

For the most part, the places where we want to actually apply exposure-dating to glacial deposits are located very close in scaling space to the calibration data that we have. That’s good. In most glacier-chronology situations, we don’t have to worry about extrapolating production rate estimates away from the calibration data. However, there are some exceptions. Of course, if we’re trying to exposure-date something that’s not a glacier — landslides, fault scarps, shoreline features, whatever — we could be anywhere in scaling space and could require large extrapolations. But even in the more restricted province of glacier chronology, the most glaring mismatch in this figure (there are others less glaring) is that there are a lot of situations where we want to date glacial deposits that are at relatively high elevations, 2000-3000 m, near both poles…but we don’t have any calibration data at all at high elevations near the poles. So the first requirement for whether or not we care about the restricted distribution of calibration data is, in fact, met. We definitely want to exposure-date things in parts of scaling space that are empty of calibration data.

The second criterion concerns the uncertainty in extrapolating production rates from near the global snowline, where they are well constrained by calibration data, to locations that are not near the global snowline. It is as follows. We have scaling models that predict the variation in production rates everywhere in scaling space, not just near the calibration data. However, we can’t test whether they are correct in areas where we don’t have calibration data. So the question is, do we know that the scaling models are accurate in unsampled parts of scaling space, in particular at high elevation near the poles? Obviously, we can’t answer this question by comparing predictions to calibration data, because we don’t have any calibration data in that area. One alternative, but much less satisfying, way to answer the question is to ask whether different scaling models predict the same production rates for this region, or not. If they do, we might not be that worried; if they don’t, we might be more worried. Here is a comparison of production rate estimates for high latitude derived using the two most au courant scaling methods: that based on the work of Devendra Lal (the “St” scaling method in current jargon) and that based on the work of Nat Lifton and Tatsuhiko Sato (“LSDn”):

[Figure: St and LSDn production rate predictions for high latitude, and their ratio, as a function of elevation]

Here I have calibrated both scaling methods using the CRONUS-Earth “primary” calibration data set for Be-10, and plotted the resulting predictions for high latitude (zero cutoff rigidity, mean Holocene solar modulation parameter). Left panel shows the predicted production rates (at high latitude using an Antarctic atmosphere approximation) for the St (blue) and LSDn (red) scaling models; right panel shows the ratio of the two production rates. At low elevation, both scaling models predict production rates that are within a few percent of each other (because they are both pinned to calibration data at relatively low elevations at relatively high latitudes), but the predictions diverge at higher elevations. At 3000 m in Antarctica, LSDn scaling predicts ca. 15% higher production rates than St scaling (note that this is the same phenomenon discussed in this blog posting about saturated surfaces in Antarctica). This is a pretty big difference. As noted in the other blog entry about saturated surfaces, we can potentially use near-saturation Be-10 and Al-26 measurements from Antarctica to in part overcome the lack of independently dated calibration data in this part of scaling space. This exercise, I argue, indicates that LSDn scaling is accurate, and St scaling is not, for high latitude and high elevation. We could also potentially address this issue using samples that are saturated with respect to in-situ-produced cosmogenic C-14 in quartz, although this was attempted by Borchers and others and was inconclusive. Overall, however, this situation could be very much improved with new production rate calibration data from high elevation near the poles.

Summary: the existing set of production rate calibration data, although very far from uniformly distributed in scaling space, is pretty good if you are interested in using cosmogenic-nuclide exposure dating for glacier chronology, with one exception. The exception is that we could really use more calibration data from high elevation near the poles. This is actually a lot harder than it sounds, because there is, for the most part, not much in the way of organic life forms living at high elevation near the poles. By extension, there is not much available for radiocarbon dating, which makes it rather hard to independently date landforms. But this is clearly a problem that could use some creative ideas about how to overcome it.

What is the 26/10 production ratio anyway?

January 10, 2017

This rather long post covers the issue of the Al-26/Be-10 production ratio. The value of this ratio is important for several reasons.

First, it gives some information about whether a surface sample has experienced a single period of exposure at the surface. The basic, although slightly oversimplified, concept here is that if you measure a ratio that is a lot lower than the production ratio, your sample has spent a significant fraction of its life buried far enough below the surface to partially or completely stop the cosmic ray flux and allow previously produced nuclide inventories to decay…so if you compute the exposure age of the sample based on the assumption that it hasn’t been buried, you’ll be wrong.

Second, and more importantly, if you have a sample that is buried below the surface now, and you measure this ratio, the difference between the measured ratio and the production ratio tells you how long the sample has been buried. This concept, although also slightly oversimplified here, is the foundation of the method of burial dating.
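The quantitative version of this concept is short enough to write down: during full burial each nuclide decays with its own half-life, so the 26/10 ratio falls off exponentially at a known combined rate. A sketch, assuming complete shielding and commonly quoted half-lives of 1.387 Myr (Be-10) and 0.705 Myr (Al-26):

```python
import math

# Decay constants (1/yr) from commonly quoted half-lives:
# Be-10: 1.387 Myr; Al-26: 0.705 Myr.
LAMBDA_10 = math.log(2) / 1.387e6
LAMBDA_26 = math.log(2) / 0.705e6

def burial_age(measured_ratio, production_ratio=6.75):
    """Burial age (yr) for a sample fully shielded since burial.

    The ratio decays as R(t) = R0 * exp(-(LAMBDA_26 - LAMBDA_10) * t),
    so t = ln(R0 / R_measured) / (LAMBDA_26 - LAMBDA_10).
    """
    return math.log(production_ratio / measured_ratio) / (LAMBDA_26 - LAMBDA_10)

# A measured 26/10 ratio of 4.0 implies roughly a million years of burial:
print(burial_age(4.0) / 1e6, "Myr")
```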

The currently accepted value of the 26/10 production ratio, when appropriately normalized to the “07KNSTD” and “KNSTD” isotope ratio standardizations for Be-10 and Al-26, respectively, is 6.75. Interestingly, an uncertainty is almost never quoted for or applied to this number. Also interestingly, as far as I can tell, this number is just taken from a paper written in 1989 by Kuni Nishiizumi and colleagues. They stated it as 6.1; restandardizing Be-10 measurements from the then-current “KNSTD” standardization to the now-current “07KNSTD” corrects it to 6.75. In the widely utilized 2008 paper describing the online exposure age calculators formerly known as the CRONUS-Earth online exposure age calculators, we just accepted this number, mainly because we concluded that production rate calibration data that were available at the time agreed with the Nishiizumi estimate, and we didn’t propose any change. Subsequently, most papers about burial dating (which is where this number is really important) adopted this value or something very close to it (exactly how you do the KNSTD–>07KNSTD conversion affects the actual number at rounding error level). To summarize, we have all basically ignored this issue for 25 years. Why? Because 6.75 works. Most measurements on surface samples seem to more or less agree with this canonical value, and we have not ever found that burial ages computed using a ratio of 6.75 have been inconsistent with other age constraints.
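For reference, the restandardization arithmetic is a one-liner. Renormalizing Be-10 concentrations from KNSTD to 07KNSTD multiplies them by a factor of roughly 0.904 (the exact value of this factor is my assumption here; consult the relevant standardization tables), which raises the measured 26/10 ratio by the inverse of that factor:

```python
# Renormalizing Be-10 from KNSTD to 07KNSTD lowers Be-10 concentrations by a
# factor of ~0.904 (assumed here; check the standardization tables), which
# raises the 26/10 ratio correspondingly.
KNSTD_TO_07KNSTD = 0.904   # assumed Be-10 concentration conversion factor

ratio_knstd = 6.1          # Nishiizumi and others (1989), KNSTD standardization
ratio_07knstd = ratio_knstd / KNSTD_TO_07KNSTD
print(round(ratio_07knstd, 2))  # 6.75
```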

Unfortunately, it may now be time to revisit this issue, because in recent years there has been (i) a lot more collection of production rate calibration data, as well as (ii) a lot more attention to first-principles modeling of production processes and their spatial variation. So in the rest of this post I’ll try to exploit this new information to figure out what the 26/10 production ratio really is, and, more importantly, to determine if we can just keep on not worrying about this issue as we have done successfully for the last 25 years.

To clarify what exactly we are talking about, in this post I am talking about the ratio of total Al-26 and Be-10 production rates at the surface, that is, including both spallation and muon production. In some work (including lots of mine) there is some confusion about whether an “Al-26/Be-10 production ratio” refers to spallogenic production only, or to total production. Fortunately, most of the time these are very close to the same thing, so any consequences of this confusion are small. But here, I am talking about total production.

The most straightforward way to determine the production ratio is simply to do what Nishiizumi and others did. Find some samples that have the following properties: they have been exposed at the surface for a single period of time; at the beginning of this period their nuclide concentrations were zero; surface erosion during the exposure period was negligible; and the duration of exposure is much less than the half-life of either Al-26 or Be-10. Then measure the concentration of both nuclides, and take the ratio. In the Nishiizumi paper these are samples of glacially polished bedrock from the California Sierra Nevada that have been exposed since deglaciation ca. 13,000 years ago.

The easiest way to find a larger set of samples that have these properties is, in general, to look at samples collected for production rate calibration purposes. Selecting samples suitable for measuring the 26/10 production ratio, of course, is less restrictive than selecting ones suitable for production rate calibration, because to measure the production ratio, you don’t have to know the exact duration of exposure, just that the exposure time has been a lot shorter than the Al-26 half-life. So lots of other samples that aren’t suitable for production rate calibration could be used to estimate the 26/10 ratio. However, existing production rate calibration data sets are a convenient way to find a set of samples that have been carefully selected to have simple exposure histories and negligible erosion. Looking in the ICE-D production rate calibration database and removing one extreme outlier measurement (26/10 = 14; not sure what went wrong here), one finds 90 measurements of both Al-26 and Be-10 on samples collected for calibration purposes. Each of these is a distinct measurement in which both nuclides were measured in a particular aliquot of quartz. Here is the distribution of 26/10 ratios in these samples:

[Figure: distribution of measured 26/10 ratios, shown as a histogram (left) and a normal kernel density estimate (right)]

On the left this is shown as a histogram and on the right as a normal kernel density estimate. A couple of points about this:

The average of all these data (shown as a red line on both plots) is 6.76. OK, we’re done. Back to sleep for another 25 years.

Just kidding. We’re not quite done. One initial thing that is notable about this data set is that it is skewed to the low side. This observation is consistent with the easiest way to make an error when measuring the 26/10 ratio in a sample. Specifically, one measures the Al-26 concentration by multiplying a measured total amount of Al by a measured Al-26/Al-27 ratio. The total Al measurement is made by taking aliquots of a solution of quartz dissolved in HF, evaporating the HF in the aliquots, redissolving the Al in a dilute acid, and measuring the Al concentration by ICP. If you are not careful in this process, you can retain insoluble Al fluorides in the final step, which means that the Al concentration in solution underestimates the total amount of Al in the sample, and leads to an underestimate of the 26/10 ratio. The mild negative skewness of these data suggests that this may have been a problem for some of the measurements, in which case the mean value of these data might slightly underestimate the true production ratio (the median value is 6.84).
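To see how this failure mode propagates, note that the Al-26 concentration is computed as (total Al) times (26/27 ratio), so any fractional loss of Al before the ICP measurement biases the 26/10 ratio low by exactly that fraction. A minimal sketch with made-up numbers:

```python
def al26_concentration(total_al, ratio_26_27):
    """Al-26 concentration (atoms/g) as total Al (atoms/g) times the AMS 26/27 ratio."""
    return total_al * ratio_26_27

true_total_al = 1.0e20   # atoms/g of stable Al-27 in the quartz (illustrative)
ratio_26_27 = 3.0e-13    # measured AMS isotope ratio (illustrative)

# If 5% of the Al is left behind as insoluble fluorides, the ICP measurement
# underestimates total Al, so the 26/10 ratio is biased low by the same 5%:
measured_total_al = 0.95 * true_total_al
bias = (al26_concentration(measured_total_al, ratio_26_27)
        / al26_concentration(true_total_al, ratio_26_27))
print(bias)
```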

For a slightly different perspective on this, then, let’s try estimating the production ratio from calibration data using a slightly different workflow, as follows:

  1. Start with the “CRONUS-Earth primary calibration data sets” for Be-10 and Al-26, again as represented in the ICE-D calibration database.
  2. Using these data and the production rate calibration code from the version 3 online exposure age calculator, determine best-fitting values of reference production rates for Be-10 and Al-26. I’ll use two scaling methods: the non-time-dependent “St” scaling and the time-dependent “LSDn” scaling.
  3. Also compute production rates due to muons for Al-26 and Be-10; add production rates by spallation and muons together; divide to get an estimate of the 26/10 production ratio.
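In code form, the workflow looks something like the following sketch. The “calibration data” here are invented placeholders, the muon fractions are illustrative, and the real version 3 calculator does a proper fit in scaling space rather than this naive averaging of N/t:

```python
# Schematic of steps 1-3. Calibration samples are (concentration atoms/g,
# independent age yr) pairs; all numbers here are invented for illustration.

def fit_reference_rate(calib_samples):
    """Naive best-fit production rate: mean of N/t over calibration samples."""
    return sum(N / t for N, t in calib_samples) / len(calib_samples)

calib_10 = [(52000.0, 13000.0), (41000.0, 10000.0)]    # hypothetical Be-10 data
calib_26 = [(350000.0, 13000.0), (276000.0, 10000.0)]  # hypothetical Al-26 data

P10_sp = fit_reference_rate(calib_10)   # spallogenic reference rates
P26_sp = fit_reference_rate(calib_26)

# Step 3: add muon production (illustrative fractions) and take the ratio.
P10_mu = 0.02 * P10_sp
P26_mu = 0.04 * P26_sp
total_ratio = (P26_sp + P26_mu) / (P10_sp + P10_mu)
print(round(total_ratio, 2))
```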

The main difference between this method and simply averaging all direct measurements of the 26/10 ratio as I did above is just that it allows us to incorporate data from calibration samples where only one nuclide was measured: we can utilize Be-10 data that lack corresponding Al-26 measurements, and vice versa.  In addition, all measurements represented in the CRONUS-Earth “primary” data set are relatively recent, meaning that the folks who did them certainly should have been careful about insoluble fluorides. In any case, this workflow uses somewhat different data and should yield an estimate of the 26/10 production ratio that is somewhat independent from the simple average above.

On the other hand, it brings up something new. The “St” scaling method, as implemented in the various online exposure-age calculators, assumes that Al-26 and Be-10 have the same latitude/elevation scaling, but the proportion of total production that is due to muons will vary with elevation and latitude (mostly with elevation). In addition, the relative proportions of fast muon production and negative muon capture change with elevation. As these three production pathways (spallation, fast muons, negative muon capture) all have different 26/10 production ratios, this means that the total 26/10 production ratio must vary with elevation (and a little with latitude as well). The point is that what we know about production rate scaling predicts that the 26/10 production ratio isn’t a constant value as we have been assuming, but will vary with elevation.
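The size of this effect is easy to bound. With total Be-10 production normalized to 1 and a fraction f of it produced by muons, the total ratio is just a linear mixture of the pathway ratios. The pathway values in this sketch are illustrative assumptions, not calibrated numbers:

```python
R_SP = 6.75   # assumed spallogenic 26/10 ratio (constant under St-style scaling)
R_MU = 8.0    # illustrative combined 26/10 ratio for the muon pathways (assumption)

def total_ratio(f_mu):
    """Total 26/10 ratio when a fraction f_mu of Be-10 production is by muons."""
    # With total Be-10 production normalized to 1:
    # 26/10 = R_SP * (1 - f_mu) + R_MU * f_mu
    return R_SP * (1 - f_mu) + R_MU * f_mu

# If muons are a few percent of production at sea level and about 1% at high
# elevation, the total ratio changes by well under a percent across that range:
print(total_ratio(0.03), total_ratio(0.01))
```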

This variation is predicted to be close to negligible for the “St” scaling method: muon production makes up at most 2 or 3 percent of total production, so variations in this already small fraction result in only per-mil-level variations in the production ratio. Thus, for this scaling method, we can mostly disregard this issue. The “LSDn” scaling method, in contrast, predicts not only variation in the muon contribution, but also, more importantly, that spallogenic production of Al-26 and Be-10 will scale differently with magnetic field position and elevation. The reason for this is that this scaling method uses the results of an atmospheric particle transport model to predict the energy spectrum of fast neutrons at your location, and then multiplies this spectrum by a set of cross-sections for Al-26 or Be-10 production. These cross-sections depend on energy, and they’re different for the two nuclides. The following shows the neutron cross-section estimates used in LSDn scaling, as a function of neutron energy:

[Figure: neutron interaction cross-sections for Al-26 and Be-10 production as a function of neutron energy]

Green is the neutron interaction cross-section for production of Al-26 from Si; blue is Be-10 from O; red is Be-10 from Si. The threshold energy for Al-26 production is lower than that for Be-10, so in parts of the atmosphere where neutron energy is lower (that is, at greater atmospheric depths or lower elevations), Al-26 production is favored relative to Be-10 production, and the 26/10 production ratio is predicted to be higher. So once we know that Al-26 and Be-10 have different threshold energies for production (we’ve known this for a while), this implies that the spallogenic production ratio has to vary with location; it can’t not do so. Similar work using an atmospheric particle transport model by Dave Argento (here) also highlighted the prediction that the 26/10 production ratio should change with elevation; he calculated that the spallogenic production ratio should be near 7 at sea level and 6.75 at high altitude.
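In flux-weighted form, production is the sum over the neutron energy spectrum of flux times cross-section. The following toy calculation, using crude made-up spectra and step-function excitation functions rather than the actual LSDn inputs, reproduces the sense of the effect: a softer spectrum, as at greater atmospheric depth, favors the lower-threshold reaction:

```python
import math

def production(flux, sigma, dE):
    """Flux-weighted production: sum of phi(E) * sigma(E) * dE over the spectrum."""
    return sum(f * s * dE for f, s in zip(flux, sigma))

dE = 10.0
energies = [i * dE for i in range(1, 101)]                # 10-1000 MeV grid
sigma_26 = [1.0 if E > 40.0 else 0.0 for E in energies]   # lower threshold (assumed)
sigma_10 = [1.0 if E > 100.0 else 0.0 for E in energies]  # higher threshold (assumed)

soft = [math.exp(-E / 200.0) for E in energies]  # softer, low-elevation-like spectrum
hard = [math.exp(-E / 600.0) for E in energies]  # harder, high-elevation-like spectrum

ratio_soft = production(soft, sigma_26, dE) / production(soft, sigma_10, dE)
ratio_hard = production(hard, sigma_26, dE) / production(hard, sigma_10, dE)
# The lower-threshold reaction is favored more strongly under the softer spectrum:
print(ratio_soft, ratio_hard)
```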

The main point here is that first-principles physics and calculations based thereon predict that the 26/10 production ratio should vary with location. So let’s look at the data set of measured ratios (the data presented in histogram form above) as a function of elevation, to see if this variation is evident. Here are measured ratios plotted against elevation:

[Figure: measured 26/10 ratios plotted against elevation]

Here are the same data with (i) the average (6.76), and (ii) a linear regression in elevation (6.94 - 1.022e-4 * elevation) plotted in red:

[Figure: measured 26/10 ratios vs. elevation, with the average and the linear regression shown in red]

Although the correlation between elevation and measured 26/10 is weak (correlation coefficient is -0.25; p = 0.02), it’s not zero. And a simple linear regression through these data agrees very closely with the prediction in the Argento paper.
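For anyone who wants to reproduce this kind of fit, it is essentially a one-liner. Here is a sketch on synthetic data (stand-ins for the real 90 measurements, assuming numpy is available) generated from the regression quoted above plus typical measurement scatter:

```python
import numpy as np

# Synthetic stand-in for the 90 (elevation, measured 26/10) calibration pairs,
# generated from the regression quoted above plus Gaussian measurement scatter.
rng = np.random.default_rng(0)
elevation_m = rng.uniform(0.0, 4000.0, 90)
measured = 6.94 - 1.022e-4 * elevation_m + rng.normal(0.0, 0.3, 90)

# Ordinary least-squares linear fit and correlation coefficient:
slope, intercept = np.polyfit(elevation_m, measured, 1)
r = np.corrcoef(elevation_m, measured)[0, 1]
print(f"26/10 ~= {intercept:.2f} + ({slope:.2e}) * elevation;  r = {r:.2f}")
```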

This shows that the data are consistent with an elevation dependence to the production ratio, even though they don’t require it at very high confidence. But to back up a bit, this whole discussion of elevation dependence interrupted what we were trying to do: estimate the 26/10 production ratio by separately calibrating the Al-26 and Be-10 production rates. Having now shown that no matter how we do this, we always predict some geographic variation in the ratio, we’ll continue to look at this as a function of elevation. Here is the result for the ‘St’ scaling method:

[Figure: 26/10 production ratio vs. elevation predicted by St scaling, with the measured ratios]

Although it is a bit hard to tell this from the figure, there are separate green and blue dashed lines showing predictions for low latitude (high magnetic cutoff rigidity) and high latitude (low cutoff rigidity), respectively. As discussed above, although geographic variation is predicted for this scaling method, it’s very small. Reference production rates fit to the calibration data predict a 26/10 production ratio of 7, which is, in fact, a bit higher than the estimate we obtained above from simply averaging the direct measurements. Again, what seems to be the most likely explanation for this is just that there are more data involved, and we may have excluded some of the low outliers by choosing a more recently generated data set.

Here is the result for the “LSDn” scaling method (again, this is total production including muons):

[Figure: 26/10 production ratio vs. elevation predicted by LSDn scaling, with the measured ratios]

As discussed, this scaling method predicts a larger elevation dependence as well as a larger difference between high (green) and low (blue) cutoff rigidity. Again, it’s in decent agreement with the simple linear regression on these data as well as the predictions in the Argento paper (the latter isn’t surprising; they’re based on similar physics). To summarize, put all these predictions together with the data:

[Figure: all scaling-model predictions and the linear regression overlaid on the measured ratios]

My summary of this is as follows. The data set of direct ratio measurements on individual samples is, by itself, consistent with the canonical value of 6.75, although there is some evidence that this number might be skewed a bit low due to measurement errors. Basic physics of Al-26 and Be-10 production, on the other hand, indicates that the production ratio should be elevation-dependent. Scaling models that use this physics (e.g., LSDn), when fitted to calibration data, predict that the ratio should be in the range 7-7.1 at low elevation and in the range 6.5-6.75 at high elevation; these predictions are also completely consistent with the data set of direct measurements. Because the data set is rather scattered, mainly just due to measurement uncertainty, it does not appear to provide a means of choosing between scaling models at high confidence. So at this point, it appears that (i) simply assuming 6.75 always, and (ii) using an elevation-dependent value as predicted by LSDn scaling, are both consistent with available data. Choose (ii) if you like physics, and (i) if you like simplicity. Overall, I think I lean towards (ii), that is, physics.

For perspective, the difference between 6.75 and 7 is 3.7%. That’s not insignificant — it maps to age differences of 5-15% in typical burial-dating applications — but we’re haggling over fairly small differences here.
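To check that arithmetic: the fractional difference is (7 - 6.75)/6.75, and because the burial age depends on ln(R0/Rm), raising the assumed production ratio shifts every fully shielded burial age older by a constant amount:

```python
import math

# Fractional difference between the two candidate ratios:
print(round((7.0 - 6.75) / 6.75 * 100, 1))  # 3.7 percent

# For a fully shielded sample, t = ln(R0 / Rm) / (lam26 - lam10), so changing
# the assumed production ratio R0 from 6.75 to 7 makes every burial age older
# by a constant ln(7 / 6.75) / (lam26 - lam10), whatever the measured ratio.
lam10 = math.log(2) / 1.387e6   # Be-10 decay constant (1/yr)
lam26 = math.log(2) / 0.705e6   # Al-26 decay constant (1/yr)
shift_yr = math.log(7.0 / 6.75) / (lam26 - lam10)
print(round(shift_yr))  # about 75,000 yr, i.e. roughly 0.075 Myr
```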

Finally, for completeness I’m going to cover one more source of indirect evidence for what the value of the production ratio is. This stems from the observation that if the production ratio at low elevation is 7 rather than 6.75, we should probably recalculate a bunch of previously published burial ages. Thus, it might be a good idea to look into whether changing the production ratio causes any serious problems with existing burial-dating results. One could pursue this either by recalculating a bunch of burial ages and seeing if you find any conflicts with independent age constraints, or by a more explicit method in which if (i) you have a buried sample, (ii) you know its age independently, and (iii) you know the decay constants for Al-26 and Be-10, you can measure both nuclides in your sample and back-calculate the surface production ratio. This latter method features in a recent paper by Zhou and others, who burial-dated a fluvial gravel whose age was bracketed independently by argon-dated lava flows. They back-calculated the production ratio in this way to be between 6 and 7.2 (at 68% confidence), which is not particularly precise, but shows good agreement with predictions and surface measurements. Good. Pass. On the other hand, in a 2010 paper about glacial stratigraphy in the central US, we dated a paleomagnetically normal unit three times by the burial-isochron method (this is the “Fulton” till in that paper). Magnetically normal means it must be younger than the Brunhes-Matuyama reversal at 0.78 Ma; we dated it at 0.8 +/- 0.05 Ma, which is just barely consistent with the constraint. Assuming the 26/10 ratio to be 7 rather than 6.75 would make that age estimate older, 0.87 Ma, which would be badly inconsistent with the paleomagnetic stratigraphy. But — hold the presses — in the time period between that paper and now there has also been a change in best estimates for muon interaction cross-sections, which acts to bring that age estimate back down to 0.81 (still +/- 0.05) Ma, and this remains consistent with the paleomagnetic boundary. So this highlights that there are a lot of moving parts involved in recalculating old burial age data; but in the one case I can think of immediately in which a burial age estimate bumps up against an independent age closely enough to provide a serious constraint on the 26/10 production ratio, the hypothesis that the production ratio is near 7 at sea level passes.
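The back-calculation described above is just the burial-dating decay equation run in reverse: given an independently known burial age, the measured ratio restores to a surface production ratio of R0 = Rm * exp((lam26 - lam10) * t). A sketch with illustrative numbers (not the actual Zhou and others data):

```python
import math

lam10 = math.log(2) / 1.387e6   # Be-10 decay constant (1/yr)
lam26 = math.log(2) / 0.705e6   # Al-26 decay constant (1/yr)

def back_calculated_production_ratio(measured_ratio, burial_age_yr):
    """Invert R(t) = R0 * exp(-(lam26 - lam10) * t) for the surface ratio R0."""
    return measured_ratio * math.exp((lam26 - lam10) * burial_age_yr)

# Illustrative: a sample buried for 2 Myr (known, say, from bracketing lava
# flows) with a measured 26/10 ratio of 2.6 implies a surface production ratio:
print(round(back_calculated_production_ratio(2.6, 2.0e6), 2))
```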

So what ratio to use for burial dating? As noted above, physical arguments weigh rather compellingly in favor of an elevation dependence for the ratio. If this is true, then if the average ratio computed from a set of samples that span a range of elevations is 6.75, the true production ratio must be higher at low elevation (conveniently, Nishiizumi and others made their measurement at middle elevations). I think it seems sensible to include this effect in burial-age calculations.

P.S.: I still haven’t addressed the issue of what the uncertainty in the production ratio is.


Ten years of the online exposure age calculator formerly known as…

January 10, 2017

It’s recently come to my attention that I, like everyone else, unfortunately seem to have completely missed the ten-year anniversary of the online exposure age calculators formerly known as the CRONUS-Earth online exposure age calculators(*). Fortunately, the possibly more significant milestone of 1 million calculations served hasn’t happened yet: I expect that to happen sometime around the end of 2017. In any case, though, here are the latest statistics on usage of the online calculators at hess.ess.washington.edu. The following plot is denominated in number of exposure ages calculated per month — that is, if you submit ten samples at the same time, it counts as ten  — and I have made an effort to exclude samples provided as test data and also samples that are obviously synthetic (e.g., having zero or integer latitude and/or longitude). So this should be a decent approximation of how much actual use is happening. In any case the following shows more than ten years of usage by month.
[Figure: usage_201701_a]

The axes are for exposure age calculations (top), erosion rate calculations (middle), and production rate calibration (bottom). The color-coding reflects the various calculator versions: gray is the short-lived version 1; green the also fairly short-lived version 2.0; red the excessively-long-lived-and-well-past-sell-by-date version 2.2, and blue the recently introduced version 2.3. During the time period outlined in black in mid-2014, usage data weren’t being consistently logged due to various technical problems associated with hardware failures, operating system upgrades, and the switch from MATLAB to Octave. Thus, no data from that period are shown here. Shortly after that time period, the bars for two one-month periods go well off scale and are not shown here in their entirety: persons unknown, for unknown reasons, dropped tens of thousands of samples into the system all at once on several occasions. I have not yet investigated in detail whether they really pasted in that many samples by hand, or wrote some software to do that, or what exactly they were doing.

Agglomerating all the various functions and versions into a single number for total usage gives the following, again in units of exposure ages or erosion rates calculated per month:

[Figure: usage_201701_b]

There are a few interesting things about this.

First of all, overall usage continues to grow steadily. I guess this is not a total surprise, because even if the rate of new exposure age generation isn’t increasing, the total inventory of data that requires periodic recalculation continues to get bigger. But it does make you wonder exactly where the saturation point is.

As usual, calculating exposure ages dominates: the rate of erosion rate calculations is well behind. But usage rates are surprisingly high overall: average usage for the last year is about 10,000 samples monthly. That’s a lot of exposure age calculations.

Second, there hasn’t yet been any discernible impact from the introduction of two new online calculator services: the new CRONUS-Earth product, ‘CRONUScalc,’ developed by Shasta Marrero, and CREp, developed by Leo Martin, PH Blard, and colleagues. These have been available for approximately a year now without, apparently, making a significant dent in usage of the older system. This may just reflect the fact that right now everyone has to calculate everything three times, using all three systems, to make sure they’re not missing anything important. This would tend to imply that the brutal-competition-and-shakeout phase will come later. We can’t all be Facebook: someone has to be MySpace.

My bid to not become MySpace has been to put together a version 3 calculator with many technical and speed improvements that does exposure-age and production-rate-calibration calculations and competes directly with CREp and CRONUScalc (but is faster, better, and cheaper, with more features, of course, even though CREp wins the swimsuit competition(1) hands down and walking away). I am still working on version 3 and it doesn’t look like it has yet begun to cannibalize usage of the older version 2.x family, but the goal is to make that happen soon so that I can stop worrying about all the somewhat-out-of-date 2.x code. Version 3 usage, on the same time scale as above, is here:

[Figure: usage_v3_20170103]

In this plot, the upper panel shows exposure age calculations and the lower panel shows use of the production rate calibration interface (which has existed for less than a month). Light green shows programmatic requests using the web service API, and dark green shows requests by actual people pasting data into the data entry webpage. The web service requests, which comprise the vast majority of usage, reflect the fact that the version 3 code is the back end for the antarctica.ice-d.org database: any web page served by the ICE-D server uses the version 3 web service API to compute exposure ages, and those numbers bulk up the light green bars here. As this setup generates lots of requests with not much user effort, I am a bit surprised that these numbers are not bigger. However, so far, the direct use of version 3 through the web page interface (that is, the dark green) is nowhere near that of the version 2.x family. I’ve pretty much completed the version 3 code at this point, though, so hopefully that’ll change soon.

Summary: 10,000 exposure age or erosion rate calculations every month is a lot of exposure age or erosion rate calculations.


(1) A “swimsuit competition” is a typical component of contests known as “beauty pageants” that are popular in many parts of America as well as a few other countries worldwide. U.S. President-elect Donald Trump has been closely associated with promoting and judging beauty pageants in the past. In a beauty pageant, young women compete on the basis of their appearance, personality, and accomplishments, with the aim of winning the right to represent their city, state, country, or other group either in other beauty pageants or for charitable or publicity purposes. The competition involves a series of events, usually including one or more in which contestants appear before a panel of judges while clad in garments such as swimsuits (typically one-piece) or formal dresses. These events highlight the appearance, grooming, and fashion sense of the contestants, in contrast to other events that may focus more on their skills and accomplishments. Although such events have incurred widespread criticism on the basis that they perpetuate obsolete gender roles and validate discriminatory behaviour based on appearance, the term “swimsuit competition” remains commonly used in colloquial English to denote contests or evaluation criteria that emphasize appearance over substance. For example, one might, hypothetically, state that “the selection process for GSA Fellows is a total swimsuit competition.” In the present blog entry, the phrase is used rhetorically to highlight the professionally designed web pages and online user interface for the CREp software, which contrast favorably with the near-total lack of attention to graphic design and visual communication displayed by other online exposure age calculator websites. It is not intended to imply that CREp would not also score well in, for example, the talent or personal interview portions of the competition.

It’s all about $\mu$

November 4, 2016

[Image: no_muons (the muon logo)]

By far the biggest pain in the neck in the overall field of cosmogenic-nuclide geochemistry is the need to compute nuclide production rates due to muon interactions. Production rates due to muons are quite small relative to total production rates at the surface, but they’re not small enough that you can ignore them completely. In the subsurface, they’re the majority of production, so they’re actually important for applications such as burial dating and depth profile dating, but they’re complex to calculate and hard to verify due to a general lack of subsurface calibration data. It’s much more difficult to obtain useful calibration data for production rates due to muons than for surface production rates due to spallation, mainly because there aren’t really any geologic processes that completely remove the entire preexisting muon-produced nuclide inventory, which extends to tens of meters depth, all at once, leaving a blank slate. And even if there were, muon production rates are really low, so the blank slate would have to sit around for a really long time before you could do the measurements accurately, in which case something else bad would probably happen to it. So the only hope for decent calibration data is to collect tens-of-meters-long cores from bedrock surfaces that have been sitting around at extremely low erosion rates for millions of years, and there aren’t a whole lot of such things.

The last couple of years have, largely due to work in the CRONUS-Earth and CRONUS-EU projects, either improved or unimproved this situation by generating some new methods of computing muon production rates and also by collecting several new bedrock cores. This increases the overall complexity of the situation as a whole, but also helps a bit by providing enough data to make a decent start at evaluating which methods of computing muon production rates work best. In the last few months I’ve gotten tired of being perpetually confused on this subject, so I’ve waded through all these data and put together an extraordinarily long, tedious, and pedantic document on the subject of benchmarking muon production rate calculation methods against the available calibration data set. I imagine that I will submit this to an actual journal at some stage, but for now it is here (the PDF file). Warning: it is long, tedious, and pedantic. However, if you want my best shot at a synoptic view of all reasonably accessible knowledge that is relevant to production by muons, it might be potentially useful. Of course, it hasn’t been reviewed in detail by anyone else, so it might also be wrong. The MATLAB code that does all the calculations is also available at the same address (the zip-file).

Presumably, about six people on Earth will be interested in this document. However, I imagine that all six of those folks are also regular readers of this blog. You know who you are.  As noted, this will probably be sent off to a journal at some stage, in which case some of you will also have the truly unpleasant and unfortunate duty of reviewing it. Enjoy!

Oh yeah. The 2%-of-production-98%-of-suffering logo above is also in that directory, in case you want to put it on a T-shirt. For example, from CafePress:

[Image: 89_230x230_front_color-white]


Production rate scaling devious plots

October 22, 2016

A large amount of the scientific literature on cosmogenic-nuclide geochemistry uses the term “production rate scaling scheme” to refer to one of a number of computational methods used to calculate how cosmogenic-nuclide production rates vary geographically and over time. Of course, the word “scheme” is intended to have its fairly common scientific meaning of a formal mathematical procedure for carrying out some task, as one would refer to a numerical integration scheme or an implicit solution scheme. As most people are aware, however, that’s not really what’s connoted by “scheme” in the rest of the English language. An article in Physics Today from 2011 called attention to this with the following table:

[Table from the Physics Today article]

This list is absolutely right on target. In the admittedly unlikely event that a non-cosmogenic-nuclide geochemist were to read the cosmogenic-nuclide geochemistry literature, they might legitimately be mystified as to why one must choose one of several dastardly plots, whilst rubbing one’s hands together with an evil cackle, to compute nuclide production rates. I’m probably the worst past offender in this regard, starting with this paper. I’m not proposing to give up terms like “positive feedback” that are specifically meaningful because they derive from the presence of positive or negative signs in mathematical descriptions of the phenomenon, but the sad fact is that “scheme” doesn’t have a whole lot of redeeming qualities. It really does sound like a devious plot, and it doesn’t have any special meaning that’s not equally well communicated by simpler words like “method” or “procedure.” We don’t need it, so let’s not use it. Henceforth, I suggest just talking about production rate scaling methods, procedures, or possibly algorithms; of these, I think I prefer “methods.” If anyone has any better ideas, let me know, but “scheme” is now out as far as I’m concerned.


Saturated surfaces in Antarctica

September 9, 2016

This post is mostly about production rate scaling models, but in addition is neat because it’s an actually useful application of the ICE-D Antarctic exposure age database. It’s a pretty simple one, but hey, mighty oaks from little acorns grow. This is another immensely long post, but the short summary is that a look at samples from Antarctica with very high Be-10 and Al-26 concentrations seems to indicate that the relatively recent, and extremely complicated, production rate scaling model of Nat Lifton and colleagues (the “LSD” scaling method) works better at high latitudes than the simpler and very widely used scaling method based on the work of Devendra Lal in the 1980s. This is interesting because so far it has been quite difficult to determine whether or not there is any substantive difference in performance between these scaling methods.

We start with some recent Be-10 measurements by Gordon Bromley of the University of Maine on sandstone boulders, presumably glacially transported, at the same site in the southern Transantarctic Mountains where we recently observed what appears to be the highest concentration of cosmogenic He-3 ever observed in a terrestrial sample. These Be-10 measurements are interesting because, even after a reasonably thorough effort to make sure that no mistakes were made, they appear to show that these samples contain an impossible amount of Be-10. What does impossible mean? Basically, if a surface is left undisturbed and exposed to the cosmic-ray flux long enough, the concentration of cosmic-ray-produced Be-10 will eventually build up to a level high enough that the rate that Be-10 is lost by radioactive decay is equal to the production rate. At this point, the amount of Be-10 can’t increase any more. So this relationship provides a limit to the maximum possible concentration of Be-10 that can be present in quartz at a particular site. Generally this condition is referred to as either ‘production-decay equilibrium,’ or, more commonly, just ‘saturation.’ In math, accumulation of Be-10 in a non-eroding surface is governed by the differential equation:

\frac{dN}{dt} = P - N\lambda

where N is the Be-10 concentration (atoms/g), P is the production rate at the site (atoms/g/yr), and \lambda is the decay constant for Be-10 (4.99e-7 /yr). So saturation occurs when dN/dt = 0, or when N = P/\lambda. Thus, if we know the production rate at some site, we equivalently know the saturation concentration.

At Gordon’s sample site, the Be-10 production rate (calculated with the Antarctic atmosphere model and scaling scheme of Stone (2000) and the “primary” production rate calibration data set of Borchers and others (2015)) is 38.2 atoms/g/yr, which means the saturation concentration for Be-10 is 7.65 x 10^7 atoms/g. However, the sample actually contains 8.5 x 10^7 atoms/g. Inconceivable! If we accept this production rate estimate, this is an impossible Be-10 concentration.
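
The arithmetic here is just the steady-state solution of the differential equation above, N_sat = P/λ. A quick sketch in Python, using the numbers quoted above:

```python
LAMBDA_10 = 4.99e-7  # Be-10 decay constant (/yr), as given above

def saturation_concentration(P):
    """Saturation (production-decay equilibrium) Be-10 concentration
    (atoms/g) for a site production rate P (atoms/g/yr):
    setting dN/dt = P - N*lambda = 0 gives N_sat = P/lambda."""
    return P / LAMBDA_10

P_site = 38.2                              # atoms/g/yr at the sample site
N_sat = saturation_concentration(P_site)   # ~7.65e7 atoms/g
N_measured = 8.5e7                         # atoms/g, measured in the sample
print(N_measured / N_sat)                  # > 1, i.e., nominally impossible
```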

OK, that seems weird, but it’s not actually all that  unusual in Antarctica. To show this, look at nearly all known Be-10 and Al-26 measurements from Antarctica as they are represented in the ICE-D:ANTARCTICA database (making sure they are all correctly normalized to the ’07KNSTD’ and ‘KNSTD’ standardizations, respectively). This data set looks like this:

[Figure: fig1]

What I have done here is just plot nuclide concentration vs. elevation. Because production rates increase with elevation, the nuclide concentration associated with a particular exposure age also increases with elevation, so the envelope of possible nuclide concentrations that one could potentially observe widens as elevation increases. There are about 1200 Be-10 measurements and 500 Al-26 measurements represented here.

Now, because we are using an atmosphere model and scaling scheme that vary only with elevation (magnetic cutoff rigidity is pretty much effectively zero in Antarctica for all practical purposes), predicted saturation concentrations for Be-10 and Al-26 are also only a function of elevation, so we can plot them on these axes, as follows:

[Figure: fig2]

Here the black line shows predicted saturation concentrations. It is evident that there are lots of samples in Antarctica, mostly at high elevation above about 2000 m, whose Be-10 concentrations appear to be above saturation. Gordon’s samples are sitting off to the right of the line at about 2300 m elevation. Some measurements are up to 12% above saturation concentrations calculated in this way.

There are four possible reasons for this. One, the measurements are somehow messed up. This is unlikely, because the measurements represented here are from several different laboratories and/or AMS facilities, and it’s hard to come up with any reason why they would all be similarly spurious. Certainly it is suspicious that the maximum amount by which some samples exceed predicted saturation concentrations (12%) is about the same as the difference between the KNSTD and 07KNSTD standardizations (11%). However, I spot-checked most of these samples, most of the measurements are well documented, and I am pretty sure that this error has not been made.

The second reason is that we might be overestimating the atmospheric pressure at high-elevation sites, and therefore underestimating the production rate and the saturation concentration. That possibility we can evaluate by using a totally different atmosphere model, one derived from the ERA40 reanalysis and adapted for use in estimating production rates by Nat Lifton (you can read about it here). To do this, we need slightly different axes, because in the ERA40-derived atmosphere model, atmospheric pressure varies with location as well as elevation. So instead of plotting just raw nuclide concentrations, we will plot the ratio of the measured nuclide concentration at a particular site to the saturation concentration predicted for that site. For samples above saturation, this ratio will be greater than one.

[Figure: fig3]

The two left-hand panels use the Stone (2000) Antarctic atmosphere, and the two right-hand panels use the ERA40-derived atmosphere. Basically, there is no difference. Both atmosphere models with this scaling scheme indicate that there are a bunch of samples at high elevations that exceed saturation concentrations for both Be-10 and Al-26. The fact that two independent atmosphere models agree on this point would tend to indicate that our problem cannot be explained just by a bad atmosphere approximation.

The third reason is that the sites where we observe greater-than-saturated nuclide concentrations might have changed elevation. If the samples had spent a significant amount of their exposure history at an elevation higher than their present elevation, they could potentially have higher nuclide concentrations than they could ever achieve at their present elevations. Elevation changes could be caused by local effects (for example, all the samples are on the hanging wall of an active normal fault), or by regional effects (for example, changes in dynamic topography during the several-million-year exposure history of these samples; or glacial isostasy, if the ice sheet surrounding sample sites had continuously thickened during the period of exposure). The amount by which the offending samples exceed predicted saturation concentrations requires approximately 300-400 meters of elevation change in the last few million years. We can evaluate some of these possibilities by looking at where these samples are actually located in Antarctica. Here they are:

[Figure: fig4]

The white dots are Be-10 measurements exceeding saturation, and the gray ones are for Al-26. There are some more white dots hidden behind some of the gray dots. These samples are widely distributed around the high mountains of East Antarctica, which makes it unlikely that concentrations above saturation are due to local faulting or mass-movement effects at a particular location. The background image in this figure is a model for changes in dynamic topography in the last 3 Ma from a recent paper by Austermann and others; the color scale shows total uplift in the past 3 Ma in meters. Samples exhibiting Be-10 concentrations above saturation are not located in areas that have experienced significant regional subsidence, at least due to this process, over long time periods. In fact a few of them are located in areas that are predicted to have experienced significant uplift. So we can probably exclude local faulting as a blanket explanation for the many samples in Antarctica that have Be-10 concentrations above saturation, and we can also exclude long-wavelength elevation change due to mantle dynamics. What about glacial isostasy? If the ice sheet in the vicinity of these samples had continuously thickened over the past several million years, then the elevation of the samples would have lowered during that time due to isostatic compensation, potentially leading to above-saturation nuclide concentrations. However, we need this change to be large — on the order of 1000 m of ice to obtain the needed few hundred meters of isostatic response — and this is not consistent with sea level records or ice sheet modeling.

[Figure: fig7]

For example, the above figure shows changes in bed elevation from a 5-million-year-long Antarctic ice sheet simulation by David Pollard and others, at two locations that are representative of where we see Be-10 concentrations above saturation: the central Transantarctic Mountains (red), and the East Antarctic marginal highland in the area of the Sor Rondane mountains (blue). At both of these sites, this model run predicts a long-term decrease in land surface elevation over the last ~4 Ma, presumably driven by Antarctic ice sheet expansion due to lower sea levels in the Pleistocene. However, the magnitude of the decrease is only several tens of meters, much too small to account for the above-saturation Be-10 concentrations. Thus, long-term lowering of sample sites due to glacial isostasy does not appear to be a good explanation for above-saturation cosmogenic-nuclide concentrations either.

The fourth possibility, of course, is that our production rate scaling is incorrect. I’ve left it for last because it’s the most likely. We don’t have any Be-10 or Al-26 production rate calibration data at high elevation near the poles (see here), so we are relying on the scaling method of Stone (2000), which is of course just based on the work of Devendra Lal, to extrapolate from production rate calibration sites at low elevation to sample sites at high elevation. We can test whether inaccuracy in this scaling method explains the apparently-above-saturation samples, by trying a different scaling method. Specifically, let’s try the more recent (and much more complicated) scaling method of Lifton and others (2015), now commonly known as the ‘LSD’ scaling method. At low magnetic cutoff rigidity (i.e., in polar regions), this scaling method predicts a larger altitude dependence than Lal-based scaling methods. Because magnetic field variability is out of the picture at polar latitudes, this elevation dependence is the primary difference between the two scaling methods. Here are the results, calculated in the same way as for the four-panel figure above (compare with the first two panels of that figure):

[Figure: fig5]

Using the LSD scaling method completely fixes the problem (for both atmosphere models, although only one is shown here). Here is another view, repeating the first figure above: raw concentration vs. elevation, compared with saturation concentrations. The solid lines are saturation concentrations predicted using the Stone/Lal scaling method as shown above, and the dashed lines are saturation concentrations predicted using the LSD scaling method. If we use the LSD scaling method, there are no measured Be-10 or Al-26 concentrations in Antarctica that are impossibly high.

[Figure: fig6]


So, overall, nuclide concentrations in many high-elevation Antarctic surfaces are inconsistent with production rates estimated using the Lal/Stone scaling method, but are consistent with production rates estimated using the LSD method. This would appear to imply that LSD scaling performs better for this data set, and, by implication, very likely for high-elevation sites in polar regions generally.

This is interesting for several reasons.

First, it would tend to indicate that one should probably use LSD rather than Lal-based scaling to compute exposure ages from old, high-elevation glacial deposits in Antarctica. It makes a pretty big difference: almost 20% at 2500 m elevation. For example, you might find a moraine at this elevation that belonged to the mid-Pliocene warm period (ca. 3-3.3 Ma) if you used Lal-based scaling, but was coeval with the onset of Northern Hemisphere glaciation (ca. 2.7 Ma) if you used LSD scaling. Certainly some potential for confusion there.

Second, although the LSD scaling method is very much more complex than the Lal-based methods, and there are some physical arguments indicating that it should be more accurate, so far it has been very difficult to show that there is any practical difference in performance between the two. This issue is important mainly because of the complexity of the LSD scheme — it requires a lot more computation, so code to implement it is more complicated and very much slower. If we don’t have to use it, we’d rather not. The calibration exercise of Borchers and others (2015) showed that for available Be-10 and Al-26 calibration data, there was basically no detectable difference in performance, and that’s been taken (certainly by me) to indicate that for most practical purposes, it’s fine to use the simpler method. However, the Antarctic exposure-age data set would tend to indicate the opposite: that the Lal-based methods, when calibrated with existing calibration data sets that lack coverage at high altitudes in polar regions, significantly underestimate production rates at these sites.

Third, if we accept that the LSD scaling method is correct, we’ve gone from the conclusion that there are a lot of Be-10 and Al-26-saturated surfaces in Antarctica, to the conclusion that there aren’t any. No known measurements from Antarctica quite overlap with predicted saturation values. In one sense, this is not really a huge surprise, because for a site to reach true production-decay saturation, the surface erosion rate must be exactly zero for several million years. Really exactly zero erosion for millions of years is a big ask — this doesn’t even occur in outer space — and is probably unlikely to occur in reality, even at -60° C in the upper Transantarctic Mountains. Again, if we accept that the LSD scaling method is correct, we can estimate steady-state erosion rates (that is, maximum limits on the erosion rate) for the samples that are closest to saturation; for the Be-10 measurements this is in the range of 1-2 cm per million years; for the Al-26 measurements it is 3-4 cm per million years. It is not clear why Be-10 measurements are closer to saturation than Al-26 measurements, although because they are all very close to saturation, this calculation is quite sensitive to the calibrated reference production rates for both nuclides, so this could easily be explained by inaccuracies in those values. In any case, these are plausible values for the lowest erosion rates in Antarctica. Not zero, but extremely slow.
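
The steady-state erosion-rate limit used in the paragraph above can be sketched by adding an erosion term to the production-decay balance. This ignores muon production, and the density, attenuation length, and site production rate are round-number values I’ve assumed for illustration, not data from these samples:

```python
LAMBDA_10 = 4.99e-7  # Be-10 decay constant (/yr)
RHO = 2.7            # rock density (g/cm^3); assumed
LAMBDA_SP = 160.0    # spallation attenuation length (g/cm^2); assumed

def max_erosion_rate(N, P):
    """Steady-state erosion rate (cm/yr) implied by a measured
    concentration N (atoms/g) and production rate P (atoms/g/yr),
    ignoring muons: N = P/(lambda + rho*eps/Lambda_sp), so
    eps = (Lambda_sp/rho) * (P/N - lambda)."""
    return (LAMBDA_SP / RHO) * (P / N - LAMBDA_10)

# An illustrative sample at 95% of saturation:
P = 45.0                  # atoms/g/yr; assumed
N = 0.95 * P / LAMBDA_10  # atoms/g
print(max_erosion_rate(N, P) * 1e6, "cm/Myr")
```

A sample within a few percent of saturation pins the erosion rate down to the cm-per-million-years range quoted above; note that the answer is very sensitive to exactly how close N is to P/λ, which is why the calibrated reference production rates matter so much here.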

So the summary here is that it appears that the LSD scaling method is consistent with the overall data set of Be-10 and Al-26 concentrations in Antarctic surfaces, and the Lal-Stone scaling method isn’t. One final point, however, is that Stone (2000) also considered this issue (with fewer data; see Figure 4 in that paper) and concluded that the Lal scaling method did not, in fact, undershoot saturation concentrations. The difference, I am fairly sure, is that new calibration data imply lower production rates than the calibration data available at the time of that paper; if currently available production rate calibration data had been used in that paper they would have led to the same conclusion I come to here, that Lal-based scaling underestimates saturation concentrations at high elevations in Antarctica. However, there are a lot of moving parts involved in comparing the two calculations (Be-10 standardizations, production rates, decay constants, etc….) and that claim could probably use further investigation.


Possibly the most He-3 ever observed in a terrestrial sample

September 1, 2016

In a previous post I pointed out that we (that is, the noble gas mass spectrometry facility at BGC) might have recently analysed the sample with the highest concentration of cosmogenic Ne-21 ever observed in a terrestrial sample. By ‘terrestrial sample,’ of course, I mean a rock that has accumulated its entire cosmogenic-nuclide inventory on Earth — because cosmogenic-nuclide production rates are so much higher outside the Earth’s atmosphere, meteorites have orders of magnitude higher concentrations of cosmic-ray-produced radionuclides than possible on Earth. So meteorites, or extraterrestrial-dust-laden ocean-floor sediments from the central Pacific, don’t count. In any case, no one has shown up to tell me about anything with a higher Ne-21 concentration, so the claim may be true.

The point of this post is that it now appears that we may also have analysed the sample with the highest concentration of in-situ-produced cosmogenic helium-3 ever observed in a terrestrial rock. This sample is near 2300 m elevation at Roberts Massif, a nunatak at the head of the Shackleton Glacier in the Transantarctic Mountains at 85.5° S latitude. In fieldwork last year, Gordon Bromley and his colleagues collected this sample from that site; the sample is a loose clast, coarsely faceted and therefore possibly glacially transported, of the local bedrock, a Jurassic mafic intrusive rock known as the Ferrar Dolerite. Here is a picture (photo credit: Gordon Bromley).

[Photo: 15-ROB-028_2_small]

Two separate analyses at BGC of pyroxene separated from this sample yielded 1.39 x 10^10 and 1.35 x 10^10 atoms/g He-3. Two adjacent smaller loose rocks — this one and this one — had slightly lower, but still pretty unreasonable, He-3 concentrations near 1.25 x 10^10 and 1.07 x 10^10 atoms/g, respectively. As surface rocks go, that’s a lot of helium-3. Note, however, that even though He-3 is extremely expensive at present, you would still have to mine something on the order of 1000 metric tons of this pyroxene to have $1 worth of He-3. It’s not ore-grade.

Pyroxene in this lithology is known to contain only about 6 x 10^6 atoms/g of non-cosmogenic He-3, presumably inherited from initial mineral crystallization, so the measured concentration in these samples is essentially all cosmic-ray-produced. 1.39 x 10^10 atoms/g cosmogenic He-3 at this site implies an apparent exposure age of approximately 13 Ma. As discussed in the previous post, this is not the oldest exposure age ever measured; that record still belongs to samples from the Atacama Desert, analysed by Tibor Dunai and colleagues, with apparent exposure ages exceeding 30 Ma. However, those samples are at much lower elevation where production rates are also much lower, so even though they are older, the total cosmic-ray dose they have experienced is smaller.
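
For a stable nuclide like He-3 in a non-eroding surface, there is no decay term, so the concentration just accumulates linearly and the apparent exposure age is N/P. A sketch using the concentration above and an assumed round-number local production rate (the actual site production rate would come from a scaling calculation):

```python
def apparent_exposure_age(N, P):
    """Apparent exposure age (yr) for a stable cosmogenic nuclide in a
    non-eroding surface: dN/dt = P, so t = N/P."""
    return N / P

N_he3 = 1.39e10  # atoms/g cosmogenic He-3, measured (see above)
P_he3 = 1070.0   # atoms/g/yr; assumed value for this high-elevation site
print(apparent_exposure_age(N_he3, P_he3) / 1e6, "Ma")   # roughly 13 Ma
```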

In any case, a relatively slapdash literature search reveals that the nearest competitor appears to be another sample of pyroxene from the Ferrar at Mt. Fleming, also a high-elevation site in the Transantarctic Mountains. Joerg Schaefer and colleagues found 6.9 x 10^9 atoms/g cosmogenic He-3 in this sample, which is still plenty but is almost a factor of two less than in the Roberts Massif sample. Are there any other contenders?

The point, once again, is that these high-elevation rock surfaces in Antarctica have enjoyed a stunning and expansive view of nearly the entire late Cenozoic evolution of the Antarctic ice sheets. Frankly, a lot of that was probably pretty boring, but if the East Antarctic Ice Sheet really collapsed in past warm-climate periods, they watched.