
Upside-down bananas

March 30, 2017

If you are reading this, you are probably familiar with the two-nuclide diagram commonly used to represent paired Be-10 and Al-26 data:

This example is from a review article by Darryl Granger from 2006 (in GSA Special Paper 415) that gives a good description of what the diagram is and how it is supposed to work. Basically, the diagram shows Be-10 concentration on the x-axis and the Al-26/Be-10 ratio on the y-axis, although commonly both quantities are shown in normalized units computed by dividing observed concentrations by the respective production rates at the sample site – this normalization allows plotting data from different sites, with different production rates, on the same axes. The basic concept here is that if your sample stays at the surface and experiences steady exposure with or without erosion, nuclide concentrations are confined to the “simple exposure region” highlighted with dark lines in the above figure. In certain manifestations of this diagram (primarily when plotted with a log x-axis and a linear y-axis), the simple exposure region vaguely resembles a banana, for example:

This resemblance, perhaps unfortunately, has resulted in the common use of the term “banana diagram.” The important aspect of this diagram, then, is that if the sample gets buried after a period of surface exposure, both Al-26 and Be-10 concentrations decrease due to radioactive decay, and Al-26 decreases faster than Be-10. This is represented by a trajectory that goes down and to the left, as shown above in the Granger example. So samples that are “below the banana” have experienced both a period of exposure and a period of burial. Here’s an example:

The lines are contours of burial time in Myr. So these samples have been buried for 2.5 Myr. This is the foundation of the method of cosmogenic-nuclide burial dating.
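
To make the geometry concrete, here is a minimal sketch (in MATLAB/Octave, since that is what the calculators discussed elsewhere on this site are written in) of a normalized Al-26/Be-10 diagram: the zero-erosion simple-exposure line plus a single burial trajectory. It uses the commonly quoted 6.75 production ratio and half-lives of 0.705 and 1.387 Myr; it is an illustration of the geometry, not the actual plotting code, and it omits the steady-erosion end of the simple exposure region.

```matlab
% Minimal sketch of an Al-26/Be-10 two-nuclide diagram: zero-erosion
% simple-exposure line and one burial trajectory. Illustrative only.
l26 = log(2)/0.705;  l10 = log(2)/1.387;   % decay constants, 1/Myr
R0  = 6.75;                                % assumed surface 26/10 production ratio
t   = logspace(-3, 1.2, 500);              % exposure time, Myr
N10 = (1 - exp(-l10.*t))./l10;             % [Be-10]/P10 (normalized units, Myr)
N26 = R0 .* (1 - exp(-l26.*t))./l26;       % [Al-26]/P10
semilogx(N10, N26./N10, 'k-'); hold on;    % exposure: moves to the right
texp = 0.1;  tb = linspace(0, 3, 100);     % 100 kyr exposure, then 0-3 Myr burial
N10b = ((1 - exp(-l10*texp))/l10) .* exp(-l10.*tb);
N26b = (R0*(1 - exp(-l26*texp))/l26) .* exp(-l26.*tb);
semilogx(N10b, N26b./N10b, 'r-');          % burial: moves down and to the left
xlabel('[Be-10]* (Myr)'); ylabel('Al-26/Be-10');
```

Plotted this way, with the longer-lived nuclide on the x-axis and the ratio on the y-axis, exposure goes to the right and burial goes down, which is the geometry described above.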

So, basically, in the Al-26/Be-10 two-nuclide diagram (let’s not use “banana diagram” any more…historically, it probably should be called a “Lal-Klein-Nishiizumi diagram,” although that is a bit cumbersome), exposure goes to the right and burial goes down. Easy. The problem arises when other nuclides are involved. Here is an example of a Be-10/Ne-21 two-nuclide diagram from one of my papers:

Here I have put Ne-21 (the longer-lived nuclide) on the x-axis and the Be-10/Ne-21 ratio on the y-axis. So, again, exposure goes to the right and burial goes down. On the other hand, here is a Ne-21/Be-10 diagram from a very cool paper by Florian Kober and Vasily Alfimov:

This figure has a lot of data in it that are not relevant to this post, but the key point is that it has the opposite axes: Be-10 concentration on the x-axis and the Ne-21/Be-10 ratio on the y-axis. Thus, exposure still goes to the right (at least for a while), but burial goes UP. Not down. Not what we expect from our previous experience with the Al-26/Be-10 diagram. Very weird.

At present, the choice of axes in two-nuclide diagrams involving Ne-21 in the literature appears to reflect your position in relation to the Atlantic Ocean. US researchers generally plot [Ne-21] vs. Be-10/Ne-21, whereas Europeans plot [Be-10] vs. Ne-21/Be-10. Although I have not made a systematic historiographic study of this phenomenon, I believe that the European style is largely just due to the fact that the “CosmoCalc” software put together by Pieter Vermeesch does it this way.

So the question here is which one to use. I have a pretty clear opinion on this. I think that no matter what nuclides are involved, you should always do it the same way as is commonly done for Al-26/Be-10 diagrams, so that burial goes down. Nearly all the two-nuclide diagrams in the existing literature involve the normal implementation of the Al-26/Be-10 diagram, so anyone familiar with this literature expects exposure to go to the right on a two-nuclide diagram, and burial to go down. I think inverting the diagram so that burial goes up just confuses readers. Why confuse readers when you don’t have to? Thus, I advocate always plotting the longer-lived nuclide of the pair on the x-axis, and the ratio of the shorter-lived to longer-lived nuclide on the y-axis. Then burial always goes down, as we expect. Of course, I am in the US, but I am not just cheering for my own team here. It really does make more sense for two-nuclide diagrams to always behave the same way no matter what nuclide pair is involved. I’ve done it this way in the version 3 online exposure age calculator, which will generate two-nuclide diagrams for all combinations of Ne-21, Be-10, Al-26, and C-14 in quartz, and also in the ICE-D database which makes use of the v3 calculator as a back end.

Summary: Draw two-nuclide diagrams so that burial always goes the same way. Why confuse people?


Is a cheap GPS OK for elevation measurements, or do you need a fancy one?

March 28, 2017

This post is about elevation measurements for exposure-dating samples, and how accurate they need to be. Basically, the main thing that controls cosmogenic-nuclide production rates is site elevation, or, more precisely, atmospheric pressure — at higher elevation, there is less atmosphere between you and the extraterrestrial cosmic-ray flux, so the production rate is higher. Thus, to compute the cosmogenic-nuclide production rate at a sample site, the first thing we need to know is the elevation. Once we know the elevation, we can convert it to a mean atmospheric pressure using a model for how the atmospheric pressure varies with elevation, and then compute the production rate.

Note that there are two places to potentially screw up here. The second one — converting an elevation to a mean atmospheric pressure during the exposure duration of the sample — is actually a fairly complicated problem and is the subject of another post, as well as a fairly large number of papers. However, the first one — accurately measuring the elevation — ought to be pretty simple. In general, determining your elevation relies on well-established technology that people have been refining for centuries. So even if we can’t be completely sure that we’ve done the elevation-atmospheric pressure conversion very precisely, we ought to be able to be fairly confident that we’re not making things worse by also making inaccurate elevation measurements. So the rest of this post covers (i) exactly how precise we need elevation measurements to be, and (ii) various ways to accomplish or not accomplish that goal.

So how precise do we need elevation measurements to be? The basic rule of thumb is that 10 m of elevation change is approximately a 1% change in the production rate, but this is a bit variable with elevation. Specifically, if we choose, for example, the production rate scaling method and Antarctic atmosphere model from Stone, 2000 (other scaling methods and atmosphere models are equivalent for all practical purposes from this perspective), here is how much effect a 10 m change in elevation has on the production rate:

It’s about 1% per 10 m at sea level and less at higher elevation. What this means is that accurate elevation measurement is actually fairly important. In most cases, the total uncertainty in estimating production rates for purposes of exposure-dating is around 5% or 6%, and 1% is a decent-sized fraction of that. If having a 10-m uncertainty in measuring elevation is adding an additional 1% uncertainty to your production rate estimate, that certainly seems like something to try to avoid. Basically, the point of all this is that we would like to be able to measure elevations with better than 10 m precision. Preferably quite a lot better.
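
For a rough check on that rule of thumb, here is a minimal sketch that converts elevation to pressure using the ICAO standard atmosphere and assumes an effective spallation attenuation length of about 130 g/cm2. This is not the Stone (2000) scaling code (and the Antarctic atmosphere is different), but it reproduces the size of the effect.

```matlab
% Rough check on the ~1%-per-10-m rule of thumb. Assumes the ICAO standard
% atmosphere and an effective attenuation length of ~130 g/cm2; not the
% actual scaling code, but close enough to show the size of the effect.
Lsp = 130;                                     % assumed attenuation length, g/cm2
elv = (0:1000:4000)';                          % site elevations, m
p   = @(z) 1013.25 .* exp((-0.03417/0.0065) .* ...
          (log(288.15) - log(288.15 - 0.0065.*z)));   % pressure, hPa
gcm2 = @(z) p(z) .* 1.019716;                  % atmospheric depth, g/cm2
dP = exp((gcm2(elv) - gcm2(elv + 10))./Lsp) - 1;  % fractional change for +10 m
disp([elv 100.*dP]);                           % ~1%/10 m at sea level, less up high
```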

In the modern era, the default and simplest way to measure your elevation is with the same inexpensive handheld GPS unit (probably a Garmin that you bought a few years ago for about $150) that you are already using to record the position of your sample. Most handheld GPS units indicate their estimated horizontal positioning accuracy in real time, and in most situations it’s somewhere in the range of 3-6 m. Note, however, that this level of accuracy dates back only to approximately the year 2000. Prior to that time, the GPS satellite network, which of course was operated by the U.S. Department of Defense, included a feature with the Orwellian name of “Selective Availability” that intentionally degraded GPS performance for non-military users. So handheld GPS data from that era are much less accurate, with horizontal precision in the tens-of-meters range.

Unfortunately, in nearly all cases the vertical accuracy of a GPS position is not nearly as good as its horizontal accuracy.  This is just because GPS satellites are mostly not right over your head, but well away from the zenith at shallow angles toward the horizon. In this geometry your vertical  position is more sensitive to range errors than your horizontal position. A rough rule of thumb is that vertical position errors are usually about twice horizontal ones, so if your handheld GPS is indicating 3-6 m horizontal precision, that is something like 6-12 m vertical precision. Here is an example from data that I collected during a couple of exposure-dating projects in Antarctica. What these figures show are histograms of differences between elevations measured with a Garmin consumer handheld GPS ($170; an older version of this) and elevations of the same sites measured with a pair of fancy, differentially-corrected, post-processed, dual-frequency Trimble survey GPS units (probably $40K, but should have decimeter precision).

If we throw out the obvious wild outliers in both of these data sets (more about them later), these residuals are approximately normally distributed with a standard deviation of 7 m, which is actually a little better than we expect from the twice-horizontal-precision rule of thumb. So in general, this is pretty good: with our $170 low-hassle solution we can keep production rate inaccuracy due to errors in elevation measurements below 1%.
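
For reference, the comparison above amounts to something like the following, assuming you have vectors z_handheld and z_dgps of elevations (in meters) for the same sites; the outlier cutoff is arbitrary.

```matlab
% Sketch of the handheld-vs-DGPS comparison, assuming vectors z_handheld
% and z_dgps (m) for the same sites. The 15 m outlier cutoff is arbitrary.
res = z_handheld - z_dgps;                 % elevation residuals, m
ok  = abs(res - median(res)) < 15;         % discard the wild outliers
fprintf('n = %d, offset = %.1f m, std = %.1f m, outliers = %d\n', ...
    sum(ok), mean(res(ok)), std(res(ok)), sum(~ok));
hist(res(ok), 20);                         % histogram like those shown above
```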

There are, however, a couple of other important points about these data. One, they both show an offset in which the handheld measurement is systematically approx. 4 m above the DGPS measurement. The two data sets are from different years, so we can’t ascribe the offset to weird satellite availability. I suspect it may reflect an error on my part in converting between ellipsoid and geoid heights (more about this later) for the two sets of measurements; I am not an expert in GPS data reduction, and this offset may be evidence of exactly that. Alternatively, it could have something to do with the handheld GPS unit using a less sophisticated atmospheric correction than the DGPS post-processing.

Two, there are some pretty serious outliers in the 20-30 m range — sometimes the measurement is very wrong. Of course, you expect some of these from any random process, but I suspect that what happened here is just that I didn’t leave the handheld GPS turned on for long enough to collect an accurate position. These data are from Antarctica, which is (i) remote, and (ii) cold. Item (i) means you are highly motivated to conserve battery power because you have only a finite supply of AA batteries in your camp, and item (ii) means you are highly motivated to complete the measurement quickly and put your mittens back on. Both factors would tend to favor not leaving the unit turned on long enough to collect sufficient data. But these are significant outliers that could affect an exposure age by 3%. That’s nearly 1/3 of the Younger Dryas.

In my opinion, the summary of all this is that modern inexpensive handheld GPS units are, basically, just barely good enough for measuring elevations of exposure-dating samples. Assuming  I am right about the outliers in the data set above just being due to my carelessness, if you are careful, you should be able to keep production rate uncertainties stemming from this source to approximately 0.5-0.7%. That’s not bad.

The next question, then, is how to do better. There are several strategies for this. One is just to buy a more expensive dual-frequency GPS that can be differentially corrected. As Trimble has a near-monopoly on this market, the first issue here is that this is expensive. Still, a suitable unit can usually be borrowed from most university geoscience departments. However, this solution creates a number of new problems. One is weight — if your fieldwork is foot-supported, you have just added a lot of weight to your pack. Another is power — most of these units require daily recharging (or heavy gel-cells). Again, a potential problem for remote fieldwork. A third potential problem is that if you are operating someplace very remote (like Antarctica, again), you will not be within range of stationary GPS stations that can be used for differential correction, so you will need to not only carry a rover unit with you to the sample sites, but also establish a temporary base station at your camp with a second unit. A fourth is that you need to learn enough about GPS data processing to not make errors in the processing that are much worse than you would have made with the inexpensive unit. This is a serious consideration. A final problem is that collecting enough data with a Trimble DGPS unit to eventually yield submeter vertical precision takes a while — generally on the order of 30 minutes per site. Doing this at each site would seriously limit the number of samples that could be collected in a day.

A completely different strategy is to measure sample elevations using a barometer or barometric altimeter. This is very old-school and, at this point, kind of in the realm of retro hipster hobbies, but has a couple of big advantages, mainly that barometers designed for this purpose are extremely sensitive — they can register less than 1 m of elevation change — and quite lightweight. Some don’t even require batteries. The problem, of course, is that barometric pressure is constantly changing independently of elevation due to meteorological effects, so collecting good barometric elevation data requires care and thought, and, in some cases, a stationary base-station barometer as well. However, if you are working in any area of the continental US that is covered by a USGS 7.5′ topographic map, you can very accurately estimate the elevation of your sample sites by barometric traverse between sample sites and elevation benchmarks or survey points noted on the map. Of course, in most such cases you will find that you could have just plotted your horizontal GPS position on the map and read off the contour elevation to get the same result.

Leaving aside the special case where you have a USGS map (or, even better, a SwissTopo map that is almost so complete that your individual boulder is already marked), you can sometimes significantly improve on the 7-m accuracy of handheld units without incurring serious pack weight problems, by establishing temporary benchmarks with a Trimble DGPS and then using the barometric altimeter to survey elevations of sample sites in relation to the temporary benchmarks. I’ve used this strategy for many years in Antarctica and found it to yield total vertical positioning uncertainty somewhere in the 1.5-2.5 m range, depending on how closely spaced the benchmarks and sample sites are. This strategy does require a fancy GPS unit, but it is fast (because you only need to do a couple of 30-min occupations per day) and light (because you can minimize carrying around the heavy antenna and batteries). This is generally my preferred approach when possible — it’s not a prohibitive amount of work, but it reduces production rate errors associated with elevation measurements to essentially negligible levels.
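
For what it’s worth, the arithmetic of a barometric traverse tied to a DGPS benchmark is simple. Here is a minimal sketch that assumes an isothermal atmosphere and corrects for weather drift by interpolating between repeat benchmark readings; all the numbers are made up for illustration.

```matlab
% Barometric traverse tied to a DGPS benchmark. Assumes an isothermal
% atmosphere (scale height Rd*T/g); weather drift is removed by re-reading
% the benchmark at the end of the loop. All numbers are made up.
Rd = 287.05; g = 9.81; T = 268;            % gas constant (J/kg/K), gravity, air temp (K)
H  = Rd*T/g;                               % scale height, ~7800 m
z_bm = 1850.0;                             % benchmark elevation from DGPS, m
p_bm = [812.3 810.9]; t_bm = [0 6];        % benchmark pressures (hPa) and times (h)
p_site = 795.6; t_site = 2.5;              % sample-site pressure (hPa) and time (h)
p_ref = interp1(t_bm, p_bm, t_site);       % benchmark pressure at the time of the site reading
z_site = z_bm + H * log(p_ref/p_site);     % site elevation, m
fprintf('site elevation: %.1f m\n', z_site);
```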

Summarizing the discussion up to now, a decent-quality handheld GPS is just barely good enough for exposure dating in that errors in elevations measured using this method transfer to 0.5-0.7% errors in production rate estimates. Note that I’m pretty sure your iPhone is substantially worse, just due to the antenna size and geometry, but I have not attempted to verify that. If you can, though, you should do better, either with a more elaborate GPS unit or some combination of DGPS benchmarks and barometric survey.

There is, however, one final important topic on the subject of GPS measurements of sample elevations. What we actually want to measure for exposure dating is the elevation of a sample with respect to sea level. Elevation-atmospheric pressure models are indexed to the bottom of the atmosphere, which is sea level, so we need to know how far above sea level we are. However, for GPS data reduction it is common to work not in a sea-level reference frame but in an ellipsoidal reference frame in which the Earth is represented by a simplified ellipsoid rather than the somewhat lumpy shape of the actual surface of the ocean (generally referred to as the geoid). This is not the place to launch into a full discussion of the differences between ellipsoid and geoid elevations (good places for this are here or here), but the point is that they are different. In some parts of the Earth they are quite different: here is a map (from this site) showing the difference between sea level (in this case represented by the EGM2008 geoid model) and the WGS84 ellipsoid, which is commonly used in GPS data reduction:

The details are beyond the scope of this post, but the point is that a GPS elevation computed with respect to an ellipsoid is not a sea level elevation, and can be tens of meters different from the correct sea level elevation. I am pretty sure there are a lot of errors in the exposure-dating literature where ellipsoid heights have been incorrectly reported instead of sea level heights. Be careful.
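
The correction itself is just a subtraction, but you have to remember to do it, and to keep track of which surface your processing software is reporting heights relative to. A sketch, with a made-up geoid height:

```matlab
% Converting a GPS ellipsoid height to a sea-level (orthometric) height.
% N is the geoid height (geoid minus ellipsoid) from a model such as EGM2008;
% the value here is made up -- look it up for your site.
h = 1234.5;        % ellipsoid height from GPS processing, m
N = -30.2;         % geoid height at the site, m (assumed example value)
H = h - N;         % sea-level height, m (1264.7 m in this example)
```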


The uneven distribution of production rate calibration data

February 2, 2017

This post is about the distribution of production rate calibration data, and whether it’s good enough or not.

The basic background here is that “production rate calibration data” are measurements of cosmogenic-nuclide concentrations from sites where we already know the exposure age of a surface from some sort of independent geochronology. In the simplest possible case, the way surface exposure dating works is that we go and measure the concentration of some cosmic-ray-produced nuclide (call this N, with units of atoms/g) in a rock surface. If we know what the production rate (P, units of atoms/g/yr) is, we can just divide N by P to get the exposure age of the surface in years (call it t). This works great if you know what P is. We can use physical first principles and measurements of the modern cosmic-ray flux to put together “scaling methods” or “scaling models” that describe how P should vary with elevation and position in the Earth’s magnetic field. Then, to use a scaling model to estimate the actual value of a production rate at a particular location, we need to calibrate the scaling model to actual measurements of the production rate, and, as noted above, we do this by measuring nuclide concentrations in surfaces where we already know the exposure age: we measure N and divide by t to get P.
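
With radioactive decay included, the exposure-age and calibration calculations are just inverses of each other. A minimal sketch, ignoring erosion, muons, and scaling, and using made-up numbers:

```matlab
% Simplest case, decay but no erosion: N = (P/lambda)*(1 - exp(-lambda*t)).
% Made-up numbers; real calculations also handle scaling, erosion, and muons.
lambda = log(2)/1.387e6;                % Be-10 decay constant, 1/yr
P = 4.1;                                % assumed local production rate, atoms/g/yr
N = 49000;                              % measured concentration, atoms/g
t = -log(1 - N.*lambda./P)./lambda;     % exposure age, yr (~12 kyr here)
% Calibration is the inverse problem: t is known independently, solve for P
t_indep = 11900;                        % independent age of the surface, yr
P_cal = N.*lambda./(1 - exp(-lambda.*t_indep));   % calibrated production rate, atoms/g/yr
```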

So to summarize, if we want to estimate production rates everywhere globally, we do that by formulating a scaling model and then fitting the scaling model to measured production rate calibration data. This also allows us to evaluate scaling models by asking whether they fit the calibration data, or not. Because production rates vary by a lot (factors of 10 or more) within the part of the Earth where we’d like to be able to do exposure dating (at pretty much all latitudes from sea level up to about 4-5 km elevation), it is evident that to accurately calibrate a scaling model, and to accurately test whether a scaling model either works at all or works better than another scaling model, we want to have calibration data from the entire range of scaling space, that is, that span a large range in both latitude and elevation. We’d also like to have calibration data that span a large range of true exposure ages (in order to test time-dependent scaling models), but I’ll take that as a secondary issue for now and save it for some other time.

So the question is, do we have production rate calibration data that span the entire useful part of production rate scaling space? We certainly have a lot of production rate calibration data: the ICE-D:CALIBRATION database at this writing includes 72 distinct sites where we have Be-10, Al-26, or He-3 production rate calibration data. There are also a lot of additional sites where we have C-14 or Cl-36 calibration data; we’ll leave that for now on the basis that these nuclides are less commonly measured. Mostly this proliferation of calibration data is the result of a greatly increased rate of calibration data collection during the past few years, both as part of the CRONUS-Earth and CRONUS-EU projects and by other groups. The following image, which is taken from the front page of that database, shows the distribution of the Be-10, Al-26, and He-3 data geographically and in scaling space.

[Figure: world map of calibration sites and latitude-elevation distributions of the Be-10, Al-26, and He-3 calibration data, from the ICE-D:CALIBRATION database]

In the world map, the size of the circles indicates the number of different samples collected at each site. Although the world map reveals the interesting phenomenon that there are exactly zero production rate calibration measurements for these nuclides in the entire continent of Asia, the three lower panels are more relevant in answering the question of how well the calibration data are distributed in scaling space. From left to right, they show the distribution in latitude (a simple proxy for position in the magnetic field) and elevation for Be-10 (red), Al-26 (green), and He-3 (gray) calibration data. In these plots, the size of the circle reflects how many measurements of the respective nuclides were made at each site.

This shows that the available calibration data do span a wide range in latitude. They do span a wide range in elevation. But their distribution across scaling space is not even close to uniform. Rather, with the exception of a couple of He-3 calibration sites near sea level at low latitude, latitude and elevation are highly correlated for these sites, such that they nearly all lie in a fairly narrow zone of scaling space that is at high elevation at low latitude, and at low elevation at high latitude. The opposite corners of scaling space — high-elevation-high-latitude and low-elevation-low-latitude — are nearly empty.

To figure out why, consult the following figure from a well-known 1990 article in Scientific American by George Denton and Wally Broecker.

[Figure: latitudinal distribution of modern and last-glacial-maximum snowlines, from the 1990 Denton and Broecker Scientific American article]

This shows the latitudinal distribution of the global snowline, which essentially controls where glaciers exist. Naturally, it’s lower at the poles and higher in the tropics, where it’s warmer. The highest snowlines are actually in the subtropics due to the relative aridity there compared to the equator, but that’s beside the point for now. The red line is the modern snowline; the blue line is the (lower) snowline during the last glacial maximum.

The distribution of global snowlines is a thoroughly well-discussed phenomenon and this diagram, or something like it, appears lots of other places. Here is a nice example from a Russian glacier atlas that shows the same thing for South America (north is to left):

Porter diagram of Andean snowline elevations from Russian atlas of global snow and ice resources. MZS library.

The point here is that the majority of production rate calibration data are from glacial deposits — they’re sites where glacial moraines with boulders on them have been independently radiocarbon-dated. Thus, the very non-uniform distribution of calibration data in scaling space just reflects the fact that most calibration data are from glacial deposits formed during or after the LGM, and these deposits, by nature, are lower in polar regions and higher in tropical regions. Here are the locations of Be-10 and Al-26 calibration data pasted onto the Denton-Broecker diagram:

[Figure: Be-10 and Al-26 calibration site locations overlaid on the Denton-Broecker snowline diagram]

Although a lot of the Northern Hemisphere calibration sites are well below mountain snowlines because they are associated with ice-marginal deposits of the Laurentide and Greenland ice sheets (and there are a few sites that aren’t associated with glaciers or ice sheets at all), the calibration data basically follow the LGM snowline, because that’s where you find LGM and late-glacial moraines. Again, the main exceptions are in the He-3 data set (not shown here; see above), where there are a few sites that consist of dated lava flows or other volcanics at low elevation and low latitude.

So the production rate calibration data that we have, although fairly abundant now, are in fact not very evenly distributed across scaling space. The next question is, do we care? For us to care, two things need to be true.

First, we have to want to exposure-date something that is not near the Pleistocene snowline, e.g., something that is at high latitude and high elevation, or low latitude and low elevation. Do we want to do this? Consider what is by far the most common application of exposure-dating, which is, of course, dating glacial deposits. Like calibration data, glacial deposits tend to be near the global snowline, so we  might not be worried about this. To find out whether we are worried, let’s look at the distribution of where glacial geochronologists actually collect exposure-age data.

[Figure: latitude-elevation distribution of exposure-age data from glacial deposits (ICE-D:ANTARCTICA in green; alpine-glacier prototype database in gray)]

This shows the latitude-elevation distribution of exposure-age data associated with glacial deposits from both the ICE-D:ANTARCTICA database (green dots; includes measurements of many different nuclides) and a prototype of a similar database aimed at global exposure-age data for alpine glacier moraines (gray dots; includes Be-10 and Al-26 data from a future ICE-D:ALPINE database). As expected, most of these data that aren’t in Antarctica are clustered around and a bit below Pleistocene snowlines:

[Figure: the same exposure-age data overlaid on the Denton-Broecker snowline diagram]

Now add Be-10 and Al-26 calibration data to these data (without the Denton/Broecker diagram):

[Figure: exposure-age data and Be-10/Al-26 calibration data together in latitude-elevation space]

For the most part, the places where we want to actually apply exposure-dating to glacial deposits are located very close in scaling space to the calibration data that we have. That’s good. In most glacier-chronology situations, we don’t have to worry about extrapolating production rate estimates away from the calibration data. However, there are some exceptions. Of course, if we’re trying to exposure-date something that’s not a glacier — landslides, fault scarps, shoreline features, whatever — we could be anywhere in scaling space and could require large extrapolations. But even in the more restricted province of glacier chronology, the most glaring mismatch in this figure (there are others less glaring) is that there are a lot of situations where we want to date glacial deposits that are at relatively high elevations, 2000-3000 m, near both poles…but we don’t have any calibration data at all at high elevations near the poles. So the first requirement for whether or not we care about the restricted distribution of calibration data is, in fact, met. We definitely want to exposure-date things in parts of scaling space that are empty of calibration data.

The second criterion is whether we can trust the extrapolation of production rates from where they are well constrained by calibration data, near the global snowline, to locations that are not near the global snowline. We have scaling models that predict the variation in production rates everywhere in scaling space, not just near the calibration data. However, we can’t test whether they are correct in areas where we don’t have calibration data. So the question is, do we know that the scaling models are accurate in unsampled parts of scaling space, in particular at high elevation near the poles? Obviously, we can’t answer this question by comparing predictions to calibration data, because we don’t have any calibration data in that area. One alternative, but much less satisfying, way to answer the question is to ask whether different scaling models predict the same production rates for this region, or not. If they do, we might not be that worried; if they don’t, we might be more worried. Here is a comparison of production rate estimates for high latitude derived using the two most au courant scaling methods: that based on the work of Devendra Lal (the “St” scaling method in current jargon) and that based on the work of Nat Lifton and Tatsuhiko Sato (“LSDn”):

[Figure: left, production rates at high latitude predicted by St (blue) and LSDn (red) scaling as a function of elevation; right, the ratio of the two]

Here I have calibrated both scaling methods using the CRONUS-Earth “primary” calibration data set for Be-10, and plotted the resulting predictions for high latitude (zero cutoff rigidity, mean Holocene solar modulation parameter). Left panel shows the predicted production rates (at high latitude using an Antarctic atmosphere approximation) for the St (blue) and LSDn (red) scaling models; right panel shows the ratio of the two production rates. At low elevation, both scaling models predict production rates that are within a few percent of each other (because they are both pinned to calibration data at relatively low elevations at relatively high latitudes), but the predictions diverge at higher elevations. At 3000 m in Antarctica, LSDn scaling predicts ca. 15% higher production rates than St scaling (note that this is the same phenomenon discussed in this blog posting about saturated surfaces in Antarctica). This is a pretty big difference. As noted in the other blog entry about saturated surfaces, we can potentially use near-saturation Be-10 and Al-26 measurements from Antarctica to in part overcome the lack of independently dated calibration data in this part of scaling space. This exercise, I argue, indicates that LSDn scaling is accurate, and St scaling is not, for high latitude and high elevation. We could also potentially address this issue using samples that are saturated with respect to in-situ-produced cosmogenic C-14 in quartz, although this was attempted by Borchers and others and was inconclusive. Overall, however, this situation could be very much improved with new production rate calibration data from high elevation near the poles.

Summary: the existing set of production rate calibration data, although very far from uniformly distributed in scaling space, is pretty good if you are interested in using cosmogenic-nuclide exposure dating for glacier chronology, with one exception. The exception is that we could really use more calibration data from high elevation near the poles. This is actually a lot harder than it sounds, because there is, for the most part, not much in the way of organic life forms living at high elevation near the poles. By extension, there is not much available for radiocarbon dating, which makes it rather hard to independently date landforms. But this is clearly a problem that could use some creative ideas about how to overcome it.

What is the 26/10 production ratio anyway?

January 10, 2017

This rather long post covers the issue of the Al-26/Be-10 production ratio. The value of this ratio is important for several reasons.

First, it gives some information about whether a surface sample has experienced a single period of exposure at the surface. The basic, although slightly oversimplified, concept here is that if you measure a ratio that is a lot lower than the production ratio, your sample has spent a significant fraction of its life buried far enough below the surface to partially or completely stop the cosmic ray flux and allow previously produced nuclide inventories to decay…so if you compute the exposure age of the sample based on the assumption that it hasn’t been buried, you’ll be wrong.

Second, and more importantly, if you have a sample that is buried below the surface now, and you measure this ratio, the difference between the measured ratio and the production ratio tells you how long the sample has been buried. This concept, although also slightly oversimplified here, is the foundation of the method of burial dating.

The currently accepted value of the 26/10 production ratio, when appropriately normalized to the “07KNSTD” and “KNSTD” isotope ratio standardizations for Be-10 and Al-26, respectively, is 6.75. Interestingly, an uncertainty is almost never quoted for or applied to this number. Also interestingly, as far as I can tell, this number is just taken from a paper written in 1989 by Kuni Nishiizumi and colleagues. They stated it as 6.1; restandardizing Be-10 measurements from the then-current “KNSTD” standardization to the now-current “07KNSTD” corrects it to 6.75. In the widely utilized 2008 paper describing the online exposure age calculators formerly known as the CRONUS-Earth online exposure age calculators, we just accepted this number, mainly because we concluded that production rate calibration data that were available at the time agreed with the Nishiizumi estimate, and we didn’t propose any change. Subsequently, most papers about burial dating (which is where this number is really important) adopted this value or something very close to it (exactly how you do the KNSTD–>07KNSTD conversion affects the actual number at rounding error level). To summarize, we have all basically ignored this issue for 25 years. Why? Because 6.75 works. Most measurements on surface samples seem to more or less agree with this canonical value, and we have not ever found that burial ages computed using a ratio of 6.75 have been inconsistent with other age constraints.

Unfortunately, it may now be time to revisit this issue, because in recent years there has been (i) a lot more collection of production rate calibration data, as well as (ii) a lot more attention to first-principles modeling of production processes and their spatial variation. So in the rest of this post I’ll try to exploit this new information to figure out what the 26/10 production ratio really is, and, more importantly, to determine if we can just keep on not worrying about this issue as we have done successfully for the last 25 years.

To clarify what exactly we are talking about, in this post I am talking about the ratio of total Al-26 and Be-10 production rates at the surface, that is, including both spallation and muon production. In some work (including lots of mine) there is some confusion about whether an “Al-26/Be-10 production ratio” refers to spallogenic production only, or to total production. Fortunately, most of the time these are very close to the same thing, so any consequences of this confusion are small. But here, I am talking about total production.

The most straightforward way to determine the production ratio is simply to do what Nishiizumi and others did. Find some samples that have the following properties: they have been exposed at the surface for a single period of time; at the beginning of this period their nuclide concentrations were zero; surface erosion during the exposure period was negligible; and the duration of exposure is much less than the half-life of either Al-26 or Be-10. Then measure the concentration of both nuclides, and take the ratio. In the Nishiizumi paper these are samples of glacially polished bedrock from the California Sierra Nevada that have been exposed since deglaciation ca. 13,000 years ago.

The easiest way to find a larger set of samples that have these properties is, in general, to look at samples collected for production rate calibration purposes. Selecting samples suitable for measuring the 26/10 production ratio, of course, is less restrictive than selecting ones suitable for production rate calibration, because to measure the production ratio, you don’t have to know the exact duration of exposure, just that the exposure time has been a lot shorter than the Al-26 half-life. So lots of other samples that aren’t suitable for production rate calibration could be used to estimate the 26/10 ratio. However, existing production rate calibration data sets are a convenient way to find a set of samples that have been carefully selected to have simple exposure histories and negligible erosion. Looking in the ICE-D production rate calibration database and removing one extreme outlier measurement (26/10 = 14; not sure what went wrong here), one finds 90 measurements of both Al-26 and Be-10 on samples collected for calibration purposes. Each of these is a distinct measurement in which both nuclides were measured in a particular aliquot of quartz. Here is the distribution of 26/10 ratios in these samples:

[Figure: distribution of measured 26/10 ratios as a histogram (left) and a normal kernel density estimate (right)]

On the left this is shown as a histogram and on the right as a normal kernel density estimate. A couple of points about this:

The average of all these data (shown as a red line on both plots) is 6.76. OK, we’re done. Back to sleep for another 25 years.

Just kidding. We’re not quite done. One initial thing that is notable about this data set is that it is skewed to the low side. This observation is consistent with the easiest way to make an error when measuring the 26/10 ratio in a sample. Specifically, one measures the Al-26 concentration by multiplying a measured total amount of Al by a measured Al-26/Al-27 ratio. The total Al measurement is made by taking aliquots of a solution of quartz dissolved in HF, evaporating the HF in the aliquots, redissolving the Al in a dilute acid, and measuring the Al concentration by ICP. If you are not careful in this process, you can retain insoluble Al fluorides in the final step, which means that the Al concentration in solution underestimates the total amount of Al in the sample, and leads to an underestimate of the 26/10 ratio. The mild negative skewness of these data suggests that this may have been a problem for some of the measurements, in which case the mean value of these data might slightly underestimate the true production ratio (the median value is 6.84).
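
For what it’s worth, the arithmetic makes the direction of this error obvious: the Al-26 concentration scales directly with the measured total Al, so any unrecovered Al makes both the Al-26 concentration and the 26/10 ratio too low. A sketch with made-up numbers:

```matlab
% How an Al-26 concentration is assembled from an ICP total-Al measurement
% and an AMS 26/27 ratio. All numbers are made up for illustration.
NA = 6.022e23;  A_Al = 26.98;           % Avogadro's number; Al atomic weight, g/mol
m_qtz    = 20;                          % dissolved quartz, g
Al_total = 2.1e-3;                      % total Al recovered, g (~105 ppm in the quartz)
R_26_27  = 2.5e-13;                     % AMS Al-26/Al-27 ratio
N26 = (Al_total/A_Al) .* NA .* R_26_27 ./ m_qtz;   % Al-26 atoms per g quartz (~5.9e5)
% Insoluble Al fluorides left behind reduce Al_total, and therefore N26 and
% the 26/10 ratio, in direct proportion.
```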

For a slightly different perspective on this, then, let’s try estimating the production ratio from calibration data using a slightly different workflow, as follows:

  1. Start with the “CRONUS-Earth primary calibration data sets” for Be-10 and Al-26, again as represented in the ICE-D calibration database.
  2. Using these data and the production rate calibration code from the version 3 online exposure age calculator, determine best-fitting values of reference production rates for Be-10 and Al-26. I’ll use two scaling methods: the non-time-dependent “St” scaling and the time-dependent “LSDn” scaling.
  3. Also compute production rates due to muons for Al-26 and Be-10; add production rates by spallation and muons together; divide to get an estimate of the 26/10 production ratio.

The main difference between this method and simply averaging all direct measurements of the 26/10 ratio as I did above is just that it allows us to incorporate data from calibration samples where only one nuclide was measured: we can utilize Be-10 data that lack corresponding Al-26 measurements, and vice versa.  In addition, all measurements represented in the CRONUS-Earth “primary” data set are relatively recent, meaning that the folks who did them certainly should have been careful about insoluble fluorides. In any case, this workflow uses somewhat different data and should yield an estimate of the 26/10 production ratio that is somewhat independent from the simple average above.

On the other hand, it brings up something new. The “St” scaling method, as implemented in the various online exposure-age calculators, assumes that Al-26 and Be-10 have the same latitude/elevation scaling, but the proportion of total production that is due to muons will vary with elevation and latitude (mostly with elevation). In addition, the relative proportions of fast muon production and negative muon capture change with elevation. As these three production pathways (spallation, fast muons, negative muon capture) all have different 26/10 production ratios, this means that the total 26/10 production ratio must vary with elevation (and a little with latitude as well). The point is that what we know about production rate scaling predicts that the 26/10 production ratio isn’t a constant value as we have been assuming, but will vary with elevation.

This variation is predicted to be close to negligible for the “St” scaling method: muon production makes up at most 2 or 3 percent of total production, so variations in this already very small fraction result in only per-mil-level variations in the production ratio. Thus, for this scaling method, we can mostly disregard this issue. The “LSDn” scaling method, in contrast, predicts not only variation in the muon contribution, but also, more importantly, that spallogenic production of Al-26 and Be-10 will scale differently with magnetic field position and elevation. The reason for this is that this scaling method uses the results of an atmospheric particle transport model to predict the energy spectrum of fast neutrons at your location, and then multiplies this spectrum by a set of cross-sections for Al-26 or Be-10 production. These cross-sections depend on energy, and they’re different for the two nuclides. The following shows the neutron cross-section estimates used in LSDn scaling, as a function of neutron energy:

[Figure: neutron interaction cross-sections used in LSDn scaling as a function of neutron energy]

Green is the neutron interaction cross-section for production of Al-26 from Si; blue is Be-10 from O; red is Be-10 from Si. The threshold energy for Al-26 production is lower than that for Be-10, so in parts of the atmosphere where neutron energy is lower (that is, at greater atmospheric depths or lower elevations), Al-26 production is favored relative to Be-10 production, and the 26/10 production ratio is predicted to be higher. So once we know that Al-26 and Be-10 have different threshold energies for production (we’ve known this for a while), this implies that the spallogenic production ratio has to vary with location; it can’t not do so. Similar work using an atmospheric particle transport model by Dave Argento (here) also highlighted the prediction that the 26/10 production ratio should change with elevation; he calculated that the spallogenic production ratio should be near 7 at sea level and 6.75 at high altitude.

The main point here is that first-principles physics and calculations based thereon predict that the 26/10 production ratio should vary with location. So let’s look at the data set of measured ratios (the data presented in histogram form above) as a function of elevation, to see if this variation is evident. Here are measured ratios plotted against elevation:

[Figure: measured 26/10 ratios plotted against elevation]

Here are the same data with (i) the average (6.76), and (ii) a linear regression against elevation (6.94 - 1.022e-4 * elevation) plotted in red:

[Figure: measured 26/10 ratios vs. elevation, with the mean (6.76) and a linear regression shown in red]

Although the correlation between elevation and measured 26/10 is weak (correlation coefficient is -0.25; p = 0.02), it’s not zero. And a simple linear regression through these data agrees very closely with the prediction in the Argento paper.
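
The regression and correlation quoted above come from something like the following, assuming vectors elv (elevation in meters) and r (measured 26/10 ratios) for the 90 paired measurements.

```matlab
% Linear regression of measured 26/10 ratio against elevation, assuming
% vectors elv (m) and r (measured ratios).
c = polyfit(elv(:), r(:), 1);           % linear fit: r = c(1)*elv + c(2)
[R, P] = corrcoef(elv(:), r(:));        % correlation coefficient and p-value
fprintf('26/10 = %.2f %+.3e*elv; R = %.2f, p = %.2f\n', c(2), c(1), R(1,2), P(1,2));
```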

This shows that the data are consistent with an elevation dependence to the production ratio, even though they don’t require it at very high confidence. But to back up a bit, this whole discussion of elevation dependence interrupted what we were trying to do: estimate the 26/10 production ratio by separately calibrating the Al-26 and Be-10 production rates. Having now shown that no matter how we do this, we always predict some geographic variation in the ratio, we’ll continue to look at this as a function of elevation. Here is the result for the ‘St’ scaling method:

[Figure: measured ratios with the 26/10 production ratio predicted by ‘St’ scaling as a function of elevation]

Although it is a bit hard to tell this from the figure, there are separate green and blue dashed lines showing predictions for low latitude (high magnetic cutoff rigidity) and high latitude (low cutoff rigidity), respectively. As discussed above, although geographic variation is predicted for this scaling method, it’s very small. Reference production rates fit to the calibration data predict a 26/10 production ratio of 7, which is, in fact, a bit higher than the estimate we obtained above from simply averaging the direct measurements. Again, what seems to be the most likely explanation for this is just that there are more data involved, and we may have excluded some of the low outliers by choosing a more recently generated data set.

Here is the result for the “LSDn” scaling method (again, this is total production including muons):

[Figure: measured ratios with the 26/10 production ratio predicted by ‘LSDn’ scaling as a function of elevation]

As discussed, this scaling method predicts a larger elevation dependence as well as a larger difference between high (green) and low (blue) cutoff rigidity. Again, it’s in decent agreement with the simple linear regression on these data as well as the predictions in the Argento paper (the latter isn’t surprising; they’re based on similar physics). To summarize, put all these predictions together with the data:

[Figure: measured 26/10 ratios together with all of the predictions above]

My summary of this is as follows. The data set of direct ratio measurements on individual samples is, by itself, consistent with the canonical value of 6.75, although there is some evidence that this number might be skewed a bit low due to measurement errors. Basic physics of Al-26 and Be-10 production, on the other hand, indicates that the production ratio should be elevation-dependent. Scaling models that use this physics (e.g., LSDn), when fitted to calibration data, predict that the ratio should be in the range 7-7.1 at low elevation and in the range 6.5-6.75 at high elevation; these predictions are also completely consistent with the data set of direct measurements. Because the data set is rather scattered, mainly just due to measurement uncertainty, it does not appear to provide a means of choosing between scaling models at high confidence. So at this point, it appears that (i) simply assuming 6.75 always, and (ii) using an elevation-dependent value as predicted by LSDn scaling, are both consistent with available data. Choose (ii) if you like physics, and (i) if you like simplicity. Overall, I think I lean towards (ii), that is, physics.

For perspective, the difference between 6.75 and 7 is 3.7%. That’s not insignificant — it maps to age differences of 5-15% in typical burial-dating applications — but we’re haggling over fairly small differences here.
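
To put numbers on that: in the simple case where post-burial production is negligible, the burial age only depends on the ratio of the production ratio to the measured ratio, so swapping 6.75 for 7 shifts every burial age by a constant amount. A sketch with commonly used half-lives and a made-up measured ratio:

```matlab
% Effect of the assumed production ratio on a simple burial age:
% t_burial = ln(Rprod/Rmeas)/(lambda26 - lambda10), ignoring post-burial production.
l26 = log(2)/0.705;  l10 = log(2)/1.387;   % decay constants, 1/Myr
Rmeas = 3.5;                               % measured 26/10 ratio (made-up example)
t675 = log(6.75/Rmeas)/(l26 - l10);        % burial age assuming the ratio is 6.75
t700 = log(7.00/Rmeas)/(l26 - l10);        % burial age assuming the ratio is 7
dt   = t700 - t675;                        % = ln(7/6.75)/(l26 - l10), ~0.075 Myr
```

The shift is about 0.075 Myr regardless of the measured ratio, which is 5-15% of a 0.5-1.5 Myr burial age.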

Finally, for completeness I’m going to cover one more source of indirect evidence for what the value of the production ratio is. This stems from the observation that if the production ratio at low elevation is 7 rather than 6.75, we should probably recalculate a bunch of previously published burial ages. Thus, it might be a good idea to look into whether changing the production ratio causes any serious problems with existing burial-dating results. One could pursue this either by recalculating a bunch of burial ages and seeing if you find any conflicts with independent age constraints, or by a more explicit method in which if (i) you have a buried sample, (ii) you know its age independently, and (iii) you know the decay constants for Al-26 and Be-10, you can measure both nuclides in your sample and back-calculate the surface production ratio. This latter method features in a recent paper by Zhou and others, who burial-dated a fluvial gravel whose age was bracketed independently by argon-dated lava flows. They back-calculated the production ratio in this way to be between 6 and 7.2 (at 68% confidence), which is not particularly precise, but shows good agreement with predictions and surface measurements. Good. Pass. On the other hand, in a 2010 paper about glacial stratigraphy in the central US, we dated a paleomagnetically normal unit three times by the burial-isochron method (this is the “Fulton” till in that paper). Magnetically normal means it must be younger than the Brunhes-Matuyama reversal at 0.78 Ma; we dated it at 0.8 +/- 0.05 Ma, which is just barely consistent with the constraint. Assuming the 26/10 ratio to be 7 rather than 6.75 would make that age estimate older, 0.87 Ma, which would be badly inconsistent with the paleomagnetic stratigraphy. But — hold the presses — in the time period between that paper and now there’s also been a change in best estimates for muon interaction cross-sections, which acts to bring that age estimate back down to 0.81 (still +/- 0.05) Ma, and this remains consistent with the paleomagnetic boundary. So this highlights that there are a lot of moving parts involved in recalculating old burial age data, but for the one case I can think of immediately in which a burial age estimate bumps up against an independent age closely enough to provide a serious constraint on the 26/10 production ratio, the hypothesis that the production ratio is near 7 at sea level passes.

So what ratio to use for burial dating? As noted above, physical arguments appear rather compellingly in favor of an elevation dependence for the ratio. If this is true, then if the average ratio computed from a set of samples that span a range of elevations is 6.75,  the true production ratio must be higher at low elevation (conveniently, Nishiizumi and others made their measurement at middle elevations). I think it seems sensible to include this effect in burial-age calculations.

P.S.: I still haven’t addressed the issue of what the uncertainty in the production ratio is.


Ten years of the online exposure age calculator formerly known as…

January 10, 2017

It’s recently come to my attention that I, like everyone else, unfortunately seem to have completely missed the ten-year anniversary of the online exposure age calculators formerly known as the CRONUS-Earth online exposure age calculators(*). Fortunately, the possibly more significant milestone of 1 million calculations served hasn’t happened yet: I expect that to happen sometime around the end of 2017. In any case, though, here are the latest statistics on usage of the online calculators at hess.ess.washington.edu. The following plot is denominated in number of exposure ages calculated per month — that is, if you submit ten samples at the same time, it counts as ten  — and I have made an effort to exclude samples provided as test data and also samples that are obviously synthetic (e.g., having zero or integer latitude and/or longitude). So this should be a decent approximation of how much actual use is happening. In any case the following shows more than ten years of usage by month.
[Figure: online calculator usage per month, by calculator version and function]

The axes are for exposure age calculations (top), erosion rate calculations (middle), and production rate calibration (bottom). The color-coding reflects the various calculator versions: gray is the short-lived version 1; green the also fairly short-lived version 2.0; red the excessively-long-lived-and-well-past-sell-by-date version 2.2, and blue the recently introduced version 2.3. During the time period outlined in black in mid-2014, usage data weren’t being consistently logged due to various technical problems associated with hardware failures, operating system upgrades, and the switch from MATLAB to Octave. Thus, no data from that period are shown here. Shortly after that time period, the bars for two one-month periods go well off scale and are not shown here in their entirety: persons unknown, for unknown reasons, dropped tens of thousands of samples into the system all at once on several occasions. I have not yet investigated in detail whether they really pasted in that many samples by hand, or wrote some software to do that, or what exactly they were doing.

Agglomerating all the various functions and versions into a single number for total usage gives the following, again in units of exposure ages or erosion rates calculated per month:

[Figure: total calculator usage per month, all functions and versions combined]

There are a few interesting things about this.

First of all, overall usage continues to grow steadily. I guess this is not a total surprise, because even if the rate of new exposure age generation isn’t increasing, the total inventory of data that requires periodic recalculation continues to get bigger. But it does make you wonder exactly where the saturation point is.

As usual, calculating exposure ages dominates: the rate of erosion rate calculations is well behind. But usage rates are surprisingly high overall: average usage for the last year is about 10,000 samples monthly. That’s a lot of exposure age calculations.

Second, there hasn’t yet been any discernible impact from the introduction of two new online calculator services: the new CRONUS-Earth product, ‘CRONUScalc,’ developed by Shasta Marrero, and CREp, developed by Leo Martin, PH Blard, and colleagues. These have been available for approximately a year now without, apparently, making a significant dent in usage of the older system. This may just reflect the fact that right now everyone has to calculate everything three times, using all three systems, to make sure they’re not missing anything important. This would tend to imply that the brutal-competition-and-shakeout phase will come later. We can’t all be Facebook: someone has to be MySpace.

My bid to not become MySpace has been to put together a version 3 calculator with many technical and speed improvements that does exposure-age and production-rate-calibration calculations and competes directly with CREp and CRONUScalc (but is faster, better, and cheaper, with more features, of course, even though CREp wins the swimsuit competition[1] hands down and walking away). I am still working on version 3 and it doesn’t look like it has yet begun to cannibalize usage of the older version 2.x family, but the goal is to make that happen soon so that I can stop worrying about all the somewhat-out-of-date 2.x code. Version 3 usage, on the same time scale as above, is here:

[Figure: version 3 usage by month]

In this plot, the upper panel shows exposure age calculations and the lower panel shows use of the production rate calibration interface (which has only existed for less than a month). Light green shows programmatic requests using the web service API, and dark green shows requests by actual people pasting data into the data entry webpage. The web service requests, which comprise the vast majority of usage, reflect the fact that the version 3 code is the back end for the antarctica.ice-d.org database: any web page served by the ICE-D server uses the version 3 web service API to compute exposure ages, and those numbers bulk up the light green bars here. As this setup generates lots of requests with not much user effort, I am a bit surprised that these numbers are not bigger. However, so far, the direct use of version 3 through the web page interface (that is, the dark green) is nowhere near that of the version 2.x family. I’ve pretty much completed the version 3 code at this point, though, so hopefully that’ll change soon.

Summary: 10,000 exposure age or erosion rate calculations every month is a lot of exposure age or erosion rate calculations.


[1] A “swimsuit competition” is a typical component of contests known as “beauty pageants” that are popular in many parts of America as well as a few other countries worldwide. U.S. President-elect Donald Trump has been closely associated with promoting and judging beauty pageants in the past. In a beauty pageant, young women compete on the basis of their appearance, personality, and accomplishments, with the aim of winning the right to represent their city, state, country, or other group either in other beauty pageants or for charitable or publicity purposes. The competition involves a series of events, usually including one or more in which contestants appear before a panel of judges while clad in garments such as swimsuits (typically one-piece) or formal dresses. These events highlight the appearance, grooming, and fashion sense of the contestants, in contrast to other events that may focus more on their skills and accomplishments. Although such events have incurred widespread criticism on the basis that they perpetuate obsolete gender roles and validate discriminatory behaviour based on appearance, the term “swimsuit competition” remains commonly used in colloquial English to denote contests or evaluation criteria that emphasize appearance over substance. For example, one might, hypothetically, state that “the selection process for GSA Fellows is a total swimsuit competition.” In the present blog entry, the phrase is used rhetorically to highlight the professionally designed web pages and online user interface for the CREp software, which contrast favorably with the near-total lack of attention to graphic design and visual communication displayed by other online exposure age calculator websites. It is not intended to imply that CREp would not also score well in, for example, the talent or personal interview portions of the competition.

It’s all about $\mu$

November 4, 2016

[Image: the “no muons” logo]

By far the biggest pain in the neck in the overall field of cosmogenic-nuclide geochemistry is the need to compute nuclide production rates due to muon interactions. Production rates due to muons are quite small relative to total production rates at the surface, but they’re not small enough that you can ignore them completely. In the subsurface, they’re the majority of production, so are actually important for applications such as burial dating and depth profile dating, but they’re complex to calculate and hard to verify due to a general lack of subsurface calibration data. It’s much more difficult to obtain useful calibration data for production rates due to muons than for surface production rates due to spallation, mainly because there aren’t really any geologic processes that completely remove the entire preexisting muon-produced nuclide inventory, which extends to tens of meters depth, all at once, leaving a blank slate. And if there were, production rates are really low, so the blank slate would have to sit around for a really long time  before you could do the measurements accurately, in which case something else bad would probably happen to it. So the only hope for decent calibration data is to collect tens-of-meters-long cores from bedrock surfaces that have been sitting around at extremely low erosion rates for millions of years, and there aren’t a whole lot of such things.

The last couple of years, largely due to work in the CRONUS-Earth and CRONUS-EU projects, has either improved or unimproved this situation by generating some new methods of computing muon production rates and also by collecting several new bedrock cores. This increases the overall complexity of the situation as a whole, but also helps a bit by providing enough data to make a decent start at evaluating which methods of computing muon production rates work best. In the last few months I’ve gotten tired of being perpetually confused on this subject, so I’ve waded through all these data and put together an extraordinarily long, tedious, and pedantic document on the subject of benchmarking muon production rate calculation methods against the available calibration data set. I imagine that I will submit this to an actual journal at some stage, but for now it is here (the PDF file). Warning: it is long, tedious, and pedantic. However, if you want my best shot at a synoptic view of all reasonably accessible knowledge that is relevant to production by muons, it might be potentially useful. Of course, it hasn’t been reviewed in detail by anyone else, so it might also be wrong. The MATLAB code that does all the calculations is also available at the same address (the zip-file).

Presumably, about six people on Earth will be interested in this document. However, I imagine that all six of those folks are also regular readers of this blog. You know who you are.  As noted, this will probably be sent off to a journal at some stage, in which case some of you will also have the truly unpleasant and unfortunate duty of reviewing it. Enjoy!

Oh yeah. The 2%-of-production-98%-of-suffering logo above is also in that directory, in case you want to put it on a T-shirt (from CafePress, for example).


Production rate scaling devious plots

October 22, 2016

A large amount of the scientific literature on cosmogenic-nuclide geochemistry uses the term “production rate scaling scheme” to refer to one of a number of computational methods used to calculate how cosmogenic-nuclide production rates vary geographically and over time. Of course, the word “scheme” is intended to have its fairly common scientific meaning of a formal mathematical procedure for carrying out some task, as one would refer to a numerical integration scheme or an implicit solution scheme. As most people are aware, however, that’s not really what’s connoted by “scheme” in the rest of the English language. An article in Physics Today from 2011 called attention to this with the following table:

[Table: alternative interpretations of “scheme,” from the Physics Today article]

This list is absolutely right on target. In the admittedly unlikely event that a non-cosmogenic-nuclide geochemist were to read the cosmogenic-nuclide geochemistry literature, they might legitimately be mystified as to why you had to choose one of several dastardly plots, whilst rubbing your hands together with an evil cackle, to compute nuclide production rates. I’m probably the worst past offender in this regard, starting with this paper. I’m not proposing to give up terms like “positive feedback” that are specifically meaningful because they are derived from the presence of positive or negative signs in mathematical descriptions of the phenomenon; but the sad fact is that “scheme” doesn’t have a whole lot of redeeming qualities. It really does sound like a devious plot, and it doesn’t have any special meaning that’s not equally well communicated by simpler words like “method” or “procedure.” We don’t need it, so let’s not use it. Henceforth, I suggest just talking about production rate scaling methods, procedures, or possibly algorithms; of these, I think I prefer “methods.” If anyone has any better ideas, let me know, but “scheme” is now out as far as I’m concerned.