
Where’s the reference production rate?

September 6, 2018

Recently I received the following question in an email:

I used the CRONUS earth calculator v2.3 to calculate the Be-10 exposure ages of the samples from [some location], but I did not find the numerical value of Be-10 reference production rate used in CRONUS v2.3. On the webpage it only said it is “derived from the primary calibration data set of Borchers and others (2016)”. Can you kindly tell me the value ? Thank you very much.

This also comes up in a lot of paper reviews. Here is an example. The text of the paper is as follows:

Reference Be-10 production rates were determined using the CRONUS-Earth “primary” calibration dataset (Borchers et al., 2016).

And the reviewer said:

…regardless of citing the source, for completeness the SLHL 10Be prod rate should be listed ( with its error)…

The purpose of this posting is to explain why one should state the production rate calibration data set used, but one should not state a value of the reference production rate. As usual, it’s a long posting, and it’s tedious because I’m going to try to do this just in words and not in math, but getting this right is important in understanding how exposure dating works and accurately communicating it in papers.

To begin with, I’ll review the overall structure of an exposure-age calculation one more time. To compute a cosmogenic-nuclide exposure age, you need three things. One, a scaling method that describes how production rates vary with location and time. Two, a set of calibration data that consists of nuclide concentrations measured at sites whose exposure age is already known from a different, independent, dating method such as carbon-14 or argon-argon, such that we know what the true average production rate at those sites has been during the period they were exposed. Three, the nuclide concentration measured at the unknown-age site that we wish to date. Having these three things, one takes the following steps.

  1. Fit the scaling method to the calibration data set. In most cases, the scaling method generates a nondimensional “scaling factor” describing how the average production rate for a particular place and age is related to the production rate at some reference location and time. So for each sample in the calibration data set, the scaling method predicts a scaling factor for that place and exposure age. We then perform a one-parameter fitting exercise in which we choose a value of the “reference production rate” — the production rate at the reference location that is used in that scaling method — that, when multiplied by the scaling factors for all the calibration sites, best duplicates the actual production rates that we measured at those sites. A simple way to think about this is to consider a particular site: at this site we divide the true production rate that we measured by the scaling factor predicted by the scaling method; the result of this calculation is an estimate of the production rate at the reference location. If we have a number of different calibration sites, we have a number of different estimates of the reference production rate, and (given certain assumptions) the value of the reference production rate that best fits the entire calibration data set is just the average of the estimates from each site.
  2. Now that we have an estimate of the reference production rate, we take the scaling factor predicted for the unknown-age site and multiply it by the best-fitting reference production rate from step (1). Then we use this production rate to compute the exposure age from the measured nuclide concentration. Note that for a scaling method in which the production rate varies in time, this calculation must be iterated rather than done only once, but the principle is the same. (A rough numerical sketch of both steps follows this list.)
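To make the two steps concrete, here is a minimal numerical sketch of the procedure. Everything in it is made up for illustration: the ages, concentrations, and scaling factors are not real calibration data, the scaling factors stand in for whatever a real scaling method would predict, and decay and erosion are ignored. But the structure is the same as what an exposure-age calculator actually does.

```python
import numpy as np

# Hypothetical calibration data: for each site, the independently known
# exposure age (yr), the measured nuclide concentration (atoms/g), and the
# scaling factor predicted by some scaling method for that site and age.
# All of these numbers are made up for illustration.
calib_age = np.array([11600.0, 13100.0, 9500.0])   # yr
calib_conc = np.array([5.2e4, 7.9e4, 3.1e4])       # atoms/g
calib_scaling = np.array([1.1, 1.5, 0.8])          # nondimensional

# Step 1: fit the scaling method to the calibration data. The measured
# average production rate at each site is N/t; dividing by the scaling
# factor gives one estimate of the production rate at the reference
# location. In the simplest case, the best-fitting "reference production
# rate" is just the mean of these site-by-site estimates.
site_prod_rates = calib_conc / calib_age            # atoms/g/yr at each site
ref_prod_estimates = site_prod_rates / calib_scaling
P_ref = ref_prod_estimates.mean()

# Step 2: apply the fitted reference production rate to the unknown-age
# site. For non-time-dependent scaling, and ignoring decay and erosion,
# the age is just N divided by the local production rate.
unknown_conc = 6.0e4      # atoms/g, measured at the unknown-age site
unknown_scaling = 1.3     # predicted by the same scaling method
t_unknown = unknown_conc / (P_ref * unknown_scaling)

print(f"Fitted reference production rate: {P_ref:.2f} atoms/g/yr")
print(f"Exposure age of unknown-age site: {t_unknown:.0f} yr")
```

The point to notice is that the intermediate quantity `P_ref` is entirely a creature of however `calib_scaling` was computed; the only actual measurements in the whole exercise are the calibration ages and concentrations and the unknown-site concentration.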

There are several important aspects of this procedure.

One, nowhere in this procedure have we ever measured the reference production rate. The actual measurements that we made are the average production rates at the calibration sites. None of the calibration sites are at the reference location (except by accident). For some scaling methods, the reference location is defined such that it does not exist at any real place and time. Thus, the “reference production rate” is not a measurement. It is a parameter estimate that results from a model-fitting exercise, and what the parameter actually represents is chosen for convenience rather than for any physically meaningful reason.

Two, different scaling methods have different reference locations. Historically, for scaling methods (primarily that of Lal, 1991) in which production rates are assumed to be constant over time, the reference location was defined to be at “sea level and high latitude.” What this means is kind of vague, but usually it is assumed to mean something like (i) an atmospheric pressure of 1013.25 hPa (the defined international standard “sea level pressure,” although this is not the actual atmospheric pressure at sea level anywhere, except by accident), and (ii) a geomagnetic latitude above about 60°, where the variation in magnetic field strength with latitude becomes negligible. There is no reason the reference location has to be at “SLHL.” It could be on the lawn of the Mar-a-Lago Club in Palm Beach, Florida, or the summit of Mont Blanc. It could be anywhere. For more recent scaling methods, in particular that of Lifton and others (“LSD” or “LSDn”) that is currently in common use, the production rate is predicted to change over time with changes in the magnetic field strength and solar output. This makes it necessary to choose both a reference location and reference values of magnetic and solar properties for the reference location. For the LSDn method, this is typically taken to be an atmospheric pressure of 1013.25 hPa, a geomagnetic cutoff rigidity of 0 GV, and a zero value of a parameter describing solar modulation. This is a different reference condition than the “SLHL” reference condition historically used for non-time-dependent scaling. Thus, even if we used the same calibration data set and both scaling methods were perfectly accurate, the best-fitting “reference production rates” for the two scaling methods would be different, because they pertain to different reference conditions. They could not be compared to each other, because they are not estimates of the same thing. They are estimates of different things.

Three, different implementations of the same scaling method will yield different best-fitting estimates of the “reference production rate” for that scaling method. All the commonly used scaling methods are numerically complicated, and different people have written different computer codes that use different numerical approximations, integration methods, interpolation methods, etc. to perform the calculations. In general, if you take one calibration data set and two implementations of the same scaling method and you perform both steps (1) and (2) above, you will get the same exposure age for the unknown-age site. However, the intermediate value you have computed — the best-fitting value of your fitting parameter — will probably not be the same (except by accident). Thus, if you perform step (1) with one computer code, but then take the resulting fitted value of the reference production rate and perform step (2) with a different computer code, you will probably get the wrong exposure age for the unknown-age site.

So what does this mean for how you should describe how exposure-age calculations were carried out?

Historically, when most exposure-age calculations were carried out using the non-time-dependent scaling method of Lal (1991) or the slightly different implementation of the same method by Stone (2000), it was common in papers for authors to report the value of the reference production rate that they used to compute the exposure ages. Because the calculations involved in these scaling methods are relatively simple, it was possible for readers to easily produce their own implementation of the scaling method, take the value of the reference production rate stated in the paper, and closely reproduce the authors’ calculations.

With more complex, time-dependent, scaling methods in use at present, this is much more difficult. Software implementations of the LSDn scaling method involve thousands of lines of computer code and extensive background data sets describing paleomagnetic and solar variability as well as particle interaction cross-sections, all of which are updated periodically. All such implementations are slightly different. As noted above, if you take a fitted reference production rate derived, as in step (1) above, using one implementation, and then perform step (2) with a different implementation, you will generally get the wrong answer. In addition, different implementations (e.g., “CRONUSCalc” and “the online exposure age calculator formerly known as the CRONUS-Earth online exposure age calculator”) define different reference states for some scaling methods (or don’t define a reference state at all and use a different fitting parameter entirely), so the best-fitting values of the “reference production rates” are not even estimates of the same thing.

What this means for documenting exposure-age calculations in papers or elsewhere is that reporting the value of the reference production rate that you used doesn’t help readers to reproduce your calculations. Although it is certainly possible that your readers can obtain the exact computer code that you used, in general this is difficult because of the continued evolution and improvement of both the scaling methods themselves and the background data. Worse, reporting a value for the reference production rate has two actively negative effects. First, it gives readers the impression that the value of the reference production rate is a measured physical constant. As discussed above, this is not true. Second, it creates a situation where readers who want to replicate your calculations can only perform step (2) above; they cannot perform step (1). Thus, they cannot start from the same actual measurements that went into your calculations, and therefore cannot correctly and consistently reproduce your calculations.

What does help readers to correctly and consistently reproduce your calculations is to report what calibration data set you used. Remember, the calibration data set consists of actual measurements and observations. Unlike the reference production rate, these measurements and observations are the same no matter what scaling method you are using. They are the same no matter what numerical implementation of a particular scaling method you are using. They are the same no matter what the “reference location” is. It is easy to obtain these measurements; all published calibration data sets are publicly accessible in a standardized form at this website. If you tell readers what calibration data set you used, they can reproduce your exposure age calculations with any implementation of any scaling method in an internally consistent way that begins with the same source measurements. They can perform both steps (1) and (2) above, BOTH of which are necessary for an internally consistent exposure-age calculation, instead of only step (2). Although it may seem difficult or intimidating for readers to perform both a production rate calibration and an unknown-age calculation, this task is simple with existing online exposure age calculation systems: CREp allows a choice of a wide variety of published calibration data sets; the online calculator formerly known as the CRONUS-Earth online calculator allows production rate calibration with any calibration data set.

A final point, again a historical one dating from the period when the non-time-dependent Lal scaling was nearly universally used, is that readers who were familiar with the cosmogenic-nuclide literature could generally figure out from the value of the reference production rate how the ages in that paper would compare to ages in a different paper. For example, prior to about 2000, it was generally thought that the value of the reference production rate appropriate to the implementation of the Lal scaling method that was in wide use at that time was about 5 atoms/g/yr; after 2000 it became clear that the correct value was closer to 4. Of course, this meant that previously measured exposure ages were, in fact, much older than had been stated in the older publications, and readers got in the habit of checking what reference production rate had been used as a means of comparing study results to each other: a moraine given as 12 ka in a study that used 5 atoms/g/yr would be equivalent to 15 ka in a study that used 4. This period in the early 2000s, during which most papers used the same scaling method but different reference production rates, conditioned readers to expect to see the value of the reference production rate somewhere in the methods section. The situation is different now: differences between time-dependent and non-time-dependent scaling, as well as, often, differences between implementations of the same method, are commonly more important than differences in estimated values for reference production rates, and in addition these values can’t even be compared for different implementations. It’s still important for readers to get an idea of the effect of different assumptions on exposure ages reported in a paper, but readers can’t get that information any more from the value of the reference production rate by itself. If you want to tell the reader how differences in production rate calibration affect the end results, then tell them! For example, “if we use calibration data set A, my moraine is 12 ka; if we use calibration data set B, it’s 13 ka.” This is the information the reader wants, so give it to the reader directly, not indirectly through a “reference production rate.” In addition, of course, the reader can easily figure this out for themselves using various online exposure age calculators.
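To spell out the arithmetic in that example: for samples young enough that radioactive decay is negligible (certainly true for Holocene Be-10 ages), the computed exposure age is simply inversely proportional to the assumed production rate, so

\[ t_{2} = t_{1}\,\frac{P_{1}}{P_{2}} = 12\ \mathrm{ka} \times \frac{5\ \mathrm{atoms/g/yr}}{4\ \mathrm{atoms/g/yr}} = 15\ \mathrm{ka} \]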

To summarize,

  1. Computing an exposure age is a two-step process that starts with a calibration data set. The calibration data set consists of actual measurements. The “reference production rate” is an inconsistently defined, and never actually measured, parameter that is estimated halfway through this process. Its value is completely dependent on the scaling method used and the specific implementation of that scaling method; it is not a measured physical constant.
  2. Emphasizing the “reference production rate” in data reporting or in a description of calculations does not help readers to correctly reproduce the calculations in a paper. In fact, it is more likely to lead to internally inconsistent, and probably misleading, reproduction of those calculations.
  3. In contrast, reporting the calibration data set that was used does allow the reader to reproduce calculations in an internally consistent way.

To say this a slightly different way, there are two independent ingredients that go into computing an exposure age: a calibration data set and a particular implementation of a particular scaling method. The reader needs to know what these are. The reference production rate isn’t one of these things, and the reader doesn’t need to know it.


Antarctic subglacial sediment has almost no beryllium-10. Repeat: Antarctic subglacial sediment has almost no beryllium-10*.

June 26, 2018

Pinned on the bulletin board in my office, between the Chart of the Nuclides and a New Yorker cartoon showing a cutaway image of the Earth labeled “Crust. Mantle. Chewy Nougat Center,” is a 2013 article in the Economist entitled “How Science Goes Wrong.” It summarizes reviews in biomedical sciences, mostly in the areas of psychology and pharmaceutical research, that showed that a disproportionate number of results in these fields were not reproducible: when other researchers tried to repeat the studies, they got different results. The article highlights funding and publication incentives that lead to overvaluation of results that are interesting and surprising, but irreproducible and likely erroneous. Since then, these studies have prompted a lot of reexamination of funding and publication practices in biomedical fields, and development of some new programs, mainly led by NIH, to do a better job of replicating important results. It is very likely that in Earth science we do an even worse job of ensuring that important results are reproducible, because we share the same incentive structures that discourage funding and publication of replicate studies, and then we add additional obstacles of fieldwork in remote regions, unique samples or data sets from inaccessible places that would be very hard to obtain again, and the fundamental problem that you can’t go back and re-run events in Earth history. But even given these constraints, we don’t do a great job of this and we should do better.

So in this post I am highlighting a study that we really know is reproducible, because it’s been done twice. It’s a recent paper in Nature by Jeremy Shakun and a variety of co-authors that shows, basically, that there is almost (although not quite) zero beryllium-10 in quartz from the AND-1B core in McMurdo Sound, Antarctica. Taken in context, this isn’t particularly surprising, because this core is in 900 meters of water and the material in the core is derived from subglacial erosion beneath the Antarctic ice sheet. We’d only expect to see Be-10 if any of the quartz had been exposed at the surface in some part of Antarctica that was ice-free and above sea level during the past few million years. There’s already a lot of very good evidence, including marine oxygen isotope evidence for the continued existence of the Antarctic ice sheets for more than 30 million years, the stratigraphic record on land in Antarctica, and the fact that parts of Antarctica that probably did deglaciate in past warm-climate periods are below sea level so would still be covered by water anyway, that there haven’t been any such ice-free areas. So we don’t expect a priori to find significant amounts of Be-10 in Antarctic subglacial sediments. The paper, however, very strongly highlights the significance of not finding anything as additional evidence that the East Antarctic Ice Sheet has been around for a while.

What you won’t find in this paper is the fact that it’s the second time this project has been done. In 2009, Dan Morgan was a graduate student at the University of Washington working with his Ph.D. advisor Jaakko Putkonen on applications of cosmogenic-nuclide geochemistry to measuring erosion and sediment transport rates in the Antarctic Dry Valleys, and Dan and Jaakko came up with the idea of looking for cosmic-ray-produced nuclides in the then-recently-drilled ANDRILL AND-1B core. The Ferrar Glacier, which drains part of the upper Dry Valleys, supplies sediment to the general vicinity of the core site, and their idea was that the marine sediment section in McMurdo Sound might contain some record of past export of sediment from the Dry Valleys. Surface sediment in the Dry Valleys is mostly derived from surfaces with very low erosion rates and has extremely high cosmogenic-nuclide concentrations, and the idea was that a small amount of this material should be detectable even if it was diluted by a lot of Be-10-free debris from subglacial erosion of fresh rock. So Dan and Jaakko asked for and received some samples from the core as well as seed funding from PRIME Lab for a few trial AMS measurements. Dan then did essentially exactly the same thing that was done in the Shakun paper, the main exception being that he used smaller samples that yielded less quartz. Here are Dan’s results from 2009 (kindly provided by Dan at my request) on the bottom, with the results from the Nature paper (specifically, Figure 3a in the paper) on top.

What’s being shown here is a comparison between the amount of Be-10 present in process blanks (processed in exactly the same way as the samples, only with no sample) and samples. In both figures, the amounts of Be-10 in the blanks are shown in gray, and the amounts in the samples are shown in colors: all are represented by Gaussian uncertainties. The core depths in the Morgan samples are equivalent to samples A-D in the Shakun paper; E-H are deeper in the core. The main difference here is that there are a lot more blanks represented in the Shakun data set, and they are mostly lower, centered around 8000-10000 atoms Be-10 rather than the 15000-20000 in Dan’s results. The University of Vermont cosmogenic-nuclide lab, where the Shakun measurements were done, has done an enormous amount of work in the last ten years to make their Be-10 blanks very low and very stable, and it’s worked. A secondary difference is that AMS performance has gotten a bit better in the last ten years as well, so the later measurements are a bit more individually precise. So the amounts of Be-10 observed in the samples are similar in the aggregate between the two studies, but the lower blanks shown in the upper figure have the consequence that one can, in fact, conclude at good confidence that some of the samples in the Shakun paper have Be-10 present at levels above blank, whereas for the Morgan data the samples are all indistinguishable from blank. But the overall implication of each data set — that forms the core argument of the paper — is very similar: there is very little Be-10 in quartz in the AND-1B core.

Dan also sent me splits of two of his samples in 2009-10 — from 61 and 81 m depth in the core — for Ne-21 measurements at BGC. This is potentially interesting because Ne-21 is stable, so material that might have been exposed at the surface much longer ago than could be recorded by Be-10 would still retain cosmogenic Ne-21. It’s a lot more difficult to detect small amounts of cosmogenic Ne-21, because there is always a nonzero amount of non-cosmogenic Ne-21 produced by decay of naturally occurring U and Th, and it is difficult to separate cosmogenic from U/Th-derived inventories with the precision needed to detect small amounts of surface exposure. Essentially, the null hypothesis is that, if no cosmogenic Ne-21 is present and the neon closure age of the sediment source rocks is effectively constant (which should be the case on million-year time scales in the core if the source area of the sediments is not highly variable), then the Ne-21 concentration should just be a constant multiple of the U/Th concentration. The following figure shows that this is, in fact, the case for two samples (here the U and Th concentrations are expressed as eU, which approximates total alpha particle production):

You can’t exclude cosmogenic Ne-21 with much confidence from only two not-particularly-precise measurements, so this is a bit of a sideshow to the Be-10 data (although an interesting one if you are a noble gas geochemist). But the point is that Ne-21 is completely accounted for by production from U and Th, which again tends to indicate that the quartz is not derived from surfaces that were ever ice-free for very long.
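For what it’s worth, this null hypothesis is simple enough to write down and check. Here is a minimal sketch, with made-up concentrations and uncertainties rather than the actual measurements, of fitting the proportionality constant between total Ne-21 and eU and asking whether the residuals are distinguishable from zero at the measurement precision:

```python
import numpy as np

# Hypothetical measurements (NOT the actual AND-1B data): total Ne-21
# concentrations (atoms/g) with 1-sigma uncertainties, and eU (ppm).
ne21 = np.array([2.1e8, 3.4e8])
ne21_sigma = np.array([0.3e8, 0.4e8])
eU = np.array([6.0, 9.5])

# Null hypothesis: all the Ne-21 is U/Th-derived, so Ne-21 = k * eU for
# some constant k (set by the neon closure age of the source rocks).
# Fit k by weighted least squares through the origin.
w = 1.0 / ne21_sigma**2
k = np.sum(w * eU * ne21) / np.sum(w * eU**2)

# If the residuals are consistent with zero at the measurement precision,
# there is no detectable excess (i.e., cosmogenic) Ne-21.
residuals = ne21 - k * eU
print("k =", k, "atoms/g per ppm eU")
print("residuals / sigma:", residuals / ne21_sigma)
```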

Dan’s measurements were never published. I am speculating a bit as to why because I was not closely involved, but I believe the thinking was that there was no expectation that any Be-10 would be found: if there was some, that would be a story, but no Be-10 equals no story. In the Shakun paper, it is the opposite. No Be-10 is a big story. As an aside, I am concerned that the paper oversells the significance of this result, because possible Antarctic ice sheet change scenarios that are hypothesized to have occurred in Plio-Pleistocene warm periods and are relevant in light of near-term climate change involve deglaciation of marine embayments where the bed of the ice sheet is below sea level. If this happened during past warm periods it would not be recorded by in-situ-produced Be-10 in quartz. So this proxy record has zero sensitivity to the type of deglaciation event that is most concerning and most plausible in the near future. But the main point that I want to make here is that, in contrast to probably too much Earth science research, we know this result is replicable, because it’s been replicated. Although the Economist would probably argue that the funding incentive structure is such that the Shakun work would never have been funded if the Morgan results had been published, it is unfortunate that there is no way to know from the present paper that this work has been done twice, and I think Dan should get some credit for having tried this nine years ago.


* Trying to make this title short and punchy also makes it incorrect. It should refer to in-situ-produced Be-10 in quartz, rather than just Be-10. In fact, some Antarctic subglacial sediment has lots of Be-10, it’s just not in-situ-produced in quartz.

Putting the O in OCTOPUS

April 13, 2018

This post is about the recently developed “OCTOPUS” database of cosmogenic-nuclide erosion rate data. During the last year, folks involved in cosmogenic-nuclide erosion-rate measurements began to receive emails like this:

Dear [researcher],

I hope this email finds you well! You are receiving this email because you and coauthors published catchment-averaged 10Be / 26Al data within the last years. In doing so, you contributed to the number of more than 3500 in situ detrital catchment-averaged 10Be data that are available to date, forming a highly variable, statistically resilient dataset that represents substantial effort of both capital and labour. However, published data are often still inaccessible to researchers, are frequently subject to lacking crucial information, and are commonly different in underlying calculation and standardisation algorithms. Resulting data disharmony has confounded the purposeful (re)use of published 10Be-derived data, for instance for inter-study comparison, for looking at the greater picture of Earth surface’s downwearing or for innovative data re-evaluation.

Going online in June 2017, OCTOPUS database will do away with these problems on global level by providing and maintaining a freely available, fully harmonized, and easy to use compilation of catchment-averaged 10Be data. The project has obtained A$176,000 funding from the Australian National Data Service (ANDS) to build the infrastructure for hosting and maintaining the data at the University of Wollongong (UOW) and making this available to the research community via an OGC compliant Web Map Service.

However, this email is not just to inform you about OCTOPUS, but also to ask for your valued contribution to the project. To reconcile our compiled data and GIS derivates with your original files, we would highly appreciate if you would provide us with (a) drainage point shape file(s), and more important (b) catchment polygon files of your above named publication(s) within the next two weeks. Cross-checking your original files with our reproduced data will be an important part of our quality management.

I thought, OK, that’s interesting, databases are good, but lots of people come up with database projects that sound good to funding agencies but never accomplish much, and in addition I’ll be very surprised if anyone complies with the demand for shapefiles. Then a couple of weeks ago (that is, several months after June 2017) I got this via a listserv:

Dear Friends

A few days ago we have made available online a new open and global database of cosmogenic radionuclide and luminescence measurements in fluvial sediment (OCTOPUS).

With support from the Australian National Data Service (ANDS) we have built infrastructure for hosting and maintaining the data at the University of Wollongong and making this available to the research community via an OGC compliant Web Map Service.

The cosmogenic radionuclide (CRN) part of the database consists of Be-10 and Al-26 measurements in fluvial sediment samples along with ancillary geospatial vector and raster layers, including sample site, basin outline, digital elevation model, gradient raster, flow direction and flow accumulation rasters, atmospheric pressure raster, and CRN production scaling and topographic shielding factor rasters. The database also includes a comprehensive metadata and all necessary information and input files for the recalculation of denudation rates using CAIRN, an open source program for calculating basin-wide denudation rates from Be-10 and Al-26 data.

OCTOPUS can be accessed at: https://earth.uow.edu.au.

Great. Time to check this out and see what it is.

But first of all, to get one thing out of the way, this is proof that Earth science has now reached the acronym-pocalypse — in which it has become so critically important to have a fancy acronym that all rules relating the acronym to the words being abbreviated have been thrown away, and you can do whatever you want.  It is true that even though we hope the responsible parties will be experts in geochemistry, we don’t necessarily need them to be wise in the use of the English language — but, frankly, this is pretty bad. While all the letters in “OCTOPUS” do occur in the full title “an Open Cosmogenic isoTOPe and lUmineScence database”, the total lack of relation between word beginnings and acronym elements makes the reader wonder why it could not equally well have been called “MESS” (‘open cosMogEnic iSotope and luminescence databaSe’) or “POOP” (‘oPen cOsmOgenic isotoPe and luminescence database’), because both “MESS” and “POOP” are just as closely related to geomorphology and/or cosmogenic nuclide geochemistry as “OCTOPUS.”  Or, perhaps, in light of the discussion below, it could be “NODATA” (‘opeN cOsmogenic isotope and luminescence DATAbase’).

So now that that’s out of the way, let’s proceed to actually looking at the database. As noted above and as also evident from the fact that I’ve developed three other online databases of cosmogenic-nuclide data (see here), I think online databases are great. The current state of the art in cosmogenic-nuclide geochemistry — in which everyone interested in synoptic analysis of exposure ages or erosion rates must maintain tedious, unwieldy, and mutually inconsistent spreadsheets that require constant updating — is a huge waste of everyone’s time and is very effectively calculated to maximize extra work and errors. It would be much better if there were an online repository of generally-agreed-upon raw observations from previous studies, so that everyone interested in synoptic analysis is referring to the same data set. So from this perspective, let’s get over the embarrassing acronym choice and begin looking at the website with a positive attitude.

Right. Upon locating the website, we’re presented with (i) a very large area devoted to advertising the University of Wollongong, surrounding (ii) a Google map of the world, with (iii) a layer selection panel.

Selecting an example layer displays river basins on the map that are presumably associated with cosmogenic-nuclide data:

Clicking on a random basin in South Africa brings up a window displaying some information about this basin:

Great. But now we have a problem. Incomprehensibly, nothing in this information window is a live link. One of the most basic questions that a user wants to ask in this situation is whether the erosion-rate estimate that’s presented for this basin agrees with what one might calculate using a different scaling method or a different online erosion rate calculator. To facilitate this, in this situation I would expect that, for example, the ID number of the basin would be a link to a page displaying all known data for that basin: location, Be-10 concentrations and standardization, etc. and so on, hopefully formatted in such a way as to allow for easy recalculation. However, no such thing is present. This is a serious problem — why is there no access to the actual source data?

Fortunately, a bit of experimenting reveals that making the window bigger causes a “download” link to appear:

This seems like a bit of a coding error — obviously the existence of the link shouldn’t be dependent on the window size — but whatever. A bit more experimentation shows that one can select an area within which data are to be downloaded:

But now we have a serious problem. I can’t actually get any data without providing a name and email address. Full stop.

OK, so far we have what is advertised as “Open” and a “Database”, but isn’t either. It’s not open — anonymous download is not possible — and so far I haven’t actually seen any data, just a picture of what locations are purported to have some associated data. I haven’t so far entered anything in the name and email boxes and pressed ‘request download,’ so I don’t know whether (i) the web server will just record the information and send me the data, or whether (ii) it starts an approval process where the developers have to decide whether or not I’m worthy of the privilege. Let’s consider both options, though. If (i) is the case and I can just enter “Mark Zuckerberg” under “name” and “mzuckerberg@facebook.com” as the email address, then there is absolutely no point to interrupting the user’s interaction with the website to collect valueless information, except to give the user the cheap, queasy, and frustrated feeling, familiar to everyone from their interaction with many commercial websites, that he or she is not the customer but the product. If (ii) is the case and the site developers must decide if I am a suitable person to whom to entrust data which have, of course, already been published in publicly available scientific literature, then this entire project is the opposite of “open.” Either option is pretty bad, and I’m extremely disappointed to see that the developers here don’t seem to have the strength of character to stand behind the use of the word “open” in the title. Not impressive.

In addition, requiring identity information to access data is going to make it rather difficult to really have anonymous peer review of the accompanying paper. Not great if the anonymous peer reviewers have to log in as themselves to see the thing they are supposed to be reviewing.

Fortunately there are some clues to how we can get around this problem. The email quoted above that announces the existence of the database makes reference to OGC-compliant web services — these are, essentially, links that GIS or mapping software can use to access online data dynamically rather than having to download a local copy of the data – and, presumably, this is what is being referenced by the embedded Google map on the web page. Unfortunately, the website itself makes zero reference as to whether or not these web services exist or how to access them. So clearly a bit of investigation is needed. Investigating the Javascript source code associated with the embedded map shows us:

which indicates that the Google map is, in fact, getting data from a OGC-style web map service (wms) at the address https://earth.uow.edu.au/geoserver/wms. A “web map service” is intended to serve raster data — digital elevation models, imagery, etc., which is actually not what we want if we are interested in attribute-type data associated with point or vector features. The equivalent service for that is a “web feature service,” and if such a thing is also running on the CTOPUS web server (if it’s not “Open,” we can’t really call it OCTOPUS any more) we ought to be able to find out via the following URL:

https://earth.uow.edu.au/geoserver/wfs?request=GetCapabilities

Obtaining that URL with a browser (try it) results in a large XML object that indicates that yes, in fact, a web feature service is operating at that address:

This contains a couple of useful bits of information. One, information about what feature layers are available. The following screen shot shows information (<FeatureType> elements) for two layers, which correspond to those on the layer selection box available via the website:

Two, information about what output formats are available:

Using this information, we can form a URL asking the web feature server to supply all data from the ‘crn_aus_basins’ layer as a KML file:

https://earth.uow.edu.au/geoserver/wfs?request=GetFeature&typename=be10-denude:crn_aus_basins&outputformat=KML

Using a browser to download resulting data from that URL does, in fact, yield a (fairly large) KML file containing basin outlines and a complete data dump for each element, including the critical observations such as Be-10 concentrations and standardizations. Here it is when viewed in a local copy of Google Earth:

So it turns out that it is, in fact, possible to anonymously obtain cosmogenic-nuclide data from OCTOPUS. This is quite valuable: a WFS implementation of this data set is potentially really useful for all sorts of applications, and is a major step toward fulfilling the basic goal of doing better than the current state of the art of mutually-inconsistent-spreadsheets. And we can put the “O” back in. But that was kind of a pain in the neck: it required encouraging the server to display capabilities that the website developers apparently did not want me to know about, and it was much more complicated than it had to be. Wouldn’t it be easier simply to provide direct links to the raw data, as well as the data formatted in such a way as to be easily entered into other online erosion rate calculators, on the web site?
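For anyone who would rather not repeat the spelunking, the whole thing boils down to a request like the following. This is just a sketch of a standard WFS GetFeature request using the layer name and output format found above; if the server configuration has changed since this was written, the endpoint, layer names, or formats may differ.

```python
import requests

# WFS endpoint found by reading the Javascript source of the OCTOPUS map page.
WFS_URL = "https://earth.uow.edu.au/geoserver/wfs"

# Ask the web feature service for everything in the 'crn_aus_basins' layer
# as KML. The GetCapabilities response lists the available layers and
# output formats.
params = {
    "request": "GetFeature",
    "typename": "be10-denude:crn_aus_basins",
    "outputformat": "KML",
}

response = requests.get(WFS_URL, params=params, timeout=120)
response.raise_for_status()

# Save the KML (basin outlines plus the attribute data for each feature,
# including Be-10 concentrations and standardizations) for use in Google
# Earth or a desktop GIS.
with open("crn_aus_basins.kml", "wb") as f:
    f.write(response.content)

print("Downloaded", len(response.content), "bytes")
```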

So I’ll try to summarize here. Placing this data set online is potentially very useful. The WFS server implementation — once I located it — is enormously better than the primary existing competition in this field, which is the unwieldy spreadsheet distributed as an appendix to an article by Eric Portenga. But in my view there are several serious problems here.

One is that of credibility. A critical element of having a useful online source of these data is that it is generally believed to be a complete and accurate representation of the raw observations — locations, elevations, nuclide concentrations, etc. — and it’s hard for users to believe this if they can’t easily examine the data. The ICE-D exposure age databases have direct access through the web interface to all raw data for this reason. If you can’t examine all the data in a granular way, you start asking why you’re not being allowed to do this, and this makes it harder to believe that the data are really accurate and believable. This is a basic principle of having a data source, or anything else, become trusted and credible: if Ronald Reagan hadn’t been able to trust but also verify, the Cold War would still be in progress.

The second is just the message that is being sent to users. The really useful part of this database implementation — the web feature service that makes it possible to access these data through desktop GIS systems that one would use for data analysis, figure preparation, etc. — is completely hidden from users. One would think that this service would be the thing that is most highlighted and most advertised by the developers, because it’s the most useful and the most potentially transformative in terms of synoptic analysis of these data. It’s really a contribution to science. This is a good thing. But instead the developers have hidden this and made it as difficult as possible to view, evaluate, or obtain source data. To say the least, this is puzzling, and, frankly, it leads the observer to uncharitable hypotheses about the motivation of the website developers. Are they trying to use this website to keep tabs on competitors who might carry out and publish synoptic research on global erosion rate distribution before they do? Is the key function of the database not so much to be useful to the overall research community as to improve the citation metrics of the authors? Not knowing any of the developers personally, I have no idea whether these hypotheses are anywhere near the truth, and, having developed similar websites, I sympathize with the feeling of having done a lot of work that other people are going to take advantage of for free, and the feeling of wanting to keep control of what I just developed. However, according to the initial email describing this project, the developers are not, in fact, giving this away for free, but rather for $176K in public funding. Under these circumstances, the lack of openness built into this website cannot help but make me suspicious of the developers’ motivations. This is not a good thing.

To summarize, it’s either an open database or it’s not. Right now it’s not: open means complete, anonymous, granular access to the raw data. We don’t have that. On one hand, this project has the potential to be fundamentally useful in enabling synoptic research into how erosion works globally. That’s good. On the other hand, the developers have inexplicably designed the website to make access to the data harder instead of easier. That’s bad, and why they have done this is a total mystery. But this project is going to remain just potentially transformative, instead of actually transformative, until they fix it.


Upside-down bananas

March 30, 2017

If you are reading this, you are probably familiar with the two-nuclide diagram commonly used to represent paired Be-10 and Al-26 data:

This example is from a review article by Darryl Granger from 2006 (in GSA Special Paper 415) that gives a good description of what the diagram is and how it is supposed to work. Basically, the diagram shows Be-10 concentration on the x-axis and the Al-26/Be-10 ratio on the y-axis, although commonly both quantities are shown in normalized units computed by dividing observed concentrations by the respective production rates at the sample site – this normalization allows plotting data from different sites, with different production rates, on the same axes. The basic concept here is that if your sample stays at the surface and experiences steady exposure with or without erosion, nuclide concentrations are confined to the “simple exposure region” highlighted with dark lines in the above figure. In certain manifestations of this diagram (primarily when plotted with a log x-axis and a linear y-axis), the simple exposure region vaguely resembles a banana, for example:

This resemblance, perhaps unfortunately, has resulted in the common use of the term “banana diagram.” The important aspect of this diagram is that if the sample gets buried after a period of surface exposure, both Al-26 and Be-10 concentrations decrease due to radioactive decay, and Al-26 decreases faster than Be-10. This is represented by a trajectory that goes down and to the left, as shown above in the Granger example. So samples that are “below the banana” have experienced both a period of exposure and a period of burial. Here’s an example:

The lines are contours of burial time in Myr. So these samples have been buried for 2.5 Myr. This is the foundation of the method of cosmogenic-nuclide burial dating.

So, basically, in the Al-26/Be-10 two-nuclide diagram (let’s not use “banana diagram” any more…historically, it probably should be called a “Lal-Klein-Nishiizumi diagram,” although that is a bit cumbersome), exposure goes to the right and burial goes down. Easy. The problem arises when other nuclides are involved. Here is an example of a Be-10/Ne-21 two-nuclide diagram from one of my papers:

Here I have put Ne-21 (the longer-lived nuclide) on the x-axis and the Be-10/Ne-21 ratio on the y-axis. So, again, exposure goes to the right and burial goes down. On the other hand, here is a Ne-21/Be-10 diagram from a very cool paper by Florian Kober and Vasily Alfimov:

This figure has a lot of data in it that are beside the point from the perspective of this post, but the point is that it has the opposite axes: Be-10 concentration on the x-axis and Ne-21/Be-10 ratio on the y-axis. Thus, exposure still goes to the right (at least for a while), but burial goes UP. Not down. Not what we expect from our previous experience with the Al-26/Be-10 diagram. Very weird.

At present, the choice of axes in two-nuclide diagrams involving Ne-21 in the literature appears to reflect your position in relation to the Atlantic Ocean. US researchers generally plot [Ne-21] vs. Be-10/Ne-21, whereas Europeans plot [Be-10] vs. Ne-21/Be-10. Although I have not made a systematic historiographic study of this phenomenon, I believe that the European style is largely just due to the fact that the “CosmoCalc” software put together by Pieter Vermeesch does it this way.

So the question here is which one to use. I have a pretty clear opinion on this. I think that no matter what nuclides are involved, you should always do it the same way as is commonly done for Al-26/Be-10 diagrams, so that burial goes down. Nearly all the two-nuclide diagrams in the existing literature involve the normal implementation of the Al-26/Be-10 diagram, so anyone familiar with this literature expects exposure to go to the right on a two-nuclide diagram, and burial to go down. I think inverting the diagram so that burial goes up just confuses readers. Why confuse readers when you don’t have to? Thus, I advocate always plotting the longer-lived nuclide of the pair on the x-axis, and the ratio of the shorter-lived to longer-lived nuclide on the y-axis. Then burial always goes down, as we expect. Of course, I am in the US, but I am not just cheering for my own team here. It really does make more sense for two-nuclide diagrams to always behave the same way no matter what nuclide pair is involved. I’ve done it this way in the version 3 online exposure age calculator, which will generate two-nuclide diagrams for all combinations of Ne-21, Be-10, Al-26, and C-14 in quartz, and also in the ICE-D database which makes use of the v3 calculator as a back end.
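In case a picture helps, here is a minimal matplotlib sketch of the axis convention I’m advocating, drawn for the Al-26/Be-10 pair. The production rates and production ratio are schematic, there is no erosion end-member, and there are no uncertainty ellipses, so this is only a cartoon of the axis layout, not a usable two-nuclide diagram:

```python
import numpy as np
import matplotlib.pyplot as plt

# Decay constants (1/yr) from the accepted half-lives:
# 1.387 Myr for Be-10 and 0.705 Myr for Al-26.
lam10 = np.log(2) / 1.387e6
lam26 = np.log(2) / 0.705e6

# Schematic surface production rates (atoms/g/yr); the exact values don't
# matter much here because the x-axis is normalized.
P10 = 4.0
P26 = 6.75 * P10

def simple_exposure(t):
    """Nuclide concentrations after exposure time t (yr), zero erosion."""
    N10 = (P10 / lam10) * (1 - np.exp(-lam10 * t))
    N26 = (P26 / lam26) * (1 - np.exp(-lam26 * t))
    return N10, N26

# Zero-erosion simple-exposure curve: exposure goes to the right.
t = np.logspace(3, 7, 200)
N10, N26 = simple_exposure(t)

# A burial trajectory: expose for 0.5 Myr, then bury (decay only). With the
# longer-lived nuclide on x and the ratio on y, burial goes down and left.
N10_0, N26_0 = simple_exposure(5e5)
tb = np.linspace(0, 3e6, 100)
N10_b = N10_0 * np.exp(-lam10 * tb)
N26_b = N26_0 * np.exp(-lam26 * tb)

plt.semilogx(N10 / P10, N26 / N10, "k-", label="simple exposure, no erosion")
plt.semilogx(N10_b / P10, N26_b / N10_b, "b--", label="burial after 0.5 Myr exposure")
plt.xlabel("[Be-10] / P(Be-10), normalized (yr)")
plt.ylabel("[Al-26] / [Be-10]")
plt.legend()
plt.show()
```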

Summary: Draw two-nuclide diagrams so that burial always goes the same way. Why confuse people?


Is a cheap GPS OK for elevation measurements, or do you need a fancy one?

March 28, 2017

This post is about elevation measurements for exposure-dating samples, and how accurate they need to be. Basically, the main thing that controls cosmogenic-nuclide production rates is site elevation, or, more precisely, atmospheric pressure — at higher elevation, there is less atmosphere between you and the extraterrestrial cosmic-ray flux, so the production rate is higher. Thus, to compute the cosmogenic-nuclide production rate at a sample site, the first thing we need to know is the elevation. Once we know the elevation, we can convert it to a mean atmospheric pressure using a model for how the atmospheric pressure varies with elevation, and then compute the production rate.

Note that there are two places to potentially screw up here. The second one — converting an elevation to a mean atmospheric pressure during the exposure duration of the sample — is actually a fairly complicated problem and is the subject of another post, as well as a fairly large number of papers. However, the first one — accurately measuring the elevation — ought to be pretty simple. In general, determining your elevation is a fairly well-established technology that people have been working on for centuries. So even if we can’t be completely sure that we’ve done the elevation-atmospheric pressure conversion very precisely, we ought to be able to be fairly confident that we’re not making things worse by also making inaccurate elevation measurements. So the rest of this post covers (i) exactly how precise we need elevation measurements to be, and (ii) various ways to accomplish or not accomplish that goal.

So how precise do we need elevation measurements to be? The basic rule of thumb is that 10 m of elevation change is approximately a 1% change in the production rate, but this is a bit variable with elevation. Specifically, if we choose, for example, the production rate scaling method and Antarctic atmosphere model from Stone, 2000 (other scaling methods and atmosphere models are equivalent for all practical purposes from this perspective), here is how much effect a 10 m change in elevation has on the production rate:

It’s about 1% per 10 m at sea level and less at higher elevation. What this means is that accurate elevation measurement is actually fairly important. In most cases, the total uncertainty in estimating production rates for purposes of exposure-dating is around 5% or 6%, and 1% is a decent-sized fraction of that. If having a 10-m uncertainty in measuring elevation is adding an additional 1% uncertainty to your production rate estimate, that certainly seems like something to try to avoid. Basically, the point of all this is that we would like to be able to measure elevations with better than 10 m precision. Preferably quite a lot better.
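As a rough check on that rule of thumb, here is a sketch that propagates a 10 m elevation error through the standard-atmosphere elevation-pressure conversion and a simple exponential approximation for spallogenic production. The effective attenuation length of 130 g/cm2 is an assumed typical high-latitude value, and the real Stone (2000) calculation (particularly the Antarctic atmosphere model) differs in detail, so treat the output as an illustration of the figure above rather than a replacement for it:

```python
import numpy as np

# Standard-atmosphere conversion from elevation (m) to pressure (hPa).
# Ts = 288.15 K, lapse rate 0.0065 K/m, exponent gM/(RL) = 5.2559.
def pressure_hPa(z):
    return 1013.25 * (1.0 - 0.0065 * z / 288.15) ** 5.2559

# Approximate spallogenic production as exponential in atmospheric depth.
# 1 hPa of pressure is about 1.019 g/cm2 of overlying air mass; the
# effective attenuation length below is an assumed round number.
ATTEN = 130.0          # g/cm2, assumed
HPA_TO_GCM2 = 1.019    # g/cm2 per hPa

def rel_prod(z):
    """Production rate relative to sea level (dimensionless)."""
    depth = pressure_hPa(z) * HPA_TO_GCM2
    depth_sl = 1013.25 * HPA_TO_GCM2
    return np.exp((depth_sl - depth) / ATTEN)

# Effect of a 10 m elevation error at a few elevations.
for z in [0.0, 1000.0, 2000.0, 3000.0, 4000.0]:
    pct = 100.0 * (rel_prod(z + 10.0) / rel_prod(z) - 1.0)
    print(f"{z:6.0f} m: a 10 m error changes the production rate by {pct:.2f}%")
```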

In the modern era, the default and simplest way to measure your elevation is just with the same inexpensive handheld GPS unit (probably a Garmin that you bought a few years ago for about $150) that you are already using to record the position of your sample. Most handheld GPS units indicate their estimated horizontal positioning accuracy in real time, and in most situations it’s generally somewhere in the range of 3-6 m. Note, however, that this level of accuracy dates back only to approximately the year 2000. Prior to that time, the GPS satellite network, which of course was operated by the U.S. Department of Defense, included a feature with the Orwellian name of “Selective Availability” that intentionally degraded GPS performance for non-military users. So handheld GPS data from that era are much less accurate, with horizontal precision in the tens-of-meters range.

Unfortunately, in nearly all cases the vertical accuracy of a GPS position is not nearly as good as its horizontal accuracy. This is just because GPS satellites are mostly not right over your head, but well away from the zenith at shallow angles toward the horizon. In this geometry your vertical position is more sensitive to range errors than your horizontal position. A rough rule of thumb is that vertical position errors are usually about twice horizontal ones, so if your handheld GPS is indicating 3-6 m horizontal precision, that is something like 6-12 m vertical precision. Here is an example from data that I collected during a couple of exposure-dating projects in Antarctica. What these figures show are histograms of differences between elevations measured with a Garmin consumer handheld GPS ($170; an older version of this) and elevations of the same sites measured with a pair of fancy, differentially-corrected, post-processed, dual-frequency Trimble survey GPS units (probably $40K, but should have decimeter precision).

If we throw out the obvious wild outliers in both of these data sets (more about them later), these residuals are approximately normally distributed with a standard deviation of 7 m, which is actually a little better than we expect from the twice-horizontal-precision rule of thumb. So in general, this is pretty good: with our $170 low-hassle solution we can keep production rate inaccuracy due to errors in elevation measurements below 1%.

There are, however, a couple of other important points about these data. One, they both show an offset in which the handheld measurement is systematically approx. 4 m above the DGPS measurement. These measurements are from different years, so we can’t ascribe the offset to weird satellite availability. I suspect it may reflect an error on my part in converting between ellipsoid and geoid heights (more about this later) for the two sets of measurements: I am not an expert in GPS data reduction, and this may be how you can tell. Alternatively, it could have something to do with the handheld GPS unit using a less sophisticated atmospheric correction than the one used in the DGPS post-processing.

Two, there are some pretty serious outliers in the 20-30 m range — sometimes the measurement is very wrong. Of course, you expect some of these from any random process, but I suspect that what happened here is just that I didn’t leave the handheld GPS turned on for long enough to collect an accurate position. These data are from Antarctica, which is (i) remote, and (ii) cold. Item (i) means you are highly motivated to conserve battery power because you have only a finite supply of AA batteries in your camp, and item (ii) means you are highly motivated to complete the measurement quickly and put your mittens back on. Both factors would tend to favor not leaving the unit turned on long enough to collect sufficient data. But these are significant outliers that could affect an exposure age by 3%. That’s nearly 1/3 of the Younger Dryas.

In my opinion, the summary of all this is that modern inexpensive handheld GPS units are, basically, just barely good enough for measuring elevations of exposure-dating samples. Assuming I am right about the outliers in the data set above just being due to my carelessness, if you are careful, you should be able to keep production rate uncertainties stemming from this source to approximately 0.5-0.7%. That’s not bad.

The next question, then, is how to do better. There are several strategies for this. One is just to buy a more expensive dual-frequency GPS that can be differentially corrected. As Trimble has a near-monopoly on this market, the first issue here is that this is expensive. Still, a suitable unit can usually be borrowed from most university geoscience departments. However, this solution creates a number of new problems. One is weight — if your fieldwork is foot-supported, you have just added a lot of weight to your pack. Another is power — most of these units require daily recharging (or heavy gel-cells). Again, a potential problem for remote fieldwork. A third potential problem is that if you are operating someplace very remote (like Antarctica, again), you will not be within range of stationary GPS stations that can be used for differential correction, so you will need to not only carry a rover unit with you to the sample sites, but also establish a temporary base station at your camp with a second unit. A fourth is that you need to learn enough about GPS data processing to not make errors in the processing that are much worse than you would have made with the inexpensive unit. This is a serious consideration. A final problem is that collecting enough data with a Trimble DGPS unit to eventually yield submeter vertical precision takes a while — generally on the order of 30 minutes per site. Doing this at each site would seriously limit the number of samples that could be collected in a day.

A completely different strategy is to measure sample elevations using a barometer or barometric altimeter. This is very old-school and, at this point, kind of in the realm of retro hipster hobbies, but has a couple of big advantages, mainly that barometers designed for this purpose are extremely sensitive — they can register less than 1 m of elevation change — and quite lightweight. Some don’t even require batteries. The problem, of course, is that barometric pressure is constantly changing independently of elevation due to meteorological effects, so collecting good barometric elevation data requires care and thought, and, in some cases, a stationary base-station barometer as well. However, if you are working in any area of the continental US that is covered by a USGS 7.5′ topographic map, you can very accurately estimate the elevation of your sample sites by barometric traverse between sample sites and elevation benchmarks or survey points noted on the map. Of course, in this example in most cases you will find that you could have just plotted your horizontal GPS position on the map and read off the contour elevation to get the same result.

Leaving aside the special case where you have a USGS map (or, even better, a SwissTopo map that is almost so complete that your individual boulder is already marked), you can sometimes significantly improve on the 7-m accuracy of handheld units without incurring serious pack weight problems, by establishing temporary benchmarks with a Trimble DGPS and then using the barometric altimeter to survey elevations of sample sites in relation to the temporary benchmarks. I’ve used this strategy for many years in Antarctica and found it to yield total vertical positioning uncertainty somewhere in the 1.5-2.5 m range, depending on how closely spaced the benchmarks and sample sites are. This strategy does require a fancy GPS unit, but it is fast (because you only need to do a couple of 30-min occupations per day) and light (because you can minimize carrying around the heavy antenna and batteries). This is generally my preferred approach when possible — it’s not a prohibitive amount of work, but it reduces production rate errors associated with elevation measurements to essentially negligible levels.

Summarizing the discussion up to now, a decent-quality handheld GPS is just barely good enough for exposure dating in that errors in elevations measured using this method transfer to 0.5-0.7% errors in production rate estimates. Note that I’m pretty sure your iPhone is substantially worse, just due to the antenna size and geometry, but I have not attempted to verify that. If you can, though, you should do better, either with a more elaborate GPS unit or some combination of DGPS benchmarks and barometric survey.
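For reference, here is roughly where numbers like that come from: convert the elevation error to a pressure difference with a standard-atmosphere approximation, convert the pressure difference to atmospheric mass depth, and compare it to a spallation attenuation length of roughly 150 g/cm2. This is a back-of-the-envelope sketch, not the atmosphere model used in any particular online calculator, and the exact numbers depend a bit on the attenuation length and the elevation, but they come out in the neighborhood of half a percent:

```python
import numpy as np

# Sketch: how a 7 m elevation error maps to a production-rate error.
# Assumes the ICAO standard atmosphere and a spallation attenuation
# length of ~150 g/cm2; both are approximations.

def pressure_hPa(z):
    """Standard-atmosphere pressure (hPa) at elevation z (m)."""
    return 1013.25 * (1 - 0.0065 * z / 288.15) ** 5.2559

def mass_depth(z):
    """Atmospheric mass depth (g/cm2) above elevation z (m)."""
    return pressure_hPa(z) * 1.0197  # 1 hPa is about 1.0197 g/cm2

LAMBDA = 150.0  # spallation attenuation length, g/cm2

for z in [0.0, 1000.0, 2000.0]:
    dz = 7.0  # nominal handheld-GPS vertical error, m
    dX = mass_depth(z) - mass_depth(z + dz)   # g/cm2 of atmosphere in that 7 m
    frac = np.expm1(dX / LAMBDA)              # fractional change in production rate
    print(f"{z:6.0f} m: 7 m error -> {100 * frac:.2f}% error in production rate")
```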

There is, however, one final important topic on the subject of GPS measurements of sample elevations. What we actually want to measure for exposure dating is the elevation of a sample with respect to sea level. Elevation-atmospheric-pressure models are indexed to the bottom of the atmosphere, which is sea level, so we need to know how far above sea level we are. However, for GPS data reduction it is common to work not in a sea-level reference frame but in an ellipsoidal reference frame in which the Earth is represented by a simplified ellipsoid rather than the somewhat lumpy shape of the actual surface of the ocean (generally referred to as the geoid). This is not the place to launch into a full discussion of the differences between ellipsoid and geoid elevations (good places for this are here or here), but the point is that they are different. In some parts of the Earth they are quite different: here is a map (from this site) showing the difference between sea level (in this case represented by the EGM2008 geoid model) and the WGS84 ellipsoid, which is commonly used in GPS data reduction:

The details are beyond the scope of this post, but the point is that a GPS elevation computed with respect to an ellipsoid is not a sea level elevation, and can be tens of meters different from the correct sea level elevation. I am pretty sure there are a lot of errors in the exposure-dating literature where ellipsoid heights have been incorrectly reported instead of sea level heights. Be careful.
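The fix is simple once you know the geoid height (often called the geoid undulation, N) at your site, which you can look up from a geoid model such as EGM2008 or EGM96. The arithmetic is just a subtraction; the sketch below uses a made-up undulation value purely for illustration:

```python
def sea_level_height(ellipsoid_height_m, geoid_undulation_m):
    """Convert an ellipsoidal (GPS) height to a height above the geoid,
    i.e. an approximate height above sea level.

    geoid_undulation_m is the geoid height N relative to the ellipsoid at
    the site, obtained from a geoid model such as EGM2008.
    """
    return ellipsoid_height_m - geoid_undulation_m

# Hypothetical example: the GPS reports 1482 m above the WGS84 ellipsoid
# at a site where the geoid sits 17 m below the ellipsoid (N = -17 m).
print(sea_level_height(1482.0, -17.0))  # 1499 m above sea level
```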


The uneven distribution of production rate calibration data

February 2, 2017

This post is about the distribution of production rate calibration data, and whether it’s good enough or not.

The basic background here is that “production rate calibration data” are measurements of cosmogenic-nuclide concentrations from sites where we already know the exposure age of a surface from some sort of independent geochronology. In the simplest possible case, the way surface exposure dating works is that we go and measure the concentration of some cosmic-ray-produced nuclide (call this N, with units of atoms/g) in a rock surface. If we know what the production rate (P, units of atoms/g/yr) is, we can just divide N by P to get the exposure age of the surface in years (call it t). This works great if you know what P is. We can use physical first principles and measurements of the modern cosmic-ray flux to put together “scaling methods” or “scaling models” that describe how P should vary with elevation and position in the Earth’s magnetic field. Then, to use a scaling model to estimate the actual value of a production rate at a particular location, we need to calibrate the scaling model to actual measurements of the production rate, and, as noted above, we do this by measuring nuclide concentrations in surfaces where we already know the exposure age: we measure N and divide by t to get P.
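In its simplest form, that is, ignoring decay, erosion, and muons, both calculations are one-liners. The numbers below are made up purely for illustration:

```python
# Simplest-case exposure dating and production rate calibration
# (ignores radioactive decay, erosion, and muon production).

def exposure_age(N, P):
    """Exposure age (yr) from nuclide concentration N (atoms/g)
    and local production rate P (atoms/g/yr)."""
    return N / P

def calibrated_production_rate(N, t):
    """Production rate (atoms/g/yr) from concentration N (atoms/g)
    measured at a site of independently known age t (yr)."""
    return N / t

# Hypothetical numbers: 120,000 atoms/g at a site producing 10 atoms/g/yr...
print(exposure_age(120e3, 10.0))                  # 12,000 yr
# ...and the reverse: 60,000 atoms/g on a surface known to be 11,500 yr old.
print(calibrated_production_rate(60e3, 11500.0))  # ~5.2 atoms/g/yr
```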

So to summarize, if we want to estimate production rates everywhere globally, we do that by formulating a scaling model and then fitting the scaling model to measured production rate calibration data. This also allows us to evaluate scaling models by asking whether they fit the calibration data, or not. Because production rates vary by a lot (factors of 10 or more) within the part of the Earth where we’d like to be able to do exposure dating (at pretty much all latitudes from sea level up to about 4-5 km elevation), it is evident that to accurately calibrate a scaling model, and to accurately test whether a scaling model either works at all or works better than another scaling model, we want to have calibration data from the entire range of scaling space, that is, that span a large range in both latitude and elevation. We’d also like to have calibration data that span a large range of true exposure ages (in order to test time-dependent scaling models), but I’ll take that as a secondary issue for now and save it for some other time.
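For a non-time-dependent scaling method, the fitting step itself is not much more complicated: each calibration site yields an estimate of the reference production rate, and the best-fitting value is an average of those estimates. Here is a sketch with made-up calibration data; a real fit would weight by measurement uncertainties, and a time-dependent scaling method would require iteration:

```python
import numpy as np

# Hypothetical calibration data: nuclide concentrations (atoms/g),
# independent ages (yr), and the nondimensional scaling factors the
# scaling method predicts for each site.
N = np.array([48e3, 61e3, 230e3, 95e3])          # atoms/g
t = np.array([11.6e3, 12.9e3, 18.1e3, 13.4e3])   # yr
S = np.array([1.05, 1.22, 3.10, 1.74])           # scaling factors

P_site = N / t            # true average production rate at each site
P_ref_i = P_site / S      # each site's estimate of the reference production rate
P_ref = P_ref_i.mean()    # unweighted best fit to the whole calibration data set
print(P_ref_i)
print(P_ref)
```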

So the question is, do we have production rate calibration data that span the entire useful part of production rate scaling space? We certainly have a lot of production rate calibration data: the ICE-D:CALIBRATION database at this writing includes 72 distinct sites where we have Be-10, Al-26, or He-3 production rate calibration data. There are also a lot of additional sites where we have C-14 or Cl-36 calibration data; we’ll leave that for now on the basis that these nuclides are less commonly measured. Mostly this proliferation of calibration data is the result of a greatly increased rate of calibration data collection during the past few years, both as part of the CRONUS-Earth and CRONUS-EU projects and by other groups. The following image, which is taken from the front page of that database, shows the distribution of the Be-10, Al-26, and He-3 data geographically and in scaling space.

[Figure: world map of calibration site locations and latitude-elevation distributions of the Be-10, Al-26, and He-3 calibration data, from the ICE-D:CALIBRATION database]

In the world map, the size of the circles indicates the number of different samples collected at each site. Although the world map reveals the interesting phenomenon that there are exactly zero production rate calibration measurements for these nuclides in the entire continent of Asia, the three lower panels are more relevant in answering the question of how well the calibration data are distributed in scaling space. From left to right, they show the distribution in latitude (a simple proxy for position in the magnetic field) and elevation for Be-10 (red), Al-26 (green), and He-3 (gray) calibration data. In these plots, the size of the circle reflects how many measurements of the respective nuclides were made at each site.

This shows that the available calibration data do span a wide range in latitude. They do span a wide range in elevation. But their distribution across scaling space is not even close to uniform. Rather, with the exception of a couple of He-3 calibration sites near sea level at low latitude, latitude and elevation are highly correlated for these sites, such that they nearly all lie in a fairly narrow zone of scaling space that is at high elevation at low latitude, and at low elevation at high latitude. The opposite corners of scaling space — high-elevation-high-latitude and low-elevation-low-latitude — are nearly empty.

To figure out why, consult the following figure from a well-known 1990 article in Scientific American by George Denton and Wally Broecker.

[Figure: Denton and Broecker (1990) diagram of modern and last-glacial-maximum snowline elevations by latitude]

This shows the latitudinal distribution of the global snowline, which essentially controls where glaciers exist. Naturally, it’s lower at the poles and higher in the tropics, where it’s warmer. The highest snowlines are actually in the subtropics due to the relative aridity there compared to the equator, but that’s beside the point for now. The red line is the modern snowline; the blue line is the (lower) snowline during the last glacial maximum.

The distribution of global snowlines is a thoroughly well-discussed phenomenon and this diagram, or something like it, appears lots of other places. Here is a nice example from a Russian glacier atlas that shows the same thing for South America (north is to left):

Porter diagram of Andean snowline elevations from Russian atlas of global snow and ice resources. MZS library.

The point here is that the majority of production rate calibration data are from glacial deposits — they’re sites where glacial moraines with boulders on them have been independently radiocarbon-dated. Thus, the very non-uniform distribution of calibration data in scaling space just reflects the fact that most calibration data are from glacial deposits formed during or after the LGM, and these deposits, by nature, are lower in polar regions and higher in tropical regions. Here are the locations of Be-10 and Al-26 calibration data pasted onto the Denton-Broecker diagram:

[Figure: Be-10 and Al-26 calibration site locations plotted on the Denton-Broecker snowline diagram]

Although a lot of the Northern Hemisphere calibration sites are well below mountain snowlines because they are associated with ice-marginal deposits of the Laurentide and Greenland ice sheets (and there are a few sites that aren’t associated with glaciers or ice sheets at all), the calibration data basically follow the LGM snowline, because that’s where you find LGM and late-glacial moraines. Again, the main exceptions are in the He-3 data set (not shown here; see above), where there are a few sites that consist of dated lava flows or other volcanics at low elevation and low latitude.

So the production rate calibration data that we have, although fairly abundant now, are not, in fact, very evenly distributed across scaling space. The next question is, do we care? For us to care, two things need to be true.

First, we have to want to exposure-date something that is not near the Pleistocene snowline, e.g., something that is at high latitude and high elevation, or low latitude and low elevation. Do we want to do this? Consider what is by far the most common application of exposure-dating, which is, of course, dating glacial deposits. Like calibration data, glacial deposits tend to be near the global snowline, so we might not be worried about this. To find out whether we are worried, let’s look at the distribution of where glacial geochronologists actually collect exposure-age data.

[Figure: latitude-elevation distribution of exposure-age data from glacial deposits]

This shows the latitude-elevation distribution of exposure-age data associated with glacial deposits from both the ICE-D:ANTARCTICA database (green dots; includes measurements of many different nuclides) and a prototype of a similar database aimed at global exposure-age data for alpine glacier moraines (gray dots; includes Be-10 and Al-26 data from a future ICE-D:ALPINE database). As expected, most of these data that aren’t in Antarctica are clustered around and a bit below Pleistocene snowlines:

[Figure: the same exposure-age data overlaid on the snowline diagram]

Now add Be-10 and Al-26 calibration data to these data (without the Denton/Broecker diagram):

[Figure: exposure-age data together with Be-10 and Al-26 calibration data]

For the most part, the places where we want to actually apply exposure-dating to glacial deposits are located very close in scaling space to the calibration data that we have. That’s good. In most glacier-chronology situations, we don’t have to worry about extrapolating production rate estimates away from the calibration data. However, there are some exceptions. Of course, if we’re trying to exposure-date something that’s not a glacier — landslides, fault scarps, shoreline features, whatever — we could be anywhere in scaling space and could require large extrapolations. But even in the more restricted province of glacier chronology, the most glaring mismatch in this figure (there are others less glaring) is that there are a lot of situations where we want to date glacial deposits that are at relatively high elevations, 2000-3000 m, near both poles…but we don’t have any calibration data at all at high elevations near the poles. So the first requirement for whether or not we care about the restricted distribution of calibration data is, in fact, met. We definitely want to exposure-date things in parts of scaling space that are empty of calibration data.

The second criterion for whether or not we should be worried is whether we can trust the extrapolation of production rates from locations where they are well constrained by calibration data, near the global snowline, to locations that are not. We have scaling models that predict the variation in production rates everywhere in scaling space, not just near the calibration data. However, we can’t test whether they are correct in areas where we don’t have calibration data. So the question is, do we know that the scaling models are accurate in unsampled parts of scaling space, in particular at high elevations near the poles? Obviously, we can’t answer this question by comparing predictions to calibration data, because we don’t have any calibration data in that area. One alternative, but much less satisfying, way to answer the question is to ask whether different scaling models predict the same production rates for this region, or not. If they do, we might not be that worried; if they don’t, we might be more worried. Here is a comparison of production rate estimates for high latitude derived using the two most au courant scaling methods: that based on the work of Devendra Lal (the “St” scaling method in current jargon) and that based on the work of Nat Lifton and Tatsuhiko Sato (“LSDn”):

[Figure: predicted production rates at high latitude for St and LSDn scaling (left) and their ratio (right), as a function of elevation]

Here I have calibrated both scaling methods using the CRONUS-Earth “primary” calibration data set for Be-10, and plotted the resulting predictions for high latitude (zero cutoff rigidity, mean Holocene solar modulation parameter). Left panel shows the predicted production rates (at high latitude using an Antarctic atmosphere approximation) for the St (blue) and LSDn (red) scaling models; right panel shows the ratio of the two production rates. At low elevation, both scaling models predict production rates that are within a few percent of each other (because they are both pinned to calibration data at relatively low elevations at relatively high latitudes), but the predictions diverge at higher elevations. At 3000 m in Antarctica, LSDn scaling predicts ca. 15% higher production rates than St scaling (note that this is the same phenomenon discussed in this blog posting about saturated surfaces in Antarctica). This is a pretty big difference. As noted in the other blog entry about saturated surfaces, we can potentially use near-saturation Be-10 and Al-26 measurements from Antarctica to in part overcome the lack of independently dated calibration data in this part of scaling space. This exercise, I argue, indicates that LSDn scaling is accurate, and St scaling is not, for high latitude and high elevation. We could also potentially address this issue using samples that are saturated with respect to in-situ-produced cosmogenic C-14 in quartz, although this was attempted by Borchers and others and was inconclusive. Overall, however, this situation could be very much improved with new production rate calibration data from high elevation near the poles.

Summary: the existing set of production rate calibration data, although very far from uniformly distributed in scaling space, is pretty good if you are interested in using cosmogenic-nuclide exposure dating for glacier chronology, with one exception. The exception is that we could really use more calibration data from high elevation near the poles. This is actually a lot harder than it sounds, because there is, for the most part, not much in the way of organic life forms living at high elevation near the poles. By extension, there is not much available for radiocarbon dating, which makes it rather hard to independently date landforms. But this is clearly a problem that could use some creative ideas about how to overcome it.

What is the 26/10 production ratio anyway?

January 10, 2017

This rather long post covers the issue of the Al-26/Be-10 production ratio. The value of this ratio is important for several reasons.

First, it gives some information about whether a surface sample has experienced a single period of exposure at the surface. The basic, although slightly oversimplified, concept here is that if you measure a ratio that is a lot lower than the production ratio, your sample has spent a significant fraction of its life buried far enough below the surface to partially or completely stop the cosmic ray flux and allow previously produced nuclide inventories to decay…so if you compute the exposure age of the sample based on the assumption that it hasn’t been buried, you’ll be wrong.

Second, and more importantly, if you have a sample that is buried below the surface now, and you measure this ratio, the difference between the measured ratio and the production ratio tells you how long the sample has been buried. This concept, although also slightly oversimplified here, is the foundation of the method of burial dating.
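In the simplest case, where the sample is deeply buried and post-burial production can be ignored, the measured ratio decays away from the production ratio at a rate set by the difference of the two decay constants, so the burial age follows directly. Here is a sketch using commonly cited half-lives; real burial-age calculations also have to deal with the pre-burial exposure history and post-burial muon production:

```python
import numpy as np

# Simplest-case burial age: complete shielding, no post-burial production.
T_HALF_BE10 = 1.387e6   # yr (commonly used value)
T_HALF_AL26 = 0.705e6   # yr (commonly used value)
LAM_10 = np.log(2) / T_HALF_BE10
LAM_26 = np.log(2) / T_HALF_AL26

def burial_age(R_measured, R_production=6.75):
    """Burial age (yr) from a measured 26/10 ratio and an assumed
    surface production ratio."""
    return np.log(R_production / R_measured) / (LAM_26 - LAM_10)

# Hypothetical example: a measured 26/10 ratio of 4.0
print(burial_age(4.0))        # ~1.08 Myr if the production ratio is 6.75
print(burial_age(4.0, 7.0))   # ~1.16 Myr if the production ratio is 7
```

Note that the second line of output already previews why the rest of this post matters: a modest change in the assumed production ratio shifts the burial age by several percent.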

The currently accepted value of the 26/10 production ratio, when appropriately normalized to the “07KNSTD” and “KNSTD” isotope ratio standardizations for Be-10 and Al-26, respectively, is 6.75. Interestingly, an uncertainty is almost never quoted for or applied to this number. Also interestingly, as far as I can tell, this number is just taken from a paper written in 1989 by Kuni Nishiizumi and colleagues. They stated it as 6.1; restandardizing Be-10 measurements from the then-current “KNSTD” standardization to the now-current “07KNSTD” corrects it to 6.75. In the widely utilized 2008 paper describing the online exposure age calculators formerly known as the CRONUS-Earth online exposure age calculators, we just accepted this number, mainly because we concluded that production rate calibration data that were available at the time agreed with the Nishiizumi estimate, and we didn’t propose any change. Subsequently, most papers about burial dating (which is where this number is really important) adopted this value or something very close to it (exactly how you do the KNSTD-to-07KNSTD conversion affects the actual number at rounding error level). To summarize, we have all basically ignored this issue for 25 years. Why? Because 6.75 works. Most measurements on surface samples seem to more or less agree with this canonical value, and we have not ever found that burial ages computed using a ratio of 6.75 have been inconsistent with other age constraints.
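For the record, the restandardization arithmetic is trivial, assuming I have the usual conversion factor right (Be-10 concentrations shrink by a factor of 0.9042 going from KNSTD to 07KNSTD, while the Al-26 standardization is unchanged):

```python
# Restandardizing the Nishiizumi et al. (1989) ratio from the KNSTD to the
# 07KNSTD Be-10 standardization. Be-10 concentrations shrink by a factor
# of 0.9042, so the 26/10 ratio grows by the same factor.
R_KNSTD = 6.1
BE10_CONVERSION = 0.9042   # KNSTD -> 07KNSTD
print(R_KNSTD / BE10_CONVERSION)   # ~6.75
```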

Unfortunately, it may now be time to revisit this issue, because in recent years there has been (i) a lot more collection of production rate calibration data, as well as (ii) a lot more attention to first-principles modeling of production processes and their spatial variation. So in the rest of this post I’ll try to exploit this new information to figure out what the 26/10 production ratio really is, and, more importantly, to determine if we can just keep on not worrying about this issue as we have done successfully for the last 25 years.

To clarify what exactly we are talking about, in this post I am talking about the ratio of total Al-26 and Be-10 production rates at the surface, that is, including both spallation and muon production. In some work (including lots of mine) there is some confusion about whether an “Al-26/Be-10 production ratio” refers to spallogenic production only, or to total production. Fortunately, most of the time these are very close to the same thing, so any consequences of this confusion are small. But here, I am talking about total production.

The most straightforward way to determine the production ratio is simply to do what Nishiizumi and others did. Find some samples that have the following properties: they have been exposed at the surface for a single period of time; at the beginning of this period their nuclide concentrations were zero; surface erosion during the exposure period was negligible; and the duration of exposure is much less than the half-life of either Al-26 or Be-10. Then measure the concentration of both nuclides, and take the ratio. In the Nishiizumi paper these are samples of glacially polished bedrock from the California Sierra Nevada that have been exposed since deglaciation ca. 13,000 years ago.

The easiest way to find a larger set of samples that have these properties is, in general, to look at samples collected for production rate calibration purposes. Selecting samples suitable for measuring the 26/10 production ratio, of course, is less restrictive than selecting ones suitable for production rate calibration, because to measure the production ratio, you don’t have to know the exact duration of exposure, just that the exposure time has been a lot shorter than the Al-26 half-life. So lots of other samples that aren’t suitable for production rate calibration could be used to estimate the 26/10 ratio. However, existing production rate calibration data sets are a convenient way to find a set of samples that have been carefully selected to have simple exposure histories and negligible erosion. Looking in the ICE-D production rate calibration database and removing one extreme outlier measurement (26/10 = 14; not sure what went wrong here), one finds 90 measurements of both Al-26 and Be-10 on samples collected for calibration purposes. Each of these is a distinct measurement in which both nuclides were measured in a particular aliquot of quartz. Here is the distribution of 26/10 ratios in these samples:

[Figure: distribution of measured 26/10 ratios in calibration samples, two panels]

On the left this is shown as a histogram and on the right as a normal kernel density estimate. A couple of points about this:

The average of all these data (shown as a red line on both plots) is 6.76. OK, we’re done. Back to sleep for another 25 years.

Just kidding. We’re not quite done. One initial thing that is notable about this data set is that it is skewed to the low side. This observation is consistent with the easiest way to make an error when measuring the 26/10 ratio in a sample. Specifically, one measures the Al-26 concentration by multiplying a measured total amount of Al by a measured Al-26/Al-27 ratio. The total Al measurement is made by taking aliquots of a solution of quartz dissolved in HF, evaporating the HF in the aliquots, redissolving the Al in a dilute acid, and measuring the Al concentration by ICP. If you are not careful in this process, you can retain insoluble Al fluorides in the final step, which means that the Al concentration in solution underestimates the total amount of Al in the sample, and leads to an underestimate of the 26/10 ratio. The mild negative skewness of these data suggests that this may have been a problem for some of the measurements, in which case the mean value of these data might slightly underestimate the true production ratio (the median value is 6.84).
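For concreteness, the summary statistics quoted here are just the mean, median, and skewness of the set of ratio measurements. The array below is a made-up stand-in; the real measurements live in the ICE-D calibration database:

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for the 90 paired Al-26/Be-10 calibration measurements
ratios = np.array([6.9, 6.4, 7.1, 6.8, 5.9, 6.85, 7.0, 6.6, 6.2, 6.95])

print(ratios.mean())        # compare to 6.76 for the real data set
print(np.median(ratios))    # compare to 6.84
print(stats.skew(ratios))   # negative for a distribution skewed to the low side
```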

For a slightly different perspective on this, then, let’s try estimating the production ratio from calibration data using a slightly different workflow, as follows:

  1. Start with the “CRONUS-Earth primary calibration data sets” for Be-10 and Al-26, again as represented in the ICE-D calibration database.
  2. Using these data and the production rate calibration code from the version 3 online exposure age calculator, determine best-fitting values of reference production rates for Be-10 and Al-26. I’ll use two scaling methods: the non-time-dependent “St” scaling and the time-dependent “LSDn” scaling.
  3. Also compute production rates due to muons for Al-26 and Be-10; add production rates by spallation and muons together; divide to get an estimate of the 26/10 production ratio.

The main difference between this method and simply averaging all direct measurements of the 26/10 ratio as I did above is just that it allows us to incorporate data from calibration samples where only one nuclide was measured: we can utilize Be-10 data that lack corresponding Al-26 measurements, and vice versa. In addition, all measurements represented in the CRONUS-Earth “primary” data set are relatively recent, meaning that the folks who did them certainly should have been careful about insoluble fluorides. In any case, this workflow uses somewhat different data and should yield an estimate of the 26/10 production ratio that is at least partly independent of the simple average above.

On the other hand, it brings up something new. The “St” scaling method, as implemented in the various online exposure-age calculators, assumes that Al-26 and Be-10 have the same latitude/elevation scaling, but the proportion of total production that is due to muons will vary with elevation and latitude (mostly with elevation). In addition, the relative proportions of fast muon production and negative muon capture change with elevation. As these three production pathways (spallation, fast muons, negative muon capture) all have different 26/10 production ratios, this means that the total 26/10 production ratio must vary with elevation (and a little with latitude as well). The point is that what we know about production rate scaling predicts that the 26/10 production ratio isn’t a constant value as we have been assuming, but will vary with elevation.
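To make the bookkeeping explicit, the total production ratio is just a production-weighted average of the pathway-specific ratios, so it has to change as the muon fraction changes with elevation. The numbers in the following sketch are illustrative placeholders rather than calibrated values; only the structure of the calculation matters here:

```python
def total_ratio(R_sp, R_mu, f_mu):
    """Total 26/10 production ratio, where R_sp and R_mu are the
    spallogenic and muogenic 26/10 ratios and f_mu is the fraction of
    total Be-10 production due to muons."""
    return (1 - f_mu) * R_sp + f_mu * R_mu

# Placeholder numbers only: a muogenic ratio that differs from the
# spallogenic one, and a muon fraction that shrinks with elevation.
for elev, f_mu in [("sea level", 0.03), ("3000 m", 0.01)]:
    print(elev, total_ratio(6.75, 8.0, f_mu))
```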

This variation is predicted to be close to negligible for the “St” scaling method: muon production makes up at most 2 or 3 percent of total production, so variations in this already very small fraction result in only per-mil-level variations in the production ratio. Thus, for this scaling method, we can mostly disregard this issue. The “LSDn” scaling method, in contrast, predicts not only variation in the muon contribution, but also, more importantly, that spallogenic production of Al-26 and Be-10 will scale differently with magnetic field position and elevation. The reason for this is that this scaling method uses the results of an atmospheric particle transport model to predict the energy spectrum of fast neutrons at your location, and then multiplies this spectrum by a set of cross-sections for Al-26 or Be-10 production. These cross-sections depend on energy, and they’re different for the two nuclides. The following shows the neutron cross-section estimates used in LSDn scaling, as a function of neutron energy:

[Figure: neutron interaction cross-sections for Al-26 and Be-10 production as a function of neutron energy]

Green is the neutron interaction cross-section for production of Al-26 from Si; blue is Be-10 from O; red is Be-10 from Si. The threshold energy for Al-26 production is lower than that for Be-10, so in parts of the atmosphere where neutron energy is lower (that is, at greater atmospheric depths or lower elevations), Al-26 production is favored relative to Be-10 production, and the 26/10 production ratio is predicted to be higher. So once we know that Al-26 and Be-10 have different threshold energies for production (we’ve known this for a while), this implies that the spallogenic production ratio has to vary with location; it can’t not do so. Similar work using an atmospheric particle transport model by Dave Argento (here) also highlighted the prediction that the 26/10 production ratio should change with elevation; he calculated that the spallogenic production ratio should be near 7 at sea level and 6.75 at high altitude.
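The calculation described in the last couple of sentences amounts to numerically integrating the neutron flux spectrum against each excitation function. Here is a cartoon version with completely made-up cross-sections and spectra, just to show that a lower threshold energy for Al-26, combined with a softer spectrum deeper in the atmosphere, necessarily pushes the predicted 26/10 ratio up at low elevation. Only the direction of the effect is meaningful here, not the absolute numbers:

```python
import numpy as np

# Cartoon of the LSDn-style spallogenic production calculation: fold a
# neutron flux spectrum with an excitation function for each nuclide.
# Everything below is a placeholder; the real calculation uses tabulated
# cross-sections and a modeled atmospheric neutron spectrum.
E = np.logspace(0, 4, 500)                 # neutron energy grid (MeV)
sigma_26 = np.where(E > 5.0, 50.0, 0.0)    # placeholder Al-26 excitation function (lower threshold)
sigma_10 = np.where(E > 15.0, 8.0, 0.0)    # placeholder Be-10 excitation function (higher threshold)

for label, slope in [("harder spectrum (higher elevation)", -1.5),
                     ("softer spectrum (lower elevation)", -2.0)]:
    flux = E ** slope                      # placeholder neutron flux spectrum
    P26 = np.trapz(flux * sigma_26, E)     # relative Al-26 production
    P10 = np.trapz(flux * sigma_10, E)     # relative Be-10 production
    print(label, P26 / P10)                # the softer spectrum gives the higher ratio
```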

The main point here is that first-principles physics and calculations based thereon predict that the 26/10 production ratio should vary with location. So let’s look at the data set of measured ratios (the data presented in histogram form above) as a function of elevation, to see if this variation is evident. Here are measured ratios plotted against elevation:

[Figure: measured 26/10 ratios plotted against elevation]

Here are the same data with (i) the average (6.76), and (ii) a linear regression in elevation (6.94 – 1.022e-4 * elevation) plotted in red:

[Figure: measured 26/10 ratios vs. elevation with the mean and the linear regression in red]

Although the correlation between elevation and measured 26/10 is weak (correlation coefficient is -0.25; p = 0.02), it’s not zero. And a simple linear regression through these data agrees very closely with the prediction in the Argento paper.
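If you want to reproduce that sort of fit, it is a one-liner with scipy. The arrays below are placeholders rather than the actual data, which, again, are in the ICE-D calibration database:

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for the measured 26/10 ratios and the
# elevations (m) of the corresponding calibration sites.
elevation = np.array([150., 480., 950., 1300., 2200., 3100., 3600., 4300.])
ratio = np.array([7.05, 6.90, 6.85, 6.80, 6.72, 6.60, 6.55, 6.50])

fit = stats.linregress(elevation, ratio)
print(fit.intercept, fit.slope)   # compare to 6.94 and -1.022e-4 for the real data
print(fit.rvalue, fit.pvalue)     # correlation coefficient and p-value
```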

This shows that the data are consistent with an elevation dependence to the production ratio, even though they don’t require it at very high confidence. But to back up a bit, this whole discussion of elevation dependence interrupted what we were trying to do: estimate the 26/10 production ratio by separately calibrating the Al-26 and Be-10 production rates. Having now shown that no matter how we do this we predict some geographic variation in the ratio, we’ll continue to look at the results as a function of elevation. Here is the result for the ‘St’ scaling method:

[Figure: measured ratios vs. elevation with the 26/10 ratio predicted by ‘St’ scaling]

Although it is a bit hard to tell this from the figure, there are separate green and blue dashed lines showing predictions for low latitude (high magnetic cutoff rigidity) and high latitude (low cutoff rigidity), respectively. As discussed above, although geographic variation is predicted for this scaling method, it’s very small. Reference production rates fit to the calibration data predict a 26/10 production ratio of 7, which is, in fact, a bit higher than the estimate we obtained above from simply averaging the direct measurements. Again, what seems to be the most likely explanation for this is just that there are more data involved, and we may have excluded some of the low outliers by choosing a more recently generated data set.

Here is the result for the “LSDn” scaling method (again, this is total production including muons):

[Figure: measured ratios vs. elevation with the 26/10 ratio predicted by ‘LSDn’ scaling]

As discussed, this scaling method predicts a larger elevation dependence as well as a larger difference between high (green) and low (blue) cutoff rigidity. Again, it’s in decent agreement with the simple linear regression on these data as well as the predictions in the Argento paper (the latter isn’t surprising; they’re based on similar physics). To summarize, here are all of these predictions plotted together with the data:

[Figure: all predicted 26/10 ratios plotted together with the measured data]

My summary of this is as follows. The data set of direct ratio measurements on individual samples is, by itself, consistent with the canonical value of 6.75, although there is some evidence that this number might be skewed a bit low due to measurement errors. Basic physics of Al-26 and Be-10 production, on the other hand, indicates that the production ratio should be elevation-dependent. Scaling models that use this physics (e.g., LSDn), when fitted to calibration data, predict that the ratio should be in the range 7-7.1 at low elevation and in the range 6.5-6.75 at high elevation; these predictions are also completely consistent with the data set of direct measurements. Because the data set is rather scattered, mainly just due to measurement uncertainty, it does not appear to provide a means of choosing between scaling models at high confidence. So at this point, it appears that (i) simply assuming 6.75 always, and (ii) using an elevation-dependent value as predicted by LSDn scaling, are both consistent with available data. Choose (ii) if you like physics, and (i) if you like simplicity. Overall, I think I lean towards (ii), that is, physics.

For perspective, the difference between 6.75 and 7 is 3.7%. That’s not insignificant — it maps to age differences of 5-15% in typical burial-dating applications — but we’re haggling over fairly small differences here.

Finally, for completeness I’m going to cover one more source of indirect evidence for the value of the production ratio. This stems from the observation that if the production ratio at low elevation is 7 rather than 6.75, we should probably recalculate a bunch of previously published burial ages. Thus, it might be a good idea to look into whether changing the production ratio causes any serious problems with existing burial-dating results. One could pursue this either by recalculating a bunch of burial ages and seeing if you find any conflicts with independent age constraints, or by a more explicit method: if (i) you have a buried sample, (ii) you know its age independently, and (iii) you know the decay constants for Al-26 and Be-10, then you can measure both nuclides in your sample and back-calculate the surface production ratio. This latter method features in a recent paper by Zhou and others, who burial-dated a fluvial gravel whose age was bracketed independently by argon-dated lava flows. They back-calculated the production ratio in this way to be between 6 and 7.2 (at 68% confidence), which is not particularly precise, but shows good agreement with predictions and surface measurements. Good. Pass. On the other hand, in a 2010 paper about glacial stratigraphy in the central US, we dated a paleomagnetically normal unit three times by the burial-isochron method (this is the “Fulton” till in that paper). Magnetically normal means it must be younger than the Brunhes-Matuyama reversal at 0.78 Ma; we dated it at 0.8 +/- 0.05 Ma, which is just barely consistent with the constraint. Assuming the 26/10 ratio to be 7 rather than 6.75 would make that age estimate older, 0.87 Ma, which would be badly inconsistent with the paleomagnetic stratigraphy. But — hold the presses — in the time period between that paper and now there’s also been a change in best estimates for muon interaction cross-sections, which acts to bring that age estimate back down to 0.81 (still +/- 0.05) Ma, and this remains consistent with the paleomagnetic boundary. So this highlights that there are a lot of moving parts involved in recalculating old burial age data, but in the one case I can immediately think of in which a burial age estimate bumps up against an independent age closely enough to provide a serious constraint on the 26/10 production ratio, the hypothesis that the production ratio is near 7 at sea level passes.
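The back-calculation in that last approach is just the simple burial-age equation run in reverse. The following is a cartoon with the same simplifying assumptions as the burial-age sketch earlier in this post (complete shielding, no post-burial production), so it is not the actual method used in the Zhou et al. paper, which also has to deal with those complications:

```python
import numpy as np

LAM_10 = np.log(2) / 1.387e6   # Be-10 decay constant, 1/yr (commonly used half-life)
LAM_26 = np.log(2) / 0.705e6   # Al-26 decay constant, 1/yr (commonly used half-life)

def surface_production_ratio(R_measured, burial_age_yr):
    """Back-calculate the surface 26/10 production ratio from a measured
    ratio in a sample whose burial age is independently known."""
    return R_measured * np.exp((LAM_26 - LAM_10) * burial_age_yr)

# Hypothetical example: a measured ratio of 3.5 in a gravel independently
# bracketed at 1.5 Myr old
print(surface_production_ratio(3.5, 1.5e6))   # ~7.2
```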

So what ratio to use for burial dating? As noted above, physical arguments rather compellingly favor an elevation dependence for the ratio. If this is true, and the average ratio computed from a set of samples that span a range of elevations is 6.75, then the true production ratio must be higher at low elevation (conveniently, Nishiizumi and others made their measurement at middle elevations). I think it seems sensible to include this effect in burial-age calculations.

P.S.: I still haven’t addressed the issue of what the uncertainty in the production ratio is.