
What color is beryllium-10?

October 8, 2018

This posting relates to some planned upgrades and additions to the ICE-D:ANTARCTICA website and the related parts of the online exposure-age calculators that have to do with diagrams, figures, and images. Even now, the display of some data sets via this website can produce a somewhat bewildering array of diagrams, figures, and images that are supposed to present exposure-age data in some way. Examples include the neat-looking but largely unexplained and unintelligible front page of the website:

Age-elevation plots:

Carbon-14 saturation plots:

And, in future, possibly extremely complex data-model comparison plots associated with this project. To make this proliferation of plots a little less intimidating, it seemed like a good time for me and BGC postdoc Perry Spector, who is responsible for the data-model comparison project, to at the very least come up with a standardized color scheme for plotting measurements of different cosmogenic nuclides together on the same images. Hence the need to determine what color beryllium-10 is, along with all the other nuclide/mineral pairs, which for practical purposes here include He-3 in a variety of mineral targets, Be-10 in quartz, C-14 in quartz, Ne-21 in quartz, Al-26 in quartz, and Cl-36 in many possible targets. So how to do this? Here are some general principles.

  1. In reality, most of these things are either colorless (noble gases) or grayish (metals), with the exception that chlorine is a yellow-green gas if found in immediately-fatal concentrations at normal P-T conditions. So that’s not too helpful.
  2. There should be some kind of sense to the color-nuclide pairing, e.g., colors in spectral order should pertain to nuclides in mass order, or something like that.
  3. Obviously, neon-21 has to be neon (the preceding word, which is probably unreadable, is the word “neon” rendered in a neon color). This was a critical new insight stemming from our discussion. How was this not obvious before?
  4. The colors shouldn’t look terrible (except for neon-21 as noted above), and should reproduce OK for print and screen use.
  5. Color assignments should be kind to green/red colorblind viewers to the extent possible, which may be simplest to accommodate by using one, but not both, of those colors.
  6. Black probably shouldn’t be used, because it will conflict with other plot elements, grids, etc.

So here’s our attempt to do this:


This scheme does a pretty good job of overlapping with the goals above, as follows, with a couple of additional features:

  1. Helium-3 data are the same tone (gray) but different values (light and dark) for different minerals. This is helpful in de-emphasizing He-3-in-quartz data, which are often uninterpretable due to diffusive loss of helium. If helium-3-in-quartz is one of the nuclide/mineral pairs being plotted, you probably want to pay more attention to whatever the other ones are.
  2. Neon-21 is neon.
  3. No green. No black.
  4. The colors are more or less in spectral order as you go up in mass.
  5. It’s not terribly ugly, with the exception of neon-21.

GMT is being used to make all these plots, so here are the HSV codes for these colors for use in GMT scripts:


Nuclide/mineral   Face color      Outline color
He-3 (qtz)        0-0-.78         0-0-.28
He-3 (px/ol)      0-0-.6          0-0-.1
Be-10 (qtz)       0-.7-.97        0-.7-.47
C-14 (qtz)        36-.85-1        36-.85-.5
Ne-21 (qtz)       65-.75-1        65-.75-.5
Al-26 (qtz)       211-.72-.90     211-.72-.4
Cl-36 (all)       291-.48-.65     291-.48-.15
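
For anyone who wants to reuse this scheme outside GMT, here is a minimal MATLAB sketch converting the face colors to RGB. This is an illustration only; MATLAB's hsv2rgb wants hue, saturation, and value all on 0-1, so the GMT-style hue gets divided by 360.

```matlab
% Sketch: the face colors above as MATLAB RGB triplets. The outline colors
% can be converted the same way.
face_hsv = [  0 0.00 0.78;   % He-3 (qtz)
              0 0.00 0.60;   % He-3 (px/ol)
              0 0.70 0.97;   % Be-10 (qtz)
             36 0.85 1.00;   % C-14 (qtz)
             65 0.75 1.00;   % Ne-21 (qtz)
            211 0.72 0.90;   % Al-26 (qtz)
            291 0.48 0.65];  % Cl-36 (all)
face_rgb = hsv2rgb([face_hsv(:,1)./360 face_hsv(:,2:3)]);
```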


Thus, expect to see this color scheme in anything generated by the ICE-D web server and online exposure age calculators in future.

One thing this does leave unaddressed is two-nuclide plots:

Obviously, you know which two nuclides are being plotted by reading the axis labels, so it is not necessary to color-code the nuclides. Instead, what is being color-coded here is the scaling method used to normalize the data to unit production rate. Red is non-time-dependent “St” scaling (Lal, 1991 and Stone, 2000); green is the “Lm” time-dependent adaptation of Lal/Stone scaling from Balco et al. (2008), and blue is the time-dependent “LSDn” scaling developed by Nat Lifton and others. This particular plot shows data from Antarctic samples with very high nuclide concentrations, so the tiny red and green ellipses out to the right in the “forbidden zone” highlight the difference in performance between St/Lm and LSDn scaling for high-elevation Antarctic sites. For these plots the R-G-B order is just inherited from the 2008 online calculators and represents the increasing order of complexity of the scaling methods, but it has the disadvantage of including both red and green. So this may have to be fixed in the near future as well.

Summary: the color scheme is a bit uglier but now standardized and hopefully more sensible.


The “fancy-pants” camelplot

September 25, 2018

This follows up on a previous post that describes the application of the normal kernel density estimate, also sometimes known as a “camel plot,” as a visual aid for displaying cosmogenic-nuclide exposure-age data. This type of diagram has one interesting problem that arises when it is used to display data that cover a wide range of ages. As covered at some length in the previous post, one useful aspect of this diagram for visual communication of data is that, as usually constructed, the Gaussian kernel drawn for each measurement is normalized to unit area. Thus, a more precise measurement will produce a narrower and taller kernel than a less precise one, which draws the viewer’s eye to the more precise measurement and provides a neat and effective way of rapidly communicating the difference in precision between measurements.

The problem with this is that, again as these figures are usually drawn, the normalization to unit area is linear in real units, i.e. units of years, rather than in relative units. Thus, suppose you have two measurements that have very different ages, say 10,000 and 100,000 years, but the same relative precision, say 3%. Even though the relative precision is the same, the uncertainty in the older age is larger in units of years; for this example the uncertainties are 300 years and 3000 years. Thus, the kernel for the older age spans more years, so normalizing to unit area involves dividing by a larger factor. The result is that even though both ages have the same relative precision, the kernel for the older age is shorter than the one for the younger age. Here is what this example looks like:

The ages have the same relative precision, but they don’t look like it. Depending on what you are trying to communicate with this diagram, this might not be what you want: the difference in size signals to the viewer that the older age is less important or less reliable than the younger one.

So one might want to correct this. Here is one way to do it. The equation for a Gaussian probability distribution is:

p(x) =  \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp{\left[ \frac{-\left(x-\mu\right)^{2}}{2\sigma^{2}}\right]}

where \mu is the age, \sigma is the uncertainty, and the factor of 1/\sqrt{2\pi\sigma^{2}} is the normalization needed to make each kernel have unit area. So the maximum height p_{max} of each normalized Gaussian, i.e., its value when x = \mu, is:

p_{max} = \frac{1}{\sqrt{2\pi\sigma^{2}}}

To correct our problem, we want this not to vary with the age. A simple way to do this is to assume that all measurements have the same relative uncertainty — for example, a typical analytical uncertainty of 3% — and compute the expected height of the kernels as a function of the age. This expected height p^{*}_{max}(\mu)  is:

p^{*}_{max}(\mu) = \frac{1}{\sqrt{2\pi\left(0.03\mu\right)^{2}}}

Then, if we want to adjust the visuals so that measurements with the same relative precision, but different ages, have the same kernel height, we can just normalize again by this expected height. Then the kernels plotted on the diagram have the form:

p(x) =  \frac{1}{p^{*}_{max}(\mu)}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp{\left[ \frac{-\left(x-\mu\right)^{2}}{2\sigma^{2}}\right]}

This loses the normalization to unit area, but that isn’t relevant in most applications of this diagram for visual presentation of data. What it does do is ensure that measurements with different ages, but the same relative uncertainty, have the same kernel height. So the example above now looks like this:

[Figure: fcp2]

Measurements with different relative uncertainties will still have different heights. If the older measurement now has 6% uncertainty:

[Figure: fcp3]

If the younger has 6% and the older has 3%:

[Figure: fcp4]

So this formulation gets rid of the problem that two measurements with the same relative precision, but different age, will appear to have different importance, while retaining the effect that measurements with different relative precision will have different heights. It will have a minimal effect on the appearance of this figure for a data set if all the ages are similar; it only changes the appearance for data sets with a very wide range of ages. It does have the side effect that the area under each kernel scales with the age, so older ages may visually appear very large; this may or may not be a problem depending on what you are trying to do. It doesn’t affect the basic function of the camel diagram, which is to communicate whether ages do or don’t agree with each other, something that is difficult to do accurately with a histogram.

Note that this modification is intended to solve a visual-communication problem, and not to actually generate a quantitative probability density estimate. So you would not want to use the results of this procedure for subsequent likelihood estimates or anything of that nature.
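
For concreteness, here is a minimal MATLAB sketch of the rescaled-kernel calculation described above. It assumes vectors of ages t and one-sigma uncertainties dt, in years, and a 3% typical relative uncertainty; the actual fancy_pants_camelplot.m linked below may differ in the details.

```matlab
% Sketch of the "fancy-pants" camel plot: each unit-area Gaussian kernel is
% divided by the peak height expected for a 3% measurement at that age, so
% equal relative precision gives equal height.
t  = [10000 100000];                    % example ages (yr)
dt = 0.03.*t;                           % example 1-sigma uncertainties (yr)
x  = linspace(0, 1.2*max(t), 5000);     % age axis for plotting
y  = zeros(size(x));
for k = 1:length(t)
    pk = (1./sqrt(2.*pi.*dt(k).^2)).*exp(-(x - t(k)).^2./(2.*dt(k).^2)); % unit-area kernel
    pmax_star = 1./sqrt(2.*pi.*(0.03.*t(k)).^2);  % expected peak height for a 3% measurement at this age
    y = y + pk./pmax_star;                        % rescaled kernel
end
plot(x, y); xlabel('Exposure age (yr)'); ylabel('Relative probability (rescaled)');
```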

The MATLAB code to do this is here:

fancy_pants_camelplot.m

The name is terrible, but it does give the idea that this is both kind of complicated and also a little bit fake at the same time, which is probably not a bad description and may help as a reminder that it’s designed for a visual effect and not a quantitative probability estimate. Other options might be pretentious_camelplot.m, uppity_camelplot.m, or too_big_for_britches_camelplot.m.


Where’s the reference production rate?

September 6, 2018

Recently I received the following question in an email:

I used the CRONUS earth calculator v2.3 to calculate the Be-10 exposure ages of the samples from [some location], but I did not find the numerical value of Be-10 reference production rate used in CRONUS v2.3. On the webpage it only said it is “derived from the primary calibration data set of Borchers and others (2016)”. Can you kindly tell me the value ? Thank you very much.

This also comes up in a lot of paper reviews. Here is an example. The text of the paper is as follows:

Reference Be-10 production rates were determined using the CRONUS-Earth “primary” calibration dataset (Borchers et al., 2016).

And the reviewer said:

…regardless of citing the source, for completeness the SLHL 10Be prod rate should be listed ( with its error)…

The purpose of this posting is to explain why one should state the production rate calibration data set used, but one should not state a value of the reference production rate. As usual, it’s a long posting, and it’s tedious because I’m going to try to do this just in words and not in math, but getting this right is important in understanding how exposure dating works and accurately communicating it in papers.

To begin with, I’ll review the overall structure of an exposure-age calculation one more time. To compute a cosmogenic-nuclide exposure age, you need three things. One, a scaling method that describes how production rates vary with location and time. Two, a set of calibration data that consists of nuclide concentrations measured at sites whose exposure age is already known from a different, independent, dating method such as carbon-14 or argon-argon, such that we know what the true average production rate at those sites has been during the period they were exposed. Three, the nuclide concentration measured at the unknown-age site that we wish to date. Having these three things, one takes the following steps.

  1. Fit the scaling method to the calibration data set. In most cases, the scaling method generates a nondimensional “scaling factor” describing how the average production rate for a particular place and age is related to the production rate at some reference location and time. So for each sample in the calibration data set, the scaling method predicts a scaling factor for that place and exposure age. We then perform a one-parameter fitting exercise in which we choose a value of the “reference production rate” — the production rate at the reference location that is used in that scaling method — that, when multiplied by the scaling factors for all the calibration sites, best duplicates the actual production rates that we measured at those sites. A simple way to think about this is to consider a particular site: at this site we divide the true production rate that we measured by the scaling factor predicted by the scaling method; the result of this calculation is an estimate of the production rate at the reference location. If we have a number of different calibration sites, we have a number of different estimates of the reference production rate, and (given certain assumptions) the value of the reference production rate that best fits the entire calibration data set is just the average of the estimates from each site.
  2. Now that we have an estimate of the reference production rate, we take the scaling factor predicted for the unknown-age site and multiply it by the best-fitting reference production rate in step (1). Then we use this production rate to compute the exposure age from the measured nuclide concentration. Note that for a scaling method in which the production rate varies in time, this calculation must be iterated rather than done only once, but the principle is the same.
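
To make the arithmetic concrete, here is a toy MATLAB sketch of this two-step procedure. All the numbers are invented, there is a single nuclide, and time dependence and radioactive decay are ignored, so this is not any particular scaling method or implementation; it just illustrates the structure of the calculation.

```matlab
% Step 1: production rate calibration.
P_meas = [12.1 8.3 21.7];   % hypothetical measured average production rates at calibration sites (atoms/g/yr)
S_cal  = [3.0 2.1 5.5];     % hypothetical scaling factors predicted for those sites
Pref_est = P_meas./S_cal;   % one estimate of the reference production rate per site
Pref = mean(Pref_est);      % best-fitting reference production rate (the fitted parameter)

% Step 2: apply to an unknown-age site.
N = 2.8e5;                  % hypothetical measured nuclide concentration (atoms/g)
S_site = 1.7;               % hypothetical scaling factor for the unknown-age site
t = N./(Pref.*S_site)       % exposure age (yr), ignoring decay, erosion, and time dependence
```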

There are several important aspects of this procedure.

One, nowhere in this procedure have we ever measured the reference production rate. The actual measurements that we made are the average production rates at the calibration sites. None of the calibration sites are at the reference location (except by accident). For some scaling methods, the reference location is defined such that it does not exist at any real place and time. Thus, the “reference production rate” is not a measurement. It is a parameter estimate that is the result of a model-fitting exercise, and what the parameter actually means is chosen by convenience and not for any physically meaningful reason.

Two, different scaling methods have different reference locations. Historically, for scaling methods (primarily that of Lal, 1991) in which production rates are assumed to be constant over time, the reference location was defined to be at “sea level and high latitude.” What this means is kind of vague, but usually it is assumed to mean something like (i) an atmospheric pressure of 1013.25 hPa (the defined international standard “sea level pressure”, although this is not the actual atmospheric pressure at sea level anywhere, except by accident), and (ii) geomagnetic latitude above about 60°, where the variation in magnetic field strength with latitude becomes negligible. There is no reason the reference location has to be at “SLHL.” It could be on the lawn of the Mar-a-Lago Club in Palm Beach, Florida, or the summit of Mont Blanc. It could be anywhere. For more recent scaling methods, in particular that of Lifton and others (“LSD” or “LSDn”) that is currently in common use, the production rate is predicted to change over time with changes in the magnetic field strength and solar output. This makes it necessary to choose both a reference location and reference values of magnetic and solar properties for the reference location. For the LSDn method, this is typically taken to be 1013.25 hPa, 0 GV, and a zero value of a parameter describing solar modulation. This is a different reference condition than the “SLHL” reference condition historically used for non-time-dependent scaling. Thus, even if we used the same calibration data set and both scaling methods were perfectly accurate, the best-fitting “reference production rates” to each scaling method would be different, because they pertain to different reference conditions. They could not be compared to each other, because they are not estimates of the same thing. They are estimates of different things.

Three, different implementations of the same scaling method will yield different best-fitting estimates of the “reference production rate” for that scaling method. All the commonly used scaling methods are numerically complicated, and different people have written different computer codes that use different numerical approximations, integration methods, interpolation methods, etc. to perform the calculations. In general, if you take one calibration data set and two implementations of the same scaling method and you perform both steps (1) and (2) above, you will get the same exposure age for the unknown-age site. However, the intermediate value you have computed — the best-fitting value of your fitting parameter — will probably not be the same (except by accident). Thus, if you perform step (1) with one computer code, but then take the resulting fitted value of the reference production rate and perform step (2) with a different computer code, you will probably get the wrong exposure age for the unknown-age site.

So what does this mean for how you should describe how exposure-age calculations were carried out?

Historically, when most exposure-age calculations were carried out using the non-time-dependent scaling method of Lal (1991) or the slightly different implementation of the same method by Stone (2000), it was common in papers for authors to report the value of the reference production rate that they used to compute the exposure ages. Because the calculations involved in these scaling methods are relatively simple, it was possible for readers to easily produce their own implementation of the scaling method, take the value of the reference production rate stated in the paper, and closely reproduce the authors’ calculations.

With more complex, time-dependent, scaling methods in use at present, this is much more difficult. Software implementations of the LSDn scaling method involve thousands of lines of computer code and extensive background data sets describing paleomagnetic and solar variability as well as particle interaction cross-sections, all of which are updated periodically. All such implementations are slightly different. As noted above, if you take a fitted reference production rate derived, as in step (1) above, using one implementation, and then perform step (2) with a different implementation, you will generally get the wrong answer. In addition, different implementations (e.g., “CRONUSCalc” and “the online exposure age calculator formerly known as the CRONUS-Earth online exposure age calculator”) define different reference states for some scaling methods (or don’t define a reference state at all and use a different fitting parameter entirely), so the best-fitting values of the “reference production rates” are not even estimates of the same thing.

What this means for documenting exposure-age calculations in papers or elsewhere is that reporting the value of the reference production rate that you used doesn’t help readers to reproduce your calculations. Although it is certainly possible for your readers to obtain the exact computer code that you used, this is, in general, difficult due to the continued evolution and improvement of both the scaling methods themselves and the background data. Reporting a value for the reference production rate, in contrast, has two negative effects. First, it gives readers the impression that the value of the reference production rate is a measured physical constant. As discussed above, this is not true. Second, it creates a situation where readers who want to replicate your calculations can only perform step (2) above; they cannot perform step (1). Thus, they cannot start from the same actual measurements that went into your calculations and therefore cannot correctly and consistently reproduce your calculations.

What does help readers to correctly and consistently reproduce your calculations is to report what calibration data set you used. Remember, the calibration data set consists of actual measurements and observations. Unlike the reference production rate, these measurements and observations are the same no matter what scaling method you are using. They are the same no matter what numerical implementation of a particular scaling method you are using. They are the same no matter what the “reference location” is. It is easy to obtain these measurements; all published calibration data sets are publicly accessible in a standardized form at this website. If you tell readers what calibration data set you used, they can reproduce your exposure age calculations with any implementation of any scaling method in an internally consistent way that begins with the same source measurements. They can perform both steps (1) and (2) above, BOTH of which are necessary for an internally consistent exposure-age calculation, instead of only step (2). Although it may seem difficult or intimidating for readers to perform both a production rate calibration and an unknown-age calculation, this task is simple with existing online exposure age calculation systems: CREp allows a choice of a wide variety of published calibration data sets; the online calculator formerly known as the CRONUS-Earth online calculator allows production rate calibration with any calibration data set.

A final point is that, again historically and dating from times where the non-time-dependent Lal scaling was nearly universally used, readers who were familiar with the cosmogenic-nuclide literature could generally figure out from the value of the reference production rate how the ages in that paper would compare to ages in a different paper. For example, prior to about 2000, it was generally thought that the value of the reference production rate appropriate to the implementation of the Lal scaling method that was in wide use at that time was about 5 atoms/g/yr; after 2000 it became clear that the correct value was closer to 4. Of course, this meant that previously measured exposure ages were, in fact, much older than had been stated in the older publications, and readers got in the habit of checking what reference production rate had been used as a means of comparing study results to each other: a moraine given as 12 ka in a study that used 5 atoms/g/yr would be equivalent to 15 ka in a study that used 4. This period of time in the early 2000’s during which most papers used the same scaling method, but different reference production rates, conditioned readers to expect to see the value of the reference production rate somewhere in the methods section. The situation is different now: differences between time-dependent and non-time-dependent scaling, as well as, often, differences between implementations of the same method, are commonly more important than differences in estimated values for reference production rates, and in addition these values can’t even be compared for different implementations. It’s still important for readers to get an idea of the effect of different assumptions on exposure ages reported in a paper, but readers can’t get that information any more from the value of the reference production rate by itself. If you want to tell the reader how differences in production rate calibration affect the end results, then tell them! For example, “if we use calibration data set A, my moraine is 12 ka; if we use calibration data set B, it’s 13 ka.” This is the information the reader wants, so give it to the reader directly, not indirectly through a “reference production rate.” In addition, of course, the reader can easily figure this out for themselves using various online exposure age calculators.

To summarize,

  1. Computing an exposure age is a two-step process that starts with a calibration data set. The calibration data set consists of actual measurements. The “reference production rate” is an inconsistently defined, and never actually measured, parameter that is estimated halfway through this process. Its value is completely dependent on the scaling method used and the specific implementation of that scaling method; it is not a measured physical constant.
  2. Emphasizing the “reference production rate” in data reporting or in a description of calculations does not, in fact, help readers to correctly reproduce the calculations in a paper. In fact, it is more likely to lead to internally inconsistent, and likely misleading, reproduction of the calculations.
  3. In contrast, reporting the calibration data set that was used does allow the reader to reproduce calculations in an internally consistent way.

To say this a slightly different way, there are two independent ingredients that go into computing an exposure age: a calibration data set and a particular implementation of a particular scaling method. The reader needs to know what these are. The reference production rate isn’t one of these things, and the reader doesn’t need to know it.


Antarctic subglacial sediment has almost no beryllium-10. Repeat: Antarctic subglacial sediment has almost no beryllium-10*.

June 26, 2018

Pinned on the bulletin board in my office, between the Chart of the Nuclides and a New Yorker cartoon showing a cutaway image of the Earth labeled “Crust. Mantle. Chewy Nougat Center,” is a 2013 article in the Economist entitled “How Science Goes Wrong.” It summarizes reviews in biomedical sciences, mostly in the areas of psychology and pharmaceutical research, that showed that a disproportionate number of results in these fields were not reproducible: when other researchers tried to repeat the studies, they got different results. The article highlights funding and publication incentives that lead to overvaluation of results that are interesting and surprising, but irreproducible and likely erroneous. Since then, these studies have prompted a lot of reexamination of funding and publication practices in biomedical fields, and development of some new programs, mainly led by NIH, to do a better job of replicating important results. It is very likely that in Earth science we do an even worse job of ensuring that important results are reproducible, because we share the same incentive structures that discourage funding and publication of replicate studies, and then we add additional obstacles of fieldwork in remote regions, unique samples or data sets from inaccessible places that would be very hard to obtain again, and the fundamental problem that you can’t go back and re-run events in Earth history. But even given these constraints, we don’t do a great job of this and we should do better.

So in this post I am highlighting a study that we really know is reproducible, because it’s been done twice. It’s a recent paper in Nature by Jeremy Shakun and a variety of co-authors that shows, basically, that there is almost (although not quite) zero beryllium-10 in quartz from the AND-1B core in McMurdo Sound, Antarctica. Taken in context, this isn’t particularly surprising, because this core is in 900 meters of water and the material in the core is derived from subglacial erosion beneath the Antarctic ice sheet. We’d only expect to see Be-10 if any of the quartz had been exposed at the surface in some part of Antarctica that was ice-free and above sea level during the past few million years. There’s already a lot of very good evidence, including marine oxygen isotope evidence for the continued existence of the Antarctic ice sheets for more than 30 million years, the stratigraphic record on land in Antarctica, and the fact that parts of Antarctica that probably did deglaciate in past warm-climate periods are below sea level so would still be covered by water anyway, that there haven’t been any such ice-free areas. So we don’t expect a priori to find significant amounts of Be-10 in Antarctic subglacial sediments. The paper, however, very strongly highlights the significance of not finding anything as additional evidence that the East Antarctic Ice Sheet has been around for a while.

What you won’t find in this paper is the fact that it’s the second time this project has been done. In 2009, Dan Morgan was a graduate student at the University of Washington working with his Ph.D. advisor Jaakko Putkonen on applications of cosmogenic-nuclide geochemistry to measuring erosion and sediment transport rates in the Antarctic Dry Valleys, and Dan and Jaakko came up with the idea of looking for cosmic-ray-produced nuclides in the then-recently-drilled ANDRILL AND-1B core. The Ferrar Glacier, which drains part of the upper Dry Valleys, supplies sediment to the general vicinity of the core site, and their idea was that the marine sediment section in McMurdo Sound might contain some record of past export of sediment from the Dry Valleys. Surface sediment in the DV is mostly derived from surfaces with very low erosion rates and has extremely high cosmogenic-nuclide concentrations, and the idea was that a small amount of this material should be detectable even if it was diluted by a lot of Be-10-free debris from subglacial erosion of fresh rock. So Dan and Jaakko asked for and received some samples from the core as well as seed funding from PRIME Lab for a few trial AMS measurements. Dan then did essentially exactly the same thing that was done in the Shakun paper, the main exception being that he used smaller samples that yielded less quartz. Here are Dan’s results from 2009 (kindly provided by Dan at my request) on the bottom, with the results from the Nature paper (specifically, Figure 3a in the paper) on top.

What’s being shown here is a comparison between the amount of Be-10 present in process blanks (processed in exactly the same way as the samples, only with no sample) and samples. In both figures, the amounts of Be-10 in the blanks are shown in gray, and the amounts in the samples are shown in colors: all are represented by Gaussian uncertainties. The core depths in the Morgan samples are equivalent to samples A-D in the Shakun paper; E-H are deeper in the core. The main difference here is that there are a lot more blanks represented in the Shakun data set, and they are mostly lower, centered around 8000-10000 atoms Be-10 rather than the 15000-20000 in Dan’s results. The University of Vermont cosmogenic-nuclide lab, where the Shakun measurements were done, has done an enormous amount of work in the last ten years to make their Be-10 blanks very low and very stable, and it’s worked. A secondary difference is that AMS performance has gotten a bit better in the last ten years as well, so the later measurements are a bit more individually precise. So the amounts of Be-10 observed in the samples are similar in the aggregate between the two studies, but the lower blanks shown in the upper figure have the consequence that one can, in fact, conclude at good confidence that some of the samples in the Shakun paper have Be-10 present at levels above blank, whereas for the Morgan data the samples are all indistinguishable from blank. But the overall implication of each data set — that forms the core argument of the paper — is very similar: there is very little Be-10 in quartz in the AND-1B core.

Dan also in 2009-10 sent me splits of two of his samples — from 61 and 81 m depth in the core — for Ne-21 measurements at BGC. This is potentially interesting because Ne-21 is stable, so material that might have been exposed at the surface much longer ago than could be recorded by Be-10 would still retain cosmogenic Ne-21. It’s a lot more difficult to detect small amounts of cosmogenic Ne-21, because there is always a nonzero amount of non-cosmogenic Ne-21 produced by decay of naturally occurring U and Th, and it is difficult to separate cosmogenic from U/Th-derived inventories with the precision needed to detect small amounts of surface exposure. Essentially the null hypothesis is that if no cosmogenic Ne-21 is present, and given a particular neon closure age for the sediment source rocks (which should be effectively constant on million-year time scales in the core if the source area of the sediments is not highly variable) then the Ne-21 concentration should just be a constant multiple of the U/Th concentration. The following figure shows that this is, in fact, the case for two samples (here the U and Th concentrations are expressed as eU, which approximates total alpha particle production):

You can’t exclude cosmogenic Ne-21 with much confidence from only two not-particularly-precise measurements, so this is a bit of a sideshow to the Be-10 data (although an interesting one if you are a noble gas geochemist). But the point is that Ne-21 is completely accounted for by production from U and Th, which again tends to indicate that the quartz is not derived from surfaces that were ever ice-free for very long.
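
The null hypothesis here is essentially just a proportionality check. The following is a minimal MATLAB sketch with invented numbers, using the common eU = U + 0.235*Th approximation for relative alpha productivity (which may not be exactly the weighting used for the figure).

```matlab
% Sketch: if Ne-21 is entirely nucleogenic, the ratio of excess Ne-21 to eU
% should be constant (for a constant neon closure age). Numbers are invented.
U    = [2.1 3.4];          % hypothetical U concentrations (ppm)
Th   = [8.0 12.5];         % hypothetical Th concentrations (ppm)
Ne21 = [4.0e6 6.2e6];      % hypothetical excess Ne-21 (atoms/g)
eU = U + 0.235.*Th;        % effective uranium, approximating total alpha production
k = Ne21./eU               % roughly constant -> no cosmogenic Ne-21 required
```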

Dan’s measurements were never published. I am speculating a bit as to why because I was not closely involved, but I believe the thinking was that there was no expectation that any Be-10 would be found: if there was some, that would be a story, but no Be-10 equals no story. In the Shakun paper, it is the opposite. No Be-10 is a big story. As an aside, I am concerned that the paper oversells the significance of this result, because possible Antarctic ice sheet change scenarios that are hypothesized to have occurred in Plio-Pleistocene warm periods and are relevant in light of near-term climate change involve deglaciation of marine embayments where the bed of the ice sheet is below sea level. If this happened during past warm periods it would not be recorded by in-situ-produced Be-10 in quartz. So this proxy record has zero sensitivity to the type of deglaciation event that is most concerning and most plausible in the near future. But the main point that I want to make here is that, in contrast to probably too much Earth science research, we know this result is replicable, because it’s been replicated. Although the Economist would probably argue that the funding incentive structure is such that the Shakun work would never have been funded if the Morgan results had been published, it is unfortunate that there is no way to know from the present paper that this work has been done twice, and I think Dan should get some credit for having tried this nine years ago.


* Trying to make this title short and punchy also makes it incorrect. It should refer to in-situ-produced Be-10 in quartz, rather than just Be-10. In fact, some Antarctic subglacial sediment has lots of Be-10, it’s just not in-situ-produced in quartz.

Putting the O in OCTOPUS

April 13, 2018

This post is about the recently developed “OCTOPUS” database of cosmogenic-nuclide erosion rate data. During the last year, folks involved in cosmogenic-nuclide erosion-rate measurements began to receive emails like this:

Dear [researcher],

I hope this email finds you well! You are receiving this email because you and coauthors published catchment-averaged 10Be / 26Al data within the last years. In doing so, you contributed to the number of more than 3500 in situ detrital catchment-averaged 10Be data that are available to date, forming a highly variable, statistically resilient dataset that represents substantial effort of both capital and labour. However, published data are often still inaccessible to researchers, are frequently subject to lacking crucial information, and are commonly different in underlying calculation and standardisation algorithms. Resulting data disharmony has confounded the purposeful (re)use of published 10Be-derived data, for instance for inter-study comparison, for looking at the greater picture of Earth surface’s downwearing or for innovative data re-evaluation.

Going online in June 2017, OCTOPUS database will do away with these problems on global level by providing and maintaining a freely available, fully harmonized, and easy to use compilation of catchment-averaged 10Be data. The project has obtained A$176,000 funding from the Australian National Data Service (ANDS) to build the infrastructure for hosting and maintaining the data at the University of Wollongong (UOW) and making this available to the research community via an OGC compliant Web Map Service.

However, this email is not just to inform you about OCTOPUS, but also to ask for your valued contribution to the project. To reconcile our compiled data and GIS derivates with your original files, we would highly appreciate if you would provide us with (a) drainage point shape file(s), and more important (b) catchment polygon files of your above named publication(s) within the next two weeks. Cross-checking your original files with our reproduced data will be an important part of our quality management.

I thought, OK, that’s interesting, databases are good, but lots of people come up with database projects that sound good to funding agencies but never accomplish much, and in addition I’ll be very surprised if anyone complies with the demand for shapefiles. Then a couple of weeks ago (that is, several months after June 2017) I got this via a listserv:

Dear Friends

A few days ago we have made available online a new open and global database of cosmogenic radionuclide and luminescence measurements in fluvial sediment (OCTOPUS).

With support from the Australian National Data Service (ANDS) we have built infrastructure for hosting and maintaining the data at the University of Wollongong and making this available to the research community via an OGC compliant Web Map Service.

The cosmogenic radionuclide (CRN) part of the database consists of Be-10 and Al-26 measurements in fluvial sediment samples along with ancillary geospatial vector and raster layers, including sample site, basin outline, digital elevation model, gradient raster, flow direction and flow accumulation rasters, atmospheric pressure raster, and CRN production scaling and topographic shielding factor rasters. The database also includes a comprehensive metadata and all necessary information and input files for the recalculation of denudation rates using CAIRN, an open source program for calculating basin-wide denudation rates from Be-10 and Al-26 data.

OCTOPUS can be accessed at: https://earth.uow.edu.au.

Great. Time to check this out and see what it is.

But first of all, to get one thing out of the way, this is proof that Earth science has now reached the acronym-pocalypse — in which it has become so critically important to have a fancy acronym that all rules relating the acronym to the words being abbreviated have been thrown away, and you can do whatever you want.  It is true that even though we hope the responsible parties will be experts in geochemistry, we don’t necessarily need them to be wise in the use of the English language — but, frankly, this is pretty bad. While all the letters in “OCTOPUS” do occur in the full title “an Open Cosmogenic isoTOPe and lUmineScence database”, the total lack of relation between word beginnings and acronym elements makes the reader wonder why it could not equally well have been called “MESS” (‘open cosMogEnic iSotope and luminescence databaSe’) or “POOP” (‘oPen cOsmOgenic isotoPe and luminescence database’), because both “MESS” and “POOP” are just as closely related to geomorphology and/or cosmogenic nuclide geochemistry as “OCTOPUS.”  Or, perhaps, in light of the discussion below, it could be “NODATA” (‘opeN cOsmogenic isotope and luminescence DATAbase’).

So now that that’s out of the way, let’s proceed to actually looking at the database. As noted above and as also evident from the fact that I’ve developed three other online databases of cosmogenic-nuclide data (see here), I think online databases are great. The current state of the art in cosmogenic-nuclide geochemistry — in which everyone interested in synoptic analysis of exposure ages or erosion rates must maintain tedious, unwieldy, and mutually inconsistent spreadsheets that require constant updating — is a huge waste of everyone’s time and is very effectively calculated to maximize extra work and errors. It would be much better if there were an online repository of generally-agreed-upon raw observations from previous studies, so that everyone interested in synoptic analysis is referring to the same data set. So from this perspective, let’s get over the embarrassing acronym choice and begin looking at the website with a positive attitude.

Right. Upon locating the website, we’re presented with (i) a very large area devoted to advertising the University of Wollongong, surrounding (ii) a Google map of the world, with (iii) a layer selection panel.

Selecting an example layer displays river basins on the map that are presumably associated with cosmogenic-nuclide data:

Clicking on a random basin in South Africa brings up a window displaying some information about this basin:

Great. But now we have a problem. Incomprehensibly, nothing in this information window is a live link. One of the most basic questions that a user wants to ask in this situation is whether the erosion-rate estimate that’s presented for this basin agrees with what one might calculate using a different scaling method or a different online erosion rate calculator. To facilitate this, in this situation I would expect that, for example, the ID number of the basin would be a link to a page displaying all known data for that basin: location, Be-10 concentrations and standardization, etc. and so on, hopefully formatted in such a way as to allow for easy recalculation. However, no such thing is present. This is a serious problem — why is there no access to the actual source data?

Fortunately, a bit of experimenting reveals that making the window bigger causes a “download” link to appear:

This seems like a bit of a coding error — obviously the existence of the link shouldn’t be dependent on the window size — but whatever. A bit more experimentation shows that one can select an area within which data are to be downloaded:

But now we have a serious problem. I can’t actually get any data without providing a name and email address. Full stop.

OK, so far we have what is advertised as “Open” and a “Database”, but isn’t either. It’s not open — anonymous download is not possible — and so far I haven’t actually seen any data, just a picture of what locations are purported to have some associated data. I haven’t so far entered anything in the name and email boxes and pressed ‘request download,’ so I don’t know whether (i) the web server will just record the information and send me the data, or whether (ii) it starts an approval process where the developers have to decide whether or not I’m worthy of the privilege. Let’s consider both options, though. If (i) is the case and I can just enter “Mark Zuckerberg” under “name” and “mzuckerberg@facebook.com” as the email address, then there is absolutely no point to interrupting the user’s interaction with the website to collect valueless information, except to give the user the cheap, queasy, and frustrated feeling, familiar to everyone from their interaction with many commercial websites, that he or she is not the customer but the product. If (ii) is the case and the site developers must decide if I am a suitable person to whom to entrust data which have, of course, already been published in publicly available scientific literature, then this entire project is the opposite of “open.” Either option is pretty bad, and I’m extremely disappointed to see that the developers here don’t seem to have the strength of character to stand behind the use of the word “open” in the title. Not impressive.

In addition, requiring identity information to access data is going to make it rather difficult to really have anonymous peer review of the accompanying paper. Not great if the anonymous peer reviewers have to log in as themselves to see the thing they are supposed to be reviewing.

Fortunately there are some clues to how we can get around this problem. The email quoted above that announces the existence of the database makes reference to OGC-compliant web services — these are, essentially, links that GIS or mapping software can use to access online data dynamically rather than having to download a local copy of the data – and, presumably, this is what is being referenced by the embedded Google map on the web page. Unfortunately, the website itself makes zero reference as to whether or not these web services exist or how to access them. So clearly a bit of investigation is needed. Investigating the Javascript source code associated with the embedded map shows us:

which indicates that the Google map is, in fact, getting data from a OGC-style web map service (wms) at the address https://earth.uow.edu.au/geoserver/wms. A “web map service” is intended to serve raster data — digital elevation models, imagery, etc., which is actually not what we want if we are interested in attribute-type data associated with point or vector features. The equivalent service for that is a “web feature service,” and if such a thing is also running on the CTOPUS web server (if it’s not “Open,” we can’t really call it OCTOPUS any more) we ought to be able to find out via the following URL:

https://earth.uow.edu.au/geoserver/wfs?request=GetCapabilities

Obtaining that URL with a browser (try it) results in a large XML object that indicates that yes, in fact, a web feature service is operating at that address:

This contains a couple of useful bits of information. One, information about what feature layers are available. The following screen shot shows information (<FeatureType> elements) for two layers, which correspond to those on the layer selection box available via the website:

Two, information about what output formats are available:

Using this information, we can form a URL asking the web feature server to supply all data from the ‘crn_aus_basins’ layer as a KML file:

https://earth.uow.edu.au/geoserver/wfs?request=GetFeature&typename=be10-denude:crn_aus_basins&outputformat=KML

Using a browser to download resulting data from that URL does, in fact, yield a (fairly large) KML file containing basin outlines and a complete data dump for each element, including the critical observations such as Be-10 concentrations and standardizations. Here it is when viewed in a local copy of Google Earth:
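
The same download can also be scripted rather than done in a browser. Here is a minimal MATLAB sketch; it assumes the WFS endpoint and layer name identified above are still valid, and it has no error handling.

```matlab
% Sketch: fetch the 'crn_aus_basins' layer as KML from the OCTOPUS WFS endpoint.
url = ['https://earth.uow.edu.au/geoserver/wfs?request=GetFeature' ...
       '&typename=be10-denude:crn_aus_basins&outputformat=KML'];
websave('crn_aus_basins.kml', url);   % writes the KML to the current directory
```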

So it turns out that it is, in fact, possible to anonymously obtain cosmogenic-nuclide data from OCTOPUS. This is quite valuable: a WFS implementation of this data set is potentially really useful for all sorts of applications, and is a major step toward fulfilling the basic goal of doing better than the current state of the art of mutually-inconsistent-spreadsheets. And we can put the “O” back in. But that was kind of a pain in the neck, it required encouraging the server to display capabilities that the website developers apparently did not want me to know about, and it was much more complicated than it had to be. Wouldn’t it be easier simply to provide direct links to the raw data, as well as the data formatted in such a way as to be easily entered into other online erosion rate calculators, on the web site?

So I’ll try to summarize here. Placing this data set online is potentially very useful. The WFS server implementation — once I located it — is enormously better than the primary existing competition in this field, which is the unwieldy spreadsheet distributed as an appendix to an article by Eric Portenga. But in my view there are several serious problems here.

One is that of credibility. A critical element of having a useful online source of these data is that it is generally believed to be a complete and accurate representation of the raw observations — locations, elevations, nuclide concentrations, etc. — and it’s hard for users to believe this if they can’t easily examine the data. The ICE-D exposure age databases have direct access through the web interface to all raw data for this reason. If you can’t examine all the data in a granular way, you start asking why you’re not being allowed to do this, and this makes it harder to believe that the data are really accurate and believable. This is a basic principle of having a data source, or anything else, become trusted and credible: if Ronald Reagan hadn’t been able to trust but also verify, the Cold War would still be in progress.

The second is just the message that is being sent to users. The really useful part of this database implementation — the web feature service that makes it possible to access these data through desktop GIS systems that one would use for data analysis, figure preparation, etc. — is completely hidden from users. One would think that this service would be the thing that is most highlighted and most advertised by the developers, because it’s the most useful and the most potentially transformative in terms of synoptic analysis of these data. It’s really a contribution to science.  This is a good thing. But instead the developers have hidden this and made it as difficult as possible to view, evaluate, or obtain source data. To say the least, this is puzzling, and, frankly, it leads the observer to uncharitable hypotheses about the motivation of the website developers. Are they trying to use this website to keep tabs on competitors who might carry out and publish synoptic research on global erosion rate distribution before they do? Is the key function of the database not so much to be useful to the overall research community as to improve the citation metrics of the authors? Not knowing any of the developers personally, I have no idea whether these hypotheses are anywhere near the truth, and, having developed similar websites, I sympathize with the feeling of having done a lot of work that other people are going to take advantage of for free, and the feeling of wanting to keep control of what I just developed. However, according to the initial email describing this project, the developers are not, in fact, giving this away for free, but rather for $176K in public funding.  Under these circumstances, the lack of openness built into this website could not do otherwise but make me suspicious of the developers’ motivations. This is not a good thing.

To summarize, it’s either an open database or it’s not. Right now it’s not: open means complete, anonymous, granular access to the raw data. We don’t have that. On one hand, this project has the potential to be fundamentally useful in enabling synoptic research into how erosion works globally. That’s good. On the other hand, the developers have inexplicably designed the website to make access to the data harder instead of easier. That’s bad, and why they have done this is a total mystery. But this project is going to remain just potentially transformative, instead of actually transformative, until they fix it.


Upside-down bananas

March 30, 2017

If you are reading this, you are probably familiar with the two-nuclide diagram commonly used to represent paired Be-10 and Al-26 data:

This example is from a review article by Darryl Granger from 2006 (in GSA Special Paper 415) that gives a good description of what the diagram is and how it is supposed to work. Basically, the diagram shows Be-10 concentration on the x-axis and the Al-26/Be-10 ratio on the y-axis, although commonly both quantities are shown in normalized units computed by dividing observed concentrations by the respective production rates at the sample site – this normalization allows plotting data from different sites, with different production rates, on the same axis. The basic concept here is that if your sample stays at the surface and experiences steady exposure with or without erosion, nuclide concentrations are confined to the “simple exposure region” highlighted with dark lines in the above figure. In certain manifestations of this diagram (primarily when plotted with a log x-axis and a linear y-axis), the simple exposure region vaguely resembles a banana, for example:

This resemblance, perhaps unfortunately, has resulted in the common use of the term “banana diagram.”  Then the important aspect of this diagram is that if the sample gets buried after a period of surface exposure, both Al-26 and Be-10 concentrations decrease due to radioactive decay, and Al-26 decreases faster than Be-10. This is represented by a trajectory that goes down and to the left, as shown above in the Granger example. So samples that are “below the banana” have experienced both a period of exposure and a period of burial. Here’s an example:

The lines are contours of burial time in Myr. So these samples have been buried for 2.5 Myr. This is the foundation of the method of cosmogenic-nuclide burial dating.

So, basically, in the Al-26/Be-10 two-nuclide diagram (let’s not use “banana diagram” any more…historically, it probably should be called a “Lal-Klein-Nishiizumi diagram,” although that is a bit cumbersome), exposure goes to the right and burial goes down. Easy. The problem arises when other nuclides are involved. Here is an example of a Be-10/Ne-21 two-nuclide diagram from one of my papers:

Here I have put Ne-21 (the longer-lived nuclide) on the x-axis and the Be-10/Ne-21 ratio on the y-axis. So, again, exposure goes to the right and burial goes down. On the other hand, here is a Ne-21/Be-10 diagram from a very cool paper by Florian Kober and Vasily Alfimov:

This figure has a lot of data in it that are beside the point from the perspective of this post, but the point is that it has the opposite axes: Be-10 concentration on the x-axis and Ne-21/Be-10 ratio on the y-axis. Thus, exposure still goes to the right (at least for a while), but burial goes UP. Not down. Not what we expect from our previous experience with the Al-26/Be-10 diagram. Very weird.

At present, the choice of axes in two-nuclide diagrams involving Ne-21 in the literature appears to reflect your position in relation to the Atlantic Ocean. US researchers generally plot [Ne-21] vs. Be-10/Ne-21, whereas Europeans plot [Be-10] vs. Ne-21/Be-10. Although I have not made a systematic historiographic study of this phenomenon, I believe that the European style is largely just due to the fact that the “CosmoCalc” software put together by Pieter Vermeesch does it this way.

So the question here is which one to use. I have a pretty clear opinion on this. I think that no matter what nuclides are involved, you should always do it the same way as is commonly done for Al-26/Be-10 diagrams, so that burial goes down. Nearly all the two-nuclide diagrams in the existing literature involve the normal implementation of the Al-26/Be-10 diagram, so anyone familiar with this literature expects exposure to go to the right on a two-nuclide diagram, and burial to go down. I think inverting the diagram so that burial goes up just confuses readers. Why confuse readers when you don’t have to? Thus, I advocate always plotting the longer-lived nuclide of the pair on the x-axis, and the ratio of the shorter-lived to longer-lived nuclide on the y-axis. Then burial always goes down, as we expect. Of course, I am in the US, but I am not just cheering for my own team here. It really does make more sense for two-nuclide diagrams to always behave the same way no matter what nuclide pair is involved. I’ve done it this way in the version 3 online exposure age calculator, which will generate two-nuclide diagrams for all combinations of Ne-21, Be-10, Al-26, and C-14 in quartz, and also in the ICE-D database which makes use of the v3 calculator as a back end.
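
As an illustration of that convention, here is a minimal MATLAB sketch for a single hypothetical Ne-21/Be-10 pair. All the numbers are invented, and a real figure would also need the simple-exposure region, burial isochrons, and uncertainty ellipses.

```matlab
% Sketch: plot a Ne-21/Be-10 pair with the longer-lived nuclide on the x-axis,
% so that exposure goes right and burial goes down. Numbers are invented.
N21 = 5.0e7; P21 = 80;     % hypothetical Ne-21 concentration (atoms/g) and site production rate (atoms/g/yr)
N10 = 2.0e6; P10 = 4;      % hypothetical Be-10 concentration and site production rate
x = N21./P21;              % normalized Ne-21 concentration (yr)
y = (N10./P10)./x;         % normalized Be-10/Ne-21 ratio
semilogx(x, y, 'ko');      % log x-axis, linear y-axis
xlabel('[Ne-21]*'); ylabel('[Be-10]*/[Ne-21]*');
```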

Summary: Draw two-nuclide diagrams so that burial always goes the same way. Why confuse people?


Is a cheap GPS OK for elevation measurements, or do you need a fancy one?

March 28, 2017

This post is about elevation measurements for exposure-dating samples, and how accurate they need to be. Basically, the main thing that controls cosmogenic-nuclide production rates is site elevation, or, more precisely, atmospheric pressure — at higher elevation, there is less atmosphere between you and the extraterrestrial cosmic-ray flux, so the production rate is higher. Thus, to compute the cosmogenic-nuclide production rate at a sample site, the first thing we need to know is the elevation. Once we know the elevation, we can convert it to a mean atmospheric pressure using a model for how the atmospheric pressure varies with elevation, and then compute the production rate.

Note that there are two places to potentially screw up here. The second one — converting an elevation to a mean atmospheric pressure during the exposure duration of the sample — is actually a fairly complicated problem and is the subject of another post, as well as a fairly large number of papers. However, the first one — accurately measuring the elevation — ought to be pretty simple. In general, determining your elevation is a fairly well-established technology that people have been working on for centuries. So even if we can’t be completely sure that we’ve done the elevation-atmospheric pressure conversion very precisely, we ought to be able to be fairly confident that we’re not making things worse by also making inaccurate elevation measurements. So the rest of this post covers (i) exactly how precise we need elevation measurements to be, and (ii) various ways to accomplish or not accomplish that goal.

So how precise do we need elevation measurements to be? The basic rule of thumb is that 10 m of elevation change is approximately a 1% change in the production rate, but this is a bit variable with elevation. Specifically, if we choose, for example, the production rate scaling method and Antarctic atmosphere model from Stone, 2000 (other scaling methods and atmosphere models are equivalent for all practical purposes from this perspective), here is how much effect a 10 m change in elevation has on the production rate:

It’s about 1% per 10 m at sea level and less at higher elevation. What this means is that accurate elevation measurement is actually fairly important. In most cases, the total uncertainty in estimating production rates for purposes of exposure-dating is around 5% or 6%, and 1% is a decent-sized fraction of that. If a 10-m uncertainty in measuring elevation adds another 1% uncertainty to your production rate estimate, that certainly seems like something to try to avoid. Basically, the point of all this is that we would like to be able to measure elevations with better than 10 m precision. Preferably quite a lot better.
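As a rough cross-check of that rule of thumb (this is not the Stone, 2000, implementation itself), here is a sketch using the ICAO standard atmosphere and a simple exponential dependence of the production rate on atmospheric depth, with an assumed attenuation length of 150 g/cm2. It gives a bit under 1% per 10 m at sea level, decreasing with elevation, in line with the figure above.

```python
import numpy as np

def pressure_hPa(z):
    # ICAO standard atmosphere: P = P0 * (1 - L*z/T0)^(g*M/(R*L))
    P0, T0, L = 1013.25, 288.15, 0.0065            # hPa, K, K/m
    gMRL = 9.80665 * 0.0289644 / (8.31447 * L)     # dimensionless exponent, ~5.26
    return P0 * (1.0 - L * z / T0) ** gMRL

def rel_production_rate(z, Lambda=150.0):
    # Production rate relative to sea level, assuming a simple exponential
    # dependence on atmospheric depth with attenuation length Lambda (g/cm^2).
    # Convert pressure (hPa) to mass depth (g/cm^2): 1 hPa is about 1.019 g/cm^2.
    depth = pressure_hPa(z) * 1.019
    depth_sl = pressure_hPa(0.0) * 1.019
    return np.exp((depth_sl - depth) / Lambda)

for z in [0.0, 1000.0, 2000.0, 3000.0]:
    pct_per_10m = 100.0 * (rel_production_rate(z + 10.0) / rel_production_rate(z) - 1.0)
    print(f"{z:6.0f} m: ~{pct_per_10m:.2f}% per 10 m of elevation change")
```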

In the modern era, the default and simplest way to measure your elevation is with the same inexpensive handheld GPS unit (probably a Garmin that you bought a few years ago for about $150) that you are already using to record the position of your sample. Most handheld GPS units indicate their estimated horizontal positioning accuracy in real time, and in most situations it’s generally somewhere in the range of 3-6 m. Note, however, that this level of accuracy dates back only to approximately the year 2000. Prior to that time, the GPS satellite network, which of course was operated by the U.S. Department of Defense, included a feature with the Orwellian name of “Selective Availability” that intentionally degraded GPS performance for non-military users. So handheld GPS data from that era are much less accurate, with horizontal precision in the tens-of-meters range.

Unfortunately, in nearly all cases the vertical accuracy of a GPS position is not nearly as good as its horizontal accuracy. This is just because GPS satellites are mostly not right over your head, but well away from the zenith at shallow angles toward the horizon. In this geometry your vertical position is more sensitive to range errors than your horizontal position. A rough rule of thumb is that vertical position errors are usually about twice horizontal ones, so if your handheld GPS is indicating 3-6 m horizontal precision, that is something like 6-12 m vertical precision. Here is an example from data that I collected during a couple of exposure-dating projects in Antarctica. These figures show histograms of differences between elevations measured with a Garmin consumer handheld GPS ($170; an older version of this) and elevations of the same sites measured with a pair of fancy, differentially-corrected, post-processed, dual-frequency Trimble survey GPS units (probably $40K, but should have decimeter precision).

If we throw out the obvious wild outliers in both of these data sets (more about them later), these residuals are approximately normally distributed with a standard deviation of 7 m, which is actually a little better than we expect from the twice-horizontal-precision rule of thumb. So in general, this is pretty good: with our $170 low-hassle solution we can keep production rate inaccuracy due to errors in elevation measurements below 1%.

There are, however, a couple of other important points about these data. One, both data sets show an offset in which the handheld measurement is systematically approx. 4 m above the DGPS measurement. The two sets were collected in different years, so we can’t ascribe this to unusual satellite availability. I suspect it may reflect an error on my part in converting between ellipsoid and geoid heights (more about this later) for the two sets of measurements: I am not an expert in GPS data reduction, and this offset may be how you can tell. Alternatively, it could have something to do with the handheld GPS unit using a less sophisticated atmospheric correction than the one applied in the DGPS post-processing.

Two, there are some pretty serious outliers in the 20-30 m range — sometimes the measurement is very wrong. Of course, you expect some of these from any random process, but I suspect that what happened here is just that I didn’t leave the handheld GPS turned on for long enough to collect an accurate position. These data are from Antarctica, which is (i) remote, and (ii) cold. Item (i) means you are highly motivated to conserve battery power because you have only a finite supply of AA batteries in your camp, and item (ii) means you are highly motivated to complete the measurement quickly and put your mittens back on. Both factors would tend to favor not leaving the unit turned on long enough to collect sufficient data. But these are significant outliers that could affect an exposure age by 3%. That’s nearly 1/3 of the Younger Dryas.

In my opinion, the summary of all this is that modern inexpensive handheld GPS units are, basically, just barely good enough for measuring elevations of exposure-dating samples. Assuming I am right that the outliers in the data set above are just due to my carelessness, if you are careful, you should be able to keep production rate uncertainties stemming from this source to approximately 0.5-0.7%. That’s not bad.

The next question, then, is how to do better. There are several strategies for this. One is just to buy a more expensive dual-frequency GPS that can be differentially corrected. As Trimble has a near-monopoly on this market, the first issue here is that this is expensive. Still, a suitable unit can usually be borrowed from most university geoscience departments. However, this solution creates a number of new problems. One is weight — if your fieldwork is foot-supported, you have just added a lot of weight to your pack. Another is power — most of these units require daily recharging (or heavy gel-cells). Again, a potential problem for remote fieldwork. A third potential problem is that if you are operating someplace very remote (like Antarctica, again), you will not be within range of stationary GPS stations that can be used for differential correction, so you will need to not only carry a rover unit with you to the sample sites, but also establish a temporary base station at your camp with a second unit. A fourth is that you need to learn enough about GPS data processing to not make errors in the processing that are much worse than you would have made with the inexpensive unit. This is a serious consideration. A final problem is that collecting enough data with a Trimble DGPS unit to eventually yield submeter vertical precision takes a while — generally on the order of 30 minutes per site. Doing this at each site would seriously limit the number of samples that could be collected in a day.

A completely different strategy is to measure sample elevations using a barometer or barometric altimeter. This is very old-school and, at this point, kind of in the realm of retro hipster hobbies, but it has a couple of big advantages, mainly that barometers designed for this purpose are extremely sensitive (they can register less than 1 m of elevation change) and quite lightweight. Some don’t even require batteries. The problem, of course, is that barometric pressure is constantly changing independently of elevation due to meteorological effects, so collecting good barometric elevation data requires care and thought, and, in some cases, a stationary base-station barometer as well. However, if you are working in any area of the continental US that is covered by a USGS 7.5′ topographic map, you can very accurately estimate the elevation of your sample sites by barometric traverse between sample sites and elevation benchmarks or survey points noted on the map. Of course, in that case you will usually find that you could have just plotted your horizontal GPS position on the map and read off the contour elevation to get the same result.

Leaving aside the special case where you have a USGS map (or, even better, a SwissTopo map so detailed that your individual boulder is practically already marked), you can sometimes significantly improve on the 7-m accuracy of handheld units, without incurring serious pack weight problems, by establishing temporary benchmarks with a Trimble DGPS and then using a barometric altimeter to survey elevations of sample sites relative to those temporary benchmarks. I’ve used this strategy for many years in Antarctica and found it to yield total vertical positioning uncertainty somewhere in the 1.5-2.5 m range, depending on how closely spaced the benchmarks and sample sites are. This strategy does require a fancy GPS unit, but it is fast (because you only need to do a couple of 30-min occupations per day) and light (because you can minimize carrying around the heavy antenna and batteries). This is generally my preferred approach when possible: it’s not a prohibitive amount of work, but it reduces production rate errors associated with elevation measurements to essentially negligible levels.
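For what it’s worth, the data reduction in this approach is simple. Here is a minimal sketch (the function name and the numbers are made up for illustration) of turning the pressure difference between a DGPS-surveyed benchmark and a sample site into an elevation via the hypsometric equation. It assumes you have already dealt with the time variation of pressure, e.g. with a base-station record or by re-occupying the benchmark.

```python
import math

def baro_elevation(p_site_hPa, p_bench_hPa, z_bench_m, T_K=268.0):
    # Hypsometric equation: dz = (Rd * T / g) * ln(p_bench / p_site).
    # T_K is a representative air temperature for the layer between the
    # benchmark and the site; 268 K is just an assumed Antarctic-ish value.
    Rd, g = 287.05, 9.80665   # J/(kg K), m/s^2
    return z_bench_m + (Rd * T_K / g) * math.log(p_bench_hPa / p_site_hPa)

# Made-up example: a benchmark surveyed by DGPS at 1250.0 m, and a sample site
# where the altimeter reads 6.2 hPa lower than it does at the benchmark.
print(baro_elevation(846.8, 853.0, 1250.0))   # roughly 57 m above the benchmark, ~1307 m
```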

Summarizing the discussion up to now, a decent-quality handheld GPS is just barely good enough for exposure dating in that errors in elevations measured using this method transfer to 0.5-0.7% errors in production rate estimates. Note that I’m pretty sure your iPhone is substantially worse, just due to the antenna size and geometry, but I have not attempted to verify that. If you can, though, you should do better, either with a more elaborate GPS unit or some combination of DGPS benchmarks and barometric survey.

There is, however, one final important point about GPS measurements of sample elevations. What we actually want for exposure dating is the elevation of a sample with respect to sea level. Elevation-atmospheric pressure models are indexed to the bottom of the atmosphere, which is sea level, so we need to know how far above sea level we are. However, GPS data reduction commonly works not in a sea-level reference frame but in an ellipsoidal reference frame, in which the Earth is represented by a simplified ellipsoid rather than the somewhat lumpy surface that approximates the actual surface of the ocean (generally referred to as the geoid). This is not the place to launch into a full discussion of the differences between ellipsoid and geoid elevations (good places for this are here or here), but the point is that they are different. In some parts of the Earth they are quite different: here is a map (from this site) showing the difference between sea level (in this case represented by the EGM2008 geoid model) and the WGS84 ellipsoid, which is commonly used in GPS data reduction:

The details are beyond the scope of this post, but the point is that a GPS elevation computed with respect to an ellipsoid is not a sea level elevation, and can be tens of meters different from the correct sea level elevation. I am pretty sure there are a lot of errors in the exposure-dating literature where ellipsoid heights have been incorrectly reported instead of sea level heights. Be careful.
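The correction itself is trivial once you know the geoid undulation at your site (available from published geoid model grids such as EGM2008): the sea-level (orthometric) height is the ellipsoidal height minus the undulation. A minimal sketch, with made-up numbers:

```python
def sea_level_elevation(ellipsoid_height_m, geoid_undulation_m):
    # Orthometric (approximately sea-level) height H = ellipsoidal height h
    # minus the geoid undulation N, where N is the height of the geoid
    # (e.g. EGM2008) above the WGS84 ellipsoid at the site.
    return ellipsoid_height_m - geoid_undulation_m

# Made-up example: the GPS reports h = 1425.0 m (WGS84 ellipsoid) at a
# hypothetical site where the geoid sits 52.3 m below the ellipsoid (N = -52.3 m):
print(sea_level_elevation(1425.0, -52.3))   # 1477.3 m above sea level, a >50 m difference
```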