Dear Greg,
I am taking the liberty of writing to ask for some help. The online calculator you wrote is very convenient to use, but in order to understand it better, I am also trying to calculate the age myself from the corresponding formulas.
Unfortunately, I am having some trouble doing this. Would you mind helping me? Here are my questions.
1) The sea-level, high-latitude production rate: in the document “Al-26-Be-10 exposure age/erosion rate calculators: update from v.2.1 to v.2.2”, Table 1 shows the reference Be-10 production rate due to spallation. If I use the Be-10 standard NIST_Certified, how should I modify the production rate? Should I use 4.49, and just multiply the concentration by the coefficient 1.0425?
2) The percent production by muons at sea level: is it 2.2% (i.e., 1 − 0.978)?
3) Erosion rate: the website says the erosion rate must be inferred from independent evidence. If I have both the Be-10 and Al-26 concentrations, can I obtain the erosion rate by setting age(10Be) = age(26Al)?
4) With Al-26/Be-10: if the measured Al-26/Be-10 ratio is bigger than 6.1, how should that be explained?
5) What is the difference between the “time-dependent” and the plain “scaling scheme for spallation”? Is the former based on the latter, with a geomagnetic correction added?
6) How do I calculate the internal uncertainty?
I have been looking at the Geometric Shielding Calculator that you wrote, and I wonder whether it is specific to 10Be and 26Al or whether it also works with 36Cl.
The geometric shielding calculator computes shielding by assuming a zenith angle distribution of the cosmic radiation suitable for high-energy neutrons. Thus, it applies to production of any nuclide directly by high-energy neutron spallation, or by lower-energy neutrons that are secondary products of the high-energy neutron flux. This includes Be-10, Al-26, and Cl-36 produced by spallation as well as Cl-36 produced by low-energy neutron capture (the latter requires some additional assumptions, but should be more or less correct). However, a shielding factor computed for spallogenic production does not apply to production by muons, because muons have a different zenith angle distribution. The online calculator does not apply any shielding factor to muon production when it calculates exposure ages.
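For readers who want to check numbers by hand, the standard approach (following, e.g., Dunne et al., 1999) treats the incoming spallogenic intensity as proportional to sin^m of the angle above the horizon, with m ≈ 2.3, and subtracts the blocked portion of the sky. The sketch below assumes that functional form and a horizon described as (azimuth width, inclination angle) pairs; the function name and input layout are mine, not taken from the actual Geometric Shielding Calculator:

```python
import math

def topographic_shielding(horizon, m=2.3):
    """Approximate spallogenic shielding factor from horizon obstructions.

    horizon: list of (azimuth_width_deg, inclination_deg) pairs, each
    describing a slice of sky blocked by topography up to the given
    inclination angle. Assumes intensity varies as sin^m of the angle
    above the horizon, with m ~ 2.3; the blocked fraction for each slice
    is then (delta_phi / 360) * sin^(m+1)(inclination).
    """
    blocked = 0.0
    for dphi, theta in horizon:
        blocked += (dphi / 360.0) * math.sin(math.radians(theta)) ** (m + 1)
    return 1.0 - blocked

# An unobstructed horizon gives a shielding factor of 1; a ridge filling
# 90 degrees of azimuth up to 30 degrees inclination gives ~0.97.
flat = topographic_shielding([])
ridge = topographic_shielding([(90.0, 30.0)])
```

Note that, per the paragraph above, this factor is appropriate only for spallogenic production, not for production by muons.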
Hope that helped,
Here is a graphical tabulation of the relative frequency of words in Tibor Dunai’s recent textbook on cosmogenic-nuclide geochemistry (from this page). Pyroxene is more important than CRONUS. Muons and thermal neutrons are more important than nucleons. Quartz and Nat Lifton are equally important.
Another once-asked-question (OAQ) about the exposure age calculators [names and places anonymized...]:
Dear Greg: I am working with some CRN data from the [Misty] Mountains in [Middle-Earth], and have a question regarding the use of shielding factors in the CRONUS calculator. We believe that snow shielding may be an important factor in this relatively high elevation area (~1000 m asl), and wish to do some simulations of the effect of different snow depths. My first thought had been to do this via the shielding factor, but I note from your paper of 2008 that this factor is only applied to the spallogenic production rate, and not the production rate due to muons. Before I start modifying the Matlab code, I wanted to ask if you had any thoughts or recommendations. Perhaps work has already been done on this issue, and it would be useful to know of any potential pitfalls.
Dear [Frodo]: Yes, the shielding factor is not applied to production by muons. However, i) at elevations much above sea level, production by muons is very small relative to spallogenic production, and ii) the fall-off of muon production with depth is an order of magnitude slower than for spallation. Say, for example, that snow shielding reduces spallogenic production by 10%. The same snow will only have about a 1% effect on production by muons. Production by muons is about 2% of surface production at 1000 m. Thus, the inaccuracy created by ignoring muon shielding in this case is 1% of 2%, which is obviously very much smaller than the uncertainty in your snow cover correction and is insignificant for your purposes. In fact, it is nearly three orders of magnitude smaller than the production rate uncertainty. The summary is that you can just enter the snow correction you think appropriate and not worry about it further. I strongly suggest spending your time instead on looking for good production rate calibration sites…regards, –greg
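The arithmetic in the reply can be sketched in a few lines. The numbers (2% muon fraction at 1000 m, a 10% snow correction on spallation, and a ~1% effect on muons) come from the paragraph above; the function itself is purely illustrative:

```python
def total_production(muon_frac, snow_factor_spall, snow_factor_muon):
    """Relative surface production: spallation plus muons, each scaled by
    its own snow shielding factor."""
    return (1 - muon_frac) * snow_factor_spall + muon_frac * snow_factor_muon

# Calculator behavior: the snow correction is applied to spallation only.
p_calc = total_production(0.02, 0.90, 1.00)
# "True" case: muons attenuate ~10x more slowly, so the same snow
# reduces muon production by only ~1%.
p_true = total_production(0.02, 0.90, 0.99)

# Fractional error from ignoring muon shielding: about 0.02% (1% of 2%).
error = abs(p_calc - p_true) / p_true
```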
I’ve recently received a handful of queries as to how to interpret Be AMS results from the SUERC AMS facility so as to make sure they are properly standardized for use with the online exposure age calculators. This question is related to the issue of how the Be-10 half-life is related to the absolute isotope ratio of an AMS standard that I’ve discussed in this previous post and also this one.
Here is a screen grab of the header section of a SUERC results spreadsheet (with sample names blurred to protect the innocent).
After the column for the sample name, this shows the following data.
First, two columns headed “% of standard” and “± (% of standard)” (the second is the uncertainty on the first). These columns are the most basic description of the actual AMS measurement: remember, what the AMS measurement actually does is compare the Be-10/Be-9 ratio in a sample to the Be-10/Be-9 ratio in a standard. The first line of the file tells us that a sample called “NIST,” which is presumably the NIST “SRM 4325” Be standard material, has a Be-10/Be-9 ratio that is 100% of the standard, without uncertainty. The NIST standard material, as already discussed, is a stock of Be whose Be-10/Be-9 ratio is independently known. What this means, therefore, is that the NIST standard is the standard to which all the measurements are referenced. Subsequent lines then describe the relationship between the 10/9 ratio in each unknown sample and that in the standard. The 10/9 ratio in the first sample, for example, is 0.465% of the 10/9 ratio in the NIST standard. These lines have uncertainties in this relationship; the size of the uncertainty mostly depends on how many Be-10 atoms were actually counted. So these two columns, once again, are the actual data collected by the AMS — the relationship between the 10/9 ratio in a sample and that in a standard.
The overall goal of this exercise, of course, is to determine the actual Be-10/Be-9 ratio in your sample. So if we define R_sample to be the 10/9 ratio in an unknown sample, R_std to be the 10/9 ratio in the NIST standard, and f to be the ratio of 10/9 ratios that we have measured, then: i) we want to know R_sample, ii) R_sample = f × R_std, and iii) we have measured f, so iv) to compute the answer we want, we need a value for R_std.
As discussed in the previous post, the absolute isotope ratio of the NIST Be standard is based on two measurements: a measurement of the total amount of Be present, and a measurement of the activity, that is, the rate of radioactive decay, of the Be-10 present. Thus, a value for the Be-10 decay constant, or equivalently the Be-10 half-life, is required to compute this ratio. If we assume a value for the Be-10 half-life, we can compute the amount of Be-10 present in the standard material, we can then compute the absolute 10/9 ratio of the standard, and we can apply the measurement described above to compute the absolute 10/9 ratio for an unknown sample. This is what happens in the red, blue, and green columns above.
The red columns — columns 4 and 5 in the screen grab above — have the header line “10Be/9Be t(1/2)=1.53 Ma.” What this means is that a value of 1.53 Ma for the half-life of Be-10 was used to compute the amount of Be-10 present in the NIST standard, and thus its absolute 10/9 ratio. Given the Be-10 activity actually measured and the equations in the previous post, this yields an absolute 10/9 ratio of 3.06 x 10^-11 for the standard. Then to compute the absolute 10/9 ratio in the first sample, we apply the relationship described above: 3.06 x 10^-11 x 0.00465 = 1.42 x 10^-13. Again, if we assume that the NIST standard material has an absolute isotope ratio of 3.06 x 10^-11, which follows from its measured activity and the assumption that the Be-10 half-life is 1.53 Ma, then this sample has a 10/9 ratio of 1.42 x 10^-13. One would describe this measurement as being “normalized to the NIST standard with an assumed isotope ratio of 3.06 x 10^-11.” To use an atoms/g concentration calculated from this measured ratio in the online exposure age calculators, one would refer to the table of standardizations here and use the “NIST_30600” standardization.
The next two columns, colored green, are headed “10Be/9Be t(1/2) = 1.34 Ma.” If we assume that the Be-10 half-life is 1.34 Ma, which was the value originally assumed in preparation of the NIST standard, then the activity measurement implies a true 10/9 ratio for the standard of 2.68 x 10^-11. This is the “certified” ratio for the NIST standard. In this case, we would compute the true 10/9 ratio of the sample by 2.68 x 10^-11 x 0.00465 = 1.25 x 10^-13. So we started with the same AMS measurement, but by assuming a different value for the Be-10 half-life, we obtained a different true isotope ratio for the NIST standard, and thus a different true isotope ratio for the sample. This value for the true 10/9 ratio in the sample would be described as “normalized to the NIST standard with an assumed isotope ratio of 2.68 x 10^-11,” and the standardization code for the online calculators is “NIST_Certified.”
The final two columns, colored blue, are headed “10Be/9Be t(1/2) = 1.36 Ma.” If we assume that the Be-10 half-life is 1.36 Ma, which was a value estimated by Kuni Nishiizumi as a byproduct of creating the implantation standards described in this previous post, then the activity measurement implies a true 10/9 ratio for the standard of 2.79 x 10^-11. In this case, we would compute the true 10/9 ratio of the sample by 2.79 x 10^-11 x 0.00465 = 1.30 x 10^-13. This value for the true 10/9 ratio in the sample would be described as “normalized to the NIST standard with an assumed isotope ratio of 2.79 x 10^-11,” and the standardization code for the online calculators is “NIST_27900.”
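The red, green, and blue column pairs are all one measured number scaled three different ways. A quick sketch of that bookkeeping, using the numbers from the paragraphs above (the dictionary layout and variable names are mine):

```python
# Measured by the AMS: the sample's 10/9 ratio is 0.465% of the NIST
# standard's 10/9 ratio.
f = 0.00465

# Assumed absolute 10/9 ratio of the NIST standard under each half-life
# assumption (values from the spreadsheet columns discussed above).
standard_ratio = {
    "NIST_30600": 3.06e-11,      # red columns: t(1/2) = 1.53 Ma
    "NIST_Certified": 2.68e-11,  # green columns: t(1/2) = 1.34 Ma
    "NIST_27900": 2.79e-11,      # blue columns: t(1/2) = 1.36 Ma
}

# Sample ratio = measured fraction x assumed standard ratio.
sample_ratio = {name: r * f for name, r in standard_ratio.items()}
```

As long as the resulting concentration is entered into the online calculators with the matching standardization code, all three yield the same exposure age.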
To summarize, there is a lot of redundant information in this spreadsheet. The actual measurement that was made — a comparison of the 10/9 ratio in the sample with that of the NIST standard — is presented four different ways. Personally, I find this confusing. The important thing, however, is that these all describe the same measurement. Calculating atoms/g in your sample using the ratio reported in the red columns and, in the online exposure age calculator, identifying it with the “NIST_30600” standardization, will yield the same exposure age as calculating atoms/g using the ratio in the blue columns and identifying it with the “NIST_27900” standardization.
This post addresses what a “camel diagram” actually is. So what is it? Basically, this is a stupid name, apparently invented by myself (the name, not the diagram, although even that is hard to believe), for a type of diagram which is commonly used in the cosmogenic-nuclide literature to represent exposure-age data. Here is an example from a recent paper (Kelly MA and 6 others, 2008, Quat. Sci. Rev. 27, 2273-2282):
Basically, the caption above says what this is: it’s a way of representing a lot of measurements of the same thing that have Gaussian uncertainties. You draw a Gaussian with mean and standard deviation corresponding to each of your individual measurements, and then add them all up to obtain a summary curve. The use of this type of diagram in geochronology dates back to the 1980s, mostly in the fission-track and argon-argon dating literature — here is an interesting example from a paper on the ages of spherules in the lunar regolith from Tim Culler (Science 287, pp. 1785-1788):
More relevant to glacial chronology might be another example in a paper by Tom Lowell about radiocarbon dating of LGM moraines in Ohio (Lowell, T.V., 1995. The application of radiocarbon age estimates to the dating of glacial sequences: an example from the Miami Sublobe, Ohio, USA. Quaternary Science Reviews 14, 85–99.). The point of this post is to explain what the point of this diagram is, why and when you should use it, and how to apply it both rightly and wrongly to exposure-age data. As with all good statistical constructs, it can be useful for misleading readers into thinking what you want them to think.
First, what is the point of this diagram? Basically, we are using this diagram to describe the frequency distribution of observations. We have made a set of measurements of what we believe to be the same thing, and we want to represent the distribution of those measurements. Normally to carry out this task, we would use a histogram, which is a fairly basic sort of a diagram in which we divide the observation space into bins, determine how many observations fall into each bin, and then fill each bin with a bar whose height is proportional to the number of measurements. Let’s say we measured a bunch of exposure ages, in thousands of years BP (i.e., ka), on a moraine, and got the following results:
[23.1 24.1 16.3 24.1 21.3 15.9 17.8 20.5 24.6 24.6 16.6 24.7 24.6 19.9 23.0 16.4 19.2 24.2 22.9 24.6 ]
We could create a histogram of these data by defining bins, let’s say with a width of 2000 years and starting at 0, and assigning these data to bins. An exposure age of 19.2 ka goes in the 18-20 ka bin, an exposure age of 22.9 ka goes in the 22-24 ka bin, and so on. This yields the following table of how many samples fall in each bin:
| Bin (ka) | Number in bin |
| 14-16 | 1 |
| 16-18 | 4 |
| 18-20 | 2 |
| 20-22 | 2 |
| 22-24 | 3 |
| 24-26 | 8 |
Which in turn produces the following histogram:
The x-axis is the exposure age, each bar is a bin, and the y-axis is the number of samples that fall into each bin. Three important points about histograms. First, they represent an observed frequency distribution of measurements. They’re not necessarily a probability distribution function for the ages of boulders on the moraine. If you i) made the additional assumption that the probability of observing a certain exposure age is exactly equal to the frequency distribution of exposure ages we have already observed (which is highly restrictive, but might be true if you had analysed all the boulders on the moraine), and then ii) renormalized the y-axis so that the sum of all bar heights was equal to 1, then you would arguably have a probability density function for boulder age. Second, you need to make two arbitrary decisions when you create a histogram: how wide are the bins, and where are they located? If you change these things, the histogram changes. Third, there is no uncertainty in histograms. Each measurement goes in one and only one bin.
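The binning step described above takes only a few lines; this sketch (variable names mine) reproduces the bin counts for the twenty ages listed earlier:

```python
ages_ka = [23.1, 24.1, 16.3, 24.1, 21.3, 15.9, 17.8, 20.5, 24.6, 24.6,
           16.6, 24.7, 24.6, 19.9, 23.0, 16.4, 19.2, 24.2, 22.9, 24.6]

bin_width = 2.0  # ka, with bins starting at 0

# Assign each age to the bin whose lower edge is the largest multiple of
# bin_width not exceeding the age; each age lands in exactly one bin.
counts = {}
for age in ages_ka:
    lower_edge = bin_width * int(age // bin_width)
    counts[lower_edge] = counts.get(lower_edge, 0) + 1
```

Note that this illustrates the third point above: every measurement goes into one and only one bin, with no representation of its uncertainty.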
Whether a histogram is or is not a probability density function is largely semantic and depends on your definition of terms, but the second and third points above mean that histograms are a lousy way to represent data when either of two things is true: i) there are only a few measurements, or ii) the measurements have uncertainty associated with them. Obviously, these two things describe most geochronological data, cosmogenic-nuclide exposure ages in particular. We don’t collect very many because they’re expensive, and they have measurement uncertainty. So here is an example of how wrong you can go with a histogram representation of exposure-age data. Let’s say you analysed two boulders and found them to have apparent exposure ages of 16.9 +/- 2.1 ka and 18.2 +/- 1.5 ka. There are two important things about these results. First, the two ages are different. Second, they agree when their uncertainties are taken into account. However, it’s impossible to communicate both of these important observations at the same time using a histogram. Here’s one possible histogram for these ages:
This one gives the impression that the two ages are irreconcilably different. Wrong. So that’s misleading. How about this one:
That one indicates that the two ages are the same. Also wrong and misleading. The point is that when data are sparse and have measurement uncertainties, representing their distribution with histograms fails to communicate the information we are trying to communicate. This is the problem that “camel diagrams” are intended to solve. In constructing a histogram, we are basically representing each measurement by a rectangle with width equal to the bin width, and then adding the representations of all the samples together to get a summary histogram. Now what we will do instead is represent each measurement by something other than a rectangle. Usually, because we are generally working with cosmogenic-nuclide measurements that have normal, i.e. Gaussian, uncertainties, we represent each sample by a Gaussian-shaped curve. This is just a curve generated by the formula for a normal probability distribution:
p(x) = [1 / (σ √(2π))] exp(−(x − μ)² / (2σ²))

where μ is the mean and σ is the standard deviation of the probability distribution. To do this for a single exposure age, we take the age we measured to be μ and the 1-standard-error uncertainty in the age to be σ. Doing this for the two data just mentioned above gives:
Representing the measurements as Gaussian curves visually communicates a lot of important things that we couldn’t communicate with the histogram. First, although the measurements are different, they are similar in light of their measurement uncertainties — if we envision each curve as something like a probability density function for the actual age of each of the samples, then the fact that there is a lot of overlap between the curves indicates that there is a high likelihood that they are both measurements of the same thing, and are different only because of measurement error. Second, we can compare the difference between the best estimate of each measurement — the location of each peak — and the size of the uncertainty on each measurement. In this case the measurements are more similar than their uncertainties, which also communicates the high likelihood that they are both measurements of the same thing. Third, because the formula for a Gaussian curve is defined such that the area under each curve is always the same, the height of each curve is inversely proportional to measurement uncertainty. This feature draws the eye immediately toward the most precise, i.e. tallest, measurements, and the viewer naturally tends to give those more weight. So representing data by continuous Gaussians instead of rectangles clears up a lot of the visual misrepresentation that histograms incur with small and uncertain data sets.
Typically one then adds the Gaussian curves corresponding to the single measurements together to come up with a summary plot, as follows:
The black line is the sum of the two individual Gaussians. The fact that it has only one peak correctly visually communicates the idea that the two measurements are both inaccurate measurements of the same thing — the true age of whatever we are dating — and considering them together tells us that this true age is likely to be somewhere between the two measurements, slightly closer to the more precise of the two. Basically we are performing a sort of a visual maximum likelihood estimate.
If we had data that didn’t agree even considering their uncertainties, we’d get something like this:
Because there’s not much overlap between the two Gaussian curves, they don’t add much to each other and we have a two-hump rather than one-hump summary plot. Hence the name “camel plot.” One hump good, two humps bad.
So the summary is that this type of presentation, in which we represent observations by continuous functions rather than a histogram, solves the problem that a histogram fails to communicate the important information about small data sets with measurement uncertainty.
The next question is what to call it. The term “camel diagram,” while easy to remember, is pretty dumb. It’s not a histogram. It’s not really a probability density function (as suggested in the caption from Meredith Kelly’s paper given as an example above) because it’s not intended to represent the probability of observing a particular outcome — it’s intended to represent the frequency distribution of measurements already collected. This fact led Culler and others (other example above) to call it an “ideogram created by summing the Gaussians.” As the word “ideogram” is more commonly used to describe written characters in Chinese and other “ideographic” languages that communicate an entire idea or concept by a single character, using “ideogram” to describe this sort of a plot is, at the very least, confusing. Really what it is is a sort of a smoothed frequency distribution, and the proper statistical term for it is a “normal kernel density estimate.” This term communicates the fact that we are trying to estimate the frequency density of actual observations. The “kernel” is just what sort of shape is used to represent each datum. In a standard histogram, the kernel is a rectangle. Here it is the equation of a normal, i.e. Gaussian, PDF, so it is a “normal kernel.” In principle one could have any sort of kernel — triangular, Poisson, sinusoidal, anything you want. There is a lot of statistical research devoted to the proper way to construct a kernel density estimate.
When and why to use it? As noted above the value of this type of plot is in overcoming the fact that histograms are visually misleading for sparse and uncertain data. If you have sparse and uncertain data, the camel diagram is a very good way to visually communicate a lot of the important conclusions that should be drawn from the data. For this reason, it’s a very good way of presenting geochronological data. It doesn’t make a whole lot of sense to use it when the opposite things are true of your data set — data that are numerous and whose uncertainties are very small compared to the spread in their values are easily and honestly presented in a histogram.
Finally, several marginally related things to note. First, the fact that one adds the Gaussian kernels together is also a reason that this diagram is a density estimate and not a probability estimate. If, as in the example above, we had two age measurements with Gaussian uncertainties, and we took each of those to be a probability density function for the age of the landform, then to combine them into a single probability distribution one could argue that we would instead want to multiply them, to obtain the joint probability of both things being true at once.
Second, one potential serious error in the use of this diagram for geochronological data occurs in the situation where an age measurement is not distinguishable from zero. Let’s say you have an exposure age of 300 +/- 200 years on a Little Ice Age moraine. If you take this to be a Gaussian uncertainty, then you are saying there is a finite probability that the age is less than zero. Of course, it is not possible that the age of the boulder is less than zero, so taking this uncertainty to be Gaussian is wrong. Hence, a normal kernel density estimate representation of such data would also be misleading. In principle, you could overcome this by using a different type of kernel — Poisson for example — that always goes to zero at t = 0. Again, there is lots of statistical literature describing improved kernels that correct this problem.
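As one concrete illustration of a kernel that vanishes at t = 0 (this particular choice is mine, not a specific recommendation from the statistical literature), a gamma distribution matched to a measurement’s mean and standard deviation has zero density for t ≤ 0 whenever its shape parameter exceeds 1:

```python
import math

def gamma_kernel(t, mean, sd):
    """Gamma-distribution kernel matched to a measurement's mean and
    standard deviation (shape k = (mean/sd)^2, scale theta = sd^2/mean).
    Unlike a Gaussian, its density is identically zero for t <= 0."""
    k = (mean / sd) ** 2       # shape parameter
    theta = sd ** 2 / mean     # scale parameter
    if t <= 0:
        return 0.0
    return t ** (k - 1) * math.exp(-t / theta) / (math.gamma(k) * theta ** k)

# For the 300 +/- 200 yr example above, the kernel assigns no density to
# negative (impossible) ages, while a Gaussian kernel would.
```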
Third, one question specifically about applying camel diagrams to exposure-age data: which uncertainty to use? Commonly in exposure dating we talk about two different values for the uncertainty. The so-called “internal” uncertainty includes only measurement uncertainty on the cosmogenic-nuclide concentration. So if we make a Be-10 measurement with 5% precision, then the internal error on the exposure age calculated from that Be-10 concentration is also 5%. The so-called “external” uncertainty adds uncertainty in the nuclide production rate that we use to compute the exposure age from the Be-10 concentration. For example, if the production rate uncertainty is 10%, then the same Be-10 measurement will yield an external uncertainty on the exposure age of about 12%. The important difference between these two is that the internal uncertainties are independent between exposure ages for samples from the same location, whereas the external uncertainties are not — they are all subject to a shared production rate uncertainty. This means that when comparing two samples at the same site to each other, one needs to use the internal uncertainty, not the external uncertainty: if we used the external uncertainty, we would often conclude that two samples agreed within their respective uncertainties when in fact this was not true. Because constructing a “camel diagram” for exposure ages from a particular landform is basically an exercise in comparing a set of samples to each other — you want to come to a conclusion about whether exposure ages are scattered due to postdepositional disturbance, for example — in most cases you should use the internal uncertainty alone in constructing the diagram.
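The internal-to-external arithmetic above is just a quadrature sum of independent fractional uncertainties; a minimal sketch (function name mine):

```python
import math

def external_uncertainty(internal_frac, production_frac):
    """Combine independent fractional uncertainties in quadrature."""
    return math.sqrt(internal_frac ** 2 + production_frac ** 2)

# 5% measurement precision plus 10% production rate uncertainty gives
# about 11.2%, i.e. roughly the 12% figure quoted above (the exact value
# also depends on how production rate maps into age).
ext = external_uncertainty(0.05, 0.10)
```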
Lastly, here is some MATLAB code for actually constructing camel diagrams.
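As a language-neutral illustration of the same construction (this is my sketch, not the MATLAB code referred to above), the whole procedure is just evaluating one normal PDF per measurement on a common grid and summing:

```python
import math

def camel_curve(t_grid, ages, sigmas):
    """Normal kernel density estimate: one Gaussian kernel per
    measurement (mean = age, sd = internal uncertainty), summed at each
    point of t_grid. ages and sigmas are in the same units as t_grid."""
    curve = []
    for t in t_grid:
        total = 0.0
        for mu, sigma in zip(ages, sigmas):
            total += math.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (
                sigma * math.sqrt(2 * math.pi))
        curve.append(total)
    return curve

# The two concordant ages from the example above: 16.9 +/- 2.1 ka and
# 18.2 +/- 1.5 ka. Because the curves overlap strongly, their sum has a
# single hump located between the two means.
grid = [10 + 0.1 * i for i in range(150)]
curve = camel_curve(grid, [16.9, 18.2], [2.1, 1.5])
```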
In a previous post I tried to clear up some of the confusion surrounding the fact that the “absolute” isotope ratio in a standard used for AMS measurement of Be-10/Be-9 ratios is defined based on a decay-counting measurement, so what you think this ratio is depends on what you think the half-life of Be-10 is. To review this a bit,
1. The measurement we need to compute an exposure age is the amount of Be-10 in a sample. We determine this by measuring the amount of Be in our sample, determining the Be-10/Be-9 ratio by AMS, and multiplying.
2. AMS measurement of the 10/9 ratio is done by comparing the 10/9 ratio in the sample to the 10/9 ratio of a standard whose absolute isotope ratio is already known.
3. The absolute isotope ratio of the standard is usually defined by a decay-counting measurement to determine how much Be-10 is present. This requires knowing the half-life of Be-10. If you use a different value of the half-life, this implies a different absolute isotope ratio for the standard, a different isotope ratio for your sample, and, eventually, a different exposure age. So from this perspective, an exposure age can be said to be “normalized to a particular value of the half-life.” Unfortunately, this description is extremely confusing if you are not completely familiar with the preparation of AMS standards.
The point of the previous post was to make all readers completely familiar with the preparation of AMS isotope ratio standards. In case that failed, the point of this post is to explain how to reduce the confusion caused by the semi-equivalency of the value of the Be-10 half-life and the number of Be-10 atoms in your sample. I summarize a couple of steps that have been taken in the past few years to alleviate this, as well as recommendations for how to keep things simple and reduce confusion as much as possible.
Step 1. Make an AMS standard whose absolute isotope ratio is determined independently of the Be-10 half-life. Kuni Nishiizumi and a number of co-authors accomplished this in a 2007 paper:
K. Nishiizumi, M. Imamura, M. Caffee, J. Southon, R. Finkel, and J. McAninch. Absolute calibration of Be-10 AMS standards. Nuclear Instruments and Methods in Physics Research B, 258:403–413, 2007.
These authors made the observation that the whole point of an accelerator mass spectrometer is to detect and count the number of atoms of Be-10 that enter a detector. If you could hang onto all those atoms, you would have a precisely determined number of Be-10 atoms that could be used to mix up an AMS standard with known 10/9 ratio. And you would not have determined the number of atoms of Be-10 by decay counting, so this would be independent of the Be-10 half-life. They implemented this very smart idea by placing a silicon wafer in the LLNL AMS detector array that would trap Be-10 atoms entering the detector. They then put a Be-10-rich cathode in the accelerator source and injected a large number of Be-10 atoms into the detector. These were detected, counted, and came to rest in the silicon wafer. They then dissolved the wafer and added a gravimetrically determined amount of Be-9. The result: a stock of Be (I will call this the “implantation standard”) whose absolute 10/9 ratio was known independently of the half-life. They then used this stock of Be to prepare several samples for AMS analysis, and used the AMS to compare the 10/9 ratio of this material to that in commonly used AMS standards (whose isotope ratios were previously only known by a decay-counting measurement). The result: absolute measurements of the 10/9 ratios in these standards that were independent of the value of the Be-10 half-life.
This is an enormously valuable contribution because it decoupled the question of “how many atoms of Be-10 are there in my sample” from the question of “what is the Be-10 half-life.” In addition, this paper includes a long list of absolute isotope ratios determined for various commonly used AMS standards, all referenced to the implantation standard. This list is important because it allows interrelation of AMS measurements normalized to different standards. For example, these measurements showed that the “07KNSTD3110” standard material has an absolute 10/9 ratio of 2.85 x 10^-12. If we want to compare that with measurements made against the “LLNL3000” standard material with an assumed isotope ratio of 3 x 10^-12, the intercomparison measurements in this paper reveal that we have to multiply the latter measurements by 0.8644. Some of these intercomparison measurements existed before, but this paper put a large set of internally consistent intercomparison measurements in one place.
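Restandardization is then a single multiplication. The sketch below uses only the two conversion factors quoted in this post (0.8644 for LLNL3000, from the paragraph above, and 1.0425 for NIST_Certified, from the question earlier in the post); the dictionary is illustrative, not the calculators’ full standardization table:

```python
# Multiplicative factors to restandardize Be-10 concentrations to the
# 07KNSTD scale. Only values quoted in this post are included; the full
# table is in Nishiizumi et al. (2007) and the online calculator docs.
to_07KNSTD = {
    "07KNSTD3110": 1.0,
    "LLNL3000": 0.8644,
    "NIST_Certified": 1.0425,
}

def restandardize(concentration_atoms_g, standard):
    """Convert an atoms/g concentration measured against `standard`
    to its 07KNSTD-equivalent value."""
    return concentration_atoms_g * to_07KNSTD[standard]
```

This is exactly why the standard must be reported: without knowing which key applies, the concentration is ambiguous at the several-percent to tens-of-percent level.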
Step 2. Strongly encourage anyone calculating exposure ages and/or erosion rates to determine and specify the Be measurement standard used for their AMS measurements. This is critically important for the usefulness and scientific longevity of AMS results — if there’s no information about which AMS standard the measurements are linked to, then it’s impossible to determine how many atoms of Be-10 are actually in the samples. If you can’t determine how many atoms of Be-10 were observed in a study, then obviously the study is totally useless to future researchers.
This is easy for some measurements, because the AMS lab that made the measurements places this information at the top of all of the results spreadsheets supplied to users. Some labs (e.g., LLNL) clearly state the standard material that they used along with the absolute isotope ratio assumed for that material. This is the simplest and easiest-to-understand approach. Other labs (e.g., SUERC) state the standard material used and the half-life of Be-10 that they used to define its absolute isotope ratio. This approach provides the information in a somewhat less accessible form: the user must obtain more information (i.e., the activity of the standard) to determine what absolute isotope ratio was assumed for the standard. As discussed in the last post, stating the half-life assumed in interpreting the activity of the standard material does fully define the standardization, but experience shows that it is much more confusing for users who are not familiar with how AMS standards are produced. Still other labs (e.g., PRIME) do not by default supply standardization information with their results, so users must look on a lab website (e.g., here for PRIME) or fish around in the AMS literature. This is the least satisfactory approach.
In 2009, I decided that the best way to address this situation was not simply to hector scientists about proper data reporting (although this is also a popular strategy, as documented in this post) but to give them an incentive to do it properly. The online exposure age calculator provided this opportunity. The calculator had started to become very commonly used by Earth scientists who wished to use cosmogenic-nuclide exposure ages and erosion rates in their research, but were not specialists in the method and were not completely familiar with all details of Be-10 measurement and production rate estimation. I modified the online calculators to require as input a description of the Be-10 AMS standard used for the measurements. Basically, I trawled through the tables in the Nishiizumi et al. (2007) paper as well as some other published intercomparisons, and defined a number of possible combinations of actual standard material and assumed isotope ratio. These possible standardizations are tabulated here. This meant that users had to figure out which of these standardizations applied to their measurements, and enter it. Because the online calculator was supplying users a valuable and time-saving service — computing exposure ages according to commonly accepted practice without requiring the user to understand all details — users were willing to incur a small amount of extra work, that is, figuring out the AMS standard used for their measurements, to gain this benefit. The way this happened in practice was that users would ask their AMS lab which standardization applied — the AMS folks would either tell them (if the relevant standard was tabulated) or contact me to add their particular standard/ratio comparison to the approved list. Basically, this worked — by giving users a benefit for properly assembling the information, they became motivated to obtain it from those responsible for their AMS measurements.
In addition, the fact that all users of the online calculator knew that this information was required motivated them to demand it from others in the course of paper or proposal review. Essentially, adding a standardization requirement to the online calculator harnessed worthwhile incentives and peer pressure to improve data reporting. At present, nearly all publications about cosmogenic-nuclide exposure dating properly describe the standardization of their Be-10 measurements. This is a huge improvement over the past situation.
Recommendations. Finally, I have some recommendations for the best way to make Be-10 measurement standardization as clear as possible for all users.
The first is to reduce confusion about the link between the half-life of Be-10 and the assumed isotope ratio of a standard by not describing AMS standards in terms of a half-life. This means saying "the NIST SRM 4325 standard with an assumed 10/9 ratio of 2.68 x 10^-11" and not saying "the NIST SRM 4325 standard with a Be-10 half-life of 1.34 Ma." It also means not making statements such as "these Be-10 measurements are normalized to a Be-10 half-life of 1.5 Myr." The reasoning here is as follows: what we are really trying to do is determine the number of atoms of Be-10 in a sample. Thus, we should describe how these measurements are standardized by likewise stating how many atoms of Be-10 are in the standard. In addition, all AMS standards must by definition have an absolute 10/9 ratio, but this is not always determined by decay counting. Thus, the absolute isotope ratio is a common descriptor that can be applied to any AMS standard. Yes, it is true that stating a standard material and an assumed half-life does fully and correctly describe an AMS standard, but it is a lot less confusing to keep the question of "how many atoms of Be-10 are in my sample" separate from "what is the half-life of Be-10." IT IS LESS CONFUSING to describe standards in terms of an assumed 10/9 ratio. Keep things simple: first, determine how many atoms of Be-10 there are in your sample. Then, after you have figured that out, decide what value of the Be-10 half-life should be used to interpret your results (see this post).
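To make the half-life/ratio link concrete: for a decay-counted standard, the assumed 10/9 ratio follows directly from the measured activity and the assumed half-life, so the implied ratio scales linearly with whatever half-life you pick. Here is a minimal sketch; the activity and Be-9 inventory are placeholder numbers, not real standard data, and only the relative effect of the half-life choice matters.

```python
import math

SECONDS_PER_YEAR = 3.15576e7  # Julian year

def implied_10_9_ratio(activity_bq_per_g, n9_atoms_per_g, half_life_yr):
    """10/9 ratio implied by a decay-counting measurement.

    N10 = A / lambda = A * t_half / ln(2), so the implied ratio is
    directly proportional to the assumed half-life.
    """
    mean_life_s = half_life_yr * SECONDS_PER_YEAR / math.log(2)
    n10_atoms_per_g = activity_bq_per_g * mean_life_s
    return n10_atoms_per_g / n9_atoms_per_g

# Placeholder activity and Be-9 inventory for the same physical standard.
A, N9 = 1.0e-3, 5.0e18
r_134 = implied_10_9_ratio(A, N9, 1.34e6)  # interpreted with 1.34 Ma
r_150 = implied_10_9_ratio(A, N9, 1.50e6)  # interpreted with 1.5 Ma
print(r_150 / r_134)  # 1.5 / 1.34, i.e. about 1.12
```

The same physical standard acquires a different nominal 10/9 ratio for each half-life choice, which is exactly why reporting a half-life alone, without the implied ratio, confuses users.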
The second is to use AMS standards whose absolute isotope ratios are linked to the Nishiizumi implantation experiment, not to a decay-counting measurement. This is why the online exposure age calculator renormalizes all data to the "07KNSTD" standardization, which is derived from the implantation experiment. Again, this keeps the question of "how many atoms" separate from "what is the half-life." This issue is alleviated somewhat now that accurate measurements of the Be-10 half-life exist; in principle, using the new half-life measurement with existing activity measurements for AMS standards most likely yields more precise absolute isotope ratios for many AMS standards than does referencing them to the Nishiizumi implantation standards. However, converting this half-life measurement into new absolute isotope ratios for decay-counting-based standards has not yet made it into published form. So I may take back this recommendation in the future.
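The renormalization itself is just a multiplication: each standardization carries a conversion factor, the 07KNSTD-consistent ratio of the standard divided by the ratio assumed when the measurement was reported, and measured concentrations are multiplied by that factor. A sketch of the idea; the factor values below are illustrative (the NIST_Certified factor of 1.0425 appears in the calculator documentation) and should be verified against the calculator's current table before use.

```python
# Multiplicative factors converting a Be-10 concentration measured against
# a given standardization onto the 07KNSTD scale. Illustrative values;
# verify against the online calculator's published table before use.
CONVERSION_TO_07KNSTD = {
    "07KNSTD": 1.0000,
    "KNSTD": 0.9042,
    "NIST_Certified": 1.0425,
}

def to_07knstd(conc_atoms_per_g, standardization):
    """Rescale a measured Be-10 concentration to the 07KNSTD scale."""
    try:
        factor = CONVERSION_TO_07KNSTD[standardization]
    except KeyError:
        raise ValueError(f"Unknown standardization: {standardization!r}")
    return conc_atoms_per_g * factor

# A concentration of 1.0e5 atoms/g reported against NIST_Certified:
print(to_07knstd(1.0e5, "NIST_Certified"))
```

The point of the single multiplier is that the choice of standardization cancels out of everything downstream: once all concentrations are on the 07KNSTD scale, one set of production rates applies to all of them.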