
Where to publish cosmogenic-nuclide geochemistry

April 11, 2019

The overall goal of this posting is to highlight a new open-access/open-review journal, just introduced by EGU and Copernicus Publications, entitled “Geochronology.” But first, some context.

Consider two things.

One. The dispute between the University of California system and scientific publisher Elsevier. Basically, UC has canceled its subscriptions to all Elsevier journals after failing to come to a blanket agreement for open access publication of UC-hosted research. I care about this because I utilize the UC library journal subscriptions.

Two. The list of peer-reviewed scientific journals where technical articles about cosmogenic-nuclide geochemistry are commonly published. Quaternary Geochronology. Earth and Planetary Science Letters. Geochimica et Cosmochimica Acta. Nuclear Instruments and Methods in Physics Research. Chemical Geology. Possibly Quaternary Science Reviews. For short-lived nuclides in tracer studies, perhaps the Journal of Environmental Radioactivity.

These journals have two things in common. They publish technical research on geochemistry and geochronology that is relevant to your work, so you read them all the time. And they are published by Elsevier. Although there are many other journals where one might sensibly publish any research involving applications of cosmogenic-nuclide geochemistry to broader Earth science projects, consolidation and acquisitions by Elsevier over the past decade or so have produced a near-monopoly on journals suitable for technical articles in this field. About half the articles that I am an author of are in Elsevier journals. The only other journals I can think of where I might publish technical articles rather than applications are, maybe, Geophysical Research Letters or JGR-Earth Surface.

Now put these two things together. In the future, I will not be able to legally read any articles published in Elsevier journals, which include nearly every possible outlet for basic research in cosmogenic-nuclide geochemistry. Most sensible people would conclude that, in this situation, I should also stop publishing in and reviewing for these journals. While it is true that cosmogenic-nuclide geochemistry was probably not a major contributor to Elsevier’s $1.2B in profits from scientific publishing in 2017, and my decision to boycott is unlikely to significantly threaten said profits, it seems pretty dumb to continue providing free services to journals that I can’t even read.

So, OK, I can’t publish the majority of my research output, which concerns technical research in cosmogenic-nuclide geochemistry, in any of the journals that are most suitable for publishing this research. While the idea of transferring all my future research output to this blog and other non-peer-reviewed websites is certainly appealing, it may not be a wise career move, or fair to funding agencies who expect to eventually see some results, to disengage completely from normal peer-reviewed journals.

That concludes the context. Although I thoroughly support the UC position in the Elsevier dispute because I think the Elsevier/Wiley/MacMillan business model, in which my free labor supports an enormously profitable business, is ridiculous, I am left at the moment with a lack of publication outlets. Thus, it is an extraordinary coincidence of timing that on nearly exactly the same day that UC/Elsevier negotiations failed, the European Geosciences Union and Copernicus Publications introduced a new non-profit, open-access, open-review journal called Geochronology, which aims to focus on basic research in geochronology and associated areas of geochemistry and analytical methods.

From my perspective, this is the only such journal that is not published by Elsevier, so it looks like I have to publish in it whether I like it or not — although there is a slight complication in that I am one of the editors, so it would probably look bad if I were also the most prolific author. In the rest of this post I will explain why you — a reader of this blog who presumably carries out and publishes research in cosmogenic-nuclide geochemistry — should publish in it too.

First, what is it? In the past several years, EGU has worked with Copernicus Publications, a publisher and conference organizer organized as a non-profit corporation under German law, to develop a series of online journals that (i) provide open access without subscription fees, (ii) impose on authors or funding agencies only the minimal costs needed to support publication and archiving, and (iii) use an open review process. More about the open review process later. Several of these journals have become extremely successful and widely read; for example, The Cryosphere, which is rapidly becoming the de facto international journal of record for research in glaciology and cold-regions climate and surface processes.

Second, what is the open review process? Basically, papers submitted to these journals, if the editors deem that they meet basic standards, are immediately posted as discussion drafts. In effect this is the same as posting to a preprint server such as arXiv. Then two things can happen. The editors solicit reviews of the draft paper; these reviewers get to choose whether or not to remain anonymous. And in addition, anyone can submit an unsolicited review under their name; unsolicited reviews may not be anonymous. Finally, the authors submit a revised paper and a response to the reviews; if acceptable, the editors post this as the final version. The important difference between this and the normal review process at most journals is that all parts of this — the reviews, the responses, and successive versions of the paper — are accessible online with the paper and become part of the permanent record.

I think this last element — the fact that all the review correspondence is an accessible part of the permanent record — is an enormous improvement on standard practice. From my perspective as a reader, I learn a lot more from reading the paper with the reviews than I would from the paper alone. It is a throwback — a good throwback — to a long-past bit of the history of science in which many papers were published as conference proceedings, and a stenographer was on hand at the conference to record and later publish the discussion that took place after the presentation. Speaking as a reviewer, I put a lot of work into reviews, sometimes including extensive calculations and model exercises, and if this work disappears into anonymity and is never seen again, it is hard to see why it wasn’t a waste of my time. And then as an author, I am pretty sure that if reviewers know that their comments will be publicly accessible and permanently on the record, I am going to get a more careful, fair, and well-thought-out review. I certainly feel that pressure when reviewing in these journals. The point is, these are all good things. They make it so I, or you, learn more from published research. They are probably more work for everyone, but I think they also produce better results.

To summarize, Geochronology (or “GChron” if you don’t have time for the extra four syllables) solves my immediate problem created by the UC-Elsevier struggle for dominance. But even if you don’t have that problem, if you publish research in this field, it corrects several other problems and improves the situation in several ways. The reason I signed on as an editor long before the Elsevier incident is that it creates an outlet for basic research in geochronology and related areas that is independent of applications, which will hopefully make it possible to focus on doing the best possible job of determining the age of geologic deposits, without having to bury all the interesting technical research in the supplementary methods to make a paper suitable for a more general journal. Then, the open review system is better than the conventional system. And finally, open access publication is better than a subscription model. Publication fees in the EGU/Copernicus journals are cheap — 77 Euros per page — and many European universities and funding agencies, as well as a couple in the US, have blanket agreements. And for GChron, fees are waived for the first two years until there is an impact factor, so it’s free. That’s free, as in free. There is no downside.

The Geochronology website is here. Click on the link and submit the paper that you are writing right now.


YD vs. ACR: Who Would Win?

February 8, 2019

Any readers with kids — of the six people in the world who read this blog, statistics suggest that at least one or two will have children — may be familiar with the “Who Would Win” series of books by prolific children’s book author Jerry Pallotta, including such titles as “Who Would Win: Komodo Dragon vs. King Cobra,” “Who Would Win: Jaguar vs. Skunk,” and many, many, many others. This posting is about an equally epic matchup that does not appear in that series, but is a good way to highlight yet another database project and some ideas about what we should be doing with it.

The database project is the ‘ICE-D:ALPINE‘ database, which is basically a counterpart to the now-reasonably-well-established ‘ICE-D:ANTARCTICA‘ database of all known cosmogenic-nuclide data from Antarctica. The difference, as indicated by the name, is that the ICE-D:ALPINE database is intended to contain all known cosmogenic-nuclide exposure-age data from alpine glacier systems worldwide (except, of course, for Antarctica). This is a much bigger project, because alpine glacial landforms in populated mountain ranges are a lot easier to get to than glacial deposits in Antarctica — a substantial fraction of them are directly accessible by train from the Zurich airport — so the density of geologists, and therefore of exposure-dated boulders, is correspondingly higher. If we count by measurements, the alpine database is currently about 4 times larger than the Antarctica database; if we count by published journal articles, alpine glaciers are winning by about a factor of 3. Because it is a bigger project, more people are working to make it happen: the initial data content was primarily based on the global compilation of exposure ages on boulders put together by Jakob Heyman, and this has now been augmented by an international group of contributors whose names appear on the database front page. It’s also less complete than the Antarctica database: we guess that it will have to grow by 25-50% to include all published data, and of course the rate of publication of new exposure-age data for alpine glacial deposits worldwide is a lot higher. So right now it’s a pretty good, but not totally comprehensive, representation of the data that are out there. The present distribution of data looks like this, where each dot signifies a glacial landform, typically an alpine glacier moraine, that’s been exposure-dated at least once. There are 2163 dots.

The matchup concerns the two biggest climate events — or at least the ones with the largest media presence — from the period of global deglaciation following the Last Glacial Maximum. The Younger Dryas (“YD”) event is a 1200-year cold period between 12,900 and 11,700 years ago that is extremely prominent in climate records from Greenland and the North Atlantic region. The Antarctic Cold Reversal (“ACR”) is sort of like the opposite of the YD: it is a 1700-year cold period between 14,700 and 13,000 years ago that is prominent in Antarctic and Southern Hemisphere climate records. The idea is that during the ACR, it was warm in Europe and cold in Antarctica; during the YD it was the opposite.

Most paleoclimatologists and glacial geologists would probably expect that alpine glaciers worldwide paid attention to these climate events. Glaciers expand during cold periods, and contract during warm ones. An extraordinary amount of work in the glacial-geological and paleoclimate literature, which will not be summarized or enumerated here because this is a blog entry and not a peer-reviewed journal article, has been devoted to associating particular glacier advances in particular places with the YD or the ACR. Nearly all this work consists of single studies of single glacier systems, or a handful of glacier systems in the same general area. But if we were to do a global synthesis of all dated alpine glacier deposits, a decent initial hypothesis would be that in the Northern Hemisphere and particularly around the North Atlantic, it was cold during the YD and warm during the ACR, so we should see a lot of YD moraines and not a lot of ACR moraines. In the Southern Hemisphere, it was the opposite, so we should see a lot of ACR moraines and not a lot of YD moraines.

So let’s see if this is really true, using the entire data set of all dated moraines and not just a few individual hand-picked ones that happen to be part of our own Ph.D. dissertation. To do this, I’ll use what is generally thought to be more or less the present state of the art for production rate scaling and calibration, specifically (i) the time-dependent ‘LSDn’ scaling method developed by Nat Lifton and others in 2014-16, as implemented in version 3 of the online exposure age calculators described by Balco et al. (2008) and subsequently updated, and (ii) the CRONUS-Earth “primary” production rate calibration data sets for Be-10, Al-26, and He-3. The majority of the exposure ages in the database are derived from Be-10 measurements, but there are significant numbers of Al-26 and He-3 data in there too, and for present purposes I’ll treat them all equally. Using these tools, calculate exposure ages for all the samples on all 2163 moraines in the database. For each landform, apply the outlier-detection and averaging scheme used in the online exposure age calculators — which basically attempts to discard the minimum number of outliers necessary to yield a single population — to generate a summary age and uncertainty. Assume that these define a Gaussian uncertainty distribution, and compute what fraction of it lies within the YD (11,700-12,900 years BP) and the ACR (13,000-14,700 years BP). Basically, this is an estimate of the probability that a particular landform was emplaced during each of these climate events.
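
The per-landform probability calculation described above can be sketched in a few lines of Python (the analysis itself ran in MATLAB via the online calculator code; the moraine age and uncertainty in the sketch are invented, chosen only to illustrate the arithmetic):

```python
from math import erf, sqrt

def window_probability(age, sigma, t_young, t_old):
    """Fraction of a Gaussian age distribution (summary age `age` with
    1-sigma uncertainty `sigma`, both in years BP) that falls within a
    climate event spanning t_young to t_old years BP."""
    cdf = lambda t: 0.5 * (1.0 + erf((t - age) / (sigma * sqrt(2.0))))
    return cdf(t_old) - cdf(t_young)

# Hypothetical moraine with a summary age of 12,500 +/- 400 yr BP
p_yd = window_probability(12500, 400, 11700, 12900)   # Younger Dryas
p_acr = window_probability(12500, 400, 13000, 14700)  # Antarctic Cold Reversal
```

For this invented example the YD probability is about 0.82 and the ACR probability about 0.11, so the hypothetical moraine would count as a likely YD moraine.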

As an aside, a measure of how useful it is to have all these data in a modern online database is that this entire analysis takes slightly less than 5 minutes to run in MATLAB, even with a slow household network connection.

Now plot the results on a map.

Obviously, lots of moraines have ages that are nowhere near either the YD or the ACR; these have zero probability of belonging to either event and are denoted by black dots. But a decent number of moraines have a significant probability of belonging to one event or the other. Note that because the typical overall uncertainty in a moraine age is similar in magnitude to the length of these climate events, this algorithm will never yield a 100% probability; given this, a high probability of belonging to a millennial-scale climate event will be anything greater than approximately 50%. So the colored circles on both maps denote moraines that have more than half a chance of belonging to each climate event. Out of 2163 moraines, this scheme identifies 44 likely YD moraines (the green circles) and 156 likely ACR moraines (the blue circles).

What is interesting about these results is that they are not quite what we expect. It does seem evident that YD moraines are rare, and ACR moraines are common, across all latitudes in the Pacific Cordillera. I am not sure whether we expect this: the ACR is typically explained as something having to do mostly with Atlantic heat transport, and although ACR-age glacier advances in much of South America have been explained as a response to cooling in the Southern Ocean, there is significant disagreement about whether this should be true in the tropics, and it is definitely not obvious why it should also happen in the north Pacific. What we did not expect, though, is that YD moraines are not dominant, or even very common, anywhere. This is true even in Europe, where there are certainly lots of late-glacial moraines, but where most of them seem to be ACR-age, not YD-age. It is important to remember that these data pertain only to alpine glaciers, not ice sheets, so the general lack of evidence for YD glacier advances may be in agreement with the idea that the seasonality of YD climate change resulted in different responses from alpine glaciers and ice sheets. But the main point is that this analysis does not at all show that the YD and ACR each have their own geographic sphere of influence on glacier change. In fact, nothing like it. ACR wins and YD loses globally.

We also get this result if we just look at the derived ages of all 2163 moraines as a histogram.

The green and light blue boxes indicate the YD and ACR, respectively. Viewed in this way, it is actually evident that there are fewer moraines that belong to the YD than there are for the rest of the late glacial: there are a lot of moraines older than 13-ish ka and a lot of moraines younger than 12-ish ka, but not a whole lot in the middle. Weird.

So, what is going on here? One thing that is interesting about this analysis is that it has reached the level of complexity where it’s hard to know exactly what’s going on in detail. This cuts both ways. It is potentially bad: for example, although I think that the outlier-removal scheme mostly behaves rationally, if there are some pathological age distributions in here that caused it to do the wrong thing, I’ll never know it. On the other hand, it’s good because it makes the analysis unbiased: even if I have some preconception of what the results ought to be, the analysis is so complicated that it would be really hard to even figure out how to consciously or unconsciously fudge the algorithm or the data to get the desired result. But, still, there are some reasons why these results could be systematically wrong. A couple are fairly obvious. One, summary moraine ages could skew young because of postdepositional disturbance (because of how the outlier-rejection scheme works in this example, they are less likely to be skewed old due to inheritance). But this would be mitigated by the fact that production rate calibration data are largely from the same types of landforms, which would skew production rates to the young side and neutralize this effect. And, in any case, if this were happening it would push moraines out of the YD toward younger ages and also push other moraines into it from the old side, so it would not be expected to produce a systematic lack of YD moraines, except by accident. Two, production rate estimates could be inaccurate. If we globally lowered production rates by 5%, that would push the big group of ages near 11-11.5 ka right into the end of the YD near 12 ka. But we would have to ignore a lot of presumably correct production rate calibration data to do that, so that is not very compelling either.
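
As a quick sanity check on the second point: for samples this young relative to the Be-10 half-life, the exposure age scales roughly inversely with the production rate, so a 5% global reduction in production rates shifts the whole age cluster by about 5% (illustrative ages, not actual database values):

```python
# For ages short relative to the half-life, t ~ N / P, so lowering the
# production rate P by 5% raises all ages by a factor of 1/0.95 ~ 1.053.
ages_ka = [11.0, 11.25, 11.5]              # hypothetical age cluster, ka
shifted = [t / 0.95 for t in ages_ka]      # with 5% lower production rates
# The cluster moves from ~11-11.5 ka to ~11.6-12.1 ka, i.e. just reaching
# the young end of the YD (11.7-12.9 ka).
```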

Many people would argue that this entire analysis is oversimplified, misleading, and probably wrong exactly because the data set is not highly curated and subject to detailed examination and quality control at the level of each individual landform. Many specialists in exposure-dating of glacial deposits, if you get them talking privately, will reveal that data collected by other research groups are sometimes not good. Scattered. Hopelessly contaminated by postdepositional disturbance. Not to be trusted. Garbage. Certainly not to be used in an analysis of this sort for fear of leading the results astray. Of course, this data set is so large that I have no idea who even collected most of the data, so even if I had some of these opinions myself it would be nearly impossible to act on them to influence the results. But we can make a bit of an effort to de-garbage the data set by applying some objective criteria. For example, we can accept only exposure-age distributions for which we cannot reject, at the 95% confidence level, the hypothesis that the ages belong to a single population; this basically has the effect of further screening out landforms whose ages are scattered more than can be dealt with by discarding a couple of outliers. The effect on the global distribution of moraine ages:

is to make the relative absence of moraines of YD age in relation to older and younger time periods even more striking. Applying this filter geographically produces this:

Again, being more strict about discarding scattered age distributions results in rejecting a whole lot of moraines that might have appeared YD-age just because of excess scatter, and makes ACR-age moraines (and “Preboreal,” i.e. post-YD, moraines) even more dominant. We are down to only a handful of YD moraines in the Alps and only one in the western US. What do we conclude? We conclude that YD-age alpine glacier moraines are actually kind of rare, which is at least not exactly what I expected, and possibly not what you did either. Perhaps the YD was cold, but also dry, so alpine glaciers shrank everywhere. Perhaps this is a manifestation of the seasonal distribution of YD cooling, something like Meredith Kelly suggested in this paper. Or perhaps Gerard Roe is right and glacier change is just an autoregressive process driven by stochastic variability in a steady climate.
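
The single-population screen used in this filter is basically a chi-squared test on the error-weighted mean. Here is a stdlib-only Python sketch of that test; the actual calculator code differs in details (notably the outlier-rejection loop), and the example ages are invented:

```python
from math import erfc, exp, gamma, sqrt

def chi2_sf(x, k):
    """Upper-tail probability of a chi-squared distribution with k degrees
    of freedom, via the standard recurrence (no scipy required)."""
    if k % 2:                      # odd dof: start from k = 1
        q, j = erfc(sqrt(x / 2.0)), 1
    else:                          # even dof: start from k = 2
        q, j = exp(-x / 2.0), 2
    while j < k:
        q += (x / 2.0) ** (j / 2.0) * exp(-x / 2.0) / gamma(j / 2.0 + 1.0)
        j += 2
    return q

def single_population(ages, sigmas):
    """Chi-squared test that exposure ages (with 1-sigma uncertainties)
    are consistent with a single population; reject when p < 0.05."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
    chi2 = sum(((a - mean) / s) ** 2 for a, s in zip(ages, sigmas))
    dof = len(ages) - 1
    return chi2, dof, chi2_sf(chi2, dof)

# Invented examples: a tightly clustered moraine and a scattered one
_, _, p_tight = single_population([12100, 11900, 12300, 12050], [300] * 4)
_, _, p_scattered = single_population([10000, 12000, 14000], [300] * 3)
```

The tight cluster passes (p well above 0.05) and the scattered one fails, so only the first would survive the stricter filter.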

But in any case, ACR wins! Game over.

So, to summarize, that was kind of fun, but what is the point? It could be argued that this analysis is oversimplified, misleading, based on crappy data, and largely BS. Maybe that’s why this is an unreviewed blog posting instead of a Nature article. But what is really important here is that this exercise highlights that we can move away from making large-scale inferences about glacier response to climate change based on small sets of highly curated data, and move towards doing it with much larger and more inclusive data sets, even if they are noisy, and, hopefully, with globally applied and less biased algorithms. We have, or almost have, a modern database infrastructure with dynamic, internally consistent, and mostly transparent, exposure-age recalculation, that we can use to do this. We should do it. And if, in fact, my shot at this really is oversimplified, misleading, and largely BS, then go do it better. All the data are here.


Stone (2000) revisited

February 5, 2019

This is just to tie up a loose end from a previous post in which I argued that a compilation of Be-10 and Al-26 concentrations in extremely old, low-erosion-rate surfaces from Antarctica indicated that the time-dependent, paleomagnetically-aware “LSDn” production rate scaling method of Lifton and others did a better job of production rate estimation than the non-time-dependent “St” scaling method of Lal (1991) as implemented by Stone (2000). Basically, the difference is the altitude dependence of the production rate. Tying up this loose end is an interesting exercise in data archaeology, and also highlights some evidence that either the Al-26 half-life or the Al-26/Be-10 production ratio may need some attention.

The issue in the previous post is just that in the St scaling method, the production rate increases more slowly with elevation in polar latitudes, with the result that many high-elevation Antarctic samples have measured Be-10 concentrations that are higher than saturation concentrations predicted by that scaling method. With LSDn scaling, predicted production rates at high altitudes are higher, so all the observed concentrations are near or below predicted saturation.

However, as I pointed out in the previous post, the 2000 paper already looked at available near-saturation data from Antarctica, and it concluded the opposite: that near-saturation measurements from Antarctica were consistent with the scaling method proposed in that paper. This was an important conclusion at the time, and it was extremely persuasive in driving the adoption of lower production rate estimates and the abandonment of the standard atmosphere as a single global elevation-pressure conversion. However, lots of stuff has changed between the two comparisons. Errors in Be-10 measurement standardization were corrected by Nishiizumi and others (2007) and related work. The Be-10 half-life was remeasured, and remeasured again. Large amounts of new Be-10 and Al-26 production rate calibration data have been collected. Everything is different. So figuring out whether what the 2000 paper says is still valid is really complicated. But let’s try anyway. It’s probably not worth the effort to explore the result of each of those changes individually, but it’s possible to use the ICE-D:ANTARCTICA database and the latest version of the online exposure age calculator to look at the effect of the last 18 years of progress all together. 18 years is a while, so hopefully there has actually been some progress.

First, a review of the original discussion. Here is the relevant figure from the 2000 paper.

This shows selected Be-10 and Al-26 data from Antarctica plotted on a two-isotope diagram. The simple exposure region — that is, the part of the diagram where it is physically possible for concentrations to lie given a single period of steady exposure at some surface erosion rate — is the region outlined by solid black lines. The point marking the right-hand boundary of this region is the saturation endpoint: concentrations cannot lie to the right of this point, or above the upper boundary of the region. Potentially, concentrations could lie below the region if sites had been ice-covered for a time, but the majority of these sites are from mountain summits that are unlikely to have experienced ice cover in the past. In the original paper, the point of this figure was to show that certain combinations of Be-10 production rates and scaling methods produced either unphysical results (like in the upper left panel where many concentrations lie above the simple exposure region) or unlikely ones (as in the upper right panel where all concentrations imply significant burial). On the other hand, the scaling approach proposed in the paper for Antarctic sites, shown in the lower left panel, produced plausible and physically possible results.
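
For readers who have not worked with these diagrams: both boundaries of the simple exposure region follow from the standard steady-exposure/steady-erosion equations. Here is a minimal Python sketch with illustrative parameter values (round-number production rates, commonly used half-lives, and a typical attenuation length, not the values used in the 2000 paper):

```python
from math import log

# Illustrative sea-level high-latitude parameters (approximate)
P10, P26 = 4.0, 27.0     # production rates, atoms/g/yr (26/10 ratio ~6.75)
l10 = log(2) / 1.387e6   # Be-10 decay constant, 1/yr
l26 = log(2) / 7.05e5    # Al-26 decay constant, 1/yr
rho, Lam = 2.65, 160.0   # rock density (g/cm3), attenuation length (g/cm2)

def steady_state(erosion_cm_yr):
    """Surface Be-10 and Al-26 concentrations (atoms/g) for steady
    exposure at a constant erosion rate; erosion = 0 gives saturation."""
    mu = rho * erosion_cm_yr / Lam
    return P10 / (l10 + mu), P26 / (l26 + mu)

# Saturation endpoint (right-hand limit of the simple exposure region)
n10_sat, n26_sat = steady_state(0.0)
# Fast erosion pushes the 26/10 ratio back toward the production ratio
n10_fast, n26_fast = steady_state(0.1)
```

With these numbers the 26/10 ratio at the saturation endpoint is about 3.4, well below the production ratio of 6.75, because Al-26 decays roughly twice as fast as Be-10; at high erosion rates decay is negligible and the ratio returns to ~6.75.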

Now, if you believe my previous posting, changes in Be-10 standardization and production rate scaling in the subsequent 18 years have superseded these results: when we take these into account, the “St” scaling method actually forces concentrations to lie outside the simple exposure region. So, let’s try to reconstruct this diagram with our current best estimate of all this stuff.

Step 1 is to figure out which samples are involved. The 2000 paper doesn’t list sample names, but describes the sample set only as “Antarctic Be-10/Al-26 data from Nishiizumi et al. (1991), Ivy-Ochs et al. (1995), and Brook et al. (1995), selected to show the oldest exposure ages in each data set.” We can make a decent attempt at figuring out which samples these are using the ICE-D:ANTARCTICA database; choosing the highest-concentration samples from the three 1990’s papers yields this set of samples. We can then replicate the figure from the 2000 paper using the following tools. One, the various scaling method implementations in version 3 of the online exposure age calculator described by Balco et al. (2008) and subsequently updated: specifically, the “St” scaling method, which is an implementation of the method originally described in the 2000 paper, and the “LSDn” scaling method developed by Lifton and others in 2014-16. Two, the CRONUS “primary” production rate calibration data sets for Be-10 and Al-26. Three, the Be-10 and Al-26 standardization relationships used in the online exposure age calculator and tabulated here. With these ingredients and the “St” scaling method as implemented in the online exposure age calculator, the figure from the 2000 paper now looks like this:

The important change is that now at least two, and probably more, of the measurements exceed the saturation endpoint. They plot in the impossible region of the diagram, to the right of the simple exposure region. So this answers one question: yes, repeating the Stone (2000) calculation with the same data, but with revised measurement standardizations and production rate calibration, does in fact support the argument that this scaling method underpredicts elevation scaling at the poles. Primarily, what has happened here is that, once everything has been corrected for measurement restandardization, new production rate calibration data imply a lower production rate than the best estimate at the time, so Be-10 concentrations are relatively higher in comparison to predicted saturation concentrations. Of course, we knew this already, but recalculating all these data is actually a fairly difficult test of whether the entire web of restandardization and recalibration is internally consistent. It seems to work, which is good.

For completeness, we should also try this with the latest “LSDn” scaling. Here is the result:

As discussed in the previous post, this scaling method predicts higher production rates, which pulls all the normalized Be-10 concentrations back to the left into the region of the possible. So from the perspective of Be-10 production rates, as discussed in the previous posting, this seems to show that LSDn scaling works: Be-10 concentrations are all at least a little bit below predicted saturation. However, both results, and in particular the LSDn result, are really interesting from the perspective of Al-26 production rates, because they imply that all these sites lie in the region of intermittent burial, below the simple exposure line, and thus have been covered by ice for significant parts of their exposure history. Of course, ice cover is not exactly unusual in Antarctica, but these samples, especially those from the Ivy-Ochs and Brook papers, are mostly from high-elevation mountaintop sites where ice cover is very unlikely to have occurred at any time. So this is consistent with the observation in the previous post that Al-26 concentrations throughout Antarctica are systematically lower with respect to predicted saturation than the corresponding Be-10 measurements, which probably indicates that something related to Al-26 is not as we expect. Note that we can’t fix this just by lowering production rates together (the difference between upper right and lower left panels in the original figure), because that would force Be-10 concentrations too far to the right — which tends to indicate that Be-10 is probably fine, and our problem likely has something to do with Al-26. Although of course it is hard to exactly reconstruct what happened to Al-26 measurements 20 years ago, there is not an obvious means by which Al-26 measurement standardization errors would cause this effect. In addition, we can look at more recently collected Al-26 data, for example recent unpublished measurements by Perry Spector at the Whitmore Mountains, and we get the same result. 
Here are some of the Whitmore Mountains data with LSDn scaling:

Same story. Be-10 concentrations at (or in this case maybe slightly beyond) the predicted saturation endpoint, but Al-26/Be-10 ratios significantly below that predicted for saturation. So if we can’t explain the problem with measurement standardization errors (and we’re pretty sure the Al-26 measurements were done right), this leaves two possibilities. First, the Al-26 half-life might be shorter than we think it is. This is possible: existing measurements of the Al-26 half-life might leave room for as much as a 3-5% inaccuracy, which could account for most of the apparent burial for these samples. If this is the answer, it would imply an Al-26 half-life near 675 ka, but although this is technically within the uncertainty of some of the direct measurements of the half-life, it’s substantially lower than any of them. Second, we might be overestimating the Al-26/Be-10 production ratio, although fully accounting for apparent burial without changing the half-life would require the production ratio to be 6.3, which is lower than the mean ratio at moderate elevations allowed by currently available data, but not totally outside the range of the data. Certainly, though, the data set of near-saturation measurements from Antarctica is not at the moment consistent with proposals that the 26/10 production ratio is significantly higher than 6.75. But in any case, this seems to indicate that it’s probably worth taking another look at the highest Al-26 concentrations in Antarctica, perhaps making a systematic effort to remeasure Al-26/Be-10 ratios in the highest exposure-age samples, and maybe even looking into the Al-26 half-life.
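
The trade-off between the half-life and the production ratio is simple decay arithmetic. A sketch follows (the half-lives and production ratios are commonly used values; the "measured" ratio is invented to illustrate the apparent-burial calculation, which here is first-order only and ignores production during burial):

```python
from math import log

T10, T26 = 1.387e6, 7.05e5       # commonly used half-lives, yr

def sat_ratio(prod_ratio, t26=T26, t10=T10):
    """Predicted 26/10 ratio at saturation: the production ratio scaled
    by lambda10/lambda26, which equals t26/t10."""
    return prod_ratio * t26 / t10

def apparent_burial(measured, predicted, t26=T26, t10=T10):
    """First-order burial time (yr) implied by a measured 26/10 ratio
    below the predicted one (assumes no production during burial)."""
    l10, l26 = log(2) / t10, log(2) / t26
    return log(predicted / measured) / (l26 - l10)

base = sat_ratio(6.75)                         # standard values: ~3.43
shorter_halflife = sat_ratio(6.75, t26=6.75e5) # half-life near 675 ka
lower_prod_ratio = sat_ratio(6.3)              # production ratio of 6.3
t_burial = apparent_burial(3.2, base)          # invented measured ratio
```

Either change lowers the predicted saturation ratio by a few percent, which is the direction of the observed deficit; with the standard values, a measured ratio of 3.2 against a predicted 3.43 implies roughly 140 kyr of apparent burial.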

Summary: recalculating the results in Stone (2000) to incorporate 18 years of work on measurement standardization and production rate calibration does, in fact, support the hypothesis that LSDn scaling performs better at scaling existing calibration data to high-elevation, high-latitude sites, at least for Be-10. It’s also relevant that we seem to actually be doing a decent job of systematically applying the improvements: adjusting 20-year-old data and calculations for all the subsequent changes in standardization and scaling seems to produce internally consistent results. That’s good. However, looking carefully at the data set of near-saturation measurements from Antarctica indicates that maybe the Al-26 half-life and the Al-26/Be-10 production ratio need some attention.

The ICE-D x OCTOPUS collabo

January 7, 2019

The point of this blog post is to highlight that cosmogenic-nuclide geochemists will not be outdone by Nigerian rappers. A little late — Vogue has just declared that fashion/hip-hop collaborations denoted by “x” are completely 2018 — but not bad for a field whose conceptualization of success and promotion is still defined by the pre-industrial concept of literature citations. Sadly, online mash-ups are not peer-reviewed (which is probably a good thing, based on the results of the interactive review process linked below) and have zero h-factor impact.

Seriously, this is a follow-up to a blog posting and an interactive paper review on the subject of the “OCTOPUS” database of cosmogenic-nuclide basin-scale erosion rate measurements. In these, I expressed, at length, bafflement as to why the existing “OCTOPUS” website did not allow direct, anonymous, granular access to the data in the database, instead providing data via an asynchronous and inconvenient email download scheme. I argued that not only was this very strange — why not just serve the raw data via the website — but also damaging to the overall credibility of the project, because it made it unnecessarily difficult for the casual user to examine the source data used for erosion rate calculations. I even quoted Ronald Reagan (who was quoting a Russian proverb at the time) to the effect that the project would not be credible unless researchers interested in using these data for synoptic erosion rate analyses could “trust, but verify.” Of course, it’s easy to quote Ronald Reagan — in fact, it’s terrifyingly easy to worship Reagan in the present American political environment — but not as easy to fix the problem. So here is the ICE-D x OCTOPUS collabo, which is an effort to fix the problem.

That effort is a mash-up of the ICE-D and OCTOPUS databases at octopus.x.ice-d.org.

It is a text-based, non-graphical browser interface to the OCTOPUS cosmogenic-nuclide databases that makes it possible to (i) examine the source data recorded for each sample in that database, and (ii) recalculate erosion rates from these source data using other calculation methods.

Technically, what is happening is that this web server is interrogating the WFS server that serves the OCTOPUS data as if it were GIS software, but then is parsing the resulting XML feed and displaying the data directly in human-readable form instead of showing it in map form or using it for geographic analysis. You could do the same thing with some scripting and a local copy of ArcGIS or QGIS configured to talk to the WFS server, but this web server provides a simple browse interface. So (as long as the OCTOPUS providers continue to let me do this), this solves the problem of providing casual, anonymous, granular examination of the source data. It makes it easy to, for example, quickly check the OCTOPUS database to make sure that data from a particular paper are accurately recorded. Of course, it is not clear that authors of published erosion-rate studies have the same relationship to the OCTOPUS compilers as the US and Russian strategic nuclear forces had to each other in 1987, but the idea of “trust, but verify” is still not a bad thing. So now you can do this.
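For what it’s worth, the mechanics of doing this yourself are not complicated. Here is a minimal Python sketch of the same idea: build a WFS GetFeature request the way GIS software would, then flatten the returned GML into readable field/value pairs. The host, layer, and field names below are hypothetical placeholders, not the real OCTOPUS WFS details:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def wfs_getfeature_url(base_url, layer, feature_id):
    """Build a standard WFS GetFeature request URL, as GIS software would."""
    params = {"service": "WFS", "version": "1.1.0", "request": "GetFeature",
              "typeName": layer, "featureID": feature_id}
    return base_url + "?" + urlencode(params)

def flatten_gml(xml_text):
    """Parse a GML/XML feature response into (field, value) pairs so it can
    be displayed in human-readable form instead of on a map."""
    root = ET.fromstring(xml_text)
    return [(el.tag.split("}")[-1], el.text.strip())
            for el in root.iter() if el.text and el.text.strip()]

# Hypothetical example; in real use, fetch url with urllib.request.urlopen.
url = wfs_getfeature_url("https://example.org/wfs", "be10_basins", "be10_basins.42")

sample_response = """<FeatureCollection xmlns:oct="http://example.org/oct">
  <oct:be10_basins>
    <oct:sample_name>ABC-01</oct:sample_name>
    <oct:erosion_rate_mm_kyr>23.5</oct:erosion_rate_mm_kyr>
  </oct:be10_basins>
</FeatureCollection>"""

for field, value in flatten_gml(sample_response):
    print(field, ":", value)
```

The point is just that GML is ordinary XML, so once the feed is fetched, turning it into a browsable text page is a few lines of parsing.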

The second thing this project enables is using the source data in the OCTOPUS database to compute basin-scale erosion rates with a different calculation framework than the one (CAIRN) used to compute the apparent erosion rates that are stored in the OCTOPUS database. The present ICE-D x OCTOPUS implementation, for example, recalculates apparent erosion rates via a highly simplified method that uses a single-point approximation of each basin and the version 3 online erosion rate calculator. It is not clear whether this method is more or less accurate than CAIRN — each involves different simplifications and approximations — but the comparison is useful in evaluating how sensitive an erosion rate estimate is to the choice of scaling method, production rate calibration data, and numerical calculation scheme. If one were to find that erosion rates are correlated with some landscape metric using one erosion-rate calculation method, but not with another, it would be time to figure out why. The current implementation doesn’t include bulk download of OCTOPUS data in v3 input form — at the moment this happens at the individual sample level — but it’s potentially feasible in future. If this is a problem, take heart from Reagan and “never let the things you can’t do stop you from doing what you can.”
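To see why the single-point approximation and a full basin-averaged calculation can differ, consider a toy comparison: averaging a production scaling factor over a basin’s elevation distribution versus evaluating it once at the mean elevation. The exponential altitude scaling, attenuation length, and hypsometry below are crude illustrative stand-ins, not any real scaling method or basin:

```python
import math

def scaling_factor(elev_m):
    """Crude altitude scaling: atmospheric depth from a simple exponential
    standard atmosphere, production scaled with an assumed 150 g/cm2
    attenuation length. Illustrative only; not a real scaling method."""
    depth = 1033.0 * math.exp(-elev_m / 8400.0)  # g/cm2; sea level = 1033
    return math.exp((1033.0 - depth) / 150.0)

# Hypothetical basin hypsometry: equal-area elevation samples (m)
elevations = [500.0, 1500.0, 2500.0, 3500.0]

mean_of_scaling = sum(scaling_factor(z) for z in elevations) / len(elevations)
scaling_at_mean = scaling_factor(sum(elevations) / len(elevations))

# For realistic elevation ranges the scaling factor is convex in elevation,
# so averaging over the hypsometry gives a higher effective production rate
# than the single-point (mean-elevation) approximation.
print(mean_of_scaling, scaling_at_mean)
```

For this toy basin the hypsometric average is noticeably larger than the single-point value, which is exactly the kind of systematic difference the side-by-side recalculation makes visible.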

Reagan also said that “if you’re explaining, you’re losing.” So that’s it for the explanation. Thanks to the OCTOPUS developers for not blocking the WFS feed to this server, and remember that “there is no limit to the amount of good you can do if you don’t care who gets the credit*.”

* Yup, Ronald Reagan again. If only we could go back.

What color is beryllium-10?

October 8, 2018

This posting relates to some planned upgrades and additions to the ICE-D:ANTARCTICA website and the related parts of the online exposure-age calculators that have to do with diagrams, figures, and images. Even now, the display of some data sets via this website can produce a somewhat bewildering array of diagrams, figures, and images that are supposed to present exposure-age data in some way. Examples include the neat-looking but largely unexplained and unintelligible front page of the website:

Age-elevation plots:

Carbon-14 saturation plots:

And, in future, possibly extremely complex data-model comparison plots associated with this project. To make this proliferation of plots a little less intimidating, it seemed like a good time for me and BGC postdoc Perry Spector, who is responsible for the data-model comparison project, to at the very least come up with a standardized color scheme for plotting measurements of different cosmogenic nuclides together on the same images. Hence the need to determine what color beryllium-10 is. And all the other nuclide/mineral pairs, which for practical purposes in this application include He-3 in a variety of mineral targets, Be-10 in quartz, C-14 in quartz, Ne-21 in quartz, Al-26 in quartz, and Cl-36 in many possible targets. So how to do this? Here are some general principles.

  1. In reality, most of these things are either colorless (noble gases) or grayish (metals), with the exception that chlorine is a yellow-green gas if found in immediately-fatal concentrations at normal P-T conditions. So that’s not too helpful.
  2. There should be some kind of sense to the color-nuclide pairing, e.g., colors in spectral order should pertain to nuclides in mass order, or something like that.
  3. Obviously, neon-21 has to be neon (the preceding word, which is probably unreadable because it is rendered in neon colors, says “neon”). This was a critical new insight stemming from our discussion. How was this not obvious before?
  4. The colors shouldn’t look terrible (except for neon-21 as noted above), and should reproduce OK for print and screen use.
  5. Color assignments should be kind to green/red colorblind viewers to the extent possible, which may be simplest to accommodate by using one, but not both, of those colors.
  6. Black probably shouldn’t be used, because it will conflict with other plot elements, grids, etc.

So here’s our attempt to do this:

This scheme does a pretty good job of meeting the goals above, with a couple of additional features:

  1. Helium-3 data are the same tone (gray) but different values (light and dark) for different minerals. This is helpful in de-emphasizing He-3-in-quartz data, which are often uninterpretable due to diffusive loss of helium. If helium-3-in-quartz is one of the nuclide/mineral pairs being plotted, you probably want to pay more attention to whatever the other ones are.
  2. Neon-21 is neon.
  3. No green. No black.
  4. The colors are more or less in spectral order as you go up in mass.
  5. It’s not terribly ugly, with the exception of neon-21.

GMT is being used to make all these plots, so here are the HSV codes for these colors for use in GMT scripts:

Nuclide/mineral   Face color      Outline color
He-3 (qtz)        0-0-0.78        0-0-0.28
He-3 (px/ol)      0-0-0.60        0-0-0.10
Be-10 (qtz)       0-0.70-0.97     0-0.70-0.47
C-14 (qtz)        36-0.85-1.00    36-0.85-0.50
Ne-21 (qtz)       65-0.75-1.00    65-0.75-0.50
Al-26 (qtz)       211-0.72-0.90   211-0.72-0.40
Cl-36 (all)       291-0.48-0.65   291-0.48-0.15

Thus, expect to see this color scheme in anything generated by the ICE-D web server and online exposure age calculators in future.
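Since GMT’s H-S-V triples are a 0-360 hue followed by 0-1 saturation and value, porting this scheme to other plotting tools is just a unit conversion. Here is a sketch using the Python standard-library colorsys module, with face colors copied from the table above (the function and dictionary names are mine):

```python
import colorsys

# Face colors from the table above, as GMT-style (hue, saturation, value)
face_hsv = {
    "He-3 (qtz)":   (0, 0.00, 0.78),
    "He-3 (px/ol)": (0, 0.00, 0.60),
    "Be-10 (qtz)":  (0, 0.70, 0.97),
    "C-14 (qtz)":   (36, 0.85, 1.00),
    "Ne-21 (qtz)":  (65, 0.75, 1.00),
    "Al-26 (qtz)":  (211, 0.72, 0.90),
    "Cl-36 (all)":  (291, 0.48, 0.65),
}

def gmt_hsv_to_rgb(h, s, v):
    """Convert a GMT H-S-V triple to an RGB tuple (components 0-1),
    e.g. for use as a matplotlib color."""
    return colorsys.hsv_to_rgb(h / 360.0, s, v)

face_rgb = {name: gmt_hsv_to_rgb(*hsv) for name, hsv in face_hsv.items()}
```

So, for example, the He-3 grays come out with equal R, G, and B components, and Al-26 comes out blue-dominant, as intended.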

One thing this does leave unaddressed is two-nuclide plots:

Obviously, you know which two nuclides are being plotted by reading the axis labels, so it is not necessary to color-code the nuclides. Instead, what is being color-coded here is the scaling method used to normalize the data to unit production rate. Red is non-time-dependent “St” scaling (Lal, 1991 and Stone, 2000); green is the “Lm” time-dependent adaptation of Lal/Stone scaling from Balco et al. (2008), and blue is the time-dependent “LSDn” scaling developed by Nat Lifton and others. This particular plot shows data from Antarctic samples with very high nuclide concentrations, so the tiny red and green ellipses out to the right in the “forbidden zone” highlight the difference in performance between St/Lm and LSDn scaling for high-elevation Antarctic sites. For these plots the R-G-B order is just inherited from the 2008 online calculators and represents the increasing order of complexity of the scaling methods, but it has the disadvantage of including both red and green. So this may have to be fixed in the near future as well.

Summary: the color scheme is a bit uglier but now standardized and hopefully more sensible.

The “fancy-pants” camelplot

September 25, 2018

This follows up on a previous post that describes the application of the normal kernel density estimate, also sometimes known as a “camel plot,” as a visual aid for displaying cosmogenic-nuclide exposure-age data. This type of diagram has one interesting problem that arises when it is used to display data that cover a wide range of ages. As covered at some length in the previous post, one useful aspect of this diagram for visual communication of data is that, as usually constructed, the Gaussian kernel drawn for each measurement is normalized to unit area. Thus, a more precise measurement will produce a narrower and taller kernel than a less precise one, which draws the viewer’s eye to the more precise measurement and provides a neat and effective way of rapidly communicating the difference in precision between measurements.

The problem with this is that, again as these figures are usually drawn, the normalization to unit area is linear in real units, i.e. units of years, rather than in relative units. Thus, suppose you have two measurements that have very different ages, say 10,000 and 100,000 years, but the same relative precision, say 3%. Even though the relative precision is the same, the uncertainty in the older age is larger in units of years; for this example the uncertainties are 300 years and 3000 years. Thus, the kernel for the older age spans more years, so normalizing to unit area involves dividing by a larger factor. The result is that even though both ages have the same relative precision, the kernel for the older age is shorter than the one for the younger age. Here is what this example looks like:

The ages have the same relative precision, but they don’t look like it. Depending on what you are trying to communicate with this diagram, this might not be what you want: the difference in size signals to the viewer that the older age is less important or less reliable than the younger one.

So one might want to correct this. Here is one way to do it. The equation for a Gaussian probability distribution is:

p(x) =  \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp{\left[ \frac{-\left(x-\mu\right)^{2}}{2\sigma^{2}}\right]}

where \mu is the age, \sigma is the uncertainty, and \sqrt{2\pi\sigma^{2}} is the normalizing constant needed to make each kernel have unit area. So the maximum height p_{max} of each normalized Gaussian, i.e., the value when x = \mu , is:

p_{max} = \frac{1}{\sqrt{2\pi\sigma^{2}}}

To correct our problem, we want this not to vary with the age. A simple way to do this is to assume that all measurements have the same relative uncertainty — for example, a typical analytical uncertainty of 3% — and compute the expected height of the kernels as a function of the age. This expected height p^{*}_{max}(\mu)  is:

p^{*}_{max}(\mu) = \frac{1}{\sqrt{2\pi\left(0.03\mu\right)^{2}}}

Then, if we want to adjust the visuals so that measurements with the same relative precision, but different ages, have the same kernel height, we can just normalize again by this expected height. Then the kernels plotted on the diagram have the form:

p(x) =  \frac{1}{p^{*}_{max}(\mu)}\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp{\left[ \frac{-\left(x-\mu\right)^{2}}{2\sigma^{2}}\right]}

This loses the normalization to unit area, but this isn’t relevant in most applications of this diagram for visual presentation of data. What it does is make it so exposure ages with different ages, but the same relative uncertainty, have the same height. So the example above now looks like this:


Measurements with different relative uncertainties will still have different heights. If the older measurement now has 6% uncertainty:


If the younger has 6% and the older has 3%:


So this formulation gets rid of the problem that two measurements with the same relative precision, but different ages, will appear to have different importance, while retaining the effect that measurements with different relative precision will have different heights. It will have a minimal effect on the appearance of this figure if all the ages in a data set are similar; it only changes the appearance for data sets with a very wide range of ages. It does have the side effect that the area under each kernel scales with the age, so older ages may visually appear very large; this may or may not be a problem depending on what you are trying to do. It doesn’t affect the basic function of the camel diagram, which is communicating whether ages do or don’t agree with each other, something that is difficult to do accurately with a histogram.

Note that this modification is intended to solve a visual-communication problem, and not to actually generate a quantitative probability density estimate. So you would not want to use the results of this procedure for subsequent likelihood estimates or anything of that nature.
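The actual implementation linked below is MATLAB; here is an equivalent Python sketch of just the renormalization step, assuming the 3% reference relative uncertainty used in the example above (the function names and structure are mine, not those of the MATLAB file):

```python
import math

def fancy_pants_kernel(t, age, sigma, ref_rel_unc=0.03):
    """Gaussian kernel for one measurement, renormalized by the peak height
    expected for a measurement of this age with the reference relative
    uncertainty, so equal relative precision gives equal peak height.
    A visual device, not a probability density."""
    p = (math.exp(-((t - age) ** 2) / (2.0 * sigma ** 2))
         / math.sqrt(2.0 * math.pi * sigma ** 2))
    p_star_max = 1.0 / math.sqrt(2.0 * math.pi * (ref_rel_unc * age) ** 2)
    return p / p_star_max

def camel_curve(t, ages, sigmas):
    """Sum the renormalized kernels for all measurements at time t."""
    return sum(fancy_pants_kernel(t, a, s) for a, s in zip(ages, sigmas))

# Two ages with the same 3% relative precision now peak at the same height:
h_young = fancy_pants_kernel(10000.0, 10000.0, 300.0)
h_old = fancy_pants_kernel(100000.0, 100000.0, 3000.0)
print(h_young, h_old)  # both ~1.0
```

The peak height of each kernel works out to 0.03·μ/σ, so a 3% measurement peaks at 1 and a 6% measurement peaks at 0.5, regardless of age, exactly as in the figures above.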

The MATLAB code to do this is here:

fancy_pants_camelplot.m

The name is terrible, but it does give the idea that this is both kind of complicated and also a little bit fake at the same time, which is probably not a bad description and may help as a reminder that it’s designed for a visual effect and not a quantitative probability estimate. Other options might be pretentious_camelplot.m, uppity_camelplot.m, or too_big_for_britches_camelplot.m.

Where’s the reference production rate?

September 6, 2018

Recently I received the following question in an email:

I used the CRONUS earth calculator v2.3 to calculate the Be-10 exposure ages of the samples from [some location], but I did not find the numerical value of Be-10 reference production rate used in CRONUS v2.3. On the webpage it only said it is “derived from the primary calibration data set of Borchers and others (2016)”. Can you kindly tell me the value ? Thank you very much.

This also comes up in a lot of paper reviews. Here is an example. The text of the paper is as follows:

Reference Be-10 production rates were determined using the CRONUS-Earth “primary” calibration dataset (Borchers et al., 2016).

And the reviewer said:

…regardless of citing the source, for completeness the SLHL 10Be prod rate should be listed ( with its error)…

The purpose of this posting is to explain why one should state the production rate calibration data set used, but one should not state a value of the reference production rate. As usual, it’s a long posting, and it’s tedious because I’m going to try to do this just in words and not in math, but getting this right is important in understanding how exposure dating works and accurately communicating it in papers.

To begin with, I’ll review the overall structure of an exposure-age calculation one more time. To compute a cosmogenic-nuclide exposure age, you need three things. One, a scaling method that describes how production rates vary with location and time. Two, a set of calibration data that consists of nuclide concentrations measured at sites whose exposure age is already known from a different, independent, dating method such as carbon-14 or argon-argon, such that we know what the true average production rate at those sites has been during the period they were exposed. Three, the nuclide concentration measured at the unknown-age site that we wish to date. Having these three things, one takes the following steps.

  1. Fit the scaling method to the calibration data set. In most cases, the scaling method generates a nondimensional “scaling factor” describing how the average production rate for a particular place and age is related to the production rate at some reference location and time. So for each sample in the calibration data set, the scaling method predicts a scaling factor for that place and exposure age. We then perform a one-parameter fitting exercise in which we choose a value of the “reference production rate” — the production rate at the reference location that is used in that scaling method — that, when multiplied by the scaling factors for all the calibration sites, best duplicates the actual production rates that we measured at those sites. A simple way to think about this is to consider a particular site: at this site we divide the true production rate that we measured by the scaling factor predicted by the scaling method; the result of this calculation is an estimate of the production rate at the reference location. If we have a number of different calibration sites, we have a number of different estimates of the reference production rate, and (given certain assumptions) the value of the reference production rate that best fits the entire calibration data set is just the average of the estimates from each site.
  2. Now that we have an estimate of the reference production rate, we take the scaling factor predicted for the unknown-age site and multiply it by the best-fitting reference production rate in step (1). Then we use this production rate to compute the exposure age from the measured nuclide concentration. Note that for a scaling method in which the production rate varies in time, this calculation must be iterated rather than done only once, but the principle is the same.
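To make these two steps concrete, here is a minimal Python sketch of the simplest possible case: a non-time-dependent scaling method, no erosion, and synthetic numbers chosen purely for illustration (none of these values are real calibration data):

```python
import math

LAMBDA_BE10 = math.log(2) / 1.387e6  # Be-10 decay constant (1/yr)

def fit_reference_rate(scaling_factors, measured_site_rates):
    """Step 1: each calibration site's measured production rate divided by
    its predicted scaling factor is one estimate of the reference production
    rate; under simple assumptions the best-fitting value is their mean."""
    estimates = [p / s for p, s in zip(measured_site_rates, scaling_factors)]
    return sum(estimates) / len(estimates)

def exposure_age(conc, scaling_factor, ref_rate, lam=LAMBDA_BE10):
    """Step 2: scale the fitted reference rate to the unknown-age site, then
    invert N = (P / lam) * (1 - exp(-lam * t)) for t. A time-dependent
    scaling method would iterate this instead of solving it once."""
    p_site = scaling_factor * ref_rate
    return -math.log(1.0 - conc * lam / p_site) / lam

# Synthetic calibration data: three sites, true reference rate 4.0 atoms/g/yr
factors = [0.9, 2.5, 6.0]
site_rates = [s * 4.0 for s in factors]
p_ref = fit_reference_rate(factors, site_rates)

# Unknown-age site: scaling factor 5.0, concentration implying 20 ka exposure
n_meas = (5.0 * p_ref / LAMBDA_BE10) * (1.0 - math.exp(-LAMBDA_BE10 * 20000.0))
print(p_ref, exposure_age(n_meas, 5.0, p_ref))
```

Note that p_ref is never measured; it only exists as the intermediate fitting parameter connecting step (1) to step (2), which is the whole point of the discussion that follows.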

There are several important aspects of this procedure.

One, nowhere in this procedure have we ever measured the reference production rate. The actual measurements that we made are the average production rates at the calibration sites. None of the calibration sites are at the reference location (except by accident). For some scaling methods, the reference location is defined such that it does not exist at any real place and time. Thus, the “reference production rate” is not a measurement. It is a parameter estimate that is the result of a model-fitting exercise, and what the parameter actually means is chosen by convenience and not for any physically meaningful reason.

Two, different scaling methods have different reference locations. Historically, for scaling methods (primarily that of Lal, 1991) in which production rates are assumed to be constant over time, the reference location was defined to be at “sea level and high latitude.” What this means is kind of vague, but usually it is assumed to mean something like (i) an atmospheric pressure of 1013.25 hPa (the defined international standard “sea level pressure,” although this is not the actual atmospheric pressure at sea level anywhere, except by accident), and (ii) geomagnetic latitude above about 60°, where the variation in magnetic field strength with latitude becomes negligible. There is no reason the reference location has to be at “SLHL.” It could be on the lawn of the Mar-a-Lago Club in Palm Beach, Florida, or the summit of Mont Blanc. It could be anywhere. For more recent scaling methods, in particular that of Lifton and others (“LSD” or “LSDn”) that is currently in common use, the production rate is predicted to change over time with changes in the magnetic field strength and solar output. This makes it necessary to choose both a reference location and reference values of magnetic and solar properties for the reference location. For the LSDn method, this is typically taken to be 1013.25 hPa, 0 GV, and a zero value of a parameter describing solar modulation. This is a different reference condition from the “SLHL” reference condition historically used for non-time-dependent scaling. Thus, even if we used the same calibration data set and both scaling methods were perfectly accurate, the best-fitting “reference production rates” for the two methods would be different, because they pertain to different reference conditions. They could not be compared to each other, because they are not estimates of the same thing. They are estimates of different things.

Three, different implementations of the same scaling method will yield different best-fitting estimates of the “reference production rate” for that scaling method. All the commonly used scaling methods are numerically complicated, and different people have written different computer codes that use different numerical approximations, integration methods, interpolation methods, etc. to perform the calculations. In general, if you take one calibration data set and two implementations of the same scaling method and you perform both steps (i) and (ii) above, you will get the same exposure age for the unknown-age site. However, the intermediate value you have computed — the best-fitting value of your fitting parameter — will probably not be the same (except by accident). Thus, if you perform step (1) with one computer code, but then take the resulting fitted value of the reference production rate and perform step (2) with a different computer code, you will probably get the wrong exposure age for the unknown-age site.

So what does this mean for how you should describe how exposure-age calculations were carried out?

Historically, when most exposure-age calculations were carried out using the non-time-dependent scaling method of Lal (1991) or the slightly different implementation of the same method by Stone (2000), it was common in papers for authors to report the value of the reference production rate that they used to compute the exposure ages. Because the calculations involved in these scaling methods are relatively simple, it was possible for readers to easily produce their own implementation of the scaling method, take the value of the reference production rate stated in the paper, and closely reproduce the authors’ calculations.

With more complex, time-dependent, scaling methods in use at present, this is much more difficult. Software implementations of the LSDn scaling method involve thousands of lines of computer code and extensive background data sets describing paleomagnetic and solar variability as well as particle interaction cross-sections, all of which are updated periodically. All such implementations are slightly different. As noted above, if you take a fitted reference production rate derived, as in step (1) above, using one implementation, and then perform step (2) with a different implementation, you will generally get the wrong answer. In addition, different implementations (e.g., “CRONUSCalc” and “the online exposure age calculator formerly known as the CRONUS-Earth online exposure age calculator”) define different reference states for some scaling methods (or don’t define a reference state at all and use a different fitting parameter entirely), so the best-fitting values of the “reference production rates” are not even estimates of the same thing.

What this means for documenting exposure-age calculations in papers or elsewhere is that reporting the value of the reference production rate that you used doesn’t help readers to reproduce your calculations. Although it is certainly possible that your readers can obtain the exact computer code that you used, it is, in general, difficult due to the continued evolution and improvement of both the scaling methods themselves and the background data. Worse, reporting a value for the reference production rate has two negative effects. First, it gives readers the impression that the value of the reference production rate is a measured physical constant. As discussed above, this is not true. Second, it creates a situation where readers who want to replicate your calculations can only perform step (2) above; they cannot perform step (1). Thus, they cannot start from the same actual measurements that went into your calculations and therefore cannot correctly and consistently reproduce your calculations.

What does help readers to correctly and consistently reproduce your calculations is to report what calibration data set you used. Remember, the calibration data set consists of actual measurements and observations. Unlike the reference production rate, these measurements and observations are the same no matter what scaling method you are using. They are the same no matter what numerical implementation of a particular scaling method you are using. They are the same no matter what the “reference location” is. It is easy to obtain these measurements; all published calibration data sets are publicly accessible in a standardized form at this website. If you tell readers what calibration data set you used, they can reproduce your exposure age calculations with any implementation of any scaling method in an internally consistent way that begins with the same source measurements. They can perform both steps (1) and (2) above, BOTH of which are necessary for an internally consistent exposure-age calculation, instead of only step (2). Although it may seem difficult or intimidating for readers to perform both a production rate calibration and an unknown-age calculation, this task is simple with existing online exposure age calculation systems: CREp allows a choice of a wide variety of published calibration data sets; the online calculator formerly known as the CRONUS-Earth online calculator allows production rate calibration with any calibration data set.

A final point is that, again historically and dating from times where the non-time-dependent Lal scaling was nearly universally used, readers who were familiar with the cosmogenic-nuclide literature could generally figure out from the value of the reference production rate how the ages in that paper would compare to ages in a different paper. For example, prior to about 2000, it was generally thought that the value of the reference production rate appropriate to the implementation of the Lal scaling method that was in wide use at that time was about 5 atoms/g/yr; after 2000 it became clear that the correct value was closer to 4. Of course, this meant that previously measured exposure ages were, in fact, much older than had been stated in the older publications, and readers got in the habit of checking what reference production rate had been used as a means of comparing study results to each other: a moraine given as 12 ka in a study that used 5 atoms/g/yr would be equivalent to 15 ka in a study that used 4. This period of time in the early 2000’s during which most papers used the same scaling method, but different reference production rates, conditioned readers to expect to see the value of the reference production rate somewhere in the methods section. The situation is different now: differences between time-dependent and non-time-dependent scaling, as well as, often, differences between implementations of the same method, are commonly more important than differences in estimated values for reference production rates, and in addition these values can’t even be compared for different implementations. It’s still important for readers to get an idea of the effect of different assumptions on exposure ages reported in a paper, but readers can’t get that information any more from the value of the reference production rate by itself. If you want to tell the reader how differences in production rate calibration affect the end results, then tell them! 
For example, “if we use calibration data set A, my moraine is 12 ka; if we use calibration data set B, it’s 13 ka.” This is the information the reader wants, so give it to the reader directly, not indirectly through a “reference production rate.” In addition, of course, the reader can easily figure this out for themselves using various online exposure age calculators.

To summarize,

  1. Computing an exposure age is a two-step process that starts with a calibration data set. The calibration data set consists of actual measurements. The “reference production rate” is an inconsistently defined, and never actually measured, parameter that is estimated halfway through this process. Its value is completely dependent on the scaling method used and the specific implementation of that scaling method; it is not a measured physical constant.
  2. Emphasizing the “reference production rate” in data reporting or in a description of calculations does not, in fact, help readers to correctly reproduce the calculations in a paper. In fact, it is more likely to lead to internally inconsistent, and likely misleading, reproduction of the calculations.
  3. In contrast, reporting the calibration data set that was used does allow the reader to reproduce calculations in an internally consistent way.

To say this a slightly different way, there are two independent ingredients that go into computing an exposure age: a calibration data set and a particular implementation of a particular scaling method. The reader needs to know what these are. The reference production rate isn’t one of these things, and the reader doesn’t need to know it.