
How well do we actually know sample elevations in Antarctica?

December 8, 2019

This is a guest post about verifying the elevations of samples collected in Antarctica for exposure dating purposes using a high-resolution DEM of the continent. Basically, what I’m doing is comparing the elevations reported in the literature for all samples in the ICE-D:ANTARCTICA database to the elevations of those same samples calculated from the high-resolution Reference Elevation Model of Antarctica (REMA). This is kind of a long post, but the main points are that (i) DEM-derived elevations agree reasonably well with reported elevations for most samples, but a large number of samples have misfits of 10s to 100s of meters; (ii) there is no obvious indication that folks are reporting elevations relative to the ellipsoid rather than the geoid, which is a good thing; (iii) misfits between reported and REMA-derived elevations decrease considerably starting in the early 2000s, presumably because portable GPS units had become widely available by then; and (iv) large misfits of 10s to 100s of meters occur in both the pre-GPS era and the GPS era, and, in many cases, appear to be due to folks reporting incorrect or imprecise latitude and longitude coordinates.

I’m doing this exercise for two main reasons. First, elevation is the primary control on cosmogenic-nuclide production rates – production rates change by around 1% for every 10 m change in elevation – and so we’d really like to know whether elevations in the literature are actually correct. It’s reasonable to be a skeptic here because almost no one reports how they determined sample elevations, what their uncertainties are, and whether elevation uncertainty has been propagated into exposure-age uncertainty (probably not). In Antarctica, sample elevation is important for a second reason, which is that, provided that you also know the elevation of the local ice surface, you can often say something about how much ice thickness may have changed in the past.

I’m using REMA for this exercise, which is a DEM constructed from high-resolution stereo imagery shot by commercial satellites. Note that I’ve re-referenced REMA from the WGS84 ellipsoid to the EGM96 geoid. Here’s how reported elevations compare to REMA elevations.
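
First, for anyone who wants to reproduce this comparison, here is a rough sketch in Python of the two operations involved: sampling REMA at a reported sample location and re-referencing the result from the WGS84 ellipsoid to the EGM96 geoid. The rasterio/pyproj calls and the EGM96 lookup helper are one way to do it, not necessarily the exact workflow I used, and the helper and file name are placeholders.

    # Sketch: sample REMA at a reported lat/lon and convert from ellipsoid to geoid reference.
    # Assumes a local REMA GeoTIFF ("rema_8m.tif") and some source of EGM96 geoid
    # undulations; both are placeholders.
    import rasterio
    from pyproj import Transformer

    def egm96_undulation(lon, lat):
        """Placeholder: EGM96 geoid height above the WGS84 ellipsoid (m) at (lon, lat)."""
        raise NotImplementedError("look this up from an EGM96 grid")

    # REMA is distributed in Antarctic Polar Stereographic coordinates (EPSG:3031)
    to_rema = Transformer.from_crs("EPSG:4326", "EPSG:3031", always_xy=True)

    def rema_elevation_geoid(lon, lat, dem_path="rema_8m.tif"):
        """REMA elevation at (lon, lat), re-referenced from the ellipsoid to the geoid."""
        x, y = to_rema.transform(lon, lat)
        with rasterio.open(dem_path) as dem:
            h_ellipsoid = next(dem.sample([(x, y)]))[0]
        # orthometric (geoid-referenced) height = ellipsoid height - geoid undulation
        return h_ellipsoid - egm96_undulation(lon, lat)

    # misfit for one sample: z_reported - rema_elevation_geoid(lon, lat)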

There’s actually a surprising amount of scatter. Given uncertainties in both REMA and in handheld GPS units (discussed below), I think about 10 m is as small as we can realistically hope the misfit to be for a given sample. However, of the 3300 samples in the database, only 61% have absolute misfits that are 10 m or less. 24% of samples have misfits greater than 20 m, and 5% have misfits that exceed 100 m. That’s big. An elevation error of 100 m would translate to a production rate error of roughly 10%.

At this point it’s reasonable to ask how much elevation error we expect from both field measurements and from REMA. Reported REMA elevation errors are generally less than 1 m for the open ice sheet but, not surprisingly, they are somewhat higher in rough terrain. The version of REMA that I’m using has 8 m horizontal resolution (a 2 m product is available, but the jump from 8 m to 2 m increases the hard-drive space needed for this exercise by a factor of 16). REMA errors for all samples (represented by vertical lines in the left panel above) have an average value of 2 m and a standard deviation of 7 m. So, I think we can safely say that, in almost all cases, the misfits of 10s to 100s of meters are not due to errors in REMA. For field-based elevation measurements, accuracy really depends on whether samples were collected in the pre-GPS era or the GPS era, as will be demonstrated later in this post. In the pre-GPS era, sample elevations were presumably estimated using topographic maps or barometric methods, and I would guess that these estimates have uncertainties of at least tens of meters in most cases. Since roughly the year 2000, most sample elevations have probably been determined with handheld GPS units. In this blog post, Greg compared measurements he made with an inexpensive handheld GPS unit to measurements with a fancy geodetic-grade one. He found normally distributed elevation residuals with a standard deviation of 7 m. So, to summarize, I think it’s safe to say that neither field GPS measurement error nor REMA error explains the large misfits of 10s to 100s of meters that we see.

Before moving on, it’s worth pointing out that the histogram in the right panel above is skewed significantly to the right. In other words, reported elevations are, on average, higher than REMA-derived elevations. We’ll come back to this later in the post.

Now let’s look at the spatial distribution of the misfit. In the figures below each 50 km box is colored by the average (upper panel) and standard deviation (lower panel) of the misfit of samples located in that box.
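
The gridding here is nothing fancy; a sketch of the sort of binned statistic involved (not necessarily the exact code used to make the figures, and with illustrative variable names) is:

    # Bin samples into 50 km boxes in polar stereographic coordinates and compute the
    # mean and standard deviation of the misfit in each box.
    import numpy as np
    from scipy.stats import binned_statistic_2d

    def gridded_misfit(x, y, misfit, box_km=50.0):
        """x, y, misfit: 1-D arrays (meters); returns per-box mean and std of misfit."""
        box = box_km * 1000.0
        xedges = np.arange(x.min(), x.max() + box, box)
        yedges = np.arange(y.min(), y.max() + box, box)
        mean, _, _, _ = binned_statistic_2d(x, y, misfit, statistic="mean", bins=[xedges, yedges])
        std, _, _, _ = binned_statistic_2d(x, y, misfit, statistic="std", bins=[xedges, yedges])
        return mean, std, xedges, yedges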

Interestingly, many of the blue squares in the upper panel are sites where either Mike Bentley, Jo Johnson, or John Stone (and academic descendants) have worked (e.g. Southern TAMs, Amundsen Sea Coast, Antarctic Peninsula, Pensacola Mtns., Lassiter Coast). You can also see that some of the largest misfits are clustered in certain areas such as the Shackleton Range, located east of the Filchner–Ronne Ice Shelf, and sites in Dronning Maud Land farther east along the coast. Note, however, that presenting the data in this way masks how many samples are actually contributing to each box. For example, in the case of the Shackleton Range, the three red boxes only contain five samples.

There are a few plausible factors that could be contributing to the large misfits. One possibility is that folks are reporting elevations relative to the ellipsoid rather than to the geoid. In Antarctica, depending on where you are, the difference between the ellipsoid and the geoid can be up to about 60 m. So, while this can’t explain the largest misfits, it is important to know whether or not sample elevations are being reported in a consistent fashion. The plot below shows the relationship between misfit and ellipsoid-geoid correction for all samples.

The distribution is skewed to the right (as noted above), but there is no obvious indication that folks are reporting elevations referenced to the ellipsoid rather than the geoid, which is a good thing.

A second possible explanation is that samples with large misfits were collected in the pre-GPS era. The figure below shows that this effect does, in fact, explain a lot of the misfit. In the lower panel, the misfit distribution for each year is represented by a box plot where the red line shows the median misfit, the bottom and top of the box represent the 25th and 75th percentiles, respectively, and the entire range of misfit is represented by the dashed line. We don’t actually know the collection date for all samples (or it is at least not compiled in the ICE-D:ANTARCTICA database), and, for those samples, I’m using the date of the earliest publication in which they are mentioned, which probably lags the collection date by a few years on average.

What’s cool about the lower panel is that you can actually see when folks started using precise GPS units in the field. Starting in the early 2000s, the average misfit decreases to 6 m, which is about what we would expect given the REMA errors and handheld GPS errors discussed above. 2019 is kind of an outlier, but as you can see in the upper panel, there are very few samples associated with that year (presumably more samples were actually collected in 2019 but have just not been published yet). Note that although GPS units were occasionally used in Antarctica to determine sample elevations at least as early as 1995, GPS signals for civilian use were intentionally degraded by the U.S. government until 2000 under a program known as “selective availability”, during which GPS-determined positions had uncertainties of tens of meters.

It’s important to mention that the large difference between pre-GPS and GPS era misfits doesn’t necessarily mean that the elevation of samples collected in the pre-GPS era are wrong. Because REMA is queried at reported latitude and longitude coordinates, any error in these coordinates translates into elevation error in areas where there is significant vertical relief. So, it’s possible that the reported pre-GPS era elevations are actually correct, but that the samples are incorrectly located. This actually appears to be what’s going on for at least some samples.

Here’s an example of this effect from Dronning Maud Land. These samples were collected by a German expedition in 1995-1996. The samples are reported to have been collected from either the nunatak in the lower part of the image or from the nearby debris fields; however, as you can see, they appear to be located several kilometers away on the ice sheet.

Here’s another example of samples collected in 1999 from Migmatite Ridge in Marie Byrd Land. Five of the samples appear to be located hundreds of meters to more than a kilometer away from outcropping rock.

Interestingly, the problem of mis-located samples is not actually limited to samples collected in the pre-GPS era. Below is an example from the west side of the Antarctic Peninsula, where samples were collected from Overton Peak on Rothschild Island. Despite the fact that the collectors of these samples were using handheld GPS units in the field, six of their samples appear to have been collected in the middle of an ice shelf 15 km away from Overton Peak. In fact, these misplaced samples account for all of the samples with misfits in excess of 100 m from the year 2012.

Although the examples above show obviously misplaced samples, for most samples with large misfits it is not clear whether (i) the reported latitude and longitude coordinates are incorrect/imprecise, (ii) the reported elevations are incorrect, or (iii) both are wrong. Samples with large misfits commonly map to areas of outcropping rock, and, in these cases, it’s difficult to tell what the problem is. However, there is actually another piece of evidence in support of the idea that many of the large misfits are due to bad latitude and longitude coordinates. Recall the right panel in the first figure I showed. The distribution is skewed significantly to the right, which means that, on average, reported elevations are higher than REMA-derived elevations. Samples for exposure-dating purposes are more likely to be collected from topographic highs (summits, ridges, etc.) than from topographic depressions. That means that horizontal position error for a given sample will more often than not translate to a REMA-derived elevation that is lower than the sample’s true elevation, which is consistent with the skew in the distribution.

What’s interesting is that, if we assume that horizontal (rather than vertical) position error is the source of the bulk of the misfit, then samples are actually misplaced horizontally by much more than the 5-10 m typical of the accuracy of handheld GPS units. For example, if we look at all of the samples with misfits greater than 25 m, only 5% of these have reported elevations that can be found in REMA within 50 m of the reported latitude and longitude. For samples with misfits greater than 100 m, this value decreases to 0.6%.
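
In case it is not clear what I mean by that test, here is a sketch of it. The 10 m vertical tolerance is an arbitrary choice for illustration, not necessarily the exact criterion behind the numbers above, and the variable names are placeholders.

    # For a sample with a large misfit, ask whether the reported elevation can be found
    # in REMA within some radius of the reported position. rema_window and
    # dist_from_sample are arrays describing a small DEM window around the sample.
    import numpy as np

    def elevation_found_nearby(z_reported, rema_window, dist_from_sample,
                               radius_m=50.0, tol_m=10.0):
        """True if any REMA cell within radius_m matches z_reported to within tol_m."""
        nearby = dist_from_sample <= radius_m
        return bool(np.any(np.abs(rema_window[nearby] - z_reported) <= tol_m))

    # fraction of large-misfit samples passing the test:
    # np.mean([elevation_found_nearby(z, w, d) for z, w, d in large_misfit_samples])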

So, to conclude, what have we learned from this exercise? Well, for one thing, the advent of the portable GPS unit was a big gift to cosmogenic-nuclide geochemists. However, even equipped with this technology, in some cases folks have still managed to report sample locations that are demonstrably incorrect in the vertical and/or the horizontal. Now that we are also equipped with an extremely high-resolution DEM of the entire continent, it’s probably a good idea to double check your elevation, latitude, and longitude measurements once you get back from the field.

Why are there no exposure-age data from glaciated Mexican volcanoes?

November 8, 2019

In a previous post, I used this diagram, which is a fairly famous (at least among glacial geologists) representation of the changes in the extent of mountain glaciers worldwide between the last glacial maximum (LGM) about 20,000 years ago and the present time, and is generally thought to have first been put together by the late glacial geologist Steve Porter.

This particular version appears in an article in Scientific American by Wally Broecker and George Denton. It shows a topographic profile down the American Cordillera that forms the eastern edge of the Pacific Ocean, and highlights the difference between the equilibrium line altitudes of modern glaciers (red) and glaciers at the LGM (blue). I was using the diagram to highlight the impressive amount of cosmogenic-nuclide exposure age data that have been collected from glacial deposits worldwide and are tabulated in the ICE-D:ALPINE database; the blue dots added below show the locations of exposure-dated alpine glacial landforms in the mountains of North and South America as well as the same sector of Antarctica.

There are a lot of measurements here. Researchers who collect these data have focused for many years on visiting glaciers across the entire range of latitude, with the goal of learning about latitudinal variations in temperature and heat transport during the LGM and major late-glacial climate changes like the Younger Dryas and the Antarctic Cold Reversal. Nearly all the areas that Denton and Broecker specifically highlighted in this figure, as areas where prominent LGM-to-present glacial deposits could be found, have been visited and thoroughly studied.

With one exception. The “Mexican Volcanoes.” In fact, there is a pretty big gap between southern California (the southern end of the Sierra Nevada in the image) and Colombia (beneath the “America” in “Central America”). There are at least a few measurements from this range of latitude that I know about that are not at the moment included in the database (mainly because of growing pains having to do with dynamically calculating chlorine-36 exposure ages); they are from Costa Rica and described in this dissertation. However, the reason that “Mexican Volcanoes” appear as a specific callout in this figure is that there are glacial deposits there, and they were fairly well studied in the 1980s by Klaus Heine and others, as described, for example, in this paper. But no one seems to have been there for exposure-dating purposes. Get to it, people! This is the only significant gap left in the Porter transect. It is easy to see why there are very few exposure ages on glacial deposits from, say, trans-Baikalia or interior Yukon, but the Mexican volcano sites are near major cities that you can reach, in your own car, in only a few days’ drive from major cosmogenic-nuclide labs in the US. According to the Heine papers, there are enough radiocarbon dates on some of the moraines that they could potentially be used as production rate calibration sites, which could be useful. And if you are interested in tropical glaciers, the radiocarbon dates in that paper indicate that there are late-glacial moraines and ~30-35 ka moraines, but possibly not any standard-issue 18-25-ka-LGM moraines. Is this really true? Are there boulders enough to figure it out? You could find out.


ccSTP is not an SI unit for a reason. Use moles instead.

October 31, 2019

This post is about the unit of “ccSTP,” or cubic-centimeters-at-standard-temperature-and-pressure, that is used to measure amounts of gas. This subject has been on my list to write about for a while, but I am ashamed to say that even though this posting has been sitting around in draft form for a couple of years, I couldn’t be bothered to actually finish it and push the ‘publish’ button until Marissa Tremblay brought up the subject. Really, there was no collusion:  convergent evolution actually happens.

The important difference is, I get more than 280 characters to talk about it, so I can begin at the beginning. If you are not like me and you began your career as a noble gas geochemist when you were young and impressionable in graduate school, instead of backing into it later as an already-opinionated mid-career cosmogenic-nuclide geochemist, you have absorbed the fact that these units are used in a major fraction of the scientific literature having to do with noble gas geochemistry (which is voluminous, that is, the literature is, not the gas). Your brain is hard-wired to think in ccSTP. Or, even worse, “ncc” or “pcc”, which are hopelessly uninformative shorthand in which the nano and pico are added and the STP left off. OK, I get this — as gas measurement technology initially developed, measurements of pressure, volume, and temperature were the data actually used to determine how much of a gas was present or to prepare a standard for mass-spectrometric measurements, and folks wanted to keep the units of subsequent measurements linked to the source data. So, being able to use this unit fluently indicates that you are well-grounded in the history and fundamentals of your craft. There is some analogy to the secret handshake.

If you are like me, however, you think these units are terrible, for the following reasons.

  1. I have no idea what “standard temperature and pressure” actually is. Believe it or not, Wikipedia lists eighteen different definitions of standard reference conditions used by various entities and organizations. Before sitting down to write this blog entry, I was pretty sure that most uses of STP in the noble gas literature implied something resembling 1 atmosphere at either 0° or 25° C, probably 0°. But now, having read the Wikipedia entry, I am no longer so sure. In addition, Wikipedia adds the rather disturbing information that in 1982, IUPAC changed their definition of the “P” part of STP from 101.325 kPa (1 atm) to 100 kPa (almost 1 atm). Did post-1982 geochemists know this? I probably wouldn’t have noticed for years.
  2. In cosmogenic-nuclide geochemistry, we think about pretty much everything in units of atoms. Concentrations in atoms per gram, production rates in atoms per gram per year, etc. So, in general, any measurements in the cosmogenic-noble-gas literature that are in ccSTP have to be converted to atoms or moles before one can do anything useful with them. However, the conversion factor to do this is deeply obscure. In fact, it is a testament to the profound obscurity of this unit that Google does not know how to do this. If I type “1 dram in jiggers” into a Google search box, I get the correct answer. “One furlong in cubits.” No problem. But “1 ccSTP in mol?” Total blank. It does not make me feel good about the societal relevance of noble gas geochemistry if no one at Google has even bothered to figure this out. On the other hand, perhaps they read the Wikipedia entry and they were too confused to proceed.

Hence, the rest of this entry will do two things. First, try to figure out what the ccSTP/mol/atom conversion factors are, so that even if we don’t really know what T and P were used in a particular paper, we ought to be able to get fairly close to the correct number of atoms. Second, encourage people to stop using these units. Just say no. Use moles or atoms, which at least have only one agreed-upon definition, instead.

So, conversion factors. First, to help future victims of typing ‘1 ccSTP in mol’ into Google, let’s include all the possible phrases one might want to search for. ccSTP in mol. ccSTP in atoms. nccSTP in mol. Convert ncc to moles. Convert ccSTP to moles. Convert ccSTP to atoms. That should give the Google index-bot a bit of a head start in finding this posting.

Even if Google doesn’t know about them, some references do give a conversion factor from ccSTP to mol. For example, we can consult page 6 of a fairly standard reference, the 2002 edition of Noble Gas Geochemistry by Ozima and Podosek. This page contains a table of constants that gives the conversion as 1 ccSTP = 4.465e-5 mol. OK, that is some progress, but what is STP? There are some clues in the table on this page, although it is not clear that they make things any clearer. First of all, the table reminds us that 1 atmosphere is 101.325 kPa, which makes one think that even though we are writing in 2002, we might be thinking about the pre-1982 IUPAC STP that used 1 atm instead of 100 kPa. Then the table also tabulates RT (the ideal gas constant times the temperature) in units of cm³ atm mol⁻¹, which also makes one think that we are using 1 atm as standard P. But the table gives RT for both 0° C and 25° C, which is rather disturbing. So, OK, if PV = nRT, P = 1 atm, V = 1 cm³, and RT = 2.24141e4 cm³ atm mol⁻¹ (this is the value tabulated for 0° C), then n = 4.4615e-5 mol, which is close to, but not exactly, what is in the table. The difference between 4.465e-5 and 4.4615e-5 could be rounding error or a typo, but then there have to be at least two typos, because it is further stated to be equal to 2.688e19 molecules, which is, in fact, 4.465e-5 * 6.022e23. However, it does appear that these authors were, in fact, assuming 0° C and 1 atm; if we use the 25° C value in the table (2.44655e4 cm³ atm mol⁻¹), we get n = 4.0874e-5 mol, which differs much more from the tabulated value. To the extent that this book is a good representation of the noble gas literature generally, at least we can be reasonably confident that most folks think STP is 0° C at somewhere near 1 atmosphere (even if it remains rather mysterious what the value for 25° C is doing in this table at all).

Of course, lacking an early indoctrination in the arcana of noble gas geochemistry, I may just never have been informed of a global conspiracy to stick with the pre-1982 definition, which would be almost exactly analogous to the global conspiracy among radiocarbon-dating specialists to retain the obsolete Libby half-life for data reporting purposes. However, one would think that Ozima and Podosek would have mentioned it. Regardless, let’s assume, even though we are probably wrong for a lot of papers in the existing literature, that we are talking about the current IUPAC STP, which is 100 kPa and 0° C (273.15 K). We obtain the ideal gas constant from a different source (Wikipedia again) as 8.314462 m³ Pa K⁻¹ mol⁻¹. So, 1 cm³ is 1e-6 m³, 100 kPa is 1e5 Pa, and n = PV/RT = (1e5 * 1e-6)/(8.314462 * 273.15) = 4.4032e-5 mol, which, again, is fairly close to but a couple of percent different from the value tabulated in Ozima and Podosek. Apparently this unit is not unlike the biblical cubit.

Although the assumption that most authors were paying attention to the IUPAC in 1982 may be too optimistic, I would guess that this value is probably the most sensible thing to assume, so:

1 ccSTP = 4.4032e-5 mol = 2.6516e19 atoms

1 nccSTP = 4.4032e-14 mol = 2.6516e10 atoms

1 pccSTP = 4.4032e-17 mol = 2.6516e7 atoms = 26.516 million atoms (this is actually a useful amount for cosmogenic He-3 measurements)
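
If you would rather have the computer do the arithmetic, here is a minimal sketch that reproduces these conversion factors from the ideal gas law, using the constants discussed above; the choice of STP definition is the whole point.

    # n = PV/RT with V = 1 cm^3, for two possible definitions of "STP".
    R = 8.314462      # gas constant, m^3 Pa K^-1 mol^-1
    N_A = 6.022e23    # Avogadro's number
    V = 1e-6          # 1 cm^3 in m^3

    def ccSTP_to_mol(P_pa, T_k):
        """Moles of ideal gas in 1 cm^3 at pressure P_pa (Pa) and temperature T_k (K)."""
        return P_pa * V / (R * T_k)

    print(ccSTP_to_mol(100e3, 273.15))        # current IUPAC STP: ~4.4032e-5 mol
    print(ccSTP_to_mol(101325.0, 273.15))     # pre-1982 STP (1 atm, 0 C): ~4.4615e-5 mol
    print(ccSTP_to_mol(100e3, 273.15) * N_A)  # ~2.6516e19 atoms per ccSTP (noble gas)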

These should at least get you within a few percent of the true value in most papers in the literature. If someone disagrees with this, or notices that I have made an error in the above, please let me know. And for heaven’s sake, if you are talking or writing about cosmogenic noble gases, use units of atoms or moles. You know what the pressure and temperature were when you made your gas standards, and we don’t, so please use this information to do the conversion to moles or atoms, yourself, now, before it becomes impossible later.


Isostatic rebound corrections are still on a squishy footing

September 18, 2019

This blog post covers the subject of whether or not you should account for isostatic elevation changes in production rate calculations, comes to the conclusion that we don’t really know the answer, highlights one cautionary tale that is probably relevant, and suggests some ways to fix the situation.

This subject is important because production rates vary with atmospheric pressure. Higher elevation, less atmosphere, higher production rate. In exposure-dating, because it’s not possible to measure the mean atmospheric pressure over the entire exposure duration of a sample, what we actually do in nearly all cases is (i) measure the elevation, (ii) convert to atmospheric pressure using a relationship based on the modern atmosphere, and (iii) assume that the resulting pressure applies for the entire exposure duration. Unfortunately, for sites that are more than a few thousand years old, and especially for sites that are, or were, marginal to large ice sheets, this usual procedure has to be at least partly wrong on two counts. First, as ice sheets, sea level, and global temperature have changed in the past, the mass of the atmosphere has moved around: the atmosphere has to get out of the way of ice sheets, fill up space created by eustatic sea-level fall, and shrink and expand as global temperature cools and warms. The pressure-elevation relationship in many places was probably quite different in the past. Second, because of glacioisostatic depression and rebound associated with the advance and retreat of large ice sheets, many potential exposure-dating sites have not only had the atmosphere move around on them, but have themselves moved up and down in the atmosphere. These observations would tend to indicate that maybe we shouldn’t just compute the production rate at the present elevation — we should account for the fact that all these things are changing.
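
For reference, step (ii) above usually looks something like the following sketch, which uses the ICAO standard atmosphere; actual exposure-age calculators may use a somewhat different parameterization of the modern atmosphere, so treat this as illustrative.

    # Elevation to atmospheric pressure via the ICAO standard atmosphere.
    P0, T0, L = 1013.25, 288.15, 0.0065      # sea-level pressure (hPa), temperature (K), lapse rate (K/m)
    G, M, R = 9.80665, 0.0289644, 8.314462   # gravity, molar mass of air, gas constant (SI)

    def pressure_hPa(elv_m):
        """Atmospheric pressure (hPa) at elevation elv_m (m), standard atmosphere."""
        return P0 * (1.0 - L * elv_m / T0) ** (G * M / (R * L))

    print(pressure_hPa(0.0), pressure_hPa(1000.0), pressure_hPa(3000.0))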

Several folks have tried to account for these changes. Or, at least, some of them. Specifically, a number of exposure-dating papers have extracted elevation time series for their sample sites from glacioisostasy models (such as the ICE-XG series of global models generated by Richard Peltier and colleagues), computed production rates corresponding to those elevations, and then, basically, used a time-averaged production rate instead of an instantaneous modern production rate to compute the exposure ages. In effect this approach assumes that the atmosphere is stationary and the sites are moving up and down in it, and in all cases where I can remember it being applied, it results in lower calculated production rates (because the sites were ice-marginal, so at lower elevation in the past than they are now) and therefore older calculated exposure ages.
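
Schematically, the calculation looks something like this sketch: the elevation-to-production-rate function is a placeholder for whatever scaling method you prefer, and a real implementation would also deal with radioactive decay weighting rather than a simple time average.

    # Time-averaged production rate from a modeled elevation history.
    import numpy as np

    def production_rate_at(elv_m):
        """Placeholder: surface production rate (atoms/g/yr) at elevation elv_m."""
        raise NotImplementedError("plug in your preferred scaling method")

    def time_averaged_P(t_ka, elv_m, t_exposure_ka):
        """Mean production rate over the last t_exposure_ka; t_ka (ascending, ka before present)
        and elv_m define the site's elevation history from a glacioisostasy model."""
        t = np.linspace(0.0, t_exposure_ka, 500)
        elv = np.interp(t, t_ka, elv_m)
        return np.mean([production_rate_at(z) for z in elv])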

OK, this seems like a good idea. And it is possible to do this for nearly any exposure-dating study that deals with LGM-to-present events, because there exist global models for isostatic depression and relative sea-level change for the last glacial cycle. Of course, it’s not easily possible for pre-LGM exposure-dating applications.

However, there are several problems with this otherwise good idea.

One, as noted above, it’s not just the sites that are moving up and down in the atmosphere, the atmosphere is moving around as well. This aspect of the problem is the subject of a very good paper from quite a long time ago by Jane Willenbring (Staiger). This paper looked mainly at the atmosphere effects and pointed out, among other things, that they may act to offset isostatic depression near ice sheet margins, resulting in less variation in production rates than one would expect from each process individually.

Two, the production rate calibration sites are moving up and down too. Suppose that your production rate calibration data experienced an increase in the surface production rate over time, as expected for glacioisostatic rebound in the absence of any offsetting atmospheric effects. If you don’t account for this — that is, you just use the modern elevations of calibration sites — when using these data for production rate calibration, you will underestimate the production rate. If you then use this calibrated production rate, but make an isostatic rebound correction when computing ages for unknown-age sites, you will take a production rate estimate that is already too low globally (because isostasy was left out of the calibration) and make it even lower locally. Thus, the resulting exposure ages will be too old, twice. Instead, to do this correctly, you must include the isostatic rebound correction in both the production rate calibration phase and the exposure-age calculation phase.

There exist several papers that appear on a reasonably close reading to have made this mistake and doubly compounded their errors, although in many cases it is hard to tell because the details have disappeared into a methods supplement. Because this is a blog entry and not an actual journal article, I don’t have to name them all in detail. But the point is that ages calculated in this way are unambiguously wrong. If you want to use an isostatic rebound correction in your exposure-age calculations, you must also use it in your production rate calibration. Or you can do neither: if you have ice-marginal calibration sites and ice-marginal unknown-age sites with similar rebound histories, and you don’t make the correction at either one, you will get close to the right answer, because any errors you introduced by skipping the correction in part 1 will be canceled by the offsetting error in part 2. Thus, no matter what, it is unquestionably better to not do this correction at all than to do it halfway.
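
To make the bookkeeping in the last two paragraphs concrete, here are some toy numbers (entirely invented) for the case where the calibration site and the unknown-age site have identical rebound histories.

    # Time-averaged production rate at each site is 95% of its modern value.
    rebound = 0.95
    P_modern = 4.0                   # hypothetical modern production rate at both sites
    P_avg_true = rebound * P_modern  # the rate that actually produced the nuclides

    # Option A: ignore isostasy everywhere. Calibration attributes the time-averaged
    # rate to the modern elevation, so the calibrated rate is already 0.95 * P_modern,
    # which is exactly the rate that applies at the unknown-age site. Errors cancel.
    P_used_A = rebound * P_modern

    # Option B: ignore isostasy in calibration, but apply a rebound correction at the
    # unknown-age site. The already-too-low calibrated rate gets lowered again.
    P_used_B = rebound * P_modern * rebound

    print(P_used_A / P_avg_true)  # 1.00 -> exposure ages come out right
    print(P_used_B / P_avg_true)  # 0.95 -> production rate too low, ages too old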

The third problem is that the existing calibration data set does not show any evidence that a correction based on elevation change alone is either necessary or desirable. This is kind of a complicated claim, and it’s important to clarify the difference between (i) testing a particular model, that is, asking “does a particular glacioisostasy model, ICE-5G or whatever, improve the performance of production rate scaling models when measured against calibration data?” and (ii) asking whether the calibration data themselves provide any evidence that production rates are affected by glacioisostatic elevation change considered alone. I would argue that we have to do (ii) first. Basically, if we can’t establish (ii) from the calibration data, then it is very difficult to determine what the answer to (i) would mean, or if a “yes” answer was meaningful at all.

So, here I will consider item (ii). The basic principle here is that the existing set of calibration data includes both calibration sites that are marginal to large ice sheets, and those that are not. If glacioisostatic elevation change by itself is an important control on production rates, then ice-sheet-marginal calibration sites will yield lower production rates than far-field ones.

This seems like a pretty simple question, but it is not obvious that anyone has actually answered it. A recent paper by Richard Jones and others gets tantalizingly close to at least asking the question, but then disappointingly stops at the last minute. The bulk of this paper is a quite thorough global calculation of the effect on time-integrated production rates expected from glacioisostatic elevation change. In the course of discussing whether or not this should be routinely taken into account in exposure-dating studies, the authors point out that the four (or six, depending on how you group the Scottish data) calibration sites for beryllium-10 data in the widely used “CRONUS-Earth primary calibration data set” are all in areas that experienced very small glacioisostatic elevation changes, so if you use this calibration data set, including or not including a glacioisostatic elevation change correction in the production rate calibration phase doesn’t really make any difference. This is true and, of course, represents a loophole in the absolutely strict rule described above that if you want to use an isostatic correction in exposure-age calculations, you must also use it in production rate calibration. After this, however, the paper stops thinking about calibration data and goes on to other things. Stopping here is extremely disappointing in the context of the otherwise very clear and comprehensive paper, because there are not just four Be-10 production rate calibration sites. In fact, there are 38, 17 of which are from ice-sheet-marginal features. Although this is the subject of a whole different discussion, there is no particular reason that most of these sites are more or less reliable than the “CRONUS primary” calibration data, which, IMHO, were somewhat arbitrarily selected. Thus, it is possible to test production rate predictions based on glacioisostatic elevation change against a fairly large data set of calibration measurements from both ice-sheet-marginal and far-field sites, and as I read this paper I was very surprised that the authors did not do it. Or, at least, I could not find it in the paper. Why not?

So, let’s try it, at least to the extent feasible without making this blog posting any longer than its current 14,000 words. Specifically, we are asking the question: are production rates inferred from ice-sheet-proximal calibration sites lower than those inferred from other calibration sites? If glacioisostatic elevation change by itself is an important control on production rates, we expect the answer to be yes. However…

…the answer is no. The above histograms show Be-10 production rate scaling parameters for LSDn scaling inferred from all Be-10 calibration sites in the ICE-D:CALIBRATION database (top panel) as well as ice-sheet-marginal and far-field subsets (lower two panels). The blue histograms are for all samples; the red histograms are for averages from each site. Remember, the LSDn scaling algorithm directly predicts production rates with units of atoms/g/yr instead of nondimensional scaling factors, so the calibrated parameter is accordingly a nondimensional correction factor rather than a “reference production rate” with physical units. Regardless, the mean and standard deviation of correction factors inferred from ice-sheet-marginal and far-field calibration sites are, basically, the same. Almost exactly the same.

This result, therefore, fails to provide any evidence that production rates are significantly affected by glacioisostatic elevation change alone. Instead, it tends to suggest that elevation-change effects at ice-marginal sites could very well be substantially offset by atmospheric effects. This, in turn, implies that one should probably not include isostatic effects in a production rate scaling model without also including atmospheric mass redistribution effects.

There is no point in giving up too soon, though, so let’s try to come up with some other first-principles predictions that we can test with the calibration data. The simplest prediction was just that production rates from ice-marginal calibration sites should be lower. A slightly more complex prediction is that we expect that this effect should be more pronounced for younger calibration sites. Suppose an ice sheet margin retreats between the LGM and the mid-Holocene, and we have ice-marginal calibration sites that deglaciated at a range of times during this retreat. In general terms, the sites that are closer to the center of the ice sheet deglaciated more recently and should be more affected by isostatic depression: not only is the magnitude of the depression typically larger at sites near the center of the ice sheet, but the period of time during which the sites are isostatically depressed makes up a larger fraction of their total exposure history. Therefore, we expect a correlation between the age of ice-marginal calibration sites and the inferred production rates: all sites should give lower production rates than non-ice-marginal sites, but the younger ice-marginal sites should yield lower production rates than the older ones. Is this true?

The answer: well, sort of. The plot above shows values of the LSDn calibration parameter inferred from far-field sites (blue) and ice-sheet-marginal sites (red). The solid and dotted black lines give the mean and standard deviation of the entire data set. Production rates inferred from non-ice-proximal calibration data are uncorrelated with site age, as they should be, but those inferred from the ice-marginal sites are strongly correlated with site age. This correlation would be a point in favor of an elevation-change effect being important…except for one thing. As expected from the previous observation that the two groups yield the same mean production rate, half the ice-marginal sites give production rate estimates that are HIGHER, not lower, than the mean of the non-ice-marginal sites. If glacioisostatic elevation change is the primary control on production rate, we do not expect this. Forebulge effects are not expected for sites that are well within the LGM boundaries of the ice sheets, so (at least to me) there appears no obvious way that elevation change alone can account for both (i) the similarity in the distribution of production rate estimates between production rates inferred from ice-marginal and far-field sites, and (ii) the strong age-production rate correlation for ice-marginal sites. Overall, this result is sort of ambiguous but, again, appears to indicate that correcting for glacioisostatic elevation changes by themselves is oversimplified.

The summary up to this point is that arguing that glacioisostatic elevation change should be an element in production rate scaling models is probably reasonable. However, both the calculations in Jane’s paper and an analysis of available calibration data indicate that it is probably oversimplified, and probably not a good idea, to correct only for the elevation changes without also considering simultaneous atmospheric mass redistribution.

On to the cautionary tale. The cautionary tale part of this posting is that this situation is very similar to a somewhat obscure, but analogous and pertinent, aspect of cosmogenic-nuclide applications to erosion-rate measurements. Specifically, this is the question of whether or not to apply topographic shielding corrections to production rate estimates for basin-scale erosion-rate calculations.

In the isostasy example, we know that there are two time-dependent effects on production rates in ice-sheet marginal areas, and we think they probably offset each other, at least to some extent. One of them (isostatic elevation change) is easy to calculate, because there exist a number of global, time-dependent, LGM-to-present glacioisostasy models that can be easily used to produce a time-dependent elevation change estimate for any site on Earth. The second one (atmospheric mass redistribution) is difficult to calculate, because there are no equivalent, easily accessible, LGM-to-present models for atmospheric pressure distribution.

In the topographic shielding example, we know that the erosion rate we infer from a nuclide concentration in sediment leaving a basin is proportional to both the mean production rate in the basin and the effective attenuation length for subsurface production. In a steep basin where many parts of the basin are shielded by surrounding topography, shielding reduces the mean production rate, but the steep, shielded geometry increases the effective attenuation length. Because these two effects push the inferred erosion rate in opposite directions, corrections for them offset each other. Again, one of these corrections is easy: using digital elevation models to compute the reduction in the mean surface production rate in a basin due to topographic shielding is a satisfying and fun application of raster GIS. And one of them is hard: computing the effect on the subsurface attenuation length is computationally painful, time-consuming, and really not fun at all. The result is that for quite a long time, many researchers just did the easy correction and ignored the hard correction, until a thoroughly helpful paper by Roman DiBiase did the hard part of the correction, and showed that doing only the easy correction results in an answer that is more wrong than it would have been if you had ignored both corrections. Oops.
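
Schematically, with invented numbers and the simple steady-erosion relationship, the trade-off between the two corrections looks like this.

    # Toy sketch (numbers invented) of the offsetting corrections, using
    # N = P * Lambda / (rho * eps), i.e. eps = P * Lambda / (rho * N).
    rho = 2.65                      # rock density, g/cm^3
    N = 2.0e5                       # measured concentration, atoms/g (made up)
    P_flat, Lam_flat = 10.0, 160.0  # unshielded mean production rate (atoms/g/yr) and attenuation length (g/cm^2)
    shielding = 0.92                # "easy" correction: shielding lowers the mean production rate
    Lam_steep = 175.0               # "hard" correction: steep, shielded geometry raises the effective attenuation length

    eps_uncorrected = P_flat * Lam_flat / (rho * N)             # ignore both effects
    eps_easy_only = shielding * P_flat * Lam_flat / (rho * N)   # biased low
    eps_both = shielding * P_flat * Lam_steep / (rho * N)       # the two corrections largely offset
    print(eps_uncorrected, eps_easy_only, eps_both)             # cm/yr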

The analogy is pretty clear. Both cases feature two offsetting corrections, one easy and one hard, and a seductive heuristic trap created by the availability of code to do the easy correction but not the hard one. It seems like it might be a good idea to learn from history, and try not to shove our left foot in a mouth still filled with the taste of our right shoe.

So, to conclude, what should we do or not do about this? Well, for one thing, we should probably not apply a time-dependent isostatic rebound model to production rate calculations without an atmosphere model as well. The evidence seems to indicate that this is probably oversimplified and may not improve the accuracy of exposure ages. However, it is not obvious that there is a good candidate atmosphere model at the moment; this may require some work. And then, if we use both models, it is necessary to apply them to both calibration data and unknown-age sites, which requires hard-coding them into everyone’s exposure-age calculation code. We should complete the Jones paper by doing a better job than I have done above in examining more carefully whether scatter in fitting calibration data to production rate scaling models can be explained by glacioisostatic or atmospheric-mass-redistribution effects. And, finally, we will probably need to collect more calibration data. This is because applying these corrections will very likely have the largest effect on inferred exposure ages for times and places where we do not have calibration data. It is probably not a good idea to extrapolate production rate corrections into places outside the range of calibration data where being wrong could incur large and hard-to-identify errors. Targeting places and times where we expect the largest corrections, and also places and times where elevation-change and mass-redistribution effects are likely to have offsetting or compounding effects, to collect new calibration data would add a lot of credibility to the argument that these effects are important.


The DMC:ANTARCTICA project

August 29, 2019

By Perry Spector

Summary

This is the first guest post on this blog, and the point is to describe a data-model comparison project called DMC:ANTARCTICA that I have been working on for the past year. The overall objective of the project is to use the entire dataset of cosmogenic-nuclide measurements from Antarctica to provide synoptic evaluation of Antarctic ice-sheet models that simulate the evolution of the ice sheet over long time periods. This project is still under development, but there now exists a publicly available DMC:ANTARCTICA web interface that serves data-model comparisons for every site in Antarctica where data exist and for 15 different ice-sheet model simulations.

Why?

We rely on numerical ice-sheet models for establishing the climate sensitivity of the Antarctic ice sheet and for forecasting its future behavior and contribution to sea level. Although these models provide glaciologically-feasible reconstructions of ice-sheet evolution, they are of limited use unless they can be shown to agree with observations of past ice-sheet change.

Considerable progress actually has been made on this front. Recent simulations by David Pollard, Pippa Whitehouse, Rob Briggs, and others that are constrained by geologic observations (e.g. relative sea-level curves from Antarctica, exposure ages, etc.) do, in fact, agree with those observations remarkably well in many cases.

Existing data-constrained models, however, are primarily restricted to the period between the LGM and the present. Longer simulations, spanning important time periods when the climate was as warm or warmer than present, have not been thoroughly tested. One limitation to doing this is that many of the constraint datasets, such as RSL curves from Antarctica, do not extend beyond the LGM. On Pleistocene-Pliocene timescales, cosmogenic-nuclide measurements from Antarctica are some of the only data available to test hypotheses of past ice-sheet behavior.

The existing set of Antarctic cosmogenic-nuclide measurements

All known published (and many unpublished) cosmogenic nuclide data from Antarctica have been compiled, mostly by Greg Balco, in the ICE-D:ANTARCTICA database. The database currently comprises 4361 measurements on 2551 samples. For this to be a useful set of ice-sheet model constraints, it should span as much of the continent as possible and also be distributed in time. Let’s look at the breakdown. The figure below shows the distribution for samples of glacial erratics and for bedrock surfaces.

Minimum and maximum circle sizes represent 1 and 88 samples, respectively.

Both populations do a pretty good job highlighting where outcropping mountains exist in Antarctica. There are large areas, however, that are entirely unrepresented, such as the East Antarctic interior and the coast between northern Victoria Land and the Lambert-Amery Basin. Fortunately these are sectors where most ice-sheet models predict relatively small changes over glacial-interglacial cycles. For glacial periods, models predict large ice thickness changes in the Ross, Weddell, and Amundsen Sea sectors, areas where the coverage is relatively good.

Interestingly, around 80% of the samples are of glacial erratics while the remaining 20% are of bedrock surfaces. This distribution is probably, in part, due to the combination of two factors. First, outcropping bedrock surfaces in Antarctica that were ice-covered during the LGM commonly have exposure ages much older than the LGM because, in many cases, past ice cover was frozen to the bed and non-erosive. Second, the objective of many exposure-dating projects has specifically been to establish the ice sheet’s configuration during or following the LGM, and so these projects likely opted to sample glacial deposits rather than bedrock. However, as has been discussed by some authors (e.g. here) and as will be discussed below, well-preserved bedrock surfaces can provide important information about exposure and ice cover over multiple glacial-interglacial cycles.

The next set of maps show the distribution by nuclide.

Most samples in the database only have a single analysis of a single nuclide, and, unsurprisingly, that nuclide is usually Be-10. Multiple nuclides (e.g. Be-10 and Al-26) have been measured on some samples, as shown below. These samples are particularly important because paired nuclide data can provide information not just about past exposure, but also about past ice-cover and erosion.

The next figure shows histograms of exposure ages (note logarithmic axis) and gives a sense of the temporal distribution of the database.

Although a large fraction of the samples only provide information about the last ice age and the subsequent deglaciation (i.e. the focus of many exposure dating studies), a substantial population provides constraints on timescales extending as far back as the Miocene.

Therefore, to summarize, the set of Antarctic cosmogenic-nuclide measurements is reasonably well distributed in space and time, which suggests that it should be of some use for evaluating numerical ice-sheet models that run over a range of time periods. It would be useful if the database contained more samples from slowly-eroding bedrock surfaces, but the coverage isn’t terrible at least.

A Data-model comparison framework

For most samples, there are an infinite number of different histories of exposure, ice-cover, and erosion that could produce the observed nuclide concentrations. Therefore, these data cannot be simply inverted to generate unique ice-thickness chronologies. However, they can be used to distinguish incorrect chronologies predicted from ice-sheet models from plausible ones. The basic idea is that, for a given rock in Antarctica, the history of exposure and ice-cover simulated by a model predicts certain concentrations of cosmogenic nuclides at the present-day that can be directly compared to concentrations that have actually been measured in the rock.

This isn’t a new idea. In a 2012 paper, Sujoy Mukhopadhyay and others reported that Be-10 and Ne-21 concentrations measured in bedrock samples from the Ohio Range were consistent with concentrations predicted by a 5 million year ice-sheet model run. Since then, the only other example that I am aware of is a paper I recently wrote with John Stone and Brent Goehring, which is under review in The Cryosphere, that evaluates several models using results from the Pirrit Hills and the Whitmore Mountains.

However, rather than doing one-off data-model comparisons for one or two sites, the objective of the DMC:ANTARCTICA project is to perform synoptic evaluation of model performance using all available cosmogenic-nuclide measurements from Antarctica. At the moment, that capability is not yet available, but it is coming. What is available is the DMC:ANTARCTICA web interface, which serves comparisons between every site in Antarctica where data exist and a number of published ice-sheet model simulations. The rest of this post will give a basic description of how the web interface works, and highlight a couple of interesting results.

For a given site in Antarctica and ice-sheet model of interest, the modeled ice-thickness history at the site is extracted via bilinear interpolation. Here is an example from the model of Pollard & DeConto (2009) for the Pirrit Hills.
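
In case anyone wants to do something similar themselves, the extraction step is just standard bilinear interpolation from the model grid; a sketch (variable names are placeholders for however a particular model’s output is stored) is:

    # Ice-thickness history at a site by bilinear interpolation. thk has shape
    # (ntime, ny, nx); model_x and model_y are the (ascending) grid coordinate vectors.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def ice_thickness_history(model_x, model_y, thk, site_x, site_y):
        """Ice thickness at (site_x, site_y) for each time step of the model."""
        out = np.empty(thk.shape[0])
        for i in range(thk.shape[0]):
            f = RegularGridInterpolator((model_y, model_x), thk[i], method="linear")
            out[i] = float(f((site_y, site_x)))
        return out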

The next steps are to (i) generate synthetic samples of both bedrock surfaces and glacial erratics spanning a range of altitudes at the site, (ii) determine the history of exposure and ice cover experienced by each sample, (iii) use those histories to calculate the present-day concentrations of various nuclides, and then (iv) compare those predictions to concentrations that have actually been measured in samples from the site.
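
As a very simplified stand-in for step (iii) (the real calculation, described below, uses the online-calculator code and handles a lot more detail), the core idea is just to integrate production over the ice-free parts of the history, allowing for radioactive decay.

    # Predict a present-day concentration from an exposure/ice-cover history, with
    # no erosion and no production under ice.
    import numpy as np

    def predicted_concentration(t_yr, exposed, P, halflife_yr=None):
        """t_yr: times before present (yr, ascending); exposed: 1 where ice-free, 0 under ice;
        P: surface production rate (atoms/g/yr); halflife_yr: None for a stable nuclide."""
        lam = 0.0 if halflife_yr is None else np.log(2.0) / halflife_yr
        # atoms produced at time t before present have decayed by exp(-lam * t) by now
        return np.trapz(P * exposed * np.exp(-lam * t_yr), t_yr)

    # e.g., 100 kyr of exposure followed by ice cover for the last 20 kyr:
    t = np.linspace(0.0, 120e3, 1201)
    exposed = np.where(t > 20e3, 1.0, 0.0)
    print(predicted_concentration(t, exposed, P=5.0, halflife_yr=1.387e6))  # Be-10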

For each DMC request, the calculations are performed in real time on a Google Cloud virtual machine. Concentrations, as well as exposure ages, are calculated using the Matlab/Octave code behind the version 3 online exposure age calculator, which was described by Balco et al. (2008) and has since been updated. Additional details about data-model comparison calculations are available here.

All samples are assumed to have zero cosmogenic nuclides at the beginning of the model simulation. Thus, running the web interface for a site where there are 12-million-year Ne-21 exposure ages, paired with an LGM-to-present ice-sheet model, will obviously produce large data-model misfits that are unrelated to the actual performance of the model. Similar misfits can be obtained by doing the reverse, that is, pairing a long-term model with a site that only has, say, Holocene exposure ages. This latter scenario can be dealt with by employing the option to change the start time of the model, so you can look at, for example, only the last 20 kyr of an 800 kyr run.

The web interface can also serve model predictions for any latitude and longitude in Antarctica, even if no samples have been collected at that location. This feature could potentially be useful for planning future field projects.

Some interesting DMC results: (i) elevation transects of bedrock surfaces and glacial erratics

DMC simulates periodic deposition of erratics at a site whenever the ice sheet there is thinning. This is because most glacial erratics in Antarctica are ablation deposits, meaning that they were transported englacially from upstream erosion areas to ablation areas on the ice surface (where they were initially exposed to the cosmic-ray flux), and subsequently deposited on mountain flanks by ice thinning. Periodic deposition of erratics is simulated for all sites, regardless of whether erratics are actually known to be present.

Ice-sheet models that simulate many glacial-interglacial cycles can therefore produce erratics at similar elevations but at different times. The effect of this is shown in the upper-right panel below for the 5 Myr simulation of Pollard & DeConto (2009) at Mt. Turcotte in the Pirrit Hills (see ice-thickness time series above).

Different colors represent different nuclide-mineral pairs as follows: orange = C-14 (quartz); red = Be-10 (quartz); blue = Al-26 (quartz); neon = Ne-21 (quartz); gray = He-3 (pyroxene/olivine).

Concentrations for a given nuclide in erratics are scattered and have a relatively weak relationship with elevation. The exception is for C-14 (shown in orange), which decays rapidly with a 5730 year half life and therefore has no memory of the exposure and ice-cover history prior to around 30 kyr ago. In contrast, the bedrock surfaces (upper-left panel), which are predicted to have been intermittently ice covered, show concentrations that always increase monotonically with altitude. In part, this is due to the fact that production rates increase with elevation, but high elevation samples also must have spent a lesser proportion of their existence ice-covered than low elevation samples from the same site.

Note 1: Both bedrock and erratics are assumed to experience no erosion and no cover by till or snow. Once deposited, erratics are assumed not to change elevation, roll over, etc.

Note 2: The four panels above have different x and y axis ranges. Sorry. Working on that.

The predictions for Mt. Turcotte bedrock and erratics (upper panels) are interesting because they are qualitatively pretty similar to what is actually observed there (lower panels), as well as at many other Antarctic sites. Observed Be-10, Al-26, and Ne-21 in bedrock samples monotonically increase with elevation. There are only minor exceptions to this, which are probably related to small amounts of erosion experienced by some samples or, in the case of Ne-21 (shown in neon), radiogenic production. In contrast to the bedrock samples, Be-10 measurements on glacial erratics (lower-right panel) are scattered and have no obvious concentration-elevation relationship. Thus, at least qualitatively, the model predictions for both bedrock and erratics are consistent with what we actually observe.

One take-home point from this comparison is that, in Antarctica, where past ice cover was commonly cold-based and non-erosive, weathered bedrock surfaces can, in some cases, be much better indicators of long-term exposure and ice cover than glacial erratics. Next time you’re in the field, collect an elevation transect of weathered bedrock surfaces.

Some interesting DMC results: (ii) LGM-to-present ice-sheet models consistently simulate premature deglaciation in the western Weddell Sea

Okay, now let’s look at an example in which we actually evaluate the performance of ice-sheet models.

Our knowledge of the post-LGM deglaciation history of the two largest embayments in Antarctica, the Ross and the Weddell, has remained very lopsided. Decades of geological (marine and terrestrial) and glaciological research in and around the Ross Embayment has generated a large network of constraints on the thickness and extent of grounded ice during the LGM and during the subsequent retreat. In contrast, relatively few field studies in the Weddell Embayment have produced commensurately few constraints there.

Recent papers by Keir Nichols, Jo Johnson, Brent Goehring, and others (published and in review here and here) are increasing our knowledge of the deglacial history of the Weddell Sea. These papers describe C-14 exposure ages from an elevation transect of 8 bedrock samples from the Lassiter Coast on the east side of the Peninsula. All samples have mid-Holocene ages and indicate abrupt thinning there at this time.

Let’s see how these results compare to predictions from state-of-the-art ice sheet models. In the figures below, orange circles show measured C-14 exposure ages plotted against sample elevation. Gray circles show model predictions. Note that the 8 bedrock samples on which C-14 concentrations were measured were collected from a few different but nearby peaks, which have been classified as different sites in ICE-D:ANTARCTICA. Thus, only 5 of the 8 samples are shown below, which does not affect the data-model comparison.

The first figure shows the predictions from the reference simulation of the model by Jonny Kingslake and others (2018).

The second figure shows predictions from the best-scoring run of the large ensemble by David Pollard and others (2016). Other recent models by David Pollard and others predict relatively similar transects.

The third figure shows predictions from the last 20 kyr of the model by Tigchelaar et al. (2018).

These three state-of-the-art ice-sheet models consistently simulate premature thinning on the Lassiter Coast by several millennia, which likely means that the grounding line in the western Weddell Sea retreats too early in the models. Note that the observation of premature thinning is not limited to this site but is also seen at the Heritage Range and at the Pirrit Hills, sites which are presumably sensitive to ice-sheet behavior in the western Weddell Sea.

The model that performs best at this site appears to be that of Pollard et al. (2016), which is perhaps not surprising because, unlike the models by Kingslake et al. (2018) and Tigchelaar et al. (2018), it was calibrated using a large network of observational constraints from various other sites in Antarctica.

Conclusions

The overall goal of the DMC:ANTARCTICA project is to systematically evaluate ice-sheet model simulations using the entire set of cosmogenic-nuclide measurements. That systematic evaluation isn’t available yet, but it’s coming.

The DMC:ANTARCTICA web interface is publicly available for dynamically comparing observations and model predictions anywhere in Antarctica. Check it out.

The web interface helps address the issue of ice-sheet model opacity. Typically, the complex behavior of an ice-sheet model is conveyed only in a few small figures in a paper. Now it’s possible for anyone to explore these models and see what they predict anywhere on the continent.

The web interface still has some issues to be ironed out. Some of these are discussed here. Let me know if you find bugs or if there is a specific feature you’d like to see implemented.

Don’t see your data? The problem is probably either that (i) it’s not in ICE-D:ANTARCTICA yet, or (ii) certain information needed to calculate exposure ages (e.g. sample thickness) is missing from ICE-D. Contact me or Greg to fix that.

Have a model you would like included in this project? Get in touch.

Apparently, it takes 10 years to recruit a guest blogger. But don’t give up.

August 29, 2019

Despite the fact that the total audience of this blog is about ten cosmogenic-nuclide geochemists, one reasonable criticism of the project is that it does not do a great job of representing the diversity of opinion among these ten people — because I am the only author so far. Although, of course, many of the blog entries here make reference to work or publications by other people, so far it has been difficult to tell from the blog alone whether these are real, live, cosmogenic-nuclide geochemists, or purely characters introduced for rhetorical purposes, kind of like the obscure Greek philosophers who lob questions at Socrates, or the opponents of the Harlem Globetrotters — because none of them have ever posted blog entries themselves. A handful of readers (although a surprisingly small handful) have left comments, but all my attempts so far to recruit guest bloggers to contribute their own postings have resulted in the subject of the request immediately remembering another really important commitment and abruptly ending the conversation. I can’t imagine why.

Thus, it is with great pleasure that I can report that Perry Spector, a BGC postdoctoral researcher and specialist in cosmogenic-nuclide applications to Antarctic ice sheet change, has agreed to author at least one post — and made it sound like he meant it. Expect to see it soon. This represents major progress, and an opportunity to point out that, at least for the other nine readers, this blog is potentially a really useful forum for comments and opinions on obscure technical bits of cosmogenic-nuclide geochemistry that are not fully-baked enough, or not important or useful enough, to be worth the effort of an actual publication. I find writing blog postings to be largely painless, sometimes entertaining, a time-saver when the subject of the post is something I would otherwise have to answer a lot of emails about, and often helpful in organizing ideas. So, if you have something to say that you want to say here, let me know.

Where to publish cosmogenic-nuclide geochemistry

April 11, 2019

The overall goal of this posting is to highlight a new open-access/open-review journal, just introduced by EGU and Copernicus Publications, entitled “Geochronology.” But first, some context.

Consider two things.

One. The dispute between the University of California system and the scientific publisher Elsevier. Basically, UC has canceled its subscriptions to all Elsevier journals after failing to reach a blanket agreement on open-access publication of UC-authored research. I care about this because I use the UC library journal subscriptions.

Two. The list of peer-reviewed scientific journals where technical articles about cosmogenic-nuclide geochemistry are commonly published. Quaternary Geochronology. Earth and Planetary Science Letters. Geochimica et Cosmochimica Acta. Nuclear Instruments and Methods in Physics Research. Chemical Geology. Possibly Quaternary Science Reviews. For short-lived nuclides in tracer studies, perhaps the Journal of Environmental Radioactivity.

These journals have two things in common. They publish technical research on geochemistry and geochronology that is relevant to your work, so you read them all the time. And they are published by Elsevier. Although there are many other journals where one might sensibly publish any research involving applications of cosmogenic-nuclide geochemistry to broader Earth science projects, consolidation and acquisitions by Elsevier over the past decade or so have produced a near-monopoly on journals suitable for technical articles in this field. About half the articles that I am an author of are in Elsevier journals. The only other journals I can think of where I might publish technical articles rather than applications are, maybe, Geophysical Research Letters or JGR-Earth Surface.

Now put these two things together. In future, I will not be able to legally read any articles published in Elsevier journals, which include nearly every possible outlet for basic research in cosmogenic-nuclide geochemistry. Most sensible people would conclude that in this situation, I should also stop publishing in and reviewing for these journals. While it is true that cosmogenic-nuclide geochemistry was probably not a major contributor to Elsevier’s $1.2B in profits from scientific publishing in 2017, and my decision to boycott is unlikely to significantly threaten said profits, it seems pretty dumb to continue providing free services to journals that I can’t even read.

So, OK, I can’t publish the majority of my research output, which concerns technical research in cosmogenic-nuclide geochemistry, in any of the journals that are most suitable for publishing this research. While the idea of transferring all my future research output to this blog and other non-peer-reviewed websites is certainly appealing, it may not be a wise career move, or fair to funding agencies who expect to eventually see some results, to disengage completely from normal peer-reviewed journals.

That concludes the context. Although I thoroughly support the UC position in the Elsevier dispute because I think the Elsevier/Wiley/Macmillan business model, in which my free labor supports an enormously profitable business, is ridiculous, I am left at the moment with a lack of publication outlets. Thus, it is an extraordinary coincidence of timing that on nearly exactly the same day that UC/Elsevier negotiations failed, the European Geosciences Union and Copernicus Publications introduced a new non-profit, open-access, open-review journal called Geochronology, which aims to focus on basic research in geochronology and associated areas of geochemistry and analytical methods.

From my perspective, this is the only such journal that is not published by Elsevier, so it looks like I have to publish in it whether I like it or not — although there is a slight complication in that I am one of the editors, so it would probably look bad if I were also the most prolific author. In the rest of this post I will explain why you — a reader of this blog who presumably carries out and publishes research in cosmogenic-nuclide geochemistry — should publish in it too.

First, what is it? In the past several years, EGU has worked with Copernicus Publications, a publisher and conference organizer set up as a non-profit corporation under German law, to develop a series of online journals that (i) provide open access without subscription fees, (ii) charge authors or funding agencies only the minimal costs needed to support publication and archiving, and (iii) use an open review process. More about the open review process later. Several of these journals have become extremely successful and widely read; for example, The Cryosphere, which is rapidly becoming the de facto international journal of record for research in glaciology and cold-regions climate and surface processes.

Second, what is the open review process? Basically, papers submitted to these journals, if the editors deem that they meet basic standards, are immediately posted as discussion drafts. In effect this is the same as posting to a preprint server such as arXiv. Then two things happen. The editors solicit reviews of the draft paper; these reviewers get to choose whether or not to remain anonymous. In addition, anyone can submit an unsolicited review under their own name; unsolicited reviews cannot be anonymous. Finally, the authors submit a revised paper and a response to the reviews; if acceptable, the editors post this as the final version. The important difference between this and the normal review process at most journals is that all parts of this — the reviews, the responses, and successive versions of the paper — are accessible online with the paper and become part of the permanent record.

I think this last element — the fact that all the review correspondence is an accessible part of the permanent record — is an enormous improvement on standard practice. From my perspective as a reader, I learn a lot more from reading the paper with the reviews than I would from the paper alone. It is a throwback — a good throwback — to a long-past bit of the history of science in which many papers were published as conference proceedings, and a stenographer was on hand at the conference to record and later publish the discussion that took place after the presentation. Speaking as a reviewer, I put a lot of work into reviews, sometimes including extensive calculations and model exercises, and if this work disappears into anonymity and is never seen again, it is hard to see why it wasn’t a waste of my time. And then as an author, I am pretty sure that if reviewers know that their comments will be publicly accessible and permanently on the record, I am going to get a more careful, fair, and well-thought-out review. I certainly feel that pressure when reviewing in these journals. The point is, these are all good things. They make it so I, or you, learn more from published research. They are probably more work for everyone, but I think they also produce better results.

To summarize, Geochronology (or “GChron” if you don’t have time for the extra four syllables) solves my immediate problem created by the UC-Elsevier struggle for dominance. But even if you don’t have that problem, if you publish research in this field, it corrects several other problems and improves the situation in several ways. The reason I signed on as an editor long before the Elsevier incident is that it creates an outlet for basic research in geochronology and related areas that is independent of applications, which will hopefully make it possible to focus on doing the best possible job of determining the age of geologic deposits, without having to bury all the interesting technical research in the supplementary methods to make a paper suitable for a more general journal. Then, the open review system is better than the conventional system. And finally, open access publication is better than a subscription model. Publication fees in the EGU/Copernicus journals are cheap — 77 Euros per page — and many European universities and funding agencies, as well as a couple in the US, have blanket agreements. And for GChron, fees are waived for the first two years until there is an impact factor, so it’s free. That’s free, as in free. There is no downside.

The Geochronology website is here. Click on the link and submit the paper that you are writing right now.