Resolved Cat

EDIT 2015/12/20: Added umlauts. I’ve always heard English speakers pronounce it more like Schrodinger, but it is Schrödinger


The moon at 2150 km – back of the envelope

I’ve been seeing a lot of this video recently:

We can probably all agree that it looks pretty cool, particularly around the time when Luna occludes Sol at 0:53. But the imagery flipped some interrupt flags and, per my programming, I had to head to the back of the envelope to get an idea of just what would happen if Luna were to switch places with the ISS. First of all, it would be silly to take the statement “at the same distance” literally. The moon’s radius is almost 4 times the height above ground of the ISS at apogee, so placing the lunar center of gravity within the ISS orbital range of 415-419 km [1, Oct. 18, 2013] would lead to a rather less idyllic scene than the one in the video.


I’m not sure what proportion of matter would have a significant probability of fusing, or what average proportion of each pair of nuclei would be converted to energy, but the density of the Earth/Luna overlapping volume would immediately increase to 160% of normal Earth density, increasing in temperature by about the same proportion.

Instead, let’s consider transporting the moon from its current orbit, with an apogee of about 407,000 km, to an orbit where the edge of the moon is the same distance from the earth’s surface as the ISS. That would place the approximate center of gravity of the moon at an altitude of 2150 km.
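The arithmetic behind that figure is simple enough to check. A quick sketch, assuming a mean lunar radius of 1737 km (a standard value, not stated in the post):

```python
# Back-of-envelope: put the moon's near edge at ISS altitude and find
# where the lunar center of gravity ends up.
LUNAR_RADIUS_KM = 1737   # mean lunar radius (assumed standard value)
ISS_ALTITUDE_KM = 415    # lower end of the ISS range quoted above

center_altitude_km = ISS_ALTITUDE_KM + LUNAR_RADIUS_KM
print(center_altitude_km)  # 2152, i.e. roughly the 2150 km quoted
```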


This puts our only natural satellite well within the Roche limit[3], the minimum distance at which a satellite can remain intact without tidal forces tearing it apart, leaving a debris field in its place.
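That claim checks out on the back of the envelope, too. A hedged sketch using the rigid-body Roche limit, d = R·(2ρ_Earth/ρ_Moon)^(1/3), with standard mean densities (my assumed values, not from the post):

```python
R_EARTH_KM = 6371
RHO_EARTH = 5514   # kg/m^3, Earth's mean density
RHO_MOON = 3344    # kg/m^3, lunar mean density

# Rigid-body Roche limit, measured from Earth's center
roche_km = R_EARTH_KM * (2 * RHO_EARTH / RHO_MOON) ** (1 / 3)
moon_center_km = R_EARTH_KM + 2150   # geocentric distance of the relocated moon

print(round(roche_km))            # ~9480 km
print(moon_center_km < roche_km)  # True: comfortably inside the limit
```

The fluid-body version of the limit is roughly twice as large, so the relocated moon is well inside either way.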


But we would run into trouble long before the debris field began to give us problems. Luna’s tangential velocity is about 1.022 km/s on average[4]. Remember the cartoon you saw in physics class where Newton shoots a cannon over the hill fast enough to enter orbit? This would be the one that didn’t make it. To maintain an orbit at this altitude, an object would have to be travelling at 6.844 km/s [5] to avoid spiralling down to the earth’s surface.
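The 6.844 km/s figure is just the circular-orbit speed at that radius; a quick check, assuming the standard values for Earth's gravitational parameter and radius:

```python
import math

MU_EARTH = 3.986e14    # Earth's gravitational parameter GM, m^3/s^2
R_EARTH = 6.371e6      # mean Earth radius, m

r = R_EARTH + 2.150e6              # orbital radius at 2150 km altitude
v_circ = math.sqrt(MU_EARTH / r)   # circular-orbit speed, from vis-viva
print(round(v_circ))               # ~6840 m/s, matching the quoted value
```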


The energy imparted throughout the impact would pretty much defy any metric we have for intuitively thinking about energy. At 2150 km above ground, acceleration due to gravity is about 5.5 m/s^2 [7], and lunar mass is about 7.34 × 10^22 kg [4], for a combined energy due to velocity and gravitational potential of 3.5 × 10^30 joules. That’s a lot of Little Boy equivalents: 5.2 × 10^16, or over 50 quadrillion.
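A sketch of that estimate, with standard constants assumed. Taking kinetic energy at orbital speed plus the potential released falling from 2150 km to the surface, I land near 2.9 × 10^30 J, a bit below the post's figure (the exact number depends on how the potential term is taken), but the same order of magnitude either way:

```python
MU = 3.986e14        # Earth's GM, m^3/s^2
M_MOON = 7.34e22     # lunar mass, kg
R_EARTH = 6.371e6    # m
r0 = R_EARTH + 2.150e6                       # starting geocentric distance

ke = 0.5 * M_MOON * 6844**2                  # kinetic energy at orbital speed
pe = MU * M_MOON * (1 / R_EARTH - 1 / r0)    # potential released on the way down
total = ke + pe

LITTLE_BOY_J = 6.3e13                        # ~15 kt of TNT
print(f"{total:.1e} J = {total / LITTLE_BOY_J:.1e} Little Boys")
```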

Even if we increased lunar orbital velocity so that the moon didn’t immediately fall to ground, orbital decay would still come into play fairly quickly. The ISS loses about 2 km of altitude a month, making it dependent on expensive station-keeping maneuvers. Even at the 6.844 km/s required to maintain orbit at 2150 km, a great deal of Luna would be dragging through the upper atmosphere (the thermosphere tops out between 500 and 1000 km [6]). So we would have a continuous rain of meteoric debris to look forward to as tidal forces ripped our moon apart, with the majority of the lunar mass following a decaying orbit until eventually (my guess is well within 20 years) the largest chunks, and most of the total mass, had dissipated their kinetic and potential energy relative to the Earth in a concussive fashion.

Free Luna, comrades.

Open Access Death Knell. Or Is It?

I told you publication was a fiat currency

Last week, Science published a critical investigation into the state of peer review in open access journals. John Bohannon, the author, generated a faux paper describing a set of fictional experiments testing the effects of secondary metabolites from lichen on cultured cancer cells. These papers were sent to a selection of open access journals sampled from the Directory of Open Access Journals (DOAJ) and Jeffrey Beall’s infamous list. The lichen species, secondary metabolite, and cancer cell line were varied randomly to generate iterations of the paper with slight differences, but the shoddy results were the same in each. In a few of the iterations I examined, the structures didn’t even match the secondary metabolite described. The researchers didn’t exist, their names random combinations from a database of first and last names with random middle initials, and the institutions they worked at were fictional. The claimed “dose-dependent” growth inhibition effect occurs in an ethanol buffer (with no EtOH in the controls), spans five orders of magnitude in dose, and shows growth rates whose 95% confidence intervals all overlap.

Of 304 papers submitted to various open access journals, 157 were accepted, many of them without any real review taking place. 98 were rejected, and 49 were still up in the air at press time. The article seems to make a solid case against the relatively nascent open access model of publishing, and that is certainly the tone represented by the article and associated press coverage. However, if I assume that the average reader of Science is scientifically literate, then I would expect that most readers will remain unconvinced that open access is dead and dangerous.

In Who’s Afraid of Peer Review, Bohannon combines language from both scientific and journalistic writing styles, taking advantage of the credibility implied by describing sample-selection and procedural decisions in a semi-scientific manner, as well as the journalist’s ability to make general claims with a strength that would be questionable in a scientific article.


And the acceptances and rejections of the paper provide the first global snapshot of peer review across the open-access scientific enterprise.

137 of the journals chosen for this investigation were pulled from a blacklist maintained by Jeffrey Beall at the University of Colorado Denver. In places (such as the general description of results) the overlap between Beall’s list and the journals selected from the DOAJ is not clear. In the original sample, 16 of these journals are in both the DOAJ and Beall’s list, but it is difficult to tell whether they made it into the final analysis, because 49 of the 304 journals selected for submission were thrown out for “appearing derelict” or failing to complete the article review by press time.

For the publishers on his [Beall’s] list that completed the review process, 82% accepted the paper. Of course that also means that almost one in five on his list did the right thing—at least with my submission. A bigger surprise is that for DOAJ publishers that completed the review process, 45% accepted the bogus paper.

This is somewhat misleading, as it implies that the 45% and 82% results are exclusive of each other. I could not tell just from reading the paper what proportion of the 16 journals found in both Beall’s list and the DOAJ made it to the final analysis. Furthermore, I know this is misleading based on how Jeffrey Beall, who is quite close to the subject, interpreted it: “Unfortunately, for journals on DOAJ but not on my list, the study found that 45% of them accepted the bogus paper, a poor indicator for scholarly open-access publishing overall.”

Acceptance was the norm, not the exception.

157/304 journals (51.64%) accepted the paper. While this is a majority, I would hardly qualify acceptance as a typical result when the split is so nearly even, especially when 137 of the 304 journals had already been blacklisted. Discrediting open access in general based on the results reported is not a fair conclusion.

Overall, the article just misses making a strong critical statement about the state of scientific publication, instead focusing only on problems with predatory publishing in open access. By ignoring traditional journals, we are left without a comparison to inform what may be quite necessary reform in scientific publishing. Bohannon’s article is likely to be seen and read by a large number of people in both science and scientific publishing. Editors can be expected to be on alert for the sort of fake paper used by Bohannon and Science, making any comparison to traditional publishing models just about impossible for now. Finally, the overall effect is to damn innovation in publishing, particularly open access models, and it is not surprising that the sting article was published by the “king-of-the-hill” of traditional scientific journals. It is possible that the backlash against open access and publishing innovation in general will actually impede necessary progress in scientific publishing.

As long as an academic career is judged blindly on marketing metrics such as publication frequency and researchers continue to accept excessive publication fees, there will remain an incentive for grey market “paper-mills” to gather up unpublishable papers for profit. Overall, the open access model has thus far incorporated too much from traditional publishing and not enough from the open source movement.

Science magazine warns you that open access is too open; I say that open access is not open enough.

Text in block quotes is from Who’s Afraid of Peer Review by John Bohannon, Science, Oct. 4, 2013

Original version of image here

EDIT: link to John Bohannon’s article

Mars orbiter MAVEN will make its launch window

MAVEN is back on line

The federal government shutdown this week has a lot of scientists scratching their heads and packing their bags. All “non-essential” elements of the federal government (i.e. things without guns attached and people with high IQs) get the axe. It is a bit like Congress holding the nation hostage while whining about itself. Oh, and Congress still gets paid while the CDC isn’t allowed to keep track of the coming flu season.

In somewhat of a surprise move, the NASA Mars orbiter mission, MAVEN, has been deemed “essential” and will actually get to make its launch window. But it’s not just a case of the Grinch’s heart growing three sizes, spurred on by the magic of xmas. The status of the MAVEN project was switched to essential due to an exception in a law from 1884 called the Antideficiency Act.

The act’s main provisions are actually in place to prevent government institutions or employees from spending money that has not been appropriated to them through legislation. Federal institutions and workers aren’t allowed to accept voluntary services or spend any non-appropriated money. . .

. . .except in cases of emergency involving the safety of human life or the protection of property. 31 U.S.C. § 1342.

The Mars Odyssey and Mars Reconnaissance Orbiter are currently serving as necessary communication relays for the Curiosity and Opportunity rovers on the planet surface. Launching MAVEN on time (a three-week window from November 18 to December 7) ensures that communication with the rovers will continue unabated. Bruce Jakosky, principal investigator for MAVEN at the University of Colorado Boulder, points out that the decision was made for non-science reasons, but the reactivation should allow MAVEN to meet all of its scientific objectives as well as act as a rover relay.

MAVEN, short for Mars Atmosphere and Volatile EvolutioN, has as its primary scientific objectives to sample and measure the Martian atmosphere, uncovering clues as to the current and past rates of atmosphere loss and what this has meant and will mean for the planet. The orbiter will use a highly elliptical orbit to make measurements ranging from direct sampling of the Martian atmosphere, when MAVEN dips into the upper atmosphere as close as 125 km (77 mi) to the red planet, to global ultraviolet imaging from 6000 km (3728 mi) at apogee. The three sensor suites will include the Particles and Fields package, measuring particles and electromagnetic fields mostly associated with the solar wind; the Remote Sensing package, for imaging the upper atmosphere; and the Neutral Gas and Ion Mass Spectrometer, for spectroscopy of atmospheric samples (it is not clear from the mission fact sheet whether the spectrometer package might provide any insight into the ongoing methane measurement discrepancies reported by Chris Webster et al). These instruments should gather data that will point to the role of solar radiation in atmosphere loss on Mars, how fast it is happening today, and what this might have meant for ancient Mars.
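As a rough check on those altitudes, Kepler's third law gives the orbital period implied by a 125 km × 6000 km orbit. A sketch using standard values for Mars's GM and radius (my assumptions, not from the fact sheet):

```python
import math

GM_MARS = 4.2828e13   # Mars gravitational parameter, m^3/s^2
R_MARS = 3.3895e6     # mean Mars radius, m

r_peri = R_MARS + 125e3
r_apo = R_MARS + 6000e3
a = (r_peri + r_apo) / 2                                # semi-major axis
period_h = 2 * math.pi * math.sqrt(a**3 / GM_MARS) / 3600
print(round(period_h, 1))                               # ~4.4 hours per orbit
```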

I hate to think that the state of the U.S. Congress will become the new norm, but does this point to a mission operations strategy that could lessen vulnerability to government shutdowns? Building some sort of reliance on future missions into probes like the Mars rovers, so that those future missions can’t be postponed and ultimately cancelled, might be tempting, but I’d hate to see mission design robustness sacrificed to account for the decidedly un-robust nature of U.S. lawmakers.

Papers published begets more papers published

So what?


In a recent article first-authored by William Laurance, researchers report that, rather unremarkably, publishing more papers before receiving a PhD predicts that an individual will have a more successful career in research, measured solely by publication frequency. They also considered first language, gender, precociousness of first article, and university prestige. If publication frequency before attaining the PhD is the best predictor of career publication frequency, just how good is it? They report an r² value of about 0.14 for the best model incorporating pre-PhD publications, with models lacking this predictor faring much worse.

Wait, what?

If I have a model that only explains 14% of the variance in the data, well, I think it is time to find a new model. When they included the first three years immediately following attainment of the PhD, the r² value jumped to 0.29 for publications alone, and slightly higher when the model included one or more of the other predictors. Better, but still pretty pathetic. If you are hiring people with a 29% rate of picking the right candidate based on some metric of success, chances are you won’t be in charge of hiring for long. The paper only looked at the first ten years immediately following the PhD degree, so including the first three years is a bit like predicting rain when you are already wet. Why were the models so miserable? The range of publication frequency over the first ten years was pretty wide, from 0 to 87 papers published. On top of that, their sample consisted only of individuals who had managed to land a university faculty job. That’s right, one or more of these scientists landed a tenure-track position with zero publications. Jealous?
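To see just how weak r² ≈ 0.14 is, here is a toy simulation (entirely made-up numbers, not the paper's data): a predictor with a genuine effect, buried in noise, lands in the same neighborhood.

```python
import random

random.seed(0)
n = 500
pre_phd = [random.randint(0, 10) for _ in range(n)]        # papers before the PhD
career = [0.8 * x + random.gauss(0, 6) for x in pre_phd]   # noisy career outcome

# Squared Pearson correlation, computed from scratch
mean_x = sum(pre_phd) / n
mean_y = sum(career) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(pre_phd, career)) / n
var_x = sum((x - mean_x) ** 2 for x in pre_phd) / n
var_y = sum((y - mean_y) ** 2 for y in career) / n
r2 = cov**2 / (var_x * var_y)
print(round(r2, 2))   # a perfectly "real" effect, still a lousy predictor
```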

The sample selection is a pretty major flaw of the paper, in my opinion. The scientists surveyed were all on one rung or another of the assistant/associate/full professor ladder, which is to say that everyone they considered was an extremely high achiever among the total population of people holding biology PhDs. The rate of biology PhDs attaining faculty positions six years post-degree has dropped from 55% in 1973 to 15% in 2006 [1]. Since their data only represented successful academics, their models had no chance of predicting which individuals would drop out of research altogether as opposed to going on to become a principal investigator. Predicting whether an individual is able and willing to continue in science research would be a lot more telling than whether they published 2 versus 10 articles per year in their first decade out of grad school.

Using publication frequency as the sole measure of success is certainly rife with limitations (though they do mention a close correlative agreement with h-index). What about quality? What about real, meaningful contributions to the field? What about retractions? I would be much more interested in a model that could predict whether a researcher would have to withdraw an article during their career than how many articles they might generate. Hopefully with a bit better r² than 0.14, though.

Publication is often referred to as the “currency” of academia. Well, I’d like to posit that this currency is purely fiat. If inflation continues as it has been doing [2], the rate of fraudulent papers can only increase [3]. In my estimation, 300 papers with 3 retractions is worth a lot less than a “measly” 30 papers total. The commonplace occurrence of papers that must be withdrawn (not to mention fraudulent papers never outed, frivolous claims, and tenuous conclusions) has broader implications beyond an individual’s career or a journal’s bottom line. When bad science becomes the new normal, public trust deteriorates, and anti-science sentiments thrive.

The authors of the paper did have what I would consider a good take-home: faced with two applicants, one with a PhD from a prestigious university and the other from a lesser-known institution, pick the one with the better publication record. I would go one further and encourage hiring decisions to be informed by actually reading the papers. And vet the sources in those papers’ references. It’s not too hard, and if your job description includes hiring new talent, it’s your job. ‘A’s hire ‘A’s, and ‘B’s hire ‘C’s. Don’t be a ‘B’; Science (with a capital ‘S’) depends on it.

Laurance et al., Predicting Publication Success for Biologists, BioScience, Oct. 2013

via conservation bytes

DEAR ABBE: What’s with the “twinkle” in this Hubble image?


the Spirit of Ernst Abbe
Legendary physicist Ernst Abbe answers your photonics questions

DEAR ABBE: I was cruising around the internet the other day in my web-rocket when I came across this stellar image of the comet ISON, taken by the Hubble space telescope. The stars appear to be twinkling. I was under the impression that the twinkling effect we see on earth is due to the atmosphere, and last time I checked the Hubble was something of a space telescope, so shouldn’t Hubble be above twinkling? -HUMBLED BY HUBBLE

DEAR HUMBLED: You’re right about twinkling: it is not apparent to observers located outside of a dense atmosphere, the topic of the 1969 paper “Importance of observation that stars don’t twinkle outside the earth’s atmosphere” by astronaut Walt Cunningham and co-author L. Marshall Libby. But twinkling is not likely to produce such picturesque points on stars as you see in that Hubble image. Rather, what appears to the naked eye as twinkling will serve to blur and smudge the image of a star in a time-averaged intensity measurement, such as a photograph.


The spikes you see in the image in question are due to something else entirely. Twinkling stars are the result of a fickle refractive medium, the atmosphere, inadvertently being included in an imaging system. The culprits causing these spikes are intentionally built into the optical system, though their effect on the image formed is a byproduct of their form rather than their primary function. What you see as four regular points oriented in the same direction on every bright star is actually the result of diffraction around the secondary mirror support struts[2][3]. Since the spikes are the Fourier transform of the struts themselves[4], they will affect every light source in the image according to their shape and brightness. The appearance of diffraction spikes is so common that the human mind essentially expects it in this type of image, and it can even be considered aesthetic. Ultimately, though, any light ending up in the diffraction spikes is light that could have contributed to forming an accurate image of the scene. If a dim object of interest sits beside a very bright point of light, the diffraction spikes of the latter can interfere with a clear view of the dim object.
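The struts-to-spikes relationship is easy to demonstrate numerically. A minimal sketch (my own toy model under the Fraunhofer approximation, not Hubble's actual geometry): the point-spread function is the squared magnitude of the Fourier transform of the pupil, so masking a circular pupil with a thin cross of strut shadows grows a matching cross of spikes in the far field.

```python
import numpy as np

N = 256
yy, xx = np.mgrid[0:N, 0:N] - N // 2
aperture = (xx**2 + yy**2 < (N // 3) ** 2).astype(float)  # circular pupil
aperture[np.abs(yy) <= 1] = 0.0  # horizontal strut shadow, 3 px wide
aperture[np.abs(xx) <= 1] = 0.0  # vertical strut shadow

# Point-spread function: squared magnitude of the pupil's Fourier transform
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2

# Compare energy along a horizontal cut (on a spike) with a diagonal cut
# (off-spike) at comparable radii from the core.
on_spike = psf[N // 2, N // 2 + 20 : N // 2 + 60].sum()
off_spike = sum(psf[N // 2 + k, N // 2 + k] for k in range(14, 43))
print(on_spike > off_spike)  # True: the strut axes carry the spikes
```

Rotating or thickening the strut shadows rotates or tightens the spikes accordingly, which is why differently built telescopes produce differently shaped stars.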

Hubble’s successor, the James Webb telescope, will have three struts rather than four[5], resulting in a very different set of diffraction spikes. Not only will the James Webb struts differ in number, but they will be arranged in a sort of triangular pyramid. Diffraction around the struts will affect the final image differently at different points along each strut, because they will occupy a range of distances from the primary mirror. The resulting spikes should be quite interesting.

Comet ISON image available at

Do you have a question for Abbe? Ask it in the comments or tweet it @theScinder