A windier world

A new paper in Nature Climate Change reports a reversal in “terrestrial stilling” since 2010 – i.e. global wind speeds, thought to be in decline thanks to deforestation and real estate development, actually stopped slowing around 2010 and have been climbing since.

The paper’s authors, a group of researchers from China, France, Singapore, Spain, the UK and the US, argue that the result can be explained by “decadal ocean-atmosphere oscillations” and conclude with further analysis that the increase “has increased potential wind energy by 17 ± 2% for 2010 to 2017, boosting the US wind power capacity factor by ~2.5% and explains half the increase in the US wind capacity factor since 2010.”

Now that we have some data to support the theory that both terrestrial and oceanic processes affect wind speeds and to what extent, the authors propose building models to predict wind speeds in advance and engineer wind turbines accordingly to maximise power generation.

This seems like a silver lining but it isn’t.

Global heating does seem to be influencing wind speeds. To quote from the paper again: “The ocean-atmosphere oscillations, characterised as the decadal variations in [mainly three climate indices] can therefore explain the decadal variation in wind speed (that is, the long-term stilling and the recent reversal).” This in turn empowers wind turbines to produce more energy and correspondingly lowers demand from non-renewable sources.

DOI: 10.1038/s41558-019-0622-6

However, three of the major sources of greenhouse gas emissions are concrete, plastics and steel manufacturing – and all three materials are required in not insubstantial quantities to build a wind turbine. So far from being a happy outcome of global heating, the increase in average regional wind speed – which the authors say could last for up to a decade – could drive the construction of more or, significantly, different turbines, which in turn would cause more greenhouse gases to be released into the atmosphere.

Finally, while the authors estimate the “global mean annual wind speed” increased from 3.13 m/s in 2010 to 3.3 m/s in 2017, the increase in the amount of energy entering a wind turbine is distributed unevenly by location: “22 ± 2% for North America, 22 ± 4% for Europe and 11 ± 4% for Asia”. Assuming these calculations are reliable, the figures suggest industrialised nations have a stronger incentive to capitalise on the newfound stilling reversal (from the same paper: “We find that the capacity factor for wind generation in the US is highly and significantly correlated with the variation in the cube of regional-average wind speed”).
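Because the energy available to a turbine scales with the cube of the wind speed, the headline figure is easy to sanity-check from the global mean speeds quoted above. A minimal sketch in Python (my own arithmetic, not the paper’s, and using only the two global means; the regional percentages would need the corresponding regional speeds):

```python
# Wind power scales with the cube of wind speed, so the fractional change in
# potential wind energy between two years is (v2 / v1)^3 - 1.
v_2010 = 3.13  # global mean annual wind speed in 2010, m/s (from the paper)
v_2017 = 3.30  # global mean annual wind speed in 2017, m/s (from the paper)

energy_change = (v_2017 / v_2010) ** 3 - 1
print(f"change in potential wind energy: {energy_change:.1%}")
# Prints ~17.2%, in line with the paper's global estimate of 17 ± 2%.
```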

On the other hand, Asia, with its weaker incentive, will continue to bear a disproportionate brunt of the climate crisis. To quote from an article published in The Wire Science today,

… as it happens, the idea that ‘green technology’ can help save the environment is dangerous because it glosses over the alternatives’ ills. In a bid to reduce the extraction of hydrocarbons for fuel as well as to manufacture components for more efficient electronic and mechanical systems, industrialists around the world have been extracting a wide array of minerals and metals, destroying entire ecosystems and displacing hundreds of thousands of people. It’s as if one injustice has replaced another.

Godwin Vasanth Bosco, The Wire Science, December 2, 2019

The circumstances in which scientists are science journos

On September 6, 2019, two researchers from Israel uploaded a preprint to bioRxiv entitled ‘Can scientists fill the science journalism void? Online public engagement with two science stories authored by scientists’. Two news sites invited scientists to write science articles for them, supported by a short workshop at the start of the programme and then by a group of editors during the ideation and editing process. The two researchers tracked and analysed the results, concluding:

Overall significant differences were not found in the public’s engagement with the different items. Although, on one website there was a significant difference on two out of four engagement types, the second website did not have any difference, e.g., people did not click, like or comment more on items written by organic reporters than on the stories written by scientists. This creates an optimistic starting point for filling the science news void [with] scientists as science reporters.

Setting aside questions about the analysis’s robustness: I don’t understand the point of this study (insofar as it concerns scientists being published in news websites, not blogs), as a matter of principle. When was the optimism in question ever in doubt? And if it was, how does this preprint paper allay it?

The study aims to establish whether articles written by scientists can be just as successful – in terms of drawing traffic or audience engagement – as articles penned by trained journalists working in newsrooms. There are numerous examples that this is the case, and there are numerous other examples that this is not. But by discussing the results of their survey in a scientific paper, the authors seem to want to elevate the possibility that articles authored by scientists can perform well to a well-bounded result – which seems questionable at best, even if it is strongly confined to the Israeli market.

To take a charitable view, the study effectively reaffirms one part of a wider reality.

I strongly doubt there’s a specific underlying principle that suggests a successful outcome, at least beyond the mundane truism that the outcome is a combination of many things. From what I’ve seen in India, for example, the performance of a ‘performant article’ depends on the identity of the platform, the quality of its editors, the publication’s business model and its success, the writer’s sensibilities, the magnitude and direction of the writer’s moral compass, the writer’s fluency in the language and medium of choice, the features of the audience being targeted, and the article’s headline, length, time of publication and packaging.

It’s true that a well-written article will often perform better than average and a poorly written article will perform worse than average, in spite of all these intervening factors, but these aren’t the only two states in which an article can exist. In this regard, claiming scientists “stand a chance” says nothing about the different factors in play and even less about why some articles won’t do well.

It also minimises editorial contributions. The two authors write in their preprint, “News sites are a competitive environment where scientists’ stories compete for attention with other news stories on hard and soft topics written by professional writers. Do they stand a chance?” This question ignores the publisher’s confounding self-interest: to maximise a story’s impact roughly proportional to the amount of labour expended to produce it, such as with the use of a social media team. More broadly, if there are fewer science journalists, there are also going to be fewer science editors (an event that precipitated the former will most likely precipitate the latter as well), which means there will also be fewer science stories written by anyone in the media.

Another issue here is something I can’t stress enough: science writers, communicators and journalists don’t have a monopoly on writing about science or scientists. The best science journalism has certainly been produced by reporters who have been science journalists for a while, but this is no reason to write off the potential for good journalism – in general – to produce stories that include science, nor to exclude such stories from analyses of how the people get their science news.

A simple example is environmental journalism in India. Thanks to prevalent injustices, many important nuggets of environmental and ecological knowledge appear in articles written by reporters working the social justice and political economics beats. This has an important lesson for science reporters and editors everywhere: not being employed full-time is typically a bitter prospect but your skills don’t have to manifest in stories that appear on pages or sections set aside for science news alone.

It also indicates that replenishing the workforce (even with free labour) won’t stave off the decline of science journalism – such as it is – as much as tackling deeper, potentially extra-scientific, issues such as parochialism and anti-intellectualism, and as a second step convincing both editors and marketers about the need to publish science journalism including and beyond considerations of profit.

Last, the authors further write:

This study examined whether readers reacted differently to science news items written by scientists as compared to news items written by organic reporters published on the same online news media sites. Generally speaking, based on our findings, the answer is no: audiences interacted similarly with both. This finding justifies the time and effort invested by the scientists and the Davidson science communication team to write attractive science stories, and justifies the resources provided by the news sites. Apparently if websites publish it, audiences will consume it.

An editor could have told you this in a heartbeat. Excluding audiences that consume content from niche outlets, and especially including audiences that flock to ‘destination’ sites (i.e. sites that cover nearly everything), authorship rarely ever matters unless the author is prominent or the publication highlights it. But while the Israeli duo has reason to celebrate this user behaviour, as it does, others have seen red.

For example, in December 2018, the Astronomy & Astrophysics journal published a paper by an Oxford University physicist named Jamie Farnes advancing a fanciful solution to the dark matter and dark energy problems. The paper was eventually widely debunked by scientists and science journalists alike but not before hundreds, if not thousands, of people were taken by an article in The Conversation that seemed to support the paper’s conclusions. What many of them – including some scientists – didn’t realise was that The Conversation often features scientists writing articles about their own work, and didn’t know the problem article had been written by Farnes himself.

So even if the preprint study skipped articles written by scientists about their own work, the duo’s “build it and they will come” inference is not generalisable, especially if – for another example – someone else from Oxford University had written favourably about Farnes’s paper. I regularly field questions from young scientist-writers baffled as to why I won’t publish articles that quote ‘independent’ scientists commenting on a study they didn’t participate in but which was funded, in part or fully, by the independent scientists’ employer(s).

I was hoping to neatly tie my observations together in a conclusion but some other work has come up, so I hope you won’t mind the abrupt ending as well as that, in the absence of a concluding portion, you won’t fall prey to the recency effect.

The imperfection of strontium titanate

When you squeeze some crystals, you distort their lattice of atoms just enough to separate a pair of charged particles, and that in turn gives rise to a voltage. Such materials are called piezoelectric crystals. Not all crystals are piezoelectric because the property depends on the arrangement of atoms in the lattice.

For example, the atoms of strontium, titanium and oxygen are arranged in a cubic structure to form strontium titanate (SrTiO3) such that each molecule displays a mirror symmetry through its centre. That is, if you placed a mirror passing through the molecule’s centre, the object plus its reflection would show the molecule as it actually is. Such molecules are said to be centrosymmetric, and centrosymmetric crystals aren’t piezoelectric.
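To make the centrosymmetry argument concrete, here’s a small illustrative check (my own sketch, assuming the ideal cubic perovskite positions for SrTiO3, not anything from the paper) that the lattice maps onto itself under inversion through the titanium site:

```python
# Ideal cubic perovskite SrTiO3 (fractional coordinates of the unit cell):
# Sr at the corner, Ti at the body centre, O at the three face centres.
atoms = {
    "Sr": [(0.0, 0.0, 0.0)],
    "Ti": [(0.5, 0.5, 0.5)],
    "O":  [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)],
}

centre = (0.5, 0.5, 0.5)  # try the Ti site as the inversion centre

def wrap(pos):
    # Keep coordinates inside the unit cell and round away float noise.
    return tuple(round(x % 1.0, 6) for x in pos)

def invert(pos):
    # Inversion through `centre`: r -> 2*centre - r, wrapped back into the cell.
    return wrap(tuple(2 * c - x for c, x in zip(centre, pos)))

centrosymmetric = all(
    invert(p) in {wrap(q) for q in positions}
    for positions in atoms.values()
    for p in positions
)
print("centrosymmetric:", centrosymmetric)  # True: every atom maps onto an identical atom
```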

In fact, strontium titanate isn’t ferroelectric or pyroelectric either – an external electric field can’t reverse its polarisation, nor does it produce a voltage when it’s heated or cooled – for the same reason. Its crystal lattice is just too symmetrical.

The strontium titanate lattice. Oxygen atoms are red, titanium cations are blue and strontium cations are green.

However, scientists haven’t been deterred by this limitation (such as it is) because its perfect symmetry indicates that messing with the symmetry can introduce new properties in the material. There are also natural limits to the lattice itself. A cut and polished diamond looks beautiful because, at its surface, the crystal lattice ends and the air begins – arbitrarily stopping the repetitive pattern of carbon atoms.

An infinite diamond that occupies all points in the universe might look good on paper but it wouldn’t be nearly as resplendent because only the symmetry-breaking at the surface allows light to enter the crystal and bounce around. Similarly, centrosymmetric strontium titanate might be a natural wonder, so to speak, but the centrosymmetry also keeps it from being useful (despite its various unusual properties; e.g. it was the first insulator found to be a superconductor at low temperatures, in 1967).

Tausonite, a naturally occurring mineral form of strontium titanate. Credit: Materialscientist/Wikimedia Commons, CC BY-SA 3.0

So does strontium titanate exhibit pyro- or piezoelectricity on its surface? Surprisingly, while this seems like a fairly straightforward question to ask, it hasn’t been straightforward to answer.

A part of the problem is the definition of a surface. Obviously, the surface of any object refers to the object’s topmost or outermost layer. But when you’re talking about, say, a small electric current originating from the material, it’s difficult to imagine how you could check if the current originated from the bulk of the material or just the surface.

Researchers from the US, Denmark and Israel recently reported resolving this problem using concepts from thermodynamics 101. If the surface of strontium titanate is pyroelectric, the electric currents it produces should co-exist with the heat that induces them. So if a bit of heat is applied and taken away, the material should begin cooling (or thermalising) and the electric currents should also dissipate. The faster the material cools, the faster the currents dissipate, and the faster the currents dissipate, the shallower the depth to which the material is pyroelectric.

In effect, the researchers induced pyroelectricity and then tracked how quickly it vanished to infer how deeply inside the material it existed.

Both the bulk and the surface are composed of the same atoms, but the atomic lattice on the surface also has a bit of surface tension. Materials scientists have already calculated how deeply this tension penetrates the surface of strontium titanate, so the question was also whether the pyroelectric behaviour was contained in this region or went beyond, into the rest of the bulk.

The team sandwiched a slab of strontium titanate between two electrodes, at room temperature. At the crystal-electrode interface, which is a meeting of two surfaces, opposing charged particles on either side gather and neutralise themselves. But when an infrared laser is shined on the ensemble (as shown above), the surface of strontium titanate heats up and develops a voltage, which in turn draws the charges at its surface away from the interface. The charges in the electrode are then left without a partner so they flow through a wire connected to the other electrode and create a current.

The laser is turned off and the strontium titanate’s surface begins to cool. Its voltage drops and allows the charged particles to move away from each other, and some of them move towards the surface to once again neutralise oppositely charged particles from the other side. This process stops the current. So measuring how quickly the current drops off gives away how quickly the voltage vanishes, which gives away how much of the material’s volume developed a voltage due to the pyroelectric effect.
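As a toy illustration of that inference (not the authors’ actual analysis, and with entirely made-up numbers): fitting an exponential to the measured current trace gives the thermalisation time, and integrating the trace gives the total charge released by the surface layer as it cools.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic pyroelectric current trace after the laser is switched off
# (a stand-in for a measurement; all numbers are invented and in SI units).
rng = np.random.default_rng(0)
t = np.linspace(0, 5e-3, 500)              # time since the laser was switched off, s
true_tau, true_I0 = 8e-4, 2e-9             # decay time (s) and initial current (A)
current = true_I0 * np.exp(-t / true_tau) + rng.normal(0, 2e-11, t.size)

# Fit I(t) = I0 * exp(-t / tau): tau tells us how quickly the surface voltage vanishes.
def decay(t, I0, tau):
    return I0 * np.exp(-t / tau)

(I0_fit, tau_fit), _ = curve_fit(decay, t, current, p0=(1e-9, 1e-3))

# Integrating the current gives the total charge released as the surface layer cools;
# together with the electrode area and the temperature change, this constrains how
# thick the pyroelectric region can be.
charge = float(np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(t)))

print(f"fitted decay time: {tau_fit:.2e} s, released charge: {charge:.2e} C")
```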

The penetration depth the group measured was in line with the calculations based on surface tension: about 1.2 nm. To be sure the effect didn’t involve the bulk, the researchers repeated the experiment with a thin layer of silica (the major component of sand) on top of the strontium titanate surface, and there was no electric current when the laser was on or off.

In fact, according to a report in Nature, the team also took various precautions to ensure any electric effects originated only from the surface and were due to effects intrinsic to the material itself.

… they checked that the direction of the heat-induced current does not depend on the orientation of the crystal, ruling out a bulk effect; and that the local heating produced by the laser is very small…, which means that the strain gradients induced by thermal expansion are insignificant. Other experiments and data analysis were carried out to exclude the possibility that the induced current is due to molecules … adsorbed to the surface, charges trapped by lattice defects, excitation of free electrons induced by light, or the thermoelectric Seebeck effect (which generates currents in semiconductors that contain temperature gradients).

Now we know strontium titanate is pyroelectric, and piezoelectric, on its surface at room temperature – but this is not all we know. During their experiments (with different samples of the crystal), the researchers spotted something odd:

The pyroelectric coefficient – a measure of the strength of the material’s pyroelectricity – was constant between 193 K and 225 K (–80.15º C to –48.15º C) but dropped sharply above 225 K and vanished above 380 K. The researchers note in their paper, published on September 18, that others have previously reported that the strontium titanate lattice near the surface changes from a cubic to a tetragonal structure at around 150 K, and that a similar transformation could be happening at 225 K.

In other words, the surface pyroelectric effect wasn’t just producing a voltage but could in fact be altering the relative arrangement of atoms itself. We don’t yet know what the precise mechanism of action could be, nor what other features might arise in the material as a result. The researchers hope future studies can resolve these questions.

A composition of 300 13-second exposures taken within 70 minutes from Waldenburg, Germany, on August 12, 2018. Most of the lines are made by satellites reflecting sunlight from the Sun below the horizon. Caption and photo: Eckhard Slawik/IAU

Starlink and astronomy

SpaceX’s Starlink constellation is currently a network of 120+ satellites that, in the next decade, will expand to 10,000+ to provide low-cost internet from space around the world. Astronomers everywhere have been pissed off with these instruments because they physically interfere with observations of the night sky, especially those undertaken by survey telescopes with wide fields of view, and because some of their signals could interfere electromagnetically with radio astronomy.

In his resourceful new book The Consequential Frontier (2019), on “challenging the privatisation of space”, Peter Ward quotes James Vedda, senior policy analyst for the Centre for Space Policy and Strategy at the Aerospace Corporation, on the expansion of the American railroad in the 19th century:

Everybody likes to point to the railroad and say that, ‘Oh, back in the nineteenth century, when all this was all being built up, it was all built by the private sector.’ Well, hold on a minute. They didn’t do it alone because they were given huge amounts of land to lay their tracks and to build their stations. And not just a little strip of land wide enough for the tracks, they were usually given up to a mile on either side. … I read one estimate that in the nineteenth-century development of the railroads, the railroad companies were given land grants that if you total them all up together were equivalent to the size of Texas. They sold off all that extra land [and] they found that they got to keep the money. Besides that, the US Geological Survey went out and did this surveying for them and gave them the results for free so that is a significant cost that they didn’t have.

Ward extends Vedda’s comments to the activities of SpaceX and Blue Origin, the private American space companies stewarded by Elon Musk and Jeff Bezos, respectively. We’re not in the golden age of private spaceflight thanks to private enterprise. Instead, just like the Information Age owes itself to defence contracts awarded to universities and research labs during World War II and the Cold War, private operators owe themselves to profitable public-private partnerships funded substantially by federal grants and subsidies in the 1980s and 1990s. It would be doubly useful to bear this in mind when thinking about Starlink as well.

When Musk was confronted a month or so ago with astronomers’ complaints, he replied (via Twitter) that astronomers will have to launch more space telescopes “anyway”. This is not true, but even if it were, it recalls the relationship between private and public enterprise from over a century ago. As the user @cynosurae pointed out on Twitter, space telescopes are expensive (relative to ground-based instruments with similar capabilities and specifications) and they can only be built with government support in terms of land, resources and funds. That is, the consequences of Musk’s ambition – economists call them negative externalities – vis-à-vis the astronomy community can only be offset by taxpayer money.

Many Twitter users have been urging Musk to placate Starlink’s detractors by launching a telescope for them, but science isn’t profitable except in the long term. More importantly, the world’s astronomers are not likely to persuade the American government (whose FAA issues payload licenses and whose FCC regulates spectrum use) to force SpaceX to work with them – such as through the International Astronomical Union, which has offered its assistance – and keep Starlink from disrupting studies of the night sky.

It’s pertinent to remind ourselves at this juncture that while the consequences for astronomy have awakened us to SpaceX’s transgression, the root cause is not the availability of the night sky for unimpeded astronomical observations. That’s only the symptom; the deeper malaise is unilateral action to impact a resource that belongs to everyone.

Musk or anyone else can’t deny that their private endeavours often incur, and impose, costs that the gloss of private enterprise tends to pass over.

It wouldn’t matter if SpaceX is taken to court for its rivalrous use of the commons. Without the FAA, FCC or any other, even an international, body regulating satellite launches, orbital placement, mission profile, spectrum use, mission lifetime and – now – appearance, orbital space is going to get really crowded really fast. According to one projection, “between 2019 and 2028, more than 8,500 satellites will be launched, half of which will be to support broadband constellations, for a total market value of $42 billion”. SpaceX’s Falcon 9 rocket can already launch 60 Starlink satellites in one go; India and China have also developed new rockets to more affordably launch more small-sats more often.

Comparable regulatory leverage currently exists only with the International Telecommunication Union (ITU), which oversees spectrum use. It has awarded 1,800 orbital slots in the geosynchronous orbit to national telecom regulators, such as the FCC in the US and the DoT in India. Regional operators register these slots and station telecommunication satellites there, each working with a predetermined set of frequencies.

Non-communication satellites as well as satellites in other orbits aren’t so formally organised. Satellite operators do work with the space and/or defence agencies of other countries to ensure their instruments don’t conflict with others in any way, in the interest of both self-preservation and debris mitigation. But beyond the ITU, no international body regulates satellite launches into any other orbits, and even the ITU doesn’t regulate any mission parameters beyond data transmission.

Starlink satellites will occupy the low-Earth (550 km and 1,150 km) and very-low-Earth orbits (340 km).

So an abundance of financial incentives, a dearth of policies and the complete absence of regulatory bodies allow private players a free run in space. Taking SpaceX to court at this juncture would miss the point (even if it were possible): the commons may have indirect financial value but their principal usefulness is centred on their community value, and which the US has undermined with its unilateral action. Musk has said his company will work with astronomers and observatories to minimise Starlink’s impact on their work but astronomers are understandably miffed that this offer wasn’t extended before launch and because absolute mitigation is highly unlikely with 12,000 (if not 42,000) satellites in orbit.

Taking a broader view, Starlink is currently the most visible constellation – literally and figuratively – but it’s not alone: space is only becoming more profitable, and other planned or functional constellations include Athena, Iridium and OneWeb. It would be in everyone’s best interests in this moment to get in front of this expansion and find a way to ensure all countries have equal access and opportunities to extract value from orbital space as well as equal stake in maintaining it as a shared resource.

In fact, just as the debate between SpaceX and its supporters on the one hand and astronomers on the other has spotlighted what’s really at stake, it should also alert us that others should get to participate as well.

The bigger issue doesn’t concern astronomical observations – less interference with astronomical activity won’t make SpaceX’s actions less severe – nor low-cost internet (although one initial estimate suggests a neat $80, or Rs 5,750, per month) but of a distinctly American entity colonising a commons and preventing others from enjoying it. Governments – as in the institutions that make railroads, universities and subsidies possible – and not astronomers alone should decide, in consultation with their people as well as each other, what the next steps should be.

An edited version of this article appeared in The Wire on November 20, 2019.

The press office

A press-officer friend recently asked me for pointers on how he could help journalists cover the research institute he now works at better. My response follows:

  1. Avoid the traditional press release format and use something like Axios’s: answer the key questions, nothing more. No self-respecting organisation is going to want to republish press releases. This way also saves you time.
  2. Make scientists from within the institute, especially women, members of minority groups and postdocs, available for comment – whether on their own research or on work by others. This means keeping them available (at certain times if need be) and displaying their contact information.
  3. If you’re going to publish blogs, it would be great if they’re on a CC BY or CC BY-SA (or even something a little more restrictive like CC BY-NC-ND) license so that interested news organisations can republish them. If you’re using the ND license, please ensure the copy is clear.
  4. Pictures are often an issue. If you could take some nice pics on your phone and post them on, say, the CC library on Flickr, that would be great. These can be pics of the institute, instruments, labs, important people, events, etc.

If you have inputs/comments for my friend and subscribe to this blog, simply reply to the email in your inbox containing this post and you’ll reach me.

A woman holding her right index finger over her lips, indicating silence.

Indian scicomm’s upside-down world

Imagine a big, poisonous tree composed of all the things you need to screw up to render a field, discipline or endeavour an elite club of just one demographic group. When it comes to making it more inclusive, whether by gender, race, ethnicity, etc., the lowest of low-hanging fruit on this tree is quantitative correction: increase the number of those people there aren’t enough of. Such a solution should emerge from any straightforward acknowledgment that a problem exists together with a need to be seen to be acting quickly.

Now, the lower the part of the tree, the easier it should be to address. There’s a corresponding suckiness figure here, proportional to the inverse of a fruit’s height from the ground: not plucking the low-hanging fruit and throwing it away is the suckiest thing because doing so would be the easiest thing. For example, the National Centre for Science Communicators (NCSC) recently organised an event composed entirely of men – i.e. a manel – and it was the suckiest thing because manels are the most easily fixed of the failures that sustain gender disparities in science and science communication, requiring no cultural remediation to undo.

The lidless eye of @IndScicomm picked up on this travesty and called the NCSC out on Twitter, inadvertently setting off an avalanche of responses, each one more surprised than the last over the various things the NCSC has let slip in this one image. Apart from the sausage fest, for example, all eight men are older (no need to guess numbers, they all look like boomers).

It’s possible:

  1. Each one of these men, apart from the one from the organising body, wasn’t aware he was going to be on a manel,
  2. They don’t recognise that there’s a problem,
  3. They recognise the problem but simply don’t care that there aren’t any women among them – including women being by itself only the smallest modicum of change, whereas a fuller correction would include people of various genders and castes – or
  4. They believe the principles of science communication are agnostic of – rather, that they transcend – the medium used, and the medium is what has changed the most between the boomers’ generation and the millennials’.

I find the last two options most plausible (the first two are forms of moral abdication), and only the last one worth discussing, because it seems to be applicable to a variety of science communication endeavours being undertaken around India with a distinct whiff of bureaucracy.

In December 2018, one of the few great souls that quietly flag interesting things brought to my attention an event called the ‘Indian Science Communication Congress’ (ISCC), ep. 18, organised by CSIR NISCAIR and commemorating the 200th year of ‘science journalism in India’. What happened in 1818? According to Manoj Patairiya, the current director of NISCAIR, “Science journalism started in India in 1818 with the publication of monthly Digdarshan published in Hindi, Bengali and English, carrying a few articles on science and technology.” This is a fairly troublesome description because of its partly outdated definition of science journalism, at least if NISCAIR considers what Digdarshan did to be science journalism, and because the statement implies a continuous presence of communication efforts in the country from the early 19th century – which I doubt has been the case.

I didn’t attend the event – not because I wasn’t invited or that I didn’t know such an event existed but because I wouldn’t have been the ideal participant given the format:

It seems (including based on one attendee’s notes) the science communication congress was a science-of-science-communication + historical review congress, the former a particularly dubious object of study for its scientistic attitude, and which the ISCC’s format upholds with barely contained irony. Perhaps there’s one more explanation: an ancient filtration system (such as from 1951, when NISCAIR was set up) broke but no one bothered to fix it – i.e. the government body responsible for having scientists speak up about their work is today doing the bare minimum needed to meet whatever its targets have been, which includes gathering scholars of science communication in a room and having them present papers about how they think it can be improved, instead of setting new targets for a new era. This is the principal symptom of directive-based change-making.

Then again, I might be misguided on the congress’s purpose. On two fairly recent occasions – in August 2018 and September 2019 – heart-in-the-right-place scientists have suggested they could launch a journal, of all things, to help popularise science. Is it because scientists in general have trouble seeing beyond journals vis-à-vis the ideal/easiest way to present knowledge (if such a thing even exists); because they believe other scientists will take them more seriously if they’re reaching out via a journal; or because writing for a journal allows them to justify how they’re spending their time with their superiors?

The constructive dilemma inherent in the possible inability to imagine a collection of articles beyond journals also hints at a possible inability to see beyond the written article. But as the medium has changed, so have the messages, together with the ways in which people seek new information. Moreover, by fixating on science communication as a self-contained endeavour that doesn’t manifest outside of channels earmarked for it, we risk ignoring science communication when it happens in new, even radical, environments.

For example, we’re all learning about the role archaeological findings play in the construction of historical narratives by questioning the Supreme Court’s controversial verdict on the Ayodhya title case. For another, I once learnt about why computational fluid dynamics struggles to simulate flowing water (because of how messed up the Navier-Stokes equations are) during a Twitch livestream.

But if manel-ridden conferences and poster presentations are what qualify as science communication, and not just support for it, the hyperobject of our consternation as represented in the replies to @IndScicomm’s tweet is as distinct a world as Earth is relative to Jupiter, and we might all just be banging our heads over the failures of a different species of poisonous tree. Maybe NCSC and NISCAIR, the latter more so, mean something else when they say ‘science communication’.

Maybe the ‘science communication’ that The Wire or The Print, etc. practice is a tradition imported from a different part of the world, with its own legacy, semantics and purpose, such as to be addressed to English-speaking, upper-class urbanites. At a talk in Chennai last year, for example, a prominent science communicator mentioned that there were only a handful of science journalists in India, which could’ve been true if he was reading only English-language newspapers. Maybe these labels are in passive conflict with the state-sponsored variety of ‘science journalism’ that the government nurtured shortly after Independence to cater to lower-class, Indian-languages-speaking citizens of rural India, which didn’t become profitable until the advent of economic liberalisation and the internet, but which today – and perhaps as seen from the PoV of a different audience – seems bureaucratic and insipid.

Then again, the rise of the ‘people’s science movement’ in the 1970s, led by organisations like Eklavya, Kalpavriksh, Vidushak Karkhana, Vigyan Shiksha Kendra and Medico Friend Circle would suggest that ‘science communication’ of the latter variety wasn’t entirely successful. Thanks also to Gauhar Raza, the scientist and social activist who spent years studying the impact of government-backed science communication initiatives and came away unable to tell if they had succeeded at all, and given what we’re seeing of NCSC’s, NISCAIR’s and the science congress’s activities, it may not be unreasonable to ask if the two ‘science communications’ are simply two different worlds or a new one still finding its footing and an older one whose use-case is rapidly diminishing.

Ultimately, let’s please stop inviting discussion on science communication through abstracts and research papers, organising “scientific sessions” for a science communication congress (which seems to be in the offing at a ‘science communicator’s meet’ at the 2020 Indian Science Congress as well) and having old men deliberate on “recent trends in science communication” – and turn an ear to practising communicators and journalists instead.

Cassini's last shot of Titan, taken by the probe's narrow-angle camera on September 13, 2017. Credit: NASA

A new map of Titan

It’s been a long time since I’ve obsessed over Titan, primarily because after the Cassini mission ended, the pace of updates about Titan died down, and because other moons of the Solar System (Europa, Io, Enceladus, Ganymede and our own) became more important. There have been three or four notable updates since my last post about Titan but this post that you’re reading has been warranted by the fact that scientists recently released the first global map of the Saturnian moon.

(This Nature article offers a better view but it’s copyrighted. The image above is a preview offered by Nature Astronomy; the paper itself is behind a paywall and I couldn’t find a corresponding copy on Sci-Hub or arXiv, nor have I written to the corresponding author – yet.)

It’s fitting that Titan be accorded this privilege – of a map of all locations on the planetary body – because it is by far the most interesting of the Solar System’s natural satellites (although Europa and Triton come very close) and, were it not orbiting the ringed giant, it could well be a planet in its own right. I can think of a lot of people who’d agree with this assessment but most of them tend to focus on Titan’s potential for harbouring life, especially since NASA’s going to launch the Dragonfly mission to the moon in 2026. I think they’ve got it backwards: there are a lot of factors that need to come together just right for any astronomical body to host life, and fixating on habitability combines these factors and flattens them to a single consideration. But Titan is amazing because it’s got all these things going on, together with many other features that habitability may not be directly concerned with.

While this is the first such map of Titan, and has received substantial coverage in the popular press, it isn’t the first global assessment of its kind. Most recently, in December 2017, scientists (including many authors of the new paper) published two papers on the moon’s topography (this and this), based on which they were able to note – among other things – that Titan’s three seas have a common sea level; many lakes have surfaces hundreds of metres above this level (suggesting they’re elevated and land-locked); many lakes are connected under the surface and drain into each other; polar lakes (the majority) are bordered by “sharp-edged depressions”; and Titan’s crust has uneven thickness as evidenced by its oblateness.

According to the paper’s abstract, the new map brings two new kinds of information to the table. First, the December 2017 papers were based on hi- and low-res images of about 40% of Titan’s surface whereas, for the new map, the authors write: “Correlations between datasets enabled us to produce a global map even where datasets were incomplete.” More specifically, areas for which the authors didn’t have data from Cassini’s Synthetic Aperture Radar instrument were mapped at 1:2,000,000 scale whereas areas with data enabled a map at 1:800,000 scale. Second are the following inferences of the moon’s geomorphology (from the abstract the authors presented to a meeting of the American Astronomical Society in October 2018):

We have used all available datasets to extend the mapping initially done by Lopes et al. We now have a global map of Titan at 1:800,000 scale in all areas covered by Synthetic Aperture Radar (SAR). We have defined six broad classes of terrains following Malaska et al., largely based on prior mapping. These broad classes are: craters, hummocky/mountainous, labyrinth, plains, lakes, and dunes [see image below]. We have found that the hummocky/mountainous terrains are the oldest units on the surface and appear radiometrically cold, indicating icy materials. Dunes are the youngest units and appear radiometrically warm, indicating organic sediments.

SAR images of the six morphological classes (in the order specified in the abstract)

More notes once I’ve gone through the paper more thoroughly. And if you’d like to read more about Titan, here’s a good place to begin.

The trouble with laser-cooling anions

For scientists to use lasers to cool an atom, the atom needs to have two energy states. When laser light is shined on an atom moving towards the source of light, one of its electrons absorbs a photon and climbs to a higher energy state, and the atom as a whole loses some momentum. A short span of time later, the electron emits the photon in a random direction and drops back to its lower energy state, and the atom’s momentum changes only marginally.

By repeating this series of steps over and over, scientists can use lasers to considerably slow atoms and decrease their temperature as well. For a more detailed description + historical notes (including a short profile of a relatively forgotten Indian scientist who contributed to the development of laser-cooling technologies), read this post.

However, it’s hard to use this technique with most anions – negatively charged ions – because they don’t have a higher energy state per se. Instead, when laser light is shined on the atom, the electron responsible for the excess negative charge absorbs the photon and the atom simply ejects the energised electron.

If the technique is to work, scientists need to find an anion that is bound to its one excess electron (the electron keeping it from being electrically neutral) strongly enough that, as the electron acquires more energy, the atom ascends to a higher energy state with it instead of just losing it. Scientists discovered the first such anion – osmium – in the previous decade and have since added only three more candidates to the list: lanthanum, cerium and diatomic carbon (C2). Lanthanum remains the most effective anion coolable with lasers. However, if the results of a study published on November 12 are to be believed, the thorium anion could be the new champion.

Laser-cooling is simpler than most atomic cooling techniques, such as laser-assisted evaporative cooling, and is known to be very effective. Applying it to anions would expand its gamut of applications. There are also techniques like sympathetic cooling, in which one type of laser-cooled anion cools other types of anions trapped in the same container. This way, for example, physicists think they can produce the ultra-cold anti-hydrogen atoms required to study the similarities between matter and antimatter.

The problem with finding a suitable anion is centred on the atom’s electron affinity. It’s the amount of energy an electrically neutral atom gains or loses when it takes on one more electron and becomes an anion. If the atom’s electron affinity is too low, the energy imparted or taken away by the photons could free the electron.

Until recently, theoretical calculations suggested the thorium anion had an electron affinity of around 0.3 eV – too low. However, the new study found, based on experiments and calculations, that the actual figure could be twice as high, around 0.6 eV, advancing the thorium anion as a new candidate for laser-cooling.

The study’s authors also report other properties that make thorium even more suitable than lanthanum. For example, the atomic nucleus of the sole stable lanthanum isotope has a spin, so as it interacts with the magnetic field produced by the electrons around it, it subtly interferes with the electrons’ energy levels and makes laser-cooling more complicated than it needs to be. Thorium’s only stable isotope has zero nuclear spin, so these complications don’t arise.

There doesn’t seem to be a working proof of the study’s results but it’s only a matter of time before other scientists devise a test because the study itself makes a few concrete predictions. The researchers expect that thorium anions can be cooled with laser light of 2.6-micrometre wavelength to a frosty 0.04 microkelvin. They suggest doing this in two steps: first cooling the anions to around 10 kelvin and then cooling a collection of them further by enabling the absorption and emission of about 27,000 photons, tuned to the specified wavelength, in a little under three seconds.
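Those numbers hang together in a back-of-the-envelope check (my own arithmetic, not the paper’s): each absorbed 2.6-micrometre photon changes a thorium atom’s velocity by the recoil h/(mλ), and about 27,000 such kicks are roughly what it takes to cancel the thermal velocity of a thorium anion at 10 kelvin.

```python
# Rough consistency check of the proposed cooling scheme (my arithmetic, not the paper's).
h  = 6.626e-34        # Planck constant, J s
kB = 1.381e-23        # Boltzmann constant, J/K
m  = 232 * 1.661e-27  # approximate mass of thorium-232, kg

wavelength = 2.6e-6   # cooling light, m (from the study)
n_photons  = 27_000   # photons absorbed and re-emitted per anion (from the study)
T_start    = 10.0     # pre-cooling temperature, K (from the study)

recoil_velocity = h / (m * wavelength)         # velocity kick per absorbed photon, m/s
total_kick = n_photons * recoil_velocity       # net slowing if every kick opposes the motion
thermal_velocity = (kB * T_start / m) ** 0.5   # 1D rms velocity of a thorium atom at 10 K

print(f"recoil per photon: {recoil_velocity:.1e} m/s")
print(f"27,000 photons remove about {total_kick:.0f} m/s")
print(f"1D rms velocity at 10 K: about {thermal_velocity:.0f} m/s")
# Both come out near 18-19 m/s, so the quoted photon budget is the right order of magnitude.
```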

A cloud of grey-black smoke erupts over a brown field, likely the result of an explosion of some sort.

Disastrous hype

This is one of the worst press releases accompanying a study I’ve seen:

The headline and the body appear to have nothing to do with the study itself, which explores the creative properties of an explosion with certain attributes. However, the press office of the University of Central Florida has drafted a popular version that claims researchers – who are engineers more than physicists – have “detailed the mechanisms that could cause the [Big Bang] explosion, which is key for the models that scientists use to understand the origin of the universe.” I checked with a physicist, who agreed: “I don’t see how this is relevant to the Big Bang at all. Considering the paper is coming out of the department of mechanical and aerospace engineering, I highly doubt the authors intended for it to be reported on this way.”

Press releases that hype results are often the product of an overzealous university press office working without inputs from the researchers that obtained those results, and this is probably the case here as well. The paper’s abstract and some quotes by one of the researchers, Kareem Ahmed from the University of Central Florida, indicate the study isn’t about the Big Bang but about similarities between “massive thermonuclear explosions in space and small chemical explosions on Earth”. However, the press release’s author slipped in a reference to the Big Bang because, hey, it was an explosion too.

The Big Bang was not like any stellar explosion; its material constituents were vastly different from anything that goes boom today – whether on Earth or in space – and physicists have various ideas about what could have motivated the bang to happen in the first place. The first supernovas are also thought to have occurred only a few hundred million years after the Big Bang. This said, Ahmed was quoted saying something that could have used more clarification in the press release:

We explore these supersonic reactions for propulsion, and as a result of that, we came across this mechanism that looked very interesting. When we started to dig deeper, we realized that this is relatable to something as profound as the origin of the universe.

Err…

The climate and the A.I.

A few days ago, the New York Times and other major international publications sounded the alarm over a new study that claimed various coastal cities around the world would be underwater to different degrees by 2050. However, something seemed off; it couldn’t have been straightforward for the authors of the study to plot how much the sea-level rise would affect India’s coastal settlements. Specifically, the numbers required to calculate how many people in a city would be underwater aren’t readily available in India, if at all they do exist. Without this bit of information, it’s easy to disproportionately over- or underestimate certain outcomes for India on the basis of simulations and models. And earlier this evening, as if on cue, this thread appeared:

This post isn’t a declaration of smugness (although that is tempting) but an attempt to turn your attention to one of Palanichamy’s tweets in the thread:

One of the biggest differences between the developed and the developing worlds is clean, reliable, accessible data. There’s a reason USAfacts.org exists whereas in India, data discovery is as painstaking a part of the journalistic process as is reporting on it and getting the report published. Government records are fairly recent. They’re not always available at the same location on the web (data.gov.in has been remedying this to some extent). They’re often incomplete or not machine-readable. Every so often, the government doesn’t even publish the data – or changes how it’s obtained, rendering the latest dataset incompatible with previous versions.

This is why attempts to model Indian situations and similar situations in significantly different parts of the world (i.e. developed and developing, not India and, say, Mexico) in the same study are likely to deviate from reality: the authors might have extrapolated the data for the Indian situation using methods derived from non-native datasets. According to Palanichamy, the sea-level rise study took AI’s help for this – and herein lies the rub. With this study itself as an example, there are only going to be more – and potentially more sensational – efforts to determine the effects of continued global heating on coastal assets, whether cities or factories, paralleling greater investments to deal with the consequences.

In this scenario, AI, and algorithms in general, will only play a more prominent part in determining how, when and where our attention and money should be spent, and controlling the extent to which people think scientists’ predictions and reality are in agreement. Obviously the deeper problem here lies with the entities responsible for collecting and publishing the data – and aren’t doing so – but given how the climate crisis is forcing the world’s governments to rapidly globalise their action plans, the developing world needs to inculcate the courage and clarity to slow down, and scrutinise the AI and other tools scientists use to offer their recommendations.

It’s not a straightforward road from having the data to knowing what it implies for a city in India, a city in Australia and a city in Canada.