A cloud of grey-black smoke erupts over a brown field, likely the result of an explosion of some sort.

Disastrous hype

This is one of the worst press releases accompanying a study I’ve seen:

The headline and the body appear to have nothing to do with the study itself, which explores the creative properties of an explosion with certain attributes. However, the press office of the University of Central Florida has drafted a popular version that claims researchers – who are engineers more than physicists – have “detailed the mechanisms that could cause the [Big Bang] explosion, which is key for the models that scientists use to understand the origin of the universe.” I checked with a physicist, who agreed: “I don’t see how this is relevant to the Big Bang at all. Considering the paper is coming out of the department of mechanical and aerospace engineering, I highly doubt the authors intended for it to be reported on this way.”

Press releases that hype results are often the product of an overzealous university press office working without inputs from the researchers that obtained those results, and this is probably the case here as well. The paper’s abstract and some quotes by one of the researchers, Kareem Ahmed from the University of Central Florida, indicate the study isn’t about the Big Bang but about similarities between “massive thermonuclear explosions in space and small chemical explosions on Earth”. However, the press release’s author slipped in a reference to the Big Bang because, hey, it was an explosion too.

The Big Bang was like no stellar explosion; its material constituents were vastly different from anything that goes boom today – whether on Earth or in space – and physicists have various ideas about what could have triggered the bang in the first place. The first supernovas are also thought to have occurred only after the earliest stars had formed and died, hundreds of millions of years after the Big Bang. This said, Ahmed was quoted saying something that could have used more clarification in the press release:

We explore these supersonic reactions for propulsion, and as a result of that, we came across this mechanism that looked very interesting. When we started to dig deeper, we realized that this is relatable to something as profound as the origin of the universe.

Err…

The climate and the A.I.

A few days ago, the New York Times and other major international publications sounded the alarm over a new study that claimed various coastal cities around the world would be underwater to different degrees by 2050. However, something seemed off; it couldn’t have been straightforward for the authors of the study to plot how much sea-level rise would affect India’s coastal settlements. Specifically, the numbers required to calculate how many people in a city would be underwater aren’t readily available in India, if they exist at all. Without this bit of information, it’s easy to over- or underestimate certain outcomes for India on the basis of simulations and models. And earlier this evening, as if on cue, this thread appeared:

This post isn’t a declaration of smugness (although that is tempting) but an attempt to turn your attention to one of Palanichamy’s tweets in the thread:

One of the biggest differences between the developed and the developing worlds is clean, reliable, accessible data. There’s a reason USAfacts.org exists; in India, by contrast, data discovery is as painstaking a part of the journalistic process as reporting itself and getting the report published. Government records are fairly recent. They’re not always available at the same location on the web (data.gov.in has been remedying this to some extent). They’re often incomplete or not machine-readable. Every so often, the government doesn’t even publish the data – or changes how it’s obtained, rendering the latest dataset incompatible with previous versions.

This is why attempts to model Indian situations and similar situations in significantly different parts of the world (i.e. developed and developing, not India and, say, Mexico) in the same study are likely to deviate from reality: the authors might have extrapolated the data for the Indian situation using methods derived from non-native datasets. According to Palanichamy, the sea-level rise study took AI’s help for this – and herein lies the rub. With this study itself as an example, there are only going to be more – and potentially more sensational – efforts to determine the effects of continued global heating on coastal assets, whether cities or factories, paralleling greater investments to deal with the consequences.

In this scenario, AI, and algorithms in general, will only play a more prominent part in determining how, when and where our attention and money should be spent, and in controlling the extent to which people think scientists’ predictions and reality are in agreement. Obviously the deeper problem here lies with the entities responsible for collecting and publishing the data – and which aren’t doing so – but given how the climate crisis is forcing the world’s governments to rapidly globalise their action plans, the developing world needs to muster the courage and clarity to slow down, and scrutinise the AI and other tools scientists use to offer their recommendations.

It’s not a straightforward road from having the data to knowing what it implies for a city in India, a city in Australia and a city in Canada.

The virtues of local travel

Here’s something I wish I’d read before overtourism and flygskam removed the pristine gloss of desirability from the selfies, 360º panoramas and videos the second-generation elites posted every summer on social media:

It’s ok to prioritize friendships, community, and your mental health over travelling.

Amir Salihefendic, the head of a tech company, writes this after having moved from Denmark to Taiwan for a year, and reflects on the elements of working remotely, the toll it inevitably takes, and how the companies (and the people) that champion this mode of work often neglect to mention its unglamorous side.

Remote work works only if the company’s management culture is cognisant of it. It doesn’t work if one employee of a company that ‘extracts’ work by seating its people in physical proximity, such as in offices or even co-working spaces, chooses to work from another location. This is because, setting aside the traditional reasons for which people work in the presence of other people,  offices are also designed to institute conditions that maximise productivity and, ideally, minimise stress or mental turbulence.

But what Salihefendic wrote is also true for travelling, which he undertook by going from Denmark to Taiwan. Travelling here is an act that – in the form practiced by those who sustain the distinction between a place to work, or experience pain, and a place in which to experience pleasure – renders long-distance travel a class aspiration, and the ‘opposing’ short-distance travel a ‘lesser’ thing for not maintaining the same social isolation that our masculine cities do.

This is practically the Protestant ethic that Max Weber described in his analysis of the origins of capitalism, and which Silicon Valley dudebros dichotomised as ‘work hard, party harder’. And for once, it’s a good thing that this kind of living is out of reach of nearly 99% of humankind.

Exploring neighbourhood sites is more socio-economically and socio-culturally (and not just economically and culturally) productive. Instead of creating distinct centres of pain and pleasure, of value creation and value dispensation, local travel can reduce the extent and perception of urban sprawl, contribute to hyperlocal economic development, birth social knowledge networks that enhance civic engagement, and generally defend against the toll of extractive capitalism.

For example, in Bengaluru, I would like to travel from Malleshwaram to Yelahanka, or – in Chennai – from T Nagar to Kottivakkam, or – in Delhi – from Jor Bagh to Vasant Kunj, for a week or two at a time, in each case exploring a different part of the city that might as well be a different city, characterised by a unique demographic distribution, public spaces, cuisine and civic issues. And when I do, I will still have my friends and access to my community and to the social support I need to maintain my mental health.

India’s Delhi-only air pollution problem

I woke up this morning to a PTI report telling me Delhi’s air quality had fallen to ‘very poor’ on Deepavali, the Hindu festival ostensibly of lights, with many people defying the Supreme Court’s direction to burst firecrackers only between 8 pm and 10 pm. This defiance is unsurprising: the Supreme Court doesn’t apply to Delhi because, and not even though, the response to the pollution has been just Delhi-centric.

In fact, air pollution is probably considered a problem at all only because Delhi is having trouble breathing – even though the national capital is only the eleventh-most polluted city in the world, behind eight other Indian ones.

The report also noted, “On Saturday, the Delhi government launched a four-day laser show to discourage residents from bursting firecrackers and celebrating Diwali with lights and music. During the show, laser lights were beamed in sync with patriotic songs and Ramayana narration.”

So the air pollution problem rang alarm bells and the government addressed just that problem. Nothing else was deemed a problem, so it addressed nothing else. The beams of light the Delhi government shot up into the sky would have caused light pollution, disturbing insects, birds and nocturnal creatures. The sound would no doubt have been loud, disturbing animals and people in the area. It’s a mystery why we don’t have familial, intimate celebrations instead.

There is a concept in environmental philosophy called the hyperobject: a dynamic super-entity that lots of people can measure and feel at the same time but not see or touch. Global warming is a famous hyperobject, described by certain attributes, including its prevalence and its shifting patterns. Delhi’s pollution has two hyperobjects. One is what the urban poor experience – a beast that gets in the way of daily life, that you can’t wish away (let alone fight), and which is invisible to everyone else. The other is the one in the news: stunted, inchoate and classist, it includes only air pollution because its effects have become unignorable, and sound and light don’t feature in it – nor does anything even a degree removed from the singular sources of smoke and fumes.

For example, someone (considered smart) recently said to me, “The city should collect trash better to avoid roadside garbage fires in winter.” Then what about the people who set those fires for warmth because they don’t have warm shelter for the night? “They will find another way.”

The Delhi-centrism is also visible in the ‘green firecrackers’ business. According to the CSIR National Environmental Engineering Research Institute (NEERI), which developed the crackers, its scientists “developed new formulations for reduced emission light and sound emitting crackers”. But it turns out the reduction doesn’t apply to sound.

The ‘green’ crackers’ novel features include “matching performance in sound (100-120dBA) with commercial crackers”. A level of 100-120 dBA is debilitating; the non-crazy crackers clock about 60-80 dBA. (dB stands for decibel, a logarithmic measure of sound pressure; the ‘A’ refers to A-weighting, a scale that adjusts measured levels to approximate how loud sounds seem to the human ear.)
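
For reference, the dBA figure traces back to the standard definition of sound pressure level; the A-weighting then adjusts the measured pressure for how sensitive our ears are at different frequencies:

\[
L_p = 20 \log_{10}\!\left(\frac{p}{p_0}\right)\ \mathrm{dB}, \qquad p_0 = 20\ \mu\mathrm{Pa}
\]

Because the scale is logarithmic, every 20 dB corresponds to a tenfold increase in sound pressure – so the ‘green’ crackers’ 100-120 dBA band packs roughly a hundred times the sound pressure of the non-crazy 60-80 dBA crackers.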

In 2014, during my neighbours’ spate of cracker-bursting, I “used an app to make 300 measurements over 5 minutes” from a distance of about 80 metres, and obtained the following readings:

Min: 41.51 dB(A)
Max: 83.88 dB(A)
Avg.: 66.41 dB(A)

The Noise Pollution (Regulation and Control) Rules 2000 limit noise in the daytime (6 am to 10 pm) to 55 dB(A), and the fine for breaking the rules was just Rs 100, or $1.5, before the Supreme Court stepped in, taking cognisance of the air pollution during Deepavali. This penalty is all the more laughable considering Delhi was ranked the world’s second-noisiest city in 2017. There’s only so much the Delhi police, including the traffic police, can do with the 15 noise meters they’ve been provided.

In February 2019, Romulus Whitaker, India’s ‘snake man’, expressed his anguish over a hotel next door to the Madras Crocodile Bank Trust blasting loud music that was “triggering aberrant behaviour” among the animals (to paraphrase the author). If animals don’t concern you: the 2014 Heinz Nixdorf Recall study found noise is a risk factor for atherosclerosis. Delhi’s residents also have the “maximum amount of hearing loss proportionate to their age”.

As Dr Deepak Natarajan, a Delhi-based cardiologist, wrote in 2015, “It is ironic that the people setting out to teach the world the salutatory effects of … quietness celebrate Yoga Day without a thought for the noise that we generate every day.”

Someone else tweeted yesterday, after purchasing some ‘green’ firecrackers, that science “as always” (or something similar) provided the solution. But science has no agency: it’s like a car, and people drive it. The car doesn’t ask questions about where the driver wants to go or complain when he drives too rashly. And in the story of fixing Delhi’s air pollution, the government has driven the car like Salman Khan.

The calculus of creative discipline

Every moment of a science fiction story must represent the triumph of writing over world-building. World-building is dull. World-building literalises the urge to invent. World-building gives an unnecessary permission for acts of writing (indeed, for acts of reading). World-building numbs the reader’s ability to fulfil their part of the bargain, because it believes that it has to do everything around here if anything is going to get done. Above all, world-building is not technically necessary. It is the great clomping foot of nerdism.

Once I’m awake and have had my mug of tea, and once I’m done checking Twitter, I can quote these words of M. John Harrison from memory: not because they’re true – I don’t believe they are – but because they rankle. I haven’t read any of Harrison’s writing; I can’t remember the names of any of his books. Sometimes I don’t even remember his name, only that there was this man who uttered these words. Perhaps it is to Harrison’s credit that he’s clearly touched a nerve, but I’m reluctant to concede any more than this.

His (partial) quote reflects a narrow view of a wider world, and it bothers me because I remain unable to extend the conviction that he’s seeing only a part of the picture to the conclusion that he lacks imagination; he is a writer of not inconsiderable repute, at least according to Wikipedia, and I doubt he has any trouble imagining things.

I’ve written about the virtues of world-building before (notably here), and I intend to make another attempt in this post; I should mention that what both attempts, both defences, have in common is that they’re not prescriptive. They’re not recommendations to others, and they’re not generalisable. They’re my personal reasons to champion the act, even art, of world-building; my specific loci of resistance to Harrison’s contention. But at the same time, I don’t view them – and neither should you – as inviolable or as immune to criticism, although I suspect this display of a willingness to reason may not go far in terms of eliminating subjective positions from this exercise, so make of it what you will.

There’s an idea in mathematical analysis called smoothness. Let’s say you’ve got a curve drawn on a graph, between the x- and y-axes, shaped like the letter ‘S’. Let’s say you’ve got another curve drawn on a second graph, shaped like the letter ‘Z’. According to one definition, the S-curve is smoother than the Z-curve because it has fewer sharp edges. A diligent high-schooler might take recourse to differential calculus to explain the idea. Say the Z-curve on the graph is the result of a function Z(x) = y. At the point on the x-axis where the Z-curve makes a sharp turn, the derivative Z'(x) either drops to zero or doesn’t exist at all. Such points are called critical points. The S-curve doesn’t have any critical points (except at the ends, but let’s ignore them); L- and T-curves have one critical point each; P- and D-curves have two critical points each; and an E-curve has three critical points.
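
For the record, here are the textbook statements behind the analogy – the letter-shaped ‘curves’ above are informal, of course, so treat these only as a reminder of what the terms mean (the larger the k, the ‘smoother’ the function):

\[
c \ \text{is a critical point of} \ f \iff f'(c) = 0 \ \text{or} \ f'(c) \ \text{does not exist}
\]
\[
f \in C^k \iff f', f'', \ldots, f^{(k)} \ \text{all exist and are continuous}
\]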

With the help of a loose analogy, you could say a well-written story is smooth à la an S-curve (excluding the terminal points): it has an unambiguous beginning and an ending, and it flows smoothly between the two. While I admire Steven Erikson’s Malazan Book of the Fallen series for many reasons, its first instalment is like a T-curve, where three broad plot-lines abruptly end at a point in the climax that the reader has been given no reason to expect. The curves of the first three books of J.K. Rowling’s Harry Potter series resemble the tangent function (from trigonometry: tan(x) = sin(x)/cos(x)): they’re individually somewhat self-consistent but the reader is resigned to the hope that their beginnings and endings must be connected at infinity.

You could even say Donald Trump’s presidency hasn’t been smooth at all because there have been so many critical points.

Where world-building “literalises the urge to invent” to Harrison, it spatialises the narrative to me, and automatically spotlights the importance of the narrative smoothness it harbours. World-building can be just as susceptible to non-sequiturs and deus ex machina devices as writing itself, all the way to the hubris Harrison noticed – of assuming the reader needn’t be left anything to do, not even enjoy themselves. Where he sees the “clomping foot of nerdism”, I see critical points in a curve some clumsy world-builder invented as they went along. World-building can be “dull” – or it can choose to reveal the hand-prints of a cave-dwelling people preserved for thousands of years, and the now-dry channels of once-heaving rivers that nurtured an ancient civilisation.

My principal objection to Harrison’s view is directed at the false dichotomy of writing and world-building, which he seems to want to impose in place of the more fundamental and more consequential need for creative discipline. Let me borrow here from philosophy of science 101, specifically the importance of contending with contradictory experimental results. You’ve probably heard of the replication crisis: when researchers tried to reproduce the results of older psychology studies, their efforts came a cropper. Many – if not most – studies didn’t replicate, and scientists are currently grappling with the consequences of overturning decades’ worth of research and research practices.

This is on the face of it an important reality check but to a philosopher with a deeper view of the history of science, the replication crisis also recalls the different ways in which the practitioners of science have responded to evidence their theories aren’t prepared to accommodate. The stories of Niels Bohr v. classical mechanics, Dan Shechtman v. Linus Pauling, and the EPR paradox come first to mind. Heck, the philosophers Karl Popper, Thomas Kuhn, Imre Lakatos and Paul Feyerabend are known for their criticisms of each other’s ideas on different ways to rationalise the transition from one moment containing multiple answers to the moment where one emerges as the favourite.

In much the same way, the disciplined writer should challenge themself instead of presuming the liberty to totter over the landscape of possibilities, zig-zagging between one critical point and the next until they topple over the edge. And if they can’t, they should – like the practitioners of good science – ask for help from others, pressing the conflict between competing results into the service of scouring the rust away to expose the metal.

For example, since June this year, I’ve been participating, at my friend Thomas Manuel’s initiative, in his effort to compose an underwater ‘monsters’ manual’. It’s effectively a collaborative world-building exercise where we take turns to populate different parts of a large planet – with sizeable oceans, seas, lakes and numerous rivers – with creatures, habitats and ecosystems. We broadly follow the same laws of physics and harbour substantially overlapping views of magic, but we enjoy the things we invent because they’re forced through the grinding wheels of each other’s doubts and curiosities, and the implicit expectation that each creator make adequate room for the creations of the other.

I see it as the intersection of two functions: at first, their curves will criss-cross at a point, and the writers must then fashion a blending curve so a particle moving along one can switch to the other without any abruptness, without any of the tired melodrama often used to mask criticality. So the Kularu people are reminded by their oral traditions to fight for their rivers, so the archaeologists see through the invading Gezmin’s benevolence and into the heart of their imperialist ambitions.

Scientism is not ‘nonsense’

The @realscientists rocur account on Twitter took a surprising turn earlier today when its current curator, Teresa Ambrosio, a chemist, tweeted the following:

If I had to give her the benefit of the doubt, I’d say she was pointing this tweet at the hordes of people – especially Americans – whose conspiratorial attitude towards vaccines and immigrants is founded entirely on their personal experiences being at odds with scientific knowledge. However, Ambrosio wasn’t specific, so I asked her:

The responses to my tweet, encouraged in part by Ambrosio herself, were at first dominated by (too many) people who seemed to agree, broadly, that science is an apolitical endeavour that can be cleanly separated from the people who practise it, and that science has nothing to do with the faulty application of scientific knowledge. However, the conversation rapidly turned after one of the responders called scientism “nonsense” – a stance that would rankle not just the well-informed historian of science but also the many people in developing nations where scientific knowledge is often used to legitimise statutory authority.

I recommend reading the whole conversation, especially if what you’re looking for is a good and sufficiently well-referenced summary of a) why scientism is anything but nonsense; b) why science is not apolitical; and c) how scientism is rooted in the need to separate science and the scientist.

IBT’s ice-nine effect on Newsweek

In his 1963 novel Cat’s Cradle, Kurt Vonnegut describes a fictitious substance called ice-nine: a crystalline form of water that converts all the liquid water it comes into contact with into more ice-nine. This is the sort of effect the International Business Times had on Newsweek, which, as Daniel Tovrov writes in the Columbia Journalism Review, has gone – within the last decade of its eighty-year (and counting) existence – from being one of the ‘big three’ American news magazines to a lesser entity that can’t say why it exists.

One big reason, apart from Newsweek editors’ continuing preference for page-views over informed reportage, is IBT’s ownership of the magazine from 2012 to 2018. IBT is a business, not a journalism organisation; it made its money through ads on its pages, and it got people to come see those ads and maybe click on a few by publishing a large volume of articles with clickbait headlines.

It’s certainly not alone in adopting this business model, but what Tovrov leaves unsaid is that Google’s and Facebook’s – but especially Google’s – decisions to make this model profitable have allowed businesses like IBT to assume ownership of journalism organisations like Newsweek, running them aground. Like the ice-nine in Cat’s Cradle, it isn’t just that IBT was shot to hell but that Google empowered IBT and its employees – who are to blame here as much as Google itself – to consign other organisations IBT came into contact with to the same fate.

There’s even a distressing self-symmetry to this story; to quote Tovrov:

… Jeffrey Rothfeder, our Editor-in-Chief, said that the clickbait would bring in revenue while hard-news reporting would build our reputation. Much of Newsweek’s current disorder was incubated in those early days of IBT, when we were still figuring out how digital journalism would work. We quickly learned that the patience of the owners, who own Newsweek today, was short. I witnessed incredible journalists lose their jobs over inconsistent traffic, despite editors’ best efforts to save them by shifting them from desk to desk to avoid detection.

There’s a ‘moral of the story’ moment tucked away here about a causal link – which wasn’t so obvious until BuzzFeed’s famous failure came along in January this year – between the gambler’s conceit of adopting the CPM model and the eventual ruin the model brings to newsroom practices. The best safeguard would be to have editors empowered to hit the brakes, but by then the organisation has likely changed in a way that makes that too much to ask for.

Many of us adopted the strategy of using a pseudonym to [cook up stories] when we needed quick hits. The owners and editors were fine with this, but a CMS update created automated bylines and ended the practice. It was in this era that, due to a contagious morale problem, IBT management added a carrot to go along with the stick: traffic bonuses.

It seems Newsweek – of all the publications possible – today exemplifies the worst of what happens when publishers sink more money into the ads-based CPM model of generating revenue: the newsroom becomes yet another late-capitalism enterprise whose employees fight for a sliver of the pie while their work lands significant chunks of it in the hands of its owners. It’s also a sign of how dependent the magazine is on Google that (a part of) Newsweek’s existing staff is optimistic that Google’s new changes to its ranking algorithm, to prioritise original in-depth reportage over recycled material, will make their jobs more enjoyable.

An engraved bust of Alfred Nobel. Credit: sol_invictus/Flickr, CC BY 2.0

Why are the Nobel Prizes still relevant?

Note: A condensed version of this post has been published in The Wire.

Around this time last week, the world had nine new Nobel Prize winners in the sciences (physics, chemistry and medicine), all but one of whom were white and none were women. Before the announcements began, Göran Hansson, the Swede-in-chief of these prizes, had said the selection committee has been taking steps to make the group of laureates more racially and gender-wise inclusive, but it would seem they’re incremental measures, as one editorial in the journal Nature pointed out.

Hansson and co. seem to find tenable the argument that the Nobel Prizes reward achievements from a time when there weren’t many women in science, when in fact it distracts from the selection committee’s bizarre oversight of such worthy names as Lise Meitner, Vera Rubin and Chien-Shiung Wu. But Hansson needs to understand that the only meaningful change is change that happens right away, because even with this significant flaw – one that should by all means have diminished the prizes to a contest of, for and by men – the Nobel Prizes have only marginally declined in reputation.

Why do they matter when they clearly shouldn’t?

For example, the most common comments in response to articles by The Wire shared on Twitter and Facebook – always from men – argue that the prizes reward excellence, and that excellence should brook no reservation, whether by caste or gender. As is likely obvious to many readers, this view of scholastic achievement resembles a blade of grass: long, sprouting from the ground (the product of strong roots, but out of sight, out of mind), rising straight up and culminating in a sharp tip.

However, achievement is more like a jungle: the scientific enterprise – encompassing research institutions, laboratories, the scientific publishing industry, administration and research funding, social security, availability of social capital, PR, discoverability and visibility, etc. – incorporates many vectors of bias, discrimination and even harassment towards its more marginalised constituents. Your success is not your success alone; and if you’re an upper-caste, upper-class, English-speaking man, you should ask yourself, as many such men have been prompted to in various walks of life, who you might have displaced.

This isn’t a witch-hunt as much as an opportunity to acknowledge how privilege works and what we can do to make scientific work more equal, equitable and just in future. But the idea that research is a jungle and research excellence is a product of the complex interactions happening among its thickets hasn’t found meaningful purchase, and many people still labour under a comically straightforward impression that science is immune to social forces. Hansson might be one of them if his interview to Nature is anything to go by, where he says:

… we have to identify the most important discoveries and award the individuals who have made them. If we go away from that, then we’ve devalued the Nobel prize, and I think that would harm everyone in the end.

In other words, the Nobel Prizes are just going to look at the world from the top, and probably from a great distance too, so the jungle has been condensed to a cluster of pin-pricks.

Another reason why the Nobel Prizes haven’t been easy to sideline is that the sciences’ ‘blade of grass’ impression is strongly historically grounded, with help from notions like the idea that scientific knowledge spreads from the Occident to the Orient.

Who’s the first person that comes to mind when I say “Nobel Prize for physics”? I bet it’s Albert Einstein. He was so great that his stature as a physicist has over the decades transcended his human identity and stamped the Nobel Prize he won in 1921 with an indelible mark of credibility. Now, to win a Nobel Prize in physics is to stand alongside Einstein himself.

This union between a prize and its laureate isn’t unique to the Nobel Prize or to Einstein. As I’ve said before, prizes are elevated by their winners. When Margaret Atwood wins the Booker Prize, it’s better for the prize than it is for her; when Isaac Asimov won a Hugo Award in 1963, earlier in his career, it was good for him, but it was good for the prize when he won it for the sixth time in 1992 (the year he died). The Nobel Prizes also accrued a substantial amount of prestige this way at a time when it wasn’t much of a problem, apart from the occasional flareup over ignoring deserving female candidates.

That their laureates have almost always been from Europe and North America further cemented the prizes’ impression that they’re the ultimate signifier of ‘having made it’, paralleling the popular undercurrent among postcolonial peoples that science is a product of the West and that they’re simply its receivers.

That said, the prize-as-proxy issue has also contributed considerably to preserving systemic bias at the national and international levels. Winning a prize (especially a legitimate one) accords the winner’s work a modicum of credibility and the winner, prestige. Depending on how the winners of future prizes are selected, such credibility and prestige can compound, skewing those prizes in favour of people who have already won other prizes.

For example, a scientist-friend ranted to me about how, at a conference he had recently attended, another scientist on stage had introduced himself to his audience by mentioning the impact factors of the journals he’d had his papers published in. The impact factor deserves to die because, among other reasons, it attempts to condense multi-dimensional research efforts and the vagaries of scientific publishing into a single number that stands for some kind of prestige. But its users should be honest about its actual purpose: it was designed so evaluators could take one look at it and decide what to do about a candidate to whom it corresponded. This isn’t fair – but expeditiousness isn’t cheap.

And when evaluators at different rungs of the career-advancement ladder privilege the impact factor, scientists with more papers published earlier in their careers in journals with higher impact factors become exponentially likelier to be recognised for their efforts (probably even irrespective of their quality, given the unique failings of high-IF journals, discussed here and here) over time than others.

Brian Skinner, a physicist at Ohio State University, recently presented a mathematical model of this ‘prestige bias’, whose amplification depends in a unique way, according to him, on a factor he calls the ‘examination precision’. He found that the more ambiguously defined the barrier to advancement is, the more pronounced the prestige bias could get. Put another way, people who have the opportunity to maintain systemic discrimination simultaneously have an incentive to make the points of entry into their club as vague as possible. Sound familiar?
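
To make the intuition concrete, here is a toy Monte Carlo sketch – emphatically not Skinner’s actual model: the candidate pool, the scoring rule, the prestige bonus and the selection cutoff are all assumptions I’ve made up for illustration. All it shows is that the noisier the ‘examination’, the less accumulated recognition tracks underlying ability and the more it tracks prior wins.

```python
# A toy simulation of prestige bias, written only to illustrate the idea above.
# This is NOT Brian Skinner's model: the candidate pool, the scoring rule and
# the selection cutoff are assumptions made up for this sketch.
import numpy as np

rng = np.random.default_rng(0)

def final_correlation(exam_noise, n_candidates=1000, n_rounds=20,
                      top_frac=0.1, prestige_weight=0.5):
    """Run repeated selection rounds and return how well accumulated
    recognition ('prestige') correlates with true, unobserved ability."""
    ability = rng.normal(size=n_candidates)   # fixed merit, never seen directly
    prestige = np.zeros(n_candidates)         # count of past selections
    for _ in range(n_rounds):
        # evaluators see ability only through noise, but see past prestige clearly
        score = (ability
                 + prestige_weight * prestige
                 + rng.normal(scale=exam_noise, size=n_candidates))
        cutoff = np.quantile(score, 1 - top_frac)
        prestige += score >= cutoff           # winners carry the win forward
    return np.corrcoef(ability, prestige)[0, 1]

# The vaguer the examination, the less recognition reflects ability.
for exam_noise in (0.1, 0.5, 1.0, 2.0, 4.0):
    print(f"examination noise {exam_noise:4.1f}: "
          f"corr(ability, prestige) = {final_correlation(exam_noise):.2f}")
```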

One might argue that the Nobel Prizes are awarded to people at the end of their careers – the average age of a physics laureate is in the late 50s; John Goodenough won the chemistry prize this year at 97 – so the prizes couldn’t possibly increase the likelihood of a future recognition. But the sword cuts both ways: the Nobel Prizes are likelier than not to be the products of prestige-bias amplification themselves, and are therefore not the morally neutral symbols of excellence Hansson and his peers seem to think they are.

Fourth, the Nobel Prizes are an occasion to speak of science. This implies that those who would deride the prizes but at the same time hold them up are equally to blame, but I would agree only in part. This exhortation to try harder is voiced more often than not by those working in the West, with publications with better resources and typically higher purchasing power. On principle I can’t deride the decisions reporters and editors make in the process of building an audience for science journalism, with the hope that it will be profitable someday, all in a resource-constrained environment, even if some of those choices might seem irrational.

(The story of Brian Keating, an astrophysicist, could be illuminating at this juncture.)

More than anything else, what science journalism needs to succeed is a commonplace acknowledgement that science news is important – whether it’s for the better or the worse is secondary – and the Nobel Prizes do a fantastic job of getting the people’s attention towards scientific ideas and endeavours. If anything, journalists should seize the opportunity in October every year to also speak about how the prizes are flawed and present their readers with a fuller picture.

Finally, and of course, we have capitalism itself – implicated in the quantum of prize money accompanying each Nobel Prize (9 million Swedish kronor, Rs 6.56 crore or $0.9 million).

Then again, this figure pales in comparison to the amounts that academic institutions know they can rake in by instrumentalising the prestige in the form of donations from billionaires, grants and fellowships from the government, fees from students presented with the tantalising proximity to a Nobel laureate, and in the form of press coverage. L’affaire Epstein even demonstrated how it’s possible to launder a soiled reputation by investing in scientific research because institutions won’t ask too many questions about who’s funding them.

The Nobel Prizes are money magnets, and this is also why winning a Nobel Prize is like winning an Academy Award: you don’t get on stage without some lobbying. Each blade of grass has to mobilise its own PR machine, supported in all likelihood by the same institute that submitted their candidature to the laureates selection committee. The Nature editorial called this out thus:

As a small test case, Nature approached three of the world’s largest international scientific networks that include academies of science in developing countries. They are the International Science Council, the World Academy of Sciences and the InterAcademy Partnership. Each was asked if they had been approached by the Nobel awarding bodies to recommend nominees for science Nobels. All three said no.

I believe those arguments that serve to uphold the Nobel Prizes’ relevance must take recourse to at least one of these reasons, if not all of them. It’s also abundantly clear that the Nobel Prizes are important not because they present a fair or useful picture of scientific excellence but in spite of the fact that they don’t.

Ad verecundiam

That Swedish group announced today that Esther Duflo, Abhijit Banerjee and Michael Kremer are the winners of this year’s Nobel Prize for economics. Within minutes, my Twitter feed was awash with congratulations as well as links to criticisms Duflo and Banerjee had voiced in the past against the economic policies of the Narendra Modi government. If nothing else, I can think of three motives on the part of those who shared these links: to draw traffic to certain news sites (i.e. the links had been shared by accounts belonging to news publishers), to defer to the authority the prize had just conferred on the laureates in order to make a point, or to call attention to the fact that a woman had won a Nobel Prize. The first two are opportunistic and dicey at best.

Setting aside for a moment the presumably small (if any) overlap between the group of people who shared the links to articles about Duflo’s and Banerjee’s work and the group of people who think the Nobel Prizes should be “cancelled” (to borrow Ed Yong’s word), using the Nobel Prizes to denote authority only further cements the prizes’ undeserved place in the public consciousness as markers of scholastic merit. Of course, lots of people are looking for the slightest opportunity to tell the Modi government it got something wrong – and both Banerjee and Duflo have admitted they couldn’t understand demonetisation, for starters – but using the Nobel Prizes to say “I told you so!” is not a free lunch.

I don’t know what the alternatives could be; it’s certainly infeasible to think anyone could persuade mainstream Indian newsrooms to stop covering the announcement of the Nobel Prizes, if only because it’s an excellent opportunity to talk/write about something in science and have your audience listen/read. We could try harder, but until we do, it also makes sense to criticise the Nobel Prizes while popularising them – and this is what’s missing in the social-media shout-outs that seek to challenge one form of authority with another.

New Scientist violates the laws of physics (updated)

A new article in the New Scientist begins with a statement of Newton’s third law that is blissfully ignorant of the irony. The article’s headline is:

The magazine is notorious for its use of sensationalist headlines and seems to have done it again. Jon Cartwright, the author of the article, has done a decent job of explaining the ‘helical drive’ proposed by a manager at NASA named David Burns, and hasn’t himself suggested that the drive violates any laws of physics. It seems more like someone else was responsible for the headline and decided to give it the signature New Scientist twist.

The featured image is a disaster, showing concept art of Roger Shawyer’s infamous em-drive. Shawyer had claimed the device could in fact violate the laws of physics by converting the momentum of microwaves confined in a chamber into thrust. Various experts have debunked the em-drive as fantasy, but their caution against suggesting the laws of physics could be broken so easily appears to have been lost on the New Scientist.

Update, 7.06 am, October 16, 2019: In a new article, Chris Lee at Ars Technica has explained why the helical drive won’t work, and comes down harshly on Burns for publicising his idea before getting it checked with his peers at NASA, which would’ve spared him the embarrassment that Lee dished out. That said, Lee is also a professional physicist, and perhaps Cartwright isn’t entirely in the clear if the answer to why the helical drive won’t work is as straightforward as Lee makes it out to be.

With the helical drive, Burns proposes to use an object that moves back and forth inside a box, bouncing off either end. Each bounce imparts momentum to the box, but the net momentum after two bounces is zero because the kicks are in equal and opposite directions. But if the object could become heavier just before it strikes one end and lighter before it strikes the other, the box would receive a net ‘kick’ at one end and start moving in that direction.

Burns then says if we could replace the object with a particle and the box with a particle accelerator, it should be possible to accelerate the particle in one direction, let it bounce off, then decelerate it in the other direction and recover most of the energy imparted to it, and repeat. This way, the whole setup can be made to constantly accelerate in one direction.
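
To see why the idea looks plausible on paper, here is the naive Newtonian bookkeeping for the box-and-bouncing-object version, with made-up numbers and with the crucial sleight of hand left in: the object’s mass is simply allowed to change between bounces, with no account of where that mass-energy comes from or goes.

```python
# Naive Newtonian bookkeeping for the bouncing-object thruster described above.
# All numbers are arbitrary assumptions for illustration; the point is only that
# letting the mass change 'for free' between bounces yields a net kick per cycle.
m_heavy = 1.00e-27   # kg, assumed mass on the forward leg
m_light = 0.99e-27   # kg, assumed mass on the return leg
v = 1.0e7            # m/s, assumed speed of the object inside the box

# An elastic bounce reverses the object's momentum, so each wall takes a 2*m*v kick.
kick_forward = 2 * m_heavy * v    # kick delivered to the front wall
kick_backward = 2 * m_light * v   # kick delivered to the back wall

net_kick_per_cycle = kick_forward - kick_backward
print(f"net momentum delivered to the box per cycle: {net_kick_per_cycle:.2e} kg m/s")
```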

The flip side is that the mass-energy equivalence is central to Burns’s idea, but according to the theory of special relativity that it’s embedded in, it’s actually the mass-energy-momentum equivalence. As Lee put it, special relativity conserves energy and momentum together, which means a heavier particle bouncing off one end of the setup won’t keep accelerating the setup in its direction. Instead, when the particle becomes heavier and acquires more momentum, it does so by absorbing virtual photons from an omnipresent energy field. When the particle slows down, it emits these photons into the field around it.
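
For reference, the relations being invoked here are just the standard ones from special relativity – nothing specific to Burns’s setup:

\[
E^2 = (pc)^2 + (mc^2)^2, \qquad E_\gamma = p_\gamma c \ \ \text{(a photon carries momentum along with its energy)}
\]

So the particle cannot simply become heavier: the extra mass-energy has to be absorbed from somewhere, and it arrives carrying momentum of its own, which is what keeps the books balanced.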

According to special relativity and Newton’s third law, the release process will accelerate the setup and the absorption process will decelerate it. The particle knocking on either end is just incidental.