What if…?
A weekly conversation on some topics that were on @HT_ED's mind.
As what-if stories go, this is a fairly simple one. What if the air safety regulator’s audit, which grounded four Learjet aircraft of charter operator VSR Ventures Pvt. Ltd., had been done in early January instead of early February? And what if it had then found “non-compliances in approved procedures related to airworthiness, flight operations, and safety” (as an official in the regulator shared with Hindustan Times on the condition of anonymity) in five aircraft, the fifth being the one that crashed on 28 January? Hint: the four grounded jets account for 57% of the operator’s current Learjet fleet.
That means there was roughly an even chance that the jet that crashed was non-compliant with “approved procedures related to airworthiness, flight operations, and safety”. With the caveat that not all (and perhaps none) of these non-compliances would necessarily have caused a crash, it is unlikely that anyone would have boarded the aircraft knowing there was at least a 50% chance of it being in violation of safety or airworthiness procedures.
What’s that idiom about stables and horses, again?

P.S.: If the 28 January crash involving Ajit Pawar, and the 23 February crash of an air ambulance that killed seven people (it later emerged that the aircraft wasn’t equipped with a black box; the rules allow aircraft below 5.7 tonnes to fly without one), have highlighted lapses in how charters operate in India, including the seemingly lax regulatory oversight they enjoy, then minor incidents in India’s flying schools point to a disaster in the making, writes our columnist Anjuli Bhargava. Maybe the air safety regulator could attend to this before it is too late.
Another what-if?
Most people have read the Citrini Research report that started it all. Titled The 2028 Global Intelligence Crisis and released on 22 February, it roiled markets around the world. Part of this was because it was written as if set in the future: dated 30 June 2028, it looks back at the events of 2026 and 2027. If you haven’t read it, do so; it makes for gripping reading.
There has been some criticism of the report, from analysts, experts, and the CEOs of the very companies that Citrini expects to be hit hardest.
The Citrini report came a month after a study in Science said that “GenAI increases output and helps programmers expand into new domains—but only for senior-level developers”. It added that “early-career developers, despite being the most enthusiastic adopters, see no measurable gains.”
Our columnist Anirban Mahapatra (he writes the well-regarded Scientifically Speaking column) referred to the Science article and said its findings were only to be expected:
“This is what happens when the marginal cost of production collapses. Value migrates upward in the skill chain, from routine execution to high-level design and judgment. Manufacturing went through this transformation decades ago. Automation reduced the need for routine assembly work and increased the premium on engineers and system architects. Something similar may now be unfolding in software and other knowledge industries.”
Mahapatra also referred to another study, by MIT, which showed that “AI excels at tedious, repetitive work. But for the complex problems where experienced engineers really earn their keep, the tools are not enough. The current AI models have limited working memory. They lose track of what they are doing on longer tasks and fail to account for how different parts of a large software system interact.”
But what if AI can now do the work of engineers and system architects? What if it can now handle longer tasks and understand how different parts of a large software system interact?
After all, the most recent data in the study published in Science was from 2024. Given the speed at which AI is evolving, and its versatility, it is likely that AI can already deal with complex problems and work atop companies’ existing legacy systems. An Anthropic blog post published earlier this week, on how Claude could handle COBOL, rocked the markets (again).
As my colleague Roshan Kishore put it in an Op-Ed that built on the Citrini report, this is a “decades happening in weeks” moment, and it is hard to see how this will not excise a significant proportion of the skilled workforce in the service sector.
And the ultimate What-if?
Anyone who has been following the ambitious predictions of the people behind the top AI companies (last week’s newsletter referred to some of them) will, at some point, encounter questions of sentience and consciousness. For the curious, those questions are likely to lead directly into the rabbit hole of understanding these very human characteristics.
This week, I have been reading A World Appears: A Journey into Consciousness by Michael Pollan, who approaches the subject with the same verve, attention to detail, and depth he demonstrated while writing about food and psychedelic drugs. The short answer, for the TL;DR brigade, is that AI’s big weakness when it comes to consciousness is the lack of a basic prerequisite: feeling. But it’s how Pollan fleshes out this finding that’s really interesting.