Valour and prestige — the world of special operations
Air Vice Marshal Manmohan Bahadur VM (retired)
is former Additional Director General, Centre for Air Power Studies
Operation Kaveri, the Sudan rescue mission, is an example of why the ethos and training of the Indian Air Force's special operations crews should not be diluted.
The evacuation of 121 Indians from Wadi Seidna, north of Khartoum in Sudan, in the dead of night, using an Indian Air Force (IAF) C-130J Super Hercules, has been lauded all round. The IAF's press release is an understatement of the stupendous task accomplished on the night of April 27-28, but it is a subtle 'shabash' (well done) to the personnel involved. And let us not forget the steed they flew, the C-130J, and the foresight of the IAF and the national leadership at the beginning of the century, which, considering the growing stature and responsibilities of the nation, planned the purchase of this aircraft, an outstanding capability enabler. One also needs to acknowledge the acquisition of the other transport aircraft for the IAF, the C-17 Globemaster heavy-lift aircraft.
The Wadi Seidna mission will soon fade from public memory, so it is only right that the reader has some idea of how special operations capability has progressed, and of what should be kept in mind as it is developed further.
The impact of Kandahar
In the 'Kandahar' incident, Indian Airlines flight IC-814 was hijacked on December 24, 1999, while on a flight from Kathmandu to New Delhi; the episode ended on December 31, 1999. I happened to be with Air Chief Marshal A.Y. Tipnis in Israel on December 24 when the chief's mobile phone rang; the Vice Chief was on the line with the news that IC-814 had been hijacked and had landed at Amritsar, a civil airfield. We all know how standard operating procedures failed thereafter and how the plane eventually landed at Kandahar in Afghanistan, leading to the release of dreaded terrorists.
Could India have carried out a rescue like the famous Israeli one at Entebbe, Uganda, in July 1976, when Israeli commandos flew all the way to Uganda and stormed a hijacked Air France jet in trying circumstances? The will would surely have been there, but there were two big impediments: the presence of Pakistan, whose territory could not have been overflown, and the absence of any IAF aircraft that could carry out such a risky mission avoiding Pakistani airspace, entering Afghanistan from the south, and returning without refuelling.
The Afghanistan rescue missions
Enter the C-130J into the IAF's inventory, and India now has this capability, should the political leadership decide to intervene in a critical situation where national interests and reputation are at stake. Before the Sudan rescue, there were two other such missions known in the open domain. The first was the evacuation of Indian Embassy personnel from Herat, Afghanistan, in April 2020. The aircraft flew from India and kept its engines running even after landing; the IAF's Garud commandos stood guard while the diplomatic staff emplaned.
The second mission, on August 20, 2021, was an equally high-risk one, from Kabul; videos of the United States evacuation and the fiasco that unfolded had gone viral. The airspace was uncontrolled and the ground situation chaotic, for want of a better word. There were a large number of aircraft in the air, which the pilots had to avoid while landing on night vision goggles; the only call they received from the ground controller was, "Land at your own risk" (a phrase that is now etched on the shoulder patch that squadron crew wear on their flying overalls).
While the crews on these missions were decorated for their professionalism and gallantry, what also needs to be appreciated is the care that goes into aircrew selection and their special training.
In the case of the Sudan rescue, the crew faced many problems too. Intelligence was poor and the runway was rough with no landing aids. All they had was top-class onboard aircraft instrumentation such as synthetic runway generation on the head-up display, electro-optical night vision capability, night vision goggles, and of course, great confidence in their ability to pull it off.
The essence of special ops
Special operations are much more than stick-and-throttle flying, night vision goggles and dark nights. Every member of such a mission bears on his shoulders the weight of a nation's prestige. They are India's 'strategic corporals'. This term, coined by General Charles C. Krulak of the U.S. Marine Corps, denotes that in modern warfare, the actions of even an enlisted man on the front lines have a strategic effect on a nation's policies, and that institutional training should cater for this. As reminders of failure, the bungled American hostage rescue attempt in Iran in 1980 and the picture of Gary Powers in Soviet custody after his U-2 was shot down in 1960 stand for the loss of face the U.S. suffered. In terms of success, the elimination of Osama bin Laden in a special forces raid brought the U.S. laurels. There is, thus, a non-military, intangible element in every such operation that a young officer or a corporal, far removed from his base, has to accomplish. It is only right that this ethos and training in the IAF's special operations crews not be diluted by the lure of sending the versatile C-130s on routine tasks and VIP carriage.
The IAF’s C-130J special ops squadrons (there are two) call themselves the ‘Veiled Vipers’ and the ‘Raiding Raptors.’ It is incumbent on the leadership to ensure that their sting stays potent.
Europe uses India, China to get Russian oil despite sanctions
India's crude oil imports from Russia surged by 1,350% after the invasion; the country became the largest fuel exporter to Europe
THE HINDU BUREAU
The Centre for Research on Energy and Clean Air has published a data-driven study highlighting a loophole that enables the European Union (EU), a majority of G7 nations, and Australia to indirectly obtain oil from Russia through India and China, despite having banned or restricted imports of Russian crude oil and petroleum products.
The EU, as part of a broader price cap coalition, has banned or limited seaborne imports of Russian crude oil and established a $60 price cap on it as of December 5, 2022. Despite this, the countries enforcing these limitations have increased their intake of processed petroleum products from nations that became primary importers of Russian crude oil after the Ukraine invasion. This loophole can weaken the sanctions imposed on Russia, the report observed.
One year following Russia’s invasion of Ukraine, countries implementing price cap restrictions have increased their imports of refined oil products from China, India, Turkey, the UAE, and Singapore. These five nations, known as the “laundromat” countries, have augmented their purchases of Russian oil, processed it, and exported the products to countries enforcing sanctions on Russian oil.
The "laundromat" countries imported more Russian crude oil than in the year before the war. China's imports of Russian crude rose from 39.8 million tonnes in the year before the invasion to 57.7 million tonnes in the year after it. India's imports surged from just 3.85 million tonnes to 55.9 million tonnes over the same periods. Figures for all five countries are presented in chart 1.
Russian crude oil import volumes for these five nations rose 140% (an increase of €48.2 billion in value terms) compared with the year before the invasion. Because they have not sanctioned Russian crude oil, the "laundromat" countries are increasing their imports of it, as it is available to them at a reduced price.
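As a back-of-the-envelope check (not part of the report), the percentage increases can be recomputed from the tonnage figures quoted above; India's growth works out to roughly 1,350%, matching the headline figure:

```python
def pct_increase(before: float, after: float) -> float:
    """Percentage growth from `before` to `after`."""
    return (after - before) / before * 100

# Million tonnes of Russian crude imported, year before vs. year after
# the invasion, as quoted in the text (China and India only).
india = pct_increase(3.85, 55.9)   # ~1,352%, i.e. the ~1,350% headline figure
china = pct_increase(39.8, 57.7)   # ~45%

print(f"India: {india:.0f}%, China: {china:.0f}%")
```

The 140% figure cited by the report covers all five "laundromat" countries combined; the remaining three countries' tonnages appear only in chart 1, so that aggregate cannot be reproduced from the text alone.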
These countries are refining the crude oil bought from Russia and exporting the products to price cap coalition countries. Between the start of the crude oil import ban and price cap policy on December 5, 2022, and February 24, 2023 (a year after the Russian invasion), price cap coalition countries imported 12.9 million tonnes, or €9.5 billion worth, of oil products from the "laundromat" countries. Monthly oil product exports from "laundromat" countries to price cap coalition countries peaked in December 2022-January 2023, after which they fell in February, heading towards pre-invasion levels, as shown in chart 2. The chart shows the monthly oil product exports (in '000 tonnes per day) from the "laundromat" countries to price cap coalition countries.
According to the report, India (3.7 million tonnes) was the largest exporter of oil products to price cap coalition countries between the implementation of the crude oil price cap and the one-year anniversary of the invasion, followed by China (3.0 million tonnes). In the one year since the Russian invasion, the EU was the largest importing region, buying 20.1 million tonnes of oil products from these "laundromat" countries. Australia was the largest single-country importer, purchasing 9.1 million tonnes (€8.0 billion), followed by the U.S., which imported 8.5 million tonnes (€6.6 billion).
Chart 3 shows the amount of crude oil (in million tonnes) imported by the “laundromat” countries from Russia one year post invasion and their exports of oil products to price cap coalition countries.
The EU’s Artificial Intelligence Act
What are the stipulations mentioned in the new draft document of the European Union’s AI Act? Why are AI tools often called black boxes? What are the four risk categories of AI? How did the popularity of ChatGPT accelerate and change the process of bringing in regulation for artificial intelligence?
The story so far:
After intense last-minute negotiations in the past few weeks on how to bring general-purpose artificial intelligence systems (GPAIS) like OpenAI’s ChatGPT under the ambit of regulation, members of European Parliament reached a preliminary deal this week on a new draft of the European Union’s ambitious Artificial Intelligence Act, first drafted two years ago.
Why regulate artificial intelligence?
As artificial intelligence technologies become omnipresent and their algorithms more advanced — capable of performing a wide variety of tasks including voice assistance, recommending music, driving cars, detecting cancer, and even deciding whether you get shortlisted for a job — the risks and uncertainties associated with them have also ballooned.
Many AI tools are essentially black boxes, meaning even those who designed them cannot explain what goes on inside them to generate a particular output. Complex and unexplainable AI tools have already manifested in wrongful arrests due to AI-enabled facial recognition; in discrimination and societal biases seeping into AI outputs; and, most recently, in how chatbots based on large language models (LLMs) like Generative Pre-trained Transformer-3 (GPT-3) and GPT-4 can generate versatile, human-competitive and genuine-looking content, which may be inaccurate or include copyrighted material.
Recently, industry stakeholders including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak signed an open letter asking AI labs to stop the training of AI models more powerful than GPT-4 for six months, citing potential risks to society and humanity. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. It urged global policymakers to “dramatically accelerate” the development of “robust” AI governance systems.
How was the AI Act formed?
The legislation was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI and creating a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy. The legislation seeks to strike a balance between promoting “the uptake of AI while mitigating or preventing harms associated with certain uses of the technology”.
Similar to how the EU’s 2018 General Data Protection Regulation (GDPR) made it an industry leader in the global data protection regime, the AI law aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market” and ensure that AI in Europe respects the 27-country bloc’s values and rules.
What does the draft document entail?
The draft of the AI Act broadly defines AI as "software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with". It identifies AI tools built on machine learning and deep learning, as well as knowledge-based, logic-based and statistical approaches. The Act's central approach is the classification of AI technologies based on the level of risk they pose to the "health and safety or fundamental rights" of a person. There are four risk categories in the Act: unacceptable, high, limited and minimal.
The Act prohibits using technologies in the unacceptable risk category, with few exceptions. These include the use of real-time facial and biometric identification systems in public spaces; systems of social scoring of citizens by governments leading to "unjustified and disproportionate detrimental treatment"; subliminal techniques to distort a person's behaviour; and technologies which can exploit the vulnerabilities of the young, the elderly, or persons with disabilities.
The Act lays substantial focus on AI in the high-risk category, prescribing a number of pre- and post-market requirements for developers and users of such systems. Systems falling under this category include biometric identification and categorisation of natural persons; AI used in healthcare, education, employment (recruitment), law enforcement and justice delivery systems; and tools that provide access to essential private and public services (including access to financial services such as loan approval systems). The Act envisages establishing an EU-wide database of high-risk AI systems and setting parameters so that future technologies, or those under development, can be included if they meet the high-risk criteria. Before high-risk AI systems can make it to the market, they will be subject to strict reviews known in the Act as 'conformity assessments': algorithmic impact assessments covering the data sets fed to AI tools, biases, how users interact with the system, and the overall design and monitoring of system outputs. The Act also requires such systems to be transparent and explainable, to allow human oversight, and to give clear and adequate information to the user. Moreover, since AI algorithms are designed to evolve over time, high-risk systems must also comply with mandatory post-market monitoring obligations, such as logging performance data and maintaining continuous compliance, with special attention paid to how these programmes change through their lifetime.
AI systems in the limited and minimal risk category such as spam filters or video games are allowed to be used with a few requirements like transparency obligations.
What is the recent proposal on general purpose AI like ChatGPT?
As recently as February this year, general-purpose AI such as the language model-based ChatGPT, used for a plethora of tasks from summarising concepts on the internet to serving up poems, news reports, and even a Colombian court judgment, did not feature in EU lawmakers’ plans for regulating AI technologies. The bloc’s 108-page proposal for the AI Act, published two years earlier, included only one mention of the word “chatbot.” By mid-April, however, members of the European Parliament were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked awe and anxiety since OpenAI unveiled ChatGPT six months ago.
Lawmakers now target the use of copyrighted material by companies deploying generative AI tools such as OpenAI's ChatGPT or the image generator Midjourney, as these tools train on large sets of text and visual data from the internet. Such companies will have to disclose any copyrighted material used to develop their systems. While the current draft does not clarify what obligations GPAIS manufacturers would be subject to, lawmakers are also debating whether all forms of GPAIS should be designated as high-risk. The draft could be amended multiple times before it actually comes into force.
How has the AI industry reacted?
While some industry players have welcomed the legislation, others have warned that broad and strict rules could stifle innovation. Companies have also raised concerns about transparency requirements, fearing that they could mean divulging trade secrets. The law's explainability requirements have caused unease as well, since it is often impossible even for developers to explain how their algorithms work.
Lawmakers and consumer groups, on the other hand, have criticised it for not fully addressing risks from AI systems.
The Act also delegates the process of standardisation for AI technologies to the EU’s expert standard-setting bodies in specific sectors. A Carnegie Endowment paper points out, however, that the standards process has historically been driven by industry, and it will be a challenge to ensure governments and the public have a meaningful seat at the table.
Where does global AI governance currently stand?
The rapidly evolving pace of AI development has led to diverging global views on how to regulate these technologies. The U.S. currently does not have comprehensive AI regulation and has taken a fairly hands-off approach. The Biden administration has released a blueprint for an AI Bill of Rights (AIBoR). Developed by the White House Office of Science and Technology Policy (OSTP), the AIBoR outlines the harms of AI to economic and civil rights and lays down five principles for mitigating them. Instead of a horizontal approach like the EU's, the blueprint endorses a sector-specific approach to AI governance, with policy interventions for individual sectors such as health, labour, and education, leaving it to sectoral federal agencies to come out with their own plans. The AIBoR has been described by the administration as guidance or a handbook rather than binding legislation.
On the other end of the spectrum, China over the last year has come out with some of the world's first nationally binding regulations targeting specific types of algorithms and AI. It enacted a law to regulate recommendation algorithms, with a focus on how they disseminate information. The Cyberspace Administration of China (CAC), which drafted the rules, told companies to "promote positive energy", to not "endanger national security or the social public interest" and to "give an explanation" when they harm the legitimate interests of users. Another piece of legislation targets the deep synthesis technology used to generate deepfakes. In order to bring transparency and understand how algorithms function, China's AI regulator has also created a registry of algorithms, in which developers must record their algorithms, information about the data sets they use, and potential security risks.
India slips in press freedom index, ranks 161 out of 180 nations
The World Press Freedom Index compares the level of press freedom enjoyed by journalists and media across 180 countries and territories.
THE HINDU BUREAU
India’s ranking in the 2023 World Press Freedom Index has slipped to 161 out of 180 countries, according to the latest report released by the global media watchdog Reporters Without Borders (RSF). In comparison, Pakistan has fared better when it comes to media freedom as it was placed at 150, an improvement from last year’s 157th rank. In 2022, India was ranked at 150.
Sri Lanka also made significant improvement on the index, ranking 135th this year as against 146th in 2022.
Norway, Ireland and Denmark occupied the top three positions in press freedom, while Vietnam, China and North Korea constituted the bottom three.
Reporters Without Borders (RSF) comes out with a global ranking of press freedom every year. RSF is an international NGO whose self-proclaimed aim is to defend and promote media freedom. Headquartered in Paris, it has consultative status with the United Nations. The objective of the World Press Freedom Index, which it releases every year, “is to compare the level of press freedom enjoyed by journalists and media in 180 countries and territories” in the previous calendar year.
RSF defines press freedom as “the ability of journalists as individuals and collectives to select, produce, and disseminate news in the public interest independent of political, economic, legal, and social interference and in the absence of threats to their physical and mental safety”.
The Indian Women’s Press Corps, the Press Club of India, and the Press Association released a joint statement voicing their concern over the country’s dip in the index.
“The indices of press freedom have worsened in several countries, including India, according to the latest RSF report,” the joint statement said.
“The constraints on press freedom due to hostile working conditions like contractorisation have to also be challenged. Insecure working conditions can never contribute to a free press,” it added.
(With PTI inputs)
Highly pathogenic bird flu virus puts Centre on alert
BINDU SHAJAN PERAPPADAN
Home to one of the largest livestock populations in the world, India is "at risk and vulnerable" to the ongoing global outbreaks of avian influenza (H5N1), a worry compounded by the threat of mammalian transmission, officials have said.
"Across the world, the virus is being detected among wild birds and other species, which makes the chance of it mutating and becoming harmful greater," warns the World Economic Forum (WEF) in its latest paper on pandemic preparedness.
“We are concerned about it. The COVID-19 pandemic taught us that we need to be prepared all the time. If we are caught off guard, anything can happen,” said Abhijit Mitra, Animal Husbandry Commissioner, speaking to The Hindu. The Central government is now reviewing the H5N1 situation daily.
H5N1, a highly pathogenic subtype of avian influenza, was detected by the ICAR-National Institute of High Security Animal Disease, Bhopal in the samples received from the Government Poultry Farm at Bokaro, Jharkhand on February 17, 2023. India has now initiated the animal pandemic preparedness programme.
“This year has been the worst in numbers and consequent spillover to mammals,” said biologist Vinod Scaria.
Dr. Mitra also added that the speed of the spread and the expansion in the range of species affected were causes for worry.
"India has been dealing with avian influenza since 2006-07 and there is no vaccine for highly pathogenic avian influenza. We have so far checked 1,500 samples from hotspots such as waterbodies and poultry farms. Of these, only one sample, from Alappuzha in Kerala, tested positive. There is nothing to panic about, but we need to be vigilant. We must strengthen our bio-security measures," added Dr. Mitra.
Rajnath lays foundation