Sunday, June 8, 2025

Reassessing the Indus Pact Amid Rising Tensions

On April 22, 2025, the Baisaran meadows turned bloody when 26 tourists, including foreign nationals, were shot dead by terrorists in Pahalgam. The attack has been described as the ‘worst civilian massacre in Kashmir in recent memory’. It was not only a reminder of the persistent threat that terrorism poses to India’s national security but also a test of the country’s diplomatic resolve. In response to this gruesome attack, the Government of India took the unprecedented step of placing the Indus Waters Treaty in abeyance, a move that marked a significant change in India’s approach towards cross-border terrorism.

The government's decision to hold the treaty in abeyance signals its firm resolve against terrorism and sends a clear message that cooperation cannot exist in the shadow of terrorism. The water-sharing pact was signed between India and Pakistan in 1960, brokered by the World Bank after years of negotiation. The need for the treaty arose after the Partition of 1947, when the rivers of the Indus system came to straddle the new border between India and Pakistan. Even before the wounds of Partition had healed, the question of water sharing raised tensions between the two nations. The dispute arose chiefly because the headwaters of the Indus and its tributaries lie mostly in India, while the rivers flow downstream into Pakistan, making Pakistan heavily reliant on water that originates in Indian territory. Under the treaty, the six major rivers of the Indus Basin were divided into eastern and western rivers. India was allocated the eastern rivers, namely the Ravi, Beas, and Sutlej, while Pakistan was allocated the western rivers, namely the Indus, Jhelum, and Chenab, although India was permitted limited use of the western rivers for purposes such as run-of-the-river hydropower generation, navigation, and restricted irrigation, under strict technical conditions.

Since 1960, the IWT has stood firm despite repeated hostilities between the two nations. For the past two decades, however, the treaty has come under increasing strain. In recent years, there have been demands to revisit or revise it, especially in the wake of terrorist attacks allegedly originating on Pakistani soil. In this context, the government's recent decision to put the Indus Waters Treaty “in abeyance” is a bold move and a significant departure from its previous stance of compliance and restraint. The big question is what the future of the treaty will be now that India has chosen to keep it in abeyance.

Notably, the treaty does not allow either side to unilaterally stop, withdraw from, or terminate the agreement, even though it lays down procedures for settling questions, differences, and disputes. Only a duly ratified agreement between the two governments may amend or terminate the treaty, as stated in its Final Provisions.

In international law, the term “suspension” is commonly used rather than “abeyance”. Since the IWT does not provide for suspension, states may look to the Vienna Convention on the Law of Treaties (VCLT) or customary international law (CIL) for their rights. The VCLT, however, was adopted in 1969 and entered into force in 1980, whereas the IWT was signed in 1960, so the Convention does not apply to the treaty directly. Many of its provisions nevertheless continue to apply as CIL. Article 60, which the ICJ has held to be reflective of CIL, permits the suspension or termination of a treaty in case of a material breach. India is ultimately left with one further recourse, Article 62, which deals with a fundamental change of circumstances (FCC). In its letter to Pakistan, the reason India gave for holding the IWT in abeyance was precisely such a fundamental change of circumstances.

In light of the horrifying Pahalgam terror assault, India's decision to put the Indus Waters Treaty on hold is a daring and unusual change in its geopolitical position. Although the IWT has traditionally been seen as a symbol of collaboration in times of crisis, India has been compelled to reevaluate its responsibilities due to changing security conditions and ongoing cross-border terrorism. India's action is a diplomatic message that peaceful engagement cannot continue in the face of unbridled hostility, even though international law provides few options for unilateral suspension, particularly under Article 62's stringent requirements for Fundamental Change in Circumstances. This moment could determine the future of Indo-Pak ties and the viability of water diplomacy in conflict-prone areas as the region struggles with intricate hydro-political and security issues.

Saturday, June 7, 2025

The Role of AI in Today’s Data-Driven Environment



Introduction

In the 21st century, data on ordinary people is being collected at an unprecedented rate: from smartphones, social media accounts, and sensors. Big companies are datafying our everyday lives and gaining an unfair advantage. Such an abundance of online data is both a challenge and an opportunity. In this scenario, the emergence of AI came as a boon to industry: the technology can perform, with precision, tasks that are nearly impossible for humans to carry out at scale. It enables organisations, governments, and individuals to make sense of vast and complex datasets.

In simple terms, AI simulates human intelligence to perform tasks such as reasoning, learning, problem solving, and decision making. To work, AI needs data from a wide range of fields. When AI is combined with the ever-growing data generated across the world, it becomes more powerful than ever: it can detect patterns with precision, predict outcomes, and automate complex decisions. The combination of AI and data is making every sector more powerful than before. Healthcare, finance, marketing, transportation, and education are all enjoying largely unrestricted freedom to use people's personal data to make a profit and strengthen their hold over ordinary people's choices.

Society today is not merely data-rich but also data-driven. Decisions in various sectors—from micro-targeted ads to national policy—depend on the supply of data, which is enabled and accelerated by AI systems. Thus, AI can be seen as a helping hand in converting raw data into usable information.

The interdependence of AI and Data

Data and AI are intertwined in this technological era. AI rests on machine learning, and it must be supplied with data in order to function, learn, and improve. In fact, AI systems do not possess intelligence of their own; rather, they are trained on large datasets that allow them to perform exceptionally well. AI depends chiefly on learning from the data it is given. For instance, in supervised learning, algorithms are trained on labelled data, such as thousands of images tagged as cats or dogs, to learn how to distinguish between them. In unsupervised learning, AI explores datasets to find hidden structures, such as customer segments or fraudulent transactions. In reinforcement learning, systems learn optimal actions through trial and error, using data from their environment. In all these cases, the quantity, quality, and diversity of data directly influence AI’s performance.
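
As an illustration of the supervised case described above, the short sketch below trains a simple classifier on labelled examples and evaluates it on data it has never seen. It is a minimal example that assumes scikit-learn is available; the synthetic dataset merely stands in for real labelled data such as tagged images.

```python
# Minimal supervised-learning sketch (illustrative only): train on
# labelled examples, then evaluate on held-out, unseen examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled data (e.g. images tagged "cat"/"dog").
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out a test set so performance is measured on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```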

A new phrase, “data is the new oil,” has emerged from this interdependence of data and AI: data fuels the new AI systems. Unlike oil, however, which can be used only once, data can be reused and repurposed to improve algorithms over time. The relationship between data and AI performance is broadly proportional: the more data an AI system is given, the more accurate and robust it tends to become. Examples include AI-based voice assistants such as Siri or Alexa, which improve their speech recognition by collecting and learning from users' speech data. This interdependence is not without challenges. Training AI on skewed or incomplete datasets can make it biased; since AI does not think for itself, its outputs may reinforce stereotypes or produce inaccurate predictions. Data privacy and protection also become critical concerns when AI handles the sensitive personal and financial information of large numbers of people.
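
The claim that more data tends to yield a more accurate model can be pictured with a rough learning-curve sketch like the one below. It again assumes scikit-learn and uses synthetic data purely for illustration; on real datasets the gains eventually level off.

```python
# Rough learning-curve sketch (illustrative): train the same model
# on progressively larger subsets and watch test accuracy change.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

for n in (50, 200, 1000, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:5d} examples -> test accuracy {acc:.3f}")
```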

Working with AI is not merely technical; it is strategic and ethical. Organizations that take the relationship between data and AI seriously gain a competitive edge over those that ignore its modern-day implications.

Key Applications of AI in a Data-Driven World

In this data-centric era, as data volumes escalate, the need for AI to analyse, comprehend, and respond to information intensifies, making AI the only feasible answer. Sectors such as business and marketing, healthcare, finance, and the environment rely on AI to make sense of their enormous databases. AI has completely changed how businesses make decisions, interact with customers, and run their operations. Personalisation plays a crucial role in influencing customer decisions. Streaming services like Netflix and e-commerce sites like Amazon use AI to analyse user behaviour and browsing history and provide tailored suggestions for content or products; with each engagement, these systems gain knowledge and improve over time. AI can also predict demand, market trends, or customer churn by analysing historical sales, transaction logs, and feedback data.
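
As a rough sketch of how such personalisation can work under the hood, the snippet below recommends items by comparing a user's ratings with those of similar users. It is a toy collaborative-filtering example with invented ratings, not a description of Netflix's or Amazon's actual systems.

```python
# Toy user-based collaborative filtering (illustrative only).
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, k=1):
    # Weight other users' ratings by their similarity to this user.
    sims = np.array([cosine(ratings[user], r) for r in ratings])
    sims[user] = 0.0
    scores = sims @ ratings
    scores[ratings[user] > 0] = -np.inf   # skip items already rated
    return np.argsort(scores)[::-1][:k]

print("recommended item(s) for user 0:", recommend(0))
```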

In healthcare, AI helps diagnose diseases. Algorithms trained on vast datasets of medical images such as X-rays and MRIs can diagnose even complex diseases with accuracy. AI can also analyse patient records and lifestyle data to predict the onset of chronic illnesses like diabetes or heart disease, allowing for early intervention. It likewise accelerates the identification of new drug compounds by simulating molecular behaviour, drastically reducing the time and cost of development.
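
A crude sketch of the record-based risk prediction mentioned above might look like the following. The feature names and values are entirely invented for illustration, and a real clinical model would require far more data, validation, and regulatory care.

```python
# Illustrative chronic-illness risk model on made-up tabular data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [age, BMI, fasting glucose]; label: 1 = developed diabetes.
X = np.array([
    [34, 22.0,  90], [51, 31.5, 140], [46, 28.0, 120], [29, 24.0,  95],
    [62, 33.0, 160], [55, 27.5, 130], [40, 21.0,  85], [67, 30.0, 150],
])
y = np.array([0, 1, 0, 0, 1, 1, 0, 1])

# Feature scaling is skipped here only to keep the sketch short.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[58, 32.0, 145]])
print("estimated risk:", model.predict_proba(new_patient)[0, 1])
```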

In the finance sector, AI evaluates risk, enhances security, and creates value through data intelligence. AI systems monitor thousands of transactions per second, flagging suspicious activity by identifying anomalies in behaviour patterns. Traditional credit scores are enhanced with AI models that analyse alternative data, such as social behaviour, spending patterns, or even smartphone usage, which is particularly helpful for assessing unbanked or underbanked individuals. AI also sits at the core of high-frequency trading systems, analysing real-time market data to make split-second investment decisions.
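
The fraud-flagging idea can be sketched with a generic anomaly detector such as the one below. This is a minimal illustration on synthetic transaction data, not a claim about how any particular bank's monitoring system works.

```python
# Minimal anomaly-detection sketch for transaction monitoring (illustrative).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Ordinary transactions: [amount, hour of day].
normal = np.column_stack([rng.normal(60, 20, 1000), rng.integers(8, 22, 1000)])
# A few unusual ones: very large amounts in the middle of the night.
odd = np.array([[950, 3], [1200, 2], [800, 4]])
transactions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.01, random_state=42).fit(transactions)
labels = detector.predict(transactions)          # -1 = flagged as anomalous
print("flagged transactions:", transactions[labels == -1][:5])
```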

Challenges and Ethical Concerns

Even though adopting AI in this data-driven world brings many benefits, it also raises significant ethical, social, and technical challenges. Privacy, fairness, accountability, and human dignity are not merely abstract ideas; their erosion has real-life consequences.

As stated above, AI works on the data provided to it, whether supplied by users or drawn from existing sources. A central issue is algorithmic bias. Since AI learns from data, any bias present in that data can be reflected and even amplified by the algorithms. Facial recognition systems, for example, have been found to misidentify women and people of colour at significantly higher rates because of their underrepresentation in training datasets. Hiring algorithms trained on historical company data may unfairly favour certain genders or backgrounds, perpetuating workplace inequality. Such biases can lead to discrimination in critical areas like hiring, law enforcement, healthcare, and lending.
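
One common first step in surfacing this kind of bias is simply to compare error rates across groups. The sketch below does this for a hypothetical set of predictions; the group labels and outcomes are invented purely for illustration.

```python
# Illustrative fairness check: compare error rates across groups.
from collections import defaultdict

# (group, true_label, predicted_label) for a hypothetical classifier.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

errors = defaultdict(lambda: [0, 0])   # group -> [mistakes, total]
for group, truth, pred in records:
    errors[group][0] += int(truth != pred)
    errors[group][1] += 1

for group, (wrong, total) in errors.items():
    print(f"{group}: error rate {wrong / total:.2f}")
```

A large gap between the printed rates is a signal that the model treats the groups differently and that the training data deserves scrutiny.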

AI systems frequently need access to behavioural, personal, and even biometric data because they are data-driven. This brings up important issues of autonomy, surveillance, and consent. Social media sites and smart devices are always gathering user data, frequently without explicit or informed consent. The erosion of civil liberties and widespread surveillance brought about by AI-driven technologies like facial recognition and predictive policing have provoked criticism. Strict data protection regulations are necessary since AI systems use geographic, financial, and personal health data.

Many artificial intelligence (AI) systems, particularly deep learning models, function as "black boxes": they produce results without clearly explaining how they were arrived at. Explainable AI is therefore essential in high-stakes fields like criminal justice and healthcare, where an AI's recommendation or forecast must make sense to a judge or a doctor. A lack of interpretability compromises accountability, making it difficult to spot mistakes or to challenge decisions. Explainable AI frameworks are being developed; however, it remains difficult to provide transparency without compromising efficiency.
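
One widely used family of techniques attributes a model's predictions to its input features. The sketch below uses permutation importance, a model-agnostic method available in scikit-learn, on a synthetic model; it is only one of many possible approaches to explainability.

```python
# Illustrative model explanation via permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```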

Repetitive, manual, and even some cognitive tasks are being replaced by AI-driven automation. Although efficiency is increased, the labor market is also disrupted. Employees in customer service, manufacturing, and logistics are most at risk from automation. The gap between those with access to AI and data literacy and those without is widening in the digital sphere. To guarantee a fair transition for displaced workers, governments and businesses must fund reskilling and upskilling initiatives.

The speed of AI innovation often outpaces the development of laws and regulations. As a result, there is a lack of global consensus on how to govern AI ethically and effectively. Some countries have proposed ethical AI guidelines, but enforcement remains weak. Cross-border data flows and AI applications in warfare or deepfakes highlight the need for international cooperation. Without strong legal frameworks, there is a risk that AI could be used unethically or irresponsibly, further widening inequality and destabilising institutions.

Regulatory and Governance Frameworks

As AI becomes increasingly embedded in our daily lives and critical infrastructure, there is a strong need to regulate and govern it through frameworks. While AI presents transformative opportunities, the ethical, social, and economic challenges it poses cannot be left to self-regulation by tech companies alone. Governments, international bodies, and civil society must play a central role in shaping AI’s development and deployment to ensure that it serves the public good.

Countries and regions are forming frameworks to regulate the use of artificial intelligence and data. The European Union has proposed the world’s first comprehensive AI regulation, the AI Act, which classifies AI systems by risk level (unacceptable, high, limited, and minimal). Under the Act, high-risk applications such as facial recognition, predictive policing, and biometric identification are subject to strict obligations regarding transparency, fairness, and human oversight.
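
A simple way to picture this tiered approach is as a lookup from application type to risk tier, as in the toy mapping below. The four tiers follow those named above, but the specific entries and obligations shown are illustrative assumptions, not a restatement of the Act's actual annexes.

```python
# Toy sketch of a risk-tier lookup inspired by the AI Act's four tiers.
# The entries below are illustrative assumptions, not legal classifications.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "facial_recognition": "high",
    "predictive_policing": "high",
    "biometric_identification": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "transparency, fairness and human-oversight requirements",
    "limited": "basic transparency (users must know they are dealing with an AI)",
    "minimal": "no specific obligations",
}

def assess(application: str) -> str:
    tier = RISK_TIERS.get(application, "unknown")
    return f"{application}: {tier} risk -> {OBLIGATIONS.get(tier, 'needs case-by-case review')}"

print(assess("predictive_policing"))
print(assess("spam_filter"))
```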

In India, the DPDP Act, 2023, has been introduced. It focuses primarily on data governance but indirectly shapes AI development by mandating lawful and fair processing of personal data, consent-based data collection, and user rights. As most AI systems rely on personal data, the law introduces a much-needed layer of accountability for developers and users of AI systems. However, the Act has also been criticised for giving too much leeway to AI development and data collection; critics argue that, instead of protecting ordinary people from the abuse of their personal data, it ends up shielding the data abusers.

The United States has taken a sectoral approach, with guidelines issued by various federal agencies (e.g., the FDA for medical AI, the FTC for consumer protection). However, calls for a unified national AI framework are growing louder amid concerns about algorithmic bias and disinformation. International organisations have turned to soft law to promote responsible AI. The OECD AI Principles promote AI that is innovative and trustworthy and that respects human rights and democratic values. UNESCO’s Recommendation on the Ethics of Artificial Intelligence was the first global agreement to provide a framework for the ethical use of AI across borders. Companies like Google, Microsoft, and OpenAI have published internal AI principles, but critics argue these are insufficient without third-party oversight.

Even with all this progress, regulation still lags behind innovation. Differing national laws are producing a fragmented regulatory landscape, and a lack of technical expertise and institutional capacity hampers enforcement. The way forward is governance that involves governments, academia, the private sector, and civil society, so that diverse perspectives are taken into account.

The Future of AI in a Data-Driven Society

The future of AI in a data-driven society is very promising. As we move forward, the role of AI will not be just about automation and prediction; it will also be about collaboration, creativity, and augmenting human potential in ways previously unimaginable. As computing power, data availability, and machine learning techniques progress, AI will become more integrated into every aspect of our daily lives.

As AI becomes more widespread, new capabilities keep emerging. Generative AI is one recent development: tools like ChatGPT, DALL·E, and Sora can create human-like text, images, and video, and similar systems can compose music. These systems are not just processing data; they are producing content, assisting writers, artists, marketers, coders, and researchers. In the future, AI will make our work easier by enhancing productivity through AI-assisted writing, coding, and design, and by transforming industries such as media, advertising, education, and entertainment.

The major fear surrounding AI is that it will replace human workers in many industries, but the future of AI will mainly be about collaborating with humans, and industries stand to gain the most where the two work together. Doctors will use AI to interpret scans but still make the final diagnosis. Teachers will use AI to track student performance but still guide learning. Policymakers will use AI to simulate economic impacts but still make value-based decisions.

As AI continues to evolve, there will be increasing emphasis on responsible and inclusive development: greater demand for ethical AI education and research, stricter global norms to ensure data protection, fairness, and transparency, and the inclusion of marginalized communities in the design and governance of AI systems to prevent digital inequality.

Conclusion

Artificial intelligence is more than simply a new technology in today's data-driven world; it is a force that is changing the way we work, live, and think. AI has demonstrated its potential to extract value from data, making processes more effective, individualized, and predictive in a variety of industries, including healthcare, banking, education, and entertainment. Its mutually beneficial interaction with data has opened up previously unimaginable possibilities. This change is not without its difficulties, though. If utilized improperly, the same data that gives AI its power can also be used to discriminate, monitor, and exclude people. The ethical considerations of accountability, privacy, and transparency serve as a reminder that human values are inextricably linked to the development of AI. As society becomes more reliant on intelligent systems, the need for robust governance frameworks, ethical standards, and inclusive development practices becomes paramount.

As we look to the future, the development of AI has both great potential and significant responsibility. We must make sure that AI is created to benefit people as well as markets, boosting human potential, lowering inequality, and upholding fundamental rights. Governments, technologists, civil society, and the international community must work together to achieve this. In the end, AI's function in a data-driven world is not limited to information processing; it also aids in the creation of more intelligent, compassionate, and sensible decisions. AI has the potential to be a formidable ally in creating a more intelligent, just, and sustainable world if used properly.
