AI in the Military Domain: Inria Chile at the Paris Peace Forum 2025

Changed on 12/11/2025
Inria Chile participated in the Paris Peace Forum, a key event for global dialogue and peace held annually in France since 2018. Nayat Sánchez-Pi, Director of Inria Chile and the Franco-Chilean Binational Center on Artificial Intelligence, took part in the workshop held on Day 0 of the Forum, which focused on the impact of digital technologies, particularly artificial intelligence, in the military domain.

The 8th edition of the Paris Peace Forum, an annual initiative launched by French President Emmanuel Macron in 2018, was held on October 29 and 30, 2025. The event has established itself as a key platform for global diplomacy and governance. Under the theme "New Coalitions for Peace, People, and the Planet," the Paris Peace Forum 2025 emphasized the urgency of forging new alliances to address global challenges in a context marked by political fragmentation and the proliferation of conflicts.

Within this framework, the workshop “From disruptive code to restrained conflict: Governing the war-to-peace continuum of emerging technologies” took place. The session explored how recent conflicts have increasingly integrated information technologies, cyber capabilities, and AI-based systems into military operations, posing new challenges for international law and the protection of civilian populations.

Nayat Sánchez-Pi, Director of Inria Chile and the Franco-Chilean Binational Center on Artificial Intelligence, participated in the workshop’s first roundtable, titled “Governing emerging means of warfare”. She was joined by Sofía Romansky, Strategic Analyst at the Hague Centre for Strategic Studies; Saeed Aldhaheri, Director of the Center for Future Studies at the University of Dubai; Bruce Watson, President of Qorsa Labs; and Ernst Noorman, Ambassador for Cyber Affairs of the Netherlands.

The session focused on AI as a transformative technology shaping civilization, with profound implications for the military domain. It reviewed the international political momentum toward the responsible use of AI in military contexts and discussed key findings and recommendations from strategic reports aimed at enhancing international cooperation and governance mechanisms.


The New Battlefield: Cognitive Warfare

In her remarks, Nayat Sánchez-Pi emphasized that today’s most urgent security challenge is no longer limited to kinetic force but has evolved into "cognitive warfare." She argued that modern conflicts are fought on the cognitive plane, seeking to subvert network users rather than merely disrupting the networks themselves.

The central objective of this new form of conflict, she noted, is to manipulate the human decision-making space. This is achieved by fostering distrust, destabilizing institutions, and distorting societal decisions. It is a battle fought with digital information, in which the civilian population, its data, and its attention are all part of the frontline.

The AI Paradox: Dual Use

The Director of Inria Chile commented that artificial intelligence lies at the heart of this conflict, presenting a profound dual-use dilemma.

  • As a shield: AI is our most promising defense, used for real-time cyber defense, anomaly detection, and the automation of protection systems.
  • As a weapon: Adversaries use it to automate sophisticated attacks designed to "hack people." These attacks exploit the cognitive biases inherent in the human mind and turn our own digital footprint against us.

According to Sánchez-Pi, the consequence is a dual threat: not only are technical systems at risk, but so are rational judgment and democratic functioning.


Responsible by Design: the First Report on Risks, Opportunities, and Governance of AI in the Military Domain

The report "Responsible by Design: Strategic Guidance Report on the Risks, Opportunities, and Governance of Artificial Intelligence in the Military Domain" is the result of 18 months of work by experts from the Global Commission on Responsible AI in the Military Domain (GC REAIM). This Commission is an initiative launched by the Ministry of Foreign Affairs of the Kingdom of the Netherlands, with the Hague Centre for Strategic Studies (HCSS) serving as its Secretariat.

Among the contributors, the work of Nayat Sánchez-Pi, Director of Inria Chile, stands out: she has been a member of the Commission's Expert Advisory Group since March 2024. For Sánchez-Pi, the involvement of institutions like Inria in preparing reports such as this one is essential.

"Europe, and France in particular with institutions like Inria, has a unique role to play: one focused on digital sovereignty grounded in human values.

We are major actors in AI research, which gives us the technical legitimacy to shape the conversation. Our priority is to ensure that the standards, definitions, and governance models for military AI align with our core values of human agency and international law."

Nayat Sánchez-Pi, Director of Inria Chile and the Franco-Chilean Binational Center on Artificial Intelligence

The Report addresses the integration of artificial intelligence (AI) in the military domain, highlighting the profound implications this has for the conduct of conflicts and the management of international peace. It acknowledges that AI, particularly machine learning (ML), raises serious ethical, legal, and operational concerns that challenge conventional notions of human agency and accountability. To mitigate these risks, the Commission advocates for a "responsibility by design" approach, ensuring that ethical and legal compliance is embedded from the earliest stages of development and throughout the entire lifecycle of AI systems.

The document was launched by the Netherlands on September 25, 2025, in New York, during the 80th United Nations General Assembly, as part of the UN Security Council debate on "Artificial Intelligence and International Peace and Security," presided over by the Republic of Korea.

It articulates three guiding principles: compliance with international law and ethical principles; adherence to structured design and testing processes to safeguard human agency (ensuring that ultimate responsibility for critical decisions remains with humans); and continuous training and institutional support for operators.

The Report calls for "responsibility by design," with ethical and legal compliance integrated throughout the lifecycle of AI systems. It highlights five recommendations for responsible practices at every level of the lifecycle of AI sociotechnical systems, including organizational design considerations, together with specific guidance for states, armed forces, and industry, to advance effective governance of AI in this sensitive domain:

  • Anchor the responsible development and use of AI in the military domain in relevant and applicable ethical principles and international law.
  • Agree, at a legally binding level, that the decision to authorize the use of nuclear weapons should remain under human control.
  • Implement national policies that guarantee human responsibility across the AI system lifecycle and that are demonstrably grounded in human-centric training and rigorous testing, evaluation, validation, and verification.
  • Establish a permanent, inclusive, multi-stakeholder, and multilateral dialogue on the responsible integration of AI into the military domain.
  • Develop a centralized, multi-stakeholder expert network on AI in the military domain to disseminate knowledge for capability and capacity building.

The primary objective of these recommendations is to harness AI's transformative potential while addressing the associated risks, by demarcating acceptable and unacceptable uses and establishing criteria for institutional design.