Introduction
Military training, a cornerstone of the readiness and effectiveness of the Armed Forces,
stands out as a pivotal point of transformation. Traditional training, reliant on static,
analogue and often linear learning models, struggles to meet the growing demands of an environment where
threats are increasingly complex, multifaceted, and rapidly evolving. In this context, artificial intelligence (AI)
emerges as a powerful catalyst, enabling a shift from the traditional ‘analogue warrior’ to the
‘algorithmic strategist’, reflecting the integration of advanced computational models, data, and automated
systems into the processes of learning, cost management, and decision-making
(Biggs, 2025, p. 24).
This (r)evolution is marked by the parallel adoption of technologies such as
Live Virtual Constructive (LVC) synthetic environments, big data processing, decision-support tools,
and the application of machine learning algorithms for adaptive and personalised training.
The result is the creation of a new training ecosystem that offers greater realism, faster learning times,
improved performance, and a safer environment for testing and iterating missions
(Rozman, 2020).
At the international level, organisations such as NATO, as well as countries such as the United Kingdom
and the United States, have already introduced strategic frameworks that not only promote the development
of technological solutions but also impose strict ethical, governance, and security requirements to ensure
that the integration of AI into military training is conducted responsibly
(Burt, 2021, p. 1).
The article attempts to map the educational AI landscape in military training, focusing on AI’s
evolving role in Professional Military Education (PME). It adopts a qualitative, desk-based synthesis
methodology characteristic of defence and strategic studies.
It systematically reviews and synthesises recent (2018–2025) open-source academic literature,
official policy documents, and programmatic reports from NATO, the EU, UK Ministry of Defence,
and US Defence research bodies (e.g., ARL, DARPA, CDAO). Through comparative analysis, it contrasts
traditional analogue training with emerging AI-enabled algorithmic paradigms across pedagogical,
technological, and operational dimensions. International responsible-AI frameworks are examined,
while selected real-world cases (e.g., the US Adaptive Training System, Project Maven) serve as
illustrative proofs of concept. No primary data are collected; the contribution lies in
the structured integration of technical opportunities, governance requirements, cultural challenges,
and actionable policy recommendations derived from high-quality secondary sources. Its aim is to inform
PME practitioners, military leaders, and policymakers about the current technologies, methods, challenges,
and opportunities that AI offers in transforming military training.
From Analogue to Algorithmic Training
In the ‘analogue’ era, military training derived its effectiveness from fixed processes,
oral knowledge transmission, classic scenarios, and empirical practices that were repeated
regardless of the individual strengths or weaknesses of each trainee. Field exercises, the
traditional disciplined role of the instructor, and a focus on standardisation ensured a
foundation for readiness and cohesion but often failed to adequately address the interactive and
rapidly evolving nature of modern threats
(Rozman, 2020;
Biggs, 2025, pp. 23, 35).
In contrast, algorithmic training brings ‘digital transformation’ to the core of learning and
military preparation, integrating artificial intelligence systems, Live Virtual Constructive (LVC) learning environments,
high-fidelity simulations, and flexible, personalised learning programmes based on real-time analysis
of training performance data. Each training scenario can be tailored to the specific characteristics,
level, and cognitive speed of each trainee, while machine learning algorithms provide real-time feedback,
error detection, improvement suggestions, and dynamic difficulty scaling
(Rozman, 2020;
UK MOD, 2022, pp. 1, 7, 20, 26, 27, 31, 36, 52).
Additionally, these systems collect and analyse large volumes of data (telemetry,
behavioural metrics, decisions, reflexes, etc.), creating continuously updated, individualised learning
profiles that enable instructors to target specific areas of deficiency or potential improvement.
Furthermore, algorithmic training promotes ‘reflection’ (i.e., the critical analysis of trainees’
actions through evidence-based, automated After Action Reviews (AAR) and Data-driven Mentoring Cycles (DMC))
(Biggs, 2025, pp. 24, 29, 35;
UK MOD, 2022, pp. 1, 24-25, 27).
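To make the mechanism concrete, the individualised learner profile and dynamic difficulty scaling described above can be sketched as a simple feedback loop. The sketch below is an illustrative toy model rather than any fielded system; the profile fields, the exponential-moving-average update, and the thresholds are assumptions chosen purely for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    """Continuously updated, individualised profile built from training telemetry.
    (Hypothetical structure for illustration only.)"""
    skill_scores: dict = field(default_factory=dict)  # skill area -> proficiency in [0, 1]
    difficulty: float = 0.5                           # current scenario difficulty in [0, 1]

def update_profile(profile: LearnerProfile, skill: str, score: float,
                   alpha: float = 0.3) -> LearnerProfile:
    """Blend the latest performance score into the profile via an exponential
    moving average, then rescale scenario difficulty toward demonstrated ability."""
    prev = profile.skill_scores.get(skill, 0.5)
    updated = (1 - alpha) * prev + alpha * score
    profile.skill_scores[skill] = updated
    # Dynamic difficulty scaling: raise difficulty gradually on sustained success,
    # lower it gradually on sustained difficulty (avoiding abrupt jumps).
    if updated > 0.8:
        profile.difficulty = min(1.0, profile.difficulty + 0.1)
    elif updated < 0.4:
        profile.difficulty = max(0.1, profile.difficulty - 0.1)
    return profile

def weakest_skill(profile: LearnerProfile) -> str:
    """Instructors (or an automated scheduler) target the lowest-proficiency area."""
    return min(profile.skill_scores, key=profile.skill_scores.get)
```

In this toy loop, several strong runs in one area push difficulty up step by step, while `weakest_skill` points the next exercise at the area of greatest deficiency, mirroring the targeting of deficiencies described above.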
This new paradigm enables militaries and leaders to enhance their adaptability,
respond more effectively to unexpected events, and reduce the gaps between the training process
and operational reality, placing the human at the centre of an algorithmically enhanced system
for optimal decision-making
(Rozman, 2020;
UK MOD, 2022, pp. 1, 6-7, 15, 17, 20, 27).
This personalised, data-driven training continuously adjusts difficulty, scenarios, and feedback to each
learner’s performance, strengthening pattern recognition, situational awareness, and stress-tested judgment
in conditions that closely mirror operational complexity. In turn, commanders gain richer, real-time insight
into individual and collective readiness, enabling more timely, proportionate, and well-informed decisions under uncertainty.
International Context and Principles of Responsible Adoption
The rapid integration of artificial intelligence (AI) into the defence sector,
and particularly into military training, requires a clear and ambitious international framework
that not only promotes innovation and operational superiority but also ensures responsible management,
transparency, and accountability. Organisations such as NATO have already adopted specific strategies
that define core principles and protocols for the responsible adoption of AI, in accordance with
democratic values and fundamental human rights.
NATO Guidelines and Principles Framework
NATO’s AI strategy is grounded in six fundamental principles: lawfulness, responsibility, traceability,
reliability, governability, and bias mitigation. Every military AI application must align with these
principles, ensuring that the use of technology is permissible, controlled, and its outcomes predictable
and regulated. Interoperability among allies, protection of technological superiority and prevention
of malicious uses are central pillars to ensure AI enhances collective security rather than becoming
a source of instability
(Burt, 2021, pp. 2-3;
Clement, 2024, pp. 6, 12).
European Approach and Regulatory Framework
In parallel, the European Union has developed a human-centric and trustworthy approach to AI,
focusing on excellence, trust, and the safeguarding of democratic values. The EU’s AI action
plan includes measures such as developing big data infrastructures, enhancing access to high-quality data,
training AI talent, as well as establishing compliance and oversight frameworks, which also apply
to the defence sector. Adherence to detailed data governance and algorithm explainability is a
prerequisite for maintaining trust and avoiding biases and errors in the training process
(European Commission, 2025).
Necessary Actions for Responsible Adoption
To ensure that the use of AI in military training is not only effective but also certified as safe and ethical,
the following actions are required:
Transparency and Traceability: Comprehensive documentation of algorithmic decisions and training processes
to ensure they are easily analysed and audited by humans and institutions
(Burt, 2021, p. 2;
UK MOD, 2022, pp. 37, 45, 53).
NATO’s AI principles require logging data, models, and parameters behind targeting-support recommendations,
allowing commanders and legal advisors to reconstruct and audit decision paths in case of incidents.
Data Protection and Cybersecurity: Implementation of encryption technologies, access control policies,
and audit systems to prevent breaches or leaks of sensitive training and operational data
(UK MOD, 2022, pp. 24-25, 27, 36).
In the UK Defence AI Strategy, AI training systems processing operational data must be within a secure
digital backbone with role-based access and encryption, preventing telemetry exfiltration or reuse by adversaries.
Addressing Biases: Systematic testing and correction of algorithmic biases that could compromise
the effectiveness or fairness of training
(Burt, 2021, p. 2;
UK MOD, 2022, pp. 17, 27, 44, 48).
Datasets for AI assessment tools must be regularly audited to identify biases, such as over-rewarding
certain branches, regions, or career paths, with mitigation measures applied before large-scale use.
Human Oversight: Human judgment and responsibility remain a cornerstone at every stage, ensuring AI
serves as an aid rather than a substitute for critical decision-making
(Burt, 2021, p. 2;
Clement, 2024, p. 4).
NATO guidance mandates AI systems in mission rehearsal or targeting to show options and risks
but never carry out kinetic effects without human review and approval.
Interoperability and Collaboration: Cooperation with allies to adopt common standards and
interoperability specifications, eliminating risks of incompatibility and enhancing the collective impact of AI
(Burt, 2021, p. 2;
Clement, 2024, pp. i, 3, 6, 8, 11-12).
In NATO-led multinational exercises, interoperable data standards and shared AI
governance enable allies to integrate their simulation and decision-support tools into a
common LVC environment, avoiding isolated national systems.
Innovative, Feasible Solutions in Military Training with AI
Adaptive training using AI can serve as an innovative application in military education,
particularly within PME institutions, as it enables personalised learning based on the skills,
weaknesses, and learning pace of each trainee. Unlike traditional training, where everyone follows
a common structure of lessons and exercises, adaptive training relies on algorithms that analyse
performance data in real time and dynamically adjust the training content. For example,
if a soldier underperforms in a specific area, such as urban combat exercises,
the algorithm provides increased and targeted training in that domain. This approach saves resources
and time, as trainees do not expend energy on areas where they already excel
(Martin et al., 2020, p. 7),
while also contributing to stress reduction and improved psychological resilience,
as the difficulty level increases gradually rather than abruptly
(Groombridge, 2022).
A tangible example of such a training approach comes from the US Army Research Laboratory (ARL),
which has developed the Adaptive Training System (ATS). This system trains military units through
personalised exercises that enhance readiness at a faster rate compared to traditional methods
(ARL Public Affairs, 2018).
Such a training model could be particularly applicable where the rapid training of both conscripted soldiers
and permanent personnel poses a challenge, and the use of AI could ensure that each
trainee receives the training they genuinely need.
Furthermore, generative AI offers an unprecedented opportunity
to create realistic and multidimensional training scenarios, surpassing the limitations
of traditional military training. With the aid of algorithms such as Generative Adversarial Networks (GANs)
and Large Language Models (LLMs), images, videos, audio data, and complete mission scenarios can be
generated for use in Virtual Reality/Augmented Reality (VR/AR) training environments
(Roff, 2024).
In this way, a soldier can engage in a fully virtual battlefield, confront
‘learning’ virtual adversaries, or participate in rescue missions that are difficult to
replicate with traditional methods, such as natural disaster scenarios.
Generative AI can be integrated into military training to automate the production of simulation materials,
significantly reducing associated costs and enabling scalable exercise creation
(Biggs, 2025, p. 25;
Broo, 2023).
However, the quality of AI-generated content remains an area of active discussion,
with ongoing efforts to ensure human oversight and maintain instructional effectiveness.
In every Theatre of Operations, generative AI could prove particularly useful in training
personnel to handle tensions, crises, warfare, or search and rescue missions, as it can
create realistic and unpredictable scenarios that enhance trainees’ readiness with personalised
exercise content, simulating operational conditions that closely resemble reality.
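As a minimal illustration of scalable, parameterised scenario creation, the sketch below composes mission outlines from reusable building blocks. In a genuinely generative pipeline, an LLM would expand each outline into full briefing text and a GAN or rendering engine would produce matching VR/AR assets; every component list and field name here is hypothetical.

```python
import random

# Hypothetical scenario components, for illustration only.
TERRAINS = ["urban district", "mountain pass", "coastal strip", "flooded plain"]
MISSIONS = ["search and rescue", "convoy escort", "crisis de-escalation"]
COMPLICATIONS = ["degraded communications", "civilian crowds", "sudden weather change"]

def generate_scenario(rng: random.Random) -> dict:
    """Compose one training-scenario outline from parameterised building blocks.
    An AI-enabled pipeline would hand this outline to a generative model for
    full narrative and asset production."""
    return {
        "terrain": rng.choice(TERRAINS),
        "mission": rng.choice(MISSIONS),
        "complication": rng.choice(COMPLICATIONS),
    }

def generate_batch(seed: int, n: int) -> list[dict]:
    """Scalable exercise creation: reproducible batches of scenario outlines."""
    rng = random.Random(seed)  # seeded for repeatable exercise planning
    return [generate_scenario(rng) for _ in range(n)]
```

Seeding the generator keeps exercise batches reproducible for after-action comparison, a property a production system would likewise need.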
Beyond scenario creation, decision-making under pressure and uncertainty is another critical aspect
of military training, where decision-support tools can significantly enhance the effectiveness of
commanders and trainees. Specifically, these systems collect data from multiple sources,
such as satellite imagery, intelligence reports, videos, and sensors, and then use machine
learning algorithms to filter out ‘noise’ (irrelevant information) and highlight critical data
(Cummings, 2017, p. 3).
For example, potential ambushes can be identified early, their risk assessed and multiple
alternative action plans proposed. The US Department of Defense’s ‘Project Maven’ is a notable example,
utilising AI for image analysis from unmanned aerial vehicles, providing real-time information to analysts and staff
(CDAO, 2023).
Similarly, the Defence Advanced Research Projects Agency (DARPA) is developing
Deep Exploration and Filtering of Text (DEFT) for the automated processing of vast amounts of
data in operational environments
(DARPA, 2020).
Applying these technologies to military training offers a unique opportunity to practice in realistic conditions,
enhancing decision-making speed, and judgment accuracy, as the abundant existing data can support a wide range
of scenarios and highlight multiple potential outcomes based on decisions made.
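The noise-filtering step described above can be illustrated with a minimal statistical filter that surfaces only reports deviating strongly from the baseline. Operational systems of the Project Maven class rely on learned models over imagery and sensor feeds rather than this toy z-score rule, and the report fields below are assumptions for illustration.

```python
import statistics

def highlight_critical(reports: list[dict], z_threshold: float = 2.0) -> list[dict]:
    """Filter out 'noise': keep only reports whose signal value is a statistical
    outlier relative to the baseline of all incoming reports."""
    values = [r["signal"] for r in reports]
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation in the feed, so nothing stands out
    return [r for r in reports if abs(r["signal"] - mean) / stdev >= z_threshold]
```

A learned model would replace the z-score with a trained classifier or detector, but the shape of the pipeline, many inputs in and a few flagged items out for the analyst, is the same.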
Leadership, Command, and Resilience in the Era of Algorithmic Military Training
The introduction of Artificial Intelligence (AI) and algorithmic systems into military training marks a
fundamental shift in the approach to command and leadership. Traditionally, military command relied on
centralised hierarchical models, where authority and decision-making were concentrated at higher echelons,
with bureaucracy serving as a key regulator of processes and the flow of orders. The new algorithmic model
promotes decentralised and flexible command, granting smaller-scale units and field commanders access to
predictive data, automated decision-support tools, and continuous real-time feedback, thereby enhancing
speed and adaptability in combat
(Biggs, 2025, pp. 25-26).
The transition to algorithmic military training, however, faces significant challenges and resistance.
According to the Swedish Defence Research Agency (FOI), traditional military and political hierarchies,
which are rooted in values of discipline, strict control, and clearly defined roles, struggle to embrace
innovations that reduce centralised oversight or shift critical decision-making processes to lower echelons.
The embedded culture and bureaucratic processes can hinder the rapid adoption of algorithmic tools,
mainly owing to concerns over role changes and transparency, even as institutional mechanisms continue
to govern authority, access, and accountability. Additionally, the difficulty in
interpreting complex algorithmic models heightens uncertainty, while the need for transparency and
human oversight in critical decision situations makes full integration of such systems a challenging
process. These systemic and cultural factors are considered equally critical to the success or failure
of adoption of AI in the military domain, alongside technical and legal obstacles
(Svenmarck et al., 2024, pp. 1, 4-5, 9).
Furthermore, a key source of resistance is the human factor itself: ambiguity regarding the role of leaders,
fear of replacement or substitution by machines, and the need to maintain personal influence and leadership
identity can lead to conflicts and delays in adopting new practices. This resistance is compounded by
concerns over diminished authority, as leaders grapple with trusting AI-driven insights versus traditional
intuition. Cultural inertia within military structures, coupled with scepticism about AI reliability and
ethical implications, further hinders integration. Effective change management, including targeted training
and clear communication, is essential to align human expertise with algorithmic advancements,
fostering acceptance and collaboration
(Biggs, 2025, p. 29).
Successfully managing this change requires continuous training and sensitisation of leadership on
technological and administrative issues, the creation of participatory decision-making frameworks
that integrate human judgment with algorithmic tools and the gradual introduction of practices with
clear benefits and support from both central and regional leadership. This involves fostering digital
literacy among commanders, ensuring they understand AI capabilities and limitations. Collaborative
platforms must be developed to balance algorithmic precision with human intuition, while pilot
programmes can demonstrate tangible improvements in operational efficiency. Engaging stakeholders
at all levels builds confidence and mitigates resistance, ensuring a smooth transition to AI-enhanced systems
(Biggs, 2025, pp. 23-26, 30, 32, 34-35;
UK MOD, 2022, pp. 17-18, 29-30, 33).
At the same time, there is a need to foster a culture of trust in new technologies,
emphasising ethical use, performance monitoring, and ensuring full accountability, so that algorithmic
command functions as a ‘partner’ to human leadership rather than a replacement.
This requires transparent AI systems with traceable decision-making processes to build confidence.
Regular ethical training, robust oversight mechanisms, and clear accountability protocols are essential
to align AI with military values. By promoting collaboration between human expertise and algorithmic
insights, militaries can enhance decision-making while preserving leadership integrity and trust
(Burt, 2021, pp. 2, 4;
Clement, 2024).
Case Studies: Global Applications and Successes
United States – US Army, Synthetic Training Environment (STE)
One of the most advanced examples of AI implementation in military training is the US Army’s STE.
This is a unified architecture that integrates LVC elements, enabling the simulation of multiple scenarios in
real time with a high degree of realism, analysis and personalisation.
The STE allows trainees to experiment with tactics, record and analyse each step, while the platform’s
algorithm identifies errors, suggests improvements, and proposes personalised corrective and tactical
skill-enhancing exercises. Thanks to its fully connected infrastructure, real-time collection of
large-scale data (telemetry) is achieved, leading to trainee performance evaluations and rapid
adjustments to training plans. The psychological fidelity and continuous feedback contribute
to increased readiness for real operational conditions
(Rozman, 2020).
United Kingdom – Digital Targeting Web (DTW) and ASGARD
The United Kingdom has laid the foundations for algorithmic training through the DTW
programme and decision-making platforms such as ASGARD. The DTW connects sensors, tactical executors,
and strategic entities via a central algorithmic core, enabling rapid data exchange,
predictive decision support, and retrospective scenario analysis. In the ‘Spring Storm’ exercises,
the AI engine accelerated the risk assessment process, minimising the time for critical military
decisions, while simultaneously freeing human leadership for higher strategic judgments
(Defence Science and Technology Laboratory, 2025).
Thanks to the ASGARD programme, military training in the UK acquires a clear human-machine structure,
combining command intuition with algorithmic analysis and enabling multiple, high-tempo scenario iterations
and analysis of adverse or ambiguous battle data.
NATO – Allied Interoperability and Trials
NATO leads international initiatives for implementing AI in joint training through organisations
like DIANA and Allied Command Transformation (ACT), promoting principles of interoperability, ethics,
and responsibility. Algorithmic team-fusion scenarios enable member states to apply
common standards and systems, enhancing data security and decision compatibility.
A notable example is the NATO AUKUS AI trials, where human-centric and algorithmic teams were
simultaneously evaluated for speed and depth of decision-making, leading to evidence-based
recommendations for integrating AI into tactical learning cycles
(Etl, 2023, pp. 495-496, 503-504).
Defence Industry – Examples: Thales AIMTTS, BAE, etc.
The penetration of AI into the defence industry is considered of critical importance. Indicatively,
Thales, with its AI-based Mission Training and Tactical Solutions (AIMTTS) system, promotes real-time analyses
for advanced individual shooting, while BAE Systems develops AI models for enhanced judgment in combat,
quantifying performance under stress conditions and effectively analysing leadership indicators
(GDT, n.d.).
Innovations in the PME Structure
Integrating AI into PME involves mapping emerging technologies to operational realities across
all educational levels. Strategically, AI decision-support aids senior planning and wargaming.
Operationally, AI-powered LVC environments facilitate brigade and division exercises, simulating complex multi-domain operations.
Tactically, adaptive modules and AI co-coaching provide individual feedback, analytics,
and scenario-based training for small-unit leaders. Institutionally, AI tools assist curriculum design, assessments,
and training needs detection.
This multi-level approach ensures AI is not introduced in isolation but is harnessed to maximise PME outcomes,
cultivating agile, data-literate military professionals capable of leading in rapidly evolving operational environments.
The interconnection of military training with industrial innovation enables the rapid dissemination of results,
‘pilot-to-scale’ applications, and the enhancement of overall operational and administrative readiness.
Security and Ethics Policies in AI-Driven Military Training
The transition to ‘algorithmic’ military training introduces new security and ethical challenges, as sensitive data,
complex programs, and autonomous subsystems require stringent management and transparency. Protecting the integrity,
confidentiality, and availability of data during AI applications is critical and is ensured through the use of
modern cryptographic techniques and authorised auditing mechanisms.
AI-driven training programmes must ensure compliance with international security standards
(e.g., ISO 27001)
and regulations such as General Data Protection Regulation (GDPR), especially when handling personal or
operational data. The use of access management systems, audit logs, and threat monitoring ensures the prevention
of leaks and malicious use while maintaining the integrity of the learning process.
Military authorities adopt policies ensuring that decisions supported or made by AI systems remain under
human oversight, avoiding arbitrary or uncontrolled outcomes. Transparency in algorithm operations and traceability of
AI system actions are critical to safeguarding accountability and ensuring trust.
Training and educating personnel on cybersecurity and AI ethics are key success factors.
Programmes combining technical training with awareness of digital security, data protection,
and ethical issues serve as a model for major military forces. Continuous knowledge updates and participation
in cybersecurity exercises with integrated AI support (e.g., national-level exercises) enhance the resilience
and effectiveness of forces.
Challenges, Limitations, and Critical Considerations
Despite the promise of AI-driven military education, challenges such as data security remain. Handling sensitive data –
such as simulation logs, metrics, and scenarios – creates security risks if the data are improperly accessed or leaked.
Strict data governance and privacy tech are vital to prevent unauthorised exposure, manipulation, or breaches
that could harm readiness and trust
(UK MOD, 2022, pp. 21, 24-26, 36, 47-48).
Another persistent limitation is instructor and organisational readiness. Successful AI integration
requires not only technical infrastructure but also a culture shift among instructors and staff,
who may mistrust black-box algorithms or feel their expertise is threatened. Resistance can come
from concerns about deskilling, reduced discretion, or data-driven monitoring. Re-skilling,
transparent training, and emphasising human judgment are essential to reduce these risks and foster acceptance
(Biggs, 2025, p. 34).
Cost and infrastructure are barriers to equitable AI adoption in PME. Developing and maintaining
advanced training platforms require significant investment, which can worsen disparities for
less-resourced militaries. To foster inclusion, efforts should focus on scaling solutions,
pooling resources, or using multinational partnerships and open architectures.
Ethical considerations are essential, such as obtaining trainee consent, protecting data privacy,
and addressing bias to ensure trust and fairness. There is a risk of becoming overly dependent on algorithms,
which might overshadow human judgment. Therefore, it is crucial to balance AI insights with human oversight,
emphasising transparency and accountability for responsible implementation.
Methodologies for Designing AI-Driven Training Programs – Technological Architectures and Implementation Platforms
Automated After Action Review (A2AR) processes include the automatic extraction of inflection points,
comparison with Standard Operating Procedures and Rules of Engagement (SOP/ROE), and the creation of role-specific exercises.
These concepts could be ambitious future goals for a project such as the USMC’s Project Tripoli in the Live, Virtual, Constructive –
Training Environment (LVC-TE). They illustrate how an iterative, scalable, unified, all-domain training
environment can become a reality: strong analytics and AAR tools help units adapt faster,
refine tactics, and experiment with new concepts, boosting readiness and supporting future force design
(US Marine Corps, n.d.).
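The inflection-point extraction step of A2AR can be illustrated on a scalar performance series: the moments where the direction of change flips are natural candidates for after-action discussion. Real A2AR pipelines operate over rich multi-channel telemetry and SOP/ROE comparisons; this is a deliberately minimal sketch.

```python
def inflection_points(scores: list[float]) -> list[int]:
    """A2AR sketch: flag indices where unit performance changed direction
    (improvement turned to decline, or vice versa) -- candidate moments
    for after-action review discussion."""
    points = []
    for i in range(1, len(scores) - 1):
        before = scores[i] - scores[i - 1]
        after = scores[i + 1] - scores[i]
        if before * after < 0:  # the direction of change flipped at index i
            points.append(i)
    return points
```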
Technological Architecture and Digital Infrastructure
The concepts of data strategy and governance include defining a data model, categorising data,
controlling access, protecting privacy, and ensuring compliance with ethical and responsible use principles.
European initiatives emphasise the critical role of data governance in the development and use of artificial intelligence
(EU, 2022;
EU, 2024),
while NATO's digital transformation implementation strategy has for years been moving toward
strategic results and deliverables, such as the Alliance's Digital Initiatives,
Data-Centric Governance, and a Digitally Ready Workforce, adopting the capabilities of
Business Intelligence (BI) and AI across all areas of its activities
(NATO’s Digital Transformation Implementation Strategy, 2024).
As mentioned earlier, the integration of LVC-TE with common interoperability standards will
connect multiple units/levels to achieve ‘training at the point of need’
(US Marine Corps, n.d.).
This implies substantial investment in emerging and disruptive Communication and Information Systems (CIS) and technologies.
It is also worth mentioning that cutting-edge communication and information technologies developed
in specially designed infrastructures under military supervision (Cloud and Edge Computing On-Premises)
can contribute to providing the scalable computing power required for artificial intelligence models.
These include hybrid processing, combined with low-latency data transmission to training facilities/fields,
which enables secure centralised analysis. Indeed, projects to provide cloud services at Secret and
Above Secret classifications have already been implemented. Effective methods for transferring
data between classifications are essential for the operational use of artificial intelligence, which often
relies on sensitive data. Access to scalable computing power, including cloud and edge computing architectures,
is recognised as a key factor in the adoption of artificial intelligence in the defence sector, and by
extension, in the provision of training. As an example, the UK Ministry of Defence’s artificial
intelligence strategy invests in the ‘Digital Backbone’ and the ‘AI Skills Framework’ as foundations
for the further development and utilisation of AI services
(UK MOD, 2022, pp. 6, 17, 25).
Development–verification–deployment cycles (Validation and Verification, V&V),
model resilience testing, adversarial testing, and continuous monitoring with security/performance
indicators ensure the stable development of artificial intelligence for military purposes such as
professional military training. The architecture that adopts these practices is also underpinned by the
corresponding legal framework for Machine Learning Operations (MLOps)
and Development, Security, and Operations (DevSecOps) in the official documents of
the European Union AI Act
(European Union Artificial Intelligence Act, 2024b).
The EU AI Act compliance mapping is summarised below:

Practice: Validation and Verification (V&V)
Reference: Articles 43–51 (Conformity Assessment)
Obligation: High-risk AI systems must undergo conformity assessments to verify compliance with standards and validate reliable performance.

Practice: Model Resilience Testing
Reference: Article 15 (Accuracy, Robustness, Cybersecurity)
Obligation: Systems must be tested for robustness and accuracy, ensuring resilience against errors and unexpected conditions.

Practice: Adversarial Testing
Reference: Article 15(2)
Obligation: AI systems must be protected against manipulation and adversarial attacks, with defences documented.

Practice: Continuous Monitoring with KPIs
Reference: Article 89 (Post-Market Monitoring)
Obligation: Providers must continuously monitor performance and security metrics, detect risks, and report incidents to authorities.
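The continuous-monitoring obligation can be pictured as a small KPI watchdog that compares observed metrics against agreed thresholds and flags breaches for investigation and, where required, reporting. The metric names and threshold values below are illustrative assumptions, not figures from the Act.

```python
def check_kpis(metrics: dict, thresholds: dict) -> list[str]:
    """Post-market-monitoring sketch: compare observed KPIs against minimum
    acceptable thresholds and return the violated indicators, which a provider
    would then investigate and, where required, report to authorities."""
    violations = []
    for name, minimum in thresholds.items():
        observed = metrics.get(name)
        # A missing metric is itself a monitoring failure, so flag it too.
        if observed is None or observed < minimum:
            violations.append(name)
    return sorted(violations)
```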
Scenario Design and ‘Smart’ Opponents
Opposing Force (OPFOR)-focused systems learn from their experiences,
gradually increasing complexity and introducing rare threats. Industry and research
describe practices for integrating intelligent systems for more demanding training
(Stensrud, 2024).
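An adaptive OPFOR of this kind can be sketched as a simple curriculum that escalates after trainee success, backs off after failure, and occasionally injects a low-probability, high-impact threat. The levels, probabilities, and threat labels below are illustrative assumptions rather than any documented system.

```python
import random

class AdaptiveOpfor:
    """Toy opposing force: escalates with trainee success and occasionally
    injects a rare threat, as adaptive-OPFOR practice describes."""

    def __init__(self, seed: int, rare_threat_prob: float = 0.05):
        self.rng = random.Random(seed)
        self.level = 1
        self.rare_threat_prob = rare_threat_prob

    def record_engagement(self, trainee_won: bool) -> None:
        # Escalate after trainee success; back off after failure (floor at 1).
        self.level = self.level + 1 if trainee_won else max(1, self.level - 1)

    def next_threat(self) -> str:
        # Occasionally present a rare, high-impact event regardless of level.
        if self.rng.random() < self.rare_threat_prob:
            return "rare_threat"
        return f"standard_threat_level_{self.level}"
```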
Mission Rehearsal Twin software applies digital-twin simulation
to combine 3D terrain analysis, logistics, and Command and Control (C2) for rapid war
games and quick identification of weaknesses before ‘live’ operations. NATO is
exploring the use of Mission Rehearsal Twins – digital twin technologies combined
with modelling and simulation (M&S) – to improve operational readiness, interoperability,
and mission rehearsal exercises such as the Coalition Warrior Interoperability Exercise (CWIX)
across multinational forces, while also posing challenges in data integration,
cybersecurity, scalability, and trust in simulation
(Erickson, Pullen, and Ruth, 2024;
Kapteyn and Willcox, 2021).
Policies, Ethics and the Human Factor
Responsible Artificial Intelligence (RAI) from design requires embedding ethical, legal,
and operational safeguards into AI systems from the outset. It involves thorough risk analysis
to mitigate bias, misuse, and unintended consequences, while maintaining human oversight through
accountability measures, intervention procedures, and fail-safes. NATO ensures international
alignment by adhering to frameworks such as the Organisation for Economic Co-operation and Development (OECD)
AI Principles, UNESCO’s Ethics Recommendation, and the EU AI Act, promoting transparency,
fairness, and democratic values. Finally, RAI is applied through operational integration,
embedding responsible practices into mission planning, digital twin simulations, autonomous systems,
and cyber defence to guarantee trustworthy and effective use in real-world operations
(NDC PAO, 2025).
The Training the Trainers process is vital for equipping leaders and instructors to ensure
the responsible and effective use of AI in defence and security. As
Syed (2025)
highlights, this requires structured skills programmes that encompass prompting techniques,
explainable AI (XAI) for transparency, data literacy to support informed decision-making, and a
strong grasp of the legal frameworks governing AI. Beyond classroom learning, these programmes
employ a range of methodologies, from low-fidelity scenario-based exercises to high-fidelity
war game simulations that replicate complex, multi-domain operations. This layered approach
provides trainers with both technical expertise and practical experience, enabling them to instil trust,
accountability, and operational readiness in the personnel they guide.
Implementation Plan (12 Months)
Months 0–4: Establish robust data governance policies, develop the foundational Retrieval-Augmented Generation (RAG) system,
and define clear guidelines for AI usage in military training contexts. Launch pilot projects for
AI co-coach systems in controlled, unclassified settings.
Months 4–8: Integrate LVC synthetic environments with AI-augmented training tools. Implement
A2AR pipelines and develop XAI rubrics to ensure transparency in AI recommendations. Conduct training
workshops for instructors and personnel on AI ethics, tools, and best practices.
Months 8–12: Deploy advanced mission rehearsal twins and sophisticated adversarial OPFOR AI agents
to create dynamic, challenging training scenarios. Initiate external evaluation and certification
processes aligned with allied standards to validate system effectiveness and compliance.
Conclusions
Military training is undergoing a profound transformation with the shift from traditional analogue
models to AI-enhanced algorithmic frameworks. This digital revolution enables the creation of more
realistic, adaptive, and data-driven training environments that better address the complexity and
dynamism of modern threats. Through groundbreaking technologies such as LVC environments,
large-scale telemetry databases, and machine learning-based analytics tools, units and leaders can
accelerate the learning cycle and enhance the effectiveness of decision-making processes.
At the same time, the integration of AI demands a rigorous framework of ethics, governance, and
security, as outlined by the allied strategies of NATO, the EU, the United Kingdom, and the United States.
Traceability, bias mitigation, and human oversight systems are essential to ensure AI serves as a
supportive tool rather than a replacement for human judgment. Aligning these parameters ensures
transparency, trust and interoperability among allies.
Finally, international case studies demonstrate that AI in training programmes delivers
multidimensional benefits, reduces training time, enhances psychological fidelity, and enables
personalised learning. The future of military training relies on effective human-machine collaboration,
creating a hybrid model that transforms knowledge, judgment, and action to achieve the highest levels
of operational readiness and leadership.
AI Statement: In this work, the authors used a combination of Adobe Reader AI, Microsoft
Copilot, and ChatGPT to extract summaries from source documents and books, as well as DeepL and Grammarly
for language improvement and translations.