Journal on Baltic Security


From the Analogue Warrior to the Algorithmic Strategist: The Evolution of Military Training through Artificial Intelligence
Volume 11, Issue 2 (2025), pp. 54–78
Gerassimos Karabelias, Konstantinos Zafeiris, Georgios Chontos

https://doi.org/10.57767/jobs_2025_011
Pub. online: 30 December 2025   Type: Policy Analysis   Open Access

Received
11 October 2025
Accepted
9 November 2025
Published
30 December 2025

Abstract

NATO’s 2030 digital transformation demands innovative approaches to harness specialised capabilities and ensure readiness against hybrid threats. Cyber reserves are pivotal in bridging military and civilian technologies, enabling digital objectives, and countering sophisticated tactics like cyberattacks and GPS jamming. These reserves integrate military training with civilian expertise, leveraging private sector knowledge – controlling 90% of critical infrastructure – as a strategic asset. They serve as a force multiplier in digital transformation, connect industry and technology to military planning, and enable rapid deployment of advanced capabilities like cloud, AI, and data analytics. Cyber reserves enhance a country’s response to hybrid threats by improving vulnerability assessment, attribution, and civil-military coordination, emphasising societal resilience and military preparedness. They foster digital literacy, cultural change, and partnerships with industry and academia to strengthen holistic defence. However, challenges include standardising training, securing information exchange, and ensuring flexible service models that respect civilian commitments and national sovereignty. By addressing these, a military can calibrate cyber reserves to bolster defences and accelerate digital transformation, creating a full-spectrum, multi-domain force capable of countering 21st-century hybrid threats in both digital and physical spaces.

Introduction

Military training, a cornerstone of the readiness and effectiveness of the Armed Forces, stands out as a pivotal point of transformation. Traditional training, reliant on static, analogue, and often linear learning models, struggles to meet the growing demands of an environment where threats are increasingly complex, multifaceted, and rapidly evolving. In this context, artificial intelligence (AI) emerges as a powerful catalyst, enabling a shift from the traditional ‘analogue warrior’ to the ‘algorithmic strategist’, reflecting the integration of advanced computational models, data, and automated systems into the processes of learning, cost management, and decision-making (Biggs, 2025, p. 24).
This (r)evolution is marked by the parallel adoption of technologies such as Live Virtual Constructive (LVC) synthetic environments, big data processing, decision-support tools, and the application of machine learning algorithms for adaptive and personalised training. The result is the creation of a new training ecosystem that offers greater realism, faster learning times, improved performance, and a safer environment for testing and iterating missions (Rozman, 2020).
At the international level, organisations such as NATO, as well as countries such as the United Kingdom and the United States have already introduced strategic frameworks that not only promote the development of technological solutions but also enforce strict ethical, governance, and security frameworks to ensure that the integration of AI into military training is conducted responsibly (Burt, 2021, p. 1).
The article attempts to map the educational AI landscape in military training, focusing on AI’s evolving role in Professional Military Education (PME). It adopts a qualitative, desk-based synthetic methodology characteristic of defence and strategic studies. It systematically reviews and synthesises recent (2018–2025) open-source academic literature, official policy documents, and programmatic reports from NATO, the EU, the UK Ministry of Defence, and US defence research bodies (i.e. ARL, DARPA, CDAO). Through comparative analysis, it contrasts traditional analogue training with emerging AI-enabled algorithmic paradigms across pedagogical, technological, and operational dimensions. International responsible-AI frameworks are examined, while selected real-world cases (e.g., US Adaptive Training System, Project Maven) serve as illustrative proofs of concept. No primary data are collected; the contribution lies in the structured integration of technical opportunities, governance requirements, cultural challenges, and actionable policy recommendations derived from high-quality secondary sources. Its aim is to inform PME practitioners, military leaders, and policymakers about current technologies, methods, challenges, and opportunities that AI offers in transforming military training.

From Analogue to Algorithmic Training

In the ‘analogue’ era, military training derived its effectiveness from fixed processes, oral knowledge transmission, classic scenarios, and empirical practices that were repeated regardless of the individual strengths or weaknesses of each trainee. Field exercises, the traditional disciplined role of the instructor, and a focus on standardisation ensured a foundation for readiness and cohesion but often failed to adequately address the interactive and rapidly evolving nature of modern threats (Rozman, 2020; Biggs, 2025, pp. 23, 35).
In contrast, algorithmic training brings ‘digital transformation’ to the core of learning and military preparation, integrating artificial intelligence systems, ‘constructivist’ learning environments (e.g., LVC), high-fidelity simulations and flexible, personalised learning programmes based on real-time analysis of training performance data. Each training scenario can be tailored to the specific characteristics, level, and cognitive speed of each trainee, while machine learning algorithms provide real-time feedback, error detection, improvement suggestions, and dynamic difficulty scaling (Rozman, 2020; UK MOD, 2022, pp. 1, 7, 20, 26, 27, 31, 36, 52).
Additionally, these systems collect and analyse large volumes of data (telemetry, behavioural metrics, decisions, reflexes, etc.), creating continuously updated, individualised learning profiles that enable instructors to target specific areas of deficiency or potential improvement. Furthermore, algorithmic training promotes ‘reflection’, (i.e., the critical analysis of trainees’ actions through evidence-based, automated After Action Reviews (AAR) and Data-driven Mentoring Cycles (DMC)) (Biggs, 2025, pp. 24, 29, 35; UK MOD, 2022, pp. 1, 24-25, 27).
This new paradigm enables militaries and leaders to enhance their adaptability, respond more effectively to unexpected events, and reduce the gaps between the training process and operational reality, placing the human at the centre of an algorithmically enhanced system for optimal decision-making (Rozman, 2020; UK MOD, 2022, pp. 1, 6-7, 15, 17, 20, 27). This personalised, data-driven training continuously adjusts difficulty, scenarios, and feedback to each learner’s performance, strengthening pattern recognition, situational awareness, and stress-tested judgment in conditions that closely mirror operational complexity. In turn, commanders gain richer, real-time insight into individual and collective readiness, enabling more timely, proportionate, and well-informed decisions under uncertainty.
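The adaptive loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a fielded system: the class name, the rolling performance window, and the ±0.10 tolerance band around the target success rate are all invented for the sketch.

```python
from collections import deque

class AdaptiveScenarioEngine:
    """Illustrative sketch: adjusts scenario difficulty from rolling trainee telemetry."""

    def __init__(self, window: int = 5, target: float = 0.75):
        self.scores = deque(maxlen=window)   # rolling window of recent performance (0.0-1.0)
        self.target = target                 # centre of the desired success-rate band
        self.difficulty = 1                  # scenario difficulty tier (1 = easiest)

    def record(self, score: float) -> None:
        self.scores.append(score)

    def next_difficulty(self) -> int:
        if not self.scores:
            return self.difficulty
        avg = sum(self.scores) / len(self.scores)
        if avg > self.target + 0.10:         # consistently above target: raise difficulty
            self.difficulty += 1
        elif avg < self.target - 0.10:       # struggling: ease off gradually, never abruptly
            self.difficulty = max(1, self.difficulty - 1)
        return self.difficulty
```

The gradual one-tier step in either direction mirrors the point made earlier that difficulty should increase progressively rather than abruptly, supporting stress management as well as skill growth.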

International Context and Principles of Responsible Adoption

The rapid integration of artificial intelligence (AI) into the defence sector, and particularly into military training, requires a clear and ambitious international framework that not only promotes innovation and operational superiority but also ensures responsible management, transparency, and accountability. Organisations such as NATO have already adopted specific strategies that define core principles and protocols for the responsible adoption of AI, in accordance with democratic values and fundamental human rights.

NATO Guidelines and Principles Framework

NATO’s AI strategy is grounded in six fundamental principles: lawfulness, responsibility, traceability, reliability, governability, and bias mitigation. Every military AI application must align with these principles, ensuring that the use of technology is permissible, controlled, and its outcomes predictable and regulated. Interoperability among allies, protection of technological superiority, and prevention of malicious uses are central pillars to ensure AI enhances collective security rather than becoming a source of instability (Burt, 2021, pp. 2-3; Clement, 2024, pp. 6, 12).

European Approach and Regulatory Framework

In parallel, the European Union has developed a human-centric and trustworthy approach to AI, focusing on excellence, trust, and the safeguarding of democratic values. The EU’s AI action plan includes measures such as developing big data infrastructures, enhancing access to high-quality data, training AI talent, as well as establishing compliance and oversight frameworks, which also apply to the defence sector. Adherence to detailed data governance and algorithm explainability is a prerequisite for maintaining trust and avoiding biases and errors in the training process (European Commission, 2025).

Necessary Actions for Responsible Adoption

To ensure that the use of AI in military training is not only effective but also certified as safe and ethical, the following actions are required:
Transparency and Traceability: Comprehensive documentation of algorithmic decisions and training processes to ensure they are easily analysed and audited by humans and institutions (Burt, 2021, p. 2; UK MOD, 2022, pp. 37, 45, 53). NATO’s AI principles require logging data, models, and parameters behind targeting-support recommendations, allowing commanders and legal advisors to reconstruct and audit decision paths in case of incidents.
Data Protection and Cybersecurity: Implementation of encryption technologies, access control policies, and audit systems to prevent breaches or leaks of sensitive training and operational data (UK MOD, 2022, pp. 24-25, 27, 36). In the UK Defence AI Strategy, AI training systems processing operational data must be within a secure digital backbone with role-based access and encryption, preventing telemetry exfiltration or reuse by adversaries.
Addressing Biases: Systematic testing and correction of algorithmic biases that could compromise the effectiveness or fairness of training (Burt, 2021, p. 2; UK MOD, 2022, pp. 17, 27, 44, 48). Datasets for AI assessment tools must be regularly audited to identify biases, such as over-rewarding certain branches, regions, or career paths, with mitigation measures applied before large-scale use.
Human Oversight: Human judgment and responsibility remain a cornerstone at every stage, ensuring AI serves as an aid rather than a substitute for critical decision-making (Burt, 2021, p. 2; Clement, 2024, p. 4). NATO guidance mandates AI systems in mission rehearsal or targeting to show options and risks but never carry out kinetic effects without human review and approval.
Interoperability and Collaboration: Cooperation with allies to adopt common standards and interoperability specifications, eliminating risks of incompatibility and enhancing the collective impact of AI (Burt, 2021, p. 2; Clement, 2024, pp. i, 3, 6, 8, 11-12). In NATO-led multinational exercises, interoperable data standards, and shared AI governance enable allies to integrate their simulation and decision-support tools into a common LVC environment, avoiding isolated national systems.
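As a toy illustration of the bias-audit action listed above, the following sketch flags groups that are over- or under-represented in an assessment dataset relative to parity. The record key `"branch"` and the tolerance threshold are hypothetical; a real audit would use statistical tests and domain-defined fairness criteria rather than this first-pass screen.

```python
from collections import Counter

def audit_representation(records, key="branch", tolerance=0.5):
    """Flag groups whose share deviates from an equal share by more than
    `tolerance` (relative to parity), as a first-pass bias screen."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    parity = 1 / len(counts)              # share each group would hold under parity
    flags = {}
    for group, n in counts.items():
        share = n / total
        if abs(share - parity) > tolerance * parity:
            flags[group] = round(share, 3)
    return flags                          # groups needing mitigation before large-scale use
```

Any group returned would then trigger the mitigation measures described above before the assessment tool is used at scale.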

Innovative, Feasible Solutions in Military Training with AI

Adaptive training using AI can serve as an innovative application in military education, particularly within PME institutions, as it enables personalised learning based on the skills, weaknesses, and learning pace of each trainee. Unlike traditional training, where everyone follows a common structure of lessons and exercises, adaptive training relies on algorithms that analyse performance data in real time and dynamically adjust the training content. For example, if a soldier underperforms in a specific area, such as urban combat exercises, the algorithm provides increased and targeted training in that domain. This approach saves resources and time, as trainees do not expend energy on areas where they already excel (Martin et al., 2020, p. 7), while also contributing to stress reduction and improved psychological resilience, as the difficulty level increases gradually rather than abruptly (Groombridge, 2022).
A tangible example of such a training approach comes from the US Army Research Laboratory (ARL), which has developed the Adaptive Training System (ATS). This system trains military units through personalised exercises that enhance readiness at a faster rate compared to traditional methods (ARL Public Affairs, 2018). Such a training model could be particularly applicable where the rapid training of both conscripted soldiers and permanent personnel poses a challenge, and the use of AI could ensure that each trainee receives the training they genuinely need.
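The targeting logic sketched in the preceding paragraphs — direct scarce training time to the domains where a trainee underperforms — can be shown in a few lines. The domain names, scores, and the 0.7 proficiency threshold are invented for illustration and are not drawn from ATS.

```python
def plan_training(profile: dict[str, float], threshold: float = 0.7, slots: int = 3) -> list[str]:
    """Allocate limited training slots to the domains where the trainee is weakest,
    so no time is spent on areas already above the proficiency threshold."""
    gaps = {domain: score for domain, score in profile.items() if score < threshold}
    return sorted(gaps, key=gaps.get)[:slots]   # weakest domains first

# Hypothetical learner profile built from performance telemetry.
profile = {"urban_combat": 0.55, "marksmanship": 0.88, "first_aid": 0.62, "navigation": 0.91}
```

Here the planner would schedule urban combat and first aid and skip marksmanship and navigation, matching the resource-saving rationale cited from Martin et al. (2020).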
Furthermore, generative artificial intelligence offers an unprecedented opportunity to create realistic and multidimensional training scenarios, surpassing the limitations of traditional military training. With the aid of algorithms such as Generative Adversarial Networks (GANs) and Large Language Models (LLMs), images, videos, audio data, and complete mission scenarios can be generated for use in Virtual Reality/Augmented Reality (VR/AR) training environments (Roff, 2024). In this way, a soldier can engage in a fully virtual battlefield, confront ‘learning’ virtual adversaries, or participate in rescue missions that are difficult to replicate with traditional methods, such as natural disaster scenarios.
Generative AI can be integrated into military training to automate the production of simulation materials, significantly reducing associated costs and enabling scalable exercise creation (Biggs, 2025, p. 25; Broo, 2023). However, the quality of AI-generated content remains an area of active discussion, with ongoing efforts to ensure human oversight and maintain instructional effectiveness. In every Theatre of Operations, generative AI could prove particularly useful in training personnel to handle tensions, crises, warfare, or search and rescue missions, as it can create realistic and unpredictable scenarios that enhance trainees’ readiness with personalised exercise content, simulating operational conditions that closely resemble reality.
Beyond scenario creation, decision-making under pressure and uncertainty is another critical aspect of military training, where decision-support tools can significantly enhance the effectiveness of commanders and trainees. Specifically, these systems collect data from multiple sources, such as satellite imagery, intelligence reports, videos, and sensors, and then use machine learning algorithms to filter out ‘noise’ – meaning irrelevant information – and highlight critical data (Cummings, 2017, p. 3). For example, potential ambushes can be identified early, their risk assessed, and multiple alternative action plans proposed. The US Department of Defense’s ‘Project Maven’ is a notable example, utilising AI for image analysis from unmanned aerial vehicles, providing real-time information to analysts and staff (CDAO, 2023).
Similarly, the Defence Advanced Research Projects Agency (DARPA) is developing Deep Exploration and Filtering of Text (DEFT) for the automated processing of vast amounts of data in operational environments (DARPA, 2020). Applying these technologies to military training offers a unique opportunity to practice in realistic conditions, enhancing decision-making speed, and judgment accuracy, as the abundant existing data can support a wide range of scenarios and highlight multiple potential outcomes based on decisions made.
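The noise-filtering step described above can be caricatured as a relevance-scoring pass over incoming reports. The keyword heuristic below deliberately stands in for a trained model (which is what systems like Project Maven or DEFT actually use); the report texts and keywords are fabricated for the sketch.

```python
def triage_reports(reports, keywords, top_k=2):
    """Rank multi-source reports by a crude relevance score and surface only the
    top_k, discarding the 'noise' a commander should not have to read."""
    def score(text: str) -> int:
        t = text.lower()
        return sum(t.count(k) for k in keywords)   # naive stand-in for an ML relevance model
    ranked = sorted(reports, key=score, reverse=True)
    return [r for r in ranked[:top_k] if score(r) > 0]

reports = [
    "Routine logistics convoy departed on schedule.",
    "Possible ambush positions observed near the northern bridge.",
    "Sensor feed shows vehicles massing at the bridge approach.",
]
hits = triage_reports(reports, keywords=["ambush", "bridge"])
```

In a training context, the same pattern lets instructors flood trainees with realistic message traffic while the support tool surfaces the items a staff officer must actually act on.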

Leadership, Command, and Resilience in the Era of Algorithmic Military Training

The introduction of Artificial Intelligence (AI) and algorithmic systems into military training marks a fundamental shift in the approach to command and leadership. Traditionally, military command relied on centralised hierarchical models, where authority and decision-making were concentrated at higher echelons, with bureaucracy serving as a key regulator of processes and the flow of orders. The new algorithmic model promotes decentralised and flexible command, granting smaller-scale units and field commanders access to predictive data, automated decision-support tools, and continuous real-time feedback, thereby enhancing speed and adaptability in combat (Biggs, 2025, pp. 25-26).
The transition to algorithmic military training, however, faces significant challenges and resistance. According to the Swedish Defence Research Agency (FOI), traditional military and political hierarchies, which are rooted in values of discipline, strict control, and clearly defined roles, struggle to embrace innovations that reduce centralised oversight or shift critical decision-making processes to lower echelons. Embedded culture and bureaucratic processes can hinder rapid adoption of algorithmic tools, mainly due to concerns over role changes and transparency, even as the institutions that oversee authority, access, and accountability must still be satisfied. Additionally, the difficulty in interpreting complex algorithmic models heightens uncertainty, while the need for transparency and human oversight in critical decision situations makes full integration of such systems a challenging process. These systemic and cultural factors are considered equally critical to the success or failure of AI adoption in the military domain, alongside technical and legal obstacles (Svenmarck et al., 2024, pp. 1, 4-5, 9).
Furthermore, a key source of resistance is the human factor itself: ambiguity regarding the role of leaders, fear of replacement or substitution by machines, and the need to maintain personal influence and leadership identity can lead to conflicts and delays in adopting new practices. This resistance is compounded by concerns over diminished authority, as leaders grapple with trusting AI-driven insights versus traditional intuition. Cultural inertia within military structures, coupled with scepticism about AI reliability and ethical implications, further hinders integration. Effective change management, including targeted training and clear communication, is essential to align human expertise with algorithmic advancements, fostering acceptance and collaboration (Biggs, 2025, p. 29).
Successfully managing this change requires continuous training and sensitisation of leadership on technological and administrative issues, the creation of participatory decision-making frameworks that integrate human judgment with algorithmic tools and the gradual introduction of practices with clear benefits and support from both central and regional leadership. This involves fostering digital literacy among commanders, ensuring they understand AI capabilities and limitations. Collaborative platforms must be developed to balance algorithmic precision with human intuition, while pilot programmes can demonstrate tangible improvements in operational efficiency. Engaging stakeholders at all levels builds confidence and mitigates resistance, ensuring a smooth transition to AI-enhanced systems (Biggs, 2025, pp. 23-26, 30, 32, 34-35; UK MOD, 2022, pp. 17-18, 29-30, 33).
At the same time, there is a need to foster a culture of trust in new technologies, emphasising ethical use, performance monitoring, and ensuring full accountability, so that algorithmic command functions as a ‘partner’ to human leadership rather than a replacement. This requires transparent AI systems with traceable decision-making processes to build confidence. Regular ethical training, robust oversight mechanisms, and clear accountability protocols are essential to align AI with military values. By promoting collaboration between human expertise and algorithmic insights, militaries can enhance decision-making while preserving leadership integrity and trust (Burt, 2021, pp. 2, 4; Clement, 2024).

Case Studies: Global Applications and Successes

United States – US Army, Synthetic Training Environment (STE)

One of the most advanced examples of AI implementation in military training is the US Army’s STE. This is a unified architecture that integrates LVC elements, enabling the simulation of multiple scenarios in real time with a high degree of realism, analysis and personalisation.
The STE allows trainees to experiment with tactics, record and analyse each step, while the platform’s algorithm identifies errors, suggests improvements, and proposes personalised corrective and tactical skill-enhancing exercises. Thanks to its fully connected infrastructure, real-time collection of large-scale data (telemetry) is achieved, leading to trainee performance evaluations and rapid adjustments to training plans. The psychological fidelity and continuous feedback contribute to increased readiness for real operational conditions (Rozman, 2020).

United Kingdom – Digital Targeting Web (DTW) and ASGARD

The United Kingdom has laid the foundations for algorithmic training through the DTW programme and decision-making platforms such as ASGARD. The DTW connects sensors, tactical executors, and strategic entities via a central algorithmic core, enabling rapid data exchange, predictive decision support, and retrospective scenario analysis. In the ‘Spring Storm’ exercises, the AI engine accelerated the risk assessment process, minimising the time for critical military decisions, while simultaneously freeing human leadership for higher strategic judgments (Defence Science and Technology Laboratory, 2025).
Thanks to the ASGARD programme, military training in the UK acquires a clear human-machine structure, combining command intuition with algorithmic analysis and enabling multiple, high-tempo scenario iterations and analysis of adverse or ambiguous battle data.

NATO – Allied Interoperability and Trials

NATO leads international initiatives for implementing AI in joint training through organisations like DIANA and Allied Command Transformation (ACT), promoting principles of interoperability, ethics, and responsibility. Algorithmic team-fusion scenarios enable member states to apply common standards and systems, enhancing data security, and decision compatibility. A notable example is the NATO AUKUS AI trials, where human-centric and algorithmic teams were simultaneously evaluated for speed and depth of decision-making, leading to evidence-based recommendations for integrating AI into tactical learning cycles (Etl, 2023, pp. 495-496, 503-504).

Defence Industry – Examples: Thales AIMTTS, BAE, etc.

The penetration of AI into the defence industry is considered of critical importance. Indicatively, Thales, with its AI-based Mission Training and Tactical Solutions (AIMTTS) system, promotes real-time analyses for advanced individual shooting, while BAE Systems develops AI models for enhanced judgment in combat, quantifying performance under stress conditions and effectively analysing leadership indicators (GDT, n.d.).

Innovations to the PME structure

Integrating AI into PME involves mapping emerging technologies to operational realities across all educational levels. Strategically, AI decision-support aids senior planning and wargaming. Operationally, AI-powered LVC environments facilitate brigade and division exercises, simulating complex multi-domain operations. Tactically, adaptive modules and AI co-coaching provide individual feedback, analytics, and scenario-based training for small-unit leaders. Institutionally, AI tools assist curriculum design, assessments, and training needs detection.
This multi-level approach ensures AI is not introduced in isolation but is harnessed to maximise PME outcomes, cultivating agile, data-literate military professionals capable of leading in rapidly evolving operational environments. The interconnection of military training and industrial innovation production enables the rapid dissemination of results, ‘pilot-to-scale’ applications, and enhancement of overall operational and administrative readiness.

Security and Ethics Policies in AI-Driven Military Training

The transition to ‘algorithmic’ military training introduces new security and ethical challenges, as sensitive data, complex programs, and autonomous subsystems require stringent management and transparency. Protecting the integrity, confidentiality, and availability of data during AI applications is critical and is ensured through the use of modern cryptographic techniques and authorised auditing mechanisms.
AI-driven training programmes must ensure compliance with international security standards (e.g., ISO 27001) and regulations such as the General Data Protection Regulation (GDPR), especially when handling personal or operational data. The use of access management systems, audit logs, and threat monitoring ensures the prevention of leaks and malicious use while maintaining the integrity of the learning process.
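A schematic of the access-control-plus-audit-log pattern mentioned above follows. The roles, permissions, and in-memory log are placeholders for illustration; a real deployment would sit on an identity-and-access-management stack with tamper-evident storage, not a Python dict.

```python
import datetime

ROLE_PERMISSIONS = {                       # hypothetical role-based access policy
    "instructor": {"telemetry:read", "scenario:write"},
    "trainee":    {"scenario:read"},
}
audit_log: list[dict] = []                 # append-only trail for later review

def access(user: str, role: str, permission: str) -> bool:
    """Grant or deny a permission, logging every attempt for auditability."""
    allowed = permission in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({                     # denied attempts are logged too
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user, "role": role, "permission": permission, "granted": allowed,
    })
    return allowed
```

Logging denials as well as grants is what makes the trail useful for the leak-prevention and misuse-detection goals described above.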
Military authorities adopt policies ensuring that decisions supported or made by AI systems remain under human oversight, avoiding arbitrary or uncontrolled outcomes. Transparency in algorithm operations and traceability of AI system actions are critical to safeguarding accountability and ensuring trust.
Training and educating personnel on cybersecurity and AI ethics are key success factors. Programmes combining technical training with awareness of digital security, data protection, and ethical issues serve as a model for major military forces. Continuous knowledge updates and participation in cybersecurity exercises with integrated AI support (e.g., national-level exercises) enhance the resilience and effectiveness of forces.

Challenges, Limitations, and Critical Considerations

Despite the promise of AI-driven military education, challenges such as data security remain. Handling sensitive data – such as simulation logs, metrics, and scenarios – creates security risks if it is improperly accessed or leaked. Strict data governance and privacy-enhancing technologies are vital to prevent unauthorised exposure, manipulation, or breaches that could harm readiness and trust (UK MOD, 2022, pp. 21, 24-26, 36, 47-48).
Another persistent limitation is instructor and organisational readiness. Successful AI integration requires not only technical infrastructure but also a culture shift among instructors and staff, who may mistrust black-box algorithms or feel their expertise is threatened. Resistance can come from concerns about deskilling, reduced discretion, or data-driven monitoring. Re-skilling, transparent training, and emphasising human judgment are essential to reduce these risks and foster acceptance (Biggs, 2025, p. 34).
Cost and infrastructure are barriers to equitable AI adoption in PME. Developing and maintaining advanced training platforms require significant investment, which can worsen disparities for less-resourced militaries. To foster inclusion, efforts should focus on scaling solutions, pooling resources, or using multinational partnerships and open architectures.
Ethical considerations are essential, such as obtaining trainee consent, protecting data privacy, and addressing bias to ensure trust and fairness. There is a risk of becoming overly dependent on algorithms, which might overshadow human judgment. Therefore, it is crucial to balance AI insights with human oversight, emphasising transparency, and accountability for responsible implementation.

Methodologies for Designing AI-Driven Training Programs – Technological Architectures and Implementation Platforms

Programme Design with Detailed Data (Data Lakes/Reports/Analytics)

The first rule governing project effectiveness is prioritising results: clearly defining operational advantages, such as the speed of the OODA loop (Observe, Orient, Decide, Act) and the quality of courses of action (COA), and linking them to measurable indicators such as telemetry, after-action reviews (AAR), and scenario performance measurements. Statements by NATO and the international community on the responsible use of resources emphasise the importance of documentation, traceability, and human oversight, which must be integrated into the learning cycle (NDC PAO, 2025). Learning telemetry involves collecting real-time performance data, such as decision-making times, errors, and compliance with Standard Operating Procedures/Rules of Engagement (SOP/ROE). Providing explanatory feedback to instructors and trainees is critical to understanding mission-based training. Best practices for the proper use of artificial intelligence in the classroom will provide valuable implementation guides (Biggs, 2025, pp. 23, 30, 34).

Adaptive and Personalised Learning

Artificial intelligence can significantly improve military training by tailoring instruction to the needs of each soldier. By monitoring performance in real time, AI systems can adjust the difficulty of tasks, identify knowledge gaps by providing targeted feedback, and suggest alternative learning resources. This adaptive approach – similar to successful personalised learning with artificial intelligence in basic education – ensures more effective skill acquisition and moves military training beyond a one-size-fits-all model (Stensrud, 2024).
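The learning-telemetry idea discussed above can be sketched as a simple event recorder that aggregates decision times and SOP-compliance flags per trainee. The field names and summary metrics are illustrative assumptions, not a documented telemetry schema.

```python
from statistics import mean

class TelemetryRecorder:
    """Collects per-decision events and summarises them for instructor feedback."""

    def __init__(self):
        self.events: list[dict] = []

    def log(self, trainee: str, decision_time_s: float, sop_compliant: bool) -> None:
        self.events.append({"trainee": trainee,
                            "decision_time_s": decision_time_s,
                            "sop_compliant": sop_compliant})

    def summary(self, trainee: str) -> dict:
        """Roll one trainee's events up into the measurable indicators
        (decision speed, SOP/ROE compliance) named in the text."""
        mine = [e for e in self.events if e["trainee"] == trainee]
        return {
            "decisions": len(mine),
            "mean_decision_time_s": round(mean(e["decision_time_s"] for e in mine), 2),
            "sop_compliance_rate": sum(e["sop_compliant"] for e in mine) / len(mine),
        }
```

Summaries like this are the raw material for the explanatory feedback to instructors and trainees that the paragraph above calls critical.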
Although the Policy Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy (US Department of State Bureau of Arms Control, Deterrence, and Stability, 2021) does not directly refer to adaptive or personalised learning, it touches on related ideas through its emphasis on self-learning and continuous updating of artificial intelligence systems. Like adaptive learning technologies that adjust their behaviour based on new data or user interactions, some military artificial intelligence systems may also evolve after their development. The declaration’s requirements for rigorous testing, monitoring, transparency, and safeguards serve a similar purpose: to ensure that as these learning systems adapt, they remain safe, reliable, and aligned with human oversight. In both areas, the key challenge is managing how AI systems change over time while preventing bias, errors, or unintended behaviour. It can therefore be concluded that the AI Co-Trainer will rely on Retrieval-Augmented Generation (RAG) grounded in approved data sets – SOPs; Tactics, Techniques, and Procedures (TTP); After Action Reviews (AAR) – for micro-interventions and interpretable instructions, accompanied by policies for responsible use and decision recording.

Assessment with Explainable Rubrics

In the military classroom, artificial intelligence is increasingly used to generate scenarios, simulations, and tailored learning materials. To ensure trust and integrity, each AI recommendation must be accompanied by a clear rationale. eXplainable Artificial Intelligence (XAI) rubrics mean that each system recommendation is accompanied by a ‘why’ (indicative evidence), enabling trainees to reflect and instructors to verify. This strengthens academic integrity and reduces the risks of ‘hallucinations’ (Lacey, 2025).
This rubric-based approach aligns with the emphasis on transparency, oversight, and critical reflection. It ensures that AI serves as a partner in learning rather than a black box, empowering trainees to interrogate outputs and instructors to safeguard against bias or misinformation. A sample of the above criteria could be included in the following template:

Category       | Description                                                                                        | Scoring Criteria (1–5)
Recommendation | The AI system’s output (decision, suggestion, or generated content)                                | 1 = vague; 5 = precise and actionable
Why / Evidence | Clear rationale or supporting evidence (citations, data, doctrine, peer-reviewed source)           | 1 = absent; 5 = well-supported and verifiable
Reflection     | Trainee’s analysis of the recommendation and rationale (critical thinking, questioning assumptions) | 1 = superficial; 5 = deep and insightful
Verification   | Instructor’s evaluation of accuracy, alignment with standards, and absence of bias                 | 1 = unchecked; 5 = fully validated
Integrity      | Degree to which the process strengthens academic rigour and reduces hallucinations                 | 1 = compromised; 5 = fully upheld
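The rubric above lends itself to a small data structure for recording and aggregating grades. The equal weighting across the five categories is an assumption made for the sketch, not something taken from the source.

```python
from dataclasses import dataclass

CATEGORIES = ("recommendation", "evidence", "reflection", "verification", "integrity")

@dataclass
class XAIRubricScore:
    """One graded AI recommendation, scored 1-5 on each rubric category."""
    scores: dict  # category -> score in 1..5

    def validate(self) -> None:
        for cat in CATEGORIES:
            if not 1 <= self.scores.get(cat, 0) <= 5:
                raise ValueError(f"missing or out-of-range score for {cat!r}")

    def overall(self) -> float:
        """Equal-weight mean across categories (an illustrative choice)."""
        self.validate()
        return sum(self.scores[c] for c in CATEGORIES) / len(CATEGORIES)
```

Keeping each category score separate, rather than collapsing to a single grade at entry time, preserves the rubric’s point: a precise recommendation with absent evidence should remain visibly deficient.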
In addition, Automated After Action Review (A2AR) processes include the automatic extraction of inflection points, comparison with SOPs and Rules of Engagement (ROE), and the creation of role-specific exercises. These concepts could become ambitious future goals in a project such as the USMC's Project Tripoli within the Live, Virtual, Constructive – Training Environment (LVC-TE), and they illustrate how an iterative, scalable, unified, all-domain training environment can become a reality: strong analytics and AAR tools help units adapt faster, refine tactics, and experiment with new concepts, boosting readiness and supporting future force design (US Marine Corps, n.d.).
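One step of such an A2AR pipeline, flagging the inflection points where a logged action sequence departs from an SOP checklist, can be sketched as follows. The event names, the SOP itself, and the flat-log format are invented for illustration; real telemetry would be far richer.

```python
# Illustrative A2AR step: compare a training log against an SOP
# sequence and flag divergences ("inflection points") for review.
# The SOP entries and event names are hypothetical examples.

SOP_SEQUENCE = ["report_contact", "take_cover", "return_fire", "request_support"]

def find_inflection_points(log):
    """Return (step, expected, observed) tuples where the log departs from SOP."""
    deviations = []
    for step, expected in enumerate(SOP_SEQUENCE):
        observed = log[step] if step < len(log) else "<missing>"
        if observed != expected:
            deviations.append((step, expected, observed))
    return deviations

log = ["report_contact", "return_fire", "take_cover", "request_support"]
for step, expected, observed in find_inflection_points(log):
    print(f"step {step}: expected {expected}, observed {observed}")
```

Each flagged tuple could then seed a role-specific drill, which is the "creation of role-specific exercises" stage described above.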

Technological Architecture and Digital Infrastructure

The concepts of data strategy and governance include defining a data model, categorising data, controlling access, protecting privacy, and ensuring compliance with ethical and responsible-use principles. European initiatives emphasise the critical role of data governance in the development and use of artificial intelligence (European Union, 2022; European Union, 2024a), while NATO's digital transformation implementation strategy has for years been moving toward strategic results and deliverables, such as the Alliance's Digital Initiatives, Data-Centric Governance, and a Digitally Ready Workforce, adopting Business Intelligence (BI) and AI capabilities across all areas of its activities (NATO, 2024).
As mentioned earlier, integrating the LVC-TE with common interoperability standards will connect multiple units and levels to achieve 'training at the point of need' (US Marine Corps, n.d.). This implies substantial investment in emerging and disruptive Communication and Information Systems (CIS) and technologies.
It is also worth noting that cutting-edge communication and information technologies developed in purpose-built infrastructure under military supervision (on-premises cloud and edge computing) can provide the scalable computing power that artificial intelligence models require. These include hybrid processing combined with low-latency data transmission to training facilities and ranges, enabling secure centralised analysis. Projects providing cloud services at Secret and Above Secret classification levels have already been implemented. Effective methods for transferring data between classification levels are essential for the operational use of artificial intelligence, which often relies on sensitive data. Access to scalable computing power, including cloud and edge computing architectures, is recognised as a key factor in the adoption of artificial intelligence in the defence sector and, by extension, in the provision of training. For example, the UK Ministry of Defence's artificial intelligence strategy invests in the 'Digital Backbone' and the 'AI Skills Framework' as foundations for the further development and utilisation of AI services (UK Ministry of Defence, 2022, pp. 6, 17, 25).
Development-verification-deployment cycles (Validation and Verification (V&V)), model resilience testing, adversarial testing, and continuous monitoring with security and performance indicators ensure the stable development of artificial intelligence for military purposes such as professional military training. The legal basis for the corresponding Machine Learning Operations (MLOps) and Development Security Operations (DevSecOps) practices is documented in the official texts of the EU AI Act (European Union, 2024b). The EU AI Act compliance mapping is described below:

Practice | EU AI Act Reference | Obligation

Validation and Verification (V&V) | Articles 43–51 (Conformity Assessment) | High-risk AI systems must undergo conformity assessments to verify compliance with standards and validate reliable performance.
Model Resilience Testing | Article 15 (Accuracy, Robustness, Cybersecurity) | Systems must be tested for robustness and accuracy, ensuring resilience against errors and unexpected conditions.
Adversarial Testing | Article 15(2) | AI systems must be protected against manipulation and adversarial attacks, with defences documented.
Continuous Monitoring with KPIs | Article 89 (Post-Market Monitoring) | Providers must continuously monitor performance and security metrics, detect risks, and report incidents to authorities.
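A mapping like the one in the table is also useful in machine-readable form, for example as a deployment gate in an MLOps pipeline that refuses release until every practice is evidenced. The sketch below simply restates the table; the dictionary keys, field names, and the `gate` helper are our own illustrative choices.

```python
# The EU AI Act compliance mapping from the table above, restated
# as a machine-readable structure. Keys and field names are
# illustrative; the article references restate the table as given.

EU_AI_ACT_MAPPING = {
    "validation_and_verification": {
        "reference": "Articles 43-51 (Conformity Assessment)",
        "obligation": "Conformity assessment validating reliable performance.",
    },
    "model_resilience_testing": {
        "reference": "Article 15 (Accuracy, Robustness, Cybersecurity)",
        "obligation": "Robustness and accuracy testing against errors.",
    },
    "adversarial_testing": {
        "reference": "Article 15(2)",
        "obligation": "Protection against manipulation, defences documented.",
    },
    "continuous_monitoring": {
        "reference": "Article 89 (Post-Market Monitoring)",
        "obligation": "Monitor performance/security metrics, report incidents.",
    },
}

def gate(completed):
    """Return the practices still outstanding before deployment."""
    return sorted(set(EU_AI_ACT_MAPPING) - set(completed))

print(gate({"validation_and_verification", "adversarial_testing"}))
# ['continuous_monitoring', 'model_resilience_testing']
```

Encoding the mapping once means the same source of truth can drive both documentation and automated release checks.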

Scenario Design and ‘Smart’ Opponents

AI-driven Opposing Force (OPFOR) systems learn from their experiences, gradually increasing complexity and introducing rare threats. Industry and research describe practices for integrating such intelligent systems into more demanding training (Stensrud, 2024).
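A toy version of this difficulty-scaling idea is sketched below: an OPFOR controller raises scenario complexity as the trainee win rate climbs, lowers it when trainees struggle, and occasionally injects a rare threat. The thresholds, difficulty scale, and threat labels are all illustrative assumptions, not a description of any fielded system.

```python
import random

# Toy adaptive OPFOR controller: difficulty rises with trainee win
# rate, and rare threats are injected with small probability.
# Thresholds, the 1-5 scale, and threat names are illustrative.

class AdaptiveOPFOR:
    def __init__(self, rare_threat_prob: float = 0.05, seed: int = 0):
        self.difficulty = 1                     # 1 (basic) .. 5 (most complex)
        self.rare_threat_prob = rare_threat_prob
        self._rng = random.Random(seed)         # seeded for reproducible drills

    def update(self, trainee_win_rate: float) -> None:
        """Scale difficulty up when trainees dominate, down when they struggle."""
        if trainee_win_rate > 0.7 and self.difficulty < 5:
            self.difficulty += 1
        elif trainee_win_rate < 0.3 and self.difficulty > 1:
            self.difficulty -= 1

    def next_scenario(self) -> str:
        """Pick the next scenario, occasionally injecting a rare threat."""
        if self._rng.random() < self.rare_threat_prob:
            return f"level-{self.difficulty} + rare threat (e.g. GPS jamming)"
        return f"level-{self.difficulty} standard OPFOR"

opfor = AdaptiveOPFOR(seed=42)
for win_rate in (0.8, 0.9, 0.5, 0.2):
    opfor.update(win_rate)
    print(opfor.next_scenario())
```

Even this simple loop captures the training principle at stake: the opponent stays just beyond the trainees' current level, while rare events keep them exposed to low-frequency, high-impact threats.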
Mission Rehearsal Twin software applies digital twin simulation to combine 3D terrain analysis, logistics, and Command and Control (C2) for rapid war-gaming and quick identification of weaknesses before 'live' operations. NATO is exploring the use of Mission Rehearsal Twins – digital twin technologies combined with modelling and simulation (M&S) – to improve operational readiness, interoperability, and mission rehearsal in exercises such as the Coalition Warrior Interoperability Exercise (CWIX) across multinational forces, while also addressing challenges in data integration, cybersecurity, scalability, and trust in simulation (Erickson, Pullen, and Ruth, 2024; Kapteyn and Willcox, 2021).

Policies, Ethics and the Human Factor

Responsible Artificial Intelligence (RAI) by design requires embedding ethical, legal, and operational safeguards into AI systems from the outset. It involves thorough risk analysis to mitigate bias, misuse, and unintended consequences, while maintaining human oversight through accountability measures, intervention procedures, and fail-safes. NATO ensures international alignment by adhering to frameworks such as the Organisation for Economic Co-operation and Development (OECD) AI Principles, UNESCO's Ethics Recommendation, and the EU AI Act, promoting transparency, fairness, and democratic values. Finally, RAI is applied through operational integration, embedding responsible practices into mission planning, digital twin simulations, autonomous systems, and cyber defence to guarantee trustworthy and effective use in real-world operations (NDC PAO, 2025).
The Training the Trainers process is vital for equipping leaders and instructors to ensure the responsible and effective use of AI in defence and security. As Syed (2025) highlights, this requires structured skills programmes that encompass prompting techniques, explainable AI (XAI) for transparency, data literacy to support informed decision-making, and a strong grasp of the legal frameworks governing AI. Beyond classroom learning, these programmes employ a range of methodologies, from low-fidelity scenario-based exercises to high-fidelity war game simulations that replicate complex, multi-domain operations. This layered approach provides trainers with both technical expertise and practical experience, enabling them to instil trust, accountability, and operational readiness in the personnel they guide.
Implementation Plan (12 Months)

Months 0–4: Establish robust data governance policies, develop the foundational RAG system, and define clear guidelines for AI usage in military training contexts. Launch pilot projects for AI co-coach systems in controlled, unclassified settings.
Months 4–8: Integrate LVC synthetic environments with AI-augmented training tools. Implement A2AR pipelines and develop XAI rubrics to ensure transparency in AI recommendations. Conduct training workshops for instructors and personnel on AI ethics, tools, and best practices.
Months 8–12: Deploy advanced mission rehearsal twins and sophisticated adversarial OPFOR AI agents to create dynamic, challenging training scenarios. Initiate external evaluation and certification processes aligned with allied standards to validate system effectiveness and compliance.

Conclusions

Military training is undergoing a profound transformation with the shift from traditional analogue models to AI-enhanced algorithmic frameworks. This digital revolution enables the creation of more realistic, adaptive, and data-driven training environments that better address the complexity and dynamism of modern threats. Through groundbreaking technologies such as LVC environments, large-scale telemetry databases, and machine learning-based analytics tools, units and leaders can accelerate the learning cycle and enhance the effectiveness of decision-making processes.
At the same time, the integration of AI demands a rigorous framework of ethics, governance, and security, as outlined by the allied strategies of NATO, the EU, the United Kingdom, and the United States. Traceability, bias mitigation, and human oversight systems are essential to ensure AI serves as a supportive tool rather than a replacement for human judgment. Aligning these parameters ensures transparency, trust and interoperability among allies.
Finally, international case studies demonstrate that AI in training programmes delivers multidimensional benefits: it reduces training time, enhances psychological fidelity, and enables personalised learning. The future of military training relies on effective human-machine collaboration, creating a hybrid model that transforms knowledge, judgment, and action to achieve the highest levels of operational readiness and leadership.
AI Statement: In this work, the authors used a combination of Adobe Reader AI, Microsoft Copilot, and ChatGPT to extract summaries from source documents and books, as well as DeepL and Grammarly for language improvement and translations.
 
ARL Public Affairs. (2018) ‘Artificial Intelligence Helps Soldiers Learn Faster’. US Army, 30 April. Available at: https://www.army.mil/article/204473/artificial_intelligence_helps_soldiers_learn_faster (Accessed: October 2025).
 
Biggs, Adam T. (2025) ‘Enhancing Professional Military Education with AI’, Journal of Military Learning, 9(2), pp. 22–37. Available at: https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/JML-April-2025/Enhancing-pme-with-ai/ (Accessed: October 2025).
 
Broo, Didem Gürdür. (2023) ‘AI Design’. IEEE Spectrum, 14 October. Available at: https://spectrum.ieee.org/ai-design (Accessed: October 2025).
 
Burt, Peter. (2021) ‘NATO’s New AI Strategy: Lacking in Substance and Lacking in Leadership’, NATO Watch, 8 November. Available at: https://natowatch.org/default/2021/natos-new-ai-strategy-lacking-substance-and-lacking-leadership .
 
CDAO. (2023) 2023 Data, Analytics, and Artificial Intelligence Adoption Strategy. Washington, DC: Chief Digital and Artificial Intelligence Office, US Department of Defense.
 
Clement, Sven (2024) NATO and Artificial Intelligence: Navigating the Challenges and Opportunities. Luxembourg: NATO Parliamentary Assembly, Science and Technology Committee.
 
Cummings, Mary L. (2017) ‘Artificial Intelligence and the Future of Warfare’. [Research Paper] London, England: Royal Institute of International Affairs (Chatham House).
 
DARPA. (2020) ‘Deep Exploration and Filtering of Text’. Defense Advanced Research Projects Agency. Available at: https://www.darpa.mil/research/programs/deep-exploration-and-filtering-of-text . (Accessed: October 2025).
 
Defence Science and Technology Laboratory (2025) ‘Building the Digital Targeting Web: Case Study’, 10 September. GOV.UK. Available at: https://www.gov.uk/government/case-studies/building-the-digital-targeting-web . (Accessed: October 2025).
 
Erickson, J., Pullen, J.M., and Ruth, J. (2024) ‘Expanding M&S-Based Mission Rehearsal in CWIX’. NATO Science and Technology Organisation. Available at: https://www.sto.nato.int/document/expanding-ms-based-mission-rehearsal-in-cwix/ . (Accessed: November 2025).
 
Etl, Alex. (2023) ‘The Impact of AI on NATO Member States’ Strategic Thinking’, Strategies XXI: International Scientific Conference, 18(1), pp. 493–505. DOI: https://doi.org/10.53477/2971-8813-22-57 .
 
European Commission. (2025) ‘Shaping Europe’s Digital Future’. Available at: https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence . (Accessed: September 2025).
 
European Union. (2022) ‘Regulation (EU) 2022/868 on European Data Governance (Data Governance Act)’, 30 May. Available at: https://eur-lex.europa.eu/eli/reg/2022/868/oj . (Accessed: November 2025).
 
European Union. (2024a) ‘Artificial Intelligence Act’. Available at: https://artificialintelligenceact.eu/ai-act-explorer/ . (Accessed: November 2025).
 
European Union. (2024b) ‘Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’, 13 June. Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj . (Accessed: November 2025).
 
GDT. (n.d.) ‘Artificial Intelligence in the Defence Industry’. Global Defence Technology (GlobalData). Available at: https://defence.nridigital.com/global_defence_technology_aug23/case_studies_artificial_intelligence_defence_industry. (Accessed: October 2025).
 
Groombridge, David. (2022) ‘Gartner Top 10 Strategic Technology Trends for 2023’, 17 October. Gartner. Available at: https://www.gartner.com/en/articles/gartner-top-10-strategic-technology-trends-for-2023 . (Accessed: October 2025).
 
Kapteyn, Michael and Willcox, Karen E. (2021) ‘Predictive Digital Twins as a Foundation for Improved Mission Readiness’. NATO Science and Technology Organisation. Available at: https://www.sto.nato.int/document/predictive-digital-twins-as-a-foundation-for-improved-mission-readiness/. (Accessed: November 2025).
 
Lacey, James. (2025) ‘Peering into the Future of Artificial Intelligence in the Military Classroom’, 3 April. War on the Rocks. Available at: https://warontherocks.com/2025/04/peering-into-the-future-of-artificial-intelligence-in-the-military-classroom/ . (Accessed: October 2025).
 
Martin, Florence, Chen, Yan, Moore, Robert L., and Westine, Carl D. (2020) ‘Systematic Review of Adaptive Learning Research Designs, Context, Strategies, and Technologies from 2009 to 2018’, Educational Technology Research and Development, 68, pp. 1903–1929. DOI:10.1007/s11423-020-09793-2.
 
NATO. (2024) ‘NATO’s Digital Transformation Implementation Strategy’, 17 October. Available at: https://www.nato.int/en/about-us/official-texts-and-resources/official-texts/2024/10/17/natos-digital-transformation-implementation-strategy. (Accessed: November 2025).
 
NDC PAO (2025) ‘Conference of Commandants 2025’, 20 May. NATO Defense College. Available at: https://www.ndc.nato.int/coc-2025/ . (Accessed: October 2025).
 
Roff, Heather M. (2024) ‘AI, Military Ethics, and Being Alchemists of Meaning’. AI and Equality Podcast, 27 June. Carnegie Council. Available at: https://carnegiecouncil.org/media/series/aiei/ai-military-ethics-heather-roff. (Accessed: October 2025).
 
Rozman, Jeremiah. (2020) ‘Synthetic Training Environment’, 10 December. Association of the United States Army. Available at: https://www.ausa.org/publications/synthetic-training-environment . (Accessed: September 2025).
 
Stensrud, Brian. (2024) ‘AI Innovation Set to Revolutionise Military Training Landscape’, 16 October. Shephard Media. Available at: https://www.shephardmedia.com/news/training-simulation/sponsored-ai-innovation-set-to-revolutionise-military-training-landscape/ . (Accessed: October 2025).
 
Svenmarck, Peter, Luotsinen, Linus, Nilsson, Mattias, and Schubert, Johan. (2024) ‘Possibilities and Challenges for Artificial Intelligence in Military Applications’. Stockholm, Sweden: Swedish Defence Research Agency (FOI) for NATO.
 
Syed, Najeeb Ahmad (2025) ‘Theory vs Practice’, 17 July. War Room. Available at: https://warroom.armywarcollege.edu/articles/theory-vs-practice/ . (Accessed: October 2025).
 
UK Ministry of Defence (2022) ‘Defence Artificial Intelligence Strategy’. London, England: UK Ministry of Defence.
 
US Department of State, Bureau of Arms Control, Deterrence and Stability (2021) ‘Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy’, 20 January. Available at: https://2021-2025.state.gov/political-declaration-on-responsible-military-use-of-artificial-intelligence-and-autonomy/ . (Accessed: October 2025).
 
US Marine Corps (n.d.) ‘Live, Virtual, and Constructive Training Environment’. Available at: https://www.tecom.marines.mil/Units/Divisions/Range-and-Training-Programs-Division/LVC-TE/ . (Accessed: October 2025).
Copyright

Open Access. This work is licensed under the Creative Commons Attribution 4.0 International License.

Keywords
Professional Military Education; Artificial Intelligence; AI Integration; AI Military Training


Journal on Baltic Security

  • Online ISSN: 2382-9230
  • Print ISSN: 2382-9222
  • Copyright © 2021 Baltic Defence College
