Editorial Introduction: Artificial Intelligence in Professional Military Education: Patterns for Human-AI Collaboration
Volume 11, Issue 2 (2025), pp. 1–11
William "Bill" Combes

https://doi.org/10.57767/jobs_2025_008
Pub. online: 29 December 2025      Type: Editorial      Open Access


Abstract

Professional Military Education faces a fundamental challenge: preparing leaders to collaborate effectively with AI capabilities that evolve faster than curriculum cycles, in operational contexts still being defined, against adversaries who conceptualise AI through different strategic logics. This editorial introduction frames the Journal on Baltic Security special issue on Artificial Intelligence in Professional Military Education, presenting a distinction between command functions (requiring human judgment) and control functions (amenable to AI augmentation) that emerged from collaborative exploration among Baltic-Nordic defence organisations and NATO institutions. Three patterns of effective human-AI collaboration – Strategic Sense-Making, Ethical Responsibility, and Adaptive Command – provide the conceptual thread connecting six diverse contributions spanning adversary paradigm analysis, technology survey, NATO cognitive infrastructure, gender and inclusion, practitioner reflection, and empirical framework development. The introduction identifies implementation gaps requiring continued attention and points toward future collaborative work addressing practical solutions for PME institutions navigating AI integration.

The Challenge Before Us

Artificial Intelligence (AI) is transforming how militaries operate, decide, and fight. The integration of AI into command-and-control systems, intelligence analysis, autonomous platforms, and decision-support tools proceeds rapidly across NATO and among potential adversaries. Yet Professional Military Education (PME) faces a fundamental challenge: preparing leaders to collaborate effectively with AI capabilities that evolve faster than curriculum cycles, in operational contexts still being defined, against adversaries who conceptualise AI through different strategic logics.
This challenge is not theoretical. Institutions across the Alliance are actively implementing AI and digital transformation initiatives – developing faculty capabilities, revising curricula, experimenting with new tools, navigating policy frameworks that struggle to keep pace with technological change. The Baltic Defence College’s own efforts, guided by its Digital Transformation and Artificial Intelligence Working Group, revealed early on that no institution can address this challenge alone. The complexity exceeds what isolated efforts can manage; the lessons emerging from early adoption need mechanisms for sharing across institutional boundaries.
Recognition that collaboration was necessary shaped two initiatives. In February 2025, the Baltic Defence College convened a workshop on AI in PME bringing together representatives from regional ministries of defence, military academies, and NATO institutions to explore AI integration challenges and share emerging practices. The call for papers for this special issue invited contributions from diverse institutions – faculty, technical teams, institutional leaders, researchers at various stages of AI adoption – to document lessons, frameworks, and critical perspectives that could inform the broader PME community.
The contributions gathered here represent that diversity: Finnish scholars analysing Russian and Chinese military thought in original languages, Greek practitioners mapping the technological landscape, a Norwegian researcher examining NATO’s cognitive infrastructure, Turkish institutional case studies on inclusion and ethics, a practitioner navigating her AI scepticism through a civilian educator’s guide to AI collaboration, and empirical findings from the AI in PME collaboration. What unites these contributions is a shared concern: ensuring that as AI transforms military operations, PME transforms leaders’ capacity to exercise human judgment where it matters most.

Command, Control, and Collaboration

A distinction that emerged from practitioner exploration at the BALTDEFCOL workshop provides the conceptual thread connecting these diverse contributions: the difference between command functions and control functions in AI-enabled operations.
Command encompasses functions requiring human judgment – leadership, authority, responsibility, strategic decision-making. These remain essentially human regardless of technological advancement. No algorithm assumes accountability for operational consequences. No AI system provides the moral authority that legitimises the use of force. Command requires understanding context, weighing competing values, accepting responsibility for uncertainty – capacities that remain distinctively human even as AI capabilities expand.
Control encompasses functions where AI augmentation enhances speed, scale, and precision – communication, information processing, coordination, environmental monitoring. Here AI excels: processing sensor data at volumes humans cannot manage, identifying patterns across disparate sources, maintaining situational awareness across extended operations, coordinating complex logistics. The augmentation of control functions extends what human commanders can perceive, coordinate, and direct.
This distinction – command remains human, control can be AI-enabled – provides clarity for technology adoption decisions, training design, and leader development priorities. It suggests that PME must develop both: the AI literacy required to leverage enhanced control functions, and the judgment, ethics, and strategic thinking that effective command demands.
Three patterns of effective human-AI collaboration emerged from the workshop exploration, each reflecting this command-control distinction:
Strategic Sense-Making pairs AI pattern recognition with human contextual judgment. AI systems identify patterns across vast information – intelligence reports, sensor data, open-source analysis – at speeds and scales humans cannot match. Humans provide the contextual judgment that determines operational significance: what these patterns mean within specific strategic circumstances, historical precedents, and political constraints.
Ethical Responsibility pairs AI option generation with human ethical evaluation. AI systems rapidly generate courses of action, alternative phrasings, scenario variations. Humans evaluate each against ethical frameworks, professional standards, and institutional values – accepting, modifying, or rejecting based on judgment that algorithms cannot replicate.
Adaptive Command pairs human primacy in objective-setting with AI environmental sensing. Humans establish what must be achieved and why; AI systems monitor the environment for changes relevant to those objectives, alerting commanders to developments requiring attention while the fundamental direction remains under human authority.
Readers will encounter these patterns operating – sometimes explicitly, sometimes implicitly – across the contributions that follow. The final article develops the full theoretical framework; the preceding contributions demonstrate the patterns in application.

The Contributions

Mapping the Threat Landscape
The issue opens with Puranen and Kukkola's 'A Eurasian Paradigm of Intelligent Warfare? How China and Russia Perceive AI’s Impact on Military Power'. Through analysis of over 180 Russian military articles and extensive Chinese sources in original languages, the authors map the conceptual terrain that PME must help officers navigate.
Their central finding challenges assumptions about a monolithic adversary approach: no unified Eurasian paradigm exists despite deepening Sino-Russian partnership. Russian military thinking emphasises information superiority and defensive sovereignty within a 2030 horizon, viewing AI as enhancing existing systems rather than transforming warfare’s fundamental character. Chinese thinking envisions transformative 'intelligentisation' – human-machine fusion, organisational revolution, cognitive domain expansion – toward 2050 and beyond.
For PME, this divergence demands comparative paradigm analysis. Officers must understand that adversaries conceptualise AI through different logics shaped by strategic culture, historical experience, and geopolitical position. The analytical work itself demonstrates Strategic Sense-Making: synthesising multilingual military literature requires pattern recognition at scale, but determining what those patterns mean for NATO strategy requires human contextual judgment that understands why Russian defensive framing differs from Chinese transformative ambition. This divergence extends to ethical considerations – a dimension NATO cannot abandon regardless of whether adversaries share it.
Surveying the Technological Landscape
From adversary concepts, the issue moves to available tools. Karabelias, Zafeiris, and Chontos offer 'From the Analogue Warrior to the Algorithmic Strategist: The Evolution of Military Training through Artificial Intelligence'. This comprehensive survey maps the AI application landscape for military training: adaptive learning systems that personalise instruction, synthetic environments enabling realistic simulation, decision-support tools accelerating staff processes, generative AI creating dynamic scenarios.
Drawing on frameworks from NATO, the United Kingdom, the United States, and the European Union, the authors catalogue what exists and what responsible adoption requires – ethical governance, security protocols, human oversight mechanisms. The survey serves readers at different starting points: newcomers gain orientation to the field; experienced practitioners gain a checklist against which to evaluate institutional readiness.
Like the threat analysis, this mapping exercise demonstrates Strategic Sense-Making. Synthesising dispersed technical literature, policy documents, and case studies into a coherent overview requires processing information at scale. Evaluating which applications merit institutional investment – given operational requirements, resource constraints, and organisational culture – requires human judgment that no catalogue can provide.
Building Cognitive Infrastructure
If Puranen and Kukkola map the threat and Karabelias and colleagues map the tools, Bendiksen addresses how NATO’s human architecture enables coherent response. 'Human Interoperability through PME: Standardised Mental Models as Enablers for Data-Centric Warfighting' argues that shared heuristics taught through Professional Military Education – ends-ways-means reasoning, fighting power frameworks, interoperability models – function as cognitive connectors enabling officers from different nations and institutions to think together effectively.
This reframes PME as NATO’s 'cognitive infrastructure', essential to rather than separate from technological transformation. Technical interoperability can be standardised through equipment and protocols. Procedural interoperability can be aligned through doctrine. But human interoperability – the trust, shared understanding, and compatible mental models that enable multinational forces to act as one – must be cultivated through education.
The paper establishes the cognitive foundation that Adaptive Command requires. As AI transforms the information environment, shared mental models ensure officers retain the conceptual primacy needed to set objectives and interpret AI-generated information. Officers educated in common heuristics share compatible frameworks for that interpretation, enabling collective sense-making even as the volume and velocity of data exceed what any individual could process. The heuristics are portable cognitive tools that keep humans in command even as control functions transform.
Examining Ethical Dimensions
The issue then pauses for Kuloğlu and Koçanli to ask a necessary question: who is included or excluded when military education digitises? 'Gender Equality in AI-Supported Military Education: Literature Insights and Evidence from Turkish Institutions' examines the intersection of algorithmic bias and institutional inequality through case studies of three Turkish military educational institutions.
The authors reveal a dual gap: minimal AI adoption coincides with absent gender perspectives. This could enable thoughtful, inclusive implementation – a blank slate for getting it right from the start. Or it could risk embedding historical biases at scale if institutions adopt AI tools without awareness of how algorithms trained on male-dominated military data may perpetuate exclusion.
The paper’s insight resonates beyond the Turkish context: algorithmic bias reflects institutional inequalities, not just technical problems. Institutions that struggle with human-centred inclusion policies will struggle with the cognitive complexity of evaluating AI outputs for bias. The paper identifies the need for practical frameworks to guide gender-sensitive AI implementation and argues for developing systematic approaches to prevent algorithmic bias in military education systems across NATO.
This contribution illustrates why Ethical Responsibility is essential. By identifying a gap in Turkish institutions – minimal AI adoption coinciding with absent gender frameworks – the authors argue that ethical evaluation must accompany AI adoption from the outset, not follow as an afterthought. AI systems generate training scenarios, assessment algorithms, and educational content. Humans must evaluate these outputs against ethical frameworks – asking whose experiences shaped the training data, whose perspectives inform the algorithms, whether technological futures will be more or less inclusive than the past. The paper also demonstrates Strategic Sense-Making: the literature synthesis required pattern recognition across dispersed gender and AI scholarship, but determining implications for specific institutional contexts required human judgment informed by direct observation.
Practitioner Reflection
Before the theoretical synthesis, the issue offers practitioner reflection. Armstrong's review of Ethan Mollick's Co-Intelligence: Living and Working with AI models the intellectual journey many PME professionals are navigating – engaging thoughtfully with AI’s potential while maintaining justified scepticism about rushed or haphazard integration.
Armstrong's critical stance enriches the issue. She challenges Mollick’s framing of AI scepticism as rooted in fear of replacement, arguing instead that genuine sceptics are motivated by concerns for safety, security, sustainability, and equity during this transformational period. Her central critique: Mollick’s optimism about experimentation and iterative learning sits uneasily with military contexts where ultimate stakes are higher and institutional cultures may resist the systematic experimentation his approach requires. Drawing on her experience as BALTDEFCOL faculty navigating AI integration, Armstrong observes that there is not yet a final answer for risk mitigation as institutions “fumble” – even systematically – through rapid technological expansion.
Yet her willingness to let the book challenge her assumptions – acknowledging that reading Co-Intelligence shifted her thinking even without converting her to AI enthusiasm – demonstrates the adaptive learning this issue advocates. The review embodies Ethical Responsibility in intellectual practice: Mollick generates frameworks; Armstrong evaluates them against PME’s specific requirements, finding alignment where it exists and identifying limitations where civilian assumptions meet military realities.
Theoretical Framework
The issue concludes with 'Educating Tomorrow's Leaders: Human-AI Collaboration Patterns for Professional Military Education'. Drawing on the February 2025 workshop, this article develops the command-control framework and three collaboration patterns previewed in this introduction, grounding them in operational observation from Ukrainian and Israeli experience and NATO doctrinal analysis.
In addition to the framework, it documents a transferable workshop methodology for institutions navigating their own AI integration challenges – complexity-aware facilitation that enables patterns to emerge from practitioner wisdom rather than imposing predetermined frameworks. It acknowledges implementation obstacles candidly: institutional conservatism, rapid technological change, resource constraints, cultural resistance. And it positions collaboration as strategic necessity for institutions that cannot achieve transformation alone.
In the spirit of this issue’s focus, the article includes transparent documentation of AI-assisted research methodology in Appendix A. This introduction itself was developed through similar collaboration: command functions remained editorial – determining thematic architecture, deciding how to position each contribution, ensuring the final voice reflected intent rather than generated text. Control functions were augmented through AI assistance – synthesising across manuscripts to identify connections, generating structural options, drafting sections for revision and refinement. The three patterns operated throughout: Strategic Sense-Making as AI identified thematic patterns while editorial judgment determined which held significance; Ethical Responsibility as AI generated framing options while editorial judgment evaluated each against contributors’ intentions; Adaptive Command as editorial objectives guided AI-augmented analysis.
Having read the preceding contributions, readers will recognise how these patterns operate across diverse contexts: Strategic Sense-Making in Puranen and Kukkola’s paradigm analysis and Karabelias and colleagues' technology survey; Ethical Responsibility in the gender equality examination and Armstrong's critical engagement; Adaptive Command in Bendiksen’s cognitive infrastructure and throughout as human judgment guides AI-augmented analysis. The framework makes explicit what the contributions demonstrate.

The Way Ahead

This special issue advances understanding of AI integration in Professional Military Education, yet significant work remains. Ongoing implementation efforts across the region have identified challenges that frame continued collaborative attention.
National Strategy and Institutional Alignment: How do PME institutions translate national AI strategies and NATO-level guidance into curriculum revision, faculty development, and resource allocation? The gap between strategic aspiration and institutional implementation remains substantial.
Technical Infrastructure: Private large language models, closed network solutions, AI-enabled wargaming, and secure data environments require expertise and investment that many institutions lack. Practical pathways for resource-constrained institutions need development.
Leadership Competencies: Beyond AI literacy, what command competencies do AI-enabled operations require? How do PME curricula develop the judgment, ethical reasoning, and adaptive capacity that effective human-AI collaboration demands?
Assessment Methodologies: How do institutions measure whether AI integration improves educational outcomes? What metrics capture not just efficiency gains but development of the human capacities that remain essential?
Resource Coordination: How do smaller institutions share development costs, lessons learned, and emerging practices? Regional cooperation offers possibilities that isolated institutional efforts cannot achieve.
These gaps frame the next phase of collaborative work. The Baltic Defence College will host 'Bridging the AI Adoption Gap: AI in Professional Military Education' in February 2026, moving from theoretical exploration to practical problem-solving. The workshop will address national AI strategies and institutional policy alignment, technical implementation case studies, leadership development for AI-enabled operations, and regional network development for sustained collaboration.
Institutions interested in participating or contributing to future research are invited to contact the Baltic Defence College. The challenge of preparing leaders for AI-enabled warfare exceeds what any single institution can address. The contributions gathered here demonstrate that the work is underway – across institutions, across perspectives, across the Alliance. AI augments control functions while humans retain command authority. The work continues through collaboration, iterative refinement, and sustained attention to both innovation and responsibility. The mission demands we proceed together.

Notes

AI Statement:
[1] The introduction to a special issue on human-AI collaboration deserves an AI statement that demonstrates the framework rather than simply discloses tool use – it should model what the issue advocates: transparency that actually works. Given the issue's focus, I want to show this in practice.
This introduction was developed through human-AI collaboration reflecting the command-control framework presented in this special issue. Command functions remained with me as Guest Editor: determining thematic architecture, deciding how to position each contribution, ensuring the final voice reflected my editorial intent. Control functions were augmented through Anthropic's Claude: synthesising across manuscripts, generating structural options, and drafting sections for my revision.
The three collaboration patterns operated throughout. Strategic Sense-Making: Claude identified thematic patterns across manuscripts while I determined which connections held significance. Ethical Responsibility: Claude generated framing options while I evaluated each against contributors' intentions. Adaptive Command: my editorial objectives guided Claude-augmented analysis throughout.
The final product is mine.
Journal on Baltic Security

  • Online ISSN: 2382-9230
  • Print ISSN: 2382-9222
  • Copyright © 2021 Baltic Defence College
