<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.0 20120330//EN" "JATS-journalpublishing1.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">JOBS</journal-id>
      <journal-title-group>
        <journal-title>Journal on Baltic Security</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2382-9230</issn>
      <issn pub-type="ppub">2382-9222</issn>
      <publisher>
        <publisher-name>BDC</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="publisher-id">JOBS-11-2-JOBS-2025-008</article-id>
      <article-id pub-id-type="doi">10.57767/jobs_2025_008</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Editorial</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Editorial Introduction: Artificial Intelligence in Professional Military Education: Patterns for Human-AI Collaboration</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-0191-8203</contrib-id>
          <name>
            <surname>Combes</surname>
            <given-names>William "Bill"</given-names>
          </name>
          <email xlink:href="mailto:william.combes@baltdefcol.org">william.combes@baltdefcol.org</email>
          <xref ref-type="aff" rid="j_JOBS_aff_000"/>
          <xref ref-type="corresp" rid="cor1">∗</xref>
        </contrib>
        <aff id="j_JOBS_aff_000">Baltic Defence College</aff>
      </contrib-group>
      <author-notes>
        <corresp id="cor1"><label>∗</label>Corresponding author.</corresp>
      </author-notes>
      <volume>11</volume>
      <issue>2</issue>
      <fpage>1</fpage>
      <lpage>11</lpage>
      <pub-date pub-type="epub">
        <day>29</day>
        <month>12</month>
        <year>2025</year>
      </pub-date>
      <permissions>
        <copyright-statement>Open Access. ©</copyright-statement>
        <copyright-year>2025</copyright-year>
        <copyright-holder>William Combes</copyright-holder>
        <license license-type="open-access" xlink:href="https://creativecommons.org/licenses/by/4.0/">
          <license-p>This work is licensed under the Creative Commons Attribution 4.0 International License.</license-p>
        </license>
      </permissions>
      <abstract>
       <p> Professional Military Education faces a fundamental challenge: preparing leaders to collaborate effectively with AI capabilities that evolve faster than curriculum cycles, in operational contexts still being defined, against adversaries who conceptualise AI through different strategic logics. This editorial introduction frames the Journal on Baltic Security special issue on Artificial Intelligence in Professional Military Education, presenting a distinction between command functions (requiring human judgment) and control functions (amenable to AI augmentation) that emerged from collaborative exploration among Baltic-Nordic defence organisations and NATO institutions. Three patterns of effective human-AI collaboration – Strategic Sense-Making, Ethical Responsibility, and Adaptive Command – provide the conceptual thread connecting six diverse contributions spanning adversary paradigm analysis, technology survey, NATO cognitive infrastructure, gender and inclusion, practitioner reflection, and empirical framework development. The introduction identifies implementation gaps requiring continued attention and points toward future collaborative work addressing practical solutions for PME institutions navigating AI integration.</p>
      </abstract>
    </article-meta>
  </front>

  
<body>

<p><bold>The Challenge Before Us</bold></p>
<p>Artificial Intelligence (AI) is transforming how militaries operate, decide, and fight. The integration of AI into command-and-control systems, 
  intelligence analysis, autonomous platforms, and decision-support tools proceeds rapidly across NATO and among potential adversaries. 
  Yet Professional Military Education (PME) faces a fundamental challenge: preparing leaders to collaborate effectively with AI 
  capabilities that evolve faster than curriculum cycles, in operational contexts still being defined, against adversaries 
  who conceptualise AI through different strategic logics.
</p><p>
  This challenge is not theoretical. Institutions across the Alliance are actively implementing AI and digital 
  transformation initiatives – developing faculty capabilities, revising curricula, experimenting with new tools, 
  navigating policy frameworks that struggle to keep pace with technological change. The Baltic Defence College’s own efforts, 
  guided by its Digital Transformation and Artificial Intelligence Working Group, 
  revealed early on that no institution can address this challenge alone. The complexity exceeds what 
  isolated efforts can manage; the lessons emerging from early adoption need mechanisms for sharing across institutional boundaries.
</p><p>
  Recognition that collaboration was necessary shaped two initiatives. In February 2025, 
  the Baltic Defence College convened a workshop on AI in PME bringing together representatives 
  from regional ministries of defence, military academies, and NATO institutions to explore AI integration 
  challenges and share emerging practices. The call for papers for this special issue invited contributions 
  from diverse institutions – faculty, technical teams, institutional leaders, researchers at various 
  stages of AI adoption – to document lessons, frameworks, and critical perspectives that could inform the broader PME community.
</p><p>
  The contributions gathered here represent that diversity: Finnish scholars analysing Russian and 
  Chinese military thought in original languages, Greek practitioners mapping the technological landscape, 
  a Norwegian researcher examining NATO’s cognitive infrastructure, Turkish institutional case studies 
  on inclusion and ethics, a practitioner navigating her AI scepticism through a civilian educator’s guide to AI collaboration, 
  and empirical findings from the AI in PME collaboration. What unites these contributions is a shared concern: 
  ensuring that as AI transforms military operations, PME transforms leaders’ capacity to exercise human judgment 
  where it matters most.
</p>
<p><bold>Command, Control, and Collaboration</bold></p>
<p>A distinction that emerged from practitioner exploration at the BALTDEFCOL workshop provides the 
  conceptual thread connecting these diverse contributions: the difference between command functions 
  and control functions in AI-enabled operations.
</p><p>
  Command encompasses functions requiring human judgment – leadership, authority, responsibility, 
  strategic decision-making. These remain essentially human regardless of technological advancement. 
  No algorithm assumes accountability for operational consequences. No AI system provides the moral 
  authority that legitimises the use of force. Command requires understanding context, weighing competing values, 
  accepting responsibility for uncertainty – capacities that remain distinctively human even as AI capabilities expand.
</p><p>
  Control encompasses functions where AI augmentation enhances speed, scale, and precision – communication, 
  information processing, coordination, environmental monitoring. Here AI excels: processing sensor data 
  at volumes humans cannot manage, identifying patterns across disparate sources, maintaining situational 
  awareness across extended operations, coordinating complex logistics. The augmentation of control functions 
  extends what human commanders can perceive, coordinate, and direct.
</p><p>
  This distinction – command remains human, control can be AI-enabled – provides clarity for 
  technology adoption decisions, training design, and leader development priorities. It suggests 
  that PME must develop both: the AI literacy required to leverage enhanced control functions, 
  and the judgment, ethics, and strategic thinking that effective command demands.
</p><p>
  Three patterns of effective human-AI collaboration emerged from the workshop exploration, 
  each reflecting this command-control distinction:
</p><p>
  Strategic Sense-Making pairs AI pattern recognition with human contextual judgment. AI systems 
  identify patterns across vast information – intelligence reports, sensor data, open-source analysis – 
  at speeds and scales humans cannot match. Humans provide the contextual judgment that determines 
  operational significance: what these patterns mean within specific strategic circumstances, historical precedents, 
  and political constraints.
</p><p>
  Ethical Responsibility pairs AI option generation with human ethical evaluation. AI systems 
  rapidly generate courses of action, alternative phrasings, scenario variations. Humans evaluate 
  each against ethical frameworks, professional standards, and institutional values – accepting, modifying, 
  or rejecting based on judgment that algorithms cannot replicate.
</p><p>
  Adaptive Command pairs human primacy in objective-setting with AI environmental sensing. 
  Humans establish what must be achieved and why; AI systems monitor the environment for changes relevant 
  to those objectives, alerting commanders to developments requiring attention while the fundamental 
  direction remains under human authority.
</p><p>
  Readers will encounter these patterns operating – sometimes explicitly, sometimes implicitly – 
  across the contributions that follow. The final article develops the full theoretical framework; 
  the preceding contributions demonstrate the patterns in application.
</p>

<p><bold>The Contributions</bold></p>
<p><italic>Mapping the Threat Landscape</italic></p>
<p>The issue opens with <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/135">Puranen and Kukkola's 'A Eurasian Paradigm of Intelligent Warfare? 
  How China and Russia Perceive AI’s Impact on Military Power'</ext-link>. Through analysis of over 180 Russian military 
  articles and extensive Chinese sources in original languages, the authors map the conceptual 
  terrain that PME must help officers navigate.
</p><p>
  Their central finding challenges assumptions about a monolithic adversary approach: no unified 
  Eurasian paradigm exists despite deepening Sino-Russian partnership. Russian military thinking emphasises 
  information superiority and defensive sovereignty within a 2030 horizon, viewing AI as enhancing existing 
  systems rather than transforming warfare’s fundamental character. Chinese thinking envisions transformative 
  'intelligentisation' – human-machine fusion, organisational revolution, cognitive domain expansion – toward 2050 and beyond.
</p><p>
  For PME, this divergence demands comparative paradigm analysis. Officers must 
  understand that adversaries conceptualise AI through different logics shaped by strategic culture, 
  historical experience, and geopolitical position. The analytical work itself demonstrates Strategic 
  Sense-Making: synthesising multilingual military literature requires pattern recognition at scale, 
  but determining what those patterns mean for NATO strategy requires human contextual judgment 
  that understands why Russian defensive framing differs from Chinese transformative ambition. 
  This divergence extends to ethical considerations – a dimension NATO cannot abandon regardless 
  of whether adversaries share it.
</p>

<p><italic>Surveying the Technological Landscape</italic></p>
<p>From adversary concepts, the issue moves to available tools. <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/136">Karabelias, Zafeiris, and Chontos</ext-link> offer 
  <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/136">'From the Analogue Warrior to the Algorithmic Strategist: The Evolution of Military Training through Artificial Intelligence'</ext-link>. 
  This comprehensive survey maps the AI application landscape for military training: adaptive learning systems 
  that personalise instruction, synthetic environments enabling realistic simulation, 
  decision-support tools accelerating staff processes, generative AI creating dynamic scenarios.
</p><p>
  Drawing on frameworks from NATO, the United Kingdom, the United States, and the European Union, 
  the authors catalogue what exists and what responsible adoption requires – ethical governance, security protocols, 
  human oversight mechanisms. The survey serves readers at different starting points: newcomers gain orientation 
  to the field; experienced practitioners gain a checklist against which to evaluate institutional readiness.
</p><p>
  Like the threat analysis, this mapping exercise demonstrates Strategic Sense-Making. 
  Synthesising dispersed technical literature, policy documents, and case studies into a coherent 
  overview requires processing information at scale. Evaluating which applications merit institutional 
  investment – given operational requirements, resource constraints, and organisational culture – requires 
  human judgment that no catalogue can provide.
</p>

<p><italic>Building Cognitive Infrastructure</italic></p>
<p>If Puranen and Kukkola map the threat and Karabelias and colleagues map the tools, 
  <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/137">Bendiksen</ext-link> addresses how NATO’s human architecture enables coherent response. 
  <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/137">'Human Interoperability through PME: Standardised Mental Models as Enablers for Data-Centric Warfighting'</ext-link> 
  argues that shared heuristics taught through Professional Military Education – ends-ways-means reasoning, 
  fighting power frameworks, interoperability models – function as cognitive connectors enabling officers 
  from different nations and institutions to think together effectively.
</p><p>
  This reframes PME as NATO’s 'cognitive infrastructure', essential to rather than separate from technological 
  transformation. Technical interoperability can be standardised through equipment and protocols. 
  Procedural interoperability can be aligned through doctrine. But human interoperability – the trust, 
  shared understanding, and compatible mental models that enable multinational forces to act as one – must 
  be cultivated through education.
</p><p>
  The paper establishes the cognitive foundation that Adaptive Command requires as AI transforms 
  the information environment, shared mental models ensure officers retain the conceptual primacy needed 
  to set objectives and interpret AI-generated information. Officers educated in common heuristics share 
  compatible frameworks for that interpretation, enabling collective sense-making even as the volume and 
  velocity of data exceed what any individual could process. The heuristics are portable cognitive tools 
  that keep humans in command even as control functions transform.
</p>

<p><italic>Examining Ethical Dimensions</italic></p>
<p>The issue then pauses for <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/139">Kuloğlu and Koçanli</ext-link> to ask a necessary question: who is included or 
  excluded when military education digitises? <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/139">'Gender Equality in AI-Supported Military Education: 
  Literature Insights and Evidence from Turkish Institutions'</ext-link> examines the intersection of algorithmic bias 
  and institutional inequality through case studies of three Turkish military educational institutions.
</p><p>
  The authors reveal a dual gap: minimal AI adoption coincides with absent gender perspectives. 
  This could enable thoughtful, inclusive implementation – a blank slate for getting it right from the start. 
  Or it could risk embedding historical biases at scale if institutions adopt AI tools without 
  awareness of how algorithms trained on male-dominated military data may perpetuate exclusion.
</p><p>
  The paper’s insight resonates beyond the Turkish context: algorithmic bias reflects 
  institutional inequalities, not just technical problems. Institutions that struggle with human-centred 
  inclusion policies will struggle with the cognitive complexity of evaluating AI outputs for bias. 
  The paper identifies the need for practical frameworks to guide gender-sensitive AI implementation and 
  argues for developing systematic approaches to prevent algorithmic bias in military education systems across NATO.
</p><p>
  This contribution illustrates why Ethical Responsibility is essential. By identifying a gap in 
  Turkish institutions – minimal AI adoption coinciding with absent gender frameworks – the authors 
  argue that ethical evaluation must accompany AI adoption from the outset, not follow as an afterthought. 
  AI systems generate training scenarios, assessment algorithms, and educational content. 
  Humans must evaluate these outputs against ethical frameworks – asking whose experiences 
  shaped the training data, whose perspectives inform the algorithms, whether technological 
  futures will be more or less inclusive than the past. The paper also demonstrates Strategic 
  Sense-Making: the literature synthesis required pattern recognition across dispersed gender 
  and AI scholarship, but determining implications for specific institutional contexts 
  required human judgment informed by direct observation.
</p>

<p><italic>Practitioner Reflection</italic></p>
<p>Before the theoretical synthesis, the issue offers practitioner reflection. <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/140">Armstrong's</ext-link> 
  <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/140">review of Ethan Mollick's <italic>Co-Intelligence: Living and Working with AI</italic></ext-link> models the intellectual 
  journey many PME professionals are navigating – engaging thoughtfully with AI’s potential while 
  maintaining justified scepticism about rushed or haphazard integration.
</p><p>
  Armstrong's critical stance enriches the issue. She challenges Mollick’s framing of 
  AI scepticism as rooted in fear of replacement, arguing instead that genuine sceptics are motivated 
  by concerns for safety, security, sustainability, and equity during this transformational period. 
  Her central critique: Mollick’s optimism about experimentation and iterative learning sits uneasily 
  with military contexts where ultimate stakes are higher and institutional cultures may resist 
  the systematic experimentation his approach requires. Drawing on her experience as BALTDEFCOL 
  faculty navigating AI integration, Armstrong observes that there is not yet a final answer 
  for risk mitigation as institutions “fumble” – even systematically – through rapid technological expansion.
</p><p>
  Yet her willingness to let the book challenge her assumptions – acknowledging that 
  reading <italic>Co-Intelligence</italic> shifted her thinking even without converting her to AI enthusiasm – 
  demonstrates the adaptive learning this issue advocates. The review embodies Ethical 
  Responsibility in intellectual practice: Mollick generates frameworks; Armstrong evaluates 
  them against PME’s specific requirements, finding alignment where it exists and identifying 
  limitations where civilian assumptions meet military realities.
</p>

<p><italic>Theoretical Framework</italic></p>
<p>The issue concludes with <ext-link ext-link-type="uri" xlink:href="https://journalonbalticsecurity.com/journal/JOBS/article/141">'Educating Tomorrow's Leaders: Human-AI Collaboration 
  Patterns for Professional Military Education'</ext-link>. Drawing on the February 2025 workshop, 
  this article develops the command-control framework and three collaboration patterns previewed 
  in this introduction, grounding them in operational observation from Ukrainian and Israeli 
  experience and NATO doctrinal analysis.
</p><p>
  In addition to the framework, it documents a transferable workshop methodology for 
  institutions navigating their own AI integration challenges – complexity-aware facilitation 
  that enables patterns to emerge from practitioner wisdom rather than imposing predetermined frameworks. 
  It acknowledges implementation obstacles candidly: institutional conservatism, rapid technological change, 
  resource constraints, cultural resistance. And it positions collaboration as strategic 
  necessity for institutions that cannot achieve transformation alone.
</p><p>
  In the spirit of this issue’s focus, the article includes transparent documentation 
  of AI-assisted research methodology in Appendix A. This introduction itself was developed through 
  similar collaboration: command functions remained editorial – determining thematic architecture, 
  deciding how to position each contribution, ensuring the final voice reflected intent rather than 
  generated text. Control functions were augmented through AI assistance – synthesising across 
  manuscripts to identify connections, generating structural options, drafting sections for 
  revision and refinement. The three patterns operated throughout: Strategic Sense-Making as 
  AI identified thematic patterns while editorial judgment determined which held significance; 
  Ethical Responsibility as AI generated framing options while editorial judgment evaluated each 
  against contributors’ intentions; Adaptive Command as editorial objectives guided AI-augmented analysis.
</p><p>
  Having read the preceding contributions, readers will recognise how these patterns operate 
  across diverse contexts: Strategic Sense-Making in Puranen and Kukkola’s paradigm analysis 
  and Karabelias and colleagues' technology survey; Ethical Responsibility in the gender equality 
  examination and Armstrong's critical engagement; Adaptive Command in Bendiksen’s cognitive 
  infrastructure and throughout as human judgment guides AI-augmented analysis. The framework 
  makes explicit what the contributions demonstrate.
</p>

<p><bold>The Way Ahead</bold></p>
<p>This special issue advances understanding of AI integration in Professional Military Education, 
  yet significant work remains. Ongoing implementation efforts across the region have identified 
  challenges that frame continued collaborative attention.
</p><p>
  National Strategy and Institutional Alignment: How do PME institutions translate national 
  AI strategies and NATO-level guidance into curriculum revision, faculty development, 
  and resource allocation? The gap between strategic aspiration and institutional implementation remains substantial.
</p><p>
  Technical Infrastructure: Private large language models, closed network solutions, 
  AI-enabled wargaming, and secure data environments require expertise and investment 
  that many institutions lack. Practical pathways for resource-constrained institutions need development.
</p><p>
  Leadership Competencies: Beyond AI literacy, what command competencies does 
  AI-enabled operations require? How do PME curricula develop the judgment, ethical reasoning, 
  and adaptive capacity that effective human-AI collaboration demands?
</p><p>
  Assessment Methodologies: How do institutions measure whether AI integration improves educational outcomes?
   What metrics capture not just efficiency gains but development of the human capacities that remain essential?
</p><p>
  Resource Coordination: How do smaller institutions share development costs, lessons learned, and emerging 
  practices? Regional cooperation offers possibilities that isolated institutional efforts cannot achieve.
</p><p>
  These gaps frame the next phase of collaborative work. The Baltic Defence College 
  will host 'Bridging the AI Adoption Gap: AI in Professional Military Education' in February 2026, 
  moving from theoretical exploration to practical problem-solving. The workshop will address national 
  AI strategies and institutional policy alignment, technical implementation case studies, 
  leadership development for AI-enabled operations, and regional network development 
  for sustained collaboration.
</p><p>
  Institutions interested in participating or contributing to future research are invited to 
  contact the Baltic Defence College. The challenge of preparing leaders for AI-enabled warfare exceeds 
  what any single institution can address. The contributions gathered here demonstrate that 
  the work is underway – across institutions, across perspectives, across the Alliance. 
  AI augments control functions while humans retain command authority. The work continues 
  through collaboration, iterative refinement, and sustained attention to both innovation 
  and responsibility. The mission demands we proceed together.
</p>
<xref ref-type="fn" rid="footnote1"> </xref> 

</body>
<back>
  <fn-group>
      <fn id="footnote1"><p><bold>AI Statement:</bold> The introduction to a special issue on human-AI collaboration 
        deserves an AI statement that demonstrates the framework rather than simply discloses tool 
        use – it should model what the issue advocates: transparency that actually works. Given 
        the issue's focus, I want to show this in practice.</p>
      <p>This introduction was developed through human-AI collaboration reflecting the 
      command-control framework presented in this special issue. Command functions remained 
      with me as Guest Editor: determining thematic architecture, deciding how to position 
      each contribution, ensuring the final voice reflected my editorial intent. Control functions 
      were augmented through Anthropic's Claude: synthesising across manuscripts, generating structural 
      options, and drafting sections for my revision.</p>
      <p>The three collaboration patterns operated throughout. Strategic Sense-Making: 
      Claude identified thematic patterns across manuscripts while I determined which connections 
      held significance. Ethical Responsibility: Claude generated framing options while I evaluated 
      each against contributors' intentions. Adaptive Command: my editorial objectives guided Claude-augmented analysis throughout.</p>
      <p>The final product is mine.</p></fn>
    </fn-group>
</back>  
</article>
