Human Agency Controls: Why 96% of Organizations Need Dynamic Authority Over AI Agents

Pieter Van Schalkwyk

CEO at XMPRO

This article originally appeared on the XMPro CEO’s LinkedIn blog, The Digital Engineer

Only 4% of organizations want to avoid AI agents entirely. Yet 67% refuse to give them full control. This gap between adoption and autonomy is creating the biggest challenge and opportunity in industrial AI.

Research from LNS Research reveals an unprecedented consensus around AI agent necessity across industries. Organizations recognize that agents will transform their operations. However, comfort with autonomous operation varies dramatically between companies seeking full autonomy and those demanding extensive human oversight.

This universal adoption, combined with varied preferences, creates a fundamental challenge. Organizations need systems that preserve sophisticated AI intelligence while adapting execution authority to their specific requirements. The solution lies in what we call “Human Agency Controls” that enable dynamic authority management.

The Reality of Universal Agent Adoption

Niels Erik Andersen’s LNS Research findings reveal something remarkable about industrial AI readiness. The consensus around agent necessity happened faster than any previous industrial technology adoption. Organizations across all industries recognize that AI agents will transform their operations.

However, comfort with autonomous operation varies dramatically. About 33% of organizations want AI agents taking actions without human intervention. The remaining 67% prefer different degrees of human oversight and control.

This distribution creates an architectural challenge that traditional approaches cannot solve. Companies cannot choose between agent-based and conventional systems because 96% need agent capabilities. They must determine how to implement agents within their operational and cultural contexts.

The Four Domains of Agent Operation

Andersen’s research framework provides a foundation for understanding when different authority levels become appropriate. Agent behavior operates across safety and training dimensions that create four distinct operational domains.

The Safe & Trained domain represents optimal conditions where agents possess relevant experience. They operate within established safety parameters with full knowledge of similar situations. Organizations comfortable with autonomy can implement closed-loop execution here.

The Safe & Untrained domain presents scenarios where conditions remain safe but agents lack specific training. Even in safe environments, agents should not operate autonomously beyond their knowledge boundaries. The system must transition to human-guided operation while agents learn.

The Unsafe & Trained domain creates complex challenges because agents have experience but conditions exceed safety parameters. Even organizations pursuing full autonomy must implement deterministic safety overrides. Agent recommendations can inform decisions, but execution authority transfers to safety systems.

The Unsafe & Untrained domain represents conditions where agents lack both experience and safe conditions. No responsible implementation permits agent operation here. Immediate transition to human control becomes mandatory regardless of organizational autonomy preferences.
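The four domains described above reduce to a simple two-dimensional decision rule. The sketch below is illustrative only; the `Authority` enum and `classify_domain` function are hypothetical names, not part of any XMPro API.

```python
from enum import Enum

class Authority(Enum):
    AUTONOMOUS_ALLOWED = "closed-loop execution permitted"
    HUMAN_GUIDED = "human-guided operation while the agent learns"
    SAFETY_OVERRIDE = "deterministic safety systems take over"
    HUMAN_CONTROL = "immediate transition to human control"

def classify_domain(is_safe: bool, is_trained: bool) -> Authority:
    """Map the safety and training dimensions to an execution authority."""
    if is_safe and is_trained:
        return Authority.AUTONOMOUS_ALLOWED   # Safe & Trained
    if is_safe and not is_trained:
        return Authority.HUMAN_GUIDED         # Safe & Untrained
    if not is_safe and is_trained:
        return Authority.SAFETY_OVERRIDE      # Unsafe & Trained
    return Authority.HUMAN_CONTROL            # Unsafe & Untrained
```

Note that neither "unsafe" quadrant permits agent execution, regardless of how much training the agent has.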

How Human Agency Controls Solve the Challenge

XMPro MAGS addresses this through Human Agency Controls (HAC) that separate cognitive capability from execution authority. The same agent intelligence serves organizations across the entire autonomy spectrum. HAC adjusts approval pathways rather than agent reasoning capability.

Conservative implementations can require extensive approval workflows for routine recommendations. They still benefit from agent analysis that exceeds human capacity for pattern recognition and coordination. Autonomous implementations enable direct execution within predefined safety boundaries while maintaining identical intelligence frameworks.

Human Agency Controls operate at multiple organizational levels to accommodate different requirements:

  • Organizational-level controls establish baseline authority policies reflecting regulatory and cultural preferences
  • Domain-specific controls allow managers to adjust authority for different functional areas
  • Task-level controls provide supervisors with dynamic management based on current conditions

This flexibility means agents handle routine decisions automatically while ensuring human expertise guides complex situations. The agents learn from human decisions to expand their autonomous capabilities over time.
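One plausible way to resolve these layered controls into a single effective autonomy level is shown below. This is an illustrative sketch, not XMPro's documented resolution logic: the function name, the "more specific wins" rule, and the organizational ceiling are all assumptions.

```python
from typing import Optional

def effective_autonomy(org_default: int,
                       domain_override: Optional[int] = None,
                       task_override: Optional[int] = None,
                       org_ceiling: int = 5) -> int:
    """Resolve the effective autonomy level (1 = human-driven, 5 = AI-driven).

    More specific settings override broader ones, but no setting may grant
    more autonomy than the organizational ceiling allows.
    """
    level = org_default
    if domain_override is not None:
        level = domain_override   # functional-area manager adjustment
    if task_override is not None:
        level = task_override     # supervisor's dynamic, condition-based setting
    return min(level, org_ceiling)
```

Under this rule a supervisor can always tighten authority for current conditions, while the organizational policy caps how far autonomy can ever be raised.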

The XMPro Human Agency Score Framework

Stanford University’s research on Human Agency Scores provides the scientific foundation for our Human Agency Controls implementation. Their comprehensive study of 1,500 workers across 104 occupations reveals crucial insights about worker preferences for AI collaboration.

XMPro adopts an inverted Human Agency Score scale that aligns with industry standards like SAE autonomous driving levels. This makes the system intuitive for industrial users familiar with other autonomy frameworks.

The Five Levels of Human Agency

Our Human Agency Score framework directly parallels the well-established SAE autonomous driving levels, making it intuitive for organizations already familiar with automation standards.

HAS 1 (No Autonomy): Human-Driven

  • Equivalent to SAE Level 0: No automation
  • Human maintains complete control with AI as basic tool
  • Continuous human involvement essential for task completion
  • Worker preference: 1.0% of occupations choose this level
  • Examples: Creative problem-solving, crisis management, safety protocols

HAS 2 (Minimal Autonomy): Human-Assisted

  • Equivalent to SAE Level 1-2: Driver assistance to partial automation
  • Human drives the process with limited AI support
  • AI provides recommendations while human retains authority
  • Worker preference: 16.3% of occupations prefer this level
  • Examples: Medical diagnosis, engineering design, regulatory compliance

HAS 3 (Collaborative Autonomy): Partnership

  • Equivalent to SAE Level 3: Conditional automation with active oversight
  • True human-AI partnership with shared decision-making
  • Most preferred level: 45.2% of occupations choose this approach
  • Both parties contribute unique strengths to task completion
  • Examples: Process optimization, maintenance planning, strategic analysis

HAS 4 (High Autonomy): AI-Assisted

  • Equivalent to SAE Level 4: High automation in defined conditions
  • AI manages most operations with minimal human oversight
  • Human input required only at critical decision points
  • Worker preference: 35.6% of occupations prefer this level
  • Examples: Quality control monitoring, inventory management, routine scheduling

HAS 5 (Full Autonomy): AI-Driven

  • Equivalent to SAE Level 5: Full automation in all conditions
  • AI handles tasks entirely without human involvement
  • Suitable for routine, low-risk, highly predictable operations
  • Worker preference: Only 1.9% of occupations prefer this level
  • Examples: Data entry, basic reporting, standard monitoring
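The five levels above can be summarized in a small lookup table. This is a restatement of the framework for illustration; the dictionary and `describe` helper are hypothetical, not an XMPro artifact.

```python
# Illustrative lookup of the five HAS levels and their SAE parallels
HAS_LEVELS = {
    1: ("No Autonomy: Human-Driven", "SAE Level 0"),
    2: ("Minimal Autonomy: Human-Assisted", "SAE Levels 1-2"),
    3: ("Collaborative Autonomy: Partnership", "SAE Level 3"),
    4: ("High Autonomy: AI-Assisted", "SAE Level 4"),
    5: ("Full Autonomy: AI-Driven", "SAE Level 5"),
}

def describe(has: int) -> str:
    """Render a one-line description of a Human Agency Score level."""
    label, sae = HAS_LEVELS[has]
    return f"HAS {has} ({label}) ~ {sae}"
```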

Why This Alignment Matters

The SAE framework provides a proven model for understanding automation levels that industrial organizations already use for vehicle fleets and autonomous equipment. By aligning our Human Agency Scores with these established standards, we create immediate familiarity and trust.

Just as SAE Level 3 vehicles require drivers to remain alert and ready to intervene, HAS 3 systems maintain active human partnership in decision-making. Similarly, SAE Level 4 operates autonomously within defined zones, while HAS 4 enables autonomous operation within defined operational parameters.

Research Insights That Shape Implementation

The Stanford research reveals three critical findings that guide our Human Agency Controls implementation:

  • 61.5% of workers prefer high human involvement (HAS 1-3) rather than high automation
  • 47.5% prefer lower autonomy levels than experts consider technically feasible
  • 69.38% want AI to free up time for high-value work rather than eliminate roles

These findings validate our approach of defaulting to collaborative autonomy while enabling adjustment based on task characteristics and organizational comfort.

Task Characteristics That Determine Authority Levels

The research identifies four key factors that influence optimal HAS levels:

  • Interpersonal Communication: Tasks requiring human relationships need HAS 1-2
  • Domain Expertise: Specialized knowledge and experience favor HAS 1-3
  • Uncertainty and Risk: High-stakes decisions with uncertain outcomes require HAS 1-3
  • Physical Action Requirements: Tasks requiring physical interaction vary by complexity

XMPro MAGS automatically recommends appropriate HAS levels based on these characteristics while allowing manual override based on organizational preferences.
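A recommendation like this could be derived from the four factors roughly as follows. The scoring is invented for illustration (the article does not disclose XMPro's actual logic): each risk factor caps the recommendation at the level range the research associates with it, starting from full autonomy.

```python
def recommend_has(interpersonal: bool, specialized_expertise: bool,
                  high_uncertainty: bool, complex_physical: bool) -> int:
    """Recommend a Human Agency Score (1 = human-driven, 5 = AI-driven)."""
    has = 5  # start at full autonomy: routine, predictable, low-risk work
    if interpersonal:
        has = min(has, 2)   # human-relationship tasks need HAS 1-2
    if specialized_expertise:
        has = min(has, 3)   # domain expertise favors HAS 1-3
    if high_uncertainty:
        has = min(has, 3)   # high-stakes, uncertain outcomes require HAS 1-3
    if complex_physical:
        has = min(has, 4)   # complex physical interaction limits autonomy
    return has
```

A human override would then adjust this starting point to organizational preference, consistent with the research finding that workers often want less autonomy than is technically feasible.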

The Stanford Research Validation

Stanford University research on worker preferences provides crucial validation for this approach. The study of 1,500 workers across 104 occupations reveals that 45.2% prefer collaborative autonomy rather than full automation. This preference significantly exceeds any other autonomy level.

More importantly, 61.5% of workers prefer high human involvement rather than high automation levels. This aligns closely with the 67% of organizations requiring human oversight. The convergence between organizational readiness and worker preferences validates the Human Agency Controls architecture.

The research shows that 47.5% of workers prefer lower autonomy levels than experts consider technically feasible. This preference gap disappears when systems preserve human agency through governance rather than limiting intelligence through design.

Bounded Autonomy in Practice

XMPro MAGS implements what we call “bounded autonomy” that maintains separation between decision logic and execution control. Agents observe operational conditions, reflect on patterns, plan coordinated responses, and generate recommendations using identical processes regardless of autonomy level.

Human Agency Controls determine the execution pathway for agent recommendations. High autonomy levels enable direct execution within safety boundaries. Lower autonomy levels insert human approval gates without changing underlying agent intelligence or safety validation.

This separation ensures conservative implementations benefit from sophisticated agent analysis while maintaining organizational control preferences. The same cognitive capabilities that identify optimization opportunities serve both autonomous and supervised implementations through different governance pathways.

Safety First Across All Levels

Safety requirements remain universal regardless of autonomy preferences or Human Agency Controls settings. XMPro maintains deterministic safety validation independent of agent reasoning processes.

The safe operating envelope is defined and managed at the XMPro DataStream level, not by individual agents. This critical architectural decision ensures that safety boundaries cannot be modified by agents regardless of their autonomy level. DataStreams establish immutable operational limits that protect equipment, processes, and personnel.

Within this envelope, Human Agency Controls determine approval requirements and execution pathways. Outside this envelope, all implementations must immediately transition to deterministic safety systems. Agents cannot override or adjust these fundamental safety boundaries under any circumstances.
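Putting bounded autonomy and the safety envelope together, the execution pathway could be sketched as below. The class and function names are hypothetical; the frozen dataclass stands in for DataStream-level limits that agents cannot modify, and the HAS threshold for direct execution is an assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: agents cannot mutate the envelope
class SafeEnvelope:
    min_value: float
    max_value: float

    def contains(self, value: float) -> bool:
        return self.min_value <= value <= self.max_value

def execute(recommendation: float, envelope: SafeEnvelope, has_level: int) -> str:
    """Route an agent recommendation through safety, then HAC, checks.

    Safety validation is deterministic and runs before any Human Agency
    Control logic, regardless of the configured autonomy level.
    """
    if not envelope.contains(recommendation):
        return "transfer to deterministic safety systems"
    if has_level >= 4:  # HAS 4-5: autonomous within the envelope
        return "execute directly"
    return "queue for human approval"
```

The ordering is the point: the envelope check cannot be bypassed by raising the autonomy level, which is what separates governance settings from safety guarantees.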

This safety-first foundation enables organizations to adjust authority levels based on comfort and requirements without compromising protection. The Human Agency Controls framework preserves organizational control while maintaining consistent safety standards across all autonomy levels.

Building for Organizational Evolution

Human Agency Controls recognize that organizational comfort with autonomous operation evolves over time. Initial implementations can operate with conservative control settings requiring extensive human approval. The same agent intelligence prepares the foundation for higher autonomy as readiness develops.

Organizations can begin with minimal autonomy and gradually increase execution boundaries as trust develops. This evolution happens through control adjustment rather than system redesign because cognitive capabilities remain constant while governance frameworks adapt.

The bounded autonomy architecture supports this evolution by separating what agents can understand from what they can execute. Sophisticated reasoning operates consistently while execution authority adapts to organizational maturity and comfort levels.

The Competitive Advantage

Organizations that master Human Agency Controls gain sustainable competitive advantage by benefiting from sophisticated agent intelligence regardless of their current autonomy comfort level. Conservative implementations gain advanced decision support while autonomous implementations achieve operational responsiveness at machine speed.

The Human Agency Controls framework addresses the fundamental challenge of universal agent adoption across varied preferences. Organizations ready for autonomous operation can maximize efficiency through high-autonomy settings. Those requiring oversight benefit from identical intelligence through conservative configuration.

Stanford research shows that 69.38% of workers want AI to free up time for high-value work. Human Agency Controls enable this by letting human expertise focus on strategic activities while agents handle routine tasks within appropriate boundaries.

Moving Forward with Dynamic Authority

Future-ready organizations will be those that implement AI agents effectively within their operational realities. The 96% adoption rate makes agent technology inevitable, not optional. The success factor becomes how well organizations implement dynamic authority through Human Agency Controls.

Organizations should start by assessing their current comfort with autonomous operation across different domains. They can implement conservative control settings initially while building experience and confidence. The same underlying agent intelligence serves both approaches through different execution pathways.

The key insight from both LNS and Stanford research is clear.

Success comes from preserving human agency through governance rather than limiting artificial intelligence through design.

Human Agency Controls provide the practical mechanism for achieving this balance.

Implementation Recommendations

Based on the research findings, organizations should:

  • Start with HAS 3 (Collaborative) as the default setting since 45.2% of workers prefer this level
  • Respect worker preferences for lower autonomy levels even when higher levels are technically possible
  • Use task characteristics to guide initial HAS recommendations while allowing adjustment
  • Measure worker satisfaction alongside operational metrics to ensure sustainable adoption

Companies that embrace this approach position themselves to benefit from sophisticated AI capabilities while respecting organizational culture and worker preferences. Human Agency Controls enable partnership between human and artificial intelligence rather than replacement of human judgment.

The question isn’t whether your organization will adopt AI agents. The question is whether you’ll implement them with the Human Agency Controls that enable success across your entire operation.

Pieter van Schalkwyk is the CEO of XMPro, specializing in industrial AI agent orchestration and governance. Drawing on 30+ years of experience in industrial automation, he helps organizations implement practical AI solutions that deliver measurable business outcomes while ensuring responsible AI deployment at scale.

About XMPro: We help industrial companies automate complex operational decisions. Our cognitive agents learn from your experts and keep improving, ensuring consistent operations even as your workforce changes.

Our GitHub Repo has more technical information, or you can contact me or Gavin Green directly.

Read more on MAGS at The Digital Engineer