Solving the AI Delegation Problem - A Framework for UX

04 Mar 2026 2:11 PM | Laura Cunningham (Administrator)

Author: Caleb Furlough, PhD

The mainstreaming of generative AI expectations has arrived. Those working in tech, including UXers, are experiencing pressure to integrate AI into their regular workflows. The question is no longer whether to use AI, but how best to integrate it. Today, many UXers approach delegating tasks to AI in an ad hoc, unstructured way. We ask it to generate a data visualization, pull useful quotes from transcripts, or rewrite an executive summary. What is frequently missing is a consistent, risk-informed approach to extracting value from AI tools by delegating the right tasks at the right time.

I want to offer a framework that can help improve the effectiveness of AI task delegation. Much of this is not even my framework at its core, but is my attempt to adapt and expand a classic framework in the human factors literature to address AI task delegation in today’s environment. 

Before LLMs: When to Automate

In 2000, names familiar to those of us with a human factors background (Parasuraman, Sheridan, and Wickens) published a paper titled “A Model for Types and Levels of Human Interaction with Automation.” In this paper, they shared a framework to guide task automation. Their original framework was created with high-risk tasks in mind, like those in aviation and industrial control systems, but it can easily be modified to apply to modern UX workflows.

The core of this framework is the idea that automation exists as a degree, not a binary. Deciding to delegate a task to an automated tool requires nuance. First, consider the type of task being delegated based on four stages of information processing. Second, determine the degree of automation on a sliding ten-point scale. I have taken the original framework (the 4 phases and 10 levels of automation) and slightly adjusted it to better fit modern AI contexts.

The 4 Information Processing Phases

AI can be applied at four distinct stages of a task or workflow, which align with four phases of human cognitive processing. These phases alone do not tell you when or how to automate a task with AI, but they provide a nudge in the right direction (e.g., taking action requires a higher degree of automation than simply acquiring information). 

  1. Acquiring Information: Sensing and gathering data

  2. Analyzing Information: Making sense of those data

  3. Deciding: Making a decision for action from available choices

  4. Taking Action: Executing that decision

The 10 Levels of Automation (LOA)

Within each of the 4 above phases, the degree to which a task is performed by AI (its level of automation) ranges from 1 to 10. The bands below summarize the scale.


  • Level 1: Human Only

  • Levels 2-4: Human Execution, AI Support

  • Level 5: AI Execution with Human Consent

  • Levels 6-8: Autonomous AI Execution

  • Levels 9-10: AI Ownership

The core insight of this framework is that AI automation comes in degrees, and each degree carries its own tradeoffs depending on the task at hand. The next step in task delegation is evaluating the suitability of a given task for some degree of AI automation against a set of evaluative criteria.
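To make the two dimensions concrete, here is a minimal sketch of the adapted model in code. The phase and band names come straight from the framework above; the class and function names are my own illustration, not part of the original paper.

```python
from enum import IntEnum


class Phase(IntEnum):
    """The four information processing phases, in order."""
    ACQUIRE = 1   # Acquiring Information: sensing and gathering data
    ANALYZE = 2   # Analyzing Information: making sense of those data
    DECIDE = 3    # Deciding: choosing an action from available options
    ACT = 4       # Taking Action: executing that decision


def describe_level(level: int) -> str:
    """Map a 1-10 level of automation (LOA) onto the bands above."""
    if not 1 <= level <= 10:
        raise ValueError("LOA must be between 1 and 10")
    if level == 1:
        return "Human Only"
    if level <= 4:
        return "Human Execution, AI Support"
    if level == 5:
        return "AI Execution with Human Consent"
    if level <= 8:
        return "Autonomous AI Execution"
    return "AI Ownership"
```

For example, a task classified as `Phase.ANALYZE` at level 3 would sit in the "Human Execution, AI Support" band: you do the sensemaking, AI assists.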

AI Task Fit Criteria

A given task can fit into different information processing categories and be automated to different degrees. This alone is helpful framing but we need an additional evaluative layer to filter where AI could truly add value. Here are some evaluative criteria to help in that assessment. This is not an exhaustive list, but covers many of the core factors that predict if an AI implementation will be successful.

  1. Comparative Advantage: I have written about this concept elsewhere, but Comparative Advantage is a classic theory in the field of economics. For our purposes, this criterion asks whether a human or an AI is not just better at a task in absolute terms, but relatively better compared to its performance on other tasks. Each should lean into the tasks it would pay the highest opportunity cost to give up. Think: “if I did this task myself instead of delegating it to AI, what other tasks would I not be able to spend my time on?”

  2. AI Capability Fit: Is AI, in its current state and with the tools and capabilities available to you today, able to execute the type of task you are considering?

  3. Mental Workload: If AI automated some or all of a task, how much would it reduce your mental workload?

  4. Situation Awareness: To what extent do you need to be aware of the progress or status of the task being executed? Do you need awareness at every step, only at certain checkpoints, only of the final outcome, or not at all?

  5. Skill Degradation: If you automate this task, will important skills atrophy over time? If yes, is that a problem or is that acceptable?

  6. Failure Cost: What are the impacts if the task fails or quality drops below the target level? 

  7. Total Task Time: I borrowed this thought from Ethan Mollick: consider the net productivity gain or loss by comparing the time and effort for a human to complete the task against an AI counterpart, including the expected cost of rework when the AI fails. Is human task time > AI task time + (probability of rework × rework time)?

  8. Verification Cost: Consider not only the time it takes AI to complete a task but also the time for you to verify or quality check its output.
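Criteria 7 and 8 lend themselves to a simple expected-time comparison. The sketch below is my own illustration, not from the original framework; the function name and the rework approximation (when the AI fails, you redo the task at full human cost) are assumptions.

```python
def ai_delegation_pays_off(human_time: float,
                           ai_time: float,
                           p_rework: float,
                           verification_time: float) -> bool:
    """Return True if delegating beats doing the task yourself on expected time.

    Expected AI-side cost = AI run time + time to verify the output, plus
    the expected cost of redoing the task yourself when the AI output fails
    (approximated here as p_rework * human_time).
    """
    expected_ai_cost = ai_time + verification_time + p_rework * human_time
    return human_time > expected_ai_cost


# Example: summarizing a stack of transcripts takes you 60 minutes.
# AI does it in 5, you spend 10 verifying, and it fails ~20% of the time.
ai_delegation_pays_off(60, 5, 0.2, 10)   # expected AI cost: 5 + 10 + 12 = 27 min

# A quick 10-minute task with an unreliable tool (80% rework rate) is not
# worth delegating: 5 + 10 + 8 = 23 min expected, versus 10 doing it yourself.
ai_delegation_pays_off(10, 5, 0.8, 10)
```

The point is not the exact arithmetic but the habit: fold verification and rework into the comparison before delegating, not after.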

The Full Task Delegation Flow

Putting it all together in a simple flow looks something like this:


Figure 1. The Full Task Delegation Flow (made with Nano Banana 2.0)
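For readers without the figure, the flow can also be sketched as a sequential screen: check AI capability fit and net time saved first, then use failure cost and comparative advantage to pick a level of automation. This is one plausible reading of the flow under my own assumptions; the function and its return strings are illustrative, not the article's canonical decision rules.

```python
def delegation_decision(capability_fit: bool,
                        net_time_saved: bool,
                        failure_cost_acceptable: bool,
                        comparative_advantage: bool) -> str:
    """Screen a task through the fit criteria, then pick an LOA band."""
    # Hard gates: if AI can't do the task, or delegating costs more time
    # than it saves, keep the task human.
    if not capability_fit or not net_time_saved:
        return "Level 1: keep it fully human"
    # High failure cost, or no comparative advantage for the AI, argues
    # for partial delegation with a human executing or consenting.
    if not failure_cost_acceptable or not comparative_advantage:
        return "Levels 2-5: partial delegation, human executes or consents"
    # Otherwise the task is a candidate for autonomous execution.
    return "Levels 6-10: autonomous execution or ownership"
```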

Consider the Jagged Frontier, Partial Delegation

One more thing to consider. When delegating tasks to AI, an important concept to keep in mind is the "Jagged Frontier" of AI performance. LLMs are famously stochastic black boxes that hide how they produce output. This leads to a phenomenon wherein the same LLM can perform extraordinarily well on a complex task and then perform as a toddler would on a much simpler one.

This jaggedness, along with other factors, has led to high AI adoption but low rates of complete workflow delegation. For example, by one estimate, even with a reported 85% AI adoption rate, only around 4% of work time is spent using AI. While the vast majority of large enterprise companies are heavily investing in AI, only around 1 in 4 AI agent projects are ever deployed in production (and we have to wonder how many of those result in a net productivity gain).

When delegating tasks to AI, carefully consider the capabilities of the tools you have, your experience with them, the risks involved, and adapt in the face of jagged outcomes. Don’t look at task delegation as an all-or-nothing exercise, but strongly consider partial delegation where it makes sense. 

Conclusion

The future of UX is not a tension between AI and humans. Nor is it going to look like a clean division of labor, at least for now. Task delegation is messy. However, the delegation framework I presented here will, hopefully, cut through some of the messiness and help us target those tasks that are genuinely helpful to delegate. It also serves as a robust framework that should scale with AI capabilities. 

The next time you open your research repository to start ramping up for a new study, don’t merely ask “can AI do this?” Ask, “to what degree should AI do this?”


Copyright © Triangle User Experience Professionals Association
