In the world of AI product development, we often talk about user experience as if it were a layer we add on top of technical solutions — a coat of paint that makes the machinery underneath more palatable to users. This mental model, while comfortable, misses something fundamental about how AI systems actually work and how users experience them.

Here are some of the recurring challenges I’ve observed:

1. The unintended gatekeepers

One of the most subtle yet significant challenges in AI product development is how technical teams unintentionally become gatekeepers of user experience. It’s not by design or desire — it’s a structural consequence of how we’ve organized our teams and processes.

When an AI system fails to understand context or generates an unexpected response, we instinctively frame it as a technical problem. These issues then get worked through in technical meetings where UX professionals are rarely present, because nothing about them seems to touch the interface. In this way, technical teams become de facto UX designers, making crucial decisions about user experience without necessarily having the frameworks or context to optimize for UX outcomes.

This creates a troubling paradox: the people with the deepest understanding of user needs and behaviors are often absent when the decisions that most affect those users are made. The omission is structural, not malicious, but its impact is profound.

2. The hidden language of UX in AI

Think about the last time you interacted with an AI system. Perhaps you asked ChatGPT for help with a coding problem, or used GitHub Copilot to autocomplete a function. The interface might have been clean and intuitive, but your experience was shaped far more by decisions made deep within the system’s architecture than by the UI elements you interacted with.

When a model takes too long to respond, generates hallucinations, or misses crucial context, we often frame these as technical problems to be solved by engineers. But they’re fundamentally user experience problems. The distinction between technical and UX decisions in AI systems is largely artificial — a remnant of how we built traditional software.

3. The ripple effect of technical choices

Consider a seemingly technical decision like the size of the context window (the number of tokens the model can attend to at once). Engineers might discuss this in terms of computational efficiency and model performance metrics. But this choice directly determines how much context the AI can maintain in a conversation, which shapes the user's entire interaction pattern. Should users have to carefully manage their prompt length? Should they repeat important context periodically? These aren't just technical trade-offs — they're UX decisions being made implicitly during technical discussions.
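
To make this concrete, here is a minimal sketch in Python, not tied to any vendor's API, of trimming conversation history against a fixed token budget. The whitespace tokenizer and the budget are placeholder assumptions; the point is that the truncation strategy silently decides which context the user keeps.

```python
def count_tokens(text: str) -> int:
    # Placeholder tokenizer: real systems use a model-specific tokenizer.
    return len(text.split())

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for message in reversed(messages):
        cost = count_tokens(message)
        if used + cost > budget:
            break  # everything older than this point is silently dropped
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [
    "I'm allergic to penicillin.",            # crucial context...
    "What should I take for a sore throat?",
    "How long should I rest?",
]
print(trim_history(history, budget=12))  # the allergy falls out of scope
```

Whether the system drops the oldest messages, summarizes them, or asks the user to restate them is an interaction design choice, even though it lives entirely in backend code.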

Or take the choice of model temperature. What looks like a simple numerical parameter actually encodes crucial decisions about the balance between creativity and reliability in AI responses. This shapes user trust, interaction patterns, and the very nature of how people incorporate the AI into their workflows.
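
A toy illustration of the mechanism, assuming the standard temperature-scaled softmax sampling that most language models use:

```python
# Temperature rescales the model's output logits before sampling.
# Low temperature concentrates probability on the top token (reliable,
# repetitive); high temperature flattens the distribution (creative,
# less predictable).
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())
    exps = {tok: math.exp(l - max_l) for tok, l in scaled.items()}  # stable softmax
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"the": 2.0, "a": 1.0, "banana": -1.0}  # invented example values
print([sample(logits, 0.2) for _ in range(5)])  # almost always "the"
print([sample(logits, 2.0) for _ in range(5)])  # occasional surprises
```

A user never sees this parameter, yet it decides whether the product feels dependable or inventive.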

4. The testing paradox

One of the most challenging aspects of AI product development is how traditional UX testing methods break down. How do you conduct meaningful A/B testing when every response is probabilistic? How do you plan for edge cases in systems designed to handle the unknown?

This isn’t just a methodological challenge — it’s a fundamental shift in how we need to think about user experience validation. Traditional usability testing assumes deterministic system behavior, but AI systems are inherently probabilistic. This forces us to develop new approaches to understanding and validating user experience.
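
One plausible shape for such an approach is to sample the system repeatedly and make assertions about the distribution of outcomes rather than about any single response. In this sketch, call_model and meets_user_need are hypothetical stand-ins for a real model client and a task-specific success check.

```python
import random

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model client; here it just
    # returns a random canned answer so the sketch runs on its own.
    return random.choice(["helpful answer", "off-topic reply"])

def meets_user_need(response: str) -> bool:
    # Task-specific success criterion; in practice this might be a
    # rubric, a classifier, or human rating.
    return response == "helpful answer"

def pass_rate(prompt: str, n: int = 100) -> float:
    """Fraction of sampled responses that satisfy the user-facing criterion."""
    return sum(meets_user_need(call_model(prompt)) for _ in range(n)) / n

# An A/B test becomes a statistical comparison of two rates,
# not a pixel-for-pixel diff of two screens.
print(f"A: {pass_rate('variant A prompt'):.0%}  B: {pass_rate('variant B prompt'):.0%}")
```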

5. The data quality paradox

Perhaps nowhere is this intersection of technical and UX decisions more evident than in data quality. The data we use to train AI systems isn’t just a technical input — it’s the foundation of the entire user experience. Biases in training data don’t just create technical inaccuracies; they shape the AI’s understanding of user intent, its cultural awareness, and its ability to serve diverse user needs.

Yet data quality discussions often happen in technical silos, far removed from UX expertise. We treat data cleaning as a technical task, when it’s really about understanding and encoding human context, behavior, and needs. Every decision about what data to include, exclude, or weight differently is a decision about what kinds of user experiences we’re optimizing for.
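
As a small illustration, consider an innocuous-looking cleaning rule; the five-word threshold here is an invented example, not a recommendation:

```python
# Dropping "low-quality" short messages also drops the terse, mobile,
# or non-native speakers those messages came from.
def clean(examples: list[str], min_words: int = 5) -> list[str]:
    return [e for e in examples if len(e.split()) >= min_words]

raw = ["wifi broken", "thx", "My connection drops every evening around 8pm"]
print(clean(raw))  # the first two users just vanished from the training set
```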

6. The cultural and linguistic impact

What appears to be a neutral technical optimization often carries hidden cultural and linguistic implications. A choice that works perfectly for English might create friction for other languages. The way we handle tokenization, context windows, and response generation can have profound implications for different user groups.
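
The effect is easy to observe with a real tokenizer. The sketch below uses OpenAI's open-source tiktoken library (pip install tiktoken); exact counts depend on the tokenizer, but the pattern, in which non-English text consumes more of the context window per sentence, is common.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# The same question in three languages; token counts will differ
# substantially even though the meaning is identical.
samples = {
    "English": "How do I reset my password?",
    "Greek": "Πώς μπορώ να επαναφέρω τον κωδικό πρόσβασής μου;",
    "Japanese": "パスワードをリセットするにはどうすればよいですか？",
}
for language, text in samples.items():
    print(f"{language}: {len(enc.encode(text))} tokens")
```

A user writing in a token-expensive language effectively gets a smaller context window, and pays more per request on metered APIs, without anyone having decided that on purpose.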

This isn’t just about localization — it’s about fundamental architectural choices that affect how accessible and useful our AI systems are across different cultures and languages. These decisions, often made early in the development process, can be difficult to correct later.

7. The evolution of roles

This new reality is forcing us to rethink not just how we build AI products, but how we define our roles within product teams. The traditional roles of Product Designers, UX Writers, and UX Researchers need to evolve to include:

  • Deeper technical literacy to participate meaningfully in architectural discussions
  • Understanding of ML principles to anticipate UX implications
  • Ability to advocate for user needs in technical contexts
  • Skills in designing and evaluating probabilistic systems

The transformation won’t happen overnight. It requires both time for teams to recognize and adapt to these new realities, and a fundamental shift in how we understand the role of UX in AI development.

Breaking down the silos

The solution isn’t just to have UX designers sit in on technical meetings or to have engineers think more about user experience. It’s about fundamentally rethinking how we separate technical and UX decisions in AI development.

When a data scientist decides how to handle edge cases in training data, they’re making a UX decision. When an engineer tunes a model’s confidence thresholds, they’re designing user interaction patterns. When a product manager prioritizes certain model capabilities over others, they’re shaping the fundamental ways users will be able to interact with the system.
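
To see how directly one of these choices designs the interaction, consider a deliberately simplified sketch of a confidence threshold; the cutoff values and the escalation policy are illustrative assumptions, not any product's actual settings.

```python
def respond(answer: str, confidence: float) -> str:
    if confidence >= 0.9:
        return answer                                       # answer directly
    if confidence >= 0.6:
        return f"I think: {answer} (please double-check)"   # hedge visibly
    return "I'm not sure; routing you to a human agent."    # escalate

print(respond("Your order ships Tuesday.", 0.95))
print(respond("Your order ships Tuesday.", 0.70))
print(respond("Your order ships Tuesday.", 0.30))
```

Each band is a different user experience: direct answers build fluency, hedged answers build calibration, and escalation builds a safety net. Choosing the cutoffs is designing the product.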

Moving forward: A new approach

This perspective suggests several practical changes to how we build AI systems:

Expand UX disciplines

Each UX role needs to evolve beyond traditional boundaries:

  • UX Research: Expanding to inform data collection, model behavior specifications, and system architecture decisions. Researchers need new methodologies for testing probabilistic systems and evaluating AI-human interaction patterns.
  • UX Writing: Evolving to define complex conversational flows, shape AI personality and tone, and craft prompts that shape AI behavior. Writers become architects of interaction logic, developing frameworks for handling unexpected responses and maintaining context across non-linear conversations.
  • Product Design: Expanding from screen-based interactions to entire AI interaction systems. Designers need to think in terms of systems rather than screens, considering how technical choices will manifest in user interactions.

Reframe technical discussions

Technical reviews should explicitly consider the UX implications of architectural choices, treating them as fundamental to the decision-making process rather than secondary considerations. To support this, we need to:

  • Bridge the knowledge gap: UX professionals need enough technical understanding to participate meaningfully in architectural decisions, while engineers need to understand how their technical choices shape user experience.
  • Collaborate on data quality: Data cleaning and curation should be a collaborative process involving both technical and UX expertise, recognizing that data quality is as much about understanding user needs as it is about technical accuracy.
  • Rethink team structures: Traditional software development structures may not serve AI products well, and new organizational models may be needed to better support integrated decision-making.

The future of AI UX

As AI systems become more complex and more deeply embedded in our daily lives, the artificial boundary between technical and UX decisions will become increasingly untenable. The most successful AI products will be built by teams that understand this and structure their development process accordingly.

The challenge isn’t just to build AI systems that work well technically or that have good interfaces. It’s to create systems whose entire architecture — from data to models to interfaces — is aligned with user needs and experiences. This requires a fundamental shift in how we think about AI development, moving from a layered model of technical and UX decisions to an integrated approach that recognizes the UX implications of every decision we make.

The path forward requires both time and intention — time for teams to realize this need and adapt their processes, and a deliberate shift in our understanding of what UX means in the context of AI development. Only by embracing this more holistic view can we create AI systems that truly serve their users’ needs, rather than forcing users to adapt to technical limitations we’ve unknowingly encoded into their experience.
