LLMs In The Enterprise — Part 2: How Will LLMs Impact Businesses & Their Customers?
- Part 1: Intro
- Part 2: How Will LLMs Impact Businesses & Their Customers?
- Part 3: The Limitations Of LLMs
- Part 4: Identifying LLM Opportunities
- Part 5: Executing On LLM Opportunities
How Will LLMs Change The User Experience?
A good user experience (UX) helps users effectively achieve outcomes with speed and delight. But building a great UX, especially in the enterprise world, can be hard.
Navigating Complexity With In-Product Assistants
The complexity of enterprise software and the domains it embodies creates an arduous path to proficiency for new users. The fortunate few have documentation, NUXs, books, videos, and dedicated support networks to help them on this journey. But the costs of providing these things are high. Support isn’t always available, and the inefficiencies of transferring context and finding what you need create annoying delays.
Integrating in-product LLM-enabled assistants that are always available and have a complete history of our users’ interactions and preferences can redefine these experiences. Individualized in-product tutors can help users master products and domains by illuminating guided and conversational learning paths throughout the product experience. When users need help, context-aware assistants can provide proactive support to unblock them. And in-product co-pilots & concierges can help users achieve outcomes every step of the way by proactively providing suggestions and assisting with end-to-end tasks.
Abstracting Complexity Through Natural Language Interfaces
Many tasks require a significant amount of expertise and intricate clicking, dragging, and typing to complete. This is especially true for products with vast and complex feature flows, or systems that allow for extensive customization or fine-grained control. The ability to effectively capture intent and generate code and configurations from loosely expressed natural language provides an additional layer of abstraction and declarative flexibility to these complexities. Users can now specify ‘what’ instead of ‘how’.
For laypersons in particular, the abstraction of intricate product steps, formulas, query languages, and configurations can significantly reduce the barriers to entry for a variety of previously restricted tasks. And for experts, the ability to augment and automate these tasks accelerates productivity and allows many of them to be managed asynchronously — freeing up their time and attention. These capabilities will become especially powerful as agents improve: users will be able to establish a goal and be involved only during key transactions.
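The ‘what, not how’ pattern above can be sketched as a thin layer between a natural-language request and a declarative config the product executes. The `call_llm` stub, the prompt, and the config schema below are illustrative assumptions — a real system would call a provider API and prompt for JSON-only output — but the validate-before-execute shape is the point:

```python
import json

# Hypothetical stand-in for a real LLM API call. A canned response is
# returned here so the sketch runs without an API key or provider SDK.
def call_llm(prompt: str) -> str:
    return json.dumps({"action": "filter", "field": "status", "value": "overdue"})

# The product only executes a small, known set of declarative actions.
ALLOWED_ACTIONS = {"filter", "sort", "group"}

def intent_to_config(request: str) -> dict:
    """Turn a loosely expressed request ('show me overdue invoices')
    into a declarative config the product can execute."""
    raw = call_llm(f"Convert to a JSON query config: {request}")
    config = json.loads(raw)
    # Validate the generated 'what' before the system performs the 'how'.
    if config.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"Unsupported action: {config.get('action')}")
    return config

print(intent_to_config("show me overdue invoices"))
```

Keeping the generated output declarative and validated means the model never acts directly on the system; it only proposes configs the product already knows how to run.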
Smarter, Simpler, More Personalized Products
The same problems we observe with static, one-size-fits-all information also manifest in user experiences. Builders must cater to a broad spectrum of users with different competencies, preferences, and needs, which often leads to a compromised experience. In complex enterprise UXs especially, these tensions leave novices overwhelmed by complexity or experts frustrated by simplifications. As functionality grows, navigation and click-depth expand, real estate gets tight, and making the right trade-offs gets hard.
LLMs provide builders with new tool-kits to make their products smarter, simpler, and more personalized. The barriers to proficiency can be significantly lowered, to the point that systems previously operated by trained employees will be more accessible to end-users, eliminating costs and delays.
The ubiquity of smart suggestions, AI integrations, and agents will incrementally reduce cognitive loads, allowing users to spend more time reviewing actions than making them.
The ability to invoke job-to-be-done workflows via natural language can break the rigidity of existing UIs and product boundaries — allowing users to leverage products in powerful new ways. Especially in connected internal ecosystems, users will be able to seamlessly move across product boundaries through a single NLI, removing the slow context switches and carry-overs of today.
The ability to better capture users’ needs and generate individualized content around them will lead to more chauffeured, personalized experiences. Content, whether for advertisements, support, or product experiences, can be generated in the tone, style, and language that resonates most with each user. And modalities such as audio and vision can help individuals with various disabilities experience content in a more equitable way, whilst also driving significant productivity wins over conventional inputs.
By applying these technologies, product builders can lower the barriers to entry, enhance accessibility, boost productivity, and redefine the end-to-end customer experience.
GenAI UX Limitations & Guidance
Several challenges and limitations hinder the broad realization of these UX visions. The next note in this series will discuss these at length, alongside the techniques used to reduce them. We therefore recommend builders take a defensive approach to applying these technologies to their products, keeping their limitations in mind and designing around them.
High-Level UX Guidance:
- Defensive UX — Design with the limitations of LLMs in mind. Latency and hallucinations can be particularly problematic. See notes three and five for more details.
- Provide Transparency — Make it clear to users that they are engaging with AI and set the right expectations to build trust.
- Show Your Work — Build confidence by helping users understand why the system generated a given response (e.g. through citations). And for use cases where multiple LLM calls are chained together, providing visibility into each step can help reduce cognitive load and perceived latency, and gives users an opportunity to intervene.
- Provide Escape Hatches — Make AI experiences easy to dismiss and allow users to choose the non-AI path. Many products will need to support a spectrum of engagement models based on their users’ appetite for assistance.
- Balance Flexibility With Constraint — Open-ended natural language interfaces provide users with a great deal of power and flexibility, but they also add a lot of complexity for end-users and product builders. Constraining inputs and product flows can make these experiences more intuitive and easier to develop.
- Typing Versus Clicking — Don’t blindly force language interfaces onto users, especially in cases where typing is less effective or where users prefer granular control.
- Provide Input Suggestions — Prompt suggestions are a great way to make your natural language experiences more intuitive and convenient.
- Make It Easy To Review/Tweak — As AI suggestions become more prevalent, builders will need to prioritize swift and effective reviewing & editing experiences.
- Feedback & Evaluation Loops — Feedback and evaluation loops will be critical to the success of your experiences. Collect explicit and implicit feedback to drive and evaluate improvements.
- Efficient Labeling/Feedback — To drive improvements cost-efficiently, teams should integrate these feedback loops in a way that does not significantly degrade their users’ current productivity.
- Baked In — It’s important that AI integrations feel woven into the fabric of your product. Instead of bolting chat-UXs into the corners, consider less intrusive and more engaging options that are element & context aware.
- Provide Clear Context Indicators & Selectors — Make it easy to set and understand the current context to remove confusion. This is especially important in experiences with multiple elements, turns, or models. Controlling context should feel intuitive and seamless.
- Correctness & Productivity — Teams should focus on use-cases where the costs of inaccurate or unsafe responses are low, avoiding slowing the user down or putting the business at risk.
- Costs — We’re hopeful these will drop. But for now, teams should still be mindful of the API costs.
- Connected Ecosystem — Longer term, teams should consider how these technologies might dissolve existing product boundaries. This is especially interesting for Intern.
- Keep It User & Problem Centered — Ultimately, we’re designing solutions to solve real user problems. Avoid applying GenAI as a solution in search of a problem.
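The ‘Balance Flexibility With Constraint’ and ‘Provide Escape Hatches’ guidance above can be combined in one small routing pattern: free-text input is mapped onto a fixed set of supported intents, and anything unrecognized falls back to the conventional, non-AI path. The intents and keywords below are illustrative assumptions, not a real product’s taxonomy:

```python
# Illustrative intent taxonomy: a small, supported set of product actions
# keyed by trigger keywords. A real system might use an LLM classifier here.
SUPPORTED_INTENTS = {
    "create_report": ["report", "summary", "summarize"],
    "export_data": ["export", "download", "csv"],
}

def route_intent(user_text: str) -> str:
    text = user_text.lower()
    for intent, keywords in SUPPORTED_INTENTS.items():
        if any(k in text for k in keywords):
            return intent
    # Escape hatch: fall back to the familiar non-AI path rather than guess.
    return "show_manual_menu"

print(route_intent("Can you export this as CSV?"))  # export_data
print(route_intent("Change my password"))           # show_manual_menu
```

Constraining the output space this way makes the experience predictable to test and develop, while the fallback keeps users who dislike or distrust the AI path fully served.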
Generative AI promises to unlock new levels of ‘don’t make me think’. As the technologies improve and become more ubiquitous, they’ll extend into ‘don’t make me search, ask, wait, click, or type’, enabling new levels of productivity and delight for our customers.
How Will LLMs Change Our Business?
Revolutionising Our Interactions With Information
The speed and efficacy with which we assimilate the domains around us, and align with our customers and colleagues, significantly impacts the outcomes we drive. Unfortunately, the scale, distribution, and complexity of many businesses make this challenging, leading to delays, misalignments, and an over-reliance on tribal knowledge.
Regular documentation and asynchronous communication can help, but authoring and curating this information takes time and effort. And subsequently finding and extracting the pertinent pieces you need, across vast and fragmented sources, can be tedious.
LLMs can redefine the ways we interact with information. Authoring assistants can help us produce better content faster by providing feedback, reforming passages, and distilling content from unrefined sources. Information assistants can help us curate, aggregate, and interrogate vast sources of information and data through a highly contextual and conversational interface that acts as a personal interpreter, ideator, and reasoning engine, helping us comprehend the knowledge we need. These assistants elevate the efficiency with which ideas and information can be transferred between individuals, leading to a more aligned, agile, and synergetic business that can make better decisions faster.
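At the core of such an information assistant is retrieval: finding the passage most relevant to a question and answering with a citation back to its source. The sketch below uses a toy corpus and naive word-overlap scoring purely for illustration — production systems would use embeddings, a vector index, and an LLM to synthesize the answer — but the retrieve-then-cite shape is the same:

```python
# Toy internal knowledge base (illustrative content, not real policy docs).
CORPUS = {
    "onboarding.md": "New hires must complete security training in week one.",
    "expenses.md": "Travel expenses are reimbursed within 30 days of filing.",
}

def retrieve(question: str) -> tuple[str, str]:
    """Return (source_id, passage) with the highest naive word overlap.
    Stands in for embedding similarity search in a real system."""
    q_words = set(question.lower().split())
    return max(
        CORPUS.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
    )

source, passage = retrieve("When are travel expenses reimbursed?")
print(f"{passage} [source: {source}]")
```

Carrying the source identifier through to the answer is what enables the ‘Show Your Work’ guidance above: users can verify the claim instead of trusting a bare generation.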
System Augmentations & Automations
The gradual democratization of AI will lead to a proliferation of system augmentations, and then automations, across a long-tail of problems, following the six levels of automation outlined for driverless cars. Tasks will initially be augmented to help employees make better decisions through generated insights and suggestions. Feedback loops will drive incremental improvements, gradually pushing us into supervisory roles in which we select between generated options and intervene when necessary. Longer-term, if and when human levels of performance are sustained, systems may run with zero human supervision or interaction, allowing us to move into higher-level auditing and orchestration roles.
The timeline for these progressions varies from task to task and largely depends on the criticality of correctness. But as the capabilities of these technologies improve and integration costs decrease, we expect to see a surge of AI integrations across our business that can increase productivity, revenue, and cost-efficiency and reduce our exposure to risk.
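One concrete way to stage this progression is to gate each generated action on model confidence, so a task starts as a suggestion and only auto-applies once a high bar is met. The thresholds and route names below are illustrative assumptions, and how confidence is estimated is itself task-specific:

```python
# Illustrative thresholds for graduated autonomy; real values would be
# calibrated per task against the criticality of correctness.
AUTO_APPLY = 0.98  # sustained near-human performance: act, audit later
SUGGEST = 0.75     # good enough to propose: human selects or intervenes

def route_action(confidence: float) -> str:
    """Map a model confidence score to a level of autonomy."""
    if confidence >= AUTO_APPLY:
        return "auto_apply"        # human moves to an auditing role
    if confidence >= SUGGEST:
        return "suggest_to_human"  # supervisory role: pick or reject
    return "human_only"            # discard output; employee decides

print(route_action(0.80))
```

Logging which routes humans accept or override then becomes the feedback loop that justifies raising (or lowering) a task’s autonomy level over time.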
Builder Efficiency
LLMs can also be leveraged throughout the product-development cycle to accelerate delivery timelines, reduce costs, and lower proficiency barriers. A host of role-specific assistants across PM, PD, DS, and Eng can augment the end-to-end process of gathering and scoping customer needs, and generating designs and code to meet them. LLM-powered no-code solutions in particular can help non-technical users satisfy business needs without specialized support.
Externally, as the costs and expertise of meeting market demands drop, we will hopefully see a rise in new solutions and market entrants coming to satisfy any underserved needs. These increases in supply and the subsequent competition that follows will hopefully drive prices down and foster new innovations, while pushing existing offerings to improve or specialize. Longer-term, many businesses should be looking to employ these technologies to stay ahead, while also re-examining their ‘build vs buy’ decisions as offerings improve.
Limitations & Guidance
The same limitations mentioned above apply here. Hallucinations and costs in particular will limit how liberally GenAI can be applied across business operations.
Some high-level guidance is provided below:
- Humans In The Loop — Hallucination risks mean most teams need to deploy experiences with humans in the loop (phases 2–3 of the autonomy spectrum). This can significantly de-risk the potential for negative outcomes while also providing an efficient feedback loop to improve systems over time. For now, think Co-Pilots NOT Agents.
- Build Lean & Around Limitations — These technologies inherently require a significant amount of trial and error. Build leanly and design with their limitations in mind to avoid sunk costs.
- Data Is King — All implementations are subject to the data foundations on which they’re built. Even the most powerful models and architectures cannot redeem poor training-sets or retrieval systems. Businesses must remember to invest in data, especially as this is likely their only moat in the market.
- Classical Predictive AI — It’s important to remember that GenAI is just another tool in the AI tool-kit. Many use-cases may achieve better performance through more traditional approaches.
- Experiment & Prepare — Several challenges must still be overcome before these technologies become an integral part of our business. But incumbents should be experimenting with them now so that they are ready to capitalize when the technologies improve. “This is the dumbest they’ll ever be.” — Sam Altman.
- Build Smart & Avoid Over Investing — The technology, tooling, and infra is moving extremely quickly. Teams should be pragmatic with their investments and build in a modular fashion to avoid wasted work.