LLMs In The Enterprise — Part 1: Intro

Storm Anning
4 min read · Nov 7, 2023


When a step-change technology like generative AI comes along, waves of opportunity flood the market to solve new and existing problems.

Through a series of notes, I hope to help builders understand the shape of this technology and how to effectively mold it to fill the needs of their customers and business. We’ll primarily focus on large language models (LLMs), as I believe their language generation capabilities will have the most significant near-term impact on the enterprise compared to other modalities, such as image, video, and audio.

What’s New?

AI has been around for a long time, so what makes large language models a ‘step-change’ technology? The biggest change, as their name suggests, comes from the sheer size of these models and their training corpora.

In the past, models were primarily built to approximate functions for use-case-specific problems in areas like classification or recommendation. They performed well for individual use-cases but weren’t particularly generalisable or transferable: a model built to classify movie genres couldn’t easily be reused to classify customer sentiment. LLMs, on the other hand, approximate the much broader function of language in general. Given a piece of text, they can predict, token by token, a reasonable continuation of what should come next. This makes them highly transferable to a broad range of existing language problems, and opens up a host of new language capabilities for machines. Teams can now reuse the same foundation model to accelerate the development of both of these use-cases, sometimes through prompt tweaks alone. That said, there are some caveats, which I’ll address in part three.
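To make the generation loop concrete, here is a deliberately toy sketch. Real LLMs learn vastly richer statistics with neural networks trained on enormous corpora; this bigram model (a hypothetical stand-in, not how any production model works) only illustrates the shape of the loop: predict the next token, append it, repeat.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which token follows which in a tiny
# corpus, then greedily extend a prompt one token at a time. The loop's
# shape (predict, append, repeat) mirrors LLM generation; the statistics
# are nowhere near comparable.
corpus = "the model predicts the next token and the next token after that".split()

follow_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts[current][nxt] += 1

def generate(prompt: str, max_tokens: int = 5) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = follow_counts.get(tokens[-1])
        if not candidates:
            break  # this token was never seen with a continuation
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("after"))  # → "after that"
print(generate("the"))
```

Swapping the counting step for a trained neural network, and greedy selection for temperature-based sampling, gets you conceptually (if not practically) from this sketch to an LLM.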

For a much deeper understanding of how these models work, I highly recommend this short article by Stephen Wolfram.

So What?

We’re still in the early stages of this technology, and many challenges lie ahead, but I believe LLMs will have a significant impact on the enterprise in the following key areas.

The Path To Ubiquitous AI

The non-transferability of existing models meant teams had little to build on when they encountered new language problems. Each new use-case required significant resources, training data, and expertise to develop, restricting teams to a limited set of use-cases where the ROI made sense. LLMs change this. Their transferability to a wide range of language problems dramatically lowers the barriers to entry, democratising the application of AI to a long tail of problems. Over time, breakthroughs in costs and capabilities may lower these barriers further, opening a path to ubiquitous AI.

Redefining The Customer Experience Through Language Interfaces

Interfaces of the past have always been fairly rigid. But the ability to model the underlying semantics of language promises to unlock much more powerful and flexible natural language interfaces. Users can simply tell us what they want, adding a new level of declarative flexibility to how we interface with systems and tasks. This can lower accessibility and proficiency barriers, boost productivity, and redefine the customer experience.

Redefining How We Interact With Information

LLMs can redefine how we create, process, and interact with information and data, ushering in new levels of communication, support, and decision-making. The abundance of information available and the differences in people’s backgrounds, expertise, and intents often hinder the efficient access and transfer of information across businesses. LLMs enable a shift away from these rigid, one-size-fits-all forms of information, to fluid and accessible forms that can adapt to the user’s context and needs.

Agents

Although the space is immature, builders have been experimenting with giving LLMs agency by running them in a loop and letting them act on the world through APIs. Users specify a high-level task, and the LLM attempts to decompose and automate it with minimal human intervention. The hope is to one day realise agents that can fully automate many of the tasks restricted to humans today. There are still many limitations to overcome, however, and although LLMs can sometimes demonstrate quasi-cognitive functions such as reasoning and understanding, we’ve yet to see good examples of enterprise agents productionised in the wild.
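The control flow behind that loop is simple to sketch. In the toy below, `fake_llm` is a hard-coded stand-in for a real model call, and the two tools are hypothetical examples; everything here is illustrative scaffolding, not a real framework’s API.

```python
# Minimal sketch of the agent loop: ask the "model" for an action, execute
# the matching tool, feed the observation back, and repeat until it says
# it's done (or a step budget runs out — a common safety valve).

# Hypothetical tools the agent may use to act on the world.
TOOLS = {
    "search": lambda query: f"results for '{query}'",
    "send_email": lambda body: f"sent: {body}",
}

def fake_llm(task, history):
    # Scripted stand-in policy: search first, then email, then finish.
    if not history:
        return {"action": "search", "input": task}
    if len(history) == 1:
        return {"action": "send_email", "input": history[-1]}
    return {"action": "finish", "input": "task complete"}

def run_agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        decision = fake_llm(task, history)
        if decision["action"] == "finish":
            break
        tool = TOOLS[decision["action"]]
        history.append(tool(decision["input"]))  # observe the tool's result
    return history

print(run_agent("find q3 revenue"))
```

A real agent replaces `fake_llm` with a model API call each turn, which is precisely where the current limitations bite: one flawed decision early in the loop compounds through every later step.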

Example Language-Problems

Below are some high-level examples of use-cases that are a good fit for LLMs:

Transforming Information

  • Synthesis: Merging and distilling information from various sources — [Summarization, Aggregation, Key Points].
  • Translation: Translating between languages or formats — [English→French, XML→JSON, Python→JavaScript].
  • Adjustment & Paraphrasing: Rewriting and adjusting information — [Length/Brevity, Structure, Tone, Formality, Grammar, Personalizing].

Analyzing Information

  • Classification: Classifying information — [Sentiment, Categorization, Labeling, Part-Of-Speech-Tagging].
  • Extraction: Extracting data/entities from information sources — [Entities, Attributes, Metadata (e.g. dates), Unstructured Parsing].
  • Anomaly & Pattern Detection: Identifying anomalies or patterns in information — [Fraud, Inconsistencies].
  • Domain-Specific Analysis: Analyzing information in the context of a particular domain — [Legal, Financial, Medical].
  • Ad-Hoc Analysis: Running ad-hoc analysis with use-case specific prompts/questions — [Proofreading, Gap Analysis, Strengths/Weaknesses, Steelmanning].

Interrogating Information

  • Single & Multi-Turn Q&A: Interrogating existing information sources in a conversational & contextual manner — [Support, Marketing, Sales, Tutors, Co-Pilots, ELI5].

Research & Suggestions

  • Ideation: Generating diverse perspectives, reframings, connections, and ideations — [Recommendations, Diagnoses, Designs].
  • Problem Solving: Augmenting problem solving tasks by breaking them down and ideating/evaluating solutions — [Task Decomposition, Solution Ideation, Quasi-Reasoning Engines].

Authoring

  • Augmented Generation: Helping authors write faster by generating content from context — [Code, SQL, Documents, Posts, Chats, Emails].

Natural Language Interfaces

  • Allowing users to declaratively ‘tell’ computers what they want via natural language — [Intent Capturing/Mapping, UX/Interface Abstractions].

By chaining these applications together, teams can solve a plethora of language and information problems across their products and businesses.
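A chained pipeline might compose summarisation, classification, and authoring into a single customer-support flow. In this sketch each step is a plain function standing in for one LLM call (the function names and heuristics are invented for illustration), so the chaining pattern itself is visible without any model in the loop.

```python
# Each function below stands in for one LLM call; chaining them turns
# three individual use-cases into one end-to-end support workflow.

def summarise(ticket: str) -> str:
    # Stand-in for a synthesis call: keep only the first sentence.
    return ticket.split(".")[0]

def classify_sentiment(ticket: str) -> str:
    # Stand-in for a classification call: crude keyword heuristic.
    return "negative" if "refund" in ticket.lower() else "positive"

def draft_reply(summary: str, sentiment: str) -> str:
    # Stand-in for an augmented-generation call.
    opener = "Sorry to hear that." if sentiment == "negative" else "Thanks!"
    return f"{opener} Re: {summary}"

def pipeline(ticket: str) -> str:
    summary = summarise(ticket)
    sentiment = classify_sentiment(ticket)
    return draft_reply(summary, sentiment)

print(pipeline("I want a refund. The app crashed twice today."))
# → "Sorry to hear that. Re: I want a refund"
```

With real model calls substituted in, each stage can reuse the same foundation model with a different prompt, which is what makes this kind of composition cheap to build.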
