The Building Blocks Behind AI
At WLCM, we’re full steam ahead on building AI apps for the OpenAI marketplace. This piece is Part 2 in a series helping entrepreneurs understand this new frontier. You can see Part 1 here.
AI is not a tool or a piece of software or anything tangible. “Artificial Intelligence” merely refers to the concept of creating human-like intelligence.
Over the decades (centuries, truthfully), what constitutes human intelligence has been a set of moving goalposts. Since computer scientist Alan Turing developed the Turing Test in 1950, the standard has been that artificial intelligence is achieved when you can’t tell whether you’re interacting with a human or a machine. Though tons of smart people have poked holes in the Turing Test’s efficacy, the concept remains uncannily relevant.
To pursue AI is to reflect on what makes us human. But enough about what it is. Let’s talk about how it works.
The anatomy of a chatbot
At its core, ChatGPT is a chatbot. When you think about how seldom you probably interact with chatbots, it might feel somewhat like a niche use case. But language sits at the core of intelligence, of who we are. It’s how anything gets done.
We refer to AI’s language-based capabilities and use cases as “Natural Language Processing” (NLP). It allows humans to express their needs in imperfect language, and it allows the machine to understand and respond in organic, unscripted language.
Here’s how it happens.
Tokenization
When you input a command, question, or prompt into a generative AI chatbot, it breaks your input down into “tokens”: individual words, subwords, or characters. It then maps these tokens against the data set it was trained on, which was itself tokenized as part of the language model’s training (more on language models in a bit).
The result: it understands each word in isolation, in relation to the full prompt, and in relation to the full data set. The chatbot can observe contextual relationships among words, which allows the program to analyze sentiment, translate between languages, and summarize the main takeaways of a body of text.
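To make this concrete, here’s a minimal sketch using tiktoken, OpenAI’s open-source tokenizer library (the encoding name and sample prompt here are just illustrative):

```python
# pip install tiktoken
import tiktoken

# Load a tokenizer used by recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Mix me a cocktail inspired by the WLCM brand."

# The prompt becomes a list of integer token IDs...
token_ids = enc.encode(prompt)
print(token_ids)

# ...and each ID maps back to a word, subword, or character.
print([enc.decode([t]) for t in token_ids])
```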
But not all tokenization mechanisms are created equal. It takes deep technical skill and planning to properly tokenize a large data set.
Artificial Neural Networks & Large Language Models
Artificial neural networks are computer programs that mimic the way the human brain works.
A neural network has an input layer, an output layer, and many hidden layers in between that encode learned patterns: the network’s version of facts, images, experiences, and other reference material. Neurons in these middle layers communicate with one another in the decision-making process.
For example, if you see a picture of a dog, your brain pings that picture around among its neural networks to identify what kind it is, how big it is, and other dog-related details.
Artificial neural networks used by AI operate similarly, though the scale differs: the human brain has tens of billions of neurons, while artificial networks range from a few thousand units in simple models to the billions of parameters behind today’s largest systems.
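For intuition, here’s a toy sketch of that layered structure in Python, with randomly initialized weights standing in for everything training would normally learn:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy network: 4 inputs -> 8 hidden "neurons" -> 2 outputs.
W1 = rng.normal(size=(4, 8))   # input-to-hidden connections
W2 = rng.normal(size=(8, 2))   # hidden-to-output connections

def forward(x):
    hidden = np.tanh(x @ W1)   # hidden neurons fire based on weighted inputs
    return hidden @ W2         # the output layer combines that activity

x = rng.normal(size=4)         # a made-up input (say, features of a dog photo)
print(forward(x))              # two raw scores; training would make them meaningful
```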
Large Language Models (LLMs) are a type of neural network and the foundational technology behind ChatGPT. The reference material consists of a huge data set of textual examples fed into the model. The model uses statistical analysis to identify words, concepts, and contextual relationships between them.
For example, if you ask what band Paul, John, Ringo, and George were in together, the model uses the statistical patterns it learned during training to identify the contextual link between these names and inform the output: “The Beatles.”
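Under the hood, this is next-token prediction. Here’s a rough sketch using the Hugging Face transformers library with GPT-2, a small public model far weaker than what powers ChatGPT, so don’t count on a correct answer; the point is the mechanism:

```python
# pip install transformers torch
from transformers import pipeline

# GPT-2 is a small, public LLM -- illustrative only.
generator = pipeline("text-generation", model="gpt2")

prompt = "Paul, John, Ringo, and George were members of the band called"

# Greedy decoding: at each step the model appends the statistically
# most likely next token, based on patterns in its training data.
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```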
Until very recently, we had neither 1) the amount of data necessary to train large language models nor 2) a type of model that could capture connections at the breadth and depth that current transformer models can.
Recently, I fed some information about WLCM to a generative AI tool and asked it to create a cocktail recipe inspired by the WLCM brand. The results were better than you might expect. This wouldn’t have been possible just a few years ago.
An LLM is only as good as the data it’s trained on, and building one is no small undertaking.
Knowledge Graphs
Remember in middle school when your English teacher had you make a mind map to brainstorm around a particular topic? Knowledge graphs are kind of like that.
Knowledge graphs are a framework for describing things and mapping relationships between them. Unlike the statistical associations an LLM learns on its own, the relationships among entities in a knowledge graph are defined by the humans who build them.
Basically, you build the truth into a knowledge graph, whereas an LLM infers truth from statistical modeling of its data set.
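At their simplest, they’re a set of subject-relation-object triples that humans assert as true. A minimal sketch in plain Python (the entities and relations are invented for illustration):

```python
# Each fact is an explicit (subject, relation, object) triple.
graph = {
    ("Paul McCartney", "member_of", "The Beatles"),
    ("John Lennon",    "member_of", "The Beatles"),
    ("The Beatles",    "formed_in", "Liverpool"),
}

def query(subject=None, relation=None, obj=None):
    """Return every triple matching the given pattern."""
    return [
        (s, r, o) for (s, r, o) in graph
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

# Who is in The Beatles? The answer is asserted, not statistically inferred.
print(query(relation="member_of", obj="The Beatles"))
```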
As the AI app race revs up, I’m betting big on knowledge graphs, for a few reasons.
Knowledge graphs are much more appropriate for industry-specific domains of knowledge.
Relationships are more firmly defined, which allows for a more reliable inference of new knowledge and better-informed decision-making (sketched below).
Because relationships are built, not assumed, knowledge graphs don’t have the traceability problems that LLMs do. It’s easier to understand the reasoning behind their outputs.
Knowledge graphs are easier to update with new information and practices.
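To make the inference point concrete, here’s a toy example (the facts and rule are invented): because relations are explicit, a simple hand-written rule can derive new knowledge, and every derived fact traces back to the triples that produced it.

```python
# Explicit facts asserted by humans.
facts = {
    ("aspirin",  "treats",     "headache"),
    ("headache", "symptom_of", "migraine"),
}

# A hand-written rule: if X treats S and S is a symptom of Y,
# then X may help with Y.
def infer(facts):
    derived = set()
    for (x, r1, s) in facts:
        for (s2, r2, y) in facts:
            if r1 == "treats" and r2 == "symptom_of" and s == s2:
                derived.add((x, "may_help_with", y))
    return derived

print(infer(facts))  # {('aspirin', 'may_help_with', 'migraine')}
```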
I sense that the true value of AI app development is the ability to implement expert, domain-specific knowledge that the user can access and trust.
At WLCM, this has been our primary realm of experimentation. And we’re excited about what we’ve accomplished in a short amount of time.
The power of LLMs + knowledge graphs
So does your app need an LLM or a knowledge graph?
Well, OpenAI’s ChatGPT runs on a large language model, and that is what OpenAI is selling: the ability to build your app on top of that LLM.
But LLMs alone aren’t perfect, and neither are knowledge graphs. Used together, each fills in the gaps of the other in a way that is quite uncanny.
Because they’re built by hand, knowledge graphs may have gaps in coverage. LLMs, trained on far larger data sets, can help fill in those gaps.
While LLMs tap into the broad tapestry of general human knowledge, they don’t go very deep. LLMs can provide the breadth of knowledge, while knowledge graphs can provide the depth.
Knowledge graphs have trouble understanding natural language and unstructured text. LLMs can interpret linguistic nuances and semantics that knowledge graphs struggle with.
LLMs famously hallucinate inaccurate outputs. Knowledge graphs provide guardrails against these hallucinations (see the sketch below), which ultimately makes the application much more trustworthy.
LLMs can be a black box, opening companies up to compliance, privacy, and copyright liability. Knowledge graphs provide a better infrastructure for staying on the right side of the law (or of ethics, rather, while crucial legislation remains TBD).
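Here’s a rough sketch of that pairing, with a stub call_llm() standing in for whatever LLM API you’d actually use (the graph contents are invented): the knowledge graph supplies verified facts, and the LLM turns them into a natural-language answer.

```python
# Verified, human-curated facts from a hypothetical knowledge graph.
graph = {
    ("Product X", "max_dosage", "200mg per day"),
    ("Product X", "interacts_with", "Product Y"),
}

def facts_about(entity):
    """Pull every verified fact about an entity from the graph."""
    return [f"{s} {r.replace('_', ' ')} {o}" for (s, r, o) in graph if s == entity]

def call_llm(prompt):
    """Stub standing in for a real LLM API call; swap in your provider's SDK."""
    return f"[LLM would answer here, grounded in the prompt below]\n{prompt}"

def answer(question, entity):
    # Ground the prompt in graph facts so the LLM paraphrases verified
    # knowledge instead of improvising (hallucinating) an answer.
    context = "\n".join(facts_about(entity))
    prompt = (
        f"Answer using ONLY these verified facts:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

print(answer("How much of Product X can I take per day?", "Product X"))
```

The design choice worth noticing: the LLM never answers from its own memory here. It only rephrases facts the graph has vouched for.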
The future must be trustworthy
To date, apps have been tools — and they’ve been great at that. But even the best, most appropriate tool alone doesn’t bridge the gap between problem and solution. The latest table saw does not a woodworker make.
Surely you’ve had a situation in your life where you wish you could take everything you know about a subject and just drop it into someone else’s brain, or vice versa.
I see the evolution of individualized GPT apps trending toward that exact ability. The potential of having a financial expert, a healthcare expert, a life coach, a teacher, right there in your app is awe-inspiring to imagine.
Looking to stake your claim in OpenAI’s new marketplace? We’re ready to build it right with you. Tell us more about your project here.