What Is LangChain?

Discover what LangChain is and how it enables the formulation of prompts that serve as inputs to LLMs.

What is GPT?

Generative pre-trained transformer (GPT) is a language model introduced in 2018 by OpenAI to demonstrate the power of performing natural language processing (NLP) tasks using the transformer architecture. The advancements in GPT models show the potential of pretraining a large language model on diverse datasets to generate coherent and context-aware text.

The introduction of GPT-3 in 2020 marked a turning point because the model exhibited remarkable performance on tasks such as language translation, question answering, and creative writing, helping people speed up their daily tasks.

GPT-3 offers two ways to interact with the OpenAI LLMs. A user can visit the browser interface and perform tasks in a conversational style, or, alternatively, a developer can use the APIs provided by OpenAI to integrate the GPT model into an LLM-powered application.

Interaction possibilities with OpenAI

Models like GPT-3 and GPT-3.5 lack the ability to interact with and extract information from external data resources. This gives rise to the following limitations:

  • Limited knowledge: All GPT models provide responses based on the pretrained data, and as of the time this course was written, ChatGPT’s knowledge is based on preexisting data up to January 2022. Therefore, it’s unaware of any events or updates after this cut-off date. For example, if we enquire about a recent sports event, like the result of the most recent world championship, the GPT models would not be able to answer.

  • No access to private repositories: The GPT models don’t have access to private data in a repository like Google Drive or on external websites. This limits the models’ access to information.

Note: OpenAI hasn’t neglected the functionality of interacting with external sources; instead, OpenAI has entrusted this capability to developers and provides the API as a foundational infrastructure to communicate with external information sources.

LangChain

LangChain is an open-source framework available in Python and JavaScript that facilitates the integration of LLMs to develop LLM-powered applications. It links the LLM and external data sources and services. This opens up new opportunities for developers to perform context-aware NLP tasks by connecting to any source of data or knowledge in real time without having to build everything from scratch.

Interaction with external data sources using LangChain

The open-source nature of LangChain encourages collaboration and innovation among the developer community, resulting in innovative applications and enhanced capabilities in a short time. The quick integration of different service providers is one outcome of this collaboration.

How does LangChain work?

LangChain offers a comprehensive toolkit to formalize the prompt engineering process, enabling the structured formulation of text prompts that serve as inputs to LLMs. A prompt consists of a set of instructions by a user to guide the model in generating the desired response. It can be a question, a command, or any specific input to prompt a meaningful and contextually relevant output from the model. The precision and clarity of prompts play a crucial role in influencing the output generated by the LLM. LangChain provides predefined prompt templates for common operations, such as summarization, question answering, etc., to help developers streamline and standardize the input to the language model.
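As an illustrative sketch of the idea (plain Python rather than LangChain's own `PromptTemplate` class), a prompt template is essentially a parameterized string whose placeholders are filled in at run time:

```python
# Conceptual sketch of a prompt template (plain Python, not LangChain's API).
# LangChain's prompt templates provide the same idea with validation and composition.

SUMMARIZE_TEMPLATE = (
    "Summarize the following text in {num_sentences} sentences:\n\n{text}"
)

def format_prompt(template: str, **variables) -> str:
    """Fill the template's placeholders with the given variables."""
    return template.format(**variables)

prompt = format_prompt(SUMMARIZE_TEMPLATE, num_sentences=2, text="LangChain is ...")
print(prompt)
```

The same template can be reused with different inputs, which is what makes templates useful for standardizing prompts across an application.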

The interaction with LangChain is centered around the concept of chains. Chains provide a mechanism to execute a sequence of calls to LLMs and tools through prompt templates. The tools refer to the functionalities that allow the LLMs to interact with the world, e.g., through an API call. This sequence of calls allows developers to harness the power of language models and efficiently integrate them into their applications.
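To make the idea concrete, here is a minimal sketch of what a chain does, with the model call stubbed out (plain Python, not LangChain's actual chain classes; the `llm` function is a hypothetical stand-in for a real LLM call):

```python
# Conceptual sketch of a chain: format a prompt, call a model, post-process.

def build_prompt(country: str) -> str:
    # Step 1: fill a prompt template with the user's input.
    return f"What is the capital of {country}? Answer with the city name only."

def llm(prompt: str) -> str:
    # Step 2: stubbed model call; a real chain would send the prompt to an LLM.
    return "  Paris  "

def parse(raw: str) -> str:
    # Step 3: post-process the raw model output, e.g., trim whitespace.
    return raw.strip()

def run_chain(country: str) -> str:
    """Execute the sequence of calls: prompt -> model -> parser."""
    return parse(llm(build_prompt(country)))

print(run_chain("France"))  # Paris (from the stubbed model)
```

Each step's output feeds the next step's input, which is the core mechanism a chain formalizes.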

A simple chain in LangChain

LangChain also provides output parsers to structure the LLMs’ responses in an organized, specified format. LLMs return their output as plain text strings, and using parsers, we can extract information in a desired format, like JSON. This is especially helpful when building an application that requires not just raw text but results presented in a structured, readily usable form.

By employing output parsers, developers can enhance the usability of language model responses, ensuring that the information generated by LLMs is formatted in a way that aligns with the specific requirements of their applications. This facilitates a smoother integration of language models into various use cases because the structured output can be readily utilized for further processing or presentation within the application’s context.
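As a sketch of the idea (using Python's standard json module rather than LangChain's parser classes, with a hypothetical model response), extracting structured data from a plain-text response might look like this:

```python
import json

# Hypothetical plain-text response from an LLM that was asked to reply in JSON.
raw_response = '{"city": "Paris", "country": "France", "population_millions": 2.1}'

def parse_json_response(text: str) -> dict:
    """Convert the model's plain-text output into a Python dictionary."""
    return json.loads(text)

data = parse_json_response(raw_response)
print(data["city"])  # Paris
```

Once parsed, the fields can be passed directly to downstream code instead of being re-extracted from free text.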

Getting started with OpenAI API

In this course, we’ll mainly utilize OpenAI’s language model gpt-3.5-turbo. To do so, we need to obtain an OpenAI API key.


Let’s go through the following steps to fetch OpenAI’s API key:

  1. We’ll start by first visiting OpenAI’s website.

  2. Click the “API Keys” on the left navigation bar.

  3. Now, click the “+ Create new secret key” button to generate the new key.

  4. Remember to copy the key because you won’t be able to view this key again once you click the “Done” button.

  5. Save the key in the widget below to use it throughout the course by following the instructions below:

    1. Click the “Edit” button in the following widget.

    2. Enter your API key in the OpenAI_Key field.

    3. Click the “Save” button.

In the code below, we pass the OpenAI key when creating the client and then define a simple chat request to test the API key. Click the “Run” button to test the API key.

from openai import OpenAI

# create the client, passing the API key saved in the widget above
client = OpenAI(api_key="{{OpenAI_Key}}")

# send a simple chat request to verify that the key works
completion = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi, my name is Alex."}
  ]
)

# print the assistant's reply from the first choice
print(completion.choices[0].message)
Testing our configuration

Note: Please follow the steps in How to get API Key of GPT-3 if you want to create a new account on OpenAI.

We’ll get the following output if the key works:

ChatCompletionMessage(content='Hello Alex! How can I assist you today?', role='assistant',
function_call=None, tool_calls=None)
Expected output