
Introduction

LangChain is a Python framework for building applications powered by large language models (LLMs). It provides components and tools for:

  • Interfacing with LLMs - Easily load models like GPT-3 and InstructGPT using the Model I/O module.

  • Connecting data sources - Integrate external datasets into your application using the Data Connection module.

  • Constructing workflows - Chain together sequences of LLM calls, data lookups, and logic using the Chains module. The Chains module allows you to construct prompts, pass user input to LLMs, and sequence calls.

  • Letting chains choose tools - Create goal-driven chains that dynamically choose tools using the Agents module. The Agents module allows chains to interact with their environment and accomplish high-level tasks.

  • Persisting state - Maintain conversation context across interactions using the Memory module.

  • Monitoring execution - Log and analyze chain execution with Callbacks.

With these modular components, LangChain makes it easy to build complex, data-driven LLM applications like chatbots, semantic search engines, and more.

Getting Started

To start using LangChain:

  1. Install the Python package:
pip install langchain

  2. Follow the Quickstart Guide to build your first LangChain app.

  3. Check out example use cases like chatbots, question answering, and data analysis.

  4. Join the Discord to connect with other LangChain users.

Key Modules

LangChain provides the following modules:

Model I/O

Interface with language models like GPT-3, BLOOM, and InstructGPT. The Model I/O module allows you to load LLMs and call them with methods like predict and predict_messages.

from langchain import OpenAI

# Requires the openai package and an OPENAI_API_KEY environment variable
llm = OpenAI()
llm.predict("Hello world!")
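
The Model I/O module also covers chat models, which are called with predict_messages; a minimal sketch, assuming the openai package and an OPENAI_API_KEY are available:

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Chat models take a list of messages rather than a single string
chat = ChatOpenAI()
chat.predict_messages([HumanMessage(content="Translate 'hello world' to French.")])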

Data Connection

Connect to data sources like SQL, Elasticsearch, and CSVs.

from langchain import SQLDatabase

# Connect over any SQLAlchemy-compatible database URI
db = SQLDatabase.from_uri("sqlite:///my_database.db")
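
As a usage sketch, queries can be run directly against the connected database (the table name here is illustrative):

# Run a raw SQL query against the database
db.run("SELECT COUNT(*) FROM users;")

For the CSV case mentioned above, document loaders cover flat files; the file path below is a placeholder:

from langchain.document_loaders import CSVLoader

# Load each row of the CSV as a document
docs = CSVLoader(file_path="my_data.csv").load()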

Chains

Compose sequences of LLM prompts, data lookups, and logic. The Chains module lets you link calls to LLMs, databases, and other components into a single, reusable workflow.

from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

# Chain an LLM together with a SQL database lookup
db = SQLDatabase.from_uri("sqlite:///my_database.db")
chain = SQLDatabaseChain.from_llm(llm=OpenAI(), db=db)
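
A usage sketch, assuming the connected database can answer the question (the question text is illustrative):

# The chain writes and executes the SQL needed to answer the question
chain.run("How many users signed up last month?")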

Agents

Create goal-driven chains that dynamically choose tools based on high-level directives. The Agents module allows chains to interact with their environment.

from langchain import OpenAI, LLMChain, PromptTemplate
from langchain.agents import initialize_agent, Tool, AgentType

llm = OpenAI()
# Wrap a chain as a tool the agent can choose to call
summarize_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Summarize: {text}"))
tools = [Tool(name="summarize", func=summarize_chain.run, description="Summarizes a piece of text")]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
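
A usage sketch; the agent reasons about which tool to call for a given task (the input text is illustrative):

agent.run("Summarize this text: LangChain is a framework for building LLM applications.")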

Memory

Persist state, such as conversation context, across chain runs.

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
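
A minimal sketch of wiring the memory into a conversation, assuming an OpenAI key is configured:

from langchain import OpenAI, ConversationChain

# The memory feeds earlier turns back into each new prompt
conversation = ConversationChain(llm=OpenAI(), memory=memory)
conversation.predict(input="Hi, I'm Alice.")
conversation.predict(input="What is my name?")  # answered from the stored context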

Callbacks

Log, monitor, and analyze chain execution.

from langchain.callbacks.manager import CallbackManager
from langchain.callbacks import StdOutCallbackHandler

# A manager wrapping one or more handlers that receive chain execution events
callbacks = CallbackManager([StdOutCallbackHandler()])
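
A usage sketch, assuming an OpenAI key is configured; handlers can also be attached directly to a chain via its callbacks argument:

from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.callbacks import StdOutCallbackHandler

chain = LLMChain(
    llm=OpenAI(),
    prompt=PromptTemplate.from_template("Tell me a joke about {topic}"),
    callbacks=[StdOutCallbackHandler()],  # log each step of the run to stdout
)
chain.run(topic="bears")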

Resources