
Introduction to LangChain | Framework to Build LLM Applications


LangChain is an open-source framework for developing LLM applications. Its greatest strength is that it supports these applications at every stage of the development lifecycle.

LLM applications are complex systems that require more than just calling a language model. They are developed through various stages, and in every stage LangChain provides efficient and effective tools to build these applications.

There are four key components of LangChain:

  • Chains
  • Agents
  • Memory
  • Tools

Let's take a deeper dive into each of these four components in the next few sections.

Chains

Chains compose modular components into reusable pipelines: developers put multiple LLM components together in a sequence to create complex applications such as chatbots, data extractors, and data-analysis workflows. Chains deliver several benefits:

  • Modularity
  • Composability
  • Reusability
  • Tool Integration
  • Readability
  • Maintainability
  • Productivity
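The idea of a chain can be sketched in a few lines of plain Python. This is a conceptual illustration only: the `Chain` class, `format_prompt`, and `fake_llm` below are hypothetical stand-ins, not LangChain's actual API.

```python
# Conceptual sketch of a chain: modular steps composed into one pipeline.
class Chain:
    """Runs a sequence of steps, feeding each step's output to the next."""

    def __init__(self, steps):
        self.steps = steps

    def run(self, text):
        for step in self.steps:
            text = step(text)
        return text

# Two toy "components": a prompt formatter and a stand-in for a model call.
def format_prompt(question):
    return f"Question: {question}\nAnswer:"

def fake_llm(prompt):
    return prompt.upper()  # pretend this calls a language model

qa_chain = Chain([format_prompt, fake_llm])
print(qa_chain.run("what is LangChain?"))
```

Because each step is just a callable with one input and one output, steps can be reordered, reused in other chains, or swapped out independently, which is where the modularity and composability benefits come from.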

Agents

Agents help in creating systems that interact dynamically with users and environments over time. An agent is an autonomous software entity capable of taking actions to accomplish goals and tasks. Both chains and agents extend LLMs: chains compose lower-level modules, while agents orchestrate chains.

Chains define reusable logic by sequencing components; an agent observes the environment, decides which chain to execute based on its observations, takes the chain's specified action, and repeats. In essence, an agent decides which action to take by using an LLM as its reasoning engine. The major benefits of agents are:

  • Goal oriented execution
  • Dynamic responses
  • Composition
  • Robustness
  • Statefulness
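The observe-decide-act loop described above can be sketched in plain Python. Everything here is illustrative: the `decide` function stands in for the LLM reasoning engine, and the two chains are hypothetical, not real LangChain tools.

```python
# Conceptual sketch of the agent loop: observe, decide which chain
# (action) to run, execute it, and repeat.
def calculator_chain(query):
    # Toy arithmetic evaluator; builtins disabled for safety.
    return str(eval(query, {"__builtins__": {}}))

def echo_chain(query):
    return f"You said: {query}"

def decide(observation):
    """Stand-in for LLM-based reasoning: pick a chain from the observation."""
    if any(ch.isdigit() for ch in observation):
        return calculator_chain
    return echo_chain

def agent(observations):
    results = []
    for observation in observations:        # observe
        chain = decide(observation)         # decide which chain to execute
        results.append(chain(observation))  # take the chain's action
    return results                          # repeat until input is exhausted

print(agent(["2 + 3", "hello"]))
```

A real agent replaces the hard-coded `decide` function with an LLM call, which is what makes its responses dynamic rather than rule-based.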

Memory

Memory refers to state that persists between executions of a chain or agent. It helps in building conversational and interactive applications: rather than treating each user input as an isolated prompt, chains can pass conversational memory to models, and agents can persist facts, relationships, and deductions. Several of the available memory options are:

  • ConversationBufferMemory: Stores the full message history.
  • ConversationBufferWindowMemory: Retains only the most recent messages.
  • ConversationKGMemory: Summarizes exchanges as a knowledge graph.
  • EntityMemory: Extracts and persists facts about entities mentioned in the conversation.
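The difference between the buffer and window variants can be sketched in plain Python. These classes are hypothetical illustrations of the concept, not LangChain's actual memory classes.

```python
# Conceptual sketch: buffer memory keeps the full history,
# window memory keeps only the last k messages.
class BufferMemory:
    def __init__(self):
        self.messages = []

    def add(self, role, text):
        self.messages.append((role, text))

    def context(self):
        return self.messages  # everything said so far

class WindowMemory(BufferMemory):
    def __init__(self, k):
        super().__init__()
        self.k = k

    def context(self):
        return self.messages[-self.k:]  # only the k most recent messages

buf, win = BufferMemory(), WindowMemory(k=2)
for mem in (buf, win):
    mem.add("user", "hi")
    mem.add("ai", "hello!")
    mem.add("user", "what is LangChain?")

print(len(buf.context()))  # full history
print(len(win.context()))  # last k only
```

Windowed memory trades completeness for a bounded prompt size, which matters because every remembered message is re-sent to the model on each turn.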

Tools

Tools provide modular interfaces through which agents integrate external services such as databases and APIs. Many tools are available out of the box, for example:

  • Machine translator
  • Calculator
  • Search Engine
  • Maps
  • Stocks
  • Weather
  • Wikipedia

There are many more predefined tools, and you can also create custom tools and use them from chains or agents to complete your own tasks.
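At its core, a tool is just a named, described callable that an agent can invoke. Here is a minimal plain-Python sketch; the `Tool` class below is a hypothetical illustration, not LangChain's actual class.

```python
# Conceptual sketch of a tool: a callable with a name and a description
# the agent's reasoning engine can use to pick the right tool.
class Tool:
    def __init__(self, name, description, func):
        self.name = name
        self.description = description
        self.func = func

    def run(self, query):
        return self.func(query)

calculator = Tool(
    name="calculator",
    description="Evaluates simple arithmetic expressions.",
    func=lambda q: str(eval(q, {"__builtins__": {}})),  # toy evaluator
)

print(calculator.run("6 * 7"))  # → "42"
```

The description is the important part: since an agent chooses tools by reasoning over their descriptions, a clear description is what makes a custom tool usable.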

This first chapter on LangChain introduced the framework and its major components, and hopefully gave you an idea of the major functionality present in it. In the next installment, we will go through the steps to create a first LangChain program: a very simple one that invokes an LLM (OpenAI) using LangChain components.

If you have any doubts, questions, or suggestions, please feel free to reach out to us.

