Overview

Embedchain comes with built-in support for various popular large language models. We handle the complexity of integrating these models for you, allowing you to easily customize your language model interactions through a user-friendly interface.

OpenAI

To use OpenAI models, set the OPENAI_API_KEY environment variable. You can obtain the API key from the OpenAI Platform.

Once you have obtained the key, you can use it like this:

import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

app = App()
app.add("https://en.wikipedia.org/wiki/OpenAI")
app.query("What is OpenAI?")

If you want to configure the LLM's parameters, you can do so by loading the app from a YAML config file.

import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
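
For reference, a minimal configuration might look like the sketch below, shown as a Python dict (from_config accepts a dict as well as a YAML path). The model name and parameter values here are illustrative, not required:

config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-3.5-turbo",  # illustrative; any OpenAI chat model works
            "temperature": 0.5,
            "max_tokens": 1000,
            "top_p": 1,
            "stream": False,
        },
    },
}

app = App.from_config(config=config)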

Function Calling

Embedchain supports OpenAI function calling with a single function. The function can be supplied in any of the formats accepted by the LangChain tool interface, for example a plain Python function or a Pydantic model.

With any of these inputs, the OpenAI LLM can be queried to provide the appropriate arguments for the function. The example below uses a plain Python function:

import os
from embedchain import App
from embedchain.llm.openai import OpenAILlm

os.environ["OPENAI_API_KEY"] = "sk-xxx"

# A plain Python function for the model to call
def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b

llm = OpenAILlm(tools=multiply)
app = App(llm=llm)

result = app.query("What is the result of 125 multiplied by fifteen?")

Google AI

To use Google AI models, you have to set the GOOGLE_API_KEY environment variable. You can obtain the Google API key from Google MakerSuite.

import os
from embedchain import App

os.environ["GOOGLE_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")

app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("What is the net worth of Elon Musk?")
if app.llm.config.stream: # if stream is enabled, response is a generator
    for chunk in response:
        print(chunk)
else:
    print(response)
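
For illustration, an equivalent dict config with streaming enabled might look like the following; the model name and parameter values are assumptions:

config = {
    "llm": {
        "provider": "google",
        "config": {
            "model": "gemini-pro",  # illustrative model name
            "temperature": 0.5,
            "max_tokens": 1000,
            "stream": True,  # app.query then returns a generator, as handled above
        },
    },
}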

Azure OpenAI

To use an Azure OpenAI model, set the Azure OpenAI related environment variables as shown in the code block below:

import os
from embedchain import App

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxx.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["OPENAI_API_VERSION"] = "xxx"

app = App.from_config(config_path="config.yaml")
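
For illustration, the config might look like the sketch below; the deployment_name must match a deployment you have created in Azure, and the other values are assumptions:

config = {
    "llm": {
        "provider": "azure_openai",
        "config": {
            "model": "gpt-35-turbo",  # illustrative
            "deployment_name": "your_llm_deployment_name",  # hypothetical placeholder
            "temperature": 0.5,
            "max_tokens": 1000,
            "stream": False,
        },
    },
}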

You can find the list of models and deployment names on the Azure OpenAI Platform.

Anthropic

To use Anthropic's models, set the ANTHROPIC_API_KEY environment variable, which you can find on their Account Settings page.

import os
from embedchain import App

os.environ["ANTHROPIC_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
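
A minimal config for this provider might look like the sketch below; the model name and sampling values are illustrative:

config = {
    "llm": {
        "provider": "anthropic",
        "config": {
            "model": "claude-instant-1",  # illustrative model name
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    },
}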

Cohere

Install related dependencies using the following command:

pip install --upgrade 'embedchain[cohere]'

Set the COHERE_API_KEY environment variable, which you can find on their Account settings page.

Once you have the API key, you are all set to use it with Embedchain.

import os
from embedchain import App

os.environ["COHERE_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
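
As a sketch, the config.yaml could contain something like the following (shown as a dict; the model name is an assumption, substitute any model your Cohere account can access):

config = {
    "llm": {
        "provider": "cohere",
        "config": {
            "model": "command",  # illustrative model name
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    },
}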

Together

Install related dependencies using the following command:

pip install --upgrade 'embedchain[together]'

Set the TOGETHER_API_KEY environment variable, which you can find on their Account settings page.

Once you have the API key, you are all set to use it with Embedchain.

import os
from embedchain import App

os.environ["TOGETHER_API_KEY"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
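
A sketch of a possible config; the model name is an assumption, substitute any model available on Together:

config = {
    "llm": {
        "provider": "together",
        "config": {
            "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",  # illustrative model name
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    },
}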

Ollama

Set up Ollama by following the instructions at https://github.com/jmorganca/ollama

from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
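
A sketch of a matching config; the model must already be pulled locally (e.g. with ollama pull llama2), and the base_url shown is Ollama's default:

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama2",  # must be available in your local Ollama install
            "temperature": 0.5,
            "stream": True,
            "base_url": "http://localhost:11434",  # Ollama's default address
        },
    },
}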

vLLM

Set up vLLM by following the instructions in their docs.

from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
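
For illustration, a config might look like the sketch below; the model name and sampling values are assumptions:

config = {
    "llm": {
        "provider": "vllm",
        "config": {
            "model": "meta-llama/Llama-2-7b-hf",  # illustrative model name
            "temperature": 0.5,
            "top_p": 1,
        },
    },
}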

GPT4ALL

Install related dependencies using the following command:

pip install --upgrade 'embedchain[opensource]'

GPT4All is a free-to-use, locally running, privacy-aware chatbot that requires no GPU or internet connection. You can use it with Embedchain using the following code:

from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
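
A minimal config sketch for this provider; the model filename is an assumption, substitute any GPT4All-compatible model file:

config = {
    "llm": {
        "provider": "gpt4all",
        "config": {
            "model": "orca-mini-3b-gguf2-q4_0.gguf",  # illustrative model file
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    },
}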

JinaChat

First, set the JINACHAT_API_KEY environment variable, which you can obtain from their platform.

Once you have the key, load the app using the config yaml file:

import os
from embedchain import App

os.environ["JINACHAT_API_KEY"] = "xxx"
# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
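
A minimal config sketch for this provider; the parameter values are illustrative:

config = {
    "llm": {
        "provider": "jina",
        "config": {
            "temperature": 0.5,
            "max_tokens": 1000,
            "stream": False,
        },
    },
}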

Hugging Face

Install related dependencies using the following command:

pip install --upgrade 'embedchain[huggingface-hub]'

First, set the HUGGINGFACE_ACCESS_TOKEN environment variable, which you can obtain from their platform.

You can load LLMs from Hugging Face in three ways:

Hugging Face Hub

To load the model from Hugging Face Hub, use the following code:

import os
from embedchain import App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "xxx"

config = {
  "app": {"config": {"id": "my-app"}},
  "llm": {
      "provider": "huggingface",
      "config": {
          "model": "bigscience/bloom-1b7",
          "top_p": 0.5,
          "max_length": 200,
          "temperature": 0.1,
      },
  },
}

app = App.from_config(config=config)

Hugging Face Local Pipelines

If you want to load a locally downloaded Hugging Face model, you can do so with the following code:

from embedchain import App

config = {
  "app": {"config": {"id": "my-app"}},
  "llm": {
      "provider": "huggingface",
      "config": {
          "model": "Trendyol/Trendyol-LLM-7b-chat-v0.1",
          "local": True,  # Necessary if you want to run model locally
          "top_p": 0.5,
          "max_tokens": 1000,
          "temperature": 0.1,
      },
  }
}
app = App.from_config(config=config)

Hugging Face Inference Endpoint

You can also use Hugging Face Inference Endpoints to access custom endpoints. First, set the HUGGINGFACE_ACCESS_TOKEN as above.

Then, load the app using a config like the following:

from embedchain import App

config = {
  "app": {"config": {"id": "my-app"}},
  "llm": {
      "provider": "huggingface",
      "config": {
        "endpoint": "https://api-inference.huggingface.co/models/gpt2",
        "model_params": {"temperature": 0.1, "max_new_tokens": 100}
      },
  },
}
app = App.from_config(config=config)

Currently, only text-generation and text2text-generation tasks are supported.

See LangChain's Hugging Face Endpoint documentation for more information.

Llama2

Llama 2 is integrated through Replicate. Set the REPLICATE_API_TOKEN environment variable, which you can obtain from their platform.

Once you have the token, load the app using the config yaml file:

import os
from embedchain import App

os.environ["REPLICATE_API_TOKEN"] = "xxx"

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
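
A sketch of a matching config; the model string is a hypothetical placeholder in Replicate's owner/name:version format, so substitute a real version hash from Replicate:

config = {
    "llm": {
        "provider": "llama2",
        "config": {
            "model": "a16z-infra/llama13b-v2-chat:<version-hash>",  # placeholder, use a real version
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    },
}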

Vertex AI

Set up Google Cloud Platform application credentials by following the instructions on GCP. Once setup is done, use the following code to create an app using Vertex AI as the provider:

from embedchain import App

# load llm configuration from config.yaml file
app = App.from_config(config_path="config.yaml")
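
A sketch of a possible config; the model name is an assumption:

config = {
    "llm": {
        "provider": "vertexai",
        "config": {
            "model": "chat-bison",  # illustrative model name
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    },
}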

Mistral AI

Obtain the Mistral AI API key from their console.

import os
from embedchain import App

os.environ["MISTRAL_API_KEY"] = "xxx"

app = App.from_config(config_path="config.yaml")

app.add("https://www.forbes.com/profile/elon-musk")

response = app.query("what is the net worth of Elon Musk?")
# As of January 16, 2024, Elon Musk's net worth is $225.4 billion.

response = app.chat("which companies does elon own?")
# Elon Musk owns Tesla, SpaceX, Boring Company, Twitter, and X.

response = app.chat("what question did I ask you already?")
# You have asked me several times already which companies Elon Musk owns, specifically Tesla, SpaceX, Boring Company, Twitter, and X.
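
For reference, the config.yaml used above might map to a dict like this sketch (the model name and parameter values are assumptions):

config = {
    "llm": {
        "provider": "mistralai",
        "config": {
            "model": "mistral-tiny",  # illustrative model name
            "temperature": 0.5,
            "max_tokens": 1000,
        },
    },
}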

AWS Bedrock

Setup

  • Before using the AWS Bedrock LLM, make sure you have the appropriate model access from the Bedrock Console.
  • You will also need to authenticate the boto3 client using a method described in the AWS documentation.
  • You can optionally export an AWS_REGION.

Usage

import os
from embedchain import App

os.environ["AWS_ACCESS_KEY_ID"] = "xxx"
os.environ["AWS_SECRET_ACCESS_KEY"] = "xxx"
os.environ["AWS_REGION"] = "us-west-2"

app = App.from_config(config_path="config.yaml")

The model arguments differ for each provider. Please refer to the AWS Bedrock documentation to find the appropriate arguments for your model.
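
For example, a config for a Titan text model might look like the sketch below (the model id and model_kwargs keys are assumptions; Bedrock's argument names vary by model provider):

config = {
    "llm": {
        "provider": "aws_bedrock",
        "config": {
            "model": "amazon.titan-text-express-v1",  # illustrative Bedrock model id
            "model_kwargs": {
                "temperature": 0.5,
                "topP": 1,  # Titan-style argument name, an assumption
            },
        },
    },
}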


Groq

Groq is the creator of the world’s first Language Processing Unit (LPU), providing exceptional speed performance for AI workloads running on their LPU Inference Engine.

Usage

In order to use LLMs from Groq, go to their platform and get the API key.

Set the API key as the GROQ_API_KEY environment variable, or pass it in your app configuration as shown in the example below.

import os
from embedchain import App

# Set your API key here or pass as the environment variable
groq_api_key = "gsk_xxxx"

config = {
    "llm": {
        "provider": "groq",
        "config": {
            "model": "mixtral-8x7b-32768",
            "api_key": groq_api_key,
            "stream": True
        }
    }
}

app = App.from_config(config=config)
# Add your data source here
app.add("https://docs.embedchain.ai/sitemap.xml", data_type="sitemap")
app.query("Write a poem about Embedchain")

# In the realm of data, vast and wide,
# Embedchain stands with knowledge as its guide.
# A platform open, for all to try,
# Building bots that can truly fly.

# With REST API, data in reach,
# Deployment a breeze, as easy as a speech.
# Updating data sources, anytime, anyday,
# Embedchain's power, never sway.

# A knowledge base, an assistant so grand,
# Connecting to platforms, near and far.
# Discord, WhatsApp, Slack, and more,
# Embedchain's potential, never a bore.

NVIDIA AI

NVIDIA AI Foundation Endpoints let you quickly use NVIDIA's AI models, such as Mixtral 8x7B and Llama 2, through their API. These models are available in the NVIDIA NGC catalog, fully optimized and ready to use on NVIDIA's AI platform. They are designed for high speed and easy customization, ensuring smooth performance on any accelerated setup.

Usage

In order to use LLMs from NVIDIA AI, create an account on NVIDIA NGC Service.

Generate an API key from their dashboard and set it as the NVIDIA_API_KEY environment variable. Note that the NVIDIA_API_KEY will start with nvapi-.

Below is an example of how to use an LLM and an embedding model from NVIDIA AI:

import os
from embedchain import App

os.environ['NVIDIA_API_KEY'] = 'nvapi-xxxx'

config = {
    "app": {
        "config": {
            "id": "my-app",
        },
    },
    "llm": {
        "provider": "nvidia",
        "config": {
            "model": "nemotron_steerlm_8b",
        },
    },
    "embedder": {
        "provider": "nvidia",
        "config": {
            "model": "nvolveqa_40k",
            "vector_dimension": 1024,
        },
    },
}

app = App.from_config(config=config)

app.add("https://www.forbes.com/profile/elon-musk")
answer = app.query("What is the net worth of Elon Musk today?")
# Answer: The net worth of Elon Musk is subject to fluctuations based on the market value of his holdings in various companies.
# As of March 1, 2024, his net worth is estimated to be approximately $210 billion. However, this figure can change rapidly due to stock market fluctuations and other factors.
# Additionally, his net worth may include other assets such as real estate and art, which are not reflected in his stock portfolio.

If you can't find the specific LLM you need, no need to fret. We're continuously expanding our support for additional LLMs, and you can help us prioritize by opening an issue on our GitHub or simply reaching out to us in our Slack or Discord community.