🧩 Embedding models
Overview
Embedchain supports several embedding models from the following providers:
OpenAI
To use the OpenAI embedding function, you have to set the OPENAI_API_KEY
environment variable. You can obtain the OpenAI API key from the OpenAI Platform.
OpenAI announced two new embedding models, text-embedding-3-small and text-embedding-3-large, and Embedchain supports both. Below you can find the YAML config for them:
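A sketch of the YAML config, assuming Embedchain's standard embedder/provider/config layout (check your installed version's config reference for the exact keys):

```yaml
embedder:
  provider: openai
  config:
    model: 'text-embedding-3-small'
```

To use text-embedding-3-large instead, swap the value of model accordingly.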
Google AI
To use the Google AI embedding function, you have to set the GOOGLE_API_KEY
environment variable. You can obtain the Google API key from Google MakerSuite.
For more details regarding the Google AI embedding model, please refer to the Google AI documentation.
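A minimal YAML config sketch for the Google AI embedder; the model name and the optional task_type field shown here are illustrative and may differ in your Embedchain version:

```yaml
embedder:
  provider: google
  config:
    model: 'models/embedding-001'
    task_type: 'retrieval_document'  # optional; tunes embeddings for retrieval use
```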
AWS Bedrock
To use the AWS Bedrock embedding function, you have to set the AWS credential environment variables (for example, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION).
For more details regarding the AWS Bedrock embedding model, please refer to the AWS Bedrock documentation.
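A hedged YAML config sketch for the Bedrock embedder; the provider key and the Titan model ID below are assumptions, so verify them against your Embedchain version and the models enabled in your Bedrock account:

```yaml
embedder:
  provider: aws_bedrock
  config:
    model: 'amazon.titan-embed-text-v2:0'  # example model ID; must be enabled in your AWS account
```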
Azure OpenAI
To use an Azure OpenAI embedding model, you have to set several Azure OpenAI-related environment variables, as shown in the code block below:
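A sketch of the config, assuming credentials are supplied via environment variables such as AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and OPENAI_API_VERSION (exact variable names depend on your Embedchain and OpenAI SDK versions); deployment_name is a placeholder for your own deployment:

```yaml
embedder:
  provider: azure_openai
  config:
    model: 'text-embedding-ada-002'
    deployment_name: 'your-embedding-deployment'  # placeholder: your Azure deployment name
```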
You can find the list of models and deployment name on the Azure OpenAI Platform.
GPT4ALL
GPT4All supports generating high-quality embeddings of arbitrary-length text documents using a CPU-optimized, contrastively trained Sentence Transformer.
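Since GPT4All runs locally, no API key is needed. A minimal config sketch, assuming gpt4all is the provider key and the default local model is used:

```yaml
embedder:
  provider: gpt4all
```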
Hugging Face
Hugging Face supports generating embeddings of arbitrary-length text documents using the Sentence Transformers library. An example of how to generate embeddings using Hugging Face is given below:
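A YAML config sketch; the model shown is one commonly used Sentence Transformers checkpoint and is illustrative, not the required default:

```yaml
embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'  # any Sentence Transformers model ID should work
```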
Vertex AI
Embedchain supports Google’s Vertex AI embedding models through a simple interface. You just have to pass the model_name
in the YAML config and it works out of the box.
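A config sketch for Vertex AI; the gecko model name is an example and assumes your Google Cloud project has Vertex AI enabled:

```yaml
embedder:
  provider: vertexai
  config:
    model: 'textembedding-gecko'  # example Vertex AI embedding model
```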
NVIDIA AI
NVIDIA AI Foundation Endpoints let you quickly use NVIDIA’s AI models, such as Mixtral 8x7B and Llama 2, through an API. These models are available in the NVIDIA NGC catalog, fully optimized and ready to use on NVIDIA’s AI platform. They are designed for high speed and easy customization, ensuring smooth performance on any accelerated setup.
Usage
In order to use embedding models and LLMs from NVIDIA AI, create an account on the NVIDIA NGC Service.
Generate an API key from the dashboard and set it as the NVIDIA_API_KEY
environment variable. Note that the NVIDIA_API_KEY
will start with nvapi-
.
Below is an example of how to use an LLM and an embedding model from NVIDIA AI:
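A config sketch combining an NVIDIA LLM and embedder; the specific model names below are assumptions, so substitute whichever models are listed for your account in the NGC catalog:

```yaml
llm:
  provider: nvidia
  config:
    model: 'nemotron_steerlm_8b'  # example NVIDIA LLM; pick one from the NGC catalog

embedder:
  provider: nvidia
  config:
    model: 'nvolveqa_40k'  # example NVIDIA embedding model
```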
Cohere
To use embedding models and LLMs from Cohere, create an account on Cohere.
Generate an API key from the dashboard and set it as the COHERE_API_KEY
environment variable.
Cohere has several embedding models: embed-english-v3.0, embed-multilingual-v3.0, embed-multilingual-light-v3.0, embed-english-v2.0, embed-english-light-v2.0 and embed-multilingual-v2.0. Embedchain supports all of these models. Below you can find the YAML config:
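A config sketch, assuming the cohere provider key; any of the model names listed above can be substituted for the model value:

```yaml
embedder:
  provider: cohere
  config:
    model: 'embed-english-v3.0'  # swap in any of the supported Cohere embedding models
```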
Ollama
Ollama enables the use of embedding models, allowing you to generate high-quality embeddings directly on your local machine. Make sure to install Ollama and keep it running before using the embedding model.
You can find the list of models at Ollama Embedding Models.
Below is an example of how to use an embedding model with Ollama:
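A config sketch, assuming Ollama is serving on its default local port; the model name is an example pulled from Ollama's embedding model list:

```yaml
embedder:
  provider: ollama
  config:
    model: 'all-minilm:latest'  # example; pull it first with `ollama pull all-minilm`
    base_url: 'http://localhost:11434'  # Ollama's default local endpoint
```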
Clarifai
Install the related dependencies (the clarifai Python package) before using this provider.
Set the CLARIFAI_PAT
environment variable, which you can find on the Security page of your Clarifai account. Optionally, you can also pass the PAT key as a parameter to the LLM/Embedder class.
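A hedged config sketch; Clarifai models are typically referenced by their model URL, and the URL below is a placeholder pattern, not a real model, so replace it with the URL of a model from your Clarifai account:

```yaml
embedder:
  provider: clarifai
  config:
    model: 'https://clarifai.com/<user>/<app>/models/<model-name>'  # placeholder model URL
```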
Now you are all set to explore Embedchain.