Custom configurations
Embedchain offers several configuration options for your LLM, vector database, and embedding model. All of these configuration options are optional and have sane defaults.
You can configure different components of your app (llm, embedding model, or vector database) through a simple yaml configuration that Embedchain offers. Here is a generic full-stack example of the yaml config:
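(The example below is a sketch assembled from the keys documented in the sections that follow; the provider choices, prompts, and values are illustrative, not defaults.)

```yaml
app:
  config:
    name: 'full-stack-app'

llm:
  provider: openai
  config:
    model: 'gpt-4o-mini'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
    prompt: |
      Use the following pieces of context to answer the query at the end.

      $context

      Query: $query

      Helpful Answer:
    system_prompt: |
      Act as William Shakespeare. Answer the following questions in the style of William Shakespeare.

vectordb:
  provider: chroma
  config:
    collection_name: 'full-stack-app'
    dir: db
    allow_reset: true

embedder:
  provider: openai
  config:
    model: 'text-embedding-ada-002'
```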
Embedchain applications are configurable using a YAML file, a JSON file, or by directly passing a config dictionary. Check out the docs here on how to use other formats.
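For instance, a minimal sketch of loading the config in Python, assuming the App.from_config helper and a file named config.yaml:

```python
from embedchain import App

# Load the configuration from a YAML (or JSON) file.
app = App.from_config(config_path="config.yaml")

# Or pass the same settings directly as a dictionary.
config = {
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini", "temperature": 0.5}},
    "vectordb": {"provider": "chroma", "config": {"collection_name": "full-stack-app"}},
}
app = App.from_config(config=config)
```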
Alright, let’s dive into what each key means in the yaml config above:
app Section:
- config:
  - name (String): The name of your full-stack application.
  - id (String): The id of your full-stack application. Only use this to reload already created apps; we recommend that users do not create their own ids.
  - collect_metrics (Boolean): Indicates whether metrics should be collected for the app, defaults to True.
  - log_level (String): The log level for the app, defaults to WARNING.
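For example, to reload an app you created earlier (the id below is a placeholder; reuse the id of an existing app):

```yaml
app:
  config:
    id: 'my-existing-app-id'   # placeholder: id of an app that was already created
    collect_metrics: true
    log_level: WARNING
```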
llm Section:
- provider (String): The provider for the language model, which is set to 'openai'. You can find the full list of llm providers in our docs.
- config:
  - model (String): The specific model being used, 'gpt-4o-mini'.
  - temperature (Float): Controls the randomness of the model's output. A higher value (closer to 1) makes the output more random.
  - max_tokens (Integer): Controls how many tokens are used in the response.
  - top_p (Float): Controls the diversity of word selection. A higher value (closer to 1) makes word selection more diverse.
  - stream (Boolean): Controls whether the response is streamed back to the user (set to false).
  - online (Boolean): Controls whether to use the internet to get more context for answering the query (set to false).
  - token_usage (Boolean): Controls whether token usage is tracked for the querying models (set to false).
  - prompt (String): A prompt for the model to follow when generating responses; requires the $context and $query variables.
  - system_prompt (String): A system prompt for the model to follow when generating responses; in this case, it's set to the style of William Shakespeare.
  - number_documents (Integer): Number of documents to pull from the vectordb as context, defaults to 1.
  - api_key (String): The API key for the language model.
  - model_kwargs (Dict): Keyword arguments to pass to the language model. Used for the aws_bedrock provider, since it requires different arguments for each model.
  - http_client_proxies (Dict | String): The proxy server settings used to create self.http_client using httpx.Client(proxies=http_client_proxies).
  - http_async_client_proxies (Dict | String): The proxy server settings for async calls used to create self.http_async_client using httpx.AsyncClient(proxies=http_async_client_proxies).
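As a sketch of model_kwargs with the aws_bedrock provider (the model id and values below are placeholders, not defaults):

```yaml
llm:
  provider: aws_bedrock
  config:
    model: 'amazon.titan-text-express-v1'   # placeholder Bedrock model id
    number_documents: 3
    model_kwargs:                           # passed through to the underlying model
      temperature: 0.5
      topP: 1
```

http_client_proxies and http_async_client_proxies are set the same way, as either a proxy URL string or a dictionary, for providers that use an httpx client.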
vectordb Section:
- provider (String): The provider for the vector database, set to 'chroma'. You can find the full list of vector database providers in our docs.
- config:
  - collection_name (String): The initial collection name for the vectordb, set to 'full-stack-app'.
  - dir (String): The directory for the local database, set to 'db'.
  - allow_reset (Boolean): Indicates whether resetting the vectordb is allowed, set to true.
  - batch_size (Integer): The batch size for docs insertion in vectordb, defaults to 100.

We recommend you check out the vectordb-specific config here.
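For example, a chroma config that also sets the insertion batch size (values illustrative):

```yaml
vectordb:
  provider: chroma
  config:
    collection_name: 'full-stack-app'
    dir: db
    allow_reset: true
    batch_size: 100
```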
embedder Section:
- provider (String): The provider for the embedder, set to 'openai'. You can find the full list of embedding model providers in our docs.
- config:
  - model (String): The specific model used for text embedding, 'text-embedding-ada-002'.
  - vector_dimension (Integer): The vector dimension of the embedding model. Defaults to a provider-specific value.
  - api_key (String): The API key for the embedding model.
  - endpoint (String): The endpoint for the HuggingFace embedding model.
  - deployment_name (String): The deployment name for the embedding model.
  - title (String): The title for the embedding model for Google Embedder.
  - task_type (String): The task type for the embedding model for Google Embedder.
  - model_kwargs (Dict): Used to pass extra arguments to embedders.
  - http_client_proxies (Dict | String): The proxy server settings used to create self.http_client using httpx.Client(proxies=http_client_proxies).
  - http_async_client_proxies (Dict | String): The proxy server settings for async calls used to create self.http_async_client using httpx.AsyncClient(proxies=http_async_client_proxies).
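For example, with the openai embedder (the API key is a placeholder; 1536 is the standard dimension for text-embedding-ada-002):

```yaml
embedder:
  provider: openai
  config:
    model: 'text-embedding-ada-002'
    vector_dimension: 1536
    api_key: 'sk-xxx'   # placeholder
```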
chunker Section:
- chunk_size (Integer): The size of each chunk of text that is sent to the language model.
- chunk_overlap (Integer): The amount of overlap between each chunk of text.
- length_function (String): The function used to calculate the length of each chunk of text. In this case, it's set to 'len'. You can also use any function imported directly as a string here.
- min_chunk_size (Integer): The minimum size of each chunk of text that is sent to the language model. Must be less than chunk_size and greater than chunk_overlap.
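For example (values illustrative, respecting the constraint that min_chunk_size sits between chunk_overlap and chunk_size):

```yaml
chunker:
  chunk_size: 2000
  chunk_overlap: 100
  length_function: 'len'
  min_chunk_size: 200   # chunk_overlap < min_chunk_size < chunk_size
```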
cache Section: (Optional)
- similarity_evaluation (Optional): The config for the similarity evaluation strategy. If not provided, the default distance-based similarity evaluation strategy is used.
  - strategy (String): The strategy to use for similarity evaluation. Currently, only distance and exact based similarity evaluation is supported. Defaults to distance.
  - max_distance (Float): The bound of maximum distance. Defaults to 1.0.
  - positive (Boolean): Set to True if a larger distance indicates that two entities are more similar; otherwise False. Defaults to False.
- config (Optional): The config for initializing the cache. If not provided, sensible default values are used as mentioned below.
  - similarity_threshold (Float): The threshold for similarity evaluation. Defaults to 0.8.
  - auto_flush (Integer): The number of queries after which the cache is flushed. Defaults to 20.
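For example, a cache section that spells out the defaults described above:

```yaml
cache:
  similarity_evaluation:
    strategy: distance
    max_distance: 1.0
    positive: false
  config:
    similarity_threshold: 0.8
    auto_flush: 20
```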
memory Section: (Optional)
- top_k (Integer): The number of top-k results to return. Defaults to 10.
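For example (assuming top_k sits directly under memory, as listed above):

```yaml
memory:
  top_k: 10
```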
If you provide a cache section, the app will automatically configure and use a cache to store the results of the language model. This is useful if you want to speed up response times and save on the inference cost of your app.
If you have questions about the configuration above, please feel free to reach out to us.