Create a RAG app object on Embedchain. This is the main entrypoint for a developer to interact with Embedchain APIs. An app configures the LLM, vector database, embedding model, and retrieval strategy of your choice.

Attributes

local_id (str): App ID
name (str): Name of the app
config (BaseConfig): Configuration of the app
llm (BaseLlm): Configured LLM for the RAG app
db (BaseVectorDB): Configured vector database for the RAG app
embedding_model (BaseEmbedder): Configured embedding model for the RAG app
chunker (ChunkerConfig): Chunker configuration
client (Client): Client object (used to deploy an app to the Embedchain platform)
logger (logging.Logger): Logger object
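
These attributes can be inspected directly on an app instance. A minimal sketch, assuming the default app can be constructed in your environment (the default providers may require an OpenAI API key):

from embedchain import App

app = App()

# auto-generated identifier for this app
print(app.local_id)

# configured components
print(app.llm)
print(app.db)
print(app.embedding_model)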

Usage

You can create an app instance using the following methods:

Default setting

Code Example
from embedchain import App
app = App()

Python Dict

Code Example
from embedchain import App

config_dict = {
  'llm': {
    'provider': 'gpt4all',
    'config': {
      'model': 'orca-mini-3b-gguf2-q4_0.gguf',
      'temperature': 0.5,
      'max_tokens': 1000,
      'top_p': 1,
      'stream': False
    }
  },
  'embedder': {
    'provider': 'gpt4all'
  }
}

# load llm configuration from config dict
app = App.from_config(config=config_dict)
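
Once configured, the app can ingest data and answer questions over it. A brief sketch of typical usage; the source URL is only an example, and the gpt4all models configured above must be available locally for this to run:

# add a data source and ask a question against it
app.add("https://www.forbes.com/profile/elon-musk")
answer = app.query("What is the net worth of Elon Musk?")
print(answer)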

YAML Config
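
The same configuration can be stored in a YAML file and loaded from disk. A minimal sketch, assuming a config.yaml file next to your script that mirrors the dict structure above:

Code Example
from embedchain import App

# load llm configuration from a config.yaml file
app = App.from_config(config_path="config.yaml")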

JSON Config
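
Likewise, the configuration can be kept as JSON. A minimal sketch, assuming a config.json file with the same keys as the dict above:

Code Example
from embedchain import App

# load llm configuration from a config.json file
app = App.from_config(config_path="config.json")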