❓ FAQs
A collection of frequently asked questions about Embedchain.
Does Embedchain support OpenAI's Assistant APIs? Yes, it does. Please refer to the OpenAI Assistant docs page.
How to use the Mistral-7B model? Use the mistralai/Mistral-7B-v0.1 model provided on Hugging Face:
import os
from embedchain import App
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_your_token"
app = App.from_config("huggingface.yaml")
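The huggingface.yaml file referenced above is not shown on this page. A minimal sketch of what it might contain, assuming Embedchain's llm config schema (the sampling parameters are illustrative, not required values):

```yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-v0.1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
```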
How to use the gpt-4-turbo model? Use the gpt-4-turbo model provided by OpenAI:
import os
from embedchain import App
os.environ['OPENAI_API_KEY'] = 'xxx'
# load llm configuration from gpt4_turbo.yaml file
app = App.from_config(config_path="gpt4_turbo.yaml")
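The gpt4_turbo.yaml file is not shown here; a minimal sketch of what it might contain, assuming the same llm config schema (parameter values are illustrative):

```yaml
llm:
  provider: openai
  config:
    model: 'gpt-4-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
```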
How to use the gpt-4 model? Load its configuration from a gpt4.yaml file:
import os
from embedchain import App
os.environ['OPENAI_API_KEY'] = 'xxx'
# load llm configuration from gpt4.yaml file
app = App.from_config(config_path="gpt4.yaml")
How to use an open-source model? Load its configuration from an opensource.yaml file (no API key is needed for a locally run model):
from embedchain import App
# load llm configuration from opensource.yaml file
app = App.from_config(config_path="opensource.yaml")
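One possible shape for opensource.yaml, assuming a locally runnable GPT4All model (the provider and model name here are assumptions, not taken from this page):

```yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
embedder:
  provider: gpt4all
```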
How to stream responses? You can achieve this by setting stream to true in the config file:
llm:
  provider: openai
  config:
    model: 'gpt-4o-mini'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: true
How to persist data across app sessions? Set up the app by adding an id in the config file. This keeps the data available for future use. You can include this id in the YAML config or pass it directly in the config dict.
app1.py
import os
from embedchain import App
os.environ['OPENAI_API_KEY'] = 'sk-xxx'
app1 = App.from_config(config={
"app": {
"config": {
"id": "your-app-id",
}
}
})
app1.add("https://www.forbes.com/profile/elon-musk")
response = app1.query("What is the net worth of Elon Musk?")
app2.py
import os
from embedchain import App
os.environ['OPENAI_API_KEY'] = 'sk-xxx'
app2 = App.from_config(config={
"app": {
"config": {
# this will persist and load data from app1 session
"id": "your-app-id",
}
}
})
response = app2.query("What is the net worth of Elon Musk?")
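As noted above, the same id can instead live in a YAML config file rather than the config dict. A minimal sketch of that file (the app/config nesting mirrors the dict form used above):

```yaml
app:
  config:
    id: 'your-app-id'
```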
Still have questions?
If the docs aren’t sufficient, please feel free to reach out to us.