
Embedchain now supports the OpenAI Assistants API, which allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries.

At a high level, an integration of the Assistants API has the following flow (a sketch using the raw OpenAI SDK follows the list):

  1. Create an Assistant in the API by defining custom instructions and picking a model
  2. Create a Thread when a user starts a conversation
  3. Add Messages to the Thread as the user asks questions
  4. Run the Assistant on the Thread to trigger responses. This automatically calls the relevant tools.
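
For context, this is roughly what that flow looks like with the raw OpenAI Python SDK (1.x, beta namespace); Embedchain wraps these steps for you. The model name and the example question below are placeholders, and tool-call handling is omitted.

import os
import time
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# 1. Create an Assistant with custom instructions and a model
assistant = client.beta.assistants.create(
    name="OpenAI DevDay Assistant",
    instructions="You are an organizer of OpenAI DevDay",
    model="gpt-4-1106-preview",
)

# 2. Create a Thread when a user starts a conversation
thread = client.beta.threads.create()

# 3. Add the user's Message to the Thread
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is OpenAI DevDay?"
)

# 4. Run the Assistant on the Thread and poll until it reaches a terminal state
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "cancelled", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The newest message in the thread is the assistant's reply
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)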

Creating an OpenAI Assistant using Embedchain is a simple three-step process.

Step 1: Create an OpenAI Assistant

Make sure that you have the OPENAI_API_KEY environment variable set.
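
For example, you can set it from Python before creating the assistant (the key value below is a placeholder for your own API key):

import os

# Set the API key for the current process; replace with your own key
os.environ["OPENAI_API_KEY"] = "sk-..."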

Initialize
from embedchain.store.assistants import OpenAIAssistant

assistant = OpenAIAssistant(
    name="OpenAI DevDay Assistant",
    instructions="You are an organizer of OpenAI DevDay",
)
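
Depending on your Embedchain version, the constructor may also accept optional parameters such as model and tools that are forwarded to the Assistants API. Treat the parameter names below as assumptions and check the class signature in your installed version:

# Assumed optional parameters; verify they exist in your Embedchain version
assistant = OpenAIAssistant(
    name="OpenAI DevDay Assistant",
    instructions="You are an organizer of OpenAI DevDay",
    model="gpt-4-1106-preview",     # assumed: underlying model to use
    tools=[{"type": "retrieval"}],  # assumed: enable the retrieval tool
)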

If you want to use an existing assistant, you can do something like this:

Initialize
# Load an assistant and create a new thread
assistant = OpenAIAssistant(assistant_id="asst_xxx")

# Load a specific thread for an assistant
assistant = OpenAIAssistant(assistant_id="asst_xxx", thread_id="thread_xxx")
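Since assistants and threads live on OpenAI's side, a simple pattern is to persist their ids locally so a conversation can be resumed later. Here is a minimal sketch using only the documented constructor arguments; the file name and the placeholder ids are just examples:

import json

# Save the ids so the same conversation can be reopened later
with open("assistant_session.json", "w") as f:
    json.dump({"assistant_id": "asst_xxx", "thread_id": "thread_xxx"}, f)

# Later: reload the ids and reattach to the same assistant and thread
with open("assistant_session.json") as f:
    session = json.load(f)
assistant = OpenAIAssistant(
    assistant_id=session["assistant_id"], thread_id=session["thread_id"]
)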

Step 2: Add data to the thread

You can add any custom data source that is supported by Embedchain. Alternatively, you can pass a file path on your local system directly, and Embedchain will propagate it to the OpenAI Assistant.

Add data
assistant.add("/path/to/file.pdf")
assistant.add("https://www.youtube.com/watch?v=U9mJuUkhUzk")
assistant.add("https://openai.com/blog/new-models-and-developer-products-announced-at-devday")

Step 3: Chat with your Assistant

Chat
assistant.chat("How much OpenAI credits were offered to attendees during OpenAI DevDay?")
# Response: 'Every attendee of OpenAI DevDay 2023 was offered $500 in OpenAI credits.'
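
Because the thread keeps the conversation state, you can keep calling chat() in a loop for a simple interactive session. A minimal sketch:

# Minimal interactive loop; an empty input exits
while True:
    question = input("You: ").strip()
    if not question:
        break
    print("Assistant:", assistant.chat(question))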

You can try it out yourself using the accompanying Google Colab notebook.