💬 chat
The `chat()` method allows you to chat over your data sources using a user-friendly chat API. You can find the signature below:
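A minimal sketch of the signature, assuming the parameter names and defaults described under Parameters below:

```python
from typing import Any, Optional

from embedchain.config import BaseLlmConfig

# Sketch of App.chat(); parameter names and defaults are assumed
# from the parameter list documented below
def chat(
    self,
    input_query: str,
    config: Optional[BaseLlmConfig] = None,
    dry_run: bool = False,
    where: Optional[dict] = None,
    session_id: str = "default",
    citations: bool = False,
) -> Any:
    ...
```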
Parameters

- `input_query` (str): Question to ask.
- `config` (BaseLlmConfig, optional): Configure different LLM settings such as prompt, temperature, number_documents etc.
- `dry_run` (bool, optional): The purpose is to test the prompt structure without actually running LLM inference. Defaults to False.
- `where` (dict, optional): A dictionary of key-value pairs to filter the chunks from the vector database. Defaults to None.
- `session_id` (str, optional): Session ID of the chat. This can be used to maintain chat history of different user sessions. Default value: default.
- `citations` (bool, optional): Return citations along with the LLM answer. Defaults to False.
Returns
If `citations=False`, returns a stringified answer to the question asked.

If `citations=True`, returns a tuple with the answer and the citations, respectively.
Usage
With citations
If you want to get the answer to a question and return both the answer and citations, use the following code snippet:
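A minimal sketch; the data source URL and question are illustrative:

```python
from embedchain import App

app = App()
app.add("https://www.forbes.com/profile/elon-musk")

# With citations=True, chat() returns an (answer, sources) tuple
answer, sources = app.chat(
    "What is the net worth of Elon Musk?",
    citations=True,
)
print(answer)
```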
When `citations=True`, note that the returned `sources` are a list of tuples where each tuple has two elements (in the following order):
- source chunk
- dictionary with metadata about the source chunk
  - `url`: url of the source
  - `doc_id`: document id (used for book-keeping purposes)
  - `score`: score of the source chunk with respect to the question
  - other metadata you might have added at the time of adding the source
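Continuing from the snippet above, you could inspect each citation like this (a sketch, assuming the tuple structure described above):

```python
for chunk, metadata in sources:
    print(chunk)               # the source chunk text
    print(metadata["url"])     # url of the source
    print(metadata["doc_id"])  # document id
    print(metadata["score"])   # score of the chunk with respect to the question
```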
Without citations
If you just want to return answers and don't want to return citations, you can use the following example:
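A minimal sketch; the data source and question are illustrative:

```python
from embedchain import App

app = App()
app.add("https://www.forbes.com/profile/elon-musk")

# With the default citations=False, chat() returns just the answer string
answer = app.chat("What is the net worth of Elon Musk?")
print(answer)
```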
With session id
If you want to maintain chat sessions for different users, you can simply pass the `session_id` keyword argument. See the example below:
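A sketch of per-user sessions; the session IDs and questions are illustrative:

```python
from embedchain import App

app = App()
app.add("https://www.forbes.com/profile/elon-musk")

# Each session_id keeps its own chat history
app.chat("What is the net worth of Elon Musk?", session_id="user-1")
app.chat("What did I just ask you?", session_id="user-1")  # sees user-1's history

app.chat("What did I just ask you?", session_id="user-2")  # fresh history for user-2
```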
With custom context window
If you want to customize the context window used during chat (the default context window is 3 document chunks), you can do so using the following code snippet:
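A sketch, assuming the `number_documents` option on `BaseLlmConfig` controls how many chunks are retrieved as context:

```python
from embedchain import App
from embedchain.config import BaseLlmConfig

app = App()
app.add("https://www.forbes.com/profile/elon-musk")

# Assumption: number_documents sets how many chunks are passed as context
query_config = BaseLlmConfig(number_documents=5)
app.chat("What is the net worth of Elon Musk?", config=query_config)
```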
With Mem0 to store chat history
Mem0 is a cutting-edge long-term memory for LLMs to enable personalization for the GenAI stack. It enables LLMs to remember past interactions and provide more personalized responses.
In order to use Mem0 to enable memory for personalization in your apps:
- Install the `mem0` package using `pip install mem0ai`.
- Prepare the config for `memory`; refer to Configurations. A config sketch follows the list below.
How Mem0 works:
- Mem0 saves context derived from each user question into its memory.
- When a user poses a new question, Mem0 retrieves relevant previous memories.
- The `top_k` parameter in the memory configuration specifies the number of top memories to consider during retrieval.
- Mem0 generates the final response by integrating the user's question, context from the data source, and the relevant memories.
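A sketch of enabling memory through the app config; the `memory` keys shown here are assumptions, so check the Configurations page for the exact schema:

```python
from embedchain import App

# Assumption: a "memory" section in the app config enables Mem0,
# and top_k bounds how many past memories are retrieved per question.
config = {
    "memory": {
        "top_k": 5,
    },
}

app = App.from_config(config=config)
app.add("https://www.forbes.com/profile/elon-musk")

app.chat("What is the net worth of Elon Musk?", session_id="user-1")
app.chat("Summarize what we have discussed so far.", session_id="user-1")
```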