LocalDocs
To get started with LocalDocs, you should first have a look at the documentation.
Definitely make sure each file type you need the LLM to read is listed in the settings at Settings > LocalDocs > Allowed File Extensions. The list is comma-separated; add each extension in that format (no,comma,at,the,end).
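For example, a value for that setting might look like this (an illustrative list; include only the extensions you actually need):

```
txt,md,rst,pdf
```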
Also, set the embeddings device to CUDA in the settings if you can (the default is CPU, which is slower 😦).
FAQ: Understanding the LocalDocs Feature of Nomic.AI's GPT4All application.
Question 1: What is the LocalDocs feature and what does it do?
Answer 1: The LocalDocs feature allows users to create and utilize a local collection of documents that can be referenced by the AI model when generating responses. This enables the user to access specific information from their own files, improving accuracy in answers related to personal or organization-specific data. In its current form, it works more like a sentence-similarity search.
Question 2: How do I use the LocalDocs feature?
Answer 2: To use the LocalDocs feature, first create a collection of documents and ensure it is set up correctly by clicking the database icon in the top right corner and selecting your desired collection. Then, when you ask questions within the chat interface, the AI model will search your local document collection for relevant information to provide more accurate responses.
Question 3: What should I do if the model gives general information instead of answering from my documents?
Answer 3: If you find that the AI model is providing general information instead of answering questions specifically based on your local documents, it may be necessary to adjust your prompt or query. Use deliberate prompting techniques: be careful in your wording and be specific. For example, if you have a .txt document with character info about Tom, include the name Tom in your prompt.
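As a sketch of the difference, compare a vague prompt with a specific one (the document contents here are hypothetical):

```
Vague:    Tell me about the characters.
Specific: According to my notes, what is Tom's backstory?
```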
Question 4: How do I check and verify the context of answers based on my LocalDocs, and how do I handle confabulations?
Answer 4: If you suspect an answer is a confabulation (or hallucination) or not accurately derived from your local document collection, click the "Source" button beneath the response and open the file. This shows you the specific information that was pulled from your documents to generate the reply, allowing you to verify its accuracy and relevance. You can also hover over the bottom right corner of the file in question, which will show a tooltip with the chunk of text that was retrieved and sent to the LLM.
Question 5: Can I create multiple LocalDocs collections for different subjects or projects?
Answer 5: Yes, it is possible to create and manage multiple LocalDocs collections tailored to specific subjects or projects. By organizing your documents into separate collections, you can ensure that the AI model only references relevant information based on the context of your questions, leading to more accurate and focused responses.
Question 6: How do I add new files or update my document collection?
Answer 6: To add new files or make updates to your local document collection, simply navigate to the folder containing your documents on your computer, and ensure that its collection is selected when you click the database icon in the top right corner. Any changes made to these files will be reflected in the AI model's responses as long as LocalDocs is enabled for the chat session.
Question 7: Which file formats does the LocalDocs feature support?
Answer 7: The GPT4All LocalDocs feature supports a variety of file formats, including but not limited to text files (.txt) and Markdown files (.md). By using these common file types, you can ensure that your local documents are easily accessible to the AI model within chat sessions. You can add more file types in the settings, but only the default file types have been extensively tested, so you are on your own if you do.
Question 8: How can I get the most out of the LocalDocs feature?
Answer 8: To maximize the effectiveness of the GPT4All LocalDocs feature, organize your document collection into well-structured and clearly labeled files. Use consistent formatting across documents so they are easy to parse (for example, a question-and-answer format tends to work really well), and ensure that relevant information is easily discoverable within each file. Additionally, be specific when asking questions related to your local documents, as this will help guide the AI model toward accurate responses based on the content of those files.
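As an illustration of that question-and-answer format, a LocalDocs source file (entirely hypothetical content) might look like this:

```
Q: What is the warranty period for the X100 router?
A: The X100 router has a two-year limited warranty.

Q: Who handles returns?
A: Returns go through the regional service desk.
```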
I like having many snippets, so I set the snippet size lower. (Be careful here.)
- With more characters per snippet you will give the LLM more relevant information with each snippet. (This line has 134 characters.)
- With more snippets you can retrieve relevant information from more areas. (This line is part of one snippet.)
- These three explanations contain 398 characters total and would be split into two snippets. (With the settings shown below.)
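To make the arithmetic concrete, here is a minimal Python sketch; the snippet size of 256 characters is an assumption standing in for your actual Settings > LocalDocs > Document snippet size value, and real chunking may split on word boundaries rather than exact character counts:

```python
import math

snippet_size = 256  # assumed "Document snippet size" setting, in characters
total_chars = 398   # total characters in the three explanations above

# Ceiling division estimates how many snippets the text is split into.
snippets = math.ceil(total_chars / snippet_size)
print(snippets)  # 2
```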
The more snippets you use, the longer a response will take.
- 1 snippet took me 4 seconds.
- 10 snippets took me 30 seconds.
- 20 doubled that time to 60 seconds.
- 40 more than doubled that time again to 129 seconds.
Take care when deciding how much information you want it to retrieve.
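The measurements above grow roughly linearly, at about 3 seconds per snippet on my hardware. A rough sketch of that estimate (the slope and overhead are assumptions derived from those numbers and will differ on your machine):

```python
# Rough linear fit of the timings above: (129 s - 4 s) / (40 - 1) ≈ 3.2 s per snippet.
SECONDS_PER_SNIPPET = 3.2
BASE_SECONDS = 1.0  # assumed fixed overhead

def estimated_response_seconds(snippet_count: int) -> float:
    """Back-of-envelope latency estimate as the snippet count grows."""
    return BASE_SECONDS + SECONDS_PER_SNIPPET * snippet_count

for n in (1, 10, 20, 40):
    print(n, round(estimated_response_seconds(n)))  # ~4, ~33, ~65, ~129
```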
This is what happens when your model is not configured to handle your LocalDocs settings: you filled the entire context window the LLM had, so it lost information (it was forced to drop old context). That's why this is an "Advanced" setting.
Using a stronger model with a large context window is the best way to use LocalDocs to its full potential.
- Llama 3.1 8b 128k supports up to 128k context. You can set the context as high as your system's memory will hold.
- Setting the model context too high may cause a crash. Set it lower and try again.
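As a back-of-envelope check that your LocalDocs settings fit the model's context window, here is a sketch; every number below is an assumption for illustration (a common rule of thumb is roughly 4 characters per token, but the real ratio depends on the tokenizer):

```python
# All values are illustrative; substitute your actual settings.
snippet_size_chars = 512      # "Document snippet size"
max_snippets = 10             # max document snippets per prompt
context_window_tokens = 2048  # model's context length setting
chars_per_token = 4           # rough rule of thumb

retrieval_tokens = snippet_size_chars * max_snippets / chars_per_token
print(retrieval_tokens)  # 1280.0 tokens used by retrieved snippets alone

# What remains must hold the system prompt, chat history, and the model's reply.
print(context_window_tokens - retrieval_tokens)  # 768.0 tokens left
```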
For those of you who are curious about what is in your DB: the portable sqlitebrowser is a good tool for any OS to help you see what is going on inside the database. The embeddings are stored there as a vector database, and you can see all the good stuff you embedded.
In sqlitebrowser's "Browse Data" tab, you can see the snippets.
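If you prefer a script over a GUI, a minimal Python sketch can peek inside; the database filename below is a placeholder (it varies by GPT4All version), so locate the actual file in your GPT4All data directory and inspect sqlite_master before assuming any table names:

```python
import sqlite3

# Placeholder path: find the real LocalDocs database in your GPT4All data directory.
db_path = "localdocs_v2.db"

con = sqlite3.connect(db_path)
cur = con.cursor()

# List the tables first, since the schema differs between GPT4All versions.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (name,) in cur.fetchall():
    print(name)

con.close()
```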