-
You need to edit makechain.ts and modify the default prompt to fit your use case. Every question is a new request to the GPT-3.5 Turbo API, and each request has no memory of your previous requests, so your prompt needs to include detailed instructions for how the API should interpret each and every question you send it, from scratch.

The code in this repo sends some details of your most recent question/answer back to the API with your next question, which can lead to the "Hmm, I'm not sure" problem because of the way the default prompt is written. If you ask about color options for a Ford Mustang and then follow up with a question about the price of Hondas, the "context" has been limited to the previous discussion about Mustangs, so the model will now struggle to find an answer about Hondas.

You can fix this by modifying the default prompt to better handle your use case. For example, you can remove the instruction to respond with "Hmm, I'm not sure" when the answer can't be found in the recent context, and instead ask that, if the answer can't be found there, the model disregard the recent context and look for the answer in the entire document. That should fix the Mustang/Honda example above. But prompt engineering is an art, not a science, and it will take a lot of experimentation to get it working for your use case. Hope this helps, good luck!
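To make that concrete, here is a minimal sketch of what the edit might look like in makechain.ts, assuming LangChain's `PromptTemplate` (which this repo uses). The exact default wording varies between versions of the repo, so treat the template text below as illustrative, not as the actual default:

```ts
// makechain.ts (sketch): swap the "refuse if not in recent context" rule
// for a fallback that tells the model to use the whole document instead.
import { PromptTemplate } from 'langchain/prompts';

// Illustrative default behavior being removed:
//   If you can't find the answer in the context below,
//   just say "Hmm, I'm not sure." Don't try to make up an answer.

// Modified prompt: instead of refusing, disregard the recent conversation
// and look for the answer across the entire document.
const QA_PROMPT = PromptTemplate.fromTemplate(
  `You are a helpful assistant answering questions about the provided document.
Use the context below to answer the question at the end.
If the answer is not in the recent conversation context, disregard that
context and search the entire document for the answer before giving up.

Context: {context}

Question: {question}
Helpful answer:`,
);

export { QA_PROMPT };
```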
-
My fork is very flexible; you can alter the prompts inline :D There's a lot of setup, though, so it's less user friendly if you don't want to do that setup, unfortunately. I'm going to work on disabling all the bells and whistles by default soon. It basically extends the chat modes to cover most of the use cases one would have. The prompts are there to lift out, along with ways to handle history for the different modes. The temperature is set per mode to what, as far as I can tell, are the best behaviors: exact and factual for question/answer over facts, and less exact (and less prone to refusing an answer as out of context) for storytelling. At the very least it lets you quickly get a better idea of what can be done with prompting; also note how the prompts are modular, stack, and share common strings. https://github.com/groovybits/gaib/blob/main/config/personalityPrompts.ts
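If it helps to visualize the modular/stacking design, here is a minimal sketch of the idea; the identifiers below are hypothetical illustrations, not the actual names in personalityPrompts.ts:

```ts
// Sketch of modular, stacking prompts with per-mode temperature.
// All identifiers here are hypothetical, not gaib's actual API.

// A common string shared by every mode.
const BASE_RULES = `You are a helpful assistant. Answer using the provided context.`;

// Mode-specific fragments that stack on top of the common base.
const MODE_FRAGMENTS: Record<string, string> = {
  question: `Answer factually and concisely. If the context lacks the answer, say so.`,
  story: `Use the context as inspiration and tell an engaging story.`,
};

// Per-mode temperature: low for factual Q&A, higher for storytelling.
const MODE_TEMPERATURE: Record<string, number> = {
  question: 0.0,
  story: 0.9,
};

// Compose the final prompt for a mode by stacking the fragments.
function buildPrompt(mode: string): { prompt: string; temperature: number } {
  return {
    prompt: `${BASE_RULES}\n\n${MODE_FRAGMENTS[mode]}`,
    temperature: MODE_TEMPERATURE[mode],
  };
}
```

The low temperature keeps factual Q&A exact, while the higher one loosens storytelling, matching the behavior described above.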
-
You can:
-
Hi all,
I've managed to get this app running, but since I don't have access to GPT-4 I'm using GPT-3.5 Turbo. The answers are always "Hmm, I'm not sure. The context provided does not give a clear answer to the question ..." and then it shows sources. What could the problem be? And... is there a way to also fall back to ChatGPT's general knowledge when the information isn't in the embeddings?