From 3a774912602168713d8987d0883c73a44b40b127 Mon Sep 17 00:00:00 2001
From: Crizomb <62544756+Crizomb@users.noreply.github.com>
Date: Sat, 20 Apr 2024 14:00:55 +0200
Subject: [PATCH] Update README.md

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index fdb4910..0175671 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,7 @@ Select or not math mode
 Choose the pdf you want to work on
 Wait a little bit for the pdf to get vectorized (check task manager to see if your gpu is going vrum)
 
-Launch LM Studio, Go to the local Server tab, choose 1234 as server port, start server
+Launch LM Studio, go to the Local Server tab, select the model you want to run, set 1234 as the server port, and start the server
 (If you want to use open-ai or any other cloud LLM services, change line 10 of x/ai_pdf/back_end/inference.py with your api_key and your provider url)
 
 Ask questions to the chatbot
@@ -45,6 +45,7 @@ Go eat cookies
 
 - [ ] Option tabs
 - [ ] add more different embedding models
+- [ ] add menu to choose how many relevant chunks of information the vector search should retrieve from the vector db
 - [ ] menu to configure api url and api key
 
 ## Maybe in the futur
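
Note on the configuration step referenced in the hunk above: a minimal sketch of what the client setup around line 10 of ai_pdf/back_end/inference.py might look like, assuming the project uses the openai Python client; the variable names and the exact file contents are assumptions, not taken from the repo. LM Studio's local server exposes an OpenAI-compatible API on port 1234 by default.

```python
# Illustrative sketch only -- the real inference.py may differ.
from openai import OpenAI

# Local default: LM Studio serves an OpenAI-compatible API at localhost:1234.
# To use OpenAI or another cloud provider instead, swap in your api_key and base_url.
client = OpenAI(
    base_url="http://localhost:1234/v1",  # e.g. "https://api.openai.com/v1" for OpenAI
    api_key="lm-studio",                  # LM Studio ignores the key; a cloud provider needs a real one
)
```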