env step for llamaindex

This commit is contained in:
2026-02-05 22:48:39 +03:00
parent effbc7d00f
commit 0adbc29692
2 changed files with 8 additions and 1 deletions

services/rag/.DS_Store (binary, vendored; not shown)

@@ -35,7 +35,14 @@ Chosen data folder: relative ./../../../data (from the current folder)
- [x] Create file `retrieval.py` with the configuration for the chosen RAG framework that retrieves data from the vector storage based on a query. Use a retrieval library/plugin that supports the chosen vector storage within the chosen RAG framework. The retrieval function should accept the query text as an argument and return the found information together with the stored metadata (paragraph, section, page, etc.). Important: if the chosen RAG framework does not require a separate retrieval step from the vector storage, this step may be skipped and marked done.
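One possible shape for the `retrieval.py` function described above, sketched with a stubbed in-memory store so the return shape (chunk text plus stored metadata) is visible. In the real file, a LlamaIndex retriever over the chosen vector storage would replace the keyword match; the names `retrieve` and `_FAKE_STORE` are assumptions, not part of the repo:

```python
from typing import Any

# Hypothetical stand-in for the vector storage; in the real retrieval.py this
# would be a LlamaIndex retriever backed by the chosen vector store.
_FAKE_STORE = [
    {"text": "RAG combines retrieval with generation.",
     "metadata": {"section": "Intro", "paragraph": 1, "page": 3}},
    {"text": "Vector stores index embeddings for similarity search.",
     "metadata": {"section": "Storage", "paragraph": 2, "page": 7}},
]

def retrieve(query: str, top_k: int = 2) -> list[dict[str, Any]]:
    """Return matching chunks with their stored metadata.

    A naive keyword match stands in for the embedding similarity
    search the vector storage would perform.
    """
    hits = [doc for doc in _FAKE_STORE
            if any(word.lower() in doc["text"].lower()
                   for word in query.split())]
    return hits[:top_k]

for hit in retrieve("vector similarity"):
    print(hit["metadata"]["section"])  # Storage
```

The important part of the sketch is the contract: the function takes the query text as an argument and each result carries its metadata dict, so callers (e.g. the later agent) can cite paragraph, section, and page.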
# Phase 6 (models strategy: loading env and supporting OpenAI models)
- [ ] Add `CHAT_STRATEGY` and `EMBEDDING_STRATEGY` fields to .env; possible values are "openai" or "ollama".
- [ ] Add `OPENAI_CHAT_URL`, `OPENAI_CHAT_KEY`, `OPENAI_EMBEDDING_MODEL`, `OPENAI_EMBEDDING_BASE_URL`, and `OPENAI_EMBEDDING_API_KEY` with dummy values to both .env.dist and .env.
- [ ] Load the .env file in every place in the code that reads its variables.
- [ ] Create a reusable function that returns the model configuration: it checks CHAT_STRATEGY, loads the matching environment variables, and returns a config object for use.
- [ ] Call this function everywhere in the codebase where chat or embedding model configuration is needed.
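The reusable strategy-resolving function from this phase might look like the sketch below, using only `os.getenv` (in the repo, a `load_dotenv()` call from python-dotenv would typically run first to populate the environment from .env). The function name `get_model_config`, the returned dict shape, and `OLLAMA_EMBEDDING_MODEL` are assumptions; the other variable names follow the .env keys listed above:

```python
import os

def get_model_config() -> dict:
    """Resolve chat and embedding model settings from environment variables.

    CHAT_STRATEGY / EMBEDDING_STRATEGY select between "openai" and "ollama";
    each branch reads only the variables that strategy needs.
    """
    if os.getenv("CHAT_STRATEGY", "ollama") == "openai":
        chat = {
            "strategy": "openai",
            "base_url": os.getenv("OPENAI_CHAT_URL"),
            "api_key": os.getenv("OPENAI_CHAT_KEY"),
        }
    else:
        chat = {
            "strategy": "ollama",
            "model": os.getenv("OLLAMA_CHAT_MODEL"),
        }

    if os.getenv("EMBEDDING_STRATEGY", "ollama") == "openai":
        embedding = {
            "strategy": "openai",
            "model": os.getenv("OPENAI_EMBEDDING_MODEL"),
            "base_url": os.getenv("OPENAI_EMBEDDING_BASE_URL"),
            "api_key": os.getenv("OPENAI_EMBEDDING_API_KEY"),
        }
    else:
        embedding = {
            "strategy": "ollama",
            # Assumed variable name; not listed in the checklist above.
            "model": os.getenv("OLLAMA_EMBEDDING_MODEL"),
        }

    return {"chat": chat, "embedding": embedding}

# Example: force the OpenAI chat strategy for this run.
os.environ["CHAT_STRATEGY"] = "openai"
os.environ["OPENAI_CHAT_KEY"] = "dummy-key"
print(get_model_config()["chat"]["strategy"])  # openai
```

Centralizing the strategy check like this means call sites never touch `CHAT_STRATEGY` directly, which is the point of the last two checklist items.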
# Phase 7 (chat feature, as an agent, for use in the CLI)
- [ ] Create file `agent.py`, which incorporates an agent powered by the chat model. It should use the Ollama integration, with the model specified in the .env property OLLAMA_CHAT_MODEL.
- [ ] Integrate this agent with the existing retrieval solution in `retrieval.py`.
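The two items above amount to a retrieve-then-generate loop. A minimal sketch of that wiring follows, with both the retrieval call and the chat model stubbed: in the real `agent.py`, `retrieve` would come from `retrieval.py` and `chat_model` would be the LlamaIndex Ollama integration using the model named in OLLAMA_CHAT_MODEL. The function names `ask`, `retrieve`, and `chat_model` are assumptions:

```python
def retrieve(query: str) -> list[str]:
    # Stand-in for retrieval.py; returns chunk texts for the query.
    return ["RAG combines retrieval with generation."]

def chat_model(prompt: str) -> str:
    # Stand-in for the Ollama-backed chat model call.
    return f"Answer based on {prompt.count('CONTEXT:')} context block(s)."

def ask(query: str) -> str:
    """Agent entry point: retrieve context, then ask the chat model."""
    context = "\n".join(f"CONTEXT: {chunk}" for chunk in retrieve(query))
    prompt = f"{context}\nQUESTION: {query}"
    return chat_model(prompt)

print(ask("What is RAG?"))  # Answer based on 1 context block(s).
```

The design choice worth noting is that the agent only depends on the `retrieve` function's contract, so swapping the vector storage or the chat strategy (Phase 6) never touches this file.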