Vector storage Qdrant initialization and configuration

This commit is contained in:
2026-02-04 01:10:07 +03:00
parent c37aec1d99
commit f36108d652
3 changed files with 139 additions and 9 deletions


@@ -20,10 +20,10 @@ Chosen data folder: relative ./../../../data, from the current folder
- [x] Install all needed libraries for the loaders mentioned in `EXTENSIONS.md`. If some libraries require API keys for external services, add them to the `.env` file (create it if it does not exist)
# Phase 3 (preparation for storing data in the vector storage + embeddings)
- [ ] Install the library needed to use a Qdrant connection as the vector storage. Ensure the ports required by the chosen framework are used: REST API: 6333, gRPC API: 6334. The database is available and running on localhost.
- [ ] Create a file called `vector_storage.py` containing the vector storage initialization, available for import by other modules once initialized. If the chosen RAG framework requires it, add the embedding model initialization in the same file. Use Ollama with the model name defined in the `.env` file as OLLAMA_EMBEDDING_MODEL; Ollama is available on its default local port, 11434.
- [ ] Add a strategy for creating the collection for this project (name: "documents_llamaindex") if it does not exist.
- [ ] As a fallback, add the option to connect via OpenAI embeddings using an OpenRouter API key. Comment this section out so it can be enabled in the future.
- [x] Install the library needed to use a Qdrant connection as the vector storage. Ensure the ports required by the chosen framework are used: REST API: 6333, gRPC API: 6334. The database is available and running on localhost.
- [x] Create a file called `vector_storage.py` containing the vector storage initialization, available for import by other modules once initialized. If the chosen RAG framework requires it, add the embedding model initialization in the same file. Use Ollama with the model name defined in the `.env` file as OLLAMA_EMBEDDING_MODEL; Ollama is available on its default local port, 11434.
- [x] Add a strategy for creating the collection for this project (name: "documents_llamaindex") if it does not exist.
- [x] As a fallback, add the option to connect via OpenAI embeddings using an OpenRouter API key. Comment this section out so it can be enabled in the future.
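The Phase 3 items above could be sketched roughly as below. This is a hypothetical outline of `vector_storage.py`, not the commit's actual code: the function names, the embedding dimension (768), and the environment-variable handling are assumptions; only the collection name, ports, and model variable come from the checklist.

```python
# vector_storage.py -- hypothetical sketch of the Phase 3 module; names and
# defaults are assumptions, not taken verbatim from the commit.
import os

COLLECTION_NAME = "documents_llamaindex"   # collection name from the checklist
QDRANT_HOST = "localhost"
QDRANT_REST_PORT = 6333                    # REST API port
QDRANT_GRPC_PORT = 6334                    # gRPC API port
OLLAMA_BASE_URL = "http://localhost:11434" # Ollama default local port


def init_vector_storage():
    """Connect to Qdrant, create the collection if missing, and return
    the vector store plus the Ollama embedding model."""
    # Imports are deferred so other modules can import the constants
    # without the RAG framework installed.
    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, VectorParams
    from llama_index.embeddings.ollama import OllamaEmbedding
    from llama_index.vector_stores.qdrant import QdrantVectorStore

    client = QdrantClient(
        host=QDRANT_HOST, port=QDRANT_REST_PORT, grpc_port=QDRANT_GRPC_PORT
    )

    # Collection-creation strategy: create only if it does not exist.
    if not client.collection_exists(COLLECTION_NAME):
        client.create_collection(
            collection_name=COLLECTION_NAME,
            # 768 is an assumed embedding dimension (e.g. nomic-embed-text);
            # it must match the model named in OLLAMA_EMBEDDING_MODEL.
            vectors_config=VectorParams(size=768, distance=Distance.COSINE),
        )

    # Embedding model name comes from the .env file.
    embed_model = OllamaEmbedding(
        model_name=os.environ["OLLAMA_EMBEDDING_MODEL"],
        base_url=OLLAMA_BASE_URL,
    )

    store = QdrantVectorStore(client=client, collection_name=COLLECTION_NAME)
    return store, embed_model


# Fallback kept commented out, per the checklist: OpenAI-compatible
# embeddings routed through OpenRouter (endpoint URL is an assumption).
# from llama_index.embeddings.openai import OpenAIEmbedding
# embed_model = OpenAIEmbedding(
#     api_key=os.environ["OPENROUTER_API_KEY"],
#     api_base="https://openrouter.ai/api/v1",
# )
```

Deferring the client imports into the function keeps the module importable even where Qdrant and the framework libraries are not installed, which matches the requirement that other modules import this file.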
# Phase 4 (creating module for loading documents from the folder)