LangChain plan phases for OpenAI integration (OpenAI-compatible endpoint) and a server for retrieving data

2026-02-04 21:34:22 +03:00
parent ea4ce23cd9
commit ae8c00316e


@@ -39,3 +39,12 @@ Chosen data folder: relative ./../../../data - from the current folder
- [x] Create file `agent.py`, which contains an agent powered by the chat model. It should use the Ollama integration, with the model specified in .env via the OLLAMA_CHAT_MODEL property.
- [x] Integrate this agent with the existing retrieval solution in retrieval.py.
- [x] Integrate this agent with the CLI as a command that starts a chat with the agent. If there is a built-in solution for console communication with the agent, start it on that CLI command (a minimal sketch follows this list).
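
For reference, a minimal sketch of what the finished `agent.py` might look like, assuming `python-dotenv` for .env handling and `langchain-ollama` for the chat model. The `build_chat_model` and `chat_loop` names are hypothetical, and the retrieval.py wiring is omitted:

```python
import os

from dotenv import load_dotenv            # pip install python-dotenv
from langchain_ollama import ChatOllama   # pip install langchain-ollama

load_dotenv()

def build_chat_model() -> ChatOllama:
    # OLLAMA_CHAT_MODEL comes from .env, as the plan specifies.
    return ChatOllama(model=os.environ["OLLAMA_CHAT_MODEL"])

def chat_loop() -> None:
    # Simple console loop; the real CLI command would call this.
    model = build_chat_model()
    while True:
        query = input("you> ").strip()
        if query in {"exit", "quit"}:
            break
        reply = model.invoke(query)
        print(f"agent> {reply.content}")

if __name__ == "__main__":
    chat_loop()
```
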
# Phase 7 (OpenAI integration for chat model)
- [ ] Create the OpenAI integration using the .env variables `OPENAI_CHAT_URL` and `OPENAI_CHAT_KEY`: the first is the OpenAI-compatible URL, the second is the Authorization Bearer token.
- [ ] Make this integration optional via the .env variable `CHAT_MODEL_STRATEGY`, which accepts two options: "ollama" and "openai". The Ollama integration is already done and working, so the code should check which option is chosen in .env, with "ollama" as the default (see the sketch below).
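
A minimal sketch of the strategy check, assuming `langchain-openai` for the OpenAI side. `OPENAI_CHAT_MODEL` is an assumed extra variable, since the plan does not name one for the model id:

```python
import os

from dotenv import load_dotenv
from langchain_core.language_models import BaseChatModel
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI   # pip install langchain-openai

load_dotenv()

def build_chat_model() -> BaseChatModel:
    # Ollama is the default when CHAT_MODEL_STRATEGY is missing or unset.
    strategy = os.getenv("CHAT_MODEL_STRATEGY", "ollama").lower()
    if strategy == "openai":
        return ChatOpenAI(
            base_url=os.environ["OPENAI_CHAT_URL"],  # OpenAI-compatible URL
            api_key=os.environ["OPENAI_CHAT_KEY"],   # Authorization Bearer token
            model=os.getenv("OPENAI_CHAT_MODEL", "gpt-4o-mini"),  # assumed variable
        )
    return ChatOllama(model=os.environ["OLLAMA_CHAT_MODEL"])
```
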
# Phase 8 (HTTP endpoint to retrieve data from the vector storage by query)
- [ ] Create file `server.py` using a web framework.
- [ ] Add a POST endpoint "/api/test-query" that passes the query to the agent and returns its response; the query is sent in the JSON body in a "query" field (see the sketch below).
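
A minimal sketch of `server.py`, assuming FastAPI as the web framework (the plan leaves the framework open) and reusing the hypothetical `build_chat_model` factory from the sketch above:

```python
from fastapi import FastAPI                # pip install fastapi uvicorn
from pydantic import BaseModel

from agent import build_chat_model        # hypothetical factory from agent.py

app = FastAPI()
model = build_chat_model()

class QueryRequest(BaseModel):
    query: str  # JSON body: {"query": "..."}

@app.post("/api/test-query")
def test_query(request: QueryRequest) -> dict:
    # Forward the query to the chat model and return its answer.
    reply = model.invoke(request.query)
    return {"response": reply.content}

# Run with: uvicorn server:app --reload
```
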