Last week I listened to the podcast SoftwareArchitekTOUR – Episode 102: Zuverlässige KI-Architektur from heise online (German only, sorry).
I really liked the part where they discussed the technical side of a semantic search. It especially struck me that the LLM is actually only used once per document, not in the search itself. It also suddenly became clear to me why you want (or need) a vector database for such an AI-supported search.
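To make that point concrete, here is a minimal sketch of the idea as I understood it. It assumes the sentence-transformers library with the all-MiniLM-L6-v2 model and keeps the vectors in a plain NumPy array; a real setup would store them in a vector database, which is exactly the part that makes the similarity search fast at scale.

```python
# Minimal semantic-search sketch: the embedding model runs once per document
# at indexing time; at query time only the query string is embedded.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed available

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model would do

documents = [
    "Vector databases store embeddings for fast similarity search.",
    "LLMs can summarize long documents.",
    "Kubernetes schedules containers across a cluster.",
]

# Indexing: embed every document ONCE and keep the vectors.
# In practice these would go into a vector database, not an in-memory array.
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    # Only the query is embedded at search time; no per-document LLM call.
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector        # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]    # this nearest-neighbor step is what a vector DB accelerates
    return [(documents[i], float(scores[i])) for i in best]

print(search("Why do I need a vector database?"))
```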
Check it out if you understand German, or give a translation a try.