Falco
Running https://hf.co/Qwen/Qwen3-Embedding-0.6B with https://github.com/huggingface/text-embeddings-inference (a fast inference server for text embedding models) should be doable on a server with 2-4 GB RAM and no GPU.
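A minimal sketch of what that setup could look like with the CPU Docker image of text-embeddings-inference (the image tag and port are assumptions; check the project's README for the current tag):

```shell
# Serve Qwen3-Embedding-0.6B on CPU; ./data caches the model weights.
# cpu-latest is an assumed tag; pin a real release from the TEI README.
docker run -p 8080:80 -v "$PWD/data:/data" \
  ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
  --model-id Qwen/Qwen3-Embedding-0.6B

# Then request embeddings over HTTP:
curl 127.0.0.1:8080/embed \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "What is deep learning?"}'
```

Memory use will depend on the model dtype and batch size, so it's worth testing with your actual workload before sizing the server.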