Differences in search latency between AI semantic and keyword search

Can you expand on what you mean by latency here?

For Related Topics, since every embedding is pre-calculated, there is no extra runtime cost. Quite the contrary: the SQL query that finds related topics is faster than our old suggested-topics query, and we cache related topics for even better performance.
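To make the "no extra runtime cost" point concrete, here is a minimal sketch of what a related-topics lookup over precomputed embeddings looks like. The topic names, vectors, and function are invented for illustration; in production this would be a vector-index SQL query rather than in-memory math:

```python
import numpy as np

# Hypothetical data: embeddings computed ahead of time (e.g. when a topic is
# created or edited), so no model call happens at lookup time.
topic_embeddings = {
    "installing plugins": np.array([0.9, 0.1, 0.0]),
    "writing plugins":    np.array([0.8, 0.2, 0.1]),
    "theme development":  np.array([0.1, 0.9, 0.2]),
    "backup and restore": np.array([0.0, 0.1, 0.9]),
}

def related_topics(topic: str, k: int = 2) -> list[str]:
    """Return the k topics whose precomputed embeddings have the highest
    cosine similarity to the given topic's embedding."""
    query = topic_embeddings[topic]
    scores = {
        name: float(np.dot(query, vec)
                     / (np.linalg.norm(query) * np.linalg.norm(vec)))
        for name, vec in topic_embeddings.items()
        if name != topic
    }
    # Most similar first; this is the only work done at request time.
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Because the expensive step (computing embeddings) already happened, the request-time work is just a nearest-neighbor ranking, which caches well.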

As for AI Search, our current HyDE[1] approach incurs serious latency, which is why it runs asynchronously: the user is first presented with the standard search results, along with the option to augment them with AI results once those are ready. Here on Meta the AI search results arrive 4 seconds after the normal search results, on average.
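The async flow described above can be sketched roughly like this. All function bodies are stand-ins (not Discourse's actual code), and the HyDE steps are reduced to comments:

```python
import asyncio

async def keyword_search(query: str) -> list[str]:
    # Fast path: a plain index lookup, returned immediately.
    return [f"keyword hit for '{query}'"]

async def hyde_search(query: str) -> list[str]:
    # Slow path (HyDE): 1) an LLM writes a hypothetical answer document,
    # 2) that document is embedded, 3) a vector search finds real documents
    # near it. The sleep stands in for the LLM call's latency.
    hypothetical_doc = f"a plausible answer to: {query}"
    await asyncio.sleep(0.01)
    return [f"semantic hit derived from '{hypothetical_doc}'"]

async def search(query: str) -> tuple[list[str], asyncio.Task]:
    # Kick off the AI search in the background, but don't wait for it:
    # the user sees standard results first, and the AI results are offered
    # as an augmentation once the task completes.
    ai_task = asyncio.create_task(hyde_search(query))
    standard = await keyword_search(query)
    return standard, ai_task
```

The key design point is that the slow LLM round-trip never blocks the initial response; it only gates the optional AI augmentation.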


  1. GPT-4: HyDE stands for Hypothetical Document Embeddings, a technique used in semantic search to find documents based on similarities in their content. This approach enables more precise and contextually relevant search results by assessing the conceptual similarities between documents, rather than relying solely on keyword matching. It represents a zero-shot learning technique that combines GPT-3’s language understanding capabilities with contrastive text encoders, enhancing AI’s ability to comprehend and process natural language data in a more nuanced and effective manner. ↩︎
