Is it possible to make semantic search the default search on the site? How much do these calls cost?

Semantic search works really well and gives better results than regular search. We can use it throughout the site, but it is not the default; it has to be selected manually each time you search.

Is it possible to make this search the default? And is there a cost for each of these calls?

2 Likes

You can check out the Algolia Plugin for Discourse. I’ve got it working on a few of my sites:

https://meta.discourse.org/search?q=algolia

1 Like

We are revamping how Semantic Search works, and it will show its results alongside the default search results, which should cover your needs. It should land in a couple of weeks.

10 Likes

We just shipped semantic search integrated into the regular search in Discourse AI. It’s now using HyDE to achieve better results, so let me know how it goes for you. One big change is that you must have both the embeddings module and an LLM service configured (either OpenAI, Anthropic, or Llama2).
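
Roughly, the flow looks like this (a simplified sketch, not our actual implementation; `generate_hypothetical_post`, `embed`, and the in-memory index are placeholders):

```python
# Simplified HyDE flow: the user's query is first expanded into a hypothetical
# post by the configured LLM, and retrieval is then done against the embedding
# of that post instead of the raw query. Names below are placeholders.
import numpy as np

def generate_hypothetical_post(query: str) -> str:
    """Ask the configured LLM (OpenAI, Anthropic or Llama2) to write a plausible
    forum post answering the query. Placeholder for the real call."""
    raise NotImplementedError

def embed(text: str) -> np.ndarray:
    """Turn text into a vector via the configured embeddings module. Placeholder."""
    raise NotImplementedError

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query: str, topic_embeddings: dict[int, np.ndarray], k: int = 10):
    hypothetical = generate_hypothetical_post(query)  # the LLM generation step
    q = embed(hypothetical)                           # embed the hypothetical post
    scored = [(topic_id, cosine(q, vec)) for topic_id, vec in topic_embeddings.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```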

You can test it here on Meta.

5 Likes

Looks great. I’m excited to use it soon. You are all awesome.

1 Like

Having played with it a little now, some observations:

  • The results are different from those of regular search. They’re not always better, because a keyword/relevance search sometimes gets the best hit when you already know which keywords to search for, but …
  • It does return a wider set of results and these are sometimes really useful.
  • It’s currently really slow for me. I realise this is somewhat unavoidable because there is an LLM generation step in front of the retrieval, so I’m not sure it can be fixed, but it’s worth knowing. It seems slower than a short gpt-3.5 call would be…

Some UI points:

  • It’s not always obvious which part of a returned result is relevant. In an app I’m working on, I chunk documents quite small (sentences/paragraphs) before creating embeddings, which makes it possible, when searching/retrieving, to colour each sentence according to its semantic similarity (see the first sketch after this list). This would be a bit like highlighting the keywords from a search, but would look more like a heat map, with semantically similar parts coloured hot and dissimilar parts cold.
  • It’s annoying to have to click to expand the semantic results.
  • Have you thought of ways to combine keyword and semantic results? Would it be possible to choose ‘similarity’ or ‘relevance’ as the sort order for the result set? That way, even if you picked ‘similarity’, you could start by presenting the keyword-based results and insert HyDE-retrieved documents into the list as they arrive (see the second sketch after this list).
  • It would be really interesting (to me at least) to be able to see the hypothetical document used to do the similarity matching. I can imagine sometimes wanting to edit this document… and because cosine similarity is relatively cheap (compared to the LLM generation call), it would still be quite fast for the UI to update the results as the user updates their query/hypothetical document.
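
To make the heat-map idea from the first point concrete, here is roughly what I do in my own app (just a sketch; `embed` stands in for whatever embeddings endpoint you use, and the sentence splitting is deliberately naive):

```python
# Sketch of per-sentence similarity scoring for a heat-map style display.
# `embed` is a stand-in for whatever embeddings service is configured.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError  # placeholder for the real embeddings call

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sentence_heatmap(query: str, document: str) -> list[tuple[str, float]]:
    """Score each sentence of a result against the query, so the UI can colour
    similar passages hot and dissimilar ones cold."""
    q = embed(query)
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return [(sentence, cosine(q, embed(sentence))) for sentence in sentences]
```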

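And for the keyword/semantic combination point, the kind of merge I have in mind looks something like this (again just a sketch; the result shape and score fields are made up for illustration):

```python
# Sketch of the "sort order" idea: keyword results appear immediately with a
# relevance score, and HyDE-retrieved results are merged in by similarity as
# they arrive. All field names are made up for illustration.
from dataclasses import dataclass

@dataclass
class Result:
    topic_id: int
    title: str
    relevance: float = 0.0   # keyword/relevance score
    similarity: float = 0.0  # semantic (cosine) score

def merge_results(keyword: list[Result], semantic: list[Result], sort_by: str = "relevance"):
    by_id = {r.topic_id: r for r in keyword}
    for r in semantic:
        if r.topic_id in by_id:
            by_id[r.topic_id].similarity = r.similarity  # topic found by both searches
        else:
            by_id[r.topic_id] = r                        # semantic-only hit, inserted late
    key = (lambda r: r.similarity) if sort_by == "similarity" else (lambda r: r.relevance)
    return sorted(by_id.values(), key=key, reverse=True)
```
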
All in all, this is really cool, thanks! It will be great when this is implemented such that the chatbot can use the results.

B

1 Like

We managed to make it go from 45s to 7s this week alone with some clever optimizations, and we are using it to populate a cache so recurring searches are instant.
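
Conceptually the cache just keys the expensive HyDE + embedding work on the search term, something like this (a simplified sketch, not our actual code):

```python
# Conceptual sketch of the cache: the slow HyDE + embedding + ranking path runs
# once per distinct query, so repeating the same search returns instantly.
# `expensive_semantic_search` is a placeholder, not the real implementation.
_cache: dict[str, list[int]] = {}

def cached_semantic_search(query: str) -> list[int]:
    key = query.strip().lower()
    if key not in _cache:
        _cache[key] = expensive_semantic_search(key)  # LLM + embedding + ranking
    return _cache[key]

def expensive_semantic_search(query: str) -> list[int]:
    raise NotImplementedError  # placeholder for the slow path
```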

At the moment we do a single embedding per topic. We plan on doing embeddings per post, and that will make this search even better in the future.

The current UI is temporary and wasn’t made by our design team. Our goal was to get it out there and let our community play with it to gather feedback on the feature’s functional aspects. A proper interface will follow shortly.

That would make the results move while you are scanning them, which is a big no-no in UX. Our main goal is to find a way to present both sets of results so that they are helpful and add to the search experience without being annoying.

Yes, which is why we moved both to happen on the same screen with a single input, compared to what we had before. Further integration depends on the UI paradigm we pick for this screen.

On your own instance you can query the ai_api_audit_logs table for this. For example, a search for Discourse app freezing on iOS here earlier resulted in the following hypothetical post:

Subject: Discourse app freezing on iOS

Hey all, I've been using the Discourse app on my iPhone for a few months now and lately I've been experiencing it freezing up quite a bit. The app will just lock up and become unresponsive, usually when I'm trying to load new posts or navigate between categories. It seems to happen more frequently when I have several topics open at once and am toggling back and forth between them. The loading spinners will spin indefinitely and tapping buttons does nothing. Eventually it will reload but it's getting pretty annoying. I'm running the latest version of iOS 13.3 on an iPhone 8 Plus. Has anyone else been seeing this issue lately? The forums themselves load fine in a mobile browser, it's just the dedicated app that's acting up. I've tried force closing and reopening the app a few times but that doesn't seem to fix it. Any suggestions from other Discourse mobile users on how to resolve these freezing problems? I'd hate to have to stop using the app if it continues to lock up on me. Thanks in advance for any help or advice!

Since we made a few tweaks to the prompt, I’m pleasantly surprised by the results.
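
If you want to dig through those entries yourself, something along these lines works against the database (just a sketch; only the standard `created_at` Rails timestamp column is assumed here, so adjust the connection string and inspect the columns on your own schema):

```python
# Rough sketch of pulling recent entries out of ai_api_audit_logs with plain
# Python/psycopg2. Only created_at (a standard Rails timestamp column) is
# assumed; whole rows are fetched so you can inspect whatever columns exist.
import psycopg2

conn = psycopg2.connect("dbname=discourse")  # adjust the DSN for your instance
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM ai_api_audit_logs ORDER BY created_at DESC LIMIT 5")
    columns = [desc[0] for desc in cur.description]
    for row in cur.fetchall():
        print(dict(zip(columns, row)))  # look for the hypothetical post in the payload
```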

That’s an interesting proposal, but this flow would be quite complicated to explain to the average user. That said, I quite like what Shopify did in their admin UI, where they allow you to override some AI-suggested product recommendations. Eventually we could do the same here.

That has already been the case since two days ago. The AI Bot sources a quarter of its internal search results using this technique.

5 Likes

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.