Discourse Chatbot 🤖

:information_source: Summary The Original AI Chatbot for Discourse
:hammer_and_wrench: Repository Link GitHub - merefield/discourse-chatbot: An AI bot with RAG capability for Topics and Chat in Discourse, currently powered by OpenAI
:open_book: Install Guide How to install plugins in Discourse
:heart: Sponsorship Please consider becoming an ongoing sponsor of my open source work, at a level that suits your or your organisation's resources and needs, to ensure this plugin gets the maintenance it deserves and continues to work for your site in the future.

Enjoying this plugin? Please :star: it on GitHub ! :pray:

:information_source: As this is an independently contributed plugin, please post support questions, bug reports, UX issues, and feature requests here in this topic. NB This plugin is not available on Discourse.org hosting. You can install it if you are self-hosted, and possibly if you are on third-party hosting (but check with your provider).

What is it?

  • The original Discourse AI Chatbot!
  • You can use this bot for some Customer Support information tasks (see this guide Building a technical support chatbot)
  • Converse with the bot in any Topic or Chat Channel, one to one or with others! Supports Threads.
  • Customise the character of your bot to suit your forum!
    • Want it to sound like William Shakespeare or Winston Churchill? Can do!
  • The “RAG Mode” bot can:
    • Search your forum for answers so the bot can be an expert on the subject of your forum.
      • Rerank the searches to favour particular groups of authors or Topic tags
      • so it is not limited to the information in the current Topic or Channel.
    • Ask your users privately about their outstanding incomplete User Fields.
    • Render and edit AI pictures
    • Search Wikipedia
    • Search current news*
    • Search Google*
    • Crawl remote sites*
    • Return current End Of Day market data for stocks.*
    • Do “complex” maths accurately (with no made up or “hallucinated” answers!)
    • These “tools” can be extended with a plugin, see below.
  • Vision support - the bot can see your pictures and answer questions on them!
  • Automatically respond to new Topics in specific Categories
  • Uses the cutting-edge Open AI API and the function-calling capability of their excellent, industry-leading Large Language Models.
  • Includes a special quota system to manage access to the bot: more trusted and/or paying members can have greater access to the bot!
    • Meter usage by Query or by Tokens.
  • Also supports Azure and proxy server connections.
    • Use third party proxy processes to translate the calls to support alternative LLMs like Gemini e.g. this one

*sign-up for external (not affiliated) API services required. Links in settings.

RAG mode is very smart and knows facts posted on your forum:

Basic bot mode can sometimes make mistakes, but is cheaper to run because it makes fewer calls to the Large Language Model:


(Sorry China! :wink: )

:biohazard: Bot’s “vision” - what it can see (potentially share) and privacy :biohazard:

This bot can be used in public spaces on your forum. To make the bot especially useful there is RAG mode (one setting per bot trust level). This is not enabled by default.

In this mode the bot is, by default, privy to all content a Trust Level 1 user would see, working from this setting:

Thus, if interacted with in a public-facing Topic, there is a possibility the bot could “leak” information if you gate content at Trust Level 0 or 1 via Category permissions. This level was chosen because, in experience, most sites do not gate sensitive content at low trust levels, but it depends on your specific needs.

For this mode, make sure you have at least one user with Trust Level 1 and no group membership beyond the automated groups. (Bear in mind the bot will then know everything a TL1 user would know and can share it.) You can choose to lower chatbot embeddings benchmark user trust level if you have a Trust Level 0 user with no group membership beyond the automated groups.

Alternatively:

  • Switch chatbot embeddings strategy to category and populate chatbot embeddings categories with the Categories you wish the bot to know about. (Be aware that if you add any private Categories, the bot will know about those too, and anything it says in public, anywhere, might leak to less privileged users, so be a bit careful about what you add.)
  • Only use the bot in normal mode (but the bot then won’t see any posts)
  • Mitigate with moderation

In addition, note anything it can “see” gets shared with Open AI.

You can see that this setup is a compromise. In order to make the bot useful it needs to be knowledgeable about the content on your site. Currently it is not possible for the bot to selectively read members-only content and share it only with members, which some admins might find limiting, but there is no easy way to solve that whilst the bot is able to talk in public. Contact me if you have special needs and would like to sponsor some work in this space. Bot permissioning with semantic search is a non-trivial problem. The system is currently optimised for speed. NB Private Messages are never read by the bot.

FYI’s

  • May not work on multi-site installs (not explicitly tested), but PRs to improve support are welcome :+1:
  • Open AI API responses can be slow at times on more advanced models due to high demand. However, Chatbot supports GPT 3.5 too, which is fast, responsive, and perfectly capable.
  • Is extensible, and supporting other cloud bots is intended (hence the generic name for the plugin), but it currently ‘only’ supports interaction with Open AI Large Language Models (LLMs) such as GPT-4 natively. Please contact me if you wish to add additional bot types or want to support me to add more. PRs welcome. You can already use proxy servers to access other services without code changes, though!
  • Is extensible to support the searching of other content beyond just the current set provided.

Setup

Prerequisites

Aside from the normal changes to app.yml you need to be aware of the following:

To build with Chatbot or AI Topic Summary you need at least version 0.5.1 of the pgvector postgres extension.

Most people will have at least this version already. However, occasionally some installs have an older version installed. This will prevent you building with an error similar to: PG::UndefinedObject: ERROR: access method "hnsw" does not exist

First make sure your container is running:

./launcher restart app

then enter your container

./launcher enter app

then go into the database and update the version of pgvector:

:/var/www/discourse# su postgres -c 'psql discourse'
\dx
ALTER EXTENSION vector UPDATE;
\dx
exit

now leave the container with exit

You should now be able to rebuild.

Creating the Embeddings

If you wish Chatbot to know about the content on your site, turn this setting ON:

chatbot_embeddings_enabled

Only necessary if you want to use the RAG type bot and ensure it is aware of the content on your forum, not just the current Topic.

Initially, we need to create the embeddings for all in-scope posts, so the bot can find forum information. This now happens in the background once this setting is enabled and you do not need to do anything.

This seeding job can take several days for very big sites.

Embeddings Scope

This is determined by several settings:

  • chatbot_embeddings_strategy which can be either “benchmark_user” or “category”
  • chatbot_embeddings_benchmark_user_trust_level sets the relevant trust level for the former
  • chatbot_embeddings_categories if the category strategy is set, gives the bot access to all posts in the specified Categories.

If you change these settings, the population of embeddings will gradually adjust over time.

To speed population up

Enter the container:

./launcher enter app

and run the following rake command:

rake chatbot:refresh_embeddings[1]

which at present runs twice for an unknown reason (sorry! feel free to PR), but the [1] ensures that the second pass only adds missing embeddings (i.e. none, immediately after the first run), so this is somewhat moot.

In the unlikely event you get rate-limited by OpenAI, you can complete the embeddings by doing this:

rake chatbot:refresh_embeddings[1,1]

which will fill in the missing ones (so nothing is lost from the error) but will continue more cautiously, putting a 1-second delay between each call to Open AI.

Compared to bot interactions, embeddings are not expensive to create, but do watch your usage on your Open AI dashboard in any case.

NB Embeddings are only created for Posts, and only those Posts a Trust Level One user would have access to. This seemed like a reasonable compromise. It will not create embeddings for content accessible only to Trust Level 2 and above.

Useful Data Explorer query to monitor embeddings population

@37Rb writes: “Here’s a SQL query I’m using with the Data Explorer plugin to monitor & verify embeddings… in case it helps anyone else.”

SELECT e.id, e.post_id AS post, p.topic_id AS topic, p.post_number,
       p.topic_id, e.created_at, e.updated_at, p.deleted_at AS post_deleted
FROM chatbot_post_embeddings e LEFT JOIN posts p ON e.post_id = p.id
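
A companion query in the same vein (an untested sketch using the same tables) to count non-deleted posts that do not yet have an embedding, useful for watching the background seeding job progress:

```sql
-- Untested sketch: how many non-deleted posts still lack an embedding?
SELECT COUNT(*) AS posts_missing_embeddings
FROM posts p
LEFT JOIN chatbot_post_embeddings e ON e.post_id = p.id
WHERE e.id IS NULL
  AND p.deleted_at IS NULL
```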

Error when you are trying to get an embedding for too many characters.

You might get an error like this:

OpenAI HTTP Error (spotted in ruby-openai 6.3.1): {"error"=>{"message"=>"This model's maximum context length is 8192 tokens, however you requested 8528 tokens (8528 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", "type"=>"invalid_request_error", "param"=>nil, "code"=>nil}}

This is how you resolve it …

As per your error message, the embedding model has a limit of:

8192 tokens

however you requested 8528

You need to drop the current value of this setting:

chatbot_open_ai_embeddings_char_limit:

by about 4 x the diff and see if it works (a token is roughly 4 characters).

So, in this example, 4 x (8528 - 8192) = 1344

So drop the current value of chatbot_open_ai_embeddings_char_limit by 1500 to be safe. The default value was chosen after a lot of testing on English Posts, but for other languages it may need lowering.

This will then cut off more text (and thus request fewer tokens), and hopefully the embedding will go through. If not, confirm the difference and reduce the value further accordingly. Eventually it will be low enough that you won’t need to look at it again.
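
The arithmetic above can be sketched as a tiny helper (illustrative only; the function name is made up and not part of the plugin):

```python
# Hypothetical helper illustrating the char-limit arithmetic described above.
def suggested_char_limit_reduction(requested_tokens, max_tokens, chars_per_token=4):
    """Return a conservative number of characters to subtract from
    chatbot_open_ai_embeddings_char_limit, rounded up to the nearest 100."""
    excess_tokens = requested_tokens - max_tokens
    reduction = excess_tokens * chars_per_token
    # Round up to the nearest 100 for a safety margin.
    return -(-reduction // 100) * 100

# Worked example from the error above: 8528 tokens requested vs 8192 allowed.
print(suggested_char_limit_reduction(8528, 8192))  # 1400
```

This lands close to the “drop by 1500 to be safe” figure above; round up further if you want more headroom.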

How To Switch Embeddings model

You don’t need to do anything but change the setting: the background job will take care of things, albeit gradually.

If you really want to speed the process up, do:

  • Change the setting chatbot_open_ai_embeddings_model to your new preferred model
  • It’s best to first delete all your current embeddings:
    • go into the container ./launcher enter app
    • enter the rails console rails c
    • run ::DiscourseChatbot::PostEmbedding.delete_all
    • exit (to return to root within container)
  • run rake chatbot:refresh_embeddings[1]
  • if for any Open AI side reason that fails part way through, run it again until you get to 100%
  • the new model is known to be more accurate, so you might have to drop chatbot_forum_search_function_similarity_threshold or you might get no results :). I dropped mine from the default of 0.8 to 0.6, but your mileage may vary.
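
The effect of that threshold can be illustrated with a toy filter (a sketch, not the plugin’s code): results are kept only if their similarity score to the query clears the threshold, so a new embedding model with a different score distribution may need a lower value.

```python
# Illustrative sketch (not plugin code): keep only results whose cosine
# similarity to the query meets the configured threshold.
def filter_results(scored_posts, threshold):
    """scored_posts: list of (post_id, cosine_similarity) pairs."""
    return [post_id for post_id, score in scored_posts if score >= threshold]

scored = [(101, 0.75), (102, 0.62), (103, 0.41)]
print(filter_results(scored, 0.8))  # []  (too strict for this model's scores)
print(filter_results(scored, 0.6))  # [101, 102]
```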

Bot Type

Take a moment to read through the entire set of Plugin settings. The chatbot bot type setting is key, and there is one for each chatbot “Trust Level”:

RAG mode is superior but makes more calls to the API, potentially increasing cost. That said, its reduced propensity to output ‘hallucinations’ may let you drop down from GPT-4 to GPT-3.5, so you may end up spending less despite a significant increase in the usefulness and reliability of the output. GPT 3.5 is also a better fit for the Agent type based on response times. A potential win-win! Experiment!

For Chatbot to work in Chat you must have Chat enabled.

How to get the bot to respond

Basic rules

  • If permissioned in plugin settings, the bot will Reply to every Post or Message in a Topic or Chat Channel with only a single User, until more people join.
  • The bot will always be invoked if you @ mention it.

Special per Category auto-responder

  • Per Category, you can get Chatbot to Reply to every new Topic and you can give it special instructions on what to do, governed by a Category specific prompt that you set up in the Category settings.

Should start something like:

“Welcome me by saying hello and introduce yourself. Share with me 5 posts on the forum by using local forum search relevant to my first post, include links”

i.e. you need to write it in the first person, as if a user were asking for help. It is not a system prompt; it does not describe how the bot should behave (that’s for the system prompt, which is still sent). It is a user asking for specific help (albeit hidden). Basically, write it as you would a user post, without having to mention the bot.

Bot’s speed of response

This is governed mostly by a setting, chatbot_reply_job_time_delay, over which you have discretion.

The intention of having this setting is to:

  • protect you from reaching rate limits of Open AI
  • protect your site from users that would like to spam the bot and cost you money.
  • allow Discourse enough time to upload hotlinked images so the bot can see them.

It now defaults to ‘2’ seconds and can be reduced to zero :racing_car: , but be aware of the above risks. Bot vision (if you are using it) is particularly sensitive to this setting. If you find the bot is not seeing your images, try increasing this value.

Set this to zero and the bot, even in ‘agent’ mode, becomes a lot more ‘snappy’.

Obviously this can be a bit artificial (no real person would actually type that fast), but set it to your taste and wallet size.

NB I cannot directly control the response speed of Open AI’s API. The general rule used to be that the more sophisticated the model, the slower the response; what is more generally the case now is that the ‘mini’ models tend to be quicker.

AI powered User Fields collection (experimental)

If a User has Optional User Fields that are currently blank, enabling this setting will cause the bot to start asking the user for the information. It will progress through the outstanding User Fields until they are completed, then revert to normal behaviour.

image

(sorry for slowness of response - this was recorded in dev)

NB This feature only supports:

  • Text
  • Dropdowns
  • Confirmations

Multi-select is not yet supported.

The fields must be optional

The order of the User Fields determines priority.

Ollama & llama3 support

Ollama support for (the completely AWESOME!) llama3 is now shipping:

This is for when the bot is run locally in dev, or in the cloud with an ollama server, and in Basic mode:

  • make sure the model is set to llama3:

  • the custom URL needs to be set to http://localhost:11434:

image

If you have a big enough server, you could serve ollama in the cloud there.

DeepSeek Hype :sweat_smile:

Get in on the act with Chatbot :rocket:

You can use the bot, in at least Basic mode, to access V3 and R1

I’ve used:

(no affiliate)

which hosts their models.

Set up like so:

Be sure to set Basic bot mode and substitute your key.

But you may be able to use DeepSeek AI directly if you can access their website to sign up :sweat_smile:

Base URL would be “https://api.deepseek.com”.
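
For orientation, an OpenAI-compatible request against that base URL might be composed like this (a sketch only; the endpoint path and model name are assumptions, so check DeepSeek’s documentation):

```python
import json

# Sketch: composing an OpenAI-style chat request for an alternative provider.
# The "/chat/completions" path and "deepseek-chat" model name are assumptions.
def build_chat_request(base_url, api_key, model, user_message):
    return {
        "url": base_url.rstrip("/") + "/chat/completions",
        "headers": {
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

req = build_chat_request("https://api.deepseek.com", "sk-...", "deepseek-chat", "Hello!")
print(req["url"])  # https://api.deepseek.com/chat/completions
```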

OpenAI

You must get a token from https://platform.openai.com/ in order to use the current bot. A default language model is set (one of the most sophisticated), but you can try a cheaper alternative; the list is here

There is an automated part of the setup: upon addition to a Discourse instance, the plugin currently sets up an AI bot user with the following attributes:

  • Name: ‘Chatbot’
  • User Id: -4
  • Bio: “Hi, I’m not a real person. I’m a bot that can discuss things with you. Don’t take me too seriously. Sometimes, I’m even right about stuff!”
  • Group Name: “ai_bot_group”
  • Group Full Name: “AI Bots”

You can edit the name, avatar and bio (see locale string in admin → customize → text) as you wish but make it easy to mention.

It’s not free, so there’s a quota system, and you have to set this up

Initially no-one will have access to the bot, not even staff.

Calling the Open AI API is not free after an initial free allocation has expired! So, I’ve implemented a quota system to keep this under control, keep costs down and prevent abuse. The cost is not crazy with these small interactions, but it may add up if it gets popular. You can read more about OpenAI pricing on their pricing page.

In order to interact with the bot you must belong to a group that has been added to one of the three sets of trusted groups: low, medium & high trust. You can modify the number of allowed interactions per week for each trusted group set in the corresponding settings.

You must populate the groups too. That configuration is entirely up to you. They start out blank so initially no-one will have access to the bot:

In this example I’ve made staff have high trust access, whilst trust_level_0 have low trust. They get the corresponding quotas in three additional settings.

Note the user gets the quota based on the highest trusted group they are a member of.
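
That highest-group rule can be sketched as follows (illustrative Python; the group names and quota values are made up, not plugin defaults):

```python
# Illustrative sketch (not plugin code) of the quota rule described above:
# a user receives the quota of the highest trust group set they belong to.

WEEKLY_QUOTAS = {"high": 100, "medium": 25, "low": 5}  # example values only

def resolve_quota(user_groups, high_groups, medium_groups, low_groups):
    """Return the weekly quota for a user based on their group memberships."""
    groups = set(user_groups)
    if groups & set(high_groups):
        return WEEKLY_QUOTAS["high"]
    if groups & set(medium_groups):
        return WEEKLY_QUOTAS["medium"]
    if groups & set(low_groups):
        return WEEKLY_QUOTAS["low"]
    return 0  # not in any trusted group set: no access

# A staff member in both the high and low sets gets the high quota:
print(resolve_quota(["staff", "trust_level_0"], ["staff"], [], ["trust_level_0"]))  # 100
```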

“Prompt Engineering”

There are several locale text “settings” that influence what the bot receives and how the bot responds.

The most important one you should consider changing is the bot’s system prompt. This is sent every time you speak to the bot.

For the basic bot, you can try a system prompt like:

“You are an extreme Formula One fan; you love everything to do with motorsport and its high-octane levels of excitement” instead of the default.

(For the RAG bot you must keep everything after “You are a helpful assistant.” or you may break the agent behaviour. Reset it if you run into problems. Again, experiment!)

Try one that is most appropriate for the subject matter of your forum. Be creative!

Note that there are now two system prompts for each bot type. One, .open, is used when talking to the bot in “public”. The other, .private, is applied when talking to the bot in Personal Messages or Direct Message chat. This is so that you can customise private behaviour, e.g. for a support bot.

Changing these locale strings can make the bot behave very differently, but they cannot be amended on the fly. I would recommend changing only the system prompt, as the others play an important role in agent behaviour or in telling the bot who said what.

NB In Topics, the first Post and Topic Title are sent in addition to the window of Posts (determined by the lookback setting) to give the bot more context.
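
The context assembly just described can be sketched roughly like this (assumed structure; not the plugin’s actual code):

```python
# Rough sketch of the context described above: the Topic title, the first
# Post, plus the last N posts (the "lookback" window) are sent to the model.
def build_context(title, posts, lookback):
    window = posts[-lookback:] if lookback > 0 else []
    context = ["Topic title: " + title, "First post: " + posts[0]]
    context += ["Post: " + p for p in window]
    return context

print(build_context("Plugin help", ["p1", "p2", "p3", "p4"], 2))
```

A bigger lookback value means more context, but also more input tokens per call.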

You can edit these strings in Admin → Customize → Text under chatbot.prompt., the most important of which are the system prompts which are in: chatbot.prompt.system.

Supports both Posts & Chat Messages!

The bot supports Chat Messages and Topic Posts, including Private Messages (if configured).

You can prompt the bot to respond by replying to it, or @ mentioning it. You can set how far the bot looks behind to get context for a response. The bigger the value the more costly will be each call.

There’s a floating quick chat button that connects you immediately to the bot. Its styling is a little experimental (it modifies some z-index values of your base forum on mobile) and it may clash on some pages. It can be disabled in settings. You can choose whether to load the bot into a one-to-one chat or a Personal Message.

Now you can choose your preferred icon (default :robot: ), or, if the setting is left blank, it will pick up the bot user’s avatar! :sunglasses:

avatar:

OR icon:

And remember, you can also customise the text that appears when it is expanded:

image

… using Admin → Customize → Text

(though you may need to customise the CSS a little to accommodate colours and sizing you want).

Some debugging help

  • make sure setting chatbot include inner thoughts in private messages is ON
  • make sure setting chatbot enable verbose rails logging is ON
  • ssh into your server
  • cd /var/discourse/shared/standalone/log/rails
  • in parallel ask the bot something in the PM
  • go back to console immediately
  • look for these kind of messages:
    • general chat = tail -n 2000 production.log | grep {\"model\":\"
    • vision calls = tail -n 2000 production.log | grep {\"type\":\"image_url\"
  • check “inner thoughts” in the PM
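
To sanity-check the grep pattern against a fabricated log line (illustrative only; real verbose log lines contain more fields):

```shell
# Fabricate a minimal sample line like the verbose Chatbot logging emits,
# then confirm the grep pattern from the steps above matches it.
printf '%s\n' '{"model":"gpt-4o","messages":[{"role":"user","content":"hi"}]}' > /tmp/sample_production.log
grep -c '{"model":"' /tmp/sample_production.log
```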

Extending Chatbot’s toolset with plugins

The Chatbot plugin can load additional functions from separate plugins, so you don’t have to maintain a fork of the Chatbot repo.

Example function plugin here:

but of course you could add this to any plugin …

This feature was added in this PR

If you need help with extending Chatbot, you can hire me to help you.

Uninstalling the plugin - Important!

Due to recent efforts to simplify the plugin, the only step necessary to uninstall it is to remove the clone statement.

Thanks for your interest in the plugin!

Disclaimer: I’m not responsible for what the bot responds with. Consider the plugin to be at Beta stage and things could go wrong. It will improve with feedback, though not necessarily the bot’s responses :rofl: Please understand the pros and cons of LLMs, what they are and aren’t capable of, and their limitations. They are very good at creating convincing text but can often be factually wrong.

Important Privacy Note: whatever you write on your forum may get forwarded to Open AI as part of the bot’s scan of the last few posts once it is prompted to reply (obviously this is restricted to the current Topic or Chat Channel). Whilst it almost certainly won’t be incorporated into their pre-trained models, they will use the data in their analytics and logging. Be sure to add this fact to your forum’s TOS & privacy statements. Related links: https://openai.com/policies/terms-of-use, https://openai.com/policies/privacy-policy, https://platform.openai.com/docs/data-usage-policies

Copyright: Open AI made a statement about Copyright here: https://help.openai.com/en/articles/5008634-will-openai-claim-copyright-over-what-outputs-i-generate-with-the-api

TODO/Roadmap Items

  • Add front and back-end tests :construction:
  • Add “bot typing” indicator and “response streaming” (@Aizada_M, @MarcP) :construction:
  • forgot to mention the bot? Get bot to respond to edits that add its @ mention (@frold )
  • Add a badge? You did mention @botname (@frold )
  • Add setting to include Category and Pinned Posts prompt? (@Ed_S)
  • Ditto Bios to each message history prompt? (@Ed_S , @codergautam). Will this even work? Let’s get evidence.
  • Update Discourse Frotz with this better codebase?
  • Move to use pgvector in favour of pgembedding for vector search now that former supports fast HNSW lookup. :white_check_mark:
  • Add semantic search so that the bot can read your forum Posts and become an “expert” :wink: :white_check_mark:
  • Add agent behaviour to reduce hallucinations and leverage reliable, factual information. :white_check_mark:
  • Add extra logic to convert suspected usernames into @ mentions (@frold ) :white_check_mark:
  • Add GPT-4 support (when Open AI deems me worthy enough of access! :sweat_smile: ) :white_check_mark:
  • Add custom model name support. :white_check_mark:
  • Add option to strip out quotes from Posts before passing text to API. :white_check_mark:
  • Improve error transparency & handling for when Open AI returns an error state :white_check_mark:
  • Add retry capability for timed out API requests :white_check_mark:
  • Add support for ChatGPT :white_check_mark:
  • Lint the plugin to Discourse core standards :white_check_mark:
  • Add CI workflows :white_check_mark:
  • Add settings to influence the nature of the bots response (e.g. how wacky it is). :white_check_mark:
  • include Topic Title & first Posts to prompt :white_check_mark:
  • Add setting to switch from raw Post/Message data to cooked to potentially leverage web training data better (suggestion by @MarcP). NB May cost more and limit what is returned as input tokens are counted and cooked is much bigger. think we’ve abandoned this idea

Credits:

*It still uses OpenAI’s chat GPT engine, but can now leverage local functions and data from API calls to limit hallucinations.

103 Likes

I’ve added Jina.ai to the list of supported APIs. It can be used for both web search and web crawling; a single key serves both functions. If you don’t delete the keys for the existing APIs, those will be used.

Two main reasons:

  • crawling performance appears to be very good
  • they offer pay-as-you-go pricing rather than a monthly fee, which suits me and may suit you too.

Link in settings.

Hi everyone,

The AI chatbot @merefield created is brilliant. It does a superb job in our community. However, I wonder whether it can reply to every post, like the AI post classifier in the default Discourse AI setup.

Is that possible?

Thanks! :slight_smile:

Thanks for your interest in Discourse Chatbot.

Two things may be relevant to you:

  • If invoked in a Topic with only a single user, the bot will reply to every post in that Topic until more people join.
  • You can set Chatbot to reply to every new Topic and give it special instructions, governed by a Category-specific prompt that you configure in the Category settings.

The bot will always be invoked if you @ mention it, so I don’t know whether you can also make use of that.

1 Like

Thanks for the prompt reply @merefield.

Thanks for the explanation. However, our use case is different. We want the AI chatbot to be triggered by various keywords within an assigned category, via a custom prompt.

For example: we have a Support category where the AI chatbot replies to every new Topic. We have a second, private Support category where the AI chatbot should reply to “every” post without being mentioned, triggered by keywords.

“Every” post? Or only posts containing keywords? Are those two different requirements?

Perhaps you could mention Chatbot in a canned reply, and the bot would then follow up? That would mean two posts, but it might work.

So, if your canned reply is “@Chatbot (or whatever you’ve renamed it to), please deal with this”, Chatbot will follow up. You can of course customise that reply.

Note that Chatbot can also reply to whispers (if you configure it to), so if the canned reply is in a whisper, normal users would only see Chatbot’s reply, which might look tidier.

I’d be happy to be hired to extend Chatbot’s functionality, provided we can come up with a solution generic enough to be useful to others, but I think it’s wise to leverage existing functionality?

I wonder if the best approach here would be to add a whisper capability to the AI post classifier? You could hire someone to submit a PR for that, or raise it as a #feature and hope it gets prioritised. That would certainly be generically useful?

I’ve done some code simplification by leveraging a feature added to the OpenAI API. Please let me know if you run into any issues.

I’ve decided it’s time to call this 1.0.0 :champagne: :tada: :partying_face:

4 Likes

Added the ability to provide other functions in separate plugins, so you don’t have to maintain a fork of the Chatbot repo.

Example function plugin here:

but of course you could add this to any plugin …

PR:

4 Likes

@Festinger did you resolve this?

Post embedding: there was a problem, but will retry until the limit is reached: undefined method `destroy!’ for nil

After the last update I started getting the error above. I get it particularly when handling images. My vision mode setting is: directly. I didn’t get errors like this before.

Do you recall deleting a Post (or Topic) shortly after creating it?

This error occurs if you delete a post before it has had a chance to create an embedding. When you then delete it, Chatbot tries to delete an embedding that doesn’t exist, which produces an annoying log message but does no harm :wink:

If you can think of a better approach, PRs are welcome.

1 Like

It gave an error like this on a question sent by a user. It’s not something I deleted.

How is the user seeing this? Surely it only shows up in the logs?

I saw it in the logs. I tracked it for a day; the problem occurs particularly in posts from long-standing members. If a new member opens a topic with an image there’s usually no problem and no error. When a veteran user opens a topic with an image, the problem appears.

Please share the stack trace?

And why is the user sending you questions? What did they tell you? The user shouldn’t be aware of any problem; it should only appear in the logs?

It sounds like your “veteran members” may be deleting their newly created Topics or Posts (because they know how)?

Do you have any kind of automation deployed?

Destroys don’t happen when a post with an image is created, so I’m struggling to see how this could occur.

Update: I just uploaded an image to a site and could not reproduce it.

1 Like

SerpAPI is fine, and their stats show Google requests are being made, but Chatbot claims it cannot use Google. This has been going on for a while, but I didn’t react because at the time DAI also refused to use Google.

There is no error message.

1 Like

What happens if you say “search the web for …”?

The internal prompt is generic, so it doesn’t assume Google…

Let me show you a great demo of this powerful feature:

(although this uses the bundled alternative, jina.ai)

I tried that. The system prompt contains “when the user asks for a search using Google, the user really intends for you to use the web search function you have access to”. I removed that part and then tried web-search wording in my user prompt; after all, it doesn’t use Google’s API anyway. No luck, and the answer was much the same: I cannot search the web…

This is the only information I found, but it didn’t shed any light on it:

{
  "id": "chatcmpl-A1wT9Dp1fCIVSzRsVPhr3NnNCcPjW",
  "object": "chat.completion",
  "created": 1725026447,
  "model": "gpt-4o-2024-05-13",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_MvzexRPsDj6EGuXhuiFcmtrd",
            "type": "function",
            "function": {
              "name": "web_search",
              "arguments": "{\"query\":\"International Dog Day date, reason, and history\"}"
            }
          }
        ],
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 1203,
    "completion_tokens": 20,
    "total_tokens": 1223
  },
  "system_fingerprint": "fp_157b3831f5"
}
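
For reference, the tool_calls entry in that log can be unpacked like this (illustrative Python, not plugin code); it shows the model asking for the web_search function rather than refusing:

```python
import json

# Unpack an OpenAI "tool_calls" response like the log above: content is null
# because the model asked the caller to run the "web_search" tool instead.
response = {
    "choices": [{
        "message": {
            "content": None,
            "tool_calls": [{
                "type": "function",
                "function": {
                    "name": "web_search",
                    "arguments": "{\"query\": \"International Dog Day date, reason, and history\"}",
                },
            }],
        },
    }],
}

message = response["choices"][0]["message"]
for call in message.get("tool_calls") or []:
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args["query"])
```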
2 Likes

I can live without it, since my goal is to use Chatbot as an internal Q&A bot, and Jina may solve my problem of using the main content site (via WordPress).

But when I search the outside world, the results aren’t great.

1 Like

Let me see if I can reproduce this

1 Like