AI bot - Custom tools

:bookmark: This guide explains how to create, configure, and integrate custom AI tools within the Discourse AI plugin, enabling administrators to extend the bot’s capabilities with user-defined JavaScript functions.

:person_raising_hand: Required user level: Administrator

Summary

This documentation covers:

  • Creating a new custom AI tool
  • Configuring tool parameters and scripts
  • Available APIs for tool scripts
  • Integrating custom tools with AI personas
  • Testing and troubleshooting custom tools

Creating a new custom AI tool

To create a new AI tool:

  1. Navigate to Admin Panel > Plugins > Discourse AI > Tools
  2. Click “New Tool” (you can use existing presets to learn about options)
  3. Fill in the following fields:
    • Name: The name of the tool as presented to the LLM
    • Description: The description of the tool as presented to the LLM
    • Summary: A short summary of what the tool does, shown to users (displayed in the details block)
    • Parameters: Define the inputs your tool needs, as presented to the LLM
    • Script: The JavaScript code that powers your tool
  4. Click “Save”
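
To make the fields concrete, here is a hypothetical "weather lookup" tool expressed as a plain object. The tool name, description, and parameter are invented for this example; in practice you enter these values as form fields in the editor:

```javascript
// Hypothetical tool metadata, mirroring the editor's form fields.
// "get_weather" and its "city" parameter are invented for illustration.
const toolDefinition = {
  name: "get_weather",
  description: "Fetches the current weather for a given city",
  summary: "Looking up current weather",
  parameters: [
    {
      name: "city",
      type: "string",
      required: true,
      description: "Name of the city to look up"
    }
  ]
};

console.log(toolDefinition.name, "takes", toolDefinition.parameters.length, "parameter");
```

The name, description, and parameter descriptions are what the LLM sees, so write them the way you would explain the tool to a colleague.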

Configuring tool scripts

Available APIs

Your tool scripts have access to the following APIs:

  1. HTTP Requests:

    http.get(url, options)
    http.post(url, options)
    

    Use these to interact with external services. The options object lets you specify HTTP headers and, for methods that send data, a request body:

    http.get(url, { headers: { Auth: "some value" } })
    http.post(url, { headers: { Auth: "some value" }, body: "some body" })
    http.patch(url, { headers: { Auth: "some value" }, body: "some body" })
    http.delete(url, { headers: ... })
    http.put(url, { headers: ..., body: ... })
    
  2. LLM (Language Model) Integration:

    llm.truncate(text, length)
    

    This truncates text so it fits within the AI model’s token limits.

  3. Custom upload integration (RAG):

    index.search(query, { filenames: [...], limit: ... })
    

    Searches fragments from the uploads attached to the tool. Both options are optional: filenames restricts the search to specific uploads, and limit caps the number of fragments returned.

  4. Upload creation support:

    upload.create(filename, base_64_content)
    

    Creates an upload on the site from Base64-encoded content.

  5. Execution chain control:

    chain.setCustomRaw(raw) - sets the body of the post in the chain.
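
Taken together, these APIs are combined inside your script's invoke function. The sketch below stubs the sandbox globals (http, llm, chain) with plain objects so the flow can be followed and run outside Discourse; inside a real tool script those globals are provided for you, and the endpoint URL is invented:

```javascript
// Stubs standing in for the sandbox-provided globals. They exist only
// to make this sketch self-contained; a real tool script gets the
// actual http, llm, and chain objects from Discourse.
const http = {
  post: (url, opts) => ({ status: 200, body: JSON.stringify({ echo: opts.body }) })
};
const llm = {
  // Crude character-based stand-in for token-aware truncation
  truncate: (text, length) => text.slice(0, length)
};
const chain = {
  customRaw: null,
  setCustomRaw(raw) { this.customRaw = raw; }
};

function invoke(params) {
  // POST with headers and a body (the body sits beside headers, not inside them)
  const result = http.post("https://api.example.com/items", {
    headers: { Auth: "some value" },
    body: JSON.stringify({ query: params.query })
  });
  const data = JSON.parse(result.body);

  // Keep what the model sees within its token limits
  const summary = llm.truncate(JSON.stringify(data), 1000);

  // Optionally take over the post body for this step of the chain
  chain.setCustomRaw("Result: " + summary);
  return data;
}
```

The same shape works for index.search and upload.create: call them inside invoke and fold their results into the value you return.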
    

Required functions

Your script must implement:

  • invoke(params): The main function that executes when the tool is called

It may optionally implement:

  • details(): Provides additional information about the tool’s execution

Example script:

function invoke(params) {
  // params holds the arguments supplied by the LLM, matching your parameter definitions
  let result = http.get("https://api.example.com/data?query=" + params.query);
  // The return value is serialized and handed back to the LLM
  return JSON.parse(result.body);
}

function details() {
  return "Fetched data from Example API";
}
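
A slightly more defensive version of the example checks the HTTP status before parsing and escapes the query. As above, the endpoint is invented, and http is stubbed here so the sketch runs standalone; in a real tool script the sandbox supplies it:

```javascript
// Stub for the sandbox-provided http global, for illustration only.
const http = {
  get: (url) => ({ status: 200, body: JSON.stringify({ items: [1, 2, 3] }) })
};

function invoke(params) {
  // encodeURIComponent guards against spaces and special characters
  // in whatever query string the LLM decides to send
  const result = http.get(
    "https://api.example.com/data?query=" + encodeURIComponent(params.query)
  );
  if (result.status !== 200) {
    // Returning an error object lets the LLM explain the failure to the user
    return { error: "Request failed with status " + result.status };
  }
  return JSON.parse(result.body);
}

function details() {
  return "Fetched data from Example API";
}
```
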

Limitations and security

  • Execution Timeout: Tools have a default timeout of 2 seconds
  • HTTP Requests: Maximum of 20 requests per tool execution
  • Sandboxed Environment: No access to server file system or Ruby libraries

Testing your tool

Test every tool you build to confirm that the results handed to the LLM match your expectations; the tool editor’s Test interface lets you run your script with sample parameters before wiring it to a persona.
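
One low-effort complement to the built-in Test interface is to exercise your invoke logic with a range of representative inputs and check the size and shape of what the LLM would receive. A sketch, with the sandbox http global stubbed to return a made-up payload:

```javascript
// Stub for the sandbox http global, returning a fixed made-up payload.
const http = {
  get: () => ({ status: 200, body: JSON.stringify({ name: "Example", tags: ["a", "b"] }) })
};

function invoke(params) {
  return JSON.parse(http.get("https://api.example.com/lookup?q=" + params.q).body);
}

// Spot-check inputs the model is likely to send, including edge cases.
// The tool result is serialized for the LLM, so its size matters.
for (const q of ["simple", "with spaces", ""]) {
  const out = invoke({ q });
  console.log(JSON.stringify(q), "->", JSON.stringify(out).length, "chars");
}
```
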

Integrating tools with AI personas

To add your custom tool to an AI Persona:

  1. Go to Admin Panel > Plugins > Discourse AI > Personas
  2. Edit an existing persona or create a new one
  3. In the “Tools” section, you’ll see your custom tools listed alongside built-in tools
  4. Select your custom tool to add it to the persona

Custom tools in action

Once a persona has access to your custom tool, the LLM can decide to call it during a conversation and weave the results into its replies.

Troubleshooting

If your tool isn’t working as expected:

  1. Use the Test interface to confirm the tool behaves as expected for your inputs.
  2. Ensure your group is listed in the ai_bot_debugging_allowed_groups site setting; members of this group have full access to the bot transcripts and can view the AI logs there.
  3. If anything unexpected is happening, visit https://SITENAME/logs to check for errors.

Additional resources

Last edited by @sam 2024-11-27T00:22:27Z

Last checked by @hugh 2024-08-06T02:00:12Z


I’m just getting into building tools for AI personas in Discourse, in part so I can really focus on it. This seems very useful, given how many handy public APIs there are!

For instance, I’m getting results such as:

Understanding Decision-Making Through Poker Strategies AI

maiki:

Create an infocard for Breath of the Wild.

gpt-4o:

Here is an information card for “The Legend of Zelda: Breath of the Wild”:


Title: The Legend of Zelda: Breath of the Wild
Developer: Nintendo
Publisher: Nintendo
Release Date: March 3, 2017
Platforms: Nintendo Switch, Wii U
Genre: Action-adventure
Mode: Single-player

Synopsis:
Set in a vast open world, players control Link, who awakens from a deep slumber to defeat Calamity Ganon and save the kingdom of Hyrule. The game is praised for its immersive environment, allowing players to explore and interact freely with the world and its systems.

Awards:
The game has won numerous awards, including several Game of the Year accolades, for its innovative design, expansive world, and enchanting gameplay.

More information: Wikidata Entry


“Breath of the Wild” is renowned for redefining open-world games and delivering a captivating experience that encourages exploration and discovery.

maiki:

The same for Animal Crossing: New Horizons.

gpt-4o:

Here is an information card for “Animal Crossing: New Horizons”:


Title: Animal Crossing: New Horizons
Developer: Nintendo
Publisher: Nintendo
Release Date: March 20, 2020
Platform: Nintendo Switch
Genre: Social simulation
Mode: Single-player, Multiplayer

Synopsis:
In this charming social simulation game, players assume the role of a customizable character who relocates to a deserted island after purchasing a package from Tom Nook. The game allows for real-time exploration, resource gathering, crafting, and customization, fostering a communal and relaxing gameplay experience.

Awards:
The game was acclaimed for its creative freedom and timing of release, receiving accolades for Game of the Year and its contributions to social gaming during the global pandemic.

More information: Wikidata Entry


“Animal Crossing: New Horizons” became a cultural phenomenon, celebrated for its engaging social features and stress-relieving gameplay during challenging times.

maiki:

A long way from Chicago

gpt-4o:

Here is an information card for the book “A Long Way from Chicago”:


Title: A Long Way from Chicago
Author: Richard Peck
Publication Year: 1998
Genre: Historical Fiction, Young Adult
ISBN: 978-0141303529

Synopsis:
This Newbery Honor-winning novel is a series of short stories chronicling the annual summer trips of Joey and his sister Mary Alice from Chicago to a rural Illinois town to visit their larger-than-life Grandma Dowdel during the late 1920s and 1930s. Each chapter presents an amusing and poignant episode that imparts life lessons amidst the backdrop of the Great Depression.

Awards:
The book was awarded a Newbery Honor in 1999 and is celebrated for its humor, depth, and portrayal of rural American life during the early 20th century.

More information: Wikidata Entry


This book is appreciated for its nostalgic and heartwarming storytelling, capturing the essence of familial bonds and simpler times.

maiki:

Thinking in Bets

gpt-4o:

I previously found the Wikidata entry for “Thinking in Bets,” but I’m unable to retrieve further specific details directly from Wikidata at the moment. However, here is a brief card based on available information:


Title: Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts
Author: Annie Duke
Publication Year: 2018
Genre: Non-fiction, Decision Making, Psychology
ISBN: 978-0735216358

Synopsis:
The book leverages insights from poker to help readers understand decision-making under uncertainty. Annie Duke, blending her experiences as a professional poker player and cognitive psychologist, offers a framework for making smarter decisions, evaluating outcomes, and adopting a probabilistic mindset.

More information: Wikidata Entry


For detailed insights, you might consider checking additional resources or databases pertaining to book publications.


Okay, so that’s fine, but I noticed this description…

…and wonder: should I be building my prompts referring directly to those tools, for efficient use?

My general approach to prompt-writing is to refine over time and then lock onto a pattern of behavior I want the AI tooling to adopt. However, if I could add specific instructions on, for instance, when to look up a Wikidata entity and when to list all claims of a particular entity (two different APIs), then I feel I could refine the entire workflow to run as I intend… :star_struck: