How to run Data Explorer queries with the Discourse API

It looks like it's been fixed since my last post. I just re-tested the same example and it now works as expected.

Thanks for the help!


Is it possible to send parameters with the post request?

So for example say I have a Data Explorer query that finds the latest 10 topics from a category, can I send the category ID with the POST request to /admin/plugins/explorer/<query-id>/run?

Yes, the first code example in the OP should give you what you are looking for. It passes a viewed_at parameter, but a similar approach would work with a category_id parameter.
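To illustrate, here is a rough Python sketch of that approach. The site URL, API key, and query id are placeholders, and the helper name is my own; the key detail is that Data Explorer expects query parameters JSON-encoded under a single params key:

```python
import json

import requests

# Hypothetical values -- replace with your own site, API key, and query id.
BASE_URL = "https://forum.example.com"
HEADERS = {"Api-Key": "your_api_key", "Api-Username": "system"}

def build_run_params(query_params):
    """Data Explorer expects query parameters JSON-encoded under a 'params' key."""
    return {"params": json.dumps({k: str(v) for k, v in query_params.items()})}

def run_query(query_id, query_params):
    url = f"{BASE_URL}/admin/plugins/explorer/queries/{query_id}/run"
    return requests.post(url, headers=HEADERS, data=build_run_params(query_params))

# e.g. run_query(42, {"category_id": 6})  -- 42 and 6 are placeholder values
```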


Ooo that’s awesome - thanks Simon!

(Also apologies for not looking that far down - I thought that might have been out of date after reading a recent deprecation warning.)


Anybody know why this might not be working?

I’m using the Discourse_api gem with the following:

id = 13
params = { "id": "#{id}" }
client.post("/admin/plugins/explorer/queries/86/run.json", params)

But I keep getting:

DiscourseApi::UnprocessableEntity ({"success"=>false, "errors"=>["DataExplorer::ValidationError: Missing parameter id of type int"]})

Running the same via CURL works no problem:

curl -X POST "" \
-H "Content-Type: multipart/form-data;" \
-H "Api-Key: 123345" \
-H "Api-Username: system" \
-F 'params={"id":"13"}'

…and running similar Data Explorer Queries that do not require params (or adding default params to this query) work fine too… so it’s the sending of params that’s the issue - but I can’t work out why. Any ideas?


It’s subtle, but you need to pass in a hash with a key of params.

id = "8"
params = { params: '{"id":"' + id + '"}' }
puts client.post("/admin/plugins/explorer/queries/1/run.json", params)

Woohoo it works! Thanks Blake :blush:


@blake any idea why that URL is gone now for me?

curl -i -XPOST -H accept:application/json -H content-type:application/json -d'{"api_key":"<key>","api_username":"<user>"}'

This used to work for years in our statistics lambdas, but since April 8th it has stopped working:

HTTP/2 404 
server: nginx
date: Tue, 28 Apr 2020 22:18:48 GMT
content-type: application/json; charset=utf-8
vary: Accept-Encoding
x-request-id: fab6796f-40f3-4d41-a61b-3287d9216fa0
discourse-proxy-id: app-router-tiehunter01
strict-transport-security: max-age=31536000

Discourse made an update to how API authentication is handled on April 8th, so it makes sense that that's when the request stopped working. You now need to set the Api-Username and Api-Key in the request headers. You can find more details about this at the top of Discourse API Documentation.

The example curl request in this topic’s OP has also been updated to use the new authentication method.
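For anyone hitting the same 404 from scripts, here is a rough Python sketch of the change (the URL and credentials are placeholders): the api_key/api_username fields move out of the request body and into headers:

```python
import requests

def make_headers(api_key, api_username):
    """New scheme: credentials go in the request headers, not the JSON body."""
    return {"Api-Key": api_key, "Api-Username": api_username}

# Old style (stopped working April 8th, 2020):
#   requests.post(url, json={"api_key": "<key>", "api_username": "<user>"})
#
# New style (placeholder URL):
#   requests.post(
#       "https://forum.example.com/admin/plugins/explorer/queries/1/run",
#       headers=make_headers("<key>", "<user>"),
#   )
```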


Any suggestions for keeping the queries with the external code that is making them?

One way would be to create the query each time before using it and delete it afterwards. This sounds like it should work, but feels a bit like unnecessary churn.

Is it possible to construct a generic query whose entire contents could be substituted at execution time? With that, the SQL could be kept in the external code's repository.

That should be possible. It would require you to make three API requests though.

It should be possible to develop a plugin for this. I’m not sure that your plugin would need to make any use of the Data Explorer plugin. You might find some useful information in this topic: New official direct SQL access API for Discourse (mini_sql)


Is it possible to get a csv instead of a json file? I’ve tried appending .csv to the POST but no such luck.

Yes, you can add a format parameter to the request and set it to csv. For a curl request, try passing
-F "format=csv" with the request.


Hmm, tried two things and no luck.

With -F "format=csv"
I get a malformed JSON string error

With -F 'params={"format":"csv"}'
I get a format key in the output that says csv, but the body is still formatted as JSON with keys and values.

I’m looking to automate a csv download of a query, as if I was going to the explorer page and running the query.


This request works for me on my local dev site:

curl -X POST "http://localhost:3000/admin/plugins/explorer/queries/157/run" \
-H "Content-Type: multipart/form-data;" \
-H "Api-Key: $api_key" \
-H "Api-Username: system" \
-F "format=csv"

If you still can’t get this to work, I’ll test it on a production site later today. Once we confirm that it’s working, I’ll update the topic to add a CSV format example.


Got it working. Not sure why, as I had the same -F param as you but it’s working now. Thanks for the help.


Is there a way to do pagination on these queries when calling them via requests in Python? For example, I have params = {'page': '0'} to set the page, then I do the following to call the endpoint:

response = requests.post(url, data=json.dumps(params), headers=headers)

But no matter what page number I provide, I always get the same first page of results. (I’ve tried the page number above as both a string and a plain integer, it made no difference.) Does the page param need to be passed as something other than data during the API call?

Edit: Just tested with a curl command and I’m also seeing this. I get the first page of data regardless of my page number. This is what the query that I’m calling looks like. It returns 31 rows normally.

-- integer :limit = 5
-- integer :page = 0

SELECT username, name, created_at, updated_at
FROM users
OFFSET :page * :limit
LIMIT :limit

Have a look at the pagination example in the OP. It should return the correct page. Note the way that the params are included in the request (params={"page":"0"}, params={"page":"1"}, etc.). The query's params need to be set in that way.


Thanks. I tried a million variants for the params field and ended up with the following working:

{'params': '{"page":"' + page + '"}'}

There’s probably a cleaner way but at least it’s working now. :slight_smile:
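If it helps anyone else, json.dumps can build that nested string instead of hand-concatenating quotes, and a small loop can walk through all the pages. A sketch follows; the site URL, API key, query id, and the "rows" key in the response are assumptions on my part:

```python
import json

import requests

# Hypothetical values -- replace with your own site, API key, and query id.
BASE_URL = "https://forum.example.com"
HEADERS = {"Api-Key": "your_api_key", "Api-Username": "system"}

def page_params(page, limit=5):
    """Build the nested params string with json.dumps instead of concatenation."""
    return {"params": json.dumps({"page": str(page), "limit": str(limit)})}

def fetch_all_pages(query_id, limit=5):
    """Request successive pages until one comes back with fewer rows than the limit."""
    page, rows = 0, []
    while True:
        resp = requests.post(
            f"{BASE_URL}/admin/plugins/explorer/queries/{query_id}/run",
            headers=HEADERS,
            data=page_params(page, limit),
        )
        batch = resp.json().get("rows", [])  # assumes a "rows" key in the response
        rows.extend(batch)
        if len(batch) < limit:
            return rows
        page += 1
```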


My team has reports that we run weekly for internal meetings, checkins, and such. In the interest of “automating all the things” I created a GitHub Action to allow for easy execution of Data Explorer queries. This allows me to create workflows that run these queries on a periodic basis and open GitHub Issues with the results.

I don't know if it would be helpful for anyone else, but I wanted to offer it as an option in case it's useful.