That sounds like the perfect solution, thanks!
Please mention in the plugin UI that only the first 250 results of the query are visible, because there is no UI indication of that whatsoever.
I’ve just spent a bunch of time debugging a query that returned a different result set depending on the sort order. Turned out there was an implicit LIMIT tacked on.
Scroll through full user list?
Yep. The “download results” feature has a limit of 10,000 or so, and you can make that number as high as you need by crafting an HTTP request.
This should probably be a site setting that can be shadowed by a global one.
How to run a data-explorer query through JSON API?
I’m desperately trying this url:
But it says 404 page not found.
Queries are run by requests like this:
Results are then returned as JSON.
For more details try opening the dev tools in your browser (probably by pressing F12) and watching the network tab while using the data explorer.
The Discourse API docs say we should append a “.json” suffix.
It seems this URL doesn’t follow the standard pattern. Appending .json is a good guess for many URLs, but it’s a convention, not a guaranteed specification.
Yep. For this case to work I should have made a POST request, not a GET one.
All works with POST.
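For anyone else landing here, a minimal sketch of running a saved query over the API with a POST request. The endpoint path, header names, and the JSON-encoded `params` form field are assumptions based on what the browser’s network tab shows; verify them against your own instance before relying on this:

```python
# Sketch: build a POST request that runs a saved Data Explorer query.
# The /run path and the `params` form field are assumptions taken from
# watching the network tab; check them on your own site.
import json

def build_run_request(base_url, query_id, api_key, api_username, params=None):
    """Return (url, headers, form_data) for a POST that runs query `query_id`."""
    url = f"{base_url}/admin/plugins/explorer/queries/{query_id}/run"
    headers = {
        "Api-Key": api_key,            # admin API key from /admin/api/keys
        "Api-Username": api_username,  # user the request acts as
    }
    # Query parameters go in a JSON-encoded `params` form field.
    form_data = {"params": json.dumps(params or {})}
    return url, headers, form_data

url, headers, data = build_run_request(
    "https://discourse.example.com", 42, "abc123", "system",
    params={"months_ago": "1"},
)
print(url)
# To actually send it (needs the `requests` package and a live site):
#   import requests
#   r = requests.post(url, headers=headers, data=data)
#   rows = r.json()["rows"]
```

The hostname, query ID 42, and key are placeholders; the results come back as JSON, as noted above.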
Could you explain the parameter declaration syntax?
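For reference, the plugin declares parameters in a comment header at the top of the query; a minimal sketch (the parameter name, type, and default here are illustrative, not from this thread):

```sql
-- [params]
-- int :months_ago = 1

SELECT username
FROM users
WHERE last_seen_at > CURRENT_TIMESTAMP - (:months_ago * INTERVAL '1 month')
```

Each `-- type :name = default` line under `-- [params]` becomes an input field in the query UI, and `:name` is substituted into the SQL when the query runs.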
Would be cool if you could configure queries to run at a certain time and send reports of the results to a group
I have a single site (nginx and postgres on same server), I tried to install this plugin and I now get the following error when attempting to do the rebuild:
```
Error: Config owner (postgres:107) and data owner (root:0) do not match, and config owner is not root ...fail!
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for invalid "line" user columns                    ok
Creating dump of global objects                             ok
Creating dump of database schemas
  discourse
  postgres
  template1
                                                            ok
encodings for database "postgres" do not match:  old "SQL_ASCII", new "UTF8"
Failure, exiting
--------------------------------------------------------------------------------
----- UPGRADE OF POSTGRES FAILED
You are going to need to export your data and import into a clean instance:
In containers/app.yml:
Change "templates/postgres.template.yml" TO "templates/postgres.9.3.template.yml"
```
You’ve got to follow those instructions… The rebuild got you the latest image.
Yeah, so far those don’t seem to be going well either:
```
+ /usr/bin/docker run -d --restart=always -e LANG=en_US.UTF-8 -e HOME=/root -e RAILS_ENV=production \
    -e UNICORN_WORKERS=6 -e UNICORN_SIDEKIQS=1 -e RUBY_GLOBAL_METHOD_CACHE_SIZE=131072 \
    -e DISCOURSE_DB_SOCKET=/var/run/postgresql -e DISCOURSE_DB_HOST= -e DISCOURSE_DB_PORT= \
    -e DISCOURSE_DEVELOPER_EMAILS=firstname.lastname@example.org,email@example.com \
    -e DISCOURSE_ENABLE_MINI_PROFILER=false -e DISCOURSE_HOSTNAME=discourse.clcohio.org \
    -e DISCOURSE_SMTP_ADDRESS=smtp.mandrillapp.com -e DISCOURSE_SMTP_PORT=587 \
    -e DISCOURSE_SMTP_USER_NAME=firstname.lastname@example.org -e DISCOURSE_SMTP_PASSWORD=xxxxxxxxxxxxxxxx \
    -h proddiscourse-app -e DOCKER_HOST_IP=172.17.0.1 --name app -t -p 80:80 -p 2222:22 -p 443:443 \
    --mac-address 02:a2:58:e2:58:2a local_discourse/app /sbin/boot
7587306c5a160576394201298d540a5063474c5c66d1ff1f08d2bdbf29bf7d24
root@proddiscourse:/var/discourse# ./launcher enter app
root@proddiscourse-app:/var/www/discourse# cd /shared/postgres_backup && sudo -u postgres pg_dump discourse > backup.db
bash: cd: /shared/postgres_backup: No such file or directory
```
Of course this has to be the first time in literally dozens of Discourse changes when I didn’t snapshot the VM before upgrading. I can pull the upgrade from yesterday, but will lose a bit of data.
Also I don’t understand this instruction:
“Undo the base_image and postgres template from your container config”
How would I “undo” it? I get the postgres template (basically remove the version number) but I didn’t see anything related to changing a “base” line in the app.yml.
I rebuilt again and it seems to have allowed the database backup this time. Not sure what was happening the first time. I get the impression that the “undo the base image” instruction is out of date and can be ignored.
At the risk of asking a stupid question, is there a way to expose a query to regular users, perhaps with the aid of this plugin? I’m not too sure about the needed credentials and I don’t want to make a poor security decision. I think this has been discussed elsewhere here with only a paid solution being outlined, but I’d like to be able to present a list of users ordered by their last activity time, as many traditional forums do. For example, this query provides the desired information:
```sql
SELECT username AS "User",
       (CURRENT_TIMESTAMP - last_seen_at) AS "Last Active"
FROM users
WHERE last_seen_at IS NOT NULL
  AND (CURRENT_TIMESTAMP - last_seen_at < INTERVAL '15' MINUTE)
ORDER BY last_seen_at DESC
```
Hi Tsirist, welcome to the forum.
Having read the posts in this topic, you’ll have realized that this plugin allows Admins to run Read queries (and IMHO it does a great job of that).
Exposing this plugin to non-Admins is not a good idea. The database does contain some “sensitive” information and even without Create, Update, Delete ability the plugin should be used with extreme discretion.
I suppose forking and hacking Core files could be done, but a plugin is what you want here, trust me.
If you don’t have the chops to code it up yourself, becoming a paid customer might help get this feature realized. Otherwise, posting in the Marketplace category might get you some results.
I would love a checkbox saying “Make this query public through API” (off by default), so that admins could create queries for additional client-side features programmed by adding custom JS.
Given that it’s now the default not to allow sysadmins to write SQL for badges, it’s not likely that such a feature would be seen as a good idea. It’d be too easy for someone to publish SQL that’d cause problems.