Continuing the discussion from Rubygems: Too Many Requests problem returns!:
Well, perhaps the 429 errors are spurious. Further down in the logs I get this:
```text
I, [2018-12-04T20:54:58.204054 #16] INFO -- : > cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate'
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:
PG::NumericValueOutOfRange: ERROR: integer out of range
: INSERT INTO polls (
post_id,
name,
type,
status,
visibility,
close_at,
min,
max,
step,
anonymous_voters,
created_at,
updated_at
) VALUES (
2228144,
'poll',
0,
0,
1,
NULL,
1,
1000000000000,
NULL,
NULL,
'2017-12-29 23:26:56 UTC',
'2017-12-29 23:26:56 UTC'
) RETURNING id
/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/rack-mini-profiler-1.0.0/lib/patches/db/pg.rb:92:in `async_exec'
/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/rack-mini-profiler-1.0.0/lib/patches/db/pg.rb:92:in `async_exec'
/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.0/lib/active_record/connection_adapters/postgresql/database_statements.rb:75:in `block (2 levels) in execute'
/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.0/lib/active_support/dependencies/interlock.rb:48:in `block in permit_concurrent_loads'
/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.0/lib/active_support/concurrency/share_lock.rb:187:in `yield_shares'
/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/activesupport-5.2.0/lib/active_support/dependencies/interlock.rb:47:in `permit_concurrent_loads'
/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/activerecord-5.2.0/lib/active_record/connection_adapters/postgresql/database_statements.rb:74:in `block in execute'
....
```
Perhaps this could be helpful:
```text
PostCustomField.where(post_id: 2228144)
=> [#<PostCustomField:0x000055f1950311a0
id: 639761,
post_id: 2228144,
name: "polls",
value:
"{\"poll\":{\"options\":[{\"id\":\"d15fe6bb7788613d049786a7436c44a1\",\"html\":\"win\",\"votes\":8,\"voter_ids\":[216,38,284,3740,373,2795,6351,564]},{\"id\":\"464e1fad0d18eefccca23fe69cba7d2c\",\"html\":\"lose\",\"votes\":2,\"voter_ids\":[408,3582]},{\"id\":\"c58d6a1219a0a366268341e1d315d301\",\"html\":\"equal\",\"votes\":0,\"voter_ids\":[]}],\"voters\":10,\"status\":\"open\",\"min\":\"1\",\"max\":\"1000000000000\",\"public\":\"true\",\"name\":\"poll\"}}",
created_at: Fri, 29 Dec 2017 23:26:56 UTC +00:00,
updated_at: Fri, 29 Dec 2017 23:26:56 UTC +00:00>,
#<PostCustomField:0x000055f195030700
id: 639762,
post_id: 2228144,
name: "polls-votes",
value:
"{\"408\":{\"poll\":[\"464e1fad0d18eefccca23fe69cba7d2c\"]},\"216\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]},\"3582\":{\"poll\":[\"464e1fad0d18eefccca23fe69cba7d2c\"]},\"38\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]},\"284\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]},\"3740\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]},\"373\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]},\"2795\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]},\"6351\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]},\"564\":{\"poll\":[\"d15fe6bb7788613d049786a7436c44a1\"]}}",
created_at: Fri, 29 Dec 2017 23:26:56 UTC +00:00,
updated_at: Fri, 29 Dec 2017 23:26:56 UTC +00:00>]
[2] pry(main)>
```
Oh, yeah. Someone thought that 1000000000000 would be a good maximum number for their poll. That blows past PostgreSQL's 4-byte integer limit (2147483647), which is why the `INSERT INTO polls` fails with `PG::NumericValueOutOfRange`.

So I guess I either need to submit a PR that catches a too-big max, or contrive to fix the value in that JSON blob in the current database.
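Here is a rough sketch of the second option, run from the rails console inside the container. Two assumptions on my part: that the poll migration builds the `polls` row straight from this `polls` custom field, and that clamping `max` down to the option count is an acceptable repair for this particular poll.

```ruby
# rails console sketch -- assumes the "polls" custom field is what the
# migration reads, and that clamping max to the option count is acceptable.
require 'json'

PG_INT_MAX = 2_147_483_647   # PostgreSQL 4-byte integer limit

field = PostCustomField.find_by(post_id: 2228144, name: "polls")
data  = JSON.parse(field.value)
poll  = data["poll"]

if poll["max"].to_i > PG_INT_MAX
  # values in this blob are stored as strings; 3 options here, so clamp to 3
  poll["max"] = poll["options"].size.to_s
  field.value = JSON.generate(data)
  field.save!
end
```

After that, re-running the migration (`bundle exec rake db:migrate`, as in the log above) should get past this row, assuming no other posts carry similarly oversized values.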