I decided to create a new topic since the newest issue isn’t about questionable rounding rules, but about rules that are outright wrong, so the topic belongs in the bug category. It’s not a major issue, but it is somewhat awkward.
We had a public poll in which 106 votes had been cast: 69 for “no” and 37 for “yes”. Simple division gives 69/106 ≈ 65.09 %, yet the system says 66 % voted for “no” and 34 % for “yes”. Here’s a picture:
The final result was 72 votes for “no” and 38 for “yes”, and the poll is now closed. 72/110 is about 65.45 %, which the system again rounds to 66 %, so you can see the bug in action at Tappara.co.
I tried to look at the code to find what causes the bug, but failed. I discovered that a function named evenRound is called, but when I searched for its source, I couldn’t find it. This is just a guess, but maybe that function rounds the number to the nearest even integer?
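Just to illustrate what that guess would mean (this is only a sketch of the guess, not the actual function, and the name is mine): snapping a value to the closest multiple of 2 would in fact turn these numbers into the 66 and 34 shown above, though that could be pure coincidence.

```javascript
// Purely illustrative – what “round to the nearest even integer” could look like.
const roundToNearestEven = x => Math.round(x / 2) * 2;

roundToNearestEven(65.09); // 66
roundToNearestEven(34.91); // 34
```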
Thanks for the code. I just couldn’t find it, which tells you how terrible I am at using GitHub. The code is taken from Stack Overflow, where its author writes:
I’m not sure what level of accuracy you need, but what I would do is simply add 1 to the first n numbers, n being the ceil of the total sum of decimals. In this case that is 3, so I would add 1 to the first 3 items and floor the rest. Of course this is not super accurate, some numbers might be rounded up or down when they shouldn’t, but it works okay and will always result in 100%.
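To make that concrete, here’s my rough sketch of what the quoted description amounts to (my own reading of it, not the exact evenRound code): floor every value and hand the leftover points to whichever items happen to come first. With the numbers from this poll, the first item gets the extra point, which is exactly the 66 / 34 we saw.

```javascript
// A sketch of the quoted approach, not the exact Discourse code:
// floor everything, then add 1 to the first items until the total is 100.
function naiveRoundTo100(percentages) {
  const floored = percentages.map(Math.floor);
  let deficit = 100 - floored.reduce((sum, p) => sum + p, 0);
  return floored.map(p => (deficit-- > 0 ? p + 1 : p));
}

naiveRoundTo100([65.09, 34.91]); // [66, 34] – the first item gets bumped just for being first
```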
It’s no surprise that such a crude method gives weird results. I’d suggest we use something a bit more sophisticated. This post in the same topic has 72 upvotes, while the post the code is taken from has just 1.
There are many ways to do just this, provided you are not concerned about reliance on the original decimal data.
The first and perhaps most popular method would be the Largest Remainder Method
Which is basically:
1. Rounding everything down
2. Getting the difference in sum and 100
3. Distributing the difference by adding 1 to items in decreasing order of their decimal parts
…
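In JavaScript those steps could look roughly like this (a minimal sketch, not a drop-in replacement for evenRound, and the function name is mine):

```javascript
// Largest Remainder Method: floor everything, then give the missing
// points to the entries with the largest decimal parts first.
function largestRemainderRound(percentages, target = 100) {
  const floored = percentages.map(Math.floor);
  let deficit = target - floored.reduce((sum, p) => sum + p, 0);

  // Indices ordered by decimal part, largest first.
  const order = percentages
    .map((p, i) => ({ i, remainder: p - Math.floor(p) }))
    .sort((a, b) => b.remainder - a.remainder);

  for (const { i } of order) {
    if (deficit <= 0) break;
    floored[i] += 1;
    deficit -= 1;
  }
  return floored;
}

largestRemainderRound([65.09, 34.91]); // [65, 35] instead of [66, 34]
```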
I think this method would be the best choice. It’s still very simple and gives much better results. The current code would only need a couple more lines to tell the function which numbers should be rounded up. There is one more issue to consider, though. From this post in the same topic:
The well-voted answer by Varun Vohra minimizes the sum of the absolute errors, and it’s very simple to implement. However there are edge cases it does not handle - what should be the result of rounding 24.25, 23.25, 27.25, 25.25? One of those needs to be rounded up instead of down.
The post suggests various methods for choosing which number is rounded up and which is rounded down, but you can never completely avoid arbitrary choices. If you want to aim for perfection you could read that post further, but in these rare special cases I would be satisfied with this as well:
You would probably just arbitrarily pick the first or last one in the list.
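For what it’s worth, the sketch above already behaves that way: when the decimal parts tie, the sort leaves the items in their original order (assuming a modern engine where Array.prototype.sort is stable), so the first item in the list gets the extra point.

```javascript
// All decimal parts are .25, so the tie is broken by input order.
largestRemainderRound([24.25, 23.25, 27.25, 25.25]); // [25, 23, 27, 25]
```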
I hope this helps. My skills would probably be sufficient to make the improvements myself if I first studied some basic syntax. I’m not going to do it right now, but maybe at some point, if no one else looks into it before then.
Edit: I improved the title a little by changing the word “broken” to “imprecise”. This topic should also be moved to the feature category or merged with the previous topic.
It was like a classic exercise for beginning programmers. Here is a way to do it, though someone with more experience who isn’t totally new to JavaScript might find some shortcuts.