I’ve been replicating Discourse metrics into our BI tool, ThoughtSpot, and when I got to the “Time to First Response” metric, I noticed a flaw in how it’s being reported when viewed on a weekly or monthly basis.
Currently, the Discourse report calculates an average response time per day, but when aggregating to a weekly or monthly view, it simply takes the average of those daily averages instead of computing a weighted average. This can lead to misleading results, especially when some days have significantly more topics than others.
Example:
Here’s a breakdown of daily response times and topic counts:
| Date | Avg Response Time (hr) | Topic Count |
|---|---|---|
| March 16 | 1.9 | 1 |
| March 15 | 23.6 | 1 |
| March 14 | 0.4 | 3 |
| March 13 | 6.0 | 7 |
| March 12 | 0.3 | 1 |
| March 11 | 2.1 | 8 |
| March 10 | 6.6 | 1 |
Now, if we calculate the average of averages, we get:

(1.9 + 23.6 + 0.4 + 6.0 + 0.3 + 2.1 + 6.6) / 7 ≈ 5.84 hours
However, this does not accurately represent the true average first response time, because it treats each day equally regardless of how many topics were created. The correct approach is a weighted average, with each day's value weighted by its topic count:

weighted average = Σ(daily avg × topic count) / Σ(topic count)

Using this method:

(1.9×1 + 23.6×1 + 0.4×3 + 6.0×7 + 0.3×1 + 2.1×8 + 6.6×1) / (1 + 1 + 3 + 7 + 1 + 8 + 1) = 92.4 / 22 = 4.2 hours
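If anyone wants to check the arithmetic, here's a minimal Python sketch reproducing both calculations from the table above (using the rounded daily values shown there, so real report output may differ slightly):

```python
# Daily (avg_response_hr, topic_count) pairs from the table above.
days = [
    (1.9, 1),   # March 16
    (23.6, 1),  # March 15
    (0.4, 3),   # March 14
    (6.0, 7),   # March 13
    (0.3, 1),   # March 12
    (2.1, 8),   # March 11
    (6.6, 1),   # March 10
]

# Average of daily averages: every day counts equally, topics or not.
avg_of_avgs = sum(avg for avg, _ in days) / len(days)

# Weighted average: each day contributes in proportion to its topic count.
total_topics = sum(count for _, count in days)
weighted_avg = sum(avg * count for avg, count in days) / total_topics

print(f"average of averages: {avg_of_avgs:.2f} h")  # 5.84
print(f"weighted average:    {weighted_avg:.2f} h")  # 4.20
```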
Why This Matters
The difference is significant: 5.84 hours vs. 4.2 hours. The average of averages overweights outlier days (like March 15, with a 23.6-hour response time but only 1 topic) and underweights high-activity days (like March 11, with 8 topics and a 2.1-hour average).
For a more accurate representation of response times, Discourse should switch to a weighted average when rolling daily values up into weekly or monthly views.
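If the report kept the per-day topic counts alongside the daily averages, the weekly rollup would be cheap to compute. Here's a rough pandas sketch of the idea; the column names and dates are hypothetical, not Discourse's actual schema:

```python
import pandas as pd

# Hypothetical daily report output: one row per day with that day's
# average response time and topic count (dates are illustrative).
df = pd.DataFrame({
    "date": pd.to_datetime([
        "2025-03-10", "2025-03-11", "2025-03-12", "2025-03-13",
        "2025-03-14", "2025-03-15", "2025-03-16",
    ]),
    "avg_response_hr": [6.6, 2.1, 0.3, 6.0, 0.4, 23.6, 1.9],
    "topic_count": [1, 8, 1, 7, 3, 1, 1],
}).set_index("date")

# Recover each day's total response time, sum per week, then divide by
# the week's topic count: a topic-weighted weekly average.
df["total_response_hr"] = df["avg_response_hr"] * df["topic_count"]
weekly = df[["total_response_hr", "topic_count"]].resample("W").sum()
weekly["weighted_avg_hr"] = weekly["total_response_hr"] / weekly["topic_count"]
print(weekly["weighted_avg_hr"])  # 4.2 for this (single) week
```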
Would love to hear others’ thoughts—has anyone else noticed this issue?