The Discourse Servers

Probably pretty good. Unfortunately they don’t support the SMART media wearout indicator, but I can get a reasonably good sense of how they’re doing from a different attribute they do expose:

```
server  ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
live db 177 Wear_Leveling_Count     0x0013   092   092   000    Pre-fail  Always       -       286
live db 177 Wear_Leveling_Count     0x0013   085   085   000    Pre-fail  Always       -       535

back db 177 Wear_Leveling_Count     0x0013   093   093   000    Pre-fail  Always       -       247
back db 177 Wear_Leveling_Count     0x0013   084   084   000    Pre-fail  Always       -       550

webonly 177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       12
webonly 177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       14

webonly 177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       11
webonly 177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       15
```
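
For anyone who wants to pull the same numbers, `smartctl` from smartmontools dumps the full attribute table; the device path below is an assumption, so substitute your own:

```bash
# print all SMART vendor attributes, then keep only ID 177 (Wear_Leveling_Count);
# /dev/sda is an assumption -- use your actual device
smartctl -A /dev/sda | awk '$1 == 177'
```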

So the live and replica database servers have more wear on them. No surprise there. I should graph this. :smile:
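
A minimal sketch of what that graphing could start from, assuming the smartctl output format shown above: append a timestamped raw value to a CSV from cron, then plot it with whatever you like. The device and file paths are assumptions.

```bash
# append "timestamp,raw_wear_leveling_count" once per run (e.g. from a daily cron job);
# /dev/sda and the CSV path are assumptions
echo "$(date -Is),$(smartctl -A /dev/sda | awk '$1 == 177 {print $10}')" >> /var/log/wear_leveling.csv
```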

There’s another excellent reason to care about overprovisioning, and that’s performance. For another customer, I was evaluating 128GB and 256GB “value” drives (i.e. not overprovisioned like the Enterprise drives) as replacements for 50GB SSDs that had reached end of life.

The overprovisioned 50GB SSDs gave you VERY consistent performance under a workload; you knew you were getting the IOPS and latency you needed:

[graph: IOPS and latency over time on the overprovisioned 50GB drive]

The “value” drives, on the other hand, let you use all of that space, but you have to enforce overprovisioning manually if you want to avoid high write-completion latency and maintain high IOPS (one way to do that is sketched at the end of this post):

[graph: write-completion latency and IOPS on the “value” drives]

(yes, the two graphs show slightly different things, but the 50GB drive holds that red line like it was aimed by NASA)
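
For reference, here’s a minimal sketch of enforcing overprovisioning manually on a blank drive, under the assumption that the controller treats TRIMmed, never-partitioned space as spare area: discard the whole device, then partition only part of the capacity. The device name and the ~87% figure are assumptions; pick whatever spare ratio your workload needs.

```bash
# WARNING: destroys all data on the drive; /dev/sdb is an assumption
# TRIM the entire device so the controller knows every block is free
blkdiscard /dev/sdb
# partition only ~87% of the capacity and leave the tail unallocated as spare area
parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 87%
```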
