
Leaderboard

Popular Content

Showing content with the highest reputation on 09/24/2019 in all areas

  1. My speed tests' graphs show a LOT of fluctuation from 0 to 200 Mbps and say that my speed varied by 200%, which I assume is really terrible. Firstly, what does it mean for speed to vary by 200%? 200% of what exactly, my average speed? Since a downward fluctuation greater than 100% is nonsensical, I assume it means something else. Secondly, how much speed variation would be 'acceptable'? Lastly, there are two graphs in the speed test results, one on top of the other, and both have the same labels on the x and y axes. However, they show different results (and only the bottom one contains my upload speed data as well as download). What is the difference between the two graphs? Thanks in advance!
    1 point
  2. Oh I see, it means variance. I probably should've played around with the numbers more and figured it out for myself. Yes, standard deviation is more readable (especially when it's expressed in Mbps and not a percentage), so thanks for that! Also, what is 'My Host Avg'? Is that like my ISP's average speed or something? I absolutely love this site, so thanks for that as well as the detailed explanation above!
    1 point
  3. If you go to https://testmy.net/mysettings you can now change middle variance to standard deviation. Note that it's calculated using TiP points from 10% to 90% ... the start and finish points are not in this calculation (my calculation in the post above included all points). A sketch of that trimming follows below.
    1 point
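To make the 10% to 90% window concrete, here is a minimal PHP sketch of that trimming step. The function name, the exact boundary handling, and the sample array are assumptions for illustration only; the idea taken from the post above is simply that the ramp-up and wind-down TiP points are dropped before the deviation is calculated.

```php
<?php
// Illustrative sketch only -- not testmy.net's actual code.
// Keeps the TiP points from the 10% mark to the 90% mark of the test,
// dropping the ramp-up and wind-down samples at either end.
function tipMiddleSlice(array $samples): array
{
    $count = count($samples);
    $start = (int) floor($count * 0.10);  // first index kept
    $stop  = (int) ceil($count * 0.90);   // first index dropped at the tail
    return array_slice($samples, $start, $stop - $start);
}

// Example: 18 TiP points, so index 0 and index 17 are dropped.
$tip = [3.28, 2.91, 1.79, 1.84, 3.04, 4.93, 1.43, 0.32, 0.34,
        1.31, 1.91, 3.34, 1.89, 1.73, 3.9, 7.28, 8.32, 8.18];

print_r(tipMiddleSlice($tip));
```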
  4. First, I will be changing variance to standard deviation in the future. Let's use this result of yours as an example. TestMy.net Test ID : 5zlOE7tGc

     Ideally this flat lines and doesn't deviate from start to finish. Sometimes, though, if the result is much lower than your line speed, that flat line can be an indication of a bottleneck. Most of the time a flat line with 0% variance is a good thing. One of my recent results as an example. TestMy.net Test ID : fUJ4g~VHj

     Here's the actual calculation from the program:

     round(($maximumThruput - $minimumThruput)/(($maximumThruput + $minimumThruput)/2)*100)

     So your example above would be round((8.32 - 0.32)/((8.32 + 0.32)/2)*100) = 185%. The difference between the min and max, divided by the average of the min and max, then expressed as a percentage. The higher the number, the more the connection was fluctuating during the test.

     Using standard deviation, your example above gives 2.4 Mbps [https://testmy.net/working/deviation/standard-deviation.php?arr=3.28,2.91,1.79,1.84,3.04,4.93,1.43,0.32,0.34,1.31,1.91,3.34,1.89,1.73,3.9,7.28,8.32,8.18] -- again, ideally this number would be 0 Mbps. The standard deviation of my result above is 42 Mbps... a higher number, but small relative to the result. [https://testmy.net/working/deviation/standard-deviation.php?arr=155.34,345.77,338.49,338.49,340.88,347.01,334.96,331.51,338.49,338.49,345.77,349.53,348.26,342.09,348.26,347.01,339.68,343.31,344.53]

     So the standard deviation then needs to be turned into a percentage of your average: $standardDeviation / $middleAverage. Your example: 2.4/2.93 = .82 ... or 82%. My example: 42/342.24 = .12 ... or 12%.

     I think using standard deviation will be much easier for everyone to understand. To understand the current formula myself I had to go into the program... my users don't have that luxury. Long story short: variance shows what I wanted to show but makes it overly complicated. I'll work on that for you.... actually -- I kinda just did, I just need to work it all into the program. A worked sketch of both calculations follows below.
    1 point
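As a worked illustration of the two calculations described in the post above, here is a minimal PHP sketch. It is not the actual testmy.net program; the function names are made up, and the 2.93 Mbps middle average is taken directly from the post rather than derived. It reproduces the quoted figures for the 18 TiP points: 185% min/max variance, a 2.4 Mbps standard deviation, and 82% when that deviation is expressed as a percentage of the middle average.

```php
<?php
// Illustrative sketch only -- function names and structure are assumptions.

// Old figure: spread between the slowest and fastest TiP point, expressed
// as a percentage of their midpoint.
function minMaxVariancePercent(array $samples): int
{
    $min = min($samples);
    $max = max($samples);
    return (int) round(($max - $min) / (($max + $min) / 2) * 100);
}

// Population standard deviation of the TiP points, in Mbps.
function standardDeviation(array $samples): float
{
    $mean     = array_sum($samples) / count($samples);
    $variance = array_sum(array_map(
        fn ($x) => ($x - $mean) ** 2,
        $samples
    )) / count($samples);
    return sqrt($variance);
}

// The 18 TiP points from the fluctuating result discussed above.
$tip = [3.28, 2.91, 1.79, 1.84, 3.04, 4.93, 1.43, 0.32, 0.34,
        1.31, 1.91, 3.34, 1.89, 1.73, 3.9, 7.28, 8.32, 8.18];

$middleAverage = 2.93;  // middle average quoted in the post above

echo minMaxVariancePercent($tip) . "%\n";                            // 185%
echo round(standardDeviation($tip), 1) . " Mbps\n";                  // 2.4 Mbps
echo round(standardDeviation($tip) / $middleAverage * 100) . "%\n";  // 82%
```

Note that the 2.4 Mbps figure matches the population form of the standard deviation (dividing by the number of points); the sample form (dividing by one fewer) would give roughly 2.47 Mbps instead.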