Dave Taht

Reputation Activity

  1. Thanks
    Dave Taht got a reaction from rebrecs in Bufferbloat! (latency under load)   
Both the dslreports folks and fast.com reached out to the bloat email list (see lists.bufferbloat.net) to ask how to measure this problem properly in their codebases. You will find a lot of good info in the archives there, and we're always looking for sites to be actively testing for bufferbloat. Of the two, dslreports has thus far been doing a great job, so great that their dataset is now thoroughly polluted by people who used the site to fix their bufferbloat, and we no longer have a real picture of what the internet actually looks like. (So I really, really, really applaud the idea of a new site, such as yours, attempting to tackle the problem too.)
     
I also have a few nits about the dslreports stuff that I've always wanted them to address. A few are:
     
0) There are huge threads on the bloat lists that I won't summarize... a notable one is the insistence on doing some level of statistical legerdemain on the data (throwing out the worst 5% of the samples, picking an arbitrary threshold of X latency for bufferbloat, etc.). When it comes to this sort of science, the *really* interesting data is in the outliers, not the averages.
     
This is a detailed look at that sort of statistical rigor problem, from a talk I gave at SIGCOMM 2014: http://conferences.sigcomm.org/sigcomm/2014/doc/slides/137.pdf
(they've never invited me back)
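To make the trimming point concrete, here's a minimal sketch (on synthetic data I invented, nothing from any real test set) of how "throw out the worst 5%" can erase exactly the signal we care about:

```python
import random

# Synthetic latencies (ms): mostly healthy, plus a 5% bufferbloat tail.
random.seed(1)
samples = [random.gauss(30, 5) for _ in range(950)]        # normal traffic
samples += [random.uniform(500, 4000) for _ in range(50)]  # bloated episodes

def mean(xs):
    return sum(xs) / len(xs)

# "Throw out the worst 5%" removes the entire bloated tail.
trimmed = sorted(samples)[: int(0.95 * len(samples))]

print(f"trimmed mean: {mean(trimmed):7.1f} ms   <- looks perfectly healthy")
print(f"full mean:    {mean(samples):7.1f} ms")
print(f"p99:          {sorted(samples)[int(0.99 * len(samples))]:7.1f} ms")
print(f"worst:        {max(samples):7.1f} ms   <- the story is here")
```

The trimmed mean says everything is fine; the tail says the link is periodically unusable. Averages hide it, outliers show it.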
     
1) Since the adoption of fq_codel in OS X, OpenWrt, and thousands of commercial routers (notably now in Wi-Fi; see Google's implementation here: http://flent-newark.bufferbloat.net/~d/Airtime based queue limit for FQ_CoDel in wireless interface.pdf ), and the universal enablement of ECN in that OS, we are starting to see ECN negotiation and CE markings show up in multiple data sets. It would be good to track that somewhere.
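As a sketch of the sort of tracking I mean (assuming scapy is installed; the capture file name is just a placeholder), counting ECN codepoints and CE echoes in a packet capture takes only a few lines:

```python
from collections import Counter
from scapy.all import rdpcap, IP, TCP

# The ECN field is the low 2 bits of the old TOS byte (RFC 3168).
ECN_NAMES = {0: "Not-ECT", 1: "ECT(1)", 2: "ECT(0)", 3: "CE"}

counts = Counter()
for pkt in rdpcap("capture.pcap"):               # placeholder capture file
    if IP in pkt:
        counts[ECN_NAMES[pkt[IP].tos & 0x3]] += 1
    if TCP in pkt and pkt[TCP].flags & 0x40:     # 0x40 = ECE: receiver echoing a CE mark
        counts["TCP ECE"] += 1

for name, n in counts.most_common():
    print(f"{name:8s} {n}")
```

Any ECT(0)/ECT(1) at all tells you ECN was negotiated; CE and ECE tell you an AQM on the path is actually marking.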
     
2) Both dslreports and fast.com throw out too much data. The really core and scary bufferbloat problem shows up when a network is too congested to operate worth a dang in the first place. I keep hoping that someday dslreports, at least, will create a plot that just shows the data they currently throw out. An analogy for what we might discover is here: https://www.space.com/25945-cosmic-microwave-background-discovery-50th-anniversary.html
     
3) I really like the http://www.dslreports.com/speedtest/results/bufferbloat?up=1 plot. My kvetch is that it is only a rolling 10-day summary, and I've had to rely on screenshots to compare results over time. I'd long hoped for a deal where they could sell or share that dataset with researchers. The bufferbloat problem IS getting better (assuming the dslreports dataset isn't totally polluted), but there is a long, long way to go.
     
4) Nobody's tests run long enough to saturate higher-speed links, because of how slowly TCP ramps up. What's needed is a variable-length test, or one that runs longer when it detects a high-bandwidth link. dslreports cuts off its test (and its data set) at 4+ seconds of delay, and we have seen delays as bad as hundreds of seconds in the field.
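Here's the back-of-the-envelope arithmetic (with assumed numbers: a 50 ms path, 1448-byte MSS, initial window of 10 packets) for how long slow start alone needs before the pipe is even full:

```python
import math

def rtts_to_saturate(link_bps, rtt_s, mss=1448, initcwnd=10):
    """RTTs of slow start (doubling per RTT) until cwnd covers the BDP."""
    bdp_pkts = link_bps * rtt_s / 8 / mss   # bandwidth-delay product, in packets
    return math.ceil(math.log2(bdp_pkts / initcwnd)), bdp_pkts

for mbps in (10, 100, 1000):
    rtt = 0.05                               # assume a 50 ms path
    n, bdp = rtts_to_saturate(mbps * 1e6, rtt)
    print(f"{mbps:5d} Mbit/s: BDP ~{bdp:5.0f} pkts, ~{n} RTTs "
          f"(~{n * rtt:.2f} s) of slow start to fill the pipe")
```

And that's the best case: after the first loss, congestion avoidance grows the window far more slowly, so a short fixed-length test can end well before a gigabit link is anywhere near saturated.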
     
5) A really simple test would be to measure SYN and SYN/ACK times while under load, over a string of very short TCP transactions. This would emulate web traffic better.
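A minimal sketch of that test (the target host is a placeholder; point it at any server you trust, and run it once idle and once while the link is loaded):

```python
import socket, statistics, time

HOST, PORT, TRIALS = "example.com", 80, 20    # placeholder target

times_ms = []
for _ in range(TRIALS):
    t0 = time.monotonic()
    # create_connection() returns once the TCP handshake completes,
    # so this times the SYN -> SYN/ACK exchange (plus local overhead).
    with socket.create_connection((HOST, PORT), timeout=10):
        pass
    times_ms.append((time.monotonic() - t0) * 1000)
    time.sleep(0.25)                          # space the probes out

print(f"handshake min/median/max: {min(times_ms):.1f} / "
      f"{statistics.median(times_ms):.1f} / {max(times_ms):.1f} ms")
```

On a well-managed link the loaded numbers barely move; on a bloated one the median and max blow up.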
     
6) Recently published (and under discussion on the bloat list) is a pretty good summary of the speedtest problems the internet faces going forward. Discussion here: https://lists.bufferbloat.net/pipermail/bloat/2019-May/009211.html and the paper here: https://arxiv.org/pdf/1905.02334.pdf
Anyway, we're kind of old internet fogies who mostly use email rather than web forums like this. If you have further questions, want to gain testers, or would like someone from the bufferbloat effort or academia to help dissect the data, please drop us a line at bloat at lists.bufferbloat.net.
     
    Best of luck with it! Thx!
  2. Like
    Dave Taht got a reaction from Sean in Bufferbloat! (latency under load)   
  3. Like
    Dave Taht got a reaction from CA3LE in Bufferbloat! (latency under load)   