A slower upload speed or higher latency interferes with the normal flow of the test. If the requests lag, it affects the result.
With 100+ smaller requests the connection has to negotiate each one, so latency and upload speed can affect this.
This will be less pronounced with the linear test because we don't have to keep initiating requests over and over.
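To see why many small requests amplify latency, here's a rough back-of-the-envelope model (my sketch, not TMN's actual code): assume each request pays roughly one round-trip of latency before its data flows, on top of the raw transfer time.

```javascript
// Rough model: total time = raw transfer time + per-request latency overhead.
// Assumption (mine): each request costs about one round-trip before data flows.
function estimatedSeconds(totalBytes, numRequests, latencyMs, mbps) {
  const transferSec = (totalBytes * 8) / (mbps * 1e6); // raw transfer time
  const overheadSec = numRequests * (latencyMs / 1000); // negotiation overhead
  return transferSec + overheadSec;
}

// 10 MB at 50 Mbps with 100 ms latency:
const oneBigRequest = estimatedSeconds(10e6, 1, 100, 50); // 1.6 s + 0.1 s
const hundredSmall = estimatedSeconds(10e6, 100, 100, 50); // 1.6 s + 10 s
```

Same data, same line speed, but splitting into 100 requests turns 0.1 s of overhead into 10 s on a 100 ms connection. On a low-latency connection the difference nearly vanishes, which is why the beta favors modern connections.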
I went around and around with this one, trying to get those connections to ramp up quicker. Originally I tried adjusting the test parameters for that situation, then realized the test is only doing what it's supposed to do. This happens when the connection is weak; it's only showing you what happened. If something slows down the requests or the process, it affects the end result.
So keep in mind when you're using the beta: it splits the multithread process much more than my previous version, 100 elements for tests under 1 MB and 200 elements beyond that, whereas the production multithread opens only 12 threads at 10 MB and 30 threads at 200 MB. Big difference. The beta is more demanding.
The difference is that before, I adjusted the process to fit the connection: smaller tests were done with fewer elements. I've decided going forward that TMN shouldn't scale based on the connection, but rather measure every connection the same way, as the linear test does. Remember, I'm only talking about the multithread process.
The beta upload test works the same way: 100 and 200 elements.
A couple of things you can do: click [customize] and Enable Linear Boost, or test linear on connections like that one.
I've seen that too, always on crappy connections. I think you're right about it being due to packet loss. I'm going to see about detecting when a thread gets stuck like that, then reinitiating that thread and reporting the event in the results.
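A stuck-thread detector like that could work roughly like this. This is only my sketch of the idea, assuming a timeout-based stall check with a single retry; `fetchChunk` is a hypothetical per-thread download function, not TMN's code.

```javascript
// Hypothetical sketch: race a thread's request against a stall timeout.
// If it stalls, log the event and reinitiate the thread once.
async function fetchWithStallRetry(fetchChunk, timeoutMs, events) {
  try {
    return await Promise.race([
      fetchChunk(),
      new Promise((_, reject) =>
        setTimeout(() => reject(new Error("stalled")), timeoutMs)),
    ]);
  } catch (e) {
    events.push("thread stalled, reinitiated"); // surfaces in the results
    return fetchChunk(); // single retry; real code might cap or back off
  }
}
```

Recording the event alongside the retry is the important part: the result stays honest about what happened on the wire instead of silently hiding the stall.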
It's all about how the data is being rendered. The beta is an entirely new test with different variables, and these new variables seem to favor more modern connection types because they're better designed for this type of load. A bunch of small requests may be harder to render in some cases than a few large ones. But that's what we're here to test.