In testing with TMN, Speedof.me (SOM), and speedtest.net (ST), I get dramatically different results: TMN shows 24 Mbps (Dallas server), SOM shows 96 Mbps, and ST shows 300 Mbps (also a Dallas server). This is on a 1000 Mbps fiber ISP connection. TMN does show a short burst hitting the 1000 Mbps line rate, but only for half a second or so; the speed then settles at about 21 Mbps. The SOM test shows low throughput on the smaller download sizes and higher throughput on the larger ones.

What's of further interest is that the test machine is a modern, high-performance Win 7 PC, yet when I use a Win 8 laptop on the same network segment, I get much faster TMN results (ten times better or more). All tests were conducted with the Firefox browser.

I have also used the Sysinternals utility PSPING.EXE to test the local network (the invocations are sketched below), and throughput varies with the combination of sample size and iteration count. With the sample size (download amount) set low, say 100 bytes, and the iteration count set high, say 10,000, throughput is poor at about 4 KB/s. With the sample size set high, say 10 KB, and the iteration count set low, say 100, throughput is much better at about 100 KB/s. That would make sense, because Ethernet framing imposes a lot of overhead on packets with a small payload.

Also of note: the network infrastructure is all wired Gigabit, with Cat6 on all critical runs. I have tested with only the test PCs connected to the network to rule out bandwidth contention, and I have tried tweaking advanced network driver settings for optimal performance.

So here's the question: is Win 7 using less efficient I/O calls than Win 8 when running TMN? Is the data read into a small buffer many times, requiring many I/O calls, or into a large buffer with one I/O call?
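For reference, the PsPing invocations were along these lines (the address and port below are placeholders, not my actual machines):

```
:: On the machine being tested against (hypothetical address/port):
psping -s 192.168.1.20:5000

:: Small sample size, many iterations (~4 KB/s observed):
psping -b -l 100 -n 10000 192.168.1.20:5000

:: Large sample size, few iterations (~100 KB/s observed):
psping -b -l 10k -n 100 192.168.1.20:5000
```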
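To put a rough number on the framing-overhead point above, here is a back-of-envelope sketch. The overhead constants are my assumptions (standard Ethernet: 8-byte preamble + 14-byte header + 4-byte FCS + 12-byte inter-frame gap = 38 bytes on the wire, plus 40 bytes of TCP/IPv4 headers per segment), not anything I measured:

```python
# Rough wire efficiency for a small vs. a large sample size.
WIRE = 38      # preamble + Ethernet header + FCS + inter-frame gap
HEADERS = 40   # TCP/IPv4 headers per segment (no options)
MSS = 1460     # max TCP payload per standard Ethernet frame

for payload in (100, 10000):
    full, rem = divmod(payload, MSS)       # segments needed for this sample
    frames = full + (1 if rem else 0)
    on_wire = payload + frames * (WIRE + HEADERS)
    print(f"{payload:>6}-byte sample: {payload / on_wire:.0%} wire efficiency")
```

That only predicts roughly 56% vs. 95% efficiency, about a 2x gap, yet I measured roughly a 25x gap (4 KB/s vs. 100 KB/s), which is part of why I suspect per-call or per-round-trip overhead rather than just the wire format.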
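And to make the final buffer question concrete, here is a minimal loopback sketch of what I mean. It is my own illustration of small-buffer vs. large-buffer reads, not how TMN or Firefox actually performs its I/O:

```python
import socket
import threading
import time

PAYLOAD = b"x" * (20 * 1024 * 1024)  # 20 MB of test data

def sender(port):
    # Push the whole payload over loopback TCP.
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(PAYLOAD)

def receive_with_bufsize(bufsize):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))           # let the OS pick a free port
    srv.listen(1)
    t = threading.Thread(target=sender, args=(srv.getsockname()[1],))
    t.start()
    conn, _ = srv.accept()
    start = time.perf_counter()
    received = 0
    while received < len(PAYLOAD):
        chunk = conn.recv(bufsize)       # one I/O call per loop iteration
        if not chunk:
            break
        received += len(chunk)
    elapsed = time.perf_counter() - start
    conn.close()
    srv.close()
    t.join()
    return received / elapsed / 1e6      # MB/s

for size in (100, 64 * 1024):            # tiny buffer vs. large buffer
    print(f"recv buffer {size:>6} B: {receive_with_bufsize(size):7.1f} MB/s")
```

The per-call cost is paid once per recv(), so for the same amount of data a 100-byte buffer pays it roughly 650 times more often than a 64 KB buffer; that is the kind of difference I am asking about between Win 7 and Win 8.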