Leaderboard
Popular Content
Showing content with the highest reputation since 08/23/2023 in all areas
-
Based on how the Beta seems to function, I would think it would be a prime opportunity to add a "dual" test, which simultaneously runs both upload and download. This would be nice for testing in enterprises that have their own data-centers and host VPN/Web/etc. traffic, as they can find out what types of bandwidth carving they should do. I.e., if my Buckeye 1-gig fiber supports a full-duplex 1-gig, then I don't have to worry as much about having a good upload speed when someone else is using my network. But if it's a half-duplex 1-gig, then it changes how I might want to carve out bandwidth.

Just a thought, it would be an interesting metric to add. Could be a purple double-arrow in the results that indicates the combined speed as well, since it's not a pure upload/download, but a "dual" / "simultaneous" / "mixed" test.
3 points
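A rough sketch of what a "dual" test could look like in the browser, assuming hypothetical download and upload endpoints (this is not TMN code, just an illustration of running both directions at once and summing the throughput):

```ts
// Hypothetical sketch: run a download and an upload at the same time and report
// each stream plus the combined ("dual") throughput. Endpoint URLs are made up.
async function dualTest(downloadUrl: string, uploadUrl: string, uploadBytes: number) {
  const payload = new Uint8Array(uploadBytes); // dummy upload data

  const timed = async (work: () => Promise<number>) => {
    const start = performance.now();
    const bytes = await work();
    const seconds = (performance.now() - start) / 1000;
    return { bytes, mbps: (bytes * 8) / seconds / 1e6 };
  };

  // Kick both transfers off together so they compete for the same link.
  const [down, up] = await Promise.all([
    timed(async () => (await (await fetch(downloadUrl)).arrayBuffer()).byteLength),
    timed(async () => {
      await fetch(uploadUrl, { method: "POST", body: payload });
      return uploadBytes;
    }),
  ]);

  return { down, up, combinedMbps: down.mbps + up.mbps };
}
```

On a true full-duplex link the combined figure should approach the sum of the individual results; on a half-duplex or congested link it will fall noticeably short, which is the signal the poster is after.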
-
With a slower upload speed or higher latency, it interferes with the normal flow. If the requests are lagged it will affect the result. With 100+ smaller requests the connection has to negotiate each one, and the latency and upload can affect this. This will be less pronounced with linear because we don't have to keep initiating requests over and over.

I went around and around with this one, trying to get those connections to ramp up quicker. Originally I was trying to make the test ramp up quicker by adjusting test parameters for that situation. Then I realized that it's only doing what it's supposed to do. This happens when the connection is weak; it's only showing you what happened. If something slows down the requests or the process... it affects the end result.

So keep in mind when you're using the beta, it's splitting the multithread process much more than my previous version: 100 elements for < 1 MB tests and 200 elements beyond that, whereas the production multithread opens only 12 threads at 10 MB and 30 threads at 200 MB. Big difference. The beta is more demanding. The difference is that before, I adjusted the process to meet the connection, and smaller tests were done with fewer elements. I've decided going forward that TMN shouldn't scale based on the connection, but rather measure every connection the same, as the linear test does. Remember, I'm only talking about the multithread process. The beta upload test works the same way, 100 and 200 elements.

A couple of things you can do: click [Customize] and enable Linear Boost, or test linear on connections like that one.

I've seen that too, always on crappy connections. I think you're right about it being due to packet loss. I'm going to see about detecting when a thread gets stuck like that, then reinitiating that thread and reporting the event in the results.

It's all about how the data is being rendered. The beta is an entirely new test with different variables. These new variables seem to favor more modern connection types because they're better designed for this type of load. A bunch of small requests may be harder to render in some cases than a few large ones. But that's what we're here to test.
2 points
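For readers trying to picture why many small elements punish a slow uplink, here is a loose illustration (not TMN's actual code) of the element-count split described above:

```ts
// Rough sketch: the beta fires many small requests (100 for < 1 MB, 200 beyond)
// instead of a few large threads. Endpoint and query parameters are invented.
async function multithreadDownload(baseUrl: string, totalBytes: number) {
  const elements = totalBytes < 1_000_000 ? 100 : 200; // beta: 100 or 200 elements
  const perElement = Math.ceil(totalBytes / elements);

  const start = performance.now();
  // Every element is its own request, so each one pays its own setup cost --
  // which is why a lagged uplink or high latency drags the whole result down.
  await Promise.all(
    Array.from({ length: elements }, (_, i) =>
      fetch(`${baseUrl}?block=${perElement}&i=${i}`).then(r => r.arrayBuffer())
    )
  );
  const seconds = (performance.now() - start) / 1000;
  return (totalBytes * 8) / seconds / 1e6; // Mbps
}
```

With 12-30 threads the per-request overhead is mostly hidden; with 100-200 elements it becomes a visible part of the measurement, which is exactly the behaviour being discussed.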
-
Instructions to enable the Beta are in the Private area. The beta is private, but members with access are allowed to share instructions. Please use this public forum for all threads related to Beta 23. If you encounter any issues or bugs, please copy the URL in your browser and include it with your post. This helps reproduce the issue. Screen recordings are also very helpful. Happy Testing!
2 points
-
I run a company that does network installation, testing, certification, and remediation, and one of the services I offer currently is a wireless signal map. I also currently run multiple speed tests from different rooms on each site and aggregate those results into a map as well. At the moment, I run them via Speedtest.net and save the results into an Excel spreadsheet manually. It would be handy if I could use TMN natively for it, by starting a "project" and then being able to label tests by room/location and wireless connection type (b/g or n/ac most often) so that I can export all the results into one file to pull them into my final report. It would also be nice to have those results long-term, so that if I get a client call-back for another round of testing I can compare to the old project to see if anything has changed.
1 point
-
I can't finish it until at least Friday, but I'll let you know as soon as it's ready. 😀
1 point
-
Well, it's definitely falling into place. QR authorization done. You can now have your phone signed in, then go from computer to computer and scan the QR code. A couple of seconds, it links up and then asks you which project to add it to. Once you select, the target computer magically gets authorized. It's really cool. Again, great idea Elliot. Coming together nicely. I'll be hitting you up for real-world testing as soon as I finish my testing.
1 point
-
Click [Customize] then select 500 MB max. I may change the default after we get going.
1 point
-
Now that I type this, I wonder if having a "stress test" with a mixture of block sizes would be handy; it would simulate what computers / networks actually do. Some are doing really small data loads and some are doing large data loads. Just a thought.
1 point
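A minimal sketch of what such a mixed-size run might look like, assuming a hypothetical test endpoint and an arbitrary spread of block sizes (purely illustrative, not a proposed TMN implementation):

```ts
// Hypothetical "stress test": mix small and large block sizes in one run to
// mimic real traffic. Block sizes and the endpoint are assumptions.
async function mixedBlockTest(baseUrl: string) {
  const blocks = [4_096, 65_536, 1_048_576, 10_485_760]; // 4 kB .. 10 MB
  const results: { bytes: number; mbps: number }[] = [];

  for (const bytes of blocks) {
    const start = performance.now();
    await (await fetch(`${baseUrl}?block=${bytes}`)).arrayBuffer();
    const seconds = (performance.now() - start) / 1000;
    results.push({ bytes, mbps: (bytes * 8) / seconds / 1e6 });
  }
  return results;
}
```

Comparing the per-block-size results would show how much of a connection's rated speed actually survives small-transfer workloads versus big sustained ones.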
-
Ahhhh, I see. I can definitely work that in there. Great idea... super slick. I'll add that; I also need to add pagination (for when you have many projects listed). Added search functionality last night, a pretty cool predictive search that pulls and displays the search results while you're typing. First time I've done that, it's really nice. After those elements are complete I'll reset its databases and put it through real-world testing. After I've tested some more I'll get it out to you. Might be the fastest turnaround I've done for a request, ever.

And this isn't some small thing; I've been working solid on it since Wednesday. The main controller for this part of the program is currently 44 kB, and then hooks had to be created across the rest of the program to make it play with everything, probably another 30 kB. So I estimate about 74,000 characters. And that's employing my goal of writing as little code as possible to get the same thing done. ... I'm not just slapping crap together, you'll see. I wrote it and I'm not sure how, because it's been less than a week (not even 5 days yet actually). And I didn't touch the keys until day 2, lol. --- Seriously, funny thing is I don't feel like I've done anything. It built itself. Very useful functionality; as it's coming together I can see how it will benefit many more use cases. Can't wait to share it with you.
1 point
-
The difference in download speeds between a laptop and a PC can be attributed to several factors. Firstly, hardware disparities play a crucial role. PCs often have more powerful processors and network cards, allowing for faster data retrieval. Additionally, laptops may have limitations due to their compact size, leading to smaller antennas and reduced signal strength, especially in older or budget models. Software can also affect download speeds. Background processes, applications, or bloatware running on laptops can consume bandwidth, slowing down downloads. PC users have more control over their system's resources, enabling them to optimize for faster downloads.
1 point
-
Wonder if it could be done pre-test then, similar to how HBO Max and such let me use my phone to scan a QR code to attach the TV/device to my account.

1. Open TMN on target computer
2. Click "I have a project under another account"
3. Page opens QR Code
4. Open TMN on phone authenticated to my account
5. Open camera and scan QR Code from #3
6. Add target computer to account for 24 hours (simple cookie with the info would suffice)
7. Run tests from target computer

Just spitballing ideas, it would make it pretty painless to use this way and sticks pretty much to your original plan.
1 point
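One possible shape for that hand-off, with invented endpoints and field names (nothing here is TMN's real API; it just shows the token-plus-polling pattern behind the steps above):

```ts
// Hypothetical QR pairing flow: the target computer asks for a token, shows it
// as a QR code, and polls until the signed-in phone approves it.
async function pairViaQr(apiBase: string) {
  // Steps 1-3: request a pairing token and render it as a QR code.
  const { token } = await (await fetch(`${apiBase}/pair/new`, { method: "POST" })).json();
  renderQr(`${apiBase}/pair/approve?token=${token}`); // phone scans this URL

  // Steps 4-6: wait for the phone (already authenticated) to approve the token.
  while (true) {
    const { approved, projectId } = await (
      await fetch(`${apiBase}/pair/status?token=${token}`)
    ).json();
    if (approved) {
      // Simple 24-hour association via cookie, as suggested in the post.
      document.cookie = `tmn_project=${projectId}; max-age=${60 * 60 * 24}`;
      return projectId; // step 7: tests on this computer now log to the project
    }
    await new Promise(r => setTimeout(r, 2000)); // poll every 2 s
  }
}

declare function renderQr(url: string): void; // assume some QR rendering helper exists
```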
-
You can't add tests that have already been performed. This works by creating side databases and then querying the new data stored. You would create a project (just enter a name), and it then generates a URL (a simple 5-character key). After you visit the URL on the client machine, it instructs TMN to additionally store the results in the project's side database. You'd then run your tests, going from area to area... giving each a unique sub ID. Along the way, from the My Project screen you'll see the number of results and averages for each sub-project. You can then query each database individually.

After the system sees that 2 or more sub-project databases have results for a given project, it will display the option to aggregate the databases together for you on the fly. From that you can see all of the results and differentiate between each of your sub-projects' results. You can then click 'export' and dump all the results to CSV, again aggregated with all of the data included to help you differentiate the results.

I think this will be especially helpful for your use case, but home users can benefit as well. Being able to go from room to room and quickly switch the project helps build a map of possible weak areas. It's like identifiers on a much deeper level... and then the identifiers can be used on top of this if that helps some people. Great idea dude, I can't wait for you to put it to use. I don't know how useful QR codes will really be... but I'm having fun.
1 point
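A loose sketch of the data flow described above, with guessed shapes rather than TMN's actual schema: a short project key, results tagged with that key plus a sub ID, and a merge step for the export.

```ts
// Illustrative only -- field names and key alphabet are assumptions.
interface ProjectResult {
  projectKey: string;  // the 5-character key from the project URL
  subId: string;       // e.g. "Room 101", "Floor 2 West"
  downMbps: number;
  upMbps: number;
  testedAt: string;
}

// Generate a short, URL-friendly project key.
function newProjectKey(len = 5): string {
  const chars = "abcdefghjkmnpqrstuvwxyz23456789";
  return Array.from({ length: len }, () =>
    chars[Math.floor(Math.random() * chars.length)]
  ).join("");
}

// Aggregate several sub-project result sets into one export, keeping the sub ID
// so the rows can still be told apart after the merge.
function aggregate(...subSets: ProjectResult[][]): ProjectResult[] {
  return subSets.flat().sort((a, b) => a.subId.localeCompare(b.subId));
}
```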
-
Sweet! Any way it can be set up to work in reverse? I.e. allow me to be on a desktop / laptop computer at a client site and scan it with my phone to add the test(s) to the project?
1 point
-
From my testing so far, the Beta appears to work well with my 4G based Internet connection at home. However, when I managed to give it a quick test run at my workplace, the beta kept delivering speeds under 1Mbps down in Chrome even though they have a 10Mbps DSL connection. From further testing at home and setting upload / download limits on my MikroTik router, I found that when I set the upload speed to 512Kbps to match the DSL uplink at my workplace, I am able to replicate the issue here and also uncovered a few other small issues.

With my workplace DSL connection, the following is the Beta test followed by the linear test in Chrome with the UK server. Retest with the German server: as I write this post, I see the up/down rows are swapped on the left. 🙃 I did one more test in Edge, and although it performed better than Chrome, the upload and download were still around half the linear download test.

Other observations:

- The Beta test does not mention it being a multithread test in the test result. For comparison, the non-beta multithread test mentions "Multithread".
- Download tests with a block size under 1MB incorrectly show the kB as MB in the test results page. For example, the following test result on the left shows a test block of 205 kB, however in the test results page it shows "205 MB".
- If the download or upload test is unable to fetch all the blocks, it gets stuck. This happened a few times, probably due to the small packet loss on my 4G connection, such as the following screenshot where it endlessly waited for the final 2 kB block.
1 point
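The "205 kB shown as 205 MB" item is a unit-labelling problem; a formatter along these lines (illustrative only, not TMN's code) avoids it by choosing the unit from the magnitude instead of assuming MB:

```ts
// Pick the display unit from the byte count rather than hard-coding "MB".
function formatBytes(bytes: number): string {
  if (bytes < 1_000) return `${bytes} B`;
  if (bytes < 1_000_000) return `${(bytes / 1_000).toFixed(0)} kB`;
  if (bytes < 1_000_000_000) return `${(bytes / 1_000_000).toFixed(1)} MB`;
  return `${(bytes / 1_000_000_000).toFixed(2)} GB`;
}

console.log(formatBytes(205_000)); // "205 kB", not "205 MB"
```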
-
I tried retesting with my router upload limited to 0.5Mbps up (to mimic my workplace DSL uplink) and the 10MB manual download block performs better, giving about 7-8Mbps in Chrome after a few tests. With a 102MB manual block it gets around 50Mbps. With linear, it gives around the full download speed, like when I do not have the upload throttled to 512Kbps.

Going by your test results above, it appears you either did not limit the upload speed to 512Kbps on your gateway / router (to mimic a slow 512Kbps DSL upload) or it was measuring the 512Kbps upload limit as 39.2Mbps. The following shows the 512Kbps upload configured on my router to mimic the 512Kbps upload limit on my workplace's DSL connection. With MikroTik routers, FastTrack must be disabled (IP -> Firewall) for speed limits to take effect.
1 point
-
Alright, it's coming along very nicely. I'm able to enter a project name and sub name (if desired). TMN then generates a short, friendly URL and puts it on your list. When you visit the URL it greets the client and ties that computer to the project for 30 days. The URL is the sign-in. From your project list you can see how many results each has and query the results as you normally do. Then export the results. Everything in the database works the same. Add extra identifiers on top of this to further differentiate.

Just need to give it some logic for aggregating each project's sub-projects into one query, then polish the UI a little more, and you can get working with the program immediately. I think it will make things easier for you. After I read your post on Wednesday I meditated for what felt like only 15 minutes and plotted it all out. And then it just poured out onto my keyboard. Your details helped me visualize. TMN's framework helped put it together quickly... it was pretty much written, I just had to give it instructions for this scenario. I'm making this specifically for you and don't really care if anyone else uses it. Original Members get original tools!
1 point
-
Thank you for the details, it really helps. Well underway now; I wrote about 20 kB last night and into this morning. I'll update this thread.
1 point
-
Honestly, I would pay $10-20/mo for a service (i.e. "TMN Enterprise") where I could add projects, then add buildings/floors/rooms within that project and take sampled results that I can then export either to a file or via an API. It would be nice to have a little parity in concept with the way my Flukes work, like the below screenshot. Typically, when I use the LIQ-100 or DSX2-8000 I:

1. Set up a project within the device
2. Run a test / measurement
3. Save the test / measurement with the patch number or some other label indication
4. Export into Fluke LinkWare for PC
5. Create the facility structure and sort the results into the proper location
6. Generate a PDF from the results to send to the client

With TMN, I would imagine (and prefer) a workflow like the following:

1. Set up a project
2. Set up the location / further structure, such as the floor, etc.
3. Run a test
4. Save the test to the project (or not) with a name (such as "Room", etc.)
5. Export the results as a CSV, JSON, or have an API to retrieve them so they can be dumped into a PDF later

Often, if I'm trying to do a speed test at a client site, I'm doing it either on a client PC, or I'm doing wireless testing with my laptop. Having a platform where I could log the test to my project(s) without having to totally log into TMN would be nice as well. I.e. having some type of Username -> MFA Authenticator app request to save the one result to my projects would be awesome.

These are just some thoughts; it would be nice to have a speed-test tool I could speak to that functioned in a useful way. Even a "My Projects" would be a great start, at least then I can start using my laptop to do client-site testing and troubleshooting.
1 point
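To make the requested export concrete, here is one possible record shape and CSV dump for that workflow. The field names are invented for illustration; TMN does not expose this structure today.

```ts
// Hypothetical export record: project -> building/floor/room -> labelled result.
interface SiteTest {
  project: string;
  building?: string;
  floor?: string;
  room?: string;
  label: string;       // e.g. patch number or "Conference Rm AP-3"
  downMbps: number;
  upMbps: number;
  testedAt: string;    // ISO timestamp
}

// Dump a set of results to CSV so they can be pulled into a client report later.
function toCsv(rows: SiteTest[]): string {
  const header = "project,building,floor,room,label,downMbps,upMbps,testedAt";
  const lines = rows.map(r =>
    [r.project, r.building ?? "", r.floor ?? "", r.room ?? "",
     r.label, r.downMbps, r.upMbps, r.testedAt].join(",")
  );
  return [header, ...lines].join("\n");
}
```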
-
You can kinda do that right now by using identifiers. Then take note of the date, and later query the database for that date range to pull only the results (theoretically) from the one site. Maybe make a separate TMN account for these types of results. I intend on making it much easier to organize in your situation by incorporating sub accounts, which will help keep track of multiple overall scenarios. I like the name you just came up with, My Projects. Bumping this up in my to-do list.
1 point
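As a minimal sketch of that workaround, assuming exported results with an identifier and timestamp field (the shape is a guess, not TMN's export format):

```ts
// Tag each run with an identifier, then pull only the rows for one site/date range.
interface Run { identifier: string; testedAt: string; downMbps: number; upMbps: number }

function resultsForSite(runs: Run[], identifier: string, from: Date, to: Date): Run[] {
  return runs.filter(r => {
    const t = new Date(r.testedAt);
    return r.identifier === identifier && t >= from && t <= to;
  });
}
```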
-
$95/mo for my 1 gig works for me. Better than comcrap trying to charge me $250 for 2 boxes/phone/internet years ago.
1 point
-
Hi John, thank you for sharing the great information. Very happy to hear that you've improved your speeds. Did you switch your DNS to Cloudflare 1.1.1.1 (and 1.0.0.1) or Google 8.8.8.8 (and 8.8.4.4)? For other people who'd like to do this, you can find it in the Connection Guide under "Step 6 - Improve Your DNS Resolution and Privacy".

DNS over HTTPS is available for Firefox users. I'm pretty certain that this is going to become the new standard and will eventually roll out to all browsers. You may be able to do this at the router level in the future too. ISPs won't like that. Right now, all of your DNS traffic is being sent in clear text, meaning it could be snooped. DNS over HTTPS encrypts your DNS queries. Encryption always has the potential to slow things down, but (reading the link above) they say in many cases it's actually faster. Again, happy to hear you've picked up some extra speed that you've been paying for.
1 point
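For anyone curious what DNS over HTTPS looks like under the hood, here is a small illustration using Cloudflare's public JSON resolver endpoint. Your browser's built-in DoH setting (e.g. in Firefox) does the equivalent transparently; this is just to show that the lookup travels inside an encrypted HTTPS request rather than plain-text DNS.

```ts
// Resolve a hostname's A records over HTTPS via Cloudflare's dns-query endpoint.
async function dohLookup(hostname: string) {
  const res = await fetch(
    `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(hostname)}&type=A`,
    { headers: { accept: "application/dns-json" } }
  );
  const data = await res.json();
  // Each answer entry carries the resolved address in its "data" field.
  return (data.Answer ?? []).map((a: { data: string }) => a.data);
}

dohLookup("testmy.net").then(addresses => console.log(addresses));
```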