
Testmy.net Connect failed: Connection refused


Stang777


Hi,

 

I am hoping someone can help me connect to the testmy.net speed test site. I have used this site without a problem for over a year, but tonight I ran a test and the results were much lower than I usually get, so I deleted the results from my saved tests, then reset my modem. After my modem came back up, I could not connect to the site that runs the speed tests. I can connect to every other site I try, including the forum, but when I try to go to the test site, it goes to a page that just says... Connect failed: Connection refused.

 

I have restarted my computer and used CCleaner to clear my cookies and cache, just as I have for years, yet when I try to get the site to load the speed tests, I get a page that says... Connect failed: Connection refused.

 

While I can use any other speed test site without a problem, and my speeds are right back to the high speed of around 250 that I expect, this is the only speed test site I care to use.

 

Does anyone have any ideas that might make it so I can use this site to test my speeds again?

 

Thank you for any and all help you might be able to give me :)


Sorry, I woke up to master database issues.

 

I'm running on an outdated slave while I make copies of the data and get the master back online.

 

Your most recent results will be missing until it's back online.

 

Sorry about the inconvenience everyone.  I've been needing to migrate this server for a while now... it started displaying issues a while back.  I've already built up the new server and just need to copy databases over to it.  It's not an easy move, hundreds of GB of data to move around.

 

I'll update this thread as I have updates.


Thank you so very much for responding to my post and giving me that information. I am sorry you are having issues with your database and server, but am very happy to know it isn't something wrong on my end. I will just be patient as you work it all out. Thank you again for responding and letting me know. I wish you a lot of good luck in getting everything worked out and situated. I also thank you for providing this great site for testing our speeds, it, and you, are very much appreciated.


We're running back on the master. 

 

I need to import yesterday's results from my backup database over to the master; for now you'll be missing the last 24 hours of results. I should be able to take care of that this evening.

 

 

Sorry again about any inconvenience.  Please let me know if you see any residual errors.


No problem, it happens.  Until I figure out a better system for storing data.... this is what happens.  Every so often I get headaches like this but it reminds me why I do this.  It also teaches me to do it better.  Because to be honest, this SUPER embarrasses me.  Any form of downtime is unacceptable.  That pushes me to make TMN more reliable for you.

 

First thing I want to share is https://status.testmy.net/ -- I will be continuing to add more monitors of different types there.  This is monitored by a third party (as it needs to be), UptimeRobot.  They are not a sponsor; they may not even know of TMN.  In fact, I just paid them...  I was looking for a Pingdom alternative and liked them so much I paid for a year right away.  Pingdom is great, by the way; I just want something simpler that does the job for less money... that's how I do it.  I like them enough not to hide their logo on that stats page, even though they offer me that ability.

 

Second, I found that the Master database server is running on half its memory.  I had a crash a few months ago that prompted me to build a new server.  I feel like any crash is a time to build a new server... once I build one, it normally runs without a reboot... until I want to replace it.

 

When it crashed, it didn't want to come back online.  I was frantic and completely forgot about my IPMI controls... but Carl at Data102 quickly ran from the 2nd floor to the 12th floor and got to my console.  It gave an error about missing memory.  He hard-rebooted it and it had been online since.

 

What I never realized until today when I rebooted: it's not registering memory in a slot I know has memory in it.

 

[Screenshot, 2018-12-13: server memory readout showing the populated DIMM slots]

 

P1-DIMMA1 should be present.

 

Sigh.  -_-

 

One thing people may not realize when they visit here is that this is built by a dude in his basement.  They also might not realize that the same dude has to build the systems that run the site, down to the individual hardware components.  I had a cloud hosting company offer me (I'm not joking) "$50,000 per month of services for 6 months... just try us out!" -- a large, reputable company that I use for my mirrors.  They aren't joking.  But I don't want to go down that rabbit hole.  For the resources I'm comfortable running... they don't even offer packages.  I scroll down the page, get to the end and it's a ridiculous price.  I'm like, "yeah right, I'll buy my hardware and build it up."

 

Okay, so I ordered more memory.  I'm upgrading the slave while I'm at it.  I can't get the v2 architecture in the server I just built to perform as well as the v3 and v4 in the beasts I already have running.  The server I just built is 20 core, 40 thread... it's faster in some workloads, but for what I need it for, it can't compete.  Even my slave server is 2X faster in MySQL queries, and it's an ant compared to the master server.  The slave server is right on par with the master in the same queries... but Mistress falls behind, 2X slower.  We can't have that, even as a backup.  So Mistress will have to be a backup of a backup... forever. :haha:

 

I feel now, it's time to bring the slave up to the master's standard.

 

I only have an E5-2603 v3 with 16 GB non-ECC and one 256 GB EVO PRO.  I ordered an E5-2630 v4, 32 GB of Crucial 2400 ECC, and I already have three 256 GB EVO 850 PROs on hand... added to the other one to build a new RAID-Z2 array.  I also ordered a new 16 GB Crucial 2133 ECC stick to replace the one that may have died.  I may find that the slot died; who knows until I get in there.
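
For anyone curious, a four-SSD RAID-Z2 array like that is a one-liner in ZFS.  A minimal sketch, assuming made-up device names and a made-up pool name (not necessarily how this server will actually be built):

# four-drive RAID-Z2 pool: any two drives can fail without losing data
zpool create -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# confirm the layout and health of the new pool
zpool status tank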

 

So the slave will pick up a larger redundant storage array, 32 GB of buffered memory (instead of 16 GB unbuffered) and 10 more processing threads (up from 6).  Fewer threads than the Master, and the Master also has redundant power... so it will always be the Master until I get something more powerful with redundant power.  (Datacenters need to do routine maintenance sometimes; if you're on A/B power they'll turn off A... work... then turn off B.  The server sounds an alarm but keeps running.  I maintained service and data integrity 100% during that type of maintenance before.  It's worth having just for that, not to mention a power supply failure.)

 

... the 32GB 2400 MT/s memory will go in the Master, the 2133 goes in the slave.

 

So I'll have to make sure everything is pushed to the Master, take down the slave... open it up and take out the non-ECC RAM, put in the new stick of 2133 I just ordered (and CPU/SSDs).  Bring it back online, push everything to the slave... take down the Master... install the 2400 sticks... bring the Master back up.. take down the slave and rebuild it from scratch.  100% new software setup.  The server I just recently built still comes into play... it's the backup while the backup is offline.  ;)  --- then the slave goes back online... Master comes down, it gets The Royal Treatment.  Ready to come back online and remind you that the backup server by comparison, is still a backup server.

 

It can be fixed a few different ways.  My favorite way is to start from scratch.  Fresh kernel, fresh install... and while you're at it, fix any shortcomings.  I feel like the CPU power on the slave CAN'T keep up with TMN... the Master with an E5-2640 v4 has a hard time with many tasks, and the E5-2603 v3 is like 10X slower.  It would take many days to import the main databases for TMN on the slave; it still takes at least 12 hours on the Master... but that's much better than MANY days.

 

Second shortcoming, it needs ECC RAM.  It's the only server I have without this.

 

Third, it needs a RAID-4 or RAID-Z2 array.  It's the only server I have without this.

 

If that stick of memory is bad... according to Crucial it will be covered under warranty.  I'll update this thread with the RMA later.

 

Memory and CPUs are expensive right now.  The CPU and memory I'm replacing are selling used for > 2X what I paid for them in 2016!  That's a win... but the parts I need to buy cost 2X more also.  --- Screw all of your investments in 2016... you should have bought Xeons and high-end server RAM and put them in a box, lol, no joke.  Double your money, are you kidding me!?  Nice.   ... me and my stupid bitcoins over here. :lol:  (obviously kidding, bitcoin is far from stupid,  e.g. the year 2021)

 

Site may crash around 5:45-6am... I'll be waiting.  If it doesn't, we'll hopefully be good until my hardware comes in from Amazon. :-D -- after I've installed the hardware we're looking at up to a 24-hour window of instability.  The slave server should minimize any downtime, but a lot is going on during that window.  Fingers crossed it all goes smoothly.

 

BOOM, CRASH!  Using the slave DB.

 

I'm rebuilding the Master database again, sorry guys.  I'll probably have to rebuild the ResultDetails database after that too; usually if one of those crashes they both do... but not always.  I might have to push the master databases over to the Mistress server.  We're talking 0.16 sec vs 0.30 sec for the same query.  Not a huge deal.  I'm telling you... for this to happen with this frequency, something is majorly wrong with this MySQL server.
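
For anyone following along, the repair shown below is plain myisamchk run against the table files in the data directory.  A rough sketch of the general workflow, assuming a /var/lib/mysql data directory and a systemd-managed MariaDB (both assumptions, not necessarily how this server is set up):

# stop writes to the table first (here by stopping MariaDB entirely)
systemctl stop mariadb

cd /var/lib/mysql/tmn_scores

# quick check to see whether the table is marked as crashed
myisamchk --check Master

# rebuild the data file and indexes (same as the -r shown below)
myisamchk --recover Master

systemctl start mariadb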

 

[root@master tmn_scores]# myisamchk -r Master
- recovering (with sort) MyISAM-table 'Master'
Data records: 0
- Fixing index 1
128840000

 

That's 1 of 6 indexes; that will take a little time.

 

- Fixing index 1
144940000

 

Pretty fast though.  Just in the time I typed that.  I remember this taking forever years ago.

 

[root@master tmn_scores]# myisamchk -r Master
- recovering (with sort) MyISAM-table 'Master'
Data records: 0
- Fixing index 1
151880000

 

I'll get us back where we should be.  Any results you log right now (while we're on the backup database) may not show in the original database but I'll import them later.  ;) 

 

I need sleep once the time comes but it's going to be nearly impossible right now.

 

On index 3 now...

 

:-D

 

 


Actually, it was 5 indexes on that table.  On to ResultDetails...

 

[root@master tmn_scores]# myisamchk -r Master
- recovering (with sort) MyISAM-table 'Master'
Data records: 0
- Fixing index 1
- Fixing index 2
- Fixing index 3
- Fixing index 4
- Fixing index 5
Data records: 151885140
[root@master tmn_scores]# cd ../
[root@master mysql]# cd tmn_scoresXI
[root@master tmn_scoresXI]# myisamchk -r ResultDetails
- recovering (with sort) MyISAM-table 'ResultDetails'
Data records: 120457254
- Fixing index 1
70010000

 


It kept crashing; at first it wasn't crashing tables, but toward the end it was doing it every time.

 

Everything has been migrated over to a new server.  It feels correct now.

 

Sorry it took longer than it should have.  I tried a direct copy of the MySQL data directory, which would have been faster, but after starting MySQL it kept having speed issues, as if it had no indexes.  After multiple attempts to repair, I gave up and did it the slow, reliable way: mysqldump and import.
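
The "slow, reliable way" is the standard dump-and-import cycle.  A rough sketch of it, using the tmn_scores name from the console output above; the host name and paths are just placeholders:

# dump on the old server (schema + data as plain SQL; --quick streams big tables row by row)
mysqldump -u root -p --quick tmn_scores > tmn_scores.sql

# copy the dump to the new server
scp tmn_scores.sql newserver:/root/

# import on the new server; indexes are rebuilt as part of the load
mysql -u root -p -e "CREATE DATABASE tmn_scores"
mysql -u root -p tmn_scores < /root/tmn_scores.sql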

 

This new MariaDB (MySQL) server serves just one purpose: the master databases.  The forum is on a different server... I think that helped a lot this time.  The forum is heavily integrated; its integration can be toggled on and off, but it's not ideal for me to do that.  Having the forum database on a different server minimized downtime and left this line of communication open.

 

Before, the server running MariaDB had other tasks to deal with.  Not that it couldn't handle them... but I feel it overcomplicated the ground floor to the point that it was hard to determine the source of this issue.  Many different software packages updating... complicates things.  Sure, you can dig into logs and eventually fix it... but I feel at that point it's time to refresh and learn from mistakes.

 

So this server is running a super simple MariaDB-only install.
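
A dedicated, single-purpose MariaDB box usually just comes down to a short my.cnf.  A hedged sketch of what that might look like for a MyISAM-heavy workload; every value here is illustrative, not TMN's actual configuration:

[mysqld]
bind-address            = 10.0.0.5   # private interface the web servers reach it on (example value)
key_buffer_size         = 8G         # large MyISAM index cache, since the box does nothing else
myisam_sort_buffer_size = 2G         # speeds up REPAIR TABLE / index rebuilds on big tables
max_connections         = 200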

 

Please let me know if you see anything out of place or not working.


Hi,

 

Is the server running? It's been around 2 weeks now and I can get to the main page:

 

[Screenshot: the TestMy.net main page loading normally]

 

But when I click Test My Internet, no matter which one (download, upload or combined), I always get this:

[Screenshot: the error page shown when starting a test]

 

Any ideas what's going on? I know it looks like the owner is moving servers and so on, but come on, it's not rocket science! It can't take 2 weeks?! lol Thanks in advance.


18 minutes ago, Cryian said:

Any ideas what's going on? I know it looks like the owner is moving servers and so on, but come on, it's not rocket science! It can't take 2 weeks?! lol Thanks in advance.

 

There was a reference to the wrong database server address.  It appeared on my end to be working because I hadn't selected one of the test types that uses the predictive algorithm.  The regular tests I was selecting don't reach out to the database.

 

Thank you for pointing that out.  It just needed to be pointed to the correct address.

 

You're right, it's not rocket science.  It's computer science...

 

I'll add this to my monitoring (testing those database-driven links every minute) so that if I make that mistake again I'll be alerted right away.  I thought I had manually checked for this, but I was checking the wrong URLs.  I changed the reference during a smaller crash (that I caused on the backend by stopping a repair process)... it only went down for a few seconds, and I then needed to change the IP back to the correct one.
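
A minimal sketch of that kind of check, run from cron every minute; the URL, log path, and error string are placeholders, and a real setup would send an alert instead of just logging:

#!/bin/sh
# Cron entry (every minute): * * * * * /usr/local/bin/check_db_links.sh
URL="https://testmy.net/db-driven-page-example"   # hypothetical endpoint, not a real TMN URL
LOG=/var/log/tmn_link_check.log

# fail if the page is unreachable or returns an HTTP error
BODY=$(curl -sf --max-time 10 "$URL") || { echo "$(date): $URL unreachable" >> "$LOG"; exit 1; }

# the page loaded, but make sure it isn't the application error screen
echo "$BODY" | grep -q "Something went wrong" && echo "$(date): $URL returned error page" >> "$LOG"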

 

It takes a lot of time to reorganize TMN's databases when a crash this major happens.  Hundreds of millions of entries across hundreds of thousands of tables across dozens of databases.  It's not a simple process like most other websites may deal with... there is A TON of data and there are restrictions on how fast it can be imported.  I was pretty prepared for this incident.  Had a backup server on standby, had a new server built and on standby... restore procedure in writing... and it still took forever.  There are always unforeseen issues. 

 

Thank you for reporting this, please let me know if you see anything else that needs attention.

 


  • 2 weeks later...

29/December/2018

 

I thought I should let you know that when I am using my VPN and try a speed test, this message appears: "Something went wrong. Please try again." This only started happening about 2 months ago. I get frustrated, as I believe TestMy.net is the best and most honest speed test. I hope this doesn't upset you.


As of yesterday, I am getting "Something went wrong. Please try again." when I try to connect to the main page; I can only get the forum. This is from both computers and 2 different browsers (Firefox and Opera), no VPN. (So, different from Bernard.) Are you down? Hope you have solved your server issues without too much hair torn out. You offer a great service that is much appreciated, even by duffers like me.


  • CA3LE locked this topic
This topic is now closed to further replies.