mudmanc4

Everything posted by mudmanc4

  1. If flash-based tests were as accurate as claimed, why is it that each time I do have a known issue, and I'm testing as much as possible in as many ways as I can to locate the problem along the line, they show no issues as a general rule? When I'm well aware there is an issue. Makes no sense. From what I understand, and I could be incorrect, many of these flash tests run many small tests and then average the results. If that's so, it's not real world. Real world today is maxing out bi-directional throughput at once, or one solid chunk of data transferred in one fashion or another. Point out the flaw in my thinking.
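The averaging concern above can be shown with a toy calculation. The throughput numbers are invented for illustration, not real measurements; the point is that a mean over many short samples can look healthy even when individual transfers stall:

```python
# Toy illustration: averaging many small test samples can mask a stall.
# Throughput samples in Mbps from a hypothetical run (invented numbers):
samples = [20.0, 19.5, 20.5, 21.0, 20.0, 19.0, 21.0, 2.0]

average = sum(samples) / len(samples)  # what an averaging test reports
worst = min(samples)                   # what a sustained transfer would feel

print(f"reported average: {average:.1f} Mbps")  # prints 17.9 Mbps
print(f"worst sample:     {worst:.1f} Mbps")    # prints 2.0 Mbps
```

The averaged figure still looks close to the rated speed, while the 2 Mbps stall that a single sustained transfer would run into is nearly invisible in it.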
  2. I had to set up a massive, growing [cannot name it] forum with dual databases a couple of years ago, and it really made a difference. Last time I checked they had over 90k members with nearly 6 million posts, around 10k active members, and at any given time of day roughly 2-3 thousand members online at once. That's the only thing that saved the place at the time. Not sure what they're doing now; I have nothing to do with it. As far as this project, I'm experimenting with using sockets ('SOCKS') or chrooting the SQL server, which I'm finding is no trivial task on a VPS, considering it needs to find all its resources within the jail. Just creating its own directory and cramming it in there is not going to work as securely as setting it up within its own volume will.
  3. 6,000 concurrent connections on 2 GB of RAM running Apache? You meant users online, right? Even 1,000 concurrent connections to an NGINX server would be pushing it way over the top, unless of course it's 90% static material? If you figure Apache will need 10 MB of RAM per concurrent request and you have 1,000 instances, that in itself is way off the charts at around 10 GB. I know my comment is literal, and unlikely qualified for your setup, but there's not much room for the application with that math. What am I missing? edit: I had to go redo the math in a different light. Roughly 10 KB per request at 1,000 concurrent connections is 9.7 MB per second (that's CONCURRENT, or persistent database connections, if set up this way). Am I wrong?
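The back-of-the-envelope math can be written out. The 10 MB per-worker figure is the rough assumption from the post (prefork-style Apache), not a measured value:

```python
# Rough capacity math for prefork-style Apache (assumed ~10 MB per worker).
ram_mb = 2048          # 2 GB of RAM on the box
per_worker_mb = 10     # assumed resident size per concurrent request
connections = 6000     # the claimed concurrency

needed_mb = connections * per_worker_mb     # RAM the claim would require
max_workers = ram_mb // per_worker_mb       # workers the box can hold

print(f"RAM needed for {connections} workers: {needed_mb / 1024:.1f} GB")
print(f"Workers 2 GB could actually hold: {max_workers}")
```

That comes out to roughly 58.6 GB required, versus about 200 workers the box could actually hold, and that is before the OS, the database, and everything else take their share.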
  4. I'm at the point of {way too far in depth} concerning RSA / DSA SSH keys. DSA is faster at generation but slower at validation, and RSA is really no longer the industry standard, other than USB cards etc. I did say industry standards, not what people are still using lol. There's quite a bit of variability in any way this is done; considering this is for database security, speed as well as security are concerns, as well as staying as PCI compliant as possible.
  5. At this point I've been working with several variables to benchmark overall system performance for MySQL. Yes, I started from a basic point of usability, resource allocation, and performance compared to the last several years of logs. I've boiled it down to using Debian (squeeze) for the database server. Although the latest stable mysql-server is 5.1, I don't foresee any issues, at least at this point. Any thoughts?
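For benchmarking runs like this, it helps to pin down the handful of my.cnf knobs being compared between runs. A minimal sketch, with placeholder values for a small VPS rather than recommendations:

```ini
# /etc/mysql/my.cnf fragment -- placeholder values, tune per workload
[mysqld]
key_buffer_size         = 32M   # MyISAM index cache
innodb_buffer_pool_size = 128M  # main InnoDB data/index cache
max_connections         = 100
query_cache_size        = 16M   # query cache still exists in 5.1
slow_query_log          = 1     # compare against prior years' logs
```

Logging these alongside each benchmark makes the year-over-year comparison against the old logs meaningful, since a changed buffer size explains more variance than the Debian release usually does.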
  6. I was outside, banging on the door, bang bang bang. I heard laughing and clapping, and all that glass clinking going on. I sat on the step for a while, pouted, and walked home scared, tired, and lonely.
  7. Actually, testmy.net is telling you there is an issue somewhere along the testing route, which is the very reason for its being. Running traceroutes can tell us where the lagging router is; checking peer availability and congestion is another approach. There was a time when this was not an issue, yet the recent national routing changes, specific cross-continental advertising and video streaming services, not to mention the latest knowledge of the Utah datacenter for national security and its special routing procedures, have caused a few peering issues that the ISP cannot and does not have any say in. For instance, I can test to testmy.net Dallas from the Silicon Valley area (Cupertino) like it's within my ISP's internal network, but testing from NW Ohio renders drastically different results. This is not a flaw; this is valuable information. You can get information on the status of various peers if you so choose here; I use these tools regularly. Have a look at the image below to understand main routing.
  8. Welcome ainos
  9. No need to close it imo; it's a great example of how ISP internal network testing servers are really no good to anyone other than a level 1 tech, to see if your connection is working. And to see real-world testing done, and compare the ISP's peering and their actual capability to supply an internet connection or not.
  10. Hi clinycb, sounds at first thought like a local DNS issue. You may want to check your router and make sure you have a trusted DNS server set up, as the dial-up line will not be using the same network. Before changing anything though, try going to testmy.net via the IP address 174.120.187.140. This may or may not work, but it would be good to know before changing anything on the router.
  11. Hope you have a great week oxoxoxox
  12. Great story dann0 no doubt
  13. Must be the procedure of the test itself ? See if / what the boss has to say.
  14. Has to do with peering, from what I've gathered. At some point there is a crappy peer node. I've been dealing with that myself for a couple of years.
  15. Hi Mike, sorry for the confusion. I was going to elaborate more on some of the OS X shortcuts and post them here. This thread was something I started and did not follow through with, hence why I said I let it go. We do appreciate your posts for sure.
  16. Yea, my thoughts are to chuck all GUI application access; it just makes things more difficult. The plan I've had on that server is $300/annually, 1000 GB/month, 512 MB memory, 100 domains, unlimited databases, 30 GB disk, blah blah, so I can't go wrong really. But I've well outgrown that and have drawn attention. Hence the secondary server with the same resources less a bit, no panel (unless I install one), which I will not. So Plesk is the only way with that pricing. I could chuck it, but really I'd have to wipe the VPS and start playing outside of what it's set up for. Which I did at one point, but went back because at that time I could not administer DNS zones for separate domains etc. Plesk has that ability, but I want to do more than it can handle within the application itself.
  17. I always make a new page when I learn something new. I start it as I go, add to it, then wrap it in CSS and go, so that's not an issue. It shouldn't be too difficult in any sense, but making sure there's nothing open,* not listening on any port other than the specified non-standard one, and creating a key set between the application server and the database server while allowing users to create a database through the panel or via SSH, is what's going to make me think. I'll make a simple database, maybe with WordPress or something, a wiki or who knows, and test with this. Then I'll have one of you guys portscan and probe the hell out of it from every direction, even give out the credentials. Without the keys it should be impossible. I'm thinking of a way to do a MAC address auth as well. No? But if it's too complex,** it'll slow the data transmission down too much. edit: *actually listening on no ports at all ** Keep-alives? And all the databases are run as different users with different passwords.
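Before anyone probes the box from outside, the "nothing listening on other ports" claim can be sanity-checked with a small sketch; the host and the port list are placeholders, not the actual setup:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "127.0.0.1"              # placeholder: the database server
    for port in (22, 3306, 33061):  # placeholder ports to audit
        state = "OPEN" if port_open(host, port) else "closed"
        print(f"{host}:{port} {state}")
```

This only confirms what answers a plain TCP connect; a full external scan (nmap or similar) is still worth doing, since it also catches UDP and filtered-vs-closed distinctions.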
  18. Last time I had to do this I used [bind-address=wha.te.ver.ip] with a specific port, granting users, flushing, and dealing with iptables. Done. This time I'd like to create a tunnel between the two, using the external server solely for SQL. I've got everything installed: rsync set up locally and remote [for when I bork something lol], SQL 5.3 x86_64, CentOS 5.5 running only the basic services needed for access and the SQL server. The httpd server is running Plesk, which has a remote SQL server ability, but it only has very basic settings, such as administrator and password, duh? So this will be done with, and without, Plesk support, which bothers me not, and is one good reason the secondary server is running no GUI, including phpMyAdmin or anything else unnecessary like it. Ideally it will be done with Plesk support, to allow the creation of databases on the remote machine from the user's panel, as the local SQL server will be uninstalled completely. I've already set up password-less SSH logins for myself locally, as well as the backup server via rsync. I've read quite a bit about tunneling SQL, but before I begin I'd like to hear from anyone with experience, or any suggestions. Thanks for any replies.
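A minimal sketch of the tunnel idea being described, assuming OpenSSH on both ends and MySQL bound to loopback on the database box; the hostnames, user names, and ports here are placeholders:

```shell
# On the web/app server: forward local port 3307 to MySQL on the db box.
# -f background after auth, -N no remote command, -L local-forward spec.
ssh -f -N -L 3307:127.0.0.1:3306 dbuser@db.example.com

# The application then connects to 127.0.0.1:3307 as if MySQL were local:
mysql --host=127.0.0.1 --port=3307 --protocol=TCP -u appuser -p
```

With bind-address=127.0.0.1 on the remote my.cnf, MySQL never listens on a public interface at all; the only exposed port is SSH, and the password-less key logins already in place handle authentication for the tunnel.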
  19. Yea, well, sometimes you have to do things that you know well don't follow normal thinking.
  20. I've grabbed another VPS without any panel of any type (I hate them anyhow), so I'll make a database server out of it and relieve some stresses on the main server. It's difficult enough to optimize an x86_64 server with 512 MB of RAM; on top of that, try running a heavily modified version of Apache and SQL 5.3. Between Plesk and the SQL server it eats nearly 400 MB itself. This should take care of all of it. In the meantime I'm still looking for the reason for this crazy shit, since I do not run either. My thoughts stay the same: a vBulletin directory permission issue somewhere. I've always disliked it from the time I paid too much for it. Sloppy, unorganized, clunky resource hog (the bitch got back) lol
  21. Yea, about ten years ago. But people are still using them like mad. Just the software went another direction, from what I understand.
  22. Thanks for the input, guys. Since most of the domains I have are on auto-renew, I wasn't watching. Seems last year about this time Verisign increased the costs by roughly $0.50, so registrars are jacking up everything. And the .co's have become highly sought after, so now I'll be paying roughly $30.00 each per year, and the .us's are about the same, give or take. Prompts me to get that eNom reseller account I need anyhow; since IPB has implemented this within Nexus, I planned on it. Anyhow, I wish Tucows would handle the .us and .co's. I have dozens of .net and .com's with them and have never had any issues at all, and at 12 bucks overall you can't beat it, although the chances of this going up are increasing all the time. Getting rid of the domain squatters might be a good thing. Since everything is taken up and parked, no content, nothing, it sends many things in a funky direction and puts many web entrepreneurs in a tough position. Look at it this way: I have a customer I've been building an ecommerce suite for; they sell handmade-in-USA dog products such as pillows and beds, knickknacks of all sorts. Her legal company name is taken by a squatter, who wants $6,000.00 for the domain. So I went to a .us domain and boom, 30 bucks lol.
  23. Yea, I just figured that out. I'm looking to get away from GoDaddy on several .us domains and quite a few .co, as they've more than doubled their registrar fees in one year. After reading the massive amounts of issues people are having with Hover, I'm a bit skeptical, and still doing initial investigations as to whether I can even use them as a registrar, or if they only serve SoftLayer. I would use Tucows for them all, but they do not handle .co or .us. Hence, looking for a decent registrar.
  24. List your hosting provider and your experience with them, good or bad. I've created a poll with a few of the most popular domain registrars and hosting providers by name; if yours is not up there when you post, it will be added. Any and all experiences are welcome.