BillCherryAtl Posted January 7, 2012

Hi guys, I thought I'd post a solution to a problem I had recently - it might help someone else. Last year I was getting 30+ Mbps down with Comcast using their "Boost" option (costs a bit more than regular service). Then this year, it dropped to 10 Mbps. Of course I complained to Comcast, ran all types of scans, checked my PC with HijackThis, etc. Nothing. I have a Dell XPS with a motherboard capable of handling 24GB of RAM. I had 15GB in it, but then took out a 1GB stick and put in a 4GB stick, giving me 18GB of RAM. I also have a bootable second hard drive "F" with Win 7 Ultimate, but my primary "C" drive has Win 7 Home Premium. The puzzler is that the secondary "F" drive was still getting more than 30 Mbps down while the primary "C" drive was only getting 10 Mbps. What's up with that? I tried everything until I discovered that Windows 7 Home Premium can only "handle" a maximum of 16GB of RAM - Win 7 Ultimate can "handle" 192GB. When I dropped my RAM back to 16GB, my speeds returned to above 30 Mbps on the primary C drive. The lesson for me is that if you put more RAM in your machine than your operating system can handle, you might downgrade your speed when you think you're helping.
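The edition caps described above match Microsoft's published physical-memory limits for 64-bit Windows 7. As a rough sketch of the effect (the function name and the "usable RAM = min(installed, cap)" model are simplifications for illustration, not how Windows reports memory):

```python
# Physical-memory caps for 64-bit Windows 7 editions, per Microsoft's
# "Memory Limits for Windows Releases" documentation.
WIN7_X64_RAM_LIMITS_GB = {
    "Home Basic": 8,
    "Home Premium": 16,
    "Professional": 192,
    "Enterprise": 192,
    "Ultimate": 192,
}

def usable_ram_gb(installed_gb: int, edition: str) -> int:
    """Simplified model: the OS uses the smaller of installed RAM and its cap."""
    return min(installed_gb, WIN7_X64_RAM_LIMITS_GB[edition])

print(usable_ram_gb(18, "Home Premium"))  # 16 -> 2GB of the 18GB goes unused
print(usable_ram_gb(18, "Ultimate"))      # 18 -> Ultimate sees all of it
```

This is consistent with what BillCherryAtl saw: the Home Premium install capped out at 16GB while the Ultimate install on the other drive could use all 18GB.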
TriRan Posted January 7, 2012

I have heard that using more than the supported amount of RAM can cause glitches. Glad you got it sorted.
nanobot Posted January 7, 2012

They can all technically handle the same amount of RAM; Microsoft imposes edition limits to push you to upgrade. The only real difference is 32-bit vs. 64-bit. The reason 32-bit can only support four gigabytes of RAM is the limitation of addressing: 32-bit OSes can only address 2^32 bytes of RAM (4,294,967,296; divide this by 1024 three times to get to the gigabyte level and we end up with 4). 64-bit operating systems have the same style of limitation: they can address 2^64 bytes of RAM (18,446,744,073,709,551,616; divide this by 1024 three times and we get 17,179,869,184 gigabytes, divide three more times and we finally come up with 16 exabytes). However, because of the massive memory capability of 64-bit operating systems, Microsoft (and other OS distributors) limit the amount of memory per edition: there is literally no computer that can use that much memory, and to keep operating commercially they have to do something so that you're required to upgrade at some point.

I am sure that using more than the supported amount doesn't itself cause the glitches; it's probably the programming that causes them. Really, it should just mean the OS cannot address the extra amount (2GB in his case). At least, that's what I would expect to happen; then again, I don't spend my life studying RAM in particular.

Thanks, EBrown
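The address-space arithmetic above can be checked in a few lines (illustrative only; the function name is invented for this sketch):

```python
# Address-space limits: an n-bit address bus can address 2**n bytes.
def max_addressable_bytes(bits: int) -> int:
    return 2 ** bits

GIB = 1024 ** 3  # divide by 1024 three times = one division by 1024**3

print(max_addressable_bytes(32) // GIB)          # 4 -> the 4GB cap of 32-bit OSes
print(max_addressable_bytes(64) // GIB)          # 17,179,869,184 gigabytes
print(max_addressable_bytes(64) // GIB // GIB)   # 16 -> 16 exabytes
```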
TriRan Posted January 8, 2012

I meant more that using 4GB of RAM on a 32-bit OS can cause glitches, or so I have heard; I've never experienced it myself.
tommie gorman Posted January 8, 2012

(quoting BillCherryAtl's original post)

I like that food for thought; I will remember it for sure. And I feel I have seen the same thing myself. Thanks for sharing it.
BillCherryAtl Posted January 8, 2012 (Author)

EBrown - thank you for your insightful information. I'm sort of a geek who thrives on knowledge about all things computer. I appreciate your post.
nanobot Posted January 8, 2012

(quoting BillCherryAtl above)

Not a problem. In all honesty, it's the same reason IPv6 is replacing IPv4: we ran out of IP addresses. With the 128-bit structure of IPv6, we literally have enough addresses for each person in the world to have trillions of them for themselves (340,282,366,920,938,463,463,374,607,431,768,211,456 total addresses; estimating 6,840,507,000 people in the world, that works out to approximately 4.97 × 10^28 addresses PER PERSON). All of this leaves us to wonder: what will we do with that many addresses?

The address space is divided into two sections, similar to IPv4: a network portion (the first 64 bits) and a host portion (the last 64 bits); we have already seen that each section holds 18,446,744,073,709,551,616 values. The 64-bit network portion is divided up even further, with the first 32 bits allocated to the Internet registries, which allocate a /32 block (32-bit network prefix) to an ISP. RFC 6177 then recommends distributing that into a /56 for each end-user site, i.e. 256 /64 networks per site (this replaced RFC 3177, which recommended a /48 per site, thus allowing more end users per /32 block). This leaves each /32 block with 16,777,216 /56 sites; in other words, each /32 block can serve 16,777,216 end-user sites.

The only problem now is transitioning to IPv6, which is a much larger task than most people thought. (Considering the hundreds of thousands, if not millions, of routers and switches in the world that all have to transition, this is going to be interesting.) There are actually multiple methods to make the transition easier, including IPv6 tunneling standards that wrap IPv6 traffic through an IPv4 network. (Although we are still in the process of getting it distributed throughout the world.) For the most part, China and the American DoD are the only places fully converged, at least that I know of.

Thanks, EBrown
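The allocation arithmetic in that post can be verified directly (a quick sketch; the variable names are invented for illustration):

```python
# IPv6 allocation arithmetic from the post above.
TOTAL_ADDRESSES = 2 ** 128            # full 128-bit IPv6 address space
WORLD_POPULATION = 6_840_507_000      # the 2012 estimate used in the post

per_person = TOTAL_ADDRESSES // WORLD_POPULATION
print(f"{per_person:.2e} addresses per person")   # roughly 4.97e28

# An ISP gets a /32; RFC 6177 recommends a /56 per end-user site.
sites_per_isp_block = 2 ** (56 - 32)  # /56 sites inside one /32
subnets_per_site = 2 ** (64 - 56)     # /64 networks inside one /56

print(sites_per_isp_block)  # 16,777,216 end-user sites per /32
print(subnets_per_site)     # 256 networks per site
```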
mudmanc4 Posted January 9, 2012

I think the IP address will be useless and outdated within 20 years, give or take. Everything will be accessing the same databases, running off each device's network. There's no need to allocate any specific identification if all data is shared. How, you say? Maybe one of the oldest techniques known to computing: random number generators. Each time a device requests data, it's given a unique hash for that specific session, which expires immediately after the session closes, within milliseconds of the transmission ending. With the speeds we'll likely see by then, the movement of information will be thousands of times faster than it is now, so there's no need to allocate a specific machine ID. Run off the rails with that vision.
tommie gorman Posted January 9, 2012

(quoting EBrown's IPv6 post above)

Maybe get my dog one for Christmas.