In Louisville, Time Warner Cable recently started massively upgrading its internet network to compete with Google Fiber's imminent arrival. While many people rejoiced, the upgrade has led to a surge of hardware problems in homes throughout the city. For starters, the old modems that worked so well for so long can't handle the new bandwidth, so they need to be replaced. Then, depending on the speed tier, many people are discovering that their routers only have 100-megabit ports, so they can't use their 300-megabit internet. After that, the G (54-megabit) or N (300-megabit) wireless that served the home well for years suddenly can't keep up, so users have to wire in or upgrade to AC wireless. From there, the devices on the other end may have to be upgraded to take advantage of that new wireless. It's a good problem to have, but it's been a reality in many homes (including mine): constantly fixing one thing only to have it expose a new problem.
Every network has a bottleneck somewhere; it's impossible to build an environment where every component operates at the same speed. Even if there's no overcommitment anywhere in the network (oh, but there will be), the internet connection won't match the LAN speed, the wireless will be slower than the wire, and the servers' NICs will aggregate to more potential bandwidth than the switch uplinks can carry. Within the server, the CPU may have to wait on storage, and the storage may have to wait on the connectivity between the server and the storage array. There is always a bottleneck.
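To make the uplink point concrete, here's a rough back-of-the-envelope sketch. The port counts and speeds are hypothetical examples, not figures from any real environment, but the math is the standard way to reason about oversubscription.

```python
# Oversubscription math for a hypothetical top-of-rack switch:
# 24 servers, each with two 10 Gb NICs, behind two 40 Gb uplinks.

def oversubscription_ratio(downlink_gbps_total, uplink_gbps_total):
    """Ratio of potential server bandwidth to available uplink bandwidth."""
    return downlink_gbps_total / uplink_gbps_total

servers = 24
nics_per_server = 2
nic_gbps = 10
uplinks = 2
uplink_gbps = 40

downlink_total = servers * nics_per_server * nic_gbps  # 480 Gb/s potential
uplink_total = uplinks * uplink_gbps                   # 80 Gb/s available

ratio = oversubscription_ratio(downlink_total, uplink_total)
print(f"{ratio:.1f}:1 oversubscribed")  # 6.0:1 oversubscribed
```

A ratio like this isn't automatically a problem; servers rarely all burst at once. It only becomes one when actual traffic approaches that theoretical ceiling, which is exactly the "is this bottleneck hurting yet?" question the next paragraph raises.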
The key to all of this is whether the bottleneck in question is actually causing enough of a problem to be worth addressing. Before my internet was upgraded, everything at home worked great and I had zero concerns about any of my components. As soon as I upgraded the internet speed, I suddenly had loads of problems and had to replace four pieces of hardware.
It's very normal to upgrade one piece of the environment only to find that the upgrade causes problems elsewhere. For example, adding flash to a one-gigabit iSCSI SAN may immediately reveal that the one-gigabit connection can't keep up; it's now the bottleneck. It wasn't before, when the storage was slower, but now that the storage is much faster, the connection suddenly can't keep up. Likewise, a gigabit uplink from your access switch to the distribution switch may have been fine for a long time, but once you hang five or six AC access points off of it, it may no longer be fast enough and you'll have to upgrade it.
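To see why the flash upgrade exposes the link, compare throughputs directly. The storage figures below are illustrative ballpark assumptions (a spinning-disk array around 100 MB/s, a flash array around 1,000 MB/s), and the 10% protocol-overhead factor is a rough rule of thumb, not a measured value.

```python
# Which limits throughput: the storage array or the link? (illustrative numbers)

def effective_throughput_mbps(storage_mbps, link_gbps, protocol_overhead=0.9):
    """The slower of the array and the (overhead-adjusted) link wins."""
    link_mbps = link_gbps * 1000 / 8 * protocol_overhead  # Gb/s -> MB/s
    return min(storage_mbps, link_mbps)

# Spinning-disk array behind a 1 Gb iSCSI link: the array is the bottleneck.
print(effective_throughput_mbps(storage_mbps=100, link_gbps=1))   # 100

# Flash array behind the same 1 Gb link: now the link caps you at ~112 MB/s,
# and you see almost none of the flash's potential.
print(effective_throughput_mbps(storage_mbps=1000, link_gbps=1))  # 112.5
```

The upgrade didn't create the link's limit; it just moved the bottleneck from the disks to the wire, which is the pattern this whole section describes.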
When you consider an upgrade to a component of your environment, think about all the pieces that interact with it. Will those parts have the bandwidth to take advantage of it? Is the hardware powerful enough to leverage the new speed? Does it have all of the necessary connections? Will your applications actually perform faster, or are they limited by their design? Ask yourself whether you're simply moving the limitation from one place to another.
Upgrades require this kind of careful consideration, because for a business, upgrading switches, cabling, or servers costs tens of thousands of dollars (not just a hundred bucks for a new modem and router for your home Wi-Fi). Unexpected costs like that quickly make everyone involved look bad and can cause the project to fail. Think through your whole environment before committing to any upgrade.