Let me start by saying I am not a network analyst, never been one, but I did stay at a Holiday Inn Express last night. I want to look at WANs, networks and bandwidth from the perspective of how they affect archiving of images (big sets of data) across the WAN. For any truly smart network folk, I appreciate comments and corrections. Now to the meat of things.
Networks are like a highway, and you are driving a car or pick-up truck. There are two main factors that matter: the number of lanes and the speed you are driving. If you are on an old country road, it is likely 2 lanes and you may be driving 40-50 MPH. Not bad, and if it's just you, you don't need more than 2 lanes and you get there just fine. The number of lanes is bandwidth, the time the trip takes is latency (so driving faster means lower latency), and the road is the connection, sometimes called the pipe.
If more people start driving on the same road, traffic slows down; you can add lanes or raise the speed limit. It makes sense to expand the road to 2, 3 or even 4 lanes. At a certain point, though, expanding the lanes doesn't help. Why? Because it costs a lot of money and you get only incremental benefits in speed. Sure, you may go from 50 MPH to 60 or even 70, but you don't get much faster than that, even if you have 12 lanes. And obviously, even if you have a ton of bandwidth, if you are driving slowly you are unhappy.
Latency, the time it takes data to make the trip, can be dramatically affected by the route you take. Say you are driving from Dallas to Chicago. According to Google Maps, you take 75 north, then 69 north and finally I-44; it is 927 miles and should take just under 14 hours. Let's say there is a wreck on I-44 and your trusty phone re-routes you through Charlotte, NC. Your trip is now 1,785 miles and will take 27 hours... No bueno. This is EXACTLY how data gets routed in and around the internet. In this case your latency just went from 14 hours to 27 hours. It really doesn't matter how many lanes the highway has, you have a long drive ahead of you. In the networking world this path is, coincidentally, called the route. It is often measured by the number of "hops," which is the number of routers your data passes through, much like the number of cities between you and your destination. 4 or 5 hops is good; 10 or 12 is bad. The more hops, the longer it will take, and the more likely you got routed through Atlanta or Charlotte on your way to Chicago.
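The road-trip math above can be sketched in a few lines of Python. This is just a back-of-envelope illustration of the analogy (the 67 MPH average speed is my assumption to match the "just under 14 hours" figure): notice that the number of lanes never appears in the formula, only distance and speed.

```python
# Trip time depends on the route and speed, not the number of lanes.
# The 67 MPH average speed is an illustrative assumption.

def trip_hours(miles, mph):
    """Travel time = distance / speed. Lanes (bandwidth) don't appear here."""
    return miles / mph

direct = trip_hours(927, 67)    # Dallas -> Chicago, the good route
rerouted = trip_hours(1785, 67) # same speed, re-routed through Charlotte

print(round(direct, 1))    # about 13.8 hours
print(round(rerouted, 1))  # about 26.6 hours
```

Same car, same speed limit, nearly double the trip: that is what a bad route does to your latency.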
Now, to add insult to injury, suppose you have a big load to send: a 100 MB file, or a 600 MB breast tomo exam. To continue the analogy, call it a ton of bricks. You can only fit a portion of those bricks in your trusty pick-up (I am really from Dallas). If your truck can only fit 1/10 of the bricks at a time, you need to make 10 trips. Now you can see that the latency adds up very quickly, because of course each load is really a round trip: your truck has to drive over and come back for the next load. Your network does this as well. You send some data, and the other side sends back a verification of what it received before you send more. This is where someone will say, AH HA! I will just send 10 trucks at once! I DO need more bandwidth! Unfortunately, it just doesn't work that way; you can't put all the data on the wire at once. As the file is broken up, constraints in the systems themselves (what the network folks call the window size) limit each transfer to, say, 3 trucks on the road at a time. Let's say that is state law, a limited number of trucks in America, sun spots, I don't know... it just is.
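Here is a rough sketch of why those round trips hurt so much. The numbers are illustrative assumptions (a 3 MB "truckload" in flight at a time, and two example round-trip times); the point is that when the window is the limit, total time scales with the number of round trips, not with how fat the pipe is.

```python
import math

# A hedged sketch: the sender can only keep a limited "window" of data in
# flight before it must wait for an acknowledgment (the truck coming back).
# Window size and round-trip times below are illustrative assumptions.

def transfer_time_s(file_mb, window_mb, rtt_ms):
    """Rough lower bound when the window, not bandwidth, is the constraint."""
    trips = math.ceil(file_mb / window_mb)  # how many truckloads
    return trips * (rtt_ms / 1000.0)        # each load waits one round trip

# 600 MB tomo exam, 3 MB in flight at a time:
print(round(transfer_time_s(600, 3, 10), 1))  # short route, 10 ms round trip
print(round(transfer_time_s(600, 3, 80), 1))  # long route, 80 ms round trip
```

Same file, same truck, but the longer route multiplies every one of those 200 round trips, so the total wait grows eightfold.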
If you have stayed with me through all of this, you should see that there is a balancing act going on. You want enough bandwidth that you are not constrained to one lane, but at a certain point the constraint tips, and it is latency, not bandwidth, that is slowing down your data transfer. So, what can be done? That, my friends, I will leave to the network people, but I think it has something to do with point-to-point connections and dedicated routes through "the cloud."
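The tipping point can actually be put in a formula. Once the window is full, throughput tops out at window size divided by round-trip time, no matter how many lanes you pave. This is the bandwidth-delay product idea; the 64 KB window and 50 ms round trip below are illustrative assumptions, not measurements.

```python
# The balancing act in one formula: with a full window, throughput is capped
# at window / round-trip time. Numbers here are illustrative assumptions.

def max_throughput_mbps(window_kb, rtt_ms):
    """Ceiling on throughput for a window-limited transfer."""
    bits_in_flight = window_kb * 1024 * 8
    return bits_in_flight / (rtt_ms / 1000.0) / 1_000_000

# A 64 KB window over a 50 ms route:
print(round(max_throughput_mbps(64, 50), 1))  # about 10.5 Mbps
```

Run those numbers and you get roughly 10.5 Mbps, even if the link underneath is a gigabit highway. Past that point, only a shorter route (lower latency) or a bigger window moves the needle, not more lanes.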
Please let me know what topics you would like to discuss.