Wednesday, April 1, 2009

The Agenda

- Shared LAN Technology

- LAN Switching Basics

- Key Switching Technologies

We'll begin by looking at traditional shared LAN technologies. We'll then look at LAN switching basics, and then some key switching technologies, such as spanning tree and multicast controls.

Let's begin our discussion by reviewing shared LAN technologies.

Shared LAN Technology

Early Local Area Networks

The earliest Local Area Network technologies that were installed widely were either thick Ethernet or thin Ethernet infrastructures. And it's important to understand some of the limitations of these to see where we're at today with LAN switching. With thick Ethernet installations there were some important limitations, such as distance, for example. Early thick Ethernet networks were limited to only 500 meters before the signal degraded. In order to extend beyond the 500 meter distance, it was necessary to install repeaters to boost and amplify that signal. There were also limitations on the number of stations and servers we could have on our network, as well as the placement of those workstations on the network.

The cable itself was relatively expensive, and it was also large in diameter, which made it difficult or more challenging to install throughout the building, as we pulled it through the walls and ceilings and so on. As far as adding new users, it was relatively simple. We could use what was known as a non-intrusive tap to plug in a new station anywhere along the cable. And in terms of the capacity that was provided by this thick Ethernet network, it provided 10 megabits per second, but this was shared bandwidth, meaning that 10 megabits was shared amongst all users on a given segment.

A slight improvement to thick Ethernet was thin Ethernet technology, commonly referred to as Cheapernet. This was less expensive, and it required less space in terms of installation than thick Ethernet because it was actually thinner in diameter, which is where the name thin Ethernet came from. It was still relatively challenging to install, though, as it sometimes required what we call home runs, or a direct run from a workstation back to a hub or concentrator. And adding users required a momentary interruption in the network, because we actually had to cut or make a break in a cable segment in order to add a new server or workstation. So those are some of the limitations of early thin and thick Ethernet networks. An improvement on thin and thick Ethernet technology was adding hubs or concentrators into our network. And this allowed us to use something known as UTP cabling, or Unshielded Twisted Pair cabling.

As you can see indicated in the diagram on the left, Ethernet is fundamentally what we call a shared technology. That is, all users of a given LAN segment are fighting for the same amount of bandwidth. This is very similar to the cars you see in our diagram, here, all trying to get onto the freeway at once. This is really what our frames, or packets, do in our network as we're trying to make transmissions on our Ethernet network. So, this is actually what's occurring on our hub. Even though each device has its own cable segment connecting into the hub, we're still all fighting for the same fixed amount of bandwidth in the network.

Some common terms that we hear associated with the use of hubs: sometimes we call these Ethernet concentrators, or Ethernet repeaters, and they're basically self-contained Ethernet segments within a box. So while physically it looks like everybody has their own segment to their workstation, they're all interconnected inside of this hub, so it's still a shared Ethernet technology. Also, these are passive devices, meaning that they're virtually transparent to the end users. The end users don't even know that those devices exist; they don't have any role in terms of a forwarding decision in the network whatsoever, and they don't provide any segmentation within the network whatsoever. And this is basically because they work at Layer 1 in the OSI framework.

Collisions: Telltale Signs

A by-product that we have in any Ethernet network is something called collisions. And this is a result of the fundamental characteristic of how any Ethernet network works. Basically, what happens in an Ethernet network is that many stations are sharing the same segment, and any one of these stations can transmit at any given time. If 2 or more stations try to transmit at the same time, it's going to result in what we call a collision. This is actually one of the early tell-tale signs that your Ethernet network is becoming too congested, or that we simply have too many users on the same segment. And when collisions in the network become excessive, this is going to cause sluggish network response times, and a good way to measure that is by the increasing number of user complaints that are reported to the network manager.
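When a collision does occur, Ethernet's CSMA/CD rules have each station wait a random backoff interval before retransmitting, with the window doubling after each successive collision. Here's a minimal Python sketch of that truncated binary exponential backoff; the cap at 10 doublings follows the classic Ethernet scheme, and the function is purely illustrative rather than taken from any real driver:

```python
import random

def backoff_slots(attempt: int) -> int:
    """Pick a random backoff delay (in slot times) after the given
    collision attempt, per truncated binary exponential backoff:
    the window doubles each attempt, capped at 2**10 slots."""
    k = min(attempt, 10)             # window stops growing after 10 attempts
    return random.randrange(2 ** k)  # 0 .. 2**k - 1 slot times

# A station that has collided 3 times waits between 0 and 7 slot times.
assert 0 <= backoff_slots(3) <= 7
```

This is part of why congestion shows up as sluggish response: the more stations collide, the longer everyone ends up waiting before retransmitting.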

Other Bandwidth Consumers

It's also important to understand fundamentally how transmissions can occur in the network. There are basically three different ways that we can communicate in the network. The most common way is by way of unicast transmissions. And when we make a unicast transmission, we basically have one transmitter that's trying to reach one receiver, which is by far the most common, or hopefully the most common, form of communication in our network.

Another way to communicate is with a mechanism known as a broadcast. And that is when one transmitter is trying to reach all receivers in the network. So, as you can see in the diagram, in the middle, our server station is sending out one message, and it's being received by everyone on that particular segment.

The last mechanism we have is what is known as a multicast. And a multicast is when one transmitter is trying to reach, not everyone, but a subset or a group of the entire segment. So as you can see in the bottom diagram, we're reaching two stations, but there's one station that doesn't need to participate, so it's not in our multicast group. So those are the three basic ways that we can communicate within our Local Area Network.
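At Layer 2, those three transmission types are actually visible in the destination MAC address itself: the broadcast address is all ones, and multicast addresses have the group (I/G) bit, the low-order bit of the first octet, set. A small sketch in Python (the sample addresses below are illustrative):

```python
def frame_type(dst_mac: str) -> str:
    """Classify an Ethernet destination MAC as unicast, multicast,
    or broadcast. The group (I/G) bit is the low-order bit of the
    first octet; broadcast is the all-ones address."""
    octets = [int(b, 16) for b in dst_mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    if octets[0] & 0x01:              # group bit set -> multicast
        return "multicast"
    return "unicast"

print(frame_type("ff:ff:ff:ff:ff:ff"))  # broadcast
print(frame_type("01:00:5e:00:00:01"))  # multicast
print(frame_type("00:1b:44:11:3a:b7"))  # unicast
```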

Broadcasts Consume Bandwidth

Now, in terms of broadcasts, it's relatively easy to broadcast in a network, and that's a transmission mechanism that many different protocols use to communicate certain information, such as address resolution, for example. Address resolution is something that all protocols need to do in order to map Layer 2 MAC addresses up to logical layer, or Layer 3, addresses. For example, in an IP network we do something known as ARP, the Address Resolution Protocol. And this allows us to map Layer 3 IP addresses down to Layer 2 MAC-layer addresses. Also, routing protocol information is distributed by way of broadcasting, and some key network services in our networks rely on broadcast mechanisms as well.

And it doesn't really matter what our protocol is, whether it's AppleTalk or Novell IPX, or TCP/IP, for example; all of these different Layer 3 protocols rely on the broadcast mechanism. So, in other words, all of these protocols produce broadcast traffic in a network.
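As a concrete sketch of the address resolution described above, here is a minimal ARP-style cache in Python that maps Layer 3 IP addresses down to Layer 2 MAC addresses. The class name and the addresses are made up for illustration; a real host would broadcast an ARP request on a cache miss instead of just returning None:

```python
class ArpCache:
    """Minimal sketch of an ARP cache: maps Layer 3 IP addresses to
    Layer 2 MAC addresses, as learned from ARP replies."""
    def __init__(self):
        self.table = {}

    def learn(self, ip: str, mac: str) -> None:
        self.table[ip] = mac

    def resolve(self, ip: str):
        # A miss would trigger a broadcast ARP request on a real host.
        return self.table.get(ip)

cache = ArpCache()
cache.learn("192.0.2.10", "00:1b:44:11:3a:b7")
print(cache.resolve("192.0.2.10"))   # known mapping
print(cache.resolve("192.0.2.99"))   # None -> would broadcast a request
```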

Broadcasts Consume Processor Performance

Now, in addition to consuming bandwidth on the network, another by-product of broadcast traffic is that it consumes CPU cycles as well. Since broadcast traffic is sent out and received by all stations on the network, that means that we must interrupt the CPU of every station connected to the network. So here in this diagram you see the results of a study that was performed with several different CPUs on a network. And it shows you the relative level of CPU degradation as the number of broadcasts on a network increases.

So you can see, this study was based on a SPARC2 CPU, a SPARC5 CPU, and also a Pentium CPU. And as the number of broadcasts increased, the amount of CPU cycles consumed, simply by processing and listening to that broadcast traffic, increased dramatically. The other thing we need to recognize is that a lot of the time, the broadcast traffic in our network is not needed by the stations that receive it. So what we have then in shared LAN technologies is our broadcast traffic running throughout the network, needlessly consuming bandwidth and needlessly consuming CPU cycles.

Hub-Based LANs

So hubs were introduced into the network as a better way to scale our thin and thick Ethernet networks. It's important to remember, though, that these are still shared Ethernet networks, even though we're using hubs.

Basically what we have is an individual desktop connection for each individual workstation or server in the network, and this allows us to centralize all of our cabling back to a wiring closet, for example. There are still security issues here, though. It's still relatively easy to tap in and monitor a network by way of a hub. In fact it's even easier to do that, because all of the resources are generally located centrally. If we need to scale this type of network, we're going to rely on routers to scale it beyond the workgroup, for example.

It makes adds, moves, and changes easier, because we can simply go to the wiring closet and move cables around; but we'll see later on that it's even easier with LAN switching. Also, in terms of our workgroups, in a hub or concentrator based network, the workgroups are determined simply by the physical hub that we plug into. And once again, we'll see later on with LAN switching how we can improve this as well.

Bridges

Another way is to add bridges. In order to scale our networks we need to do something known as segmentation, and bridges provide a certain level of segmentation in our network. Bridges do this by adding a certain amount of intelligence into the network. Bridges operate at Layer 2, while hubs operate at Layer 1. So operating at Layer 2 gives us more information in order to make an intelligent forwarding decision.

That's why we say that bridges are more intelligent than a hub, because they can actually listen in, or eavesdrop on the traffic going through the bridge, they can look at source and destination addresses, and they can build a table that allows them to make intelligent forwarding decisions.

They actually collect and pass frames between two network segments and while they're doing this they're making intelligent forwarding decisions. As a result, they can actually provide greater control of the traffic within our network.

Switches—Layer 2

To provide even better control we're going to look to switches, which provide the most control in our network, at least at Layer 2. And as you can see in the diagram, we've improved the model of traffic going through our network.

Getting back to our traffic analogy, as you can see looking at the highway here, we've actually subdivided the main highway so that each particular car has its own lane that it can drive on through the network. And fundamentally, this is what we can provide in our data networks as well. So when we look at our network, we see that physically each station has its own cable into the network; conceptually, we can think of this as each workstation having its own lane through the highway. This is something known as micro-segmentation. That's a fancy way simply to say that each workstation gets its own dedicated segment through the network.

Switches versus Hubs

If we compare that with a hub or with a bridge, we're limited in the number of simultaneous conversations we can have at a time. Remember that if two stations tried to communicate in a hubbed environment, that caused something known as collisions. Well, in a switched environment we're not going to expect collisions, because each workstation has its own dedicated path through the network. What that means in terms of bandwidth, and in terms of scalability, is that we have dramatically more bandwidth in the network. Each station now will have a dedicated 10 megabits per second worth of bandwidth.

So when we look at our switches versus our hubs, remember that in the top diagram we're looking at a hub. And this is when all of our traffic was fighting for the same fixed amount of bandwidth. Looking at the bottom diagram, you can see that we've improved our traffic flow through the network, because we've provided a dedicated lane for each workstation.

The Need for Speed: Early Warning Signs

Now, how can you tell if you have congestion problems in your network? Well, some early things to watch out for include increased delay on our file transfers. If basic file transfers are taking a long, long time in the network, that means we may need more bandwidth. Another thing to watch out for is print jobs that take a very long time to print out. From the time we queue them from our workstation till the time they actually get printed, if that's increasing, that's an indication that we may have some LAN congestion problems. Also, if your organization is looking to take advantage of multimedia applications, you're going to need to move beyond basic shared LAN technologies, because those shared LAN technologies don't have the multicast controls that we're going to need for multimedia applications.

Typical Causes of Network Congestion

As for the causes of this congestion, if we're seeing those early warning signs, one thing we might want to look for is whether we have too many users on a shared LAN segment. Remember that shared LAN segments have a fixed amount of bandwidth. As we add users, proportionally, we're degrading the amount of bandwidth per user. So we're going to get to a certain number of users where there's too much congestion, too many collisions, too many simultaneous conversations trying to occur all at the same time.

And that's going to reduce our performance. Also, consider the newer technologies that we're using in our workstations. With early LAN technologies the workstations were relatively limited in terms of the amount of traffic they could dump on the network. Well, with newer, faster CPUs, faster busses, faster peripherals and so on, it's much easier for a single workstation to fill up a network segment. So by virtue of the fact that we have much faster PCs, and we can do more with the applications that are on them, we can more quickly fill up the available bandwidth that we have.

Network Traffic Impact from Centralization of Servers

Also, the way the traffic is distributed on our network can have an impact as well. A very common thing to do in many networks is to build what's known as a server farm, for example. Well, in a server farm, effectively what we're doing is centralizing all of the resources on our network that need to be accessed by all of the workstations in our network. So when we start doing that, what we're going to do is cause congestion on those centralized segments, or backbone resources, within the network.

Servers are gradually moving into a central area (data center) versus being located throughout the company to:

- Ensure company data integrity
- Maintain the network and ensure operability
- Maintain security
- Perform configuration and administrative functions

More centralized servers increase the bandwidth demands on campus and workgroup backbones

Today’s LANs

- Mostly switched resources; few shared
- Routers provide scalability
- Groups of users determined by physical location

When we look at today's LANs, the ones that are most commonly implemented today, we're looking at mostly switched infrastructures; because of the price point of deploying switches, many companies are bypassing the shared hub technologies and moving directly to switches. Even within switched networks, at some point we still need to look to routers to provide scalability. And we also see that the grouping of users is largely determined by physical location. So that's a quick look at traditional shared LAN technologies. What we want to do now, since we know those limitations, is look at how we can fix some of those issues. We want to see how we can deploy LAN switches to take advantage of some new, improved technologies.

LAN Switching Basics

- Enables dedicated access
- Eliminates collisions and increases capacity
- Supports multiple conversations at the same time

First of all, it's important to understand the reason that we use LAN switching. Basically, switches provide what we called earlier micro-segmentation. Again, micro-segmentation provides dedicated bandwidth for each user on the network. What this is going to do is eliminate collisions in our network, and it's going to effectively increase the capacity for each station connected to the network. It'll also support multiple, simultaneous conversations at any given time, and this will dramatically improve the bandwidth that's available, and it'll dramatically improve the scalability in our network.

LAN Switch Operation

So let's take a look at the fundamental operation of a LAN switch to see what it can do for us. As you can see indicated in the diagram, we have some data that we need to transmit from Station A to Station B.

Now, as we watch this traffic go through the network, remember that the switch operates at Layer 2. What that means is the switch has the ability to look at the MAC-layer address, the Media Access Control address, that's on each frame as it goes through the network.

And we're going to see that the switch actually looks at the traffic as it goes through to pick off that MAC address and store it in an address table. So, as the traffic goes through, you can see that we've made an entry into this table recording each station and the port that it's connected to on the switch.

Now what happens, once that frame of data is in the switch, we have no choice but to flood it to all ports. The reason that we flood it to all ports is because we don't know where the destination station resides.

Once that address entry is made into the table, though, when we have a response coming back from Station B, going back to Station A, we now know where Station A is connected to the network.

So what we do is we transmit our data into the switch, but notice the switch doesn't flood that traffic this time; it sends it only out port number 3. The reason is that we know exactly where Station A is on the network, because of that original transmission we made. On that original transmission we were able to note where that MAC address came from. That allows us to more efficiently deliver that traffic in the network.
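The learn-flood-forward behavior we just walked through can be sketched in a few lines of Python. This is an illustrative model, not any vendor's implementation: the switch records the source MAC's ingress port, forwards out a single port when the destination is known, and floods otherwise:

```python
class LearningSwitch:
    """Sketch of Layer 2 learning: note each source MAC's ingress
    port, then forward to the known port or flood to all others."""
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}                  # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port    # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]             # forward one port
        return [p for p in range(self.num_ports) if p != in_port]  # flood

sw = LearningSwitch(4)
print(sw.receive("A", "B", in_port=3))  # B unknown: flood ports 0, 1, 2
print(sw.receive("B", "A", in_port=1))  # A learned on port 3: forward [3]
```

Notice the flood on the first frame and the single-port forward on the reply, exactly as in the Station A and Station B example above.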

Switching Technology: Full Duplex

Another concept that we have in LAN switching that allows us to dramatically improve scalability is something known as full duplex transmission. This effectively doubles the amount of bandwidth between nodes. It can be important, for example, between high bandwidth consumers, such as a switch-to-server connection, and it provides essentially collision-free transmissions in the network.

And what this provides, for example, on 10 megabit per second connections, is effectively 10 megabits of transmit capacity and 10 megabits of receive capacity, for effectively 20 megabits of capacity on a single connection. Likewise, for a 100 megabit per second connection, we can get effectively 200 megabits per second of throughput.

Switching Technology: Two Methods

Another concept that we have in switching is that there are actually two different modes of switching. And this is important because it can actually affect the performance, or the latency, of the switching through our network.

Cut-through

First of all we have something known as cut-through switching. What cut-through switching does is, as the traffic flows through the switch, the switch simply reads the destination MAC address; in other words, we find out where the traffic needs to go. As the data flows through the switch we don't actually look at all of the data. We simply look at that destination address, and then, as the name implies, we cut it through to its destination without continuing to read the rest of the frame.

Store-and-forward

And that allows us to improve performance over another method known as store-and-forward. With store-and-forward switching, what we do is actually read, not only the destination address, but the entire frame of data. As we read that entire frame, we then make a decision on where it needs to go, and send it on its way. The obvious trade-off there is that reading the entire frame takes longer.

But the reason that we read the entire frame is that we can do error detection on that frame, which may increase reliability if we're having problems with that in a switched network. So cut-through switching is faster, but the trade-off is that we can't do any error detection in our switched network.
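The trade-off between the two modes can be sketched as follows. This is a simplified model: the real Ethernet frame check sequence is a CRC-32 computed with specific bit-ordering rules, which we approximate here with zlib's CRC-32 over the whole frame body:

```python
import zlib

def store_and_forward(frame: bytes) -> bool:
    """Read the whole frame and verify its trailing 4-byte check
    sequence (modeled with CRC-32) before forwarding. Higher
    latency, but corrupted frames get dropped."""
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == fcs

def cut_through(frame: bytes) -> bool:
    """Forward as soon as the 6-byte destination MAC has been read.
    Lower latency, but errors are never detected here."""
    return len(frame) >= 6   # enough bytes to have seen the destination MAC

good = b"\x00\x1b\x44\x11\x3a\xb7" + b"payload"
good += zlib.crc32(good).to_bytes(4, "big")
bad = good[:-1] + bytes([good[-1] ^ 0xFF])       # corrupt the check sequence

print(store_and_forward(good), store_and_forward(bad))  # True False
print(cut_through(good), cut_through(bad))              # True True
```

The corrupted frame sails straight through the cut-through path, which is exactly the reliability trade-off described above.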

Key Switching Technologies

Let's look at some key technologies within LAN switching.

- 802.1d Spanning-Tree Protocol

- Multicasting

The Need for Spanning Tree

Specifically, we'll look at the Spanning Tree Protocol, and also some multicasting controls that we have in our network. As we build out large networks, one of the problems we have at Layer 2 in the OSI model is that, if we're just making forwarding decisions at Layer 2, we cannot have any Physical Layer loops in our network.

So if we have a simple network with a loop, as we see in the diagram here, anytime these switches have any multicast, broadcast, or unknown traffic, that's going to create storms of traffic that get looped endlessly through our network. So in order to prevent that situation we need to cut out any of the loops.

802.1d Spanning-Tree Protocol (STP)

That's where the Spanning Tree Protocol, or STP, comes in. This is actually an industry standard defined by the IEEE standards committee, known as the 802.1d Spanning Tree Protocol. It allows us to have physical redundancy in the network, but it logically disconnects those loops.

It's important to understand that we logically disconnect the loops, because that allows us to dynamically re-establish a connection if we need to, in the event of a failure within our network. The way that the switches do this, and actually bridges can do this as well, is that they simply communicate by way of a protocol, back and forth. They basically exchange these little hello messages.

If they stop hearing a given communication from a certain device on the network, we know that a network device has failed. And when a network failure occurs, we have to re-establish a link in order to maintain that redundancy. Technically, these little exchanges are known as BPDUs, or Bridge Protocol Data Units.

Now, the Spanning Tree Protocol works just fine, but one of the issues with Spanning Tree is that it can take anywhere from half a minute to a full minute for the network to fully converge, or for all devices to know the status of the network. So in order to improve on this, there are some refinements that Cisco has introduced, such as PortFast and UplinkFast, and these allow your Spanning Tree Protocol to converge even faster.
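One of the things those BPDU exchanges accomplish is electing a single root bridge for the tree: every bridge advertises its bridge ID, a priority plus its MAC address, and the numerically lowest ID wins. A minimal sketch, with made-up priorities and MAC addresses:

```python
def elect_root(bridges):
    """Sketch of 802.1d root election: the bridge with the lowest
    bridge ID (priority first, then MAC address as tie-breaker)
    becomes the root of the spanning tree."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

bridges = [
    {"name": "SW1", "priority": 32768, "mac": "00:00:0c:aa:aa:aa"},
    {"name": "SW2", "priority": 4096,  "mac": "00:00:0c:bb:bb:bb"},
    {"name": "SW3", "priority": 32768, "mac": "00:00:0c:01:02:03"},
]
print(elect_root(bridges)["name"])   # SW2: lowest priority wins
```

This is also why network managers lower the priority on the switch they want to be root, rather than leaving the election to chance MAC address ordering.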

Multicasting

Now, another issue that we have in Layer 2 networks, or switched networks, is control of our multicast traffic. There are a lot of new applications emerging today, such as video based applications, desktop conferencing, and so on, that take advantage of multicasting.

But without special controls in the network, multicasting is going to quickly congest our network. Okay, so what we need is to add intelligent multicasting in the network.

Multipoint Communications

Now, again, let's understand that there are a few fundamental ways to achieve multipoint communications, because effectively, that's what we're trying to do with our video based applications, or any of our multimedia type applications that use this mechanism.

One way is to broadcast our traffic. And what that does is effectively send our messages everywhere. The problem, and the obvious downside there, is that not everybody necessarily needs to hear these communications. So while it will get the job done, it's not the most efficient way to get the job done. The better way to do this is by way of multicasting.

And that is, the applications will use a special group address to communicate with only those stations, or group of stations, that need to receive these transmissions. That's what we mean by multipoint communications, and that's going to be the more effective way to do it.

Multicast

This also needs to be done dynamically, because these multicast groups are going to change over time at any given moment. So, in order to do this, we need some special protocols in our network. First of all, in the Wide Area, we need something known as multicast routing protocols. Certainly, in our Wide Area we already have routing protocols such as RIP, the Routing Information Protocol, or OSPF, or IGRP, for example; but what we need to do is add multicast extensions so that these routing protocols understand how to handle the needs of our multicast groups.

An example of a multicast routing protocol would be PIM, or Protocol Independent Multicast. This is simply an extension of the existing routing protocols in our network. Another protocol we have is known as IGMP, or the Internet Group Management Protocol. And IGMP simply allows us to identify the group membership of the IP stations that want to participate in a given multicast conversation.

So, as you can see indicated by the red traffic in our network, we have channel #1 being multicast through the network. And by way of IGMP, the workstations can signal back to the original video servers that they want to participate. And once the multicast routing protocols are added, we can efficiently deliver our traffic in the Wide Area. Now, another challenge that we have is that once our traffic gets to the Local Area Network, or the switch, by default that traffic is going to be flooded to all stations in the network.

End-to-End Multicast

And that's because IGMP works at Layer 3, but our LAN switch works at Layer 2. So the switch has no concept of our Layer 3 group membership. What we need to do is add some intelligence to our switch. The intelligence that we're going to add is a protocol such as CGMP, for example, or Cisco Group Management Protocol. Another similar technology that we could add is called IGMP Snooping, which has the same effect in the Local Area Network.

And that effect is, as you see in the diagram, to limit our multicast traffic to only those stations that want to participate in the group. So now, as you can see, the red channel, or channel number 1, is delivered to only station #1 and station #3.

Station #2 does not receive this content because it doesn't wish to participate. So the advantage of adding protocols such as IGMP, CGMP, IGMP Snooping, and Protocol Independent Multicast into our network is that they achieve bandwidth savings for our multicast traffic.
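The behavior that CGMP or IGMP Snooping adds to the switch can be sketched as a group-to-ports table. This is an illustrative model, not Cisco's implementation: the switch notes which ports have joined each group, constrains known groups to their member ports, and falls back to flooding for groups it hasn't snooped:

```python
class IgmpSnoopingTable:
    """Sketch of what IGMP Snooping/CGMP adds at Layer 2: the switch
    watches group joins and forwards each multicast group only to
    the ports that joined, instead of flooding every port."""
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.groups = {}                       # group address -> set of ports

    def join(self, group: str, port: int) -> None:
        self.groups.setdefault(group, set()).add(port)

    def egress_ports(self, group: str, in_port: int):
        members = self.groups.get(group)
        if members is None:                    # no snooped joins: flood
            return sorted(p for p in range(self.num_ports) if p != in_port)
        return sorted(p for p in members if p != in_port)

sw = IgmpSnoopingTable(4)
sw.join("224.1.1.1", 1)                        # station #1 joins channel 1
sw.join("224.1.1.1", 3)                        # station #3 joins channel 1
print(sw.egress_ports("224.1.1.1", in_port=0)) # [1, 3]: station 2 excluded
print(sw.egress_ports("224.9.9.9", in_port=0)) # [1, 2, 3]: unknown, flooded
```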

Why Use Multicast?

What we see indicated in red is that, as we add stations to our multicast group, the amount of bandwidth we need is going to increase in a linear fashion. But by adding multicast controls, you can see the amount of bandwidth is reduced dramatically, because these intelligent multicast controls can make better use of the bandwidth in our network. Adding multicast controls is also going to reduce the cost of networking, because we've reduced the bandwidth that we need, and that's going to provide a dramatic improvement to our Local Area Network.
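The two curves on that chart can be captured in a one-line model. The stream rate and receiver count below are hypothetical numbers chosen just to show the shape: without multicast the server sends one copy per receiver, so bandwidth grows linearly; with multicast it sends a single copy regardless of group size:

```python
def stream_bandwidth_mbps(receivers: int, stream_mbps: float,
                          multicast: bool) -> float:
    """Illustrative model of the chart above: replicated unicast
    bandwidth grows linearly with the number of receivers, while
    multicast stays flat at one stream's worth."""
    return stream_mbps if multicast else receivers * stream_mbps

# A hypothetical 1.5 Mbps video stream to 100 receivers:
print(stream_bandwidth_mbps(100, 1.5, multicast=False))  # 150.0 Mbps
print(stream_bandwidth_mbps(100, 1.5, multicast=True))   # 1.5 Mbps
```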

- Summary -

- Switches provide dedicated access

- Switches eliminate collisions and increase capacity

- Switches support multiple conversations at the same time

- Switches provide intelligence for multicasting