The most widely used link-state routing protocol today is an IETF-standard routing protocol called Open Shortest Path First (OSPF). Developed in 1988, this routing protocol was created to overcome the limitations that RIP presented for large-scale networks. This article looks at the fundamentals of OSPF and shows you how to apply them in your configuration.
OSPF is unique among the routing protocols discussed in this article, from its operation all the way down to its configuration.
One of the more intriguing aspects of this routing protocol is that it is completely classless. Because subnet masks accompany the networks in the routing updates, routers are aware of all the individual subnets that exist. The upside of this knowledge is that you do not have to concern yourself much with discontiguous network designs, and you can implement VLSM addressing throughout the topology.
The downside of knowing all the subnets is that the topology database can grow to be quite large, depending on the size of the autonomous system. Not only does this knowledge exhaust the memory resources in your routers, but any change in the links associated with those subnets causes a flood of updates, consequently causing each router to run the SPF algorithm again.
If your autonomous system consists of 1000 routers, each one has to expend the processing resources to flood the update and rerun its algorithmic calculations based on the resulting topology. If the link is continuously going up and then down (known as flapping), each of the 1000 routers continually floods updates and reruns its Dijkstra SPF algorithm, which could exhaust a router's resources and detrimentally affect the router's capacity to function.
OSPF mitigates the need for excessive topology databases and update traffic overhead by segmenting an OSPF autonomous system into smaller areas. As mentioned before, routers that transmit information from one area to another can be configured to summarize the subnets being advertised to other areas.
In this situation, routers in other areas need to keep only summarized entries in their topology table, minimizing the amount of memory required. If a link goes down within the area, only devices within that area need to be notified, because the rest of the OSPF autonomous system is aware of only the summarized route. With that update confined within the area, routers in other areas do not receive it and do not have to flood and recalculate the information it contains.
For instance, the following picture shows an OSPF autonomous system in which you have created three areas. Routers C and E have the responsibility of summarizing the subnets in their areas to the rest of the OSPF autonomous system. This hierarchy in routing ensures that any link failure that occurs in the areas they are summarizing does not go beyond those routers. Because these hierarchical routers sit on the border between two areas, they are called Area Border Routers (ABRs).
Notice that at the center of the OSPF autonomous system in the above picture is Area 0, to which Area 1 and Area 51 are attached. This is not by coincidence, but rather by design. Area 0, also known as the backbone area, is an essential part of an OSPF design because, as the name states, this is the area in which traffic from one area must transit to reach another area. If your network design requires only a single area, that area must be Area 0.
Any area that is created must somehow be connected to the backbone area. Because this area is truly an information highway interconnecting all other areas, it typically consists of robust routers called backbone routers that are either completely inside or have an interface that connects to Area 0. Traffic originating in one area is sent to the backbone ABR for that area, which in turn ultimately passes the traffic to the destination backbone ABR and finally to the destination router inside the remote area. Because these links inside the backbone carry excessive traffic, the backbone routers typically are interconnected with high-speed interlinks such as Fast Ethernet or Gigabit Ethernet.
Recall that the term stub in networking refers to networks that contain a single pathway in or out. Accordingly, a stub area is an area that contains only one pathway in or out of that area. The IETF created this concept of a stub area as a measure to decrease the topology database even further for routers that are inside a stub area.
Again, the ABR routers take the credit for this reduction in OSPF overhead, as shown in the picture below. If the area is configured on all routers inside that area as a stub area, the ABR replaces all the networks it learns from the rest of the OSPF autonomous system with a default route. It also advertises that route to the routers inside the stub area. This makes sense because the routers inside the stub area are using that ABR as a gateway of last resort to leave their area.
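The substitution the ABR performs for a stub area can be sketched in a few lines of Python. This is a simplified illustration of the behavior described above, not actual router logic; the function name and the sample prefixes are hypothetical.

```python
def routes_into_stub(outside_routes: list[str]) -> list[str]:
    """Sketch of an ABR advertising into a stub area: the detailed
    routes learned from the rest of the autonomous system are
    suppressed and replaced by a single default route."""
    return ["0.0.0.0/0"] if outside_routes else []

# Three specific prefixes collapse into one gateway-of-last-resort entry
print(routes_into_stub(["172.16.0.0/16", "192.168.5.0/24", "10.9.8.0/24"]))
# ['0.0.0.0/0']
```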
When OSPF routers run the Dijkstra algorithm to calculate the best route to reach destination subnets, they use the lowest cumulative cost to reach that network. The path cost is calculated by dividing 10^8 (100,000,000, the default reference bandwidth) by the interface bandwidth in bps. The following table lists some of the common path costs associated with their respective bandwidths.
| Interface | Bandwidth | OSPF Cost |
|---|---|---|
| T1 | 1.544 Mbps | 64 |
| E1 | 2.048 Mbps | 48 |
| Ethernet | 10 Mbps | 10 |
| Fast Ethernet | 100 Mbps | 1 |
| Gigabit Ethernet | 1000 Mbps | 1 |
Notice that when you reach and exceed Fast Ethernet speeds of 100 Mbps, the cost is still 1. For that reason, you can configure OSPF to use a reference bandwidth higher than 10^8 so that links of that magnitude receive distinct costs.
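The cost calculation is easy to sketch in Python. The function name is illustrative, but the arithmetic follows the formula above: the reference bandwidth divided by the link bandwidth, rounded down, with a floor of 1.

```python
def ospf_cost(bandwidth_bps: int, reference_bw_bps: int = 100_000_000) -> int:
    """OSPF interface cost: reference bandwidth / link bandwidth.
    The result is truncated and never drops below 1, which is why
    Fast Ethernet and Gigabit Ethernet both default to a cost of 1."""
    return max(1, reference_bw_bps // bandwidth_bps)

print(ospf_cost(1_544_000))        # T1               -> 64
print(ospf_cost(100_000_000))      # Fast Ethernet    -> 1
print(ospf_cost(1_000_000_000))    # Gigabit Ethernet -> 1

# Raising the reference bandwidth restores differentiation on fast links
print(ospf_cost(1_000_000_000, 100_000_000_000))      # Gigabit -> 100
```

Notice that with a higher reference bandwidth, Gigabit Ethernet again costs less than Fast Ethernet, so the SPF calculation can prefer it.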
Unlike most routing protocols, OSPF routers identify each other with something known as a router ID. The router ID is a unique 32-bit number by which the router is known to the OSPF autonomous system. This ID is determined in the following order:
- The highest IP address assigned to an active logical interface at the start of the OSPF process.
- If no logical interface is present, the highest IP address of an active physical interface at the start of the OSPF process.
Note that if there is a logical interface, the IP address overrides any physical IP address for the router ID, even if it is a lower value. What do I mean by a logical interface? Cisco routers let you create logical or virtual interfaces called loopback interfaces. The advantage of using this virtual interface is that, unlike physical interfaces, loopback interfaces cannot go down unless the router is malfunctioning or turned off.
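The selection order above can be sketched in Python. The function name is hypothetical and the addresses are examples, but the logic mirrors the rule: any loopback address, even a numerically lower one, takes precedence over every physical address.

```python
import ipaddress

def choose_router_id(loopback_ips: list[str], physical_ips: list[str]) -> str:
    """Pick the OSPF router ID: the highest active loopback IP wins;
    only when no loopback exists does the highest physical IP apply."""
    candidates = loopback_ips or physical_ips
    if not candidates:
        raise ValueError("no active interface with an IP address")
    return str(max(ipaddress.IPv4Address(ip) for ip in candidates))

# A low loopback address still beats higher physical addresses
print(choose_router_id(["1.1.1.1"], ["192.168.10.1", "10.0.0.1"]))  # 1.1.1.1
# With no loopback, the highest physical address is chosen
print(choose_router_id([], ["192.168.10.1", "10.0.0.1"]))  # 192.168.10.1
```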
The environment in which OSPF operates varies greatly depending on the type of topology to which an OSPF interface is connected. Such operations as hello and dead timers, neighbor discovery methods, and OSPF update overhead reduction are ultimately dictated by the OSPF interface's topology. Here are the three main types of topologies:
- Broadcast multi-access: These topologies denote multiple devices accessing a medium in which broadcasts and multicasts are heard by all devices sharing that medium (such as Ethernet).
- Non-broadcast multi-access: NBMA topologies are similar to broadcast multi-access topologies (multiple devices accessing a medium), except that devices cannot hear each other's broadcasts because the medium is separated by other routers, such as with Frame Relay.
- Point-to-point: A point-to-point link has only two devices on the network link.
To demonstrate the point, consider how OSPF timers operate in different topologies. In broadcast multi-access and point-to-point links, for instance, the hello and dead intervals are 10 and 40 seconds, respectively. Remember, these hello messages are not full routing updates like those of distance vector routing protocols. The hello messages contain minimal information to identify the sending device to its neighbor routers so that their dead timers do not expire and cause a topology change.
Because NBMA network topologies such as Frame Relay typically comprise slower links, the default timers for these topologies are 30 seconds for hello messages and 120 seconds for the dead timers. The hello messages are not sent as often in NBMA topologies to ensure that OSPF routers do not needlessly consume bandwidth on the WAN links.
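The defaults above share a common relationship: the dead interval is four times the hello interval. A small sketch (the dictionary and function names are hypothetical; the values are the defaults just described):

```python
# Default OSPF timers per topology type, in seconds
DEFAULT_TIMERS = {
    "broadcast":      {"hello": 10, "dead": 40},
    "point-to-point": {"hello": 10, "dead": 40},
    "nbma":           {"hello": 30, "dead": 120},
}

def dead_interval(topology: str) -> int:
    """The dead interval defaults to four times the hello interval."""
    return DEFAULT_TIMERS[topology]["hello"] * 4

# The 4x rule reproduces every default dead timer in the table
print(all(dead_interval(t) == DEFAULT_TIMERS[t]["dead"] for t in DEFAULT_TIMERS))
# True
```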
Another significant topology-related function of OSPF is the election of a Designated Router (DR) and a Backup Designated Router (BDR) in broadcast and non-broadcast multi-access topologies. Routers in these topologies undergo these elections to reduce the amount of update overhead that can be incurred when a link state changes.
For example, in the following picture, all the routers in Area 7 are connected to a switch, which indicates a broadcast multi-access topology.
If the link connected to Area 0 on ABR Router B were to fail, the OSPF protocol dictates that it flood the update to all the neighbors in its neighbor table. When Router B sends that update to all the routers in the topology, all devices hear it because they are all connected to the same switch. However, recall that after the other routers receive that update, they have to alert all their neighbors again, which means all the routers connected to the switch. This time, multiple routers send the update and cause excessive traffic on the switched network to devices that are already aware of the update. If you have a large number of routers in the topology, this update traffic can consume quite a bit of unnecessary bandwidth and processor utilization. In point-to-point links, it is not necessary to have a DR or BDR election, because only two routers are on the segment, and there is no threat of excessive update traffic.
When a DR and BDR are elected (in case the DR fails), routing updates are minimized, because the update is sent only to the DR and BDR. The DR then is responsible for updating the rest of the topology. The election is determined by the following:
- Highest interface priority: An arbitrary number you can configure on an interface-by-interface basis. The default is 1. A value of 0 renders the device ineligible for DR and BDR election.
- Highest router ID: In the event of a priority tie, the highest router ID is the tiebreaker.
The following picture shows the election process between several routers in a broadcast multi-access topology. Because Router D’s interface priority is highest, that router becomes the DR for that segment. Router F, with the second-highest priority, becomes BDR if Router D is turned off or crashes. Now if a link fails, the update is sent to Router D, which in turn updates the rest of the topology.
One missing piece of this OSPF puzzle is how the devices manage to send updates to only the DR and BDR routers if they are all connected to the same topology. The answer lies in the manner in which OSPF routers propagate LSAs and LSUs. Rather than broadcast this information as RIPv1 does, OSPF sends updates to two different reserved multicast addresses. The multicast address 224.0.0.6 is reserved for the DR and BDR. When a router needs to send an update in a broadcast or non-broadcast topology, it sends the LSU to 224.0.0.6, which only the DR and BDR process. The DR then sends the LSU to the multicast address 224.0.0.5, which is the address on which all OSPF routers listen for updates and hello messages.
So now when Router B detects the link failure, it multicasts its LSU to 224.0.0.6, which only Router D and Router F process. Because Router D is the DR, it disseminates the update to everyone else in the topology by sending it to 224.0.0.5.
Recall that link-state routing protocols establish who the router’s neighbors are before exchanging updates. This process is actually quite intricate and depends on several factors. To clarify, let’s look at what happens when an OSPF router comes online.
After the OSPF process is started on a router, it sends a hello message out all interfaces that are configured to participate in OSPF. The hello LSAs are sent to the multicast address 224.0.0.5 so that all devices running OSPF process them. Information contained in the hello messages includes the following: router ID, hello/dead intervals, known neighbors, area ID, priority, DR address, BDR address, authentication password (similar to RIPv2), and stub area flags (if the area is configured as a stub area).
A router that receives this hello message adds that neighbor to its neighbor table only if the hello/dead intervals, area ID, authentication password, and stub flag match its configuration.
If these values match, the receiving router sends back a hello message that includes the router ID of the new router in its neighbor list. At that point, the original router adds that router to its neighbor table. This process continues until the router discovers all the neighbors on its links.
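The acceptance check described above can be sketched in Python. The field names and sample values are hypothetical stand-ins for the hello parameters listed earlier; the logic is simply that every one of them must match before a neighbor entry is created.

```python
# Parameters that must match before a hello sender becomes a neighbor
MATCH_FIELDS = ("hello_interval", "dead_interval", "area_id",
                "auth_password", "stub_flag")

def accept_neighbor(local_config: dict, hello: dict) -> bool:
    """Return True only when all required hello parameters match the
    receiving interface's configuration; other fields (router ID,
    priority, known neighbors) are informational and are ignored here."""
    return all(local_config[f] == hello[f] for f in MATCH_FIELDS)

local = {"hello_interval": 10, "dead_interval": 40,
         "area_id": 0, "auth_password": None, "stub_flag": False}

hello = dict(local, router_id="2.2.2.2")           # matching hello
print(accept_neighbor(local, hello))                # True

mismatched = dict(hello, dead_interval=120)         # NBMA-style timers
print(accept_neighbor(local, mismatched))           # False
```

A single mismatched timer is enough to prevent the adjacency, which is why inconsistent hello/dead intervals are a classic cause of OSPF neighbors failing to form.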
It is important to note that no update information has been exchanged at this point. If the topology has a DR elected (indicated in the hello messages the router receives), the router synchronizes its topology table with the DR, because the DR always has the most current information. If the topology is a point-to-point connection, the two routers synchronize with each other over the link. After the topology tables are synchronized, the devices are said to have formed an adjacency. Now that the router has all possible routes in its topology table, it can run the Dijkstra algorithm to calculate the best route to each subnet.