The 7 Layers of the OSI Model

We already had a brief introduction to the OSI model in the previous articles. We know that there are 7 layers in the OSI model, and in this article we are going deeper into those layers.

The 7 Layers of the OSI Model

The article will be long, but as long as you have an interest in CCNA and networking, it shouldn’t feel monotonous.

As we already know, these are the 7 layers of the OSI model:

  • The Application Layer (Layer 7)
  • The Presentation Layer (Layer 6)
  • The Session Layer (Layer 5)
  • The Transport Layer (Layer 4)
  • The Network Layer (Layer 3)
  • The Data Link Layer (Layer 2)
  • The Physical Layer (Layer 1)

We’ll look at each one of the above separately.

The Application Layer

The application layer of the OSI model marks the spot where users actually communicate with the computer. This layer only comes into play when it’s apparent that access to the network will be needed soon.

Take the case of Internet Explorer (IE). You could uninstall every trace of networking components from a system, such as TCP/IP, NIC cards, and so on, and you could still use IE to view a local HTML document without any problem.

But things would definitely get messy if you tried to do something like view an HTML document that must be retrieved using HTTP or nab a file with FTP or TFTP.

That’s because IE will respond to requests such as those by attempting to access the Application layer. And what’s happening is that the Application layer is acting as an interface between the actual application program (which isn’t at all a part of the layered structure) and the next layer down, providing ways for the application to send information down through the protocol stack.

In other words, IE doesn’t truly reside within the Application layer; it interfaces with Application layer protocols when it needs to deal with remote resources.

The application layer is also responsible for identifying and establishing the availability of the intended communication partner and determining whether sufficient resources for the intended communication exist.

These tasks are important because computer applications sometimes require more than just desktop resources. Often, they’ll unite communicating components from more than one network application.

Prime examples are file transfers and email, as well as enabling remote access, network management activities, client/server processes, and information location.

Many network applications provide services for communication over enterprise networks, but for present and future internetworking, the need is fast developing to reach beyond the limits of the current physical network.

The Presentation Layer

The presentation layer gets its name from its purpose; it presents data to the Application layer and is responsible for data translation and code formatting.

This layer is essentially a translator and provides coding and conversion functions. A successful data-transfer technique is to adapt the data into a standard format before transmission.

Computers are configured to receive this generically formatted data and then convert the data back into its native format for actual reading (for example, EBCDIC to ASCII).

By providing translation services, the presentation layer ensures that data transferred from the Application layer of one system can be read by the Application layer of another one.

The OSI has protocol standards that define how standard data should be formatted. Tasks like data compression, decompression, encryption, and decryption are associated with this layer. Some presentation layer standards are involved in multimedia operations too.
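To make this more concrete, here is a minimal Python sketch of the kind of translation and compression work described above: text stored in EBCDIC (code page 500) is translated into a common format, compressed for transmission, and converted back into the native format on the receiving side. It is only an illustration of the idea, not how any particular Presentation layer protocol actually works.

```python
import zlib

# Sender side: data stored in the sender's native format (EBCDIC, code page 500)
# is translated into a common character set and compressed before transmission.
ebcdic_bytes = "HELLO, OSI".encode("cp500")     # native EBCDIC representation
text = ebcdic_bytes.decode("cp500")             # translate to a common format
payload = zlib.compress(text.encode("utf-8"))   # compress for transmission

# Receiver side: decompress, then convert back into the receiver's native format.
received_text = zlib.decompress(payload).decode("utf-8")
local_bytes = received_text.encode("cp500")     # back to EBCDIC for local reading

assert local_bytes == ebcdic_bytes
print(received_text)                            # HELLO, OSI
```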

The Session Layer

The session layer is responsible for setting up, managing and then tearing down sessions between presentation layer entities. This layer also provides dialog control between devices or nodes.

It coordinates communication between systems and serves to organize their communication by offering three different modes: simplex, half-duplex, and full-duplex. To sum up, the session layer basically keeps different applications’ data separate from other applications’ data.

These three layers are the upper layers that define how the applications within the end stations communicate with each other and with users.

Below are the four lower layers which define how data is transmitted end to end.

The Transport Layer

The Transport layer segments and reassembles data into a single data stream. Services located in the Transport layer segment and reassemble data from upper-layer applications and unite it into the same data stream.

They provide end-to-end data transport services and can establish a logical connection between the sending host and destination host on an internetwork.

Some of you are probably familiar with TCP and UDP already. (But if you’re not, no worries; I’ll tell you all about them.) If so, you know that both work at the Transport layer and that TCP is a reliable service while UDP is not.

This means that application developers have more options because they have a choice between the two protocols when working with TCP/IP protocols.

The transport layer is responsible for providing mechanisms for multiplexing upper-layer applications, establishing sessions, and tearing down virtual circuits. It also hides details of any network-dependent information from the higher layers by providing transparent data transfer.

The transport layer can be connectionless or connection-oriented. However, Cisco is mostly concerned with your understanding of the connection-oriented portion of the Transport layer. The following sections will provide the skinny on the connection-oriented (reliable) protocol of the Transport layer.

Flow control

Data integrity is ensured at the Transport layer by maintaining flow control and by allowing users to request reliable data transport between systems.

Flow control prevents a sending host on one side of the connection from overflowing the buffers in the receiving host, an event that can result in lost data.

Reliable data transport employs a connection-oriented communications session between systems, and the protocols involved ensure that the following will be achieved:

  • The segments delivered are acknowledged back to the sender upon their reception.
  • Any segments not acknowledged are retransmitted.
  • Segments are sequenced back into their proper order upon arrival at their destination.
  • A manageable data flow is maintained in order to avoid congestion, overloading, and data loss.

Connection-oriented communication

In reliable transport operation, a device that wants to transmit sets up a connection-oriented communication with a remote device by creating a session.

The transmitting device first establishes a connection-oriented session with its peer system, which is called a call setup or a three-way handshake. Data is then transferred; when the transfer is finished, a call termination takes place to tear down the virtual circuit.

The picture below depicts a typical reliable session taking place between sending and receiving systems.

Looking at it, you can see that both hosts’ application programs begin by notifying their individual operating systems that a connection is about to be initiated.

The two operating systems communicate by sending messages over the network confirming that the transfer is approved and that both sides are ready for it to take place.

After all of this required synchronization takes place, a connection is fully established and the data transfer begins (this virtual circuit setup is called overhead!).

[Figure 1.9: Establishing a connection-oriented session]

While the information is being transferred between hosts, the two machines periodically check in with each other, communicating through their protocol software to ensure that all is going well and that the data is being received properly.

Let me sum up the steps in the connection-oriented session-the three-way handshake pictured in figure 1.9.

  • The first “connection agreement” segment is a request for synchronization.
  • The second and third segments acknowledge the request and establish connection parameters (the rules) between hosts. These segments request that the receiver’s sequencing be synchronized here as well so that a bidirectional connection is formed.
  • The final segment is also an acknowledgment. It notifies the destination host that the connection agreement has been accepted and that the actual connection has been established. Data transfer can now begin.
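
To make those three steps easier to picture, here is a toy Python simulation of the exchange. It is not real TCP (the operating system’s TCP/IP stack handles the actual handshake); it is just a sketch that prints the three segments with made-up sequence numbers.

```python
import random

def three_way_handshake():
    # Step 1: the sender requests synchronization (SYN) with an initial sequence number.
    client_isn = random.randint(0, 2**32 - 1)
    print(f"Host A -> Host B: SYN, seq={client_isn}")

    # Step 2: the receiver acknowledges the request and sends its own synchronization
    # request, establishing the connection parameters (SYN-ACK).
    server_isn = random.randint(0, 2**32 - 1)
    print(f"Host B -> Host A: SYN-ACK, seq={server_isn}, ack={client_isn + 1}")

    # Step 3: the final acknowledgment tells the destination host that the
    # connection agreement has been accepted; data transfer can now begin.
    print(f"Host A -> Host B: ACK, ack={server_isn + 1}")
    print("Connection established; data transfer can begin.")

three_way_handshake()
```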

It sounds pretty simple, but things don’t always flow so smoothly. Sometimes during a transfer, congestion can occur because a high-speed computer is generating data traffic a lot faster than the network can transfer it.

A bunch of computers simultaneously sending datagrams through a single gateway or destination can also botch things up nicely. In the latter case, a gateway or destination can become congested even though no single source caused the problem.

In either case, the problem is basically akin to a freeway bottleneck: too much traffic for too small a capacity. It’s not usually one car that’s the problem; there are simply too many cars on that freeway.

Okay, so what happens when a machine receives a flood of datagrams too quickly for it to process? It stores them in a memory section called a buffer. But this buffering action can solve the problem only if the datagrams are part of a small burst. If not, and the datagram deluge continues, a device’s memory will eventually be exhausted, its flood capacity will be exceeded, and it will react by discarding any additional datagrams that arrive.

No huge worries here, though. Because of the transport function, network flood control systems really work quite well. Instead of dumping resources and allowing data to be lost, the transport can issue a “not ready” indicator to the sender, or source, of the flood as shown in the picture below. This mechanism works kind of like a stoplight, signaling the sending device to stop transmitting segment traffic to its overwhelmed peer.

After the peer receiver processes the segments already in its memory reservoir (its buffer), it sends out a “ready” transport indicator. When the machine waiting to transmit the rest of its datagrams receives this “go” indicator, it resumes its transmission.

[Figure: Transmitting segments with flow control (“not ready” / “ready” indicators)]
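
Here is a minimal Python sketch of that stop/go behavior, assuming a hypothetical receiver buffer that holds three segments. It is not a real transport protocol; it just shows the sender pausing on a “not ready” indicator and resuming on “ready”.

```python
from collections import deque

BUFFER_CAPACITY = 3                  # hypothetical buffer size, in segments

class Receiver:
    def __init__(self):
        self.buffer = deque()

    def accept(self, segment):
        """Buffer an incoming segment and tell the sender whether to keep going."""
        self.buffer.append(segment)
        return "NOT READY" if len(self.buffer) >= BUFFER_CAPACITY else "READY"

    def drain(self):
        """Process everything in the buffer, then signal the sender to resume."""
        while self.buffer:
            print(f"Receiver processed {self.buffer.popleft()}")
        return "READY"

receiver = Receiver()
for segment in (f"segment {n}" for n in range(1, 7)):
    status = receiver.accept(segment)
    print(f"Sender sent {segment}; receiver says {status}")
    if status == "NOT READY":
        print("Sender pauses...")                        # stoplight: stop transmitting
        print(f"Receiver says {receiver.drain()}; sender resumes")
```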

In fundamental, reliable, connection-oriented data transfer, datagrams are delivered to the receiving host in exactly the same sequence they’re transmitted, and the transmission fails if this order is breached! If any data segments are lost, duplicated, or damaged along the way, the transfer fails. This problem is solved by having the receiving host acknowledge that it has received each and every data segment.

A service is considered connection-oriented if it has the following characteristics:

  • A virtual circuit is set up (for example, a three-way handshake).
  • It uses sequencing.
  • It uses acknowledgments.
  • It uses flow control.

Windowing

Ideally, data throughput happens quickly and efficiently. And as you can imagine, it would be slow if the transmitting machine had to wait for an acknowledgment after sending each segment. But because there’s time available after the sender transmits the data segment and before it finishes processing acknowledgments from the receiving machine, the sender uses the break as an opportunity to transmit more data.

The quantity of data segments (measured in bytes) that the transmitting machine is allowed to send without receiving an acknowledgment for them is called a window.

So the size of the window controls how much information is transferred from one end to the other. While some protocols quantify information by observing the number of packets, TCP/IP measures it by counting the number of bytes.

As you can see in the picture below, there are two window sizes: one set to 1 and one set to 3.

[Figure: Windowing with window sizes of 1 and 3]

When you’ve configured a window size of 1, the sending machine waits for an acknowledgment for each data segment it transmits before transmitting another. If you’ve configured a window of size 3, it’s allowed to transmit three data segments before an acknowledgment is received.

In our simplified example, both the sending and receiving machines are workstations. In reality, this is not done in simple numbers but in the number of bytes that can be sent.
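
The following Python sketch simulates the two window sizes from the example. Keep in mind that real TCP measures its window in bytes and slides it continuously; this toy version simply acknowledges after every one or every three segments so the difference is visible.

```python
def send_with_window(segments, window_size):
    """Transmit segments, pausing for an acknowledgment after each window."""
    i = 0
    while i < len(segments):
        window = segments[i:i + window_size]
        for segment in window:
            print(f"Send {segment}")
        # Waiting for one ACK per window keeps the idea simple here.
        print(f"Wait for ACK of {window[-1]}")
        i += window_size

segments = [f"segment {n}" for n in range(1, 7)]
send_with_window(segments, window_size=1)   # ACK after every segment
send_with_window(segments, window_size=3)   # ACK after every three segments
```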

Acknowledgments

Reliable data delivery ensures the integrity of a stream of data sent from one machine to the other through a fully functional data link. It guarantees that the data won’t be duplicated or lost.

This is achieved through something called positive acknowledgment with retransmission, a technique that requires a receiving machine to communicate with the transmitting source by sending an acknowledgment message back to the sender when it receives data.

The sender documents each segment it sends and waits for this acknowledgment before sending the next segment. When it sends a segment, the transmitting machine starts a timer and retransmits if the timer expires before an acknowledgment is returned from the receiving end.

In the following picture, the sending machine transmits segments 1, 2, and 3. The receiving node acknowledges it has received them by requesting segment 4. When it receives the acknowledgment, the sender then transmits segments 4, 5, and 6. If segment 5 doesn’t make it to the destination, the receiving node acknowledges that event with a request for the segment to be resent. The sending machine will then resend the lost segment and wait for an acknowledgment, which it must receive in order to move on to the transmission of segment 7.

[Figure: Transport layer acknowledgments and retransmission]
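
Below is a small, hypothetical Python simulation of positive acknowledgment with retransmission in the spirit of that scenario: random losses stand in for a timer expiring, the acknowledgment names the next segment the receiver expects, and the sender retransmits from that point. It is a sketch, not a real protocol implementation.

```python
import random

def transfer(total_segments, loss_rate=0.3, window=3):
    """Toy positive-acknowledgment-with-retransmission loop."""
    next_expected = 1                             # next segment the receiver expects
    while next_expected <= total_segments:
        # The sender transmits a window of segments starting at next_expected.
        last = min(next_expected + window - 1, total_segments)
        for seq in range(next_expected, last + 1):
            if random.random() < loss_rate:
                print(f"Send segment {seq} (lost in transit)")
                break                             # the rest of this window is skipped here
            print(f"Send segment {seq}")
            next_expected = seq + 1
        # The acknowledgment requests the next segment the receiver expects,
        # so a lost segment gets retransmitted on the next pass.
        print(f"ACK: receiver requests segment {next_expected}")

random.seed(1)
transfer(total_segments=6)
```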

The Network Layer

The Network layer (also called layer 3) manages device addressing, tracks the location of devices on the network, and determines the best way to move data, which means that the Network layer must transport traffic between devices that aren’t locally attached. Routers (layer 3 devices) are specified at the Network layer and provide the routing services within an internetwork.

It happens like this: first, when a packet is received on a router interface, the destination IP address is checked. If the packet isn’t destined for that particular router, the router will look up the destination network address in its routing table.

Once the router chooses an exit interface, the packet will be sent to that interface to be framed and sent out on the local network. If the router can’t find an entry for the packet’s destination network in the routing table, the router drops the packet.
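
As a rough illustration of that lookup (a sketch using Python’s ipaddress module and a made-up routing table, not actual Cisco IOS behavior), a router-style lookup picks the most specific matching network entry and drops the packet when nothing matches:

```python
import ipaddress

# Hypothetical routing table: (destination network, exit interface, metric)
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),     "GigabitEthernet0/0", 2),
    (ipaddress.ip_network("10.1.1.0/24"),    "GigabitEthernet0/1", 1),
    (ipaddress.ip_network("192.168.0.0/16"), "Serial0/0",          3),
]

def route(destination_ip):
    """Return the most specific (longest-prefix) matching entry, or None."""
    dest = ipaddress.ip_address(destination_ip)
    matches = [entry for entry in routing_table if dest in entry[0]]
    if not matches:
        return None                                  # no entry: drop the packet
    return max(matches, key=lambda entry: entry[0].prefixlen)

print(route("10.1.1.55"))    # most specific match: the /24 entry via GigabitEthernet0/1
print(route("172.16.0.1"))   # None: no matching network, so the packet is dropped
```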

Two types of packets are used at the Network layer: data and route updates.

Data packets

Data packets are used to transport user data through the internetwork. Protocols used to support data traffic are called routed protocols; examples of routed protocols are IP and IPv6. You’ll learn about IP addressing later.

Route update packets

Route update packets are used to update neighboring routers about the networks connected to all routers within the internetwork. Protocols that send route update packets are called routing protocols; examples of some common ones are RIP, RIPv2, EIGRP, and OSPF. Route update packets are used to help build and maintain routing tables on each router.

In figure 1.13, I’ve given you an example of a routing table. The routing table used in a router includes the following information:

Network addresses

Protocol-specific network addresses. A router must maintain a routing table for individual routing protocols because each routing protocol keeps track of a network with a different addressing scheme (IP, IPv6, and IPX, for example). Think of it as a street sign in each of the different languages spoken by the residents that live on a particular street. So, if there were American, Spanish, and French folks on a street named cat, the sign would read cat/Gato/Chat.

[Figure 1.13: A routing table in a router]

Interface

The exit interface a packet will take when destined for a specific network.

Metric

The distance to the remote network. Different routing protocols use different ways of computing this distance. I’m going to cover routing protocols, but for now, know that some routing protocols (namely RIP) use something called a hop count (the number of routers a packet passes through en route to a remote network), while others use bandwidth, delay of the line, or even tick count (1/18 of a second).

And as I mentioned earlier, routers break up broadcast domains, which means that by default, broadcasts aren’t forwarded through a router. Do you remember why this is a good thing? Routers also break up collision domains, but you can also do that using layer 2 (Data link layer) switches. Because each interface in a router represents a separate network, it must be assigned unique network identification numbers, and each host on the network connected to that router must use the same network number. The following picture shows how a router works in an internetwork.

[Figure: How a router works in an internetwork]

Here are some points about routers that you should really commit to memory:

  • Routers, by default, will not forward any broadcast or multicast packets.
  • Routers use the logical address in a Network layer header to determine the next-hop router to forward the packet to.
  • Routers can use access lists, created by an administrator, to control security on the types of packets that are allowed to enter or exit an interface.
  • Routers can provide layer 2 bridging functions if needed and can simultaneously route through the same interface.
  • Layer 3 devices (routers in this case) provide connections between virtual LANs (VLANs).
  • Routers can provide quality of service (QoS) for specific types of network traffic.

The Data Link Layer

The Data link layer provides the physical transmission of the data and handles error notification, network topology, and flow control. This means that the Data link layer will ensure that messages are delivered to the proper device on a LAN using hardware addresses and will translate messages from the Network layer into bits for the Physical layer to transmit.

The data link layer formats the message into pieces, each called a data frame, and adds a customized header containing the hardware destination and source addresses.

This added information forms a sort of capsule that surrounds the original message in much the same way that engines, navigational devices, and other tools were attached to the lunar modules of the Apollo project.

These various pieces of equipment were useful only during certain stages of space flight and were stripped off the module and discarded when their designated stage was complete. Data traveling through networks is similar.

The following demonstration shows the Data link layer with the Ethernet and IEEE specifications. When you check it out, notice that the IEEE 802.2 standard is used in conjunction with and adds functionality to the other IEEE standards.

[Figure: The Data Link layer with the Ethernet and IEEE specifications]

It’s important for you to understand that routers, which work at the Network layer, don’t care at all about where a particular host is located. They’re only concerned about where networks are located and the best way to reach them including remote ones.

Routers are totally obsessive when it comes to networks. And for once, this is a good thing! It’s the Data link layer that’s responsible for the actual unique identification of each device that resides on a local network.

For a host to send packets to individual hosts on a local network as well as transmit packets between routers, the Data link layer uses hardware addressing.

Each time a packet is sent between routers, it’s framed with control information at the Data link layer, but that information is stripped off at the receiving router and only the original packet is left completely intact.

This framing of the packet continues for each hop until the packet is finally delivered to the correct receiving host. It’s really important to understand that the packet itself is never altered along the route; it’s only encapsulated with the type of control information required for it to be properly passed on to the different media types.
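
A simplified Python sketch of this encapsulation (a toy Ethernet-style header with made-up MAC addresses, ignoring the preamble and frame check sequence) shows the header being added at one hop and stripped at the next while the packet itself stays intact:

```python
import struct

def mac_to_bytes(mac):
    """Convert a colon-separated MAC address string into 6 raw bytes."""
    return bytes(int(octet, 16) for octet in mac.split(":"))

def build_frame(dst_mac, src_mac, ethertype, packet):
    """Encapsulate a packet with a simplified 14-byte Ethernet-style header."""
    header = mac_to_bytes(dst_mac) + mac_to_bytes(src_mac) + struct.pack("!H", ethertype)
    return header + packet

def strip_frame(frame):
    """What the receiving router does: remove the header, keep the packet intact."""
    return frame[14:]

packet = b"IP packet handed down from the Network layer"
frame = build_frame("aa:bb:cc:dd:ee:ff", "11:22:33:44:55:66", 0x0800, packet)

assert strip_frame(frame) == packet   # the packet itself is never altered
print(frame.hex())
```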

The IEEE Ethernet Data link layer has two sublayers:

Media Access Control (MAC) 802.3

It defines how packets are placed on the media. Contention media access is “first come/first served” access where everyone shares the same bandwidth, hence the name. Physical addressing is defined here, as well as logical topologies. What’s a logical topology? It’s the signal path through a physical topology. Line discipline, error notification (not correction), ordered delivery of frames, and optional flow control can also be used at this sublayer.

Logical Link Control (LLC) 802.2

It is responsible for identifying Network layer protocols and then encapsulating them. An LLC header tells the Data Link layer what to do with a packet once a frame is received. It works like this: a host will receive a frame and look in the LLC header to find out where the packet is destined (say, the IP protocol at the Network layer). The LLC can also provide flow control and sequencing of control bits.

The switches and bridges I talked about near the beginning of the article both work at the Data link layer and filter the network using hardware (MAC) addresses. We will look at these in the following section.

Switches and bridges at the Data Link Layer

Layer 2 switching is considered hardware-based bridging because it uses specialized hardware called an application-specific integrated circuit (ASIC). ASICs can run up to gigabit speed with very low latency rates.

Bridges and switches read each frame as it passes through the network. The layer 2 device then puts the source hardware address in a filter table and keeps track of which port the frame was received on. This information (logged in the bridge’s or switch’s filter table) is what helps the machine determine the location of the specific sending device. The following picture shows a switch in an internetwork.

[Figure: A switch in an internetwork]

The real estate business is all about location, location, location, and it’s the same way for both layer 2 and layer 3 devices. Though both need to be able to negotiate the network, it’s crucial to remember that they’re concerned with very different parts of it. Primarily, layer 3 machines (such as routers) need to locate specific networks, whereas layer 2 machines (switches and bridges) need to eventually locate specific devices. So, networks are to routers as individual devices are to switches and bridges. And routing tables that “map” the internetwork are for routers as filter tables that “map” individual devices are for switches and bridges.

After a filter table is built on the layer 2 device, it will forward frames only to the segment where the destination hardware address is located. If the destination device is on the same segment as the frame, the layer 2 device will block the frame from going to any other segment. If the destination is on a different segment, the frame can be transmitted only to that segment. This is called transparent bridging.

When a switch interface receives a frame with a destination hardware address that isn’t found in the device’s filter table, it will forward the frame to all connected segments. If the unknown device that was sent the “mystery frame” replies to this forwarding action, the switch updates its filter table regarding that device’s location. But in the event that the destination address of the transmitted frame is a broadcast address, the switch will forward all broadcasts to every connected segment by default.
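
A toy Python model of that learn/filter/flood behavior (hypothetical MAC addresses and port numbers, nothing vendor-specific) might look like this:

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningSwitch:
    """Toy layer 2 switch: learn source addresses, then filter or flood frames."""

    def __init__(self, ports):
        self.ports = ports
        self.filter_table = {}                     # MAC address -> port

    def receive(self, src_mac, dst_mac, in_port):
        self.filter_table[src_mac] = in_port       # learn where the sender lives
        if dst_mac == BROADCAST or dst_mac not in self.filter_table:
            # Broadcast or unknown destination: forward out every other port.
            return [p for p in self.ports if p != in_port]
        out_port = self.filter_table[dst_mac]
        # Destination on the same segment as the sender: filter the frame.
        return [] if out_port == in_port else [out_port]

switch = LearningSwitch(ports=[1, 2, 3, 4])
print(switch.receive("aa:aa", "bb:bb", in_port=1))    # unknown -> flood: [2, 3, 4]
print(switch.receive("bb:bb", "aa:aa", in_port=2))    # learned -> forward: [1]
print(switch.receive("cc:cc", BROADCAST, in_port=3))  # broadcast -> flood: [1, 2, 4]
```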

All devices that the broadcast is forwarded to are considered to be in the same broadcast domain. This can be a problem; layer 2 devices propagate layer 2 broadcast storms that choke performance, and the only way to stop a broadcast storm from propagating through an internetwork is with a layer 3 device: a router.

The biggest benefit of using switches instead of hubs in your internetwork is that each switch port is actually its own collision domain. (Conversely, a hub creates one large collision domain.) But even armed with a switch, you still can’t break up broadcast domains. Neither switches nor bridges will do that. They’ll typically simply forward all broadcasts instead.

Another benefit of LAN switching over hub-centered implementations is that each device on every segment plugged into a switch can transmit simultaneously; at least, they can as long as there is only one host on each port and a hub isn’t plugged into a switch port. As you might have guessed, hubs allow only one device per network segment to communicate at a time.

The Physical Layer

Finally arriving at the bottom, we find that the Physical layer does two things: it sends bits and receives bits.

Bits come only in values of 1 or 0, a sort of Morse code with numerical values. The physical layer communicates directly with the various types of actual communication media.

Different kinds of media represent these bit values in different ways. Some use audio tones, while others employ state transitions: changes in voltage from high to low and low to high.

Specific protocols are needed for each type of media to describe the proper bit patterns to be used, how data is encoded into media signals and the various qualities of the physical media’s attachment interface.
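
As a rough illustration, the short Python sketch below breaks a byte into the individual bits the Physical layer actually sends and maps them onto high/low signal levels, in the spirit of a simple NRZ-style encoding. Real encodings used on actual media (Manchester, 4B/5B, and so on) are more involved.

```python
def to_bits(data: bytes):
    """Break a payload into the 1s and 0s the Physical layer transmits."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def nrz_encode(bits, high=+1, low=-1):
    """Map each bit onto a signal level (a simple NRZ-style encoding)."""
    return [high if bit else low for bit in bits]

bits = to_bits(b"A")       # [0, 1, 0, 0, 0, 0, 0, 1]
print(bits)
print(nrz_encode(bits))    # [-1, 1, -1, -1, -1, -1, -1, 1]
```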

The Physical layer specifies the electrical, mechanical, procedural, and functional requirements for activating, maintaining, and deactivating a physical link between end systems.

This layer is also where you identify the interface between the data terminal equipment (DTE) and the data communication equipment (DCE). (some old phone company employees still call DCE data circuit terminating equipment). The DCE is usually located at the service provider, while the DTE is the attached device. The services available to the DTE are most often accessed via a modem or channel service unit/data service unit (CSU/DSU).

The Physical layer’s connectors and different physical topologies are defined by the OSI as standards, allowing disparate systems to communicate. The CCNA objectives are only interested in the IEEE Ethernet standards.

Hubs at the Physical layer

A hub is really a multiple port repeater. A repeater receives a digital signal and re-amplifies or regenerates that signal and then forwards the digital signal out all active ports without looking at any data.

An active hub does the same thing. Any digital signal received from a segment on a hub port is regenerated or re-amplified and transmitted out all ports on the hub. This means all devices plugged into a hub are in the same collision domain as well as in the same broadcast domain. The following picture shows a hub in a network.

[Figure: A hub in a network]

Hubs, like repeaters, don’t examine any of the traffic as it enters and is then transmitted out to the other parts of the physical media.

Every device connected to the hub, or hubs, must listen if a device transmits. A hub creates a physical star topology, where the hub is the central device and cables extend out from it in all directions.

Visually, the design really does resemble a star, whereas Ethernet Networking runs a logical bus topology, meaning that the signal has to run through the network from end to end.

With this, we have completed the article describing the details of the 7 layers of the OSI model in computer networks, part of the CCNA course.

In the next article, we’ll learn about Ethernet Networking.

We hope you like the post and learned something valuable. Let us know in the comment section below.