By Tom Kovanic, business development manager, Panduit Corp.
Nobody likes a bad packet.
IT network managers dislike packet loss because it steals valuable bandwidth, reducing a link's available throughput. OT network managers dislike corrupted packets for a different reason: they add latency, which can disrupt real-time applications. Why do IT and OT network managers look at corrupted packets differently? And what impact does packet loss have on an organization's IT/OT network infrastructure, and on achieving its Industrial Internet of Things goals?
Before we answer these questions, we need to look at how packets turn bad.
A packet becomes corrupted when it encounters a bit error as it moves from one end of the network to the other. Bit errors almost always occur in the lowest layer of the protocol stack: the physical layer. The physical layer's job is to move information from one end of the network to the other, typically represented as a stream of 0s and 1s. The physical layer assigns no meaning to that stream; the upper layers handle that task.
Outside interference such as lightning or other electrical noise can cause a bit error when the physical layer uses copper cabling or a wireless connection. In optical networks, a bit error can occur when an optical module degrades and has difficulty distinguishing the 1s from the 0s. Other causes include improperly terminated cabling, dirty fiber optic connectors, and water penetrating the cable.
Detecting Bit Errors
The physical layer has no idea if a bit error has occurred. At the physical layer, the bits have no meaning. The physical layer presents the stream of 0s and 1s, including any bit errors, to the data link layer.
The data link layer assigns meaning to the stream of bits. The sender runs each outgoing packet through an algorithm, typically a cyclic redundancy check (CRC), which produces a number that can later be used to determine whether a bit error has occurred. This number is appended to the data packet and sent to the receiver.
The receiver at the other end of the link runs the packet through the same algorithm, producing a locally generated number, and compares it with the number transmitted as part of the packet. If the two match, no errors occurred during transmission. If they do not, a bit error has corrupted the packet.
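The append-then-compare scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only, using Python's `zlib.crc32` as the checksum algorithm (Ethernet's frame check sequence is a CRC-32, though real hardware computes it differently); the function names are hypothetical.

```python
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Sender side: compute a CRC-32 over the payload and append it."""
    fcs = zlib.crc32(payload)
    return payload + fcs.to_bytes(4, "big")

def check_fcs(frame: bytes) -> bool:
    """Receiver side: recompute the CRC-32 locally and compare it
    with the number carried in the frame's trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

frame = append_fcs(b"sensor reading: 42")
print(check_fcs(frame))       # intact frame: the numbers match

# Flip a single bit in the payload to simulate a bit error.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print(check_fcs(corrupted))   # the mismatch exposes the bit error
```

A CRC-32 is guaranteed to catch any single-bit error, which is why flipping even one bit makes the receiver's locally computed number disagree with the transmitted one.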
What does the data link layer do when it detects a corrupted packet? In most protocols, the switch discards the corrupted packet, and the receiver asks the sender to resend it. That is how a single bit error can waste thousands of bits, robbing a network of valuable throughput and adding to its latency.
The Impact of Corrupted Packets
A corrupted packet hurts a network in two ways: it reduces throughput and adds latency. Depending on which side of the IT/OT convergence fence you sit on, you will cringe at one or the other. A corrupted packet reduces throughput because the packet must be sent twice. And because the higher layers of the protocol stack can take no action until a correct packet arrives, corrupted packets also add to a network's latency.
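A back-of-the-envelope calculation shows how even a low bit error rate eats into throughput once every corrupted packet must be resent. The numbers below (a 1,500-byte packet and a bit error rate of 1e-8) are illustrative assumptions, not figures from the article.

```python
# Illustrative assumptions: a 1,500-byte packet on a link with
# a bit error rate (BER) of 1 in 100 million bits.
packet_bits = 1500 * 8
ber = 1e-8

# Probability that at least one bit in the packet gets flipped.
p_corrupt = 1 - (1 - ber) ** packet_bits

# Every corrupted packet is discarded and sent again, so useful
# throughput shrinks by roughly the corruption probability.
effective_fraction = 1 - p_corrupt

print(f"P(packet corrupted) = {p_corrupt:.4%}")
print(f"Effective throughput = {effective_fraction:.4%} of link capacity")
```

The loss looks small at these rates, but it scales with both packet volume and link speed, and each retransmission also adds at least one round trip of latency before the upper layers can proceed.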
IT managers are more concerned about throughput than latency. The enterprise network's latency is already low enough for their applications, but the appetite for more throughput is insatiable.
OT network managers look at packet loss differently. On the factory floor, a network's latency matters more than its bandwidth or throughput. For example, when a sensor on the factory floor sends a packet requesting an action, it needs the response within milliseconds. A corrupted packet cannot deliver the request, and the retransmission delays the decision on what action to take. That delay can be costly.
Networking infrastructure plays an important role in minimizing packet loss, especially as network speeds move beyond 10 Gb/s. Here are two considerations for how your networking infrastructure may corrupt packets.
* The first area to consider is proper installation and maintenance of the network. Improper installation of connectors could unbalance twisted pair cabling, allowing electromagnetic interference (EMI) to impact link performance. Cleaning the fiber optic connectors is always important, but even more so at higher network speeds. Proper grounding and bonding eliminate ground loops between different pieces of networking equipment.
* Another consideration is the media type: copper or fiber. Consider CAT6A unshielded twisted pair copper cabling for new installations; it provides the best performance for most applications without the added expense of shielded cable. For harsh environments where EMI is present, you may need shielded copper cable or fiber cabling, which is immune to EMI.
Using the Right Infrastructure
IT and OT network managers might disagree about how packet loss impacts their networks, but they can agree that a robust infrastructure can help prevent packet corruption.