Both approaches are prone to errors if the payload just happens to contain what looks like a valid packet and line corruption causes the surrounding bytes to contain the "start of frame" byte sequence, but that is highly improbable. The former is safer; the latter is more efficient.
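The "what if the payload contains the flag byte?" case is exactly what PPP's byte stuffing (RFC 1662) handles on asynchronous links: any 0x7E (flag) or 0x7D (escape) inside the payload is replaced by 0x7D followed by the original byte XORed with 0x20, so a literal flag byte can never appear inside stuffed data. A minimal Python sketch:

```python
FLAG = 0x7E  # PPP frame delimiter
ESC = 0x7D   # PPP escape byte (RFC 1662 calls it the control escape)

def stuff(payload: bytes) -> bytes:
    """Escape flag/escape bytes so the stuffed payload can never
    contain a literal frame delimiter."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])  # escape, then flip bit 5
        else:
            out.append(b)
    return bytes(out)

def unstuff(data: bytes) -> bytes:
    """Reverse the escaping on the receive side."""
    out = bytearray()
    pending_escape = False
    for b in data:
        if pending_escape:
            out.append(b ^ 0x20)
            pending_escape = False
        elif b == ESC:
            pending_escape = True
        else:
            out.append(b)
    return bytes(out)
```

On the wire the frame is then FLAG + stuff(header + payload + FCS) + FLAG, so the receiver can safely scan for 0x7E: a literal flag byte can only ever be a delimiter. (Synchronous HDLC links achieve the same thing with bit stuffing instead.)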
![PPP frame structure](https://i.imgur.com/qDrIWBY.png)

![PPP Frame Structure (1)](https://slideplayer.com/slide/6079449/18/images/12/PPP+Frame+Structure+(1).jpg)
UPDATE: I found what I was looking for (which isn't strictly what I asked about) by looking up "GFP CRC-based framing": the GFP receiver synchronizes to the GFP frame boundary through a three-state process.
How does the protocol know where the next frame begins? Does it just scan for the next occurrence of "flag" (in the case of PPP)? If so, what happens if the packet payload just so happens to contain "flag" itself? My point is that, whether packet framing or "length" fields are used, it's not clear how to recover from invalid packets where the "length" field could be corrupt or the "framing" bytes could just so happen to be part of the packet payload. Protocols of the Internet protocol suite tend to use checksums. Looking at the data-link-level standards, such as the PPP general frame format or Ethernet, it's not clear what happens if the checksum is invalid.

By far the most popular FCS algorithm is a cyclic redundancy check (CRC), used in Ethernet and other IEEE 802 protocols with 32 bits, in X.25 with 16 or 32 bits, in HDLC with 16 or 32 bits, in Frame Relay with 16 bits, in the Point-to-Point Protocol (PPP) with 16 or 32 bits, and in other data link layer protocols. Refer to Ethernet frame § Frame check sequence for more information. For Ethernet and other IEEE 802 protocols, the standard states that data is sent least significant bit first, while the FCS is sent most significant bit (bit 31) first. An alternative approach is to generate the bit reversal of the FCS so that the reversed FCS can also be sent least significant bit (bit 0) first. The FCS is often transmitted in such a way that the receiver can compute a running sum over the entire frame, together with the trailing FCS, expecting to see a fixed result (such as zero) when it is correct.
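The fixed-residue property in that last sentence is easy to demonstrate with zlib's CRC-32, which uses the same polynomial as Ethernet and the 32-bit PPP FCS. Appending the CRC least-significant-byte first makes the CRC of the whole frame come out to a well-known constant (0x2144DF1C for this CRC variant), no matter what the payload is:

```python
import struct
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Append the CRC-32 FCS little-endian, i.e. least significant
    byte first, as reflected CRCs are transmitted on the wire."""
    return payload + struct.pack("<I", zlib.crc32(payload))

frame = append_fcs(b"some payload")
# Running the same CRC over payload + FCS yields a fixed residue:
print(hex(zlib.crc32(frame)))  # 0x2144df1c, for any payload
```

This is why a receiver does not need to know where the FCS field starts in advance: it can clock every received byte through the CRC circuit and simply check for the magic residue at the end of the frame.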