
Transport Layer

Internetworking

Part of the book series: X.media.publishing ((XMEDIAPUBL))

Abstract

The Internet Protocol provides the unifying connection for a variety of network technologies. It creates a uniform, homogeneous, global network from different architectures that can be likened to a colorful patchwork. But this is by no means the end of the story. On its own, IP provides an unreliable and insecure service. Data is transported into the receiver’s network, each individual piece independent of the others and without a guarantee of secure, reliable delivery. Reliable communication is, however, the prerequisite for efficient and safe data traffic in the network. An application must be able to rely on once-sent data actually reaching its designated receiver. At the very least, it must be possible to determine whether or not the transmitted data has in fact reached its goal, and if so, whether this happened in a timely and reliable manner. The establishment and management of a secure, reliable connection is one of the tasks of the next higher protocol layer, the transport layer of the TCP/IP reference model. This is where the complex and reliable Transmission Control Protocol (TCP) and the speed-optimized User Datagram Protocol (UDP) operate. These provide the user with convenient tools for connection management. The transport-layer protocols place a further abstraction layer on top of the Internet and implement a direct end-to-end connection, without attention having to be paid to the details of the connection, e.g., the routing of the data traffic.

Much might be done—did we stand fast together.

–Friedrich Schiller (1759–1805), “Wilhelm Tell”.


Author information

Corresponding author

Correspondence to Christoph Meinel.

Glossary

Address resolution

The translation of IP addresses into the corresponding hardware addresses of the network hardware. A host or a router uses address resolution if data is sent to another computer within the same physical network. Address resolution is always restricted to a particular network, i.e., a computer never resolves the address of a computer in another network. Conversely, the fixed assignment of a hardware address to an IP address is known as address binding.

Address Resolution Protocol (ARP)

A protocol from the TCP/IP protocol family for the resolution of protocol addresses into hardware addresses. To determine a hardware address, the requesting computer sends the corresponding protocol address as an ARP message via broadcast. The computer being sought recognizes its own protocol address and is the only one to reply, sending its hardware address in an ARP message directly back to the requesting computer.
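
As an illustration of the message format involved, the following sketch packs the fields of an ARP request for Ethernet/IPv4 as laid out in RFC 826 using Python's struct module. The MAC and IP addresses are hypothetical, and actually transmitting the frame (which requires raw sockets) is omitted.

```python
import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    """Pack an ARP request payload (RFC 826 layout for Ethernet/IPv4)."""
    HTYPE_ETHERNET = 1        # hardware type: Ethernet
    PTYPE_IPV4 = 0x0800       # protocol type: IPv4
    HLEN, PLEN = 6, 4         # MAC address length, IPv4 address length
    OPER_REQUEST = 1          # operation code: request
    target_mac = b"\x00" * 6  # unknown; this is what we are asking for

    return struct.pack(
        "!HHBBH6s4s6s4s",
        HTYPE_ETHERNET, PTYPE_IPV4, HLEN, PLEN, OPER_REQUEST,
        sender_mac, sender_ip, target_mac, target_ip,
    )

# Hypothetical addresses, for illustration only.
payload = build_arp_request(
    sender_mac=bytes.fromhex("020000000001"),
    sender_ip=bytes([192, 0, 2, 10]),
    target_ip=bytes([192, 0, 2, 20]),
)
print(len(payload), payload.hex())  # 28-byte ARP payload
```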

Application Programming Interface (API)

An interface between application and protocol software. The application programming interface makes routines and data structures available for using and controlling the communication between applications and the protocol software.

Authentication

Proves the identity of a user or the integrity of a message. In authentication, certificates of a trustworthy instance are used for identity verification, and digital signatures are generated and sent with a message to check its integrity.

Broadcasting

Term used for addressing all computers in a network simultaneously. A distinction is made in the Internet between a broadcast in the local network and a broadcast over all networks. If a message is to be forwarded to just one computer—which corresponds to standard addressing—the term unicast is used.
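
A minimal sketch of a local-network broadcast sender using Python's socket module; the port number and message are arbitrary assumptions.

```python
import socket

# Send one UDP datagram to every host on the local network.
# Port 9999 and the message are arbitrary choices for illustration.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)   # allow broadcast
sock.sendto(b"hello, everyone", ("255.255.255.255", 9999))   # limited broadcast address
sock.close()
```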

Certificate

Digital certificates are the electronic equivalent to a personal ID card. They assign their owner a unique public key and therefore a digital signature, which can only be generated with the corresponding private key. Certificates must be issued and signed by a trustworthy third party known as a certification authority.

Certificate Authority (CA)

A certificate authority authenticates the public keys of registered users with the help of certificates, according to the Internet standard RFC 1422, and is thereby used for user identification. The public key of the user is digitally signed together with the name of the user and the control data of the CA and issued in this form as a certificate.

Congestion control

If a sender transmits data over a communication connection faster than the receiver can process it, the data is temporarily stored in a special buffer provided expressly for this purpose. Once the buffer is full, incoming data packets can no longer be stored and must be discarded, and congestion results. Because the transport layer only establishes and controls end-to-end connections, there is no possibility of directly influencing the intermediate network systems along a connection. This is the reason that possible delays due to congestion have to be taken into account indirectly when determining the round-trip time between sender and receiver. The data flow between sender and receiver must likewise be throttled via flow control to counteract any arising congestion situations.
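
As a sketch of how congestion-induced delays feed back into the sender's timing, the following estimator smooths round-trip-time samples in the style of RFC 6298 (Jacobson/Karels). The initial values and samples are made up for illustration.

```python
# Congestion-related queueing delays show up as larger RTT samples and
# push the retransmission timeout (RTO) upward.
ALPHA, BETA = 1 / 8, 1 / 4   # standard smoothing gains

def update_rto(srtt, rttvar, sample):
    """Fold one new RTT measurement (in seconds) into the estimator."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto

srtt, rttvar = 0.100, 0.050           # hypothetical initial estimates
for sample in (0.110, 0.095, 0.300):  # the 0.3 s sample mimics congestion delay
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"sample={sample:.3f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")
```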

Connection-oriented/connectionless service

A distinction is generally made between connection-oriented and connectionless Internet services. Before the actual start of data transmission, connection-oriented services must establish a connection via fixed, negotiated exchanges in the network. The thus-established connection path is used for the duration of the entire communication. Connectionless services do not choose a fixed connection path in advance. The transmitted data packets are sent—independent of each other—on possibly different paths through the Internet.
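
A sketch of the difference as it appears at the socket interface: the TCP client must first establish a connection, while the UDP client simply sends each datagram without prior setup. The server address 192.0.2.1 and port 7 are hypothetical; the sketch assumes a server is listening there.

```python
import socket

HOST, PORT = "192.0.2.1", 7  # hypothetical server address and port

# Connection-oriented: TCP performs an explicit connection setup
# (three-way handshake) before any payload is exchanged.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.connect((HOST, PORT))       # establish the connection first
    tcp.sendall(b"hello over TCP")

# Connectionless: UDP sends each datagram independently, with no prior setup.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello over UDP", (HOST, PORT))
```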

Cryptography

The branch of computer science and mathematics concerned with the construction and evaluation of encryption procedures. The aim of cryptography is to protect confidential information from being accessed by unauthorized third parties.

Data Encryption Standard (DES)

Symmetrical block encryption standard that was published in 1977 and updated in 1993 for commercial use. DES encodes blocks of 64 bits each with an equally long key (effectively 56 bits). The DES procedure is made up of a total of 19 rounds, of which the 16 inner rounds are controlled by the key. DES is a 64-bit substitution encryption procedure and can today be broken by relatively simple means. For heightened security, DES is applied multiple times with different keys, e.g., in triple DES (3DES).

Data integrity

While cryptography cannot prevent data or messages from being changed by an unauthorized third party during transport, such changes can be made detectable through the use of so-called hash functions, which compute a digital fingerprint that is sent along with the transmitted data.
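
A minimal sketch using SHA-256 (one common hash function) as the digital fingerprint: any change to the message in transit produces a different fingerprint.

```python
import hashlib

message = b"transfer 100 EUR to account 42"
fingerprint = hashlib.sha256(message).hexdigest()   # digital fingerprint of the message

# The receiver recomputes the hash; any modification in transit changes it.
tampered = b"transfer 900 EUR to account 42"
print(fingerprint)
print(hashlib.sha256(tampered).hexdigest())          # differs from the original fingerprint
```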

Diffie-Hellman Procedure

The first publicly known asymmetrical encryption procedure, developed in 1976 by W. Diffie, M. Hellman and R. Merkle. Like the RSA procedure, Diffie-Hellman is based on a mathematical function whose reversal, here the problem of the discrete logarithm, is virtually impossible to compute with reasonable effort.
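
A toy sketch of the Diffie-Hellman exchange with textbook-sized numbers (p = 23, g = 5); real deployments use moduli of 2048 bits or more.

```python
import secrets

p, g = 23, 5   # deliberately tiny public parameters, for illustration only

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice's public value, sent over the insecure channel
B = pow(g, b, p)   # Bob's public value, sent over the insecure channel

shared_alice = pow(B, a, p)   # Alice combines Bob's public value with her secret
shared_bob = pow(A, b, p)     # Bob combines Alice's public value with his secret
assert shared_alice == shared_bob   # both sides derive the same shared secret
print(shared_alice)
```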

Digital signature

Is used to authenticate a document and consists of the digital fingerprint of the document encrypted with the private key of the originator.

Flow control

In a communication network, flow control prevents a faster sender from flooding a slower receiver with transmitted data and causing congestion. As a rule, the receiver has buffer storage where incoming data packets can be stored until their subsequent processing. Protocol mechanisms must be provided to prevent this intermediate storage from overflowing. This means giving the receiver the possibility to tell the sender to wait until the buffered data has been processed before sending further data packets.
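
A toy simulation of the idea: the receiver advertises its remaining buffer space as a window, and the sender transmits only as much as the window allows. The buffer size and segment names are arbitrary assumptions.

```python
from collections import deque

BUFFER_SIZE = 4
buffer = deque()                    # the receiver's intermediate storage

def advertised_window() -> int:
    """Free buffer space the receiver announces to the sender."""
    return BUFFER_SIZE - len(buffer)

data = [f"segment-{i}" for i in range(10)]
while data:
    window = advertised_window()
    if window == 0:
        print("window closed, sender waits")
        buffer.popleft()            # receiver processes one buffered segment
        continue
    for _ in range(min(window, len(data))):
        buffer.append(data.pop(0))  # send only as much as the window allows
    print(f"sent up to window, buffer now holds {len(buffer)} segments")
```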

Fragmentation/defragmentation

The length of data packets sent by a communication protocol in a packet-switched network is always limited below the application layer due to technical restrictions. If the length of the message to be sent is larger than the respective prescribed data packet length, the message is broken up into message pieces (fragments). These correspond to the prescribed length restrictions. Individual fragments are given a sequence number so that after transmission they may be reassembled correctly (defragmented) at the receiver. This is necessary as transmission order cannot be guaranteed in the Internet.
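
A toy sketch of fragmentation and reassembly with sequence numbers; the fragment size is an arbitrary assumption, and the shuffle merely mimics out-of-order delivery.

```python
import random

MTU = 8   # hypothetical maximum fragment payload in bytes

def fragment(message: bytes):
    """Cut the message into fixed-size fragments tagged with sequence numbers."""
    return [(seq, message[i:i + MTU])
            for seq, i in enumerate(range(0, len(message), MTU))]

def defragment(fragments):
    """Reassemble the original message by sorting on the sequence numbers."""
    return b"".join(payload for _, payload in sorted(fragments))

message = b"a message that is longer than one fragment"
fragments = fragment(message)
random.shuffle(fragments)                  # the network may reorder fragments
assert defragment(fragments) == message    # sequence numbers restore the order
```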

Gateway

Intermediate system in the network that is capable of connecting individual networks into a new system. Gateways allow communication between application programs on different end systems and are located in the application layer of the communication protocol model. They are thus able to translate different application protocols into each other.

Internet

The merging of multiple, mutually incompatible network types with each other. Appearing to the user as a homogeneous universal network, it allows every computer connected to one of the networks in this union to communicate transparently with every other host on the internet. An internet is not subject to limitations in terms of its expansion. The concept of internetworking is very flexible, with an unlimited extension of the internet possible at any time.

Internet Protocol (IP)

Protocol on the network layer of the TCP/IP reference model. As one of the cornerstones of the Internet, IP is responsible for the global Internet, made up of many heterogeneous individual networks, appearing as a unified, homogeneous network. A standard addressing schema (IP addresses) offers worldwide unique computer identification. For this, IP provides a connectionless packet-switching datagram service, which works according to the best effort principle rather than fulfilling a service guarantee.

IP datagram

The data packets transmitted via the IP protocol are referred to as datagrams. This is because the IP protocol only provides a connectionless and unreliable service (datagram service).

IPv4 address

32-bit binary address uniquely identifying a computer in the global Internet. For better readability this address is subdivided into four octets, each written as an unsigned decimal integer and separated by dots (e.g., 232.23.3.5). The IPv4 address is subdivided into two parts: the address prefix (network ID), which uniquely establishes worldwide the network where the addressed computer is located, and the address suffix (host ID), uniquely identifying the computer within its local network.
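
A sketch of the network-ID/host-ID split using Python's ipaddress module. The address 192.0.2.17 and the /24 prefix length are hypothetical examples; the classful scheme described above derives the split from the address itself.

```python
import ipaddress

iface = ipaddress.ip_interface("192.0.2.17/24")
addr_int = int(iface.ip)                          # the 32-bit binary address
network_id = iface.network                        # prefix part (network ID)
host_id = addr_int & int(iface.hostmask)          # suffix part (host ID)

print(f"{iface.ip} = {addr_int:032b}")
print("network ID:", network_id)   # 192.0.2.0/24
print("host ID   :", host_id)      # 17
```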

IPv6

The successor protocol standard of the IPv4 Internet protocol, offering a considerably expanded functionality. The limited address space in IPv4, one of the major problems of the popular IP standard, was drastically increased in IPv6 from 32 bits to 128 bits.

Key

A message can be transmitted safely via an insecure medium if its contents remain hidden from unauthorized third parties. This is done with the help of an encryption procedure (cipher). The original message, the so-called plain text, is converted by a transformation function into the encrypted message (cipher text). The transformation function for encryption can be parameterized via a key. The size of the key space is a measure of the difficulty of an unauthorized reversal of the transformation function.
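
A toy illustration of a key-parameterized transformation: XOR of the plain text with a repeating key. The hypothetical 16-bit key implies a key space of 2**16 entries, which is far too small, and the cipher itself is far too weak for real use.

```python
def xor_transform(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain = b"attack at dawn"
key = b"\x5a\xc3"                             # 16-bit key => key space of 2**16
cipher = xor_transform(plain, key)            # encryption (plain text -> cipher text)
assert xor_transform(cipher, key) == plain    # the same transformation reverses it
print(cipher.hex())
```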

Man-in-the-middle attack

An attack on a secure connection between two communication partners. The attacker intervenes between the two (“man in the middle”), intercepts the communication and manipulates it without being noticed by the communication partners.

Multicasting

A source transmits to a group of receivers simultaneously in a multicasting transmission; a 1:n communication is involved. Multicasting is often used for the transmission of real-time multimedia data.

Network Address Translation (NAT)

With a small number of public IPv4 addresses, NAT technology makes it possible to operate a much larger number of computers in a shared network and to manage them dynamically using the private IPv4 address space. The NAT-operated devices remain publicly reachable via the Internet, although they do not have their own public IP address and can only be addressed over an appropriate NAT gateway.
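
A toy sketch of the translation idea behind NAT: outgoing connections from private addresses are rewritten to the single public address, and the chosen public port indexes a table that maps replies back. All addresses and ports here are hypothetical.

```python
from itertools import count

PUBLIC_IP = "203.0.113.7"      # the gateway's single public address (hypothetical)
_next_port = count(40000)      # pool of public source ports
nat_table = {}                 # public port -> (private ip, private port)

def outbound(private_ip: str, private_port: int):
    """Rewrite an outgoing connection to the public address and record it."""
    public_port = next(_next_port)
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port          # what the outside world sees

def inbound(public_port: int):
    """Map an incoming reply back to the internal host."""
    return nat_table[public_port]

seen_outside = outbound("192.168.0.12", 51000)
print(seen_outside)                 # ('203.0.113.7', 40000)
print(inbound(seen_outside[1]))     # ('192.168.0.12', 51000)
```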

Port number

16-bit identifier for a TCP connection that is always associated with a specific application program. The port numbers 0–255 are reserved for special TCP/IP applications (well-known ports) and the port numbers 256–1,023 for special UNIX applications. The port numbers 1,024–65,535 can be used for an individual’s own applications and are not subject to fixed assignment.
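
A small sketch using Python's socket module: well-known ports can be looked up by service name and vice versa, and binding to port 0 asks the operating system for a free port outside the fixed assignments. The lookups assume the local services database contains the usual entries.

```python
import socket

# Well-known port lookups (depend on the local services database).
print(socket.getservbyname("http", "tcp"))   # 80
print(socket.getservbyport(443, "tcp"))      # 'https'

# Binding to port 0 lets the operating system pick a free, non-reserved port.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))
    print("ephemeral port:", s.getsockname()[1])
```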

Privacy

Contents of a private message may only be known to the sender and receiver of the message. If an unauthorized third party “listens in” on a communication (eavesdropping), confidentiality can no longer be guaranteed (loss of privacy).

Public key encryption

In the cryptographic procedure known as “public key,” every communication partner has a pair of keys consisting of a so-called public key and a secret, private key. The public key is made available to all participants with whom communication is desired. A participant wanting to communicate with the holder of the public key encrypts its message with this public key. A message encrypted this way can only be decrypted by the key holder with the help of the corresponding secret key, which is held securely by its owner.

Public Key Infrastructure (PKI)

In implementing the public key encryption procedure, every participant is required to have a key pair consisting of a key accessible to everyone (public key) and a secret key to which only that participant has access (private key). To eliminate abuse, the assignment of a participant to its public key is confirmed by a trustworthy third party, the Certificate Authority (CA), by means of a certificate. In order to be able to evaluate the security of a certificate, the rules according to which the certificate is created (security policy) must be made public. A PKI comprises all of the organizational and technical measures required for the secure implementation of an asymmetrical key procedure, whether for encryption or for digital signatures.

Router

A switching computer capable of connecting two or more subnets with each other. Routers work in the network layer (IP layer) and are capable of forwarding incoming data packets on the shortest route through the network based on their destination address.

Routing

There are often multiple intermediate systems (routers) along the path between the sender and the receiver in an internet. These handle the forwarding of transmitted data to the respective receiver. The determination of the correct path from sender to receiver is known as routing. Routers receive a transmitted data packet, evaluate its address information and forward it accordingly to the designated receiver.
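
A toy forwarding decision via longest-prefix match over a static table; the networks and next hops are hypothetical, and real routers fill such tables via routing protocols.

```python
import ipaddress

forwarding_table = [
    (ipaddress.ip_network("192.0.2.0/24"),   "next hop A"),
    (ipaddress.ip_network("192.0.2.128/25"), "next hop B"),
    (ipaddress.ip_network("0.0.0.0/0"),      "default gateway"),
]

def forward(destination: str) -> str:
    """Return the next hop whose prefix matches the destination most specifically."""
    dest = ipaddress.ip_address(destination)
    matches = [(net.prefixlen, hop) for net, hop in forwarding_table if dest in net]
    return max(matches)[1]   # the longest (most specific) matching prefix wins

print(forward("192.0.2.200"))    # next hop B (the /25 is more specific)
print(forward("198.51.100.9"))   # default gateway
```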

RSA procedure

This is the best-known asymmetrical encryption procedure and is named after its developers: Rivest, Shamir and Adleman. Like the Diffie-Hellman procedure, RSA works with two keys: a public key, available to everyone, and a secret, private key. RSA is based on facts from number theory, namely the problem of prime factorization. Decryption with reasonable effort is not possible without knowledge of the secret, private key.
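
A toy RSA computation with deliberately tiny primes (a classic textbook example); real keys use primes hundreds of digits long so that factoring the modulus remains infeasible. The modular inverse via pow(e, -1, phi) requires Python 3.8 or later.

```python
# Toy RSA key generation, encryption and decryption.
p, q = 61, 53
n = p * q                 # public modulus (3233); factoring n must stay hard
phi = (p - 1) * (q - 1)   # Euler's totient (3120)
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: modular inverse of e (2753)

message = 65
cipher = pow(message, e, n)      # encrypt with the public key (e, n)
recovered = pow(cipher, d, n)    # decrypt with the private key (d, n)
assert recovered == message
print(cipher, recovered)
```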

Secret key encryption

Oldest family of encryption procedures, with sender and receiver both using an identical secret key for the encryption and decryption of a message. A distinction is made between block ciphers, where the message to be encrypted is segmented into blocks of a fixed length before encryption, and stream ciphers, where the message is viewed as a text stream, a key stream of identical length is generated and encryption is carried out character by character. Symmetrical encryption involves the problem of exchanging the key while keeping it secret from third parties.

Service primitives

Abstract, implementation-independent processes for the use of a service on a certain level of the TCP/IP reference model, also called service elements. They define communication processes and can be used as abstract guidelines in defining communication interfaces. Only the data to be exchanged between communication partners is defined via service primitives and not how the process is carried out.

Socket

The TCP protocol provides a reliable connection between two end systems. For this purpose, sockets are defined as the endpoints at the participating computers. A socket is made up of the IP address of the computer and a 16-bit port number; together with the corresponding socket of the communication partner, it uniquely defines the connection. Via sockets, so-called service primitives are available that allow command and control of the data transmission. Sockets associate incoming and outgoing buffer storage with the connections they have established.
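
A minimal sketch of sockets as connection endpoints: the server socket is identified by an (IP address, port number) pair and a client connects to it. The echo behavior and the use of a helper thread are only there to make the example self-contained.

```python
import socket
import threading

def echo_server(server_sock: socket.socket) -> None:
    conn, peer = server_sock.accept()   # service primitive: accept a connection
    with conn:
        data = conn.recv(1024)          # read from the incoming buffer
        conn.sendall(data)              # echo it back

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
    server.listen(1)
    host, port = server.getsockname()   # the server's socket endpoint (IP, port)
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect((host, port))    # service primitive: establish the connection
        client.sendall(b"ping")
        print(client.recv(1024))        # b'ping'
```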

Transmission Control Protocol (TCP)

Protocol standard on the transport layer of the TCP/IP reference model. TCP provides a reliable, connection-oriented transport service upon which many Internet applications are based.

User Datagram Protocol (UDP)

Protocol standard on the transport layer of the TCP/IP reference model. UDP provides a simple, non-guaranteed, connectionless transport service in which the datagrams are sent via the IP protocol. The principal difference between IP and UDP is essentially only that UDP manages port numbers, which allow applications on different computers to communicate with each other via the Internet.

Virtual connection

A connection between two end systems that is created solely by the software installed at the end systems. The underlying network therefore does not have to provide any dedicated resources but only guarantee data transport. Because the connection does not exist in a real form across the network, but is only an illusion created by the software at the end systems, it is referred to as a virtual connection.


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Meinel, C., Sack, H. (2013). Transport Layer. In: Internetworking. X.media.publishing. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35392-5_8

  • DOI: https://doi.org/10.1007/978-3-642-35392-5_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35391-8

  • Online ISBN: 978-3-642-35392-5
