
EITC/IS/CNF Computer Networking Fundamentals is the European IT Certification programme on theory and practical aspects of basic computer networking.
The curriculum of the EITC/IS/CNF Computer Networking Fundamentals focuses on knowledge and practical skills in the foundations of computer networking, organized within the following structure. It encompasses comprehensive and structured EITCI certification curriculum self-learning materials supported by referenced open-access video didactic content as a basis for preparation towards earning this EITC Certification by passing a corresponding examination.
A computer network is a collection of computers that share resources between network nodes. To communicate with one another, the computers use standard communication protocols across digital links. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency systems that can be assembled in a number of network topologies. Personal computers, servers, networking hardware, and other specialized or general-purpose hosts can all be nodes in a computer network. They may be identified by network addresses and hostnames. Hostnames serve as easy-to-remember labels for nodes and are rarely modified after they are assigned. Communication protocols such as the Internet Protocol use network addresses to locate and identify nodes. Security is one of the most critical aspects of networking. This EITC curriculum covers the foundations of computer networking.
Computer networks can be classified by many factors, including the transmission medium used to carry signals, bandwidth, the communications protocols that organize network traffic, network size, topology, traffic-control mechanisms, and organizational intent.
Access to the World Wide Web, digital video, digital music, shared usage of application and storage servers, printers, and fax machines, and use of email and instant messaging programs are all supported via computer networks.
A computer network uses multiple technologies such as email, instant messaging, online chat, audio and video telephone conversations, and video conferencing to extend interpersonal connections via electronic means. A network allows network and computing resources to be shared. Users can access and use network resources such as printing a document on a shared network printer or accessing and using a shared storage drive. A network allows authorized users to access information stored on other computers on the network by transferring files, data, and other sorts of information. To complete tasks, distributed computing makes use of computing resources spread over a network.
Packet-mode transmission is used by the majority of current computer networks. A packet-switched network transports a network packet, which is a formatted unit of data.
Packets carry two types of data: control information and user data (the payload). The control information includes data such as source and destination network addresses, error detection codes, and sequencing information that the network needs to deliver the user data. Control data is typically carried in packet headers and trailers, with the payload data in between.
The bandwidth of the transmission medium can be better shared among users using packets than with circuit-switched networks. When one user is not transmitting packets, the link can be filled with packets from other users, allowing the cost to be shared with minimal disturbance, as long as the link is not abused. Often, the route a packet must take through a network is not immediately available. In that case, the packet is queued and is not sent until a link becomes free.
Packet network physical link technologies often limit packet size to a specific maximum transmission unit (MTU). A larger message may be fragmented before being transferred, and the packets are reassembled to form the original message once they arrive.
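To make the header/payload split and MTU-limited fragmentation concrete, here is a minimal sketch in Python; the 4-byte header layout (sequence number plus fragment count) is purely illustrative and does not correspond to any real protocol.

# Minimal sketch: splitting a message into MTU-sized packets, each carrying
# a small header (sequence number + total fragment count) before its payload.
# The header layout is illustrative only, not any real protocol.
import struct

MTU = 1500                      # maximum transmission unit in bytes
HEADER = struct.Struct("!HH")   # 2-byte sequence number, 2-byte fragment count
PAYLOAD_MAX = MTU - HEADER.size

def fragment(message: bytes) -> list[bytes]:
    """Split a message into packets no larger than the MTU."""
    chunks = [message[i:i + PAYLOAD_MAX] for i in range(0, len(message), PAYLOAD_MAX)]
    return [HEADER.pack(seq, len(chunks)) + chunk for seq, chunk in enumerate(chunks)]

def reassemble(packets: list[bytes]) -> bytes:
    """Reassemble the original message from (possibly reordered) packets."""
    ordered = sorted(packets, key=lambda p: HEADER.unpack(p[:HEADER.size])[0])
    return b"".join(p[HEADER.size:] for p in ordered)

message = b"x" * 4000
packets = fragment(message)
assert reassemble(packets) == message
print(f"{len(message)} bytes sent as {len(packets)} packets of <= {MTU} bytes")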
Common network topologies
The physical or geographic locations of network nodes and links have little impact on a network, but the architecture of a network’s interconnections can have a considerable impact on its throughput and dependability. With some technologies, such as bus or star networks, a single failure can cause the entire network to fail. In general, the more interconnections a network has, the more stable it is; yet the more expensive it is to set up. As a result, most network diagrams are organized according to their network topology, which is a map of the logical relationships between network hosts.
The following are examples of common layouts:
Bus network: all nodes are connected to a common medium along this medium. This was the layout of the original Ethernet, known as 10BASE5 and 10BASE2. It is still a prevalent architecture on the data link layer, although modern physical layer variants use point-to-point links to build a star or a tree instead.
Star network: all nodes are connected to a central node. This is the typical configuration in a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client associates with the central wireless access point.
Ring network: each node is connected to its left and right neighbour nodes, so that all nodes are connected and each node can reach any other node by traversing nodes to the left or right. Token ring networks and the Fiber Distributed Data Interface (FDDI) used this topology.
Mesh network: each node is connected to an arbitrary number of neighbours in such a way that there is at least one traversal from any node to any other.
Fully connected network: each node is connected to every other node in the network.
Tree network: the nodes are arranged hierarchically. With several switches and no redundant meshing, this is the natural topology for a larger Ethernet network.
The physical layout of a network’s nodes does not always reflect the network’s topology. The network topology of FDDI, for example, is a ring, but the physical topology is frequently a star, because all neighbouring connections can be routed through a single physical site. The physical layout is not wholly meaningless, however, because common ducting and equipment placements can represent single points of failure owing to concerns like fires, power outages, and flooding.
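As a simple illustration of how these layouts differ, the following minimal Python sketch represents a few of the topologies above as sets of links and compares how many links each needs for the same number of nodes.

# Minimal sketch: representing some of the topologies above as sets of links,
# so the number of links per layout can be compared.

def star(n):                       # node 0 is the central hub/switch
    return {(0, i) for i in range(1, n)}

def ring(n):                       # each node linked to its left/right neighbour
    return {(i, (i + 1) % n) for i in range(n)}

def fully_connected(n):            # every node linked to every other node
    return {(i, j) for i in range(n) for j in range(i + 1, n)}

for name, links in [("star", star(6)), ("ring", ring(6)), ("full mesh", fully_connected(6))]:
    print(f"{name:10s}: {len(links)} links for 6 nodes")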
Overlay networks
A virtual network that is established on top of another network is known as an overlay network. Virtual or logical links connect the overlay network’s nodes. Each link in the overlay network corresponds to a path in the underlying network that may pass through several physical links. The overlay network’s topology may (and frequently does) differ from the underlying network’s. Many peer-to-peer networks, for example, are overlay networks. They are organized as nodes in a virtual network of links that runs over the Internet.
Overlay networks have existed since the dawn of networking, when computer systems were connected across telephone lines via modems before there was a data network.
The Internet is the most visible example of an overlay network. The Internet was originally built as an overlay on the telephone network. Even today, an underlying mesh of sub-networks with widely varied topologies and technologies allows each Internet node to communicate with nearly any other. Address resolution and routing are the means by which a fully connected IP overlay network is mapped onto its underlying network.
A distributed hash table, which maps keys to network nodes, is another example of an overlay network. The underlying network in this case is an IP network, and the overlay network is a key-indexed table (really a map).
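As a rough sketch of the distributed hash table idea, the snippet below hashes keys and assigns each to one of a handful of hypothetical overlay nodes; real DHTs such as Chord or Kademlia instead use consistent hashing so that nodes joining or leaving move as few keys as possible.

# Minimal sketch of the distributed-hash-table idea: a key's hash decides
# which node is responsible for storing it. The node names and the simple
# modulo placement rule are illustrative simplifications.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical overlay nodes

def responsible_node(key: str) -> str:
    digest = hashlib.sha1(key.encode()).digest()
    index = int.from_bytes(digest, "big") % len(NODES)
    return NODES[index]

for key in ["alice.txt", "bob.txt", "movie.mkv"]:
    print(key, "->", responsible_node(key))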
Overlay networks have also been proposed as a technique to improve Internet routing, such as by ensuring higher-quality streaming media through quality of service assurances. Previous suggestions like IntServ, DiffServ, and IP Multicast haven’t gotten much traction, owing to the fact that they require all routers in the network to be modified. On the other hand, without the help of Internet service providers, an overlay network can be incrementally installed on end-hosts running the overlay protocol software. The overlay network has no influence over how packets are routed between overlay nodes in the underlying network, but it can regulate the sequence of overlay nodes that a message passes through before reaching its destination.
Transmission media
Electrical cable, optical fiber, and free space are examples of transmission media (also known as the physical medium) used to connect devices to establish a computer network. The software to handle media is defined at layers 1 and 2 of the OSI model — the physical layer and the data link layer.
Ethernet refers to a group of technologies that use copper and fiber media in local area network (LAN) technology. IEEE 802.3 defines the media and protocol standards that allow networked devices to communicate over Ethernet. Radio waves are used in some wireless LAN standards, whereas infrared signals are used in others. The power cabling in a building is used to transport data in power line communication.
In computer networking, the following wired technologies are employed.
Coaxial cable is frequently used for local area networks in cable television systems, office buildings, and other work sites. The transmission speed varies between 200 million bits per second and 500 million bits per second.
The ITU-T G.hn technology creates a high-speed local area network using existing house wiring (coaxial cable, phone lines, and power lines).
Wired Ethernet and other standards employ twisted pair cabling. It usually consists of four pairs of copper wiring that can be used to transmit both voice and data. Twisting the two wires together reduces crosstalk and electromagnetic induction. The transmission speed ranges from 2 megabits per second to 10 gigabits per second. There are two types of twisted pair cabling: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each type is available in a variety of category ratings, allowing it to be used in a variety of situations.
(Figure: submarine optical fiber telecommunication lines shown on a 2007 world map.)
An optical fiber is a glass fiber. It uses lasers and optical amplifiers to transmit light pulses that represent data. Optical fibers provide several advantages over metal lines, including minimal transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry numerous streams of data on distinct wavelengths of light, which raises the rate of data transmission to billions of bits per second. Optical fibers are used in subsea cables that connect continents and can be used for long cable runs carrying very high data rates. Single-mode optical fiber (SMF) and multi-mode optical fiber (MMF) are the two primary forms of fiber optics. Single-mode fiber offers the advantage of sustaining a coherent signal over dozens, if not hundreds, of kilometers. Multi-mode fiber is less expensive to terminate but has a maximum length of only a few hundred, or even only a few dozen, meters, depending on the data rate and cable grade.
Wireless networks
Wireless network connections can be formed using radio or other electromagnetic communication methods.
Terrestrial microwave communication makes use of Earth-based transmitters and receivers that resemble satellite dishes. Terrestrial microwaves operate in the low gigahertz range, limiting all communications to line of sight. The relay stations are around 40 miles (64 kilometers) apart.
Communications satellites also communicate via microwave. The satellites are normally in geosynchronous orbit, 35,400 kilometers (22,000 miles) above the equator. Voice, data, and television signals can be received and relayed by these Earth-orbiting devices.
Several radio communications technologies are used in cellular networks. These systems divide the covered territory into multiple geographic areas (cells). A low-power transceiver serves each area.
Wireless LANs employ a high-frequency radio technology comparable to digital cellular in order to communicate. Spread spectrum technology is used in wireless LANs to allow communication between several devices in a small space. Wi-Fi is a type of open-standards wireless radio-wave technology defined by IEEE 802.11.
Free-space optical communication communicates via visible or invisible light. Line-of-sight propagation is employed in most circumstances, which restricts the physical positioning of connecting devices.
The Interplanetary Internet is a radio and optical network that extends the Internet to interplanetary dimensions.
RFC 1149 was a humorous April Fools’ Request for Comments about IP over Avian Carriers. In 2001, it was implemented in real life.
The last two situations have a long round-trip delay, resulting in delayed two-way communication but not preventing the transmission of massive volumes of data (they can have high throughput).
Network nodes
In addition to any physical transmission media, networks are built from basic system building blocks such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any given piece of equipment will almost always contain several of these building blocks and so be able to perform multiple tasks.
Network interfaces
(Figure: an ATM network interface in the form of an accessory card; a large number of network interfaces are built in.)
A network interface controller (NIC) is a piece of computer hardware that links a computer to a network and can process low-level network data. The NIC may have a connector for accepting a cable, or an antenna for wireless transmission and reception, along with the associated circuitry.
Each network interface controller in an Ethernet network has a unique Media Access Control (MAC) address, which is normally stored in the controller’s permanent memory. The Institute of Electrical and Electronics Engineers (IEEE) maintains and oversees MAC address uniqueness to prevent address conflicts between network devices. An Ethernet MAC address is six octets long. The three most significant octets are allocated for NIC manufacturer identification. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
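A minimal Python sketch of this address structure, splitting a made-up MAC address into its manufacturer prefix and device-specific part:

# Minimal sketch: splitting an Ethernet MAC address into its manufacturer
# (OUI) prefix -- the three most significant octets -- and the device-specific
# part assigned by that manufacturer. The address below is made up.

def split_mac(mac: str) -> tuple[str, str]:
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError(f"not a 48-bit MAC address: {mac}")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device = split_mac("00:1A:2B:3C:4D:5E")
print("manufacturer prefix (OUI):", oui)     # 00:1a:2b
print("device identifier:", device)          # 3c:4d:5e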
Hubs and repeaters
A repeater is an electronic device that accepts a network signal and cleans it of unwanted noise before regenerating it. The signal is retransmitted at a greater power level or to the other side of the obstruction, allowing it to go further without deterioration. Repeaters are necessary in most twisted pair Ethernet systems for cable runs greater than 100 meters. Repeaters can be tens or even hundreds of kilometers apart when using fiber optics.
Repeaters work on the OSI model’s physical layer, but they still take a little time to regenerate the signal. This can result in a propagation delay, which can compromise network performance and function. As a result, several network topologies, such as the Ethernet 5-4-3 rule, limit the number of repeaters that can be utilized in a network.
An Ethernet hub is an Ethernet repeater with many ports. A repeater hub helps with network collision detection and fault isolation in addition to reconditioning and distributing network signals. Modern network switches have mostly replaced hubs and repeaters in LANs.
Switches and bridges
In contrast to a hub, which forwards frames to all ports, network bridges and switches only forward frames to the ports involved in the communication. Because a bridge has only two ports, a switch can be thought of as a multi-port bridge. Switches typically feature a large number of ports, allowing a star topology for devices and the cascading of further switches.
Bridges and switches operate at the data link layer (layer 2) of the OSI model, bridging traffic between two or more network segments to form a single local network. Both are devices that forward data frames across ports based on the destination MAC address in each frame. Examining the source addresses of received frames teaches them how to associate physical ports with MAC addresses, and they only forward frames when necessary. If a frame is addressed to an unknown destination MAC, the device floods it to all ports except the source port and learns the destination’s location from the reply.
Bridges and switches divide the network’s collision domain while leaving the broadcast domain the same. Bridging and switching help break down a large, congested network into a collection of smaller, more efficient networks, which is known as network segmentation.
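The forwarding behaviour just described can be sketched in a few lines of Python; the MAC addresses and port numbers below are hypothetical, and a real switch does this in hardware.

# Minimal sketch of the learning/forwarding behaviour described above:
# a switch associates source MAC addresses with the port they were seen on,
# forwards known destinations out of one port, and floods unknown ones.

class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table: dict[str, int] = {}       # MAC address -> port number

    def handle_frame(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        self.mac_table[src_mac] = in_port          # learn where the sender lives
        if dst_mac in self.mac_table:               # known destination: one port
            return [self.mac_table[dst_mac]]
        # unknown destination: flood to every port except the one it came in on
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.handle_frame(0, "aa:aa", "bb:bb"))   # bb:bb unknown -> flood [1, 2, 3]
print(sw.handle_frame(1, "bb:bb", "aa:aa"))   # aa:aa learned on port 0 -> [0]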
Routers
(Figure: a typical home or small office router, showing the ADSL telephone line and Ethernet network cable connectors.)
A router is an internetworking device that forwards packets between networks by processing the addressing or routing information contained in the packets. The routing information is often processed in conjunction with the routing table. A router uses its routing table to determine where to forward packets, rather than broadcasting them, which would be inefficient for very large networks.
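As a minimal illustration of a routing table lookup, the Python sketch below performs a longest-prefix match against a few made-up routes and next hops.

# Minimal sketch of a routing-table lookup using longest-prefix match:
# among all prefixes that contain the destination address, the most specific
# one (longest subnet mask) wins. Prefixes and next hops are made up.
import ipaddress

ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
    ipaddress.ip_network("10.0.0.0/8"):  "router-a",
    ipaddress.ip_network("10.1.2.0/24"): "router-b",
}

def lookup(destination: str) -> str:
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return ROUTES[best]

print(lookup("10.1.2.99"))    # router-b  (matched the /24)
print(lookup("10.9.9.9"))     # router-a  (matched the /8)
print(lookup("192.0.2.1"))    # default-gateway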
Modems
Modems (modulator-demodulators) connect network nodes through wires not originally designed for digital network traffic, or over wireless links. To do this, the digital signal modulates one or more carrier signals, producing an analog signal that can be tailored to give the required transmission properties. Early modems modulated audio signals delivered over a conventional voice telephone connection. Modems are still widely used for digital subscriber line (DSL) telephone lines and for cable television systems employing DOCSIS technology.
Firewalls
Firewalls are network devices or software used to control network security and access rules. Firewalls separate secure internal networks from potentially insecure external networks like the Internet. Typically, firewalls are configured to refuse access requests from unknown sources while permitting activity from known ones. The importance of firewalls in network security grows in step with the rise in cyber threats.
Communication protocols
(Figure: the TCP/IP model and its relationship to common protocols used at its various layers.)
(Figure: message flows between two devices, A and B, at the four layers of the TCP/IP model in the presence of a router, R; red flows are effective communication paths, black paths are the actual network links.)
When a router is present, message flows descend through the protocol layers at the source, cross to the router, climb up and back down the router’s stack, and continue to the final destination, where they climb back up the destination’s stack.
A communication protocol is a set of rules for sending and receiving data over a network. Communication protocols have a variety of properties. They can be either connection-oriented or connectionless, use circuit mode or packet switching, and use hierarchical or flat addressing.
Communications operations are divided up into protocol layers in a protocol stack, which is frequently built according to the OSI model, with each layer leveraging the services of the one below it until the lowest layer controls the hardware that transports information across the media. Protocol layering is used extensively in the world of computer networking. HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol) is a good example of a protocol stack. This stack is used between the wireless router and the home user’s personal computer when the user is surfing the web.
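A minimal Python sketch of that layering in practice: the application-layer HTTP request rides on a TCP socket, while IP and the link layer beneath it are handled by the operating system and the network hardware (example.com is just a placeholder host; any reachable web server would do).

# Minimal sketch of protocol layering: HTTP (application layer) is handed to
# TCP (transport layer) via a socket; IP and the link layer are handled below.
import socket

HOST = "example.com"     # placeholder host for illustration

request = (
    f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
).encode()

with socket.create_connection((HOST, 80)) as tcp:   # TCP over IP
    tcp.sendall(request)                             # HTTP rides on top of TCP
    response = b""
    while chunk := tcp.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())        # e.g. "HTTP/1.1 200 OK"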
A few of the most common communication protocols are listed here.
Widely used protocols
Internet Protocol Suite
All current networking is built on the Internet Protocol Suite, often known as TCP/IP. It provides both connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using the Internet Protocol (IP). The protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with much expanded addressing capabilities. In short, the Internet Protocol Suite is the set of protocols that defines how the Internet works.
IEEE 802
IEEE 802 refers to a group of IEEE standards that deal with local and metropolitan area networks. The IEEE 802 protocol suite as a whole offers a wide range of networking capabilities. A flat addressing method is used in the protocols. They mostly work at the OSI model’s layers 1 and 2.
MAC bridging (IEEE 802.1D), for example, uses the Spanning Tree Protocol to route Ethernet traffic. VLANs are defined by IEEE 802.1Q, while IEEE 802.1X defines a port-based Network Access Control protocol, which is the foundation for the authentication processes used in VLANs (but also in WLANs) — this is what the home user sees when entering a “wireless access key.”
Ethernet
Ethernet is a group of technologies used in wired LANs. It is described by IEEE 802.3, a collection of standards produced by the Institute of Electrical and Electronics Engineers.
Wireless LAN
Wireless LAN, often known as WLAN or WiFi, is the most well-known member of the IEEE 802 protocol family for home users today. It is based on the IEEE 802.11 specifications. IEEE 802.11 has a lot in common with wired Ethernet.
SONET/SDH
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are multiplexing techniques that use lasers to transmit multiple digital bit streams across optical fiber. They were created to transmit circuit mode communications from many sources, primarily to support circuit-switched digital telephony. SONET/SDH, on the other hand, was an ideal candidate for conveying Asynchronous Transfer Mode (ATM) frames due to its protocol neutrality and transport-oriented features.
Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a telecommunications network switching technique. It encodes data into small, fixed-size cells using asynchronous time-division multiplexing. This is in contrast to other protocols that use variable-sized packets or frames, such as the Internet Protocol Suite or Ethernet. ATM has similarities to both circuit-switched and packet-switched networking. This makes it a good fit for a network that needs to handle both high-throughput data and real-time, low-latency content like voice and video. ATM uses a connection-oriented approach, in which a virtual circuit between two endpoints must be established before the actual data transmission can begin.
While ATM is losing favor to next-generation networks, it continues to play a role in the last mile, the connection between an Internet service provider and a residential user.
Cellular standards
The Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN) are some of the different digital cellular standards.
Routing
(Figure: routing determines good paths for information to travel through a network; in the pictured example, the best routes from node 1 to node 6 are likely 1-8-7-6 or 1-8-10-6, drawn with the thickest lines.)
Routing is the process of identifying network paths for the transmission of data. Many types of networks, including circuit switching networks and packet switched networks, require routing.
Routing protocols direct packet forwarding (the transit of logically addressed network packets from their source to their final destination) across intermediate nodes in packet-switched networks. Routers, bridges, gateways, firewalls, and switches are common network hardware components that act as intermediate nodes. General-purpose computers can also forward packets and conduct routing, albeit their performance may be hindered due to their lack of specialist hardware. Routing tables, which keep track of the paths to multiple network destinations, are frequently used to direct forwarding in the routing process. As a result, building routing tables in the router’s memory is critical for efficient routing.
There are usually several routes to choose from, and different criteria can be considered when deciding which routes should be installed into the routing table, such as (ordered by priority; a short sketch after the list illustrates the tie-breaking among candidate routes):
Prefix length: longer subnet masks are preferred (independent of whether the route comes from within one routing protocol or from a different routing protocol).
Metric: a lower metric or cost is preferred (only comparable within one and the same routing protocol).
Administrative distance: a lower distance is preferred (only comparable between different routing protocols).
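A minimal Python sketch of choosing among candidate routes to the same destination prefix: following the qualifiers above, administrative distance compares routes learned from different protocols, and the metric breaks ties within one protocol. The administrative distance values shown are common vendor defaults, used here purely for illustration.

# Minimal sketch: selecting which candidate route to the same destination
# prefix gets installed in the routing table. Administrative distance compares
# different protocols; the metric breaks ties within one protocol.

candidates = [
    # (source protocol, administrative distance, metric)
    ("RIP",    120, 4),
    ("OSPF",   110, 20),
    ("static", 1,   0),
]

def preference(route):
    _, admin_distance, metric = route
    return (admin_distance, metric)     # lower is better for both

best = min(candidates, key=preference)
print("installed route learned via:", best[0])   # static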
The vast majority of routing algorithms only employ one network path at a time. Multiple alternative paths can be used with multipath routing algorithms.
Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because of this structure, a single routing table entry can indicate the route to a whole collection of devices. In large networks, structured addressing (routing in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within local environments.
The organizations that own the networks are usually in charge of managing them. Intranets and extranets may be used in private company networks. They may also provide network access to the Internet, which is a global network with no single owner and essentially unlimited connectivity.
Intranet
An intranet is a collection of networks managed by a single administrative entity. The IP protocol and IP-based tools such as web browsers and file transfer applications are used on the intranet. Only individuals authorized by the administrative entity can access the intranet. Most typically, an intranet is an organization’s internal LAN. A large intranet usually hosts at least one web server to provide users with organizational information. In practice, an intranet is everything on a local area network that sits behind the router.
Extranet
An extranet is a network that is likewise administered by a single organization but permits only limited access to or from a specific external network. For example, a firm may grant its business partners or customers access to particular portions of its intranet in order to share data. From a security standpoint, these other entities are not necessarily to be trusted. WAN technology is frequently, though not always, used to connect to an extranet.
Internet
An internetwork is the joining of several different types of computer networks to form a single network by layering networking software on top of them and connecting them via routers. The Internet is the most well-known example of an internetwork. It is an interconnected global system of governmental, academic, business, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor to the Advanced Research Projects Agency Network (ARPANET), built by DARPA of the US Department of Defense. The World Wide Web (WWW), the Internet of Things (IoT), video transport, and a wide range of information services are all made possible by the Internet’s copper communications and optical networking backbone.
Participants on the Internet employ a wide range of protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) maintained by the Internet Assigned Numbers Authority and address registries. Through the Border Gateway Protocol (BGP), service providers and major companies share information about the reachability of their address spaces, building a redundant global mesh of transmission pathways.
Darknet
A darknet is an Internet-based overlay network that can only be accessed by using specialist software. A darknet is an anonymizing network that uses non-standard protocols and ports to connect only trustworthy peers — commonly referred to as “friends” (F2F).
Darknets differ from other distributed peer-to-peer networks in that users can interact without fear of governmental or corporate interference because sharing is anonymous (i.e., IP addresses are not publicly published).
Network services
Network services are applications that are hosted by servers on a computer network in order to give functionality to network members or users, or to assist the network in its operation.
Well-known network services include the World Wide Web, e-mail, printing, and network file sharing. DNS (Domain Name System) maps names to IP addresses (names like “nm.lan” are easier to remember than numbers like “210.121.67.18”), and DHCP ensures that all network equipment has a valid IP address.
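A minimal Python example of using the name service: the system’s DNS resolver translates a host name into an IP address (example.com is just a well-known placeholder; the address returned depends on your resolver).

# Minimal sketch: resolving a host name to an IP address with the system's
# DNS resolver, as an example of a network service in use.
import socket

name = "example.com"
address = socket.gethostbyname(name)
print(f"{name} resolves to {address}")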
The format and sequencing of messages between clients and servers of a network service is typically defined by a service protocol.
Network performance
Consumed bandwidth, related to achieved throughput or goodput, i.e., the average rate of successful data transfer via a communication link, is measured in bits per second. Technology such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap, bandwidth allocation (for example, bandwidth allocation protocol and dynamic bandwidth allocation), and others affect throughput. The average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during the examined time frame determines the bandwidth of a bit stream.
Network latency is a design and performance characteristic of a telecommunications network. It specifies the time it takes for a piece of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. The delay may differ slightly depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, as well as the delay’s various components:
Processing delay – the time it takes a router to process the packet header.
Queuing delay – the time the packet spends in routing queues.
Transmission delay – the time it takes to push the packet’s bits onto the link.
Propagation delay – the time it takes for a signal to travel through the medium.
Signals encounter a minimal amount of delay due to the time it takes to send a packet serially via a link. Due to network congestion, this delay is extended by more unpredictable levels of delay. The time it takes for an IP network to respond can vary from a few milliseconds to several hundred milliseconds.
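Putting the four components together for a single link, here is a minimal Python sketch with assumed, purely illustrative numbers:

# Minimal sketch: adding up the delay components listed above for one link.
# All numbers are illustrative assumptions, not measurements.

packet_size_bits = 1500 * 8          # a 1500-byte packet
link_rate_bps = 100e6                # 100 Mbit/s link
distance_m = 2_000_000               # 2,000 km of fibre
propagation_speed = 2e8              # roughly 2/3 the speed of light in fibre, m/s

processing_delay = 20e-6             # assumed router header-processing time, s
queuing_delay = 500e-6               # assumed time waiting in the router queue, s
transmission_delay = packet_size_bits / link_rate_bps    # time to push bits onto the link
propagation_delay = distance_m / propagation_speed       # time for the signal to travel

total = processing_delay + queuing_delay + transmission_delay + propagation_delay
print(f"one-way delay of about {total * 1000:.2f} ms")    # roughly 10.6 ms here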
Quality of service
Network performance is usually measured by the quality of service of a telecommunications product, depending on the installation requirements. Throughput, jitter, bit error rate, and delay are all factors that can influence this.
Examples of network performance measurements for a circuit-switched network and one sort of packet-switched network, namely ATM, are shown below.
Circuit-switched networks: The grade of service is identical with network performance in circuit switched networks. The number of calls that are denied is a metric indicating how well the network performs under high traffic loads. Noise and echo levels are examples of other forms of performance indicators.
Line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem upgrades can all be used to evaluate the performance of an Asynchronous Transfer Mode (ATM) network.
Because each network is unique in its nature and architecture, there are numerous approaches to assess its performance. Instead of being measured, performance can instead be modeled. State transition diagrams, for example, are frequently used to model queuing performance in circuit-switched networks. These diagrams are used by the network planner to examine how the network functions in each state, ensuring that the network is planned appropriately.
Network congestion
Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, and the quality of service suffers. When networks become congested and queues fill up, packets must be dropped, so networks rely on re-transmission. Queuing delays, packet loss, and the blocking of new connections are all common results of congestion. A consequence of the latter two is that incremental increases in offered load lead either to only a small improvement in network throughput or to an actual reduction in throughput.
Even when the initial load is lowered to a level that would not typically cause network congestion, network protocols that use aggressive retransmissions to correct for packet loss tend to keep systems in a state of network congestion. As a result, with the same amount of demand, networks utilizing these protocols can exhibit two stable states. Congestive collapse refers to a stable situation with low throughput.
To minimize congestion collapse, modern networks employ congestion management, congestion avoidance, and traffic control strategies (i.e. endpoints typically slow down or sometimes even stop transmission entirely when the network is congested). Exponential backoff in protocols like 802.11’s CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in routers are examples of these strategies. Implementing priority schemes, in which some packets are transmitted with higher priority than others, is another way to avoid the detrimental impacts of network congestion. Priority schemes do not cure network congestion on their own, but they do help to mitigate the consequences of congestion for some services. 802.1p is one example of this. The intentional allocation of network resources to specified flows is a third strategy for avoiding network congestion. The ITU-T G.hn standard, for example, uses Contention-Free Transmission Opportunities (CFTXOPs) to deliver high-speed (up to 1 Gbit/s) local area networking over existing house wires (power lines, phone lines and coaxial cables).
For the Internet, RFC 2914 addresses the subject of congestion control in detail.
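One of the mechanisms mentioned above, binary exponential backoff, can be sketched in a few lines of Python; the slot time and retry cap below are illustrative assumptions rather than values taken from any particular standard.

# Minimal sketch of binary exponential backoff, the idea behind the collision
# and congestion responses mentioned above (classic Ethernet, CSMA/CA): after
# each failed attempt the sender waits a random number of slot times drawn
# from a window that doubles with every retry.
import random

SLOT_TIME = 0.000512        # illustrative slot time in seconds
MAX_EXPONENT = 10           # cap on window growth

def backoff_delay(attempt: int) -> float:
    """Delay to wait after the given (1-based) failed transmission attempt."""
    window = 2 ** min(attempt, MAX_EXPONENT)            # 2, 4, 8, ... slots
    return random.randrange(window) * SLOT_TIME

for attempt in range(1, 6):
    print(f"attempt {attempt}: wait {backoff_delay(attempt) * 1000:.3f} ms")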
Network resilience
Network resilience is defined as “the ability to offer and sustain an adequate level of service in the face of defects and impediments to normal operation.”
Network security
Hackers use computer networks to spread computer viruses and worms to networked devices, or to prevent these devices from accessing the network via a denial-of-service attack.
The network administrator’s provisions and rules for preventing and monitoring illegal access, misuse, modification, or denial of the computer network and its network-accessible resources are known as network security. The network administrator controls network security, which is the authorisation of access to data in a network. Users are given a username and password that grants them access to information and programs under their control. Network security is used to secure daily transactions and communications among organizations, government agencies, and individuals on a range of public and private computer networks.
The monitoring of data being exchanged via computer networks such as the Internet is known as network surveillance. Surveillance is frequently carried out in secret, and it may be carried out by or on behalf of governments, corporations, criminal groups, or people. It may or may not be lawful, and it may or may not necessitate judicial or other independent agency approval.
Surveillance software for computers and networks is widely used today, and almost all Internet traffic is or could be monitored for signs of illegal activity.
Governments and law enforcement agencies utilize surveillance to maintain social control, identify and monitor risks, and prevent/investigate criminal activities. Governments now have unprecedented power to monitor citizens’ activities thanks to programs like the Total Information Awareness program, technologies like high-speed surveillance computers and biometrics software, and laws like the Communications Assistance For Law Enforcement Act.
Many civil rights and privacy organizations, including Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increased citizen surveillance could lead to a mass surveillance society with fewer political and personal freedoms. Fears like this have prompted a slew of litigation, including Hepting v. AT&T. In protest of what it calls “draconian surveillance,” the hacktivist group Anonymous has hacked into official websites.
End-to-end encryption (E2EE) is a digital communications paradigm that ensures that data going between two communicating parties is protected at all times. It entails the originating party encrypting data so that it can only be decrypted by the intended recipient, with no reliance on third parties. End-to-end encryption protects communications from being discovered or tampered with by intermediaries such as Internet service providers or application service providers. In general, end-to-end encryption ensures both secrecy and integrity.
HTTPS for online traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio are all examples of end-to-end encryption.
End-to-end encryption is not included in most server-based communications solutions. These solutions can only ensure the security of communications between clients and servers, not between communicating parties. Google Talk, Yahoo Messenger, Facebook, and Dropbox are examples of non-E2EE systems. Some of these systems, such as LavaBit and SecretInk, have even claimed to provide “end-to-end” encryption when they don’t. Some systems that are supposed to provide end-to-end encryption, such as Skype or Hushmail, have been shown to feature a back door that prevents the communication parties from negotiating the encryption key.
The end-to-end encryption paradigm does not directly address concerns at the communication’s endpoints, such as client technological exploitation, low-quality random number generators, or key escrow. E2EE also ignores traffic analysis, which involves determining the identities of endpoints as well as the timings and volumes of messages transmitted.
When e-commerce first appeared on the World Wide Web in the mid-1990s, it was clear that some form of identification and encryption was required. Netscape, whose Netscape Navigator was the most popular web browser at the time, made the first attempt at a new standard and created the Secure Sockets Layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client verifies this certificate (all web browsers come preloaded with a comprehensive list of CA root certificates), and if the certificate checks out, the server is authenticated, and the client negotiates a symmetric-key cipher for the session. The session then takes place in a highly secure encrypted tunnel between the SSL server and the SSL client.
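A minimal Python sketch of that handshake from the client side, using the standard library’s ssl module: the client connects, verifies the server certificate against its preloaded CA roots, and then exchanges data over the encrypted session (example.com is just a placeholder host).

# Minimal sketch: a TLS client connection in which the server certificate is
# verified against the client's trusted CA root certificates.
import socket
import ssl

HOST = "example.com"
context = ssl.create_default_context()          # loads trusted CA root certificates

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()                 # only available once verification passed
        print("negotiated protocol:", tls.version())        # e.g. TLSv1.3
        print("cipher suite:", tls.cipher()[0])
        print("certificate subject:", dict(x[0] for x in cert["subject"]))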
To acquaint yourself in detail with the certification curriculum you can expand and analyze the table below.
The EITC/IS/CNF Computer Networking Fundamentals Certification Curriculum references open-access didactic materials in video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Participants can access answers and ask relevant questions in the Questions and Answers section of the e-learning interface under the currently progressed EITC programme curriculum topic. Direct and unlimited consultancy with domain experts is also accessible via the platform’s integrated online messaging system, as well as through the contact form.
For details on the Certification procedure check How it Works.
Download the complete offline self-learning preparatory materials for the EITC/IS/CNF Computer Networking Fundamentals programme in a PDF file
EITC/IS/CNF preparatory materials – standard version
EITC/IS/CNF preparatory materials – extended version with review questions