EITC/IS/WSA Windows Server Administration

Thursday, 21 October 2021 by admin

EITC/IS/WSA Windows Server Administration is the European IT Certification programme on administration and security management in Windows Server, Microsoft's leading network operating system for servers.

The curriculum of the EITC/IS/WSA Windows Server Administration focuses on knowledge and practical skills in administration and security management in Microsoft Windows Server organized within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

Windows Server is a brand name for a group of server operating systems released by Microsoft since 2003. After Linux, it is one of the most popular operating systems for network servers. It includes Active Directory, DNS Server, DHCP Server, Group Policy, and many other features expected of state-of-the-art network servers. In contrast to Linux (the most popular server operating system), Microsoft Windows Server is not open-source but proprietary software.

Since 2003, Microsoft has released a series of server operating systems under the Windows Server brand name, with Windows Server 2003 being the first edition offered under that brand. Earlier server editions were Windows NT 3.1 Advanced Server, followed by Windows NT 3.5 Server, Windows NT 3.51 Server, Windows NT 4.0 Server, and Windows 2000 Server. Active Directory, DNS Server, DHCP Server, Group Policy, and many other now-familiar features were included for the first time in Windows 2000 Server.

Microsoft typically provides ten years of support for Windows Server: five years of mainstream support and an additional five years of extended support. These editions also include a comprehensive graphical user interface (GUI) desktop experience. The Server Core variant was introduced with Windows Server 2008, and the Nano Server variant with Windows Server 2016, to decrease the OS footprint. Between 2015 and 2021, Microsoft referred to these releases as “long-term servicing” releases to distinguish them from its semi-annual channel releases (see below).

For the past sixteen years, Microsoft has published a major version of Windows Server every four years, with one minor version released two years after each major release. The minor versions carried the “R2” suffix in their titles. Microsoft broke this pattern in October 2018 when it released Windows Server 2019, which would otherwise have been “Windows Server 2016 R2”. Windows Server 2022, likewise, is a relatively small enhancement over its predecessor.

The full releases include:

  • Windows Server 2003 (April 2003)
  • Windows Server 2003 R2 (December 2005)
  • Windows Server 2008 (February 2008)
  • Windows Server 2008 R2 (October 2009)
  • Windows Server 2012 (September 2012)
  • Windows Server 2012 R2 (October 2013)
  • Windows Server 2016 (September 2016)
  • Windows Server 2019 (October 2018)
  • Windows Server 2022 (August 2021)

Main features of the Windows Server include:

  • Security with multiple layers of protection: improving an organization's security posture starting with the operating system.
  • Azure’s hybrid capabilities: increasing IT efficiency by extending datacenters to Azure.
  • Platform for a variety of applications: giving developers and IT pros the tools they need to create and deploy a variety of apps using an application platform.
  • Integration with Azure: options like Azure Hybrid Benefit and Extended Security Updates are available.

Microsoft's Active Directory (AD) is a directory service for Windows domain networks. An Active Directory domain controller authenticates and authorizes all users and computers in a Windows domain network, as well as assigning and enforcing security policies and installing or upgrading software. A schema describes the types of objects that can be stored in an Active Directory database, along with the attributes and information that those objects represent.

A forest is a group of trees that share a global catalog, directory schema, logical structure, and directory configuration. A tree is a collection of one or more domains linked in a transitive trust hierarchy within a contiguous namespace. A domain is a logical collection of objects (computers, users, and devices) that share an Active Directory database. Domains are identified by their DNS name structure, which forms the Active Directory namespace. Trusts allow users in one domain to access resources in another domain; when a child domain is created, trusts between the parent and child domains are established automatically.

Domain controllers are servers configured with the Active Directory Domain Services role that host an Active Directory database for a specific domain. Sites are groups of interconnected subnets in a specific geographical location. Changes made on one domain controller are replicated to all other domain controllers that share the same Active Directory database (that is, within the same domain). The Knowledge Consistency Checker (KCC) service manages replication traffic by creating a replication topology of site links based on the defined sites. Change notification triggers domain controllers to start a pull replication cycle, resulting in frequent and automatic intrasite replication. Intersite replication is typically less frequent and by default depends on the amount of time that has passed rather than on change notification.

While most domain updates can be performed on any domain controller, some operations can only be performed on a specific server. These servers are referred to as operations masters (originally Flexible Single Master Operations, or FSMOs). The operations master roles are Schema Master, Domain Naming Master, PDC Emulator, RID Master, and Infrastructure Master.

A domain's or forest's functional level determines which advanced features are available in that domain or forest. Different functional levels are offered for Windows Server 2016 and 2019. All domain controllers should be configured to provide the highest possible functional level for forests and domains.

Containers are used to group Active Directory objects for administrative purposes. The default containers are the domain itself, Builtin, Users, Computers, and Domain Controllers. Organizational Units (OUs) are object containers used to provide an administrative hierarchy within a domain; they support both administrative delegation and the deployment of Group Policy objects.

In a domain, the Active Directory database is used to authenticate all of the domain's computers and users. A workgroup is an alternative configuration in which each machine is responsible for authenticating its own users. Domain accounts are stored in the Active Directory database and are available to all machines in the domain. Local accounts are stored in each computer's Security Account Manager (SAM) database and are accessible only on that computer. Distribution groups and security groups are the two types of groups supported by Active Directory.
Email applications such as Microsoft Exchange use distribution groups. Security groups collect user accounts for the purpose of applying privileges and permissions. The scope of an Active Directory group can be Universal, Global, or Domain Local. Any account in the forest can be a member of a universal group, and a universal group can be assigned to any resource in the forest. Any account in the domain can be a member of a global group, and global groups can be assigned to any resource in the forest. Any account in the forest can be a member of a domain local group, but a domain local group can only be assigned to resources in its own domain. Universal groups can contain other universal groups and global groups from anywhere in the forest. Global groups can contain other global groups from the same domain. Domain local groups can contain universal and global groups from anywhere in the forest, as well as domain local groups from the same domain. For managing accounts and resources, Microsoft recommends using global groups to organize users and domain local groups to organize resources. In other words, AGDLP: put Accounts into Global groups, put global groups into Domain Local groups, and give the domain local groups Permissions to access resources.
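
As an illustration of the AGDLP pattern described above, the following Python sketch models accounts, global groups, and domain local groups and resolves which accounts end up with access to a resource. All names are invented for illustration; this is a conceptual model, not an Active Directory API.

    # Minimal sketch of the AGDLP pattern: Accounts -> Global groups ->
    # Domain Local groups <- Permissions. All names are illustrative only.

    accounts = {"alice", "bob", "carol"}

    # Global groups organize user accounts.
    global_groups = {
        "GG_HR_Staff": {"alice", "bob"},
        "GG_IT_Admins": {"carol"},
    }

    # Domain local groups organize access to resources and contain global groups.
    domain_local_groups = {
        "DL_HR_Share_ReadWrite": {"GG_HR_Staff"},
        "DL_Servers_Admin": {"GG_IT_Admins"},
    }

    # Permissions are granted to domain local groups, never directly to accounts.
    resource_acl = {
        r"\\fileserver\hr": {"DL_HR_Share_ReadWrite"},
    }

    def effective_accounts(resource: str) -> set:
        """Resolve which accounts gain access to a resource via AGDLP nesting."""
        users = set()
        for dl_group in resource_acl.get(resource, set()):
            for gg in domain_local_groups.get(dl_group, set()):
                users |= global_groups.get(gg, set())
        return users

    print(effective_accounts(r"\\fileserver\hr"))  # {'alice', 'bob'}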

To acquaint yourself in detail with the certification curriculum you can expand and analyze the table below.

The EITC/IS/WSA Windows Server Administration Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/LSA Linux System Administration

Thursday, 21 October 2021 by admin

EITC/IS/LSA Linux System Administration is the European IT Certification programme on administration and security management in Linux, an open-source network operating system holding a worldwide leading position on servers.

The curriculum of the EITC/IS/LSA Linux System Administration focuses on knowledge and practical skills in administration and security management in Linux organized within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

Linux is a family of open-source Unix-like operating systems, generally accepted as a leading standard for network server operating systems, based on the Linux kernel, which Linus Torvalds first released in 1991. The Linux kernel, along with accompanying system software and libraries, is commonly bundled as a Linux distribution, with much of the software licensed under the GNU Project. Although many Linux distributions use the term “Linux”, the Free Software Foundation prefers “GNU/Linux” to underline the significance of GNU software.

Debian, Fedora, and Ubuntu are all popular Linux distributions. Red Hat Enterprise Linux and SUSE Linux Enterprise Server are two commercial distributions. A windowing system like X11 or Wayland, as well as a desktop environment like GNOME or KDE Plasma, are included in desktop Linux distributions. Server distributions may or may not include graphics, or may include a solution stack such as LAMP. Anyone can produce a distribution for any purpose because Linux is a freely redistributable open-source software.

Linux was originally created for personal computers based on Intel's x86 architecture, but it has since been ported to more platforms than any other operating system. Thanks to the dominance of the Linux-based Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Although Linux is used on only around 2.3 percent of desktop computers, the Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and accounts for about 20 percent of all sub-$300 laptop sales. Linux is the most popular operating system for servers (about 96.4 percent of the top one million web servers run Linux), as well as for other big iron systems such as mainframe computers and TOP500 supercomputers (since November 2017, Linux has gradually eliminated all competitors from the TOP500 list).

Linux is also available for embedded systems, which are devices whose operating system is often incorporated in the firmware and is highly customized to the system. Routers, automation controls, smart home technology, televisions (Samsung and LG Smart TVs use Tizen and WebOS, respectively), automobiles (Tesla, Audi, Mercedes-Benz, Hyundai, and Toyota all use Linux), digital video recorders, video game consoles, and smartwatches are all examples of Linux-based devices. The avionics of the Falcon 9 and Dragon 2 are based on a customized version of Linux.

Linux is one of the most renowned examples of free and open-source software collaboration. Under the rules of its individual licenses, such as the GNU General Public License, the source code may be used, updated, and distributed commercially or non-commercially by anybody.

The Linux kernel was not designed, but rather evolved through natural selection, according to several open source developers. Although the Unix architecture acted as a scaffolding, Torvalds believes that “Linux evolved with a lot of mutations – and because the mutations were less than random, they were faster and more directed than alpha-particles in DNA.” The revolutionary characteristics of Linux, according to Eric S. Raymond, are social rather than technical: before Linux, sophisticated software was painstakingly built by small groups, but “Linux grew up in a very different way. It was hacked on almost inadvertently from the start by large groups of volunteers who communicated solely through the Internet. The stupidly simple technique of publishing every week and receiving input from hundreds of users within days, generating a form of quick Darwinian selection on the mutations brought by developers, rather than rigorous standards or dictatorship, was used to preserve quality.” “Linux wasn’t designed, it evolved,” says Bryan Cantrill, an engineer for a competing OS, but he sees this as a limitation, claiming that some features, particularly those related to security, cannot be evolved into, because “this isn’t a biological system at the end of the day, it’s a software system.”

A Linux-based system is a modular Unix-like operating system that draws much of its architectural inspiration from Unix principles developed in the 1970s and 1980s. A monolithic kernel, the Linux kernel, is used in such a system to handle process control, networking, peripheral access, and file systems. Device drivers are either built into the kernel directly or added as modules that are loaded while the system runs.

The GNU userland is an important part of most Linux-based systems, with Android being the notable exception. The GNU Project's implementation of the C library works as a wrapper for the Linux kernel's system calls that form the kernel-userspace interface; the toolchain is a broad collection of programming tools vital to Linux development (including the compilers used to build the Linux kernel itself); and the coreutils implement many basic Unix tools. Bash, a popular CLI shell, is also developed as part of the GNU project. The graphical user interface (GUI) of most Linux systems is based on an implementation of the X Window System. More recently, the Linux community has been working to replace X11 with Wayland as the display server protocol. Many other open-source software projects also contribute to Linux systems.
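
To make the idea of the C library as a wrapper around kernel system calls concrete, the short Python sketch below uses ctypes to call glibc's getpid() and write() wrappers directly. It is a minimal illustration, assuming a Linux system where glibc is available as libc.so.6; it is not part of any certification material.

    # Minimal sketch: calling glibc wrappers around Linux system calls via ctypes.
    # Assumes a Linux system where the C library is available as "libc.so.6".
    import ctypes

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    # getpid() is a thin wrapper over the getpid system call.
    pid = libc.getpid()
    print("Current process ID:", pid)

    # write() wraps the write system call: write(fd, buf, count).
    message = b"Hello from glibc via ctypes\n"
    written = libc.write(1, message, len(message))  # fd 1 is standard output
    print("Bytes written:", written)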

A Linux system’s installed components include the following:

  • A bootloader, for example GNU GRUB, LILO, SYSLINUX, or Gummiboot. This is software that executes when the computer is powered on, after the firmware initialization, to load the Linux kernel into the computer's main memory.
  • An init program, such as sysvinit or the more recent systemd, OpenRC, or Upstart. This is the initial process started by the Linux kernel, and it sits at the top of the process tree; in other words, init is where all other processes start. It initiates tasks like system services and login prompts (whether graphical or in terminal mode).
  • Software libraries, which contain code that can be used by running programs. The dynamic linker that manages the use of dynamic libraries on Linux systems using ELF-format executable files is known as ld-linux.so. If the system is set up so that the user can compile software themselves, header files will also be included to describe the interfaces of the installed libraries. Besides the GNU C Library (glibc), the most widely used software library on Linux systems, there are numerous other libraries, such as SDL and Mesa.
  • The GNU C Library is the standard C library, required to run C programs on a computer system. Alternatives have been developed for embedded systems, including musl, EGLIBC (a glibc fork once used by Debian), and uClibc (built for uClinux), although the last two are no longer maintained. Android uses its own C library, Bionic.
  • GNU coreutils is the standard implementation of basic Unix commands. For embedded devices, there are alternatives such as the copyleft BusyBox and the BSD-licensed Toybox.
  • Widget toolkits are libraries for creating software applications’ graphical user interfaces (GUIs). GTK and Clutter, created by the GNOME project, Qt, developed by the Qt Project and led by The Qt Company, and Enlightenment Foundation Libraries (EFL), maintained mostly by the Enlightenment team, are among the widget toolkits available.
  • A package management system, such as dpkg or RPM, is used to manage packages. Packages can also be built from source tarballs or binary tarballs.
  • Command shells and windowing environments are examples of user interface programs.

The user interface, often known as the shell, is typically a command-line interface (CLI), a graphical user interface (GUI), or controls coupled to the accompanying hardware. The typical user interface on desktop PCs is usually graphical, while the CLI is frequently accessible via terminal emulator windows or a separate virtual console.

Text-based user interfaces, or CLI shells, use text for both input and output. The Bourne-Again Shell (bash), developed for the GNU project, is the most widely used shell under Linux. Most low-level Linux components, including various parts of the userland, use the CLI exclusively. The CLI is particularly well-suited to automating repetitive or delayed tasks, and it provides relatively simple inter-process communication.

The GUI shells, packed with full desktop environments such as KDE Plasma, GNOME, MATE, Cinnamon, LXDE, Pantheon, and Xfce, are the most popular user interfaces on desktop systems, while a number of other user interfaces exist. The X Window System, also known as “X,” underpins the majority of popular user interfaces. It enables network transparency by allowing a graphical application operating on one machine to be displayed on another, where a user can interact with it; however, some X Window System extensions are not capable of working over the network. There are several X display servers, the most popular of which is X.Org Server, which is the reference implementation.

Server distributions may provide a command-line interface for developers and administrators, but may also include a bespoke interface for end-users that is tailored to the system’s use-case. This custom interface is accessed via a client running on a different system that isn’t necessarily Linux-based.

For X11, there are several types of window managers, including tiling, dynamic, stacking, and compositing. Window managers interact with the X Window System and allow you to control the location and appearance of individual application windows. Simpler X window managers like dwm, ratpoison, i3wm, or herbstluftwm have a minimalist interface, whereas more complex window managers like FVWM, Enlightenment, or Window Maker include additional features like a built-in taskbar and themes, but are still lightweight when compared to desktop environments. Window managers such as Mutter (GNOME), KWin (KDE), and Xfwm (xfce) are included in most desktop environments’ basic installations, but users can choose to use a different window manager if they prefer.

Wayland is a display server protocol that was designed to replace the X11 protocol, however it has yet to gain widespread use as of 2014. Wayland, unlike X11, doesn’t require an external window manager or compositing manager. As a result, a Wayland compositor serves as a display server, window manager, and compositing manager all in one. Wayland’s reference implementation is Weston, although Mutter and KWin from GNOME and KDE are being converted to Wayland as standalone display servers. Since version 19, Enlightenment has been successfully ported.

To acquaint yourself in detail with the certification curriculum you can expand and analyze the table below.

The EITC/IS/LSA Linux System Administration Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/CNF Computer Networking Fundamentals

Monday, 18 October 2021 by admin

EITC/IS/CNF Computer Networking Fundamentals is the European IT Certification programme on theory and practical aspects of basic computer networking.

The curriculum of the EITC/IS/CNF Computer Networking Fundamentals focuses on knowledge and practical skills in foundations in computer networking organized within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

A computer network is a collection of computers that share resources between network nodes. To communicate with one another, the computers use standard communication protocols across digital links. These interconnections are built from telecommunication network technologies based on physically wired, optical, and wireless radio-frequency media that can be arranged in a variety of network topologies. Personal computers, servers, networking hardware, and other specialized or general-purpose hosts can all be nodes in a computer network. They may be identified by network addresses and hostnames. Hostnames serve as easy-to-remember labels for nodes and are rarely changed after they are assigned. Communication protocols such as the Internet Protocol use network addresses to locate and identify nodes. Security is one of the most critical aspects of networking. This EITC curriculum covers the foundations of computer networking.

The transmission medium used to convey signals, bandwidth, communications protocols to organize network traffic, network size, topology, traffic control mechanism, and organizational goal are all factors that can be used to classify computer networks.

Access to the World Wide Web, digital video, digital music, shared usage of application and storage servers, printers, and fax machines, and use of email and instant messaging programs are all supported via computer networks.

A computer network uses multiple technologies such as email, instant messaging, online chat, audio and video telephone calls, and video conferencing to extend interpersonal communications by electronic means. A network allows network and computing resources to be shared. Users can access and use resources provided by devices on the network, such as printing a document on a shared network printer or accessing and using a shared storage drive. A network allows authorized users to access information stored on other computers on the network by transferring files, data, and other types of information. Distributed computing makes use of computing resources spread across a network to complete tasks.

Packet-mode transmission is used by the majority of current computer networks. A packet-switched network transports a network packet, which is a formatted unit of data.

Packets carry two types of data: control information and user data (the payload). The control information includes data the network needs to deliver the user data, such as source and destination network addresses, error detection codes, and sequencing information. Control data is typically carried in packet headers and trailers, with the payload in between.
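
To make the header/payload split concrete, here is a small Python sketch that packs control information into a fixed-size header in front of the payload and parses it back out. The packet format is invented purely for illustration and is not any standard protocol.

    # Illustrative packet format (not a real protocol): a fixed header carrying
    # control information (addresses, sequence number, checksum) plus a payload.
    import struct
    import zlib

    HEADER = struct.Struct("!IIHI")  # src addr, dst addr, seq number, CRC32 of payload

    def build_packet(src: int, dst: int, seq: int, payload: bytes) -> bytes:
        checksum = zlib.crc32(payload)
        return HEADER.pack(src, dst, seq, checksum) + payload

    def parse_packet(packet: bytes):
        src, dst, seq, checksum = HEADER.unpack(packet[:HEADER.size])
        payload = packet[HEADER.size:]
        if zlib.crc32(payload) != checksum:
            raise ValueError("payload corrupted in transit")
        return src, dst, seq, payload

    pkt = build_packet(0x0A000001, 0x0A000002, 7, b"user data goes here")
    print(parse_packet(pkt))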

With packets, the bandwidth of the transmission medium can be shared among users better than with circuit-switched networks. When one user is not sending packets, the link can be filled with packets from other users, so the cost can be shared with relatively little interference, provided the link is not overused. Often the route a packet needs to take through a network is not immediately available. In that case, the packet is queued and is not sent until a link becomes free.

The physical link technologies of packet networks typically limit the size of packets to a certain maximum transmission unit (MTU). A longer message may be fragmented before it is transferred, and once the packets arrive they are reassembled to reconstruct the original message.
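
The fragmentation-and-reassembly idea can be sketched in a few lines of Python. This is a simplified model with an illustrative MTU value; it ignores real fragmentation headers and out-of-order delivery.

    # Simplified model of fragmenting a message to fit an MTU and reassembling it.
    MTU = 1500  # maximum transmission unit in bytes (illustrative value)

    def fragment(message: bytes, mtu: int = MTU) -> list:
        """Split a message into MTU-sized fragments."""
        return [message[i:i + mtu] for i in range(0, len(message), mtu)]

    def reassemble(fragments: list) -> bytes:
        """Concatenate fragments (assumed to arrive complete and in order)."""
        return b"".join(fragments)

    original = b"x" * 4000
    fragments = fragment(original)
    print([len(f) for f in fragments])        # [1500, 1500, 1000]
    assert reassemble(fragments) == original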

Common network topologies

The physical or geographic locations of network nodes and links have relatively little effect on a network, but the topology of a network's interconnections can significantly affect its throughput and reliability. With many technologies, such as bus or star networks, a single failure can cause the entire network to fail. In general, the more interconnections a network has, the more robust it is, but also the more expensive it is to install. As a result, most network diagrams are arranged by their network topology, which is a map of the logical relationships of network hosts.

The following are examples of common layouts:

  • Bus network: all nodes are connected to a common medium along this medium. This was the layout of the original Ethernet, called 10BASE5 and 10BASE2. It is still a prevalent topology on the data link layer, although modern physical layer variants use point-to-point links to build a star or a tree instead.
  • Star network: all nodes are connected to a central node. This is the typical layout of a small switched Ethernet LAN, where each client connects to a central network switch, and logically in a wireless LAN, where each wireless client connects to the central wireless access point.
  • Ring network: each node is connected to its left and right neighbour nodes, so that all nodes are connected and each node can reach any other node by traversing nodes to the left or right. Token ring networks and the Fiber Distributed Data Interface (FDDI) used this topology.
  • Mesh network: each node is connected to an arbitrary number of neighbours in such a way that there is at least one traversal from any node to any other.
  • Fully connected network: each node is connected to every other node in the network.
  • Tree network: nodes are arranged hierarchically. With several switches and no redundant meshing, this is the natural topology for a larger Ethernet network.

The physical layout of a network's nodes does not necessarily reflect its topology. The network topology of FDDI, for example, is a ring, but its physical topology is frequently a star, because all neighbouring connections can be routed via a single physical location. The physical layout is not entirely irrelevant, however, because common ducting and equipment locations can represent single points of failure due to issues such as fires, power outages, and flooding.

Overlay networks

A virtual network that is built on top of another network is known as an overlay network. The overlay network's nodes are connected by virtual or logical links. Each such link corresponds to a path in the underlying network that may pass through several physical links. The topology of the overlay network may (and frequently does) differ from that of the underlying network. Many peer-to-peer networks, for example, are overlay networks: they are organized as nodes in a virtual system of links that runs on top of the Internet.

Overlay networks have existed since the dawn of networking, when computer systems were connected across telephone lines via modems before there was a data network.

The Internet is the most visible example of an overlay network. The Internet was originally designed as an extension of the telephone network. Even today, an underlying mesh of sub-networks with widely varied topologies and technology allows each Internet node to communicate with nearly any other. The methods for mapping a fully linked IP overlay network to its underlying network include address resolution and routing.

A distributed hash table, which maps keys to network nodes, is another example of an overlay network. The underlying network in this case is an IP network, and the overlay network is a key-indexed table (really a map).
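
A toy version of such a key-to-node mapping can be written in a few lines of Python. This is a deliberately simplified sketch that places keys by hashing modulo the node count; it is not a production DHT design such as Chord or Kademlia, and the node addresses are invented.

    # Toy overlay "distributed hash table": keys are mapped to nodes by hashing.
    import hashlib

    nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]  # underlying IP nodes

    def node_for_key(key: str) -> str:
        """Map a key to one of the overlay nodes (simple hash-mod placement)."""
        digest = hashlib.sha256(key.encode()).digest()
        index = int.from_bytes(digest[:8], "big") % len(nodes)
        return nodes[index]

    for key in ("song.mp3", "report.pdf", "photo.jpg"):
        print(key, "->", node_for_key(key))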

Overlay networks have also been proposed as a way to improve Internet routing, for example to guarantee higher-quality streaming media through quality-of-service assurances. Earlier proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance, largely because they require all routers in the network to be modified. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control the sequence of overlay nodes that a message passes through before reaching its destination.

Transmission media

Electrical cable, optical fiber, and free space are examples of transmission media (also known as the physical medium) used to connect devices to establish a computer network. The software to handle media is defined at layers 1 and 2 of the OSI model — the physical layer and the data link layer.

Ethernet refers to a group of technologies that use copper and fiber media in local area network (LAN) technology. IEEE 802.3 defines the media and protocol standards that allow networked devices to communicate over Ethernet. Radio waves are used in some wireless LAN standards, whereas infrared signals are used in others. The power cabling in a building is used to transport data in power line communication.

The following wired technologies are used in computer networking:

  • Coaxial cable is frequently used for local area networks in cable television systems, office buildings, and other work sites. Transmission speeds range from 200 million bits per second to 500 million bits per second.
  • ITU-T G.hn technology creates a high-speed local area network over existing home wiring (coaxial cable, phone lines, and power lines).
  • Twisted pair cabling is used for wired Ethernet and other standards. It usually consists of four pairs of copper wiring that can carry both voice and data. Twisting two wires together reduces crosstalk and electromagnetic induction. Transmission speeds range from 2 Mbit/s to 10 Gbit/s. There are two types of twisted pair cabling: unshielded twisted pair (UTP) and shielded twisted pair (STP). Each form is available in several category ratings, designed for use in various scenarios.
  • Optical fiber is a glass fiber. It carries pulses of light that represent data via lasers and optical amplifiers. Optical fibers offer several advantages over metal cables, including very low transmission loss and immunity to electrical interference. Using dense wave division multiplexing, optical fibers can simultaneously carry multiple streams of data on different wavelengths of light, which raises the data transmission rate to billions of bits per second. Optical fibers are used in submarine cables that connect continents and can be used for long cable runs carrying very high data rates. Single-mode optical fiber (SMF) and multi-mode optical fiber (MMF) are the two primary types of fiber optics. Single-mode fiber has the advantage of sustaining a coherent signal over dozens or even hundreds of kilometers. Multi-mode fiber is cheaper to terminate but has a maximum length of only a few hundred or even a few dozen meters, depending on the data rate and cable grade.

Wireless networks

Wireless network connections can be formed using radio or other electromagnetic communication methods.

  • Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves operate in the low gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 40 miles (64 km) apart.
  • Communications satellites also communicate via microwave. The satellites are usually stationed in geosynchronous orbit, 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems can receive and relay voice, data, and television signals.
  • Cellular networks use several radio communications technologies. The systems divide the covered region into multiple geographic areas, each served by a low-power transceiver.
  • Wireless LANs use a high-frequency radio technology similar to digital cellular to communicate. They use spread spectrum technology to enable communication between multiple devices in a limited area. Wi-Fi is a type of open-standards wireless radio-wave technology defined by IEEE 802.11.
  • Free-space optical communication uses visible or invisible light for communications. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
  • The Interplanetary Internet is a radio and optical network that extends the Internet to interplanetary dimensions.
  • RFC 1149 was a humorous April Fools' Request for Comments about carrying IP over Avian Carriers. It was implemented in real life in 2001.
The last two situations have a long round-trip delay, resulting in delayed two-way communication but not preventing the transmission of massive volumes of data (they can have high throughput).

Network nodes

In addition to any physical transmission media, networks are built from additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls. Any given piece of equipment will almost always contain multiple building blocks and so be able to perform multiple tasks.

Network interfaces

A network interface controller (NIC) is a piece of computer hardware that connects a computer to a network and can process low-level network data. The NIC has a connector for accepting a cable, or an aerial for wireless transmission and reception, together with the associated circuitry.

Each network interface controller in an Ethernet network has a unique Media Access Control (MAC) address, which is normally stored in the controller's permanent memory. The Institute of Electrical and Electronics Engineers (IEEE) maintains and oversees MAC address uniqueness to prevent address conflicts between network devices. An Ethernet MAC address is six octets long. The three most significant octets are allocated to identify the NIC manufacturer. Using only their assigned prefixes, these manufacturers uniquely assign the three least-significant octets of every Ethernet interface they produce.
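
The split between the manufacturer-assigned prefix (the Organizationally Unique Identifier, OUI) and the device-specific part of a MAC address can be illustrated with a short Python sketch; the example address is made up.

    # Splitting an Ethernet MAC address into its OUI (manufacturer prefix)
    # and the NIC-specific part assigned by that manufacturer.
    def split_mac(mac: str):
        octets = mac.lower().replace("-", ":").split(":")
        if len(octets) != 6 or not all(len(o) == 2 for o in octets):
            raise ValueError("expected a MAC address like 00:1a:2b:3c:4d:5e")
        oui = ":".join(octets[:3])        # three most significant octets
        device = ":".join(octets[3:])     # three least significant octets
        return oui, device

    oui, device = split_mac("00:1A:2B:3C:4D:5E")  # example address
    print("OUI (vendor prefix):", oui)      # 00:1a:2b
    print("Device-specific part:", device)  # 3c:4d:5e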

Hubs and repeaters

A repeater is an electronic device that receives a network signal and cleans it of unwanted noise before regenerating it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that it can travel farther without degradation. Repeaters are required in most twisted-pair Ethernet installations for cable runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.

Repeaters work on the OSI model’s physical layer, but they still take a little time to regenerate the signal. This can result in a propagation delay, which can compromise network performance and function. As a result, several network topologies, such as the Ethernet 5-4-3 rule, limit the number of repeaters that can be utilized in a network.

An Ethernet hub is an Ethernet repeater with many ports. A repeater hub helps with network collision detection and fault isolation in addition to reconditioning and distributing network signals. Modern network switches have mostly replaced hubs and repeaters in LANs.

Switches and bridges

Network bridges and switches forward frames only to the ports involved in the communication, whereas a hub forwards frames to all ports. A switch can be thought of as a multi-port bridge, since bridges typically have only two ports. Switches usually feature a large number of ports, allowing a star topology for devices and the cascading of further switches.

Bridges and switches operate at the data link layer (layer 2) of the OSI model, bridging traffic between two or more network segments to form a single local network. Both are devices that forward frames between ports based on the destination MAC address in each frame. They learn how to associate physical ports with MAC addresses by examining the source addresses of received frames, and they only forward frames where necessary. If a frame is addressed to an unknown destination MAC, the device broadcasts it to all ports except the source port and learns the location from the reply.
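
The learn-then-forward behaviour described above can be modelled with a small Python sketch of a switch's MAC address table; port numbers and addresses are invented for illustration.

    # Minimal model of a learning switch: associate source MACs with ports,
    # forward to a known port, otherwise flood to all other ports.
    class LearningSwitch:
        def __init__(self, num_ports: int):
            self.num_ports = num_ports
            self.mac_table = {}  # MAC address -> port number

        def handle_frame(self, in_port: int, src_mac: str, dst_mac: str):
            # Learn: the source address was seen on the ingress port.
            self.mac_table[src_mac] = in_port
            # Forward: use the table if the destination is known, else flood.
            if dst_mac in self.mac_table:
                return [self.mac_table[dst_mac]]
            return [p for p in range(self.num_ports) if p != in_port]

    switch = LearningSwitch(num_ports=4)
    print(switch.handle_frame(0, "aa:aa", "bb:bb"))  # unknown dst -> flood [1, 2, 3]
    print(switch.handle_frame(1, "bb:bb", "aa:aa"))  # known dst -> [0]
    print(switch.handle_frame(0, "aa:aa", "bb:bb"))  # now known -> [1]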

Bridges and switches divide the network's collision domain while keeping a single broadcast domain. Network segmentation through bridging and switching helps break down a large, congested network into an aggregation of smaller, more efficient networks.

Routers

A router is an internetworking device that processes the addressing or routing information in packets to forward them between networks. The routing information is often processed in conjunction with the routing table. Using its routing table, a router determines where to forward packets, rather than broadcasting them, which would be inefficient for very large networks.
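
A simplified routing decision based on longest-prefix matching can be sketched with Python's ipaddress module; the routing table entries below are invented for illustration.

    # Longest-prefix-match lookup over a small, invented routing table.
    import ipaddress

    routing_table = [
        (ipaddress.ip_network("0.0.0.0/0"), "default gateway"),
        (ipaddress.ip_network("10.0.0.0/8"), "interface eth1"),
        (ipaddress.ip_network("10.1.2.0/24"), "interface eth2"),
    ]

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in routing_table if addr in net]
        # Prefer the most specific route (longest prefix).
        best_net, best_hop = max(matches, key=lambda entry: entry[0].prefixlen)
        return best_hop

    print(next_hop("10.1.2.99"))   # interface eth2 (most specific match)
    print(next_hop("10.9.9.9"))    # interface eth1
    print(next_hop("8.8.8.8"))     # default gateway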

Modems
Modems (modulator-demodulators) connect network nodes via wires that were not originally designed for digital network traffic, or over wireless links. To do this, one or more carrier signals are modulated by the digital signal to produce an analog signal that can be tailored to give the required transmission properties. Early modems modulated audio signals sent over a standard voice telephone line. Modems are still commonly used for digital subscriber line (DSL) telephone lines and for cable television systems using DOCSIS technology.

Firewalls

Firewalls are network devices or software used to control network security and access rules. Firewalls separate secure internal networks from potentially insecure external networks such as the Internet. They are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The importance of firewalls in network security grows in parallel with the constant increase in cyber threats.

Communication protocols

In the TCP/IP layering model, when a router is present, message flows pass down through the protocol layers of the sending host, across to the router, up and back down the router's stack, and on to the final destination, where they climb back up the receiving host's stack.
A communication protocol is a set of rules for exchanging information over a network. Communication protocols have various characteristics: they can be connection-oriented or connectionless, use circuit mode or packet switching, and use hierarchical or flat addressing.

In a protocol stack, communications functions are divided into protocol layers, frequently structured according to the OSI model, with each layer leveraging the services of the layer below it until the lowest layer controls the hardware that transmits the information across the medium. Protocol layering is used extensively in computer networking. A good example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and a home user's personal computer when the user is surfing the web.
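
The layering can be observed directly from Python's standard library: the sketch below issues an HTTP request on top of a TCP socket, which the operating system in turn carries over IP and whatever link layer is in use. The host example.com is purely illustrative and working network access is assumed.

    # HTTP (application layer) carried over TCP (transport), over IP (network),
    # over whatever link layer the OS uses (e.g. Ethernet or Wi-Fi).
    import socket

    HOST = "example.com"  # illustrative host; requires network access

    with socket.create_connection((HOST, 80), timeout=5) as tcp_conn:
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"
            f"Connection: close\r\n\r\n"
        )
        tcp_conn.sendall(request.encode("ascii"))
        response = b""
        while chunk := tcp_conn.recv(4096):
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"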

A few of the most common communication protocols are listed here.

Widely used protocols

Internet Protocol Suite
All current networking is built on the Internet Protocol Suite, more commonly known as TCP/IP. It provides both connectionless and connection-oriented services over an inherently unreliable network traversed by datagram transmission using the Internet Protocol (IP). The protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with much expanded addressing capabilities. The Internet Protocol Suite is the set of protocols that defines how the Internet works.

IEEE 802
IEEE 802 refers to a group of IEEE standards that deal with local and metropolitan area networks. The IEEE 802 protocol suite as a whole offers a wide range of networking capabilities. A flat addressing method is used in the protocols. They mostly work at the OSI model’s layers 1 and 2.

MAC bridging (IEEE 802.1D), for example, uses the Spanning Tree Protocol to route Ethernet traffic. VLANs are defined by IEEE 802.1Q, while IEEE 802.1X defines a port-based Network Access Control protocol, which is the foundation for the authentication processes used in VLANs (but also in WLANs) — this is what the home user sees when entering a “wireless access key.”

Ethernet is a family of technologies used in wired LANs. It is described by a set of standards called IEEE 802.3, published by the Institute of Electrical and Electronics Engineers.

Wireless LAN
Wireless LAN, often known as WLAN or WiFi, is the most well-known member of the IEEE 802 protocol family for home users today. It is based on the IEEE 802.11 specifications. IEEE 802.11 has a lot in common with wired Ethernet.

SONET/SDH
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are multiplexing techniques that use lasers to transmit multiple digital bit streams across optical fiber. They were created to transmit circuit mode communications from many sources, primarily to support circuit-switched digital telephony. SONET/SDH, on the other hand, was an ideal candidate for conveying Asynchronous Transfer Mode (ATM) frames due to its protocol neutrality and transport-oriented features.

Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a telecommunications network switching technology. It encodes data into small, fixed-size cells using asynchronous time-division multiplexing. This is in contrast to other protocols that use variable-sized packets or frames, such as the Internet Protocol Suite or Ethernet. Both circuit and packet switched networking are similar to ATM. This makes it a suitable fit for a network that needs to manage both high-throughput data and real-time, low-latency content like voice and video. ATM has a connection-oriented approach, in which a virtual circuit between two endpoints must be established before the actual data transmission can begin.

While ATM is losing favor to next-generation networks, it still plays a role in the last mile, the connection between an Internet service provider and a home user.

Cellular standards
The Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), and Integrated Digital Enhanced Network (iDEN) are some of the different digital cellular standards.

Routing

Routing is the process of identifying network paths for the transmission of data. Many types of networks, including circuit switching networks and packet switched networks, require routing.

Routing protocols direct packet forwarding (the transit of logically addressed network packets from their source to their final destination) across intermediate nodes in packet-switched networks. Routers, bridges, gateways, firewalls, and switches are common network hardware components that act as intermediate nodes. General-purpose computers can also forward packets and conduct routing, albeit their performance may be hindered due to their lack of specialist hardware. Routing tables, which keep track of the paths to multiple network destinations, are frequently used to direct forwarding in the routing process. As a result, building routing tables in the router’s memory is critical for efficient routing.

There are generally several routes to pick from, and different factors can be considered when deciding which routes should be added to the routing table, such as (ordered by priority):

  • Prefix length: longer subnet masks are preferred (independent of whether the route is within the same routing protocol or a different routing protocol).
  • Metric: a lower metric or cost is preferred (only valid within one and the same routing protocol).
  • Administrative distance: a lower distance is preferred (only valid between different routing protocols).

The vast majority of routing algorithms use only one network path at a time. Multipath routing algorithms allow the use of multiple alternative paths.
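
As a sketch of how these criteria combine, the following Python snippet selects a route by longest prefix first, then administrative distance, then metric, mirroring the priority order listed above. The route entries are invented for illustration.

    # Choosing among candidate routes: longest prefix first, then lowest
    # administrative distance, then lowest metric. Entries are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Route:
        prefix_len: int          # length of the subnet mask
        admin_distance: int      # trust of the source protocol (lower is better)
        metric: int              # cost within the protocol (lower is better)
        next_hop: str

    candidates = [
        Route(24, 110, 20, "192.0.2.1"),   # e.g. learned via OSPF
        Route(24, 120, 5,  "192.0.2.2"),   # e.g. learned via RIP
        Route(16, 1,   0,  "192.0.2.3"),   # e.g. a static route, shorter prefix
    ]

    best = min(candidates,
               key=lambda r: (-r.prefix_len, r.admin_distance, r.metric))
    print("Selected next hop:", best.next_hop)  # 192.0.2.1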

Routing, in a narrower sense of the term, is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Structured addresses allow a single routing table entry to represent the route to a group of devices. In large networks, structured addressing (routing in the narrow sense) outperforms unstructured addressing (bridging). Routing has become the dominant form of addressing on the Internet. Bridging is still widely used within local environments.

The organizations that own the networks are usually in charge of managing them. Intranets and extranets may be used in private company networks. They may also provide network access to the Internet, which is a global network with no single owner and essentially unlimited connectivity.

Intranet
An intranet is a set of networks under the control of a single administrative entity. An intranet uses the IP protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information. An intranet is also anything behind the router on a local area network.

Extranet
An extranet is a network that is also under the administrative control of a single organization but supports only a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to its business partners or customers in order to share data. From a security standpoint, these other entities are not necessarily to be trusted. An extranet connection frequently, but not always, uses WAN technology.

Internet
An internetwork is the connection of multiple different types of computer networks to form a single computer network, achieved by layering networking software on top of one another and connecting them via routers. The Internet is the best-known example of an internetwork. It is an interconnected global system of governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET), developed by DARPA of the United States Department of Defense. The Internet's copper communications and optical networking backbone enable the World Wide Web (WWW), the Internet of Things (IoT), video transfer, and a broad range of information services.

Participants on the Internet employ a wide range of protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) maintained by the Internet Assigned Numbers Authority and address registries. Through the Border Gateway Protocol (BGP), service providers and major companies share information about the reachability of their address spaces, building a redundant global mesh of transmission pathways.

Darknet
A darknet is an Internet-based overlay network that can only be accessed by using specialist software. A darknet is an anonymizing network that uses non-standard protocols and ports to connect only trustworthy peers — commonly referred to as “friends” (F2F).

Darknets differ from other distributed peer-to-peer networks in that users can interact without fear of governmental or corporate interference because sharing is anonymous (i.e., IP addresses are not publicly published).

Network services

Network services are applications that are hosted by servers on a computer network in order to give functionality to network members or users, or to assist the network in its operation.

Well-known network services include the World Wide Web, e-mail, printing, and network file sharing. DNS (the Domain Name System) gives names to IP addresses (names such as “nm.lan” are easier to remember than numbers such as “210.121.67.18”), and DHCP ensures that all network equipment on the network has a valid IP address.
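
As an example, resolving a host name to an IP address takes a single call from Python's standard library. The host name below is illustrative, and a working DNS resolver is assumed.

    # Resolving a host name to an IP address via the system's DNS resolver.
    import socket

    hostname = "example.com"  # illustrative name; requires network/DNS access
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")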

The format and sequencing of messages between clients and servers of a network service is typically defined by a service protocol.

Network performance

Consumed bandwidth, related to achieved throughput or goodput, i.e., the average rate of successful data transfer via a communication link, is measured in bits per second. Technology such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap, bandwidth allocation (for example, bandwidth allocation protocol and dynamic bandwidth allocation), and others affect throughput. The average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during the examined time frame determines the bandwidth of a bit stream.

Network latency is a design and performance characteristic of a telecommunications network. It specifies the time it takes for a piece of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. The delay may differ slightly depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components:

  • Processing delay: the time it takes a router to process the packet header.
  • Queuing delay: the time the packet spends in routing queues.
  • Transmission delay: the time it takes to push the packet's bits onto the link.
  • Propagation delay: the time it takes for a signal to travel through the medium.

Signals experience a certain minimum level of delay due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from a few milliseconds to several hundred milliseconds.
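
A back-of-the-envelope one-way latency estimate, summing the four components above, might look like this in Python; all figures are invented example values.

    # One-way latency estimate as the sum of the four delay components.
    packet_size_bits = 1500 * 8          # a 1500-byte packet
    link_rate_bps = 100e6                # 100 Mbit/s link
    distance_m = 1_000_000               # 1000 km of fibre
    propagation_speed = 2e8              # roughly 2/3 the speed of light in fibre (m/s)

    processing_delay = 0.000_05          # 50 microseconds in the router (assumed)
    queuing_delay = 0.002                # 2 ms waiting in queues (assumed)
    transmission_delay = packet_size_bits / link_rate_bps   # 0.12 ms
    propagation_delay = distance_m / propagation_speed      # 5 ms

    total = processing_delay + queuing_delay + transmission_delay + propagation_delay
    print(f"Estimated one-way latency: {total * 1000:.2f} ms")  # about 7.17 ms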

Quality of service

Depending on the installation requirements, network performance is usually measured by the quality of service of a telecommunications product. Throughput, jitter, bit error rate, and latency are among the factors that can influence this.

Examples of network performance measurements for a circuit-switched network and one sort of packet-switched network, namely ATM, are shown below.

  • Circuit-switched networks: in circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network performs under heavy traffic loads. Other types of performance measures include noise and echo levels.
  • ATM: the performance of an Asynchronous Transfer Mode (ATM) network can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.

Because each network is unique in its nature and architecture, there are numerous approaches to assess its performance. Instead of being measured, performance can instead be modeled. State transition diagrams, for example, are frequently used to model queuing performance in circuit-switched networks. These diagrams are used by the network planner to examine how the network functions in each state, ensuring that the network is planned appropriately.

Congestion on the network

When a link or node is subjected to a higher data load than it is rated for, network congestion occurs and the quality of service deteriorates. When networks become congested and queues overflow, packets have to be dropped, so networks rely on re-transmission. Typical effects of congestion include queuing delays, packet loss, and the blocking of new connections. As a consequence of the latter two, incremental increases in offered load lead either to only a small improvement in network throughput or to an actual reduction in throughput.

Even when the initial load is lowered to a level that would not typically cause network congestion, network protocols that use aggressive retransmissions to correct for packet loss tend to keep systems in a state of network congestion. As a result, with the same amount of demand, networks utilizing these protocols can exhibit two stable states. Congestive collapse refers to a stable situation with low throughput.

To minimize congestion collapse, modern networks employ congestion management, congestion avoidance, and traffic control strategies (i.e. endpoints typically slow down or sometimes even stop transmission entirely when the network is congested). Exponential backoff in protocols like 802.11’s CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in routers are examples of these strategies. Implementing priority schemes, in which some packets are transmitted with higher priority than others, is another way to avoid the detrimental impacts of network congestion. Priority schemes do not cure network congestion on their own, but they do help to mitigate the consequences of congestion for some services. 802.1p is one example of this. The intentional allocation of network resources to specified flows is a third strategy for avoiding network congestion. The ITU-T G.hn standard, for example, uses Contention-Free Transmission Opportunities (CFTXOPs) to deliver high-speed (up to 1 Gbit/s) local area networking over existing house wires (power lines, phone lines and coaxial cables).

RFC 2914 discusses congestion control in the Internet in detail.
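As a minimal sketch of the exponential backoff idea mentioned above (used, for example, by classic Ethernet and 802.11 CSMA/CA), the following assumes a hypothetical send_attempt callable that returns True when a transmission succeeds.

    # Minimal sketch of truncated binary exponential backoff: after the k-th
    # failed attempt the sender waits a random number of slots in [0, 2^k - 1].
    # `send_attempt` is a hypothetical callable returning True on success.
    import random
    import time

    SLOT_TIME = 0.001    # seconds; illustrative slot duration
    MAX_EXPONENT = 10    # cap on contention-window growth
    MAX_ATTEMPTS = 16

    def send_with_backoff(send_attempt):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            if send_attempt():
                return True
            k = min(attempt, MAX_EXPONENT)
            slots = random.randint(0, 2**k - 1)
            time.sleep(slots * SLOT_TIME)  # back off before retrying
        return False  # give up after too many collisions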

Resilience of the network

Network resilience is defined as “the ability to offer and sustain an adequate level of service in the face of defects and impediments to normal operation.”

Network security

Hackers use computer networks to spread computer viruses and worms to networked devices, or to prevent those devices from accessing the network through denial-of-service attacks.

The network administrator’s provisions and rules for preventing and monitoring illegal access, misuse, modification, or denial of the computer network and its network-accessible resources are known as network security. The network administrator controls network security, which is the authorisation of access to data in a network. Users are given a username and password that grants them access to information and programs under their control. Network security is used to secure daily transactions and communications among organizations, government agencies, and individuals on a range of public and private computer networks.

The monitoring of data being exchanged via computer networks such as the Internet is known as network surveillance. Surveillance is frequently carried out in secret, and it may be carried out by or on behalf of governments, corporations, criminal groups, or people. It may or may not be lawful, and it may or may not necessitate judicial or other independent agency approval.

Surveillance software for computers and networks is widely used today, and almost all Internet traffic is or could be monitored for signs of illegal activity.

Governments and law enforcement agencies utilize surveillance to maintain social control, identify and monitor risks, and prevent/investigate criminal activities. Governments now have unprecedented power to monitor citizens’ activities thanks to programs like the Total Information Awareness program, technologies like high-speed surveillance computers and biometrics software, and laws like the Communications Assistance For Law Enforcement Act.

Many civil rights and privacy organizations, including Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increased citizen surveillance could lead to a mass surveillance society with fewer political and personal freedoms. Fears like this have prompted a slew of litigation, including Hepting v. AT&T. In protest of what it calls “draconian surveillance,” the hacktivist group Anonymous has hacked into official websites.

End-to-end encryption (E2EE) is a digital communications paradigm that ensures that data going between two communicating parties is protected at all times. It entails the originating party encrypting data so that it can only be decrypted by the intended recipient, with no reliance on third parties. End-to-end encryption protects communications from being discovered or tampered with by intermediaries such as Internet service providers or application service providers. In general, end-to-end encryption ensures both secrecy and integrity.

HTTPS for online traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio are all examples of end-to-end encryption.
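As a minimal sketch of the end-to-end idea, the following uses the third-party cryptography package (an assumption, not a library named in this text): the two endpoints derive a shared key from an X25519 exchange and encrypt with it, so intermediaries that merely relay ciphertext cannot read the message.

    # Minimal end-to-end encryption sketch using the third-party `cryptography`
    # package: X25519 key agreement plus HKDF to derive a symmetric key, then
    # Fernet for authenticated encryption. Only the two endpoints hold the key.
    import base64
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_fernet_key(own_private, peer_public):
        shared = own_private.exchange(peer_public)
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"e2ee demo").derive(shared)
        return base64.urlsafe_b64encode(key)

    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()
    # Each side combines its own private key with the peer's public key.
    alice_key = derive_fernet_key(alice_priv, bob_priv.public_key())
    bob_key = derive_fernet_key(bob_priv, alice_priv.public_key())

    ciphertext = Fernet(alice_key).encrypt(b"meet at noon")  # sender encrypts
    print(Fernet(bob_key).decrypt(ciphertext))               # only the peer decrypts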

End-to-end encryption is not included in most server-based communications solutions. These solutions can only ensure the security of communications between clients and servers, not between the communicating parties themselves. Google Talk, Yahoo Messenger, Facebook, and Dropbox are examples of non-E2EE systems. Some of these systems, such as Lavabit and SecretInk, have even claimed to provide “end-to-end” encryption when they don’t. Some systems that are supposed to provide end-to-end encryption, such as Skype or Hushmail, have been shown to contain a back door that prevents the communicating parties from truly negotiating the encryption key themselves.

The end-to-end encryption paradigm does not directly address concerns at the communication’s endpoints, such as client technological exploitation, low-quality random number generators, or key escrow. E2EE also ignores traffic analysis, which involves determining the identities of endpoints as well as the timings and volumes of messages transmitted.

When e-commerce first appeared on the World Wide Web in the mid-1990s, it became clear that some form of identification and encryption was required. Netscape, whose Netscape Navigator was the most popular web browser at the time, made the first attempt at a new standard and created the Secure Sockets Layer (SSL) protocol. SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client verifies this certificate (all web browsers ship with a comprehensive list of CA root certificates); if the certificate checks out, the server is authenticated, and the client negotiates a symmetric-key cipher for the session. The session between the SSL server and the SSL client then takes place inside an encrypted tunnel.
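A minimal sketch of this handshake from the client side, using Python’s standard ssl module, is shown below: the server certificate is verified against the bundled CA roots and a session cipher is negotiated; the host name is illustrative.

    # Minimal sketch: a TLS (the successor of SSL) client connection in which
    # the server certificate is verified against trusted CA roots and a
    # symmetric session cipher is negotiated, as described above.
    import socket
    import ssl

    context = ssl.create_default_context()  # loads trusted CA root certificates
    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
            print("Negotiated:", tls.version(), tls.cipher())
            print("Server certificate subject:", tls.getpeercert()["subject"])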

To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.

The EITC/IS/CNF Computer Networking Fundamentals Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/WAPT Web Applications Penetration Testing

Monday, 18 October 2021 by admin

EITC/IS/WAPT Web Applications Penetration Testing is the European IT Certification programme on theoretical and practical aspects of web application penetration testing (white-hat hacking), covering techniques for web site spidering, scanning and attacking, as well as specialized penetration testing tools and suites.

The curriculum of the EITC/IS/WAPT Web Applications Penetration Testing covers introduction to Burp Suite, web spidering and DVWA, brute force testing with Burp Suite, web application firewall (WAF) detection with WAFW00F, target scope and spidering, discovering hidden files with ZAP, WordPress vulnerability scanning and username enumeration, load balancer scan, cross-site scripting, XSS – reflected, stored and DOM, proxy attacks, configuring the proxy in ZAP, files and directories attacks, file and directory discovery with DirBuster, web attacks practice, OWASP Juice Shop, CSRF – Cross Site Request Forgery, cookie collection and reverse engineering, HTTP attributes – cookie stealing, SQL injection, DotDotPwn – directory traversal fuzzing, iframe injection and HTML injection, Heartbleed exploit – discovery and exploitation, PHP code injection, bWAPP – HTML injection, reflected POST, OS command injection with Commix, server-side include (SSI) injection, pentesting in Docker, OverTheWire Natas, LFI and command injection, Google hacking for pentesting, Google Dorks for penetration testing, Apache2 ModSecurity, as well as Nginx ModSecurity, within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

Web application security (often referred to as Web AppSec) is the idea of designing websites to function normally even when they are under attack. The concept involves building a set of security controls into a Web application to protect its assets from hostile agents. Web applications, like all software, inevitably contain defects. Some of these defects are actual vulnerabilities that can be exploited, posing a risk to businesses. Web application security defends against such flaws. It entails employing secure development approaches and putting security controls in place throughout the software development life cycle (SDLC), ensuring that design-level flaws and implementation-level bugs are addressed. Web penetration testing, carried out by experts who aim to uncover and exploit web application vulnerabilities using a so-called white-hat hacking approach, is an essential practice for enabling appropriate defense.

A web penetration test, also known as a web pen test, simulates a cyber attack on a web application in order to find exploitable flaws. In the context of web application security, penetration testing is frequently used to supplement a web application firewall (WAF). Pen testing, in general, entails attempting to penetrate any number of application systems (e.g., APIs, frontend/backend servers) in order to find vulnerabilities, such as unsanitized inputs that are vulnerable to code injection attacks.

The online penetration test’s findings can be used to configure WAF security policies and address discovered vulnerabilities.

The five stages of penetration testing

The pen testing procedure is typically divided into five stages.

  1. Planning and scouting
    Defining the scope and goals of a test, including the systems to be addressed and the testing methodologies to be utilized, is the first stage.
    To gain a better understanding of how a target works and its potential weaknesses, gather intelligence (e.g., network and domain names, mail server).
  2. Scanning
    The next stage is to figure out how the target application will react to different types of intrusion attempts. This is usually accomplished by employing the following methods:
    Static analysis – Examining an application’s code to predict how it will behave when it is run. In a single pass, these tools can scan the entire code.
    Dynamic analysis is the process of inspecting an application’s code while it is operating. This method of scanning is more practical because it provides a real-time view of an application’s performance.
  3. Obtaining access
    To find a target’s weaknesses, this stage employs web application attacks such as cross-site scripting, SQL injection, and backdoors (a minimal SQL injection probe is sketched after this list). To understand the damage that these vulnerabilities might inflict, testers then try to exploit them by escalating privileges, stealing data, intercepting traffic, and so on.
  4. Keeping access
    The purpose of this stage is to assess if the vulnerability can be exploited to establish a long-term presence in the compromised system, allowing a bad actor to get in-depth access. The goal is to mimic advanced persistent threats, which can stay in a system for months in order to steal a company’s most sensitive information.
  5. Analysis
    The penetration test results are then compiled into a report that includes information such as:
    The specific vulnerabilities that were exploited
    Any sensitive data that was accessed
    The amount of time the pen tester was able to remain in the system undetected.
    Security experts use this data to help configure an enterprise’s WAF settings and other application security solutions in order to patch vulnerabilities and prevent further attacks.
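
The sketch below (referenced from stage 3 above) illustrates the simplest kind of SQL injection probe: it sends a few classic payloads to a query parameter and looks for database error signatures in the response. The target URL, the parameter name and the third-party requests package are assumptions, and such probing must only be run against applications you are explicitly authorized to test.

    # Minimal sketch of a SQL injection probe for the "obtaining access" stage.
    # Uses the third-party `requests` package; the target URL and parameter are
    # hypothetical. Only run this against applications you are authorized to test.
    import requests

    PAYLOADS = ["'", "' OR '1'='1", "1; --"]
    ERROR_SIGNATURES = ["sql syntax", "sqlstate", "unclosed quotation mark"]

    def probe(url, param):
        for payload in PAYLOADS:
            response = requests.get(url, params={param: payload}, timeout=5)
            body = response.text.lower()
            if any(signature in body for signature in ERROR_SIGNATURES):
                print(f"Possible SQL injection with payload {payload!r}")

    probe("http://testapp.local/items", "id")  # hypothetical test target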

Methods of penetration testing

  • External penetration testing focuses on a firm’s assets that are visible on the internet, such as the web application itself, the company website, as well as email and domain name servers (DNS). The objective is to obtain access to and extract useful information.
  • Internal testing entails a tester having access to an application behind a company’s firewall, simulating a malicious insider attack. This is not necessarily a simulation of a rogue employee; a common starting point is an employee whose credentials were stolen in a phishing attack.
  • Blind testing is when a tester is given only the name of the company being tested. This lets security experts see how an actual application attack might play out in real time.
  • Double-blind testing: In a double-blind test, security professionals are unaware of the simulated attack beforehand. They won’t have time to shore up their fortifications before an attempted breach, just like in the real world.
  • Targeted testing – in this scenario, the tester and the security staff work together and keep each other informed of their movements. This is an excellent training exercise that gives a security team real-time feedback from the perspective of a hacker.

Web application firewalls and penetration testing

Penetration testing and WAFs are two separate but complementary security techniques. In many types of pen testing (with the exception of blind and double-blind tests), the tester is likely to leverage WAF data, such as logs, to find and exploit an application’s weak spots.

In turn, pen testing data can help WAF administrators. Following the completion of a test, WAF configurations can be modified to protect against the flaws detected during the test.

Finally, pen testing satisfies some of the compliance requirements of security auditing procedures, such as PCI DSS and SOC 2. Certain requirements, such as PCI DSS 6.6, can only be satisfied through the use of a certified WAF. However, given the benefits described above and the opportunity to tune WAF settings, this does not make pen testing any less useful.

What is the significance of web security testing?

The goal of web security testing is to identify security flaws in Web applications and their setup. The application layer is the primary target (i.e., what is running on the HTTP protocol). Sending different forms of input to a Web application to induce problems and make the system respond in unexpected ways is a common approach to test its security. These “negative tests” look to see if the system is doing anything it wasn’t intended to accomplish.

It’s also vital to realize that Web security testing entails more than just verifying the application’s security features (such as authentication and authorization). It’s also crucial to ensure that other features are deployed safely (e.g., business logic and the use of proper input validation and output encoding). The purpose is to make sure that the Web application’s functions are safe.

What are the many types of security assessments?

  • Dynamic Application Security Testing (DAST). This automated application security test is best suited for low-risk, internally facing applications that must meet regulatory security requirements. Combining DAST with some manual web security testing for common vulnerabilities is the best strategy for medium-risk applications and critical applications undergoing minor changes.
  • Static Application Security Testing (SAST). This application security approach includes both automated and manual testing techniques. It is ideal for detecting bugs without having to run applications in a live environment. It also allows engineers to scan source code to detect and fix software security flaws systematically.
  • Penetration testing. This manual application security test is ideal for critical applications, particularly those undergoing significant changes. The assessment uses business logic and adversary-based testing to discover advanced attack scenarios.
  • Runtime Application Self-Protection (RASP). This evolving application security approach combines a number of technological techniques to instrument an application so that attacks can be monitored as they occur and, ideally, blocked in real time.

What role does application security testing play in lowering a company’s risk?

The vast majority of attacks on web applications include:

  • SQL injection
  • XSS (cross-site scripting)
  • Remote command execution
  • Path traversal

Their potential consequences include:

  • Access to restricted content
  • Compromised user accounts
  • Installation of malicious code
  • Lost sales revenue
  • Erosion of customers’ trust
  • Damage to brand reputation
  • And many others

In today’s Internet environment, a Web application can be harmed by a wide variety of threats. The lists above include a few of the most common attacks perpetrated by attackers and their possible results, each of which can cause significant damage to an individual application or an entire organization. Knowing the various attacks that render an application vulnerable, as well as the potential outcomes of an attack, allows a company to remediate vulnerabilities ahead of time and test for them effectively.

Mitigating controls can be established throughout the early phases of the SDLC to prevent any issues by identifying the root cause of the vulnerability. During a Web application security test, knowledge of how these threats work can also be used to target known places of interest.

Recognizing the impact of an attack is also important for managing a company’s risk, since the impact of a successful attack can be used to gauge the overall severity of a vulnerability. If vulnerabilities are discovered during a security test, determining their severity allows the company to prioritize remediation efforts efficiently. To reduce risk, start with critical-severity issues and work down towards lower-impact ones.

Assessing the potential impact of each application in the company’s application portfolio, even before an issue is identified, helps to prioritize application security testing. With an established list of high-profile applications, web security testing can be scheduled to target the firm’s critical applications first, with more targeted testing applied afterwards to lower the overall risk to the business.

During a web application security test, what features should be examined?

During Web application security testing, consider the following non-exhaustive list of features. An ineffective implementation of any of them could result in weaknesses, putting the company at risk.

  • Application and server configuration. Potential flaws include encryption/cryptographic configurations, Web server configuration, and so on.
  • Input validation and error handling. Poor input and output handling leads to SQL injection, cross-site scripting (XSS), and other common injection flaws.
  • Authentication and session management. Vulnerabilities in this area could lead to user impersonation. Credential strength and protection should also be taken into account.
  • Authorization. Testing the application’s ability to protect against vertical and horizontal privilege escalation.
  • Business logic. Most applications that provide business functionality rely on it.
  • Client-side logic. This type of feature is becoming more common with modern, JavaScript-heavy web pages, as well as pages using other client-side technologies (e.g., Silverlight, Flash, Java applets).

To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.

The EITC/IS/WAPT Web Applications Penetration Testing Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/WASF Web Applications Security Fundamentals

Monday, 18 October 2021 by admin

EITC/IS/WASF Web Applications Security Fundamentals is the European IT Certification programme on theoretical and practical aspects of World Wide Web services security, ranging from the security of basic web protocols, through privacy, threats and attacks on different layers of web traffic network communication, web server security and security in higher layers, including web browsers and web applications, to authentication, certificates and phishing.

The curriculum of the EITC/IS/WASF Web Applications Security Fundamentals covers introduction to HTML and JavaScript web security aspects, DNS, HTTP, cookies, sessions, cookie and session attacks, Same Origin Policy, Cross-Site Request Forgery, exceptions to the Same Origin Policy, Cross-Site Scripting (XSS), Cross-Site Scripting defenses, web fingerprinting, privacy on the web, Denial-of-Service, phishing and side channels, injection attacks, code injection, transport layer security (TLS) and attacks, HTTPS in the real world, authentication, WebAuthn, managing web security, security concerns in Node.js projects, server security, safe coding practices, local HTTP server security, DNS rebinding attacks, browser attacks, browser architecture, as well as writing secure browser code, within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

Web application security is a subset of information security that focuses on website, web application, and web service security. Web application security, at its most basic level, is based on application security principles, but it applies them particularly to the internet and web platforms. Web application security technologies, such as Web application firewalls, are specialized tools for working with HTTP traffic.

The Open Web Application Security Project (OWASP) offers resources that are both free and open, and is run by the non-profit OWASP Foundation. The OWASP Top 10 – 2017 is the outcome of research based on extensive data gathered from over 40 partner organizations; using this data, approximately 2.3 million vulnerabilities were detected across more than 50,000 applications. According to the OWASP Top 10 – 2017, the ten most critical web application security risks are:

  • Injection
  • Broken authentication
  • Sensitive data exposure
  • XML external entities (XXE)
  • Broken access control
  • Security misconfiguration
  • Cross-site scripting (XSS)
  • Insecure deserialization
  • Using components with known vulnerabilities
  • Insufficient logging and monitoring

Hence, the practice of defending websites and online services against security threats that exploit weaknesses in an application’s code is known as web application security. Content management systems (e.g., WordPress), database administration tools (e.g., phpMyAdmin), and SaaS applications are all common targets of web application attacks.

Perpetrators consider web applications high-priority targets because of:

  • The inherent complexity of their source code, which increases the likelihood of unattended vulnerabilities and malicious code manipulation.
  • High-value rewards, such as sensitive personal data obtained through successful source code tampering.
  • Ease of execution, since most attacks can be easily automated and launched indiscriminately against thousands, tens of thousands, or even hundreds of thousands of targets at a time.

Organizations that fail to secure their web applications are therefore exposed to attack. Among other consequences, this can result in data theft, damaged client relationships, revoked licenses, and legal action.

Vulnerabilities in websites

Input/output sanitization flaws are common in web applications, and they’re frequently exploited to either change source code or get unauthorized access.

These flaws allow for the exploitation of a variety of attack vectors, including:

  • SQL injection – occurs when a perpetrator uses malicious SQL code to manipulate a backend database so that it reveals information. Consequences include unauthorized browsing of lists, deletion of tables, and unauthorized administrative access.
  • Cross-site scripting (XSS) – an injection attack that targets users in order to access their accounts, activate Trojans, or modify page content. Stored XSS occurs when malicious code is injected directly into an application. Reflected XSS occurs when a malicious script is reflected off an application onto a user’s browser.
  • Remote file inclusion – this type of attack allows a hacker to inject a file onto a web application server from a remote location. It can result in malicious scripts or code being executed within the application, as well as data theft or manipulation.
  • Cross-site request forgery (CSRF) – an attack that can result in an unintended transfer of funds, changed passwords, or data theft. It occurs when a malicious web application makes a user’s browser perform an unwanted action on a site to which the user is logged in.

In theory, effective input/output sanitization might eradicate all vulnerabilities, rendering an application impervious to unauthorized modification.
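
A minimal sketch of what such sanitization looks like in practice is shown below, using only the Python standard library: a parameterized query keeps user input from altering the SQL statement, and HTML-escaping the output defeats reflected XSS. The table and field names are illustrative.

    # Minimal sketch of input/output sanitization with the standard library:
    # a parameterized query defeats SQL injection, and escaping output before
    # it reaches the browser defeats reflected XSS. Names are illustrative.
    import html
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, bio TEXT)")
    conn.execute("INSERT INTO users VALUES (?, ?)",
                 ("alice", "<script>alert(1)</script>"))

    def get_bio(username):
        # Placeholder binding: the driver treats `username` strictly as data,
        # so a value like "' OR '1'='1" cannot change the query structure.
        row = conn.execute("SELECT bio FROM users WHERE name = ?",
                           (username,)).fetchone()
        return html.escape(row[0]) if row else ""  # output encoding for HTML

    print(get_bio("alice"))  # &lt;script&gt;alert(1)&lt;/script&gt;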

However, because most applications are in a perpetual state of development, comprehensive sanitization is rarely a practical option. Furthermore, applications are commonly integrated with one another, resulting in an increasingly complex coded environment.

To avoid such dangers, web application security solutions and processes, such as PCI Data Security Standard (PCI DSS) certification, should be implemented.

Firewall for web applications (WAF)

WAFs (web application firewalls) are hardware and software solutions that protect applications from security threats. These solutions are designed to inspect incoming traffic in order to detect and block attack attempts, compensating for any code sanitization flaws.

WAF deployment addresses a crucial criterion for PCI DSS certification by protecting data against theft and modification. All credit and debit cardholder data maintained in a database must be safeguarded, according to Requirement 6.6.

Because a WAF is placed in front of the DMZ at the edge of the network, deploying one usually does not require any changes to an application. From that position it serves as a gateway for all incoming traffic, filtering out dangerous requests before they can interact with an application.

To assess which traffic is allowed access to an application and which has to be weeded out, WAFs employ a variety of heuristics. They can quickly identify malicious actors and known attack vectors thanks to a regularly updated signature pool.

Almost all WAFs can be tailored to specific use cases and security policies, including combating emerging (zero-day) threats. Finally, most modern solutions leverage reputational and behavioral data to gain additional insight into incoming traffic.

In order to build a security perimeter, WAFs are usually combined with additional security solutions. These could include distributed denial-of-service (DDoS) prevention services, which give the extra scalability needed to prevent high-volume attacks.
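
The following is a deliberately naive sketch of the signature-matching idea a WAF applies to incoming traffic, written as a WSGI middleware in Python; the patterns are illustrative only, and a production WAF relies on far richer, regularly updated rule sets plus the reputational and behavioral data mentioned above.

    # Highly simplified sketch of WAF-style signature filtering as a WSGI
    # middleware. Real WAFs use curated, regularly updated rule sets; these
    # patterns are illustrative only.
    import re
    from urllib.parse import unquote

    SIGNATURES = [
        re.compile(r"(?i)<script"),                 # naive XSS marker
        re.compile(r"(?i)\bunion\b.+\bselect\b"),   # naive SQL injection marker
        re.compile(r"\.\./"),                       # naive path traversal marker
    ]

    class NaiveWAF:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            candidate = unquote(environ.get("QUERY_STRING", ""))
            if any(sig.search(candidate) for sig in SIGNATURES):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked by WAF rule"]
            return self.app(environ, start_response)  # pass clean traffic through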

Checklist for web application security
There are a variety of approaches for safeguarding web apps in addition to WAFs. Any web application security checklist should include the following procedures:

  • Information gathering — review the application manually, looking for entry points and client-side code. Classify content that is hosted by a third party.
  • Authorization — look for path traversal, vertical and horizontal access control issues, missing authorization, and insecure direct object references when testing the application.
  • Cryptography — secure all data transmissions. Is sensitive information encrypted? Have any weak or outdated algorithms been used? Are there randomness errors?
  • Denial of service — improve an application’s resilience against denial-of-service attacks by testing for anti-automation, account lockout, HTTP protocol DoS, and SQL wildcard DoS. This does not cover protection against high-volume DoS and DDoS attacks, which require a combination of filtering technologies and scalable resources to resist.

For further details, one can check the OWASP Web Application Security Testing Cheat Sheet (it’s also a great resource for other security-related topics).

DDoS protection

Distributed denial-of-service (DDoS) attacks are a common way to disrupt a web application. There are a number of approaches to mitigating DDoS attacks, including discarding volumetric attack traffic at content delivery networks (CDNs) and employing external networks to properly route legitimate requests without causing a service interruption.

DNSSEC (Domain Name System Security Extensions) protection

The domain name system, or DNS, is the Internet’s phone book: it represents the way an Internet tool, such as a web browser, looks up the correct server. Bad actors try to hijack this DNS request process through DNS cache poisoning, on-path attacks, and other ways of interfering with the DNS lookup lifecycle. If DNS is the Internet’s phone book, DNSSEC is unspoofable caller ID. DNS lookup requests can be protected using the DNSSEC technology.
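
As a hedged sketch of what DNSSEC material looks like on the wire, the snippet below uses the third-party dnspython package (an assumption, not a tool named in this text) to fetch a zone’s DNSKEY records; full validation additionally checks RRSIG signatures and the DS chain of trust, which is omitted here.

    # Minimal sketch: fetching the DNSKEY records that DNSSEC validation is
    # built on, using the third-party `dnspython` package (pip install dnspython).
    # Full validation also verifies RRSIG signatures and the DS chain of trust.
    import dns.resolver

    def dnskeys(zone):
        answer = dns.resolver.resolve(zone, "DNSKEY")
        return [record.to_text() for record in answer]

    for key in dnskeys("example.com"):
        print(key[:60], "...")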

To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.

The EITC/IS/WASF Web Applications Security Fundamentals Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/ACSS Advanced Computer Systems Security

Monday, 18 October 2021 by admin

EITC/IS/ACSS Advanced Computer Systems Security is the European IT Certification programme on theoretical and practical aspects of cybersecurity in computer systems.

The curriculum of the EITC/IS/ACSS Advanced Computer Systems Security covers knowledge and practical skills in mobile smart devices security, security analysis, symbolic execution, network security (including the web security model, secure channels and security certificates), practical implementations in real-life scenarios, security of messaging and storage, as well as timing attacks, within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

Advanced computer systems security goes beyond introductory notions. The curriculum first covers mobile device security (including the security of mobile apps). It then proceeds to formal security analysis, an important aspect of advanced computer systems security, with the main focus on symbolic execution. The curriculum then turns to network security, including an introduction to the web security model, networking security, the definition and theory of secure channels, and security certificates. Furthermore, the curriculum addresses the practical implementation of information security, especially in real-life scenarios. It then discusses selected areas of security applications, namely communication (messaging) and storage (with untrusted storage servers). It concludes with a discussion of advanced threats to computer systems security in the form of CPU timing attacks.
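
For the timing-attack topic mentioned above, a minimal sketch of the underlying issue is shown below: a naive secret comparison returns as soon as a byte differs, so its running time leaks how much of the guess is correct, whereas the standard library’s constant-time comparison does not; the secret value is illustrative.

    # Minimal sketch for the timing-attack topic: a naive comparison exits as
    # soon as a byte differs, so its running time leaks information about the
    # secret; hmac.compare_digest compares in constant time.
    import hmac

    SECRET_TOKEN = b"s3cr3t-api-token"  # illustrative secret

    def naive_check(candidate):
        if len(candidate) != len(SECRET_TOKEN):
            return False
        for a, b in zip(candidate, SECRET_TOKEN):
            if a != b:
                return False  # early exit -> data-dependent timing
        return True

    def constant_time_check(candidate):
        return hmac.compare_digest(candidate, SECRET_TOKEN)  # timing-safe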

Protecting computer systems and information from harm, theft, and illegal use is generally known as computer systems security, sometimes also referred to as cybersecurity. Serial numbers, physical security measures, monitoring and alarms are commonly employed to protect computer hardware, just as they are for other valuable or sensitive equipment. Information and system access in software, on the other hand, are protected using a variety of strategies, some of which are fairly complicated and require adequate professional competencies.

Four key hazards are addressed by the security procedures associated with computer systems’ processed information and access:

  • Data theft, such as the theft of intellectual property from government or corporate computers,
  • Vandalism, including the use of a computer virus to destroy or hijack data,
  • Fraud, such as hackers (or e.g. bank staff) diverting funds to their own accounts,
  • Invasion of privacy, such as obtaining protected personal financial or medical data from a large database without permission.

The most basic method of safeguarding a computer system from theft, vandalism, invasion of privacy, and other irresponsible behavior is to track and record the various users’ access to and activity on the system. This is often accomplished by giving each person who has access to a system a unique password. The computer system may then trace the use of these passwords automatically, noting information like which files were accessed with which passwords, and so on. Another security technique is to keep a system’s data on a different device or medium that is ordinarily inaccessible via the computer system. Finally, data is frequently encrypted, allowing only those with a single encryption key to decode it (which falls under the notion of cryptography).
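
The password-based access control described above is only as strong as the way credentials are stored. A minimal sketch of salted password hashing with the standard library’s PBKDF2 follows; the iteration count and example password are illustrative.

    # Minimal sketch: storing and verifying passwords as salted PBKDF2 hashes
    # rather than plain text, using only the standard library. The iteration
    # count is illustrative.
    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True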

Since the introduction of modems (devices that allow computers to interact via telephone lines) in the late 1960s, computer security has been increasingly crucial. In the 1980s, the development of personal computers exacerbated the problem by allowing hackers (irresponsibly acting, typically self-taught computer professionals, bypassing computer access restrictions) to unlawfully access important computer systems from the comfort of their own homes. With the explosive rise of the Internet in the late twentieth and early twenty-first centuries, computer security became a major concern. The development of enhanced security systems tries to reduce such vulnerabilities, yet computer crime methods are always evolving, posing new risks.

Asking what is being secured is one technique to determine the similarities and differences in computer systems security. 

As an example,

  • Information security is the protection of data against unauthorized access, alteration, and deletion.
  • Application security is the protection of an application from cyber threats such as SQL injection, DoS attacks, data breaches, and so on.
  • Computer security means protecting an individual machine by keeping it updated and patched.
  • Network security means securing both the software and the hardware technologies of a networking environment, while cybersecurity more broadly means protecting computer systems that communicate over computer networks.

It’s critical to recognize the differences between these terms, even if there isn’t always a clear understanding of their definitions or the extent to which they overlap or are interchangeable. Computer systems security refers to the safeguards put in place to ensure the confidentiality, integrity, and availability of all computer systems components.

The following are the components of a computer system that must be protected:

  • Hardware, or the physical components of a computer system, such as the system memory and disk drive.
  • Firmware is nonvolatile software that is permanently stored on the nonvolatile memory of a hardware device and is generally transparent to the user.
  • Software comprises the computer programmes that provide users with services such as an operating system, word processor, and web browser, determining how the hardware operates to process information in accordance with the objectives defined by the software.

The CIA Triad is primarily concerned with three areas of computer systems security:

  • Confidentiality ensures that only the intended audience has access to information.
  • Integrity refers to preventing unauthorized parties from altering the data being processed.
  • Availability refers to ensuring that authorized parties can access data and systems whenever they need to.

Information and computer components must be useable while also being safeguarded against individuals or software that shouldn’t be able to access or modify them.

Most frequent computer systems security threats

Computer systems security risks are potential dangers that could disrupt your computer’s routine operation. As the world becomes more digital, cyber risks are becoming more prevalent. The following are the most dangerous types of computer security threats:

  • Viruses – a computer virus is a malicious program that is installed without the user’s knowledge on their computer. It replicates itself and infects the user’s data and programs. The ultimate purpose of a virus is to prevent the victim’s computer from ever functioning correctly or at all.
  • Computer worm – a computer worm is a type of software that can copy itself from one computer to another without the need for human intervention. Because a worm can replicate in large volumes and at high speeds, there is a risk that it will eat up your computer’s hard disk space.
  • Phishing – the action of an attacker who poses as a trustworthy person or entity in order to steal critical financial or personal information (including computer system access credentials) via so-called phishing emails or instant messages. Phishing is, regrettably, incredibly simple to carry out: the victim is deceived into believing that the phisher’s message is an authentic official communication and freely provides sensitive personal information.
  • Botnet – a botnet is a group of computers linked to the internet that have been infected with a computer virus by a hacker. The term zombie computer or a bot refers to a single computer in the botnet. The victim’s computer, which is the bot in botnet, will be exploited for malicious actions and larger-scale attacks like DDoS as a result of this threat.
  • Rootkit – a rootkit is a computer program that maintains privileged access to a computer while attempting to conceal its presence. The rootkit’s controller will be able to remotely execute files and change system configurations on the host machine once it has been installed.
  • Keylogger – keyloggers, often known as keystroke loggers, can monitor a user’s computer activity in real time. It records all keystrokes performed by the user’s keyboard. The use of a keylogger to steal people’s login credentials, such as username and password, is also a serious threat.

These are perhaps the most prevalent security risks one may encounter recently. There are more, such as malware, wabbits, scareware, bluesnarfing, and many others. There are, fortunately, techniques to defend computer systems and their users against such attacks.

We all want to keep our computer systems and personal or professional information private in this digital era, thus computer systems security is essential to protect our personal information. It is also critical to keep our computers secure and healthy by preventing viruses and malware from wreaking havoc on system performance.

Practices in computer systems security

These days, computer systems security risks are growing more and more innovative. To protect against these complicated and rising computer security risks and stay safe online, one must arm themselves with information and resources. One can take the following precautions:

  • Installing dependable anti-virus and security software
  • Activating your firewall, which functions as a security guard between the internet and your local area network.
  • Keep up with the newest software and news about your devices, and install updates as soon as they become available.
  • If you are unsure about the origins of an email attachment, do not open it.
  • Using a unique combination of numbers, letters, and case types, change passwords on a regular basis.
  • While accessing the internet, be cautious of pop-ups and drive-by downloads.
  • Investing the time to learn about the fundamentals of computer security and to keep up with the latest cyber-threats
  • Perform daily complete system scans and establish a regular system backup schedule to ensure that your data is recoverable in the event that your machine fails.

Aside from these, there are a slew of other professional approaches to safeguard computer systems. Aspects including adequate security architectural specification, encryption, and specialist software can help protect computer systems.

Regrettably, the number of cyber dangers is rapidly increasing, and more complex attacks are appearing. To combat these attacks and mitigate hazards, more professional and specialized cybersecurity skills are required.

To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.

The EITC/IS/ACSS Advanced Computer Systems Security Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/CSSF Computer Systems Security Fundamentals

Monday, 03 May 2021 by admin

EITC/IS/CSSF Computer Systems Security Fundamentals is the European IT Certification programme on theoretical and practical aspects of cybersecurity in computer systems.

The curriculum of the EITC/IS/CSSF Computer Systems Security Fundamentals covers knowledge and practical skills in computer systems security architecture, user authentication, classes of attacks, security vulnerabilities damage mitigation, privilege separation, software containers and isolation, as well as secure enclaves, within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

Computer systems security is a broad concept of applying architectures and methodologies for assuring secure information processing and communication in computer systems. To address this problem from a theoretical point of view, the curriculum first covers computer systems security architecture. It then proceeds to discuss problems of user authentication in secure computer systems, followed by consideration of attacks on computer systems, focusing on the general class of so-called buffer overflow attacks. The curriculum then covers mitigating the damage of security vulnerabilities in computer systems, focusing on privilege separation, Linux containers and software isolation. The last part of the curriculum covers secure enclaves in computer systems.
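
A minimal sketch of the privilege-separation idea from this curriculum part is shown below: a process acquires a privileged resource (a port below 1024) while running as root and then permanently drops to an unprivileged user before handling untrusted input. The uid/gid values are illustrative, and the script must be started as root on a Unix-like system for the drop to succeed.

    # Minimal privilege-separation sketch: bind a privileged port as root, then
    # permanently drop to an unprivileged uid/gid before touching untrusted
    # data. The uid/gid values are illustrative; run as root on Unix.
    import os
    import socket

    UNPRIVILEGED_UID = 1000
    UNPRIVILEGED_GID = 1000

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("0.0.0.0", 80))   # requires root (port below 1024)
    listener.listen()

    os.setgroups([])                 # drop supplementary groups
    os.setgid(UNPRIVILEGED_GID)      # drop group first, then user
    os.setuid(UNPRIVILEGED_UID)      # from here on, root privileges are gone

    conn, addr = listener.accept()   # untrusted input is handled unprivileged
    conn.sendall(b"hello from an unprivileged process\n")
    conn.close()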

Protecting computer systems and information from harm, theft, and illegal use is generally known as computer systems security, sometimes also referred to as cybersecurity. Serial numbers, physical security measures, monitoring and alarms are commonly employed to protect computer hardware, just as they are for other valuable or sensitive equipment. Information and system access in software, on the other hand, are protected using a variety of strategies, some of which are fairly complicated and require adequate professional competencies.

Four key hazards are addressed by the security procedures associated with computer systems’ processed information and access:

  • Data theft, such as the theft of intellectual property from government or corporate computers,
  • Vandalism, including the use of a computer virus to destroy or hijack data,
  • Fraud, such as hackers (or e.g. bank staff) diverting funds to their own accounts,
  • Invasion of privacy, such as obtaining protected personal financial or medical data from a large database without permission.

The most basic method of safeguarding a computer system from theft, vandalism, invasion of privacy, and other irresponsible behavior is to track and record the various users’ access to and activity on the system. This is often accomplished by giving each person who has access to a system a unique password. The computer system may then trace the use of these passwords automatically, noting information like which files were accessed with which passwords, and so on. Another security technique is to keep a system’s data on a different device or medium that is ordinarily inaccessible via the computer system. Finally, data is frequently encrypted, allowing only those with a single encryption key to decode it (which falls under the notion of cryptography).

Since the introduction of modems (devices that allow computers to interact via telephone lines) in the late 1960s, computer security has been increasingly crucial. In the 1980s, the development of personal computers exacerbated the problem by allowing hackers (irresponsibly acting, typically self-taught computer professionals, bypassing computer access restrictions) to unlawfully access important computer systems from the comfort of their own homes. With the explosive rise of the Internet in the late twentieth and early twenty-first centuries, computer security became a major concern. The development of enhanced security systems tries to reduce such vulnerabilities, yet computer crime methods are always evolving, posing new risks.

Asking what is being secured is one technique to determine the similarities and differences in computer systems security. 

As an example,

  • Information security is the protection of data against unauthorized access, alteration, and deletion.
  • Application security is the protection of an application from cyber threats such as SQL injection, DoS attacks, data breaches, and so on.
  • Computer security means protecting an individual machine by keeping it updated and patched.
  • Network security means securing both the software and the hardware technologies of a networking environment, while cybersecurity more broadly means protecting computer systems that communicate over computer networks.

It’s critical to recognize the differences between these terms, even if there isn’t always a clear understanding of their definitions or the extent to which they overlap or are interchangeable. Computer systems security refers to the safeguards put in place to ensure the confidentiality, integrity, and availability of all computer systems components.

The following are the components of a computer system that must be protected:

  • Hardware, or the physical components of a computer system, such as the system memory and disk drive.
  • Firmware is nonvolatile software that is permanently stored on the nonvolatile memory of a hardware device and is generally transparent to the user.
  • Software comprises the computer programmes that provide users with services such as an operating system, word processor, and web browser, determining how the hardware operates to process information in accordance with the objectives defined by the software.

The CIA Triad is primarily concerned with three areas of computer systems security:

  • Confidentiality ensures that only the intended audience has access to information.
  • Integrity refers to preventing unauthorized parties from altering the data being processed.
  • Availability refers to ensuring that authorized parties can access data and systems whenever they need to.

Information and computer components must be useable while also being safeguarded against individuals or software that shouldn’t be able to access or modify them.

Most frequent computer systems security threats

Computer systems security risks are potential dangers that could disrupt your computer’s routine operation. As the world becomes more digital, cyber risks are becoming more prevalent. The following are the most dangerous types of computer security threats:

  • Viruses – a computer virus is a malicious program that is installed without the user’s knowledge on their computer. It replicates itself and infects the user’s data and programs. The ultimate purpose of a virus is to prevent the victim’s computer from ever functioning correctly or at all.
  • Computer worm – a computer worm is a type of software that can copy itself from one computer to another without the need for human intervention. Because a worm can replicate in large volumes and at high speeds, there is a risk that it will eat up your computer’s hard disk space.
  • Phishing – the action of an attacker who poses as a trustworthy person or entity in order to steal critical financial or personal information (including computer system access credentials) via so-called phishing emails or instant messages. Phishing is, regrettably, incredibly simple to carry out: the victim is deceived into believing that the phisher’s message is an authentic official communication and freely provides sensitive personal information.
  • Botnet – a botnet is a group of computers linked to the internet that have been infected with a computer virus by a hacker. The term zombie computer or a bot refers to a single computer in the botnet. The victim’s computer, which is the bot in botnet, will be exploited for malicious actions and larger-scale attacks like DDoS as a result of this threat.
  • Rootkit – a rootkit is a computer program that maintains privileged access to a computer while attempting to conceal its presence. The rootkit’s controller will be able to remotely execute files and change system configurations on the host machine once it has been installed.
  • Keylogger – keyloggers, often known as keystroke loggers, can monitor a user’s computer activity in real time. It records all keystrokes performed by the user’s keyboard. The use of a keylogger to steal people’s login credentials, such as username and password, is also a serious threat.

These are perhaps the most prevalent security risks one may encounter recently. There are more, such as malware, wabbits, scareware, bluesnarfing, and many others. There are, fortunately, techniques to defend computer systems and their users against such attacks.

We all want to keep our computer systems and personal or professional information private in this digital era, thus computer systems security is essential to protect our personal information. It is also critical to keep our computers secure and healthy by preventing viruses and malware from wreaking havoc on system performance.

Practices in computer systems security

These days, computer systems security risks are growing more and more innovative. To protect against these complicated and rising computer security risks and stay safe online, one must arm themselves with information and resources. One can take the following precautions:

  • Installing dependable anti-virus and security software
  • Activating your firewall, which functions as a security guard between the internet and your local area network.
  • Keep up with the newest software and news about your devices, and install updates as soon as they become available.
  • If you are unsure about the origins of an email attachment, do not open it.
  • Using a unique combination of numbers, letters, and case types, change passwords on a regular basis.
  • While accessing the internet, be cautious of pop-ups and drive-by downloads.
  • Investing the time to learn about the fundamentals of computer security and to keep up with the latest cyber-threats
  • Perform daily complete system scans and establish a regular system backup schedule to ensure that your data is recoverable in the event that your machine fails.

Aside from these, there are a slew of other professional approaches to safeguard computer systems. Aspects including adequate security architectural specification, encryption, and specialist software can help protect computer systems.

Regrettably, the number of cyber dangers is rapidly increasing, and more complex attacks are appearing. To combat these attacks and mitigate hazards, more professional and specialized cybersecurity skills are required.

To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.

The EITC/IS/CSSF Computer Systems Security Fundamentals Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/CCTF Computational Complexity Theory Fundamentals

Monday, 03 May 2021 by admin

EITC/IS/CCTF Computational Complexity Theory Fundamentals is the European IT Certification programme on theoretical aspects of the foundations of computer science, which are also a basis of classical asymmetric public-key cryptography widely used in the Internet.

The curriculum of the EITC/IS/CCTF Computational Complexity Theory Fundamentals covers theoretical knowledge on the foundations of computer science and computational models, building upon basic concepts such as deterministic and nondeterministic finite state machines, regular languages, context-free grammars and language theory, automata theory, Turing machines, decidability of problems, recursion, logic and the complexity of algorithms for fundamental security applications, within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

An algorithm’s computational complexity is the amount of resources required to run it, with particular attention paid to time and memory. The complexity of a problem is defined as the complexity of the best algorithms that solve it. Analysis of algorithms is the study of the complexity of explicitly given algorithms, whereas computational complexity theory is the study of the complexity of problems, i.e. of the best possible algorithms for solving them. The two domains are intertwined, because an algorithm’s complexity is always an upper bound on the complexity of the problem it solves. Furthermore, it is frequently necessary to compare the complexity of a certain algorithm to the complexity of the problem to be solved while constructing efficient algorithms. In most circumstances, the only information available regarding a problem’s difficulty is that it is less than the complexity of the most efficient known techniques. As a result, there is a lot of overlap between algorithm analysis and complexity theory.

Complexity theory plays an important role not only in the foundations of computational models as a basis of computer science, but also in the foundations of classical asymmetric cryptography (so-called public-key cryptography), which is widely deployed in modern networks, especially the Internet. Public-key encryption is based on the computational difficulty of certain asymmetric mathematical problems, for example the factorization of large numbers into their prime factors. This operation is hard in the complexity-theoretic classification, because no efficient classical algorithms are known that solve it with resources scaling polynomially, rather than exponentially, with the size of the problem’s input; in contrast, the reverse operation of multiplying two known prime factors to obtain the original large number is very simple. Public-key cryptography exploits this asymmetry: it defines a computationally asymmetric relation in which the public key can be easily computed from the private key, while the private key cannot feasibly be computed from the public key. One can therefore publicly announce the public key and let other parties use it to asymmetrically encrypt data, which can then only be decrypted with the coupled private key that remains computationally out of reach of third parties, thus making the communication secure.
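
A tiny numerical illustration of this asymmetry is sketched below: multiplying two known primes is a single cheap operation, whereas recovering them from the product by trial division already takes on the order of the square root of the product in steps, and the best known classical factoring algorithms remain super-polynomial in the bit length. The primes used are small and purely illustrative.

    # Tiny illustration of the asymmetry behind public-key cryptography:
    # multiplying two primes is one cheap operation, while factoring the
    # product by trial division needs on the order of sqrt(n) divisions.
    # Real keys use primes hundreds of digits long, far beyond such a search.
    p, q = 104_729, 1_299_709          # small illustrative primes
    n = p * q                          # easy direction: one multiplication

    def factor_by_trial_division(n):
        if n % 2 == 0:
            return 2, n // 2
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d       # hard direction: many divisions
            d += 2
        return n, 1

    print(factor_by_trial_division(n))  # (104729, 1299709)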

The computational complexity theory was developed mainly on the achievements of computer science and algorithmics pioneers, such as Alan Turing, whose work was critical to breaking the Enigma cipher of Nazi Germany, which played a profound role in the Allies winning the Second World War. Cryptanalysis, aiming at devising and automating the computational processes of analyzing data (mainly encrypted communication) in order to uncover the hidden information, was used to breach cryptographic systems and gain access to the contents of encrypted communication, usually of strategic military importance. It was also cryptanalysis which catalyzed the development of the first modern computers (which were initially applied to the strategic goal of codebreaking). The British Colossus (considered the first modern electronic, programmable computer) was preceded by the Polish "bomba", an electromechanical computational device designed by Marian Rejewski to assist in breaking Enigma ciphers, and handed over to Great Britain by the Polish intelligence along with a reconstructed German Enigma encryption machine, shortly before Poland was invaded by Germany in 1939. On the basis of this device Alan Turing developed its more advanced counterpart, the British Bombe, to successfully break German encrypted communication, which was later developed into modern computers.

Because the amount of resources required to run an algorithm varies with the size of the input, the complexity is usually expressed as a function f(n), where n is the input size and f(n) is either the worst-case complexity (the maximum amount of resources required across all inputs of size n) or the average-case complexity (the average of the amount of resources over all inputs of size n). The number of required elementary operations on an input of size n is commonly stated as time complexity, where elementary operations are assumed to take a constant amount of time on a given computer and to change only by a constant factor when run on a different machine. The amount of memory required by an algorithm on an input of size n is known as space complexity.
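
As a minimal sketch of these two measures (assuming, purely for illustration, that one equality comparison is the elementary operation being counted), a linear search performs at most n comparisons in the worst case and about n/2 on average over the positions of the searched element:

# Counting elementary operations (here: equality comparisons) of linear search.
def linear_search_comparisons(items, target):
    comparisons = 0
    for x in items:
        comparisons += 1
        if x == target:
            break
    return comparisons                # worst case: len(items) comparisons

n = 1000
data = list(range(n))
worst = linear_search_comparisons(data, -1)                         # target absent
average = sum(linear_search_comparisons(data, t) for t in data) / n
print(worst, average)                 # 1000 and 500.5, i.e. n and roughly n/2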

Time is the most commonly considered resource. When the term "complexity" is used without a qualifier, it usually refers to time complexity.

The traditional units of time (seconds, minutes, and so on) are not employed in complexity theory, since they are too reliant on the computer chosen and on the advancement of technology. For example, a computer today can execute an algorithm substantially faster than a computer from the 1960s; however, this is due to technological breakthroughs in computer hardware rather than an inherent quality of the algorithm. The goal of complexity theory is to quantify the inherent time requirements of algorithms, that is, the fundamental time limitations that an algorithm would impose on any computer. This is accomplished by counting how many basic operations are performed during the computation. These operations are commonly referred to as steps, because they are considered to take constant time on a particular machine (i.e., they are unaffected by the size of the input).

Another crucial resource is the amount of computer memory required to perform algorithms.

Another often used resource is the number of arithmetic operations. In this scenario, the term "arithmetic complexity" is used. If an upper bound on the size of the binary representation of the numbers that occur during a computation is known, the time complexity is generally the product of the arithmetic complexity by a constant factor.

The size of the integers used during a computation is not bounded for many methods, and it is then unrealistic to assume that arithmetic operations require a fixed amount of time. As a result, the time complexity, in this context also known as bit complexity, may be significantly higher than the arithmetic complexity. The arithmetic complexity of computing the determinant of an n×n integer matrix, for example, is O(n^3) for standard techniques (Gaussian elimination). Because the size of the coefficients can grow exponentially during the computation, the bit complexity of the same methods is exponential in n. If these techniques are used in conjunction with multi-modular arithmetic, the bit complexity can be decreased to O(n^4).
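
The gap between arithmetic complexity and bit complexity can also be seen with repeated squaring: computing 2^(2^k) takes only k multiplications (arithmetic complexity k), yet the operands double in bit length at every step, so the cost measured in bit operations grows exponentially in k. A minimal sketch, unrelated to the determinant example above:

# Arithmetic complexity vs bit complexity: k multiplications in total,
# but the operand doubles in bit length at every step.
x = 2
for k in range(1, 11):
    x = x * x                          # one arithmetic operation per iteration
    print(f"after {k} squarings: {x.bit_length()} bits")
# After k squarings x equals 2**(2**k), so its bit length is 2**k + 1:
# the arithmetic cost is k, while the bit size (and bit cost) grows exponentially.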

The bit complexity, in formal terms, refers to the number of operations on bits required to run an algorithm. In most computation paradigms it equals the time complexity up to a constant factor. On computers, the number of operations on machine words required is also proportional to the bit complexity. For realistic models of computation, the time complexity and bit complexity are thus identical.

The resource most often considered in sorting and searching is the number of comparisons between entries. This is a good indicator of the time complexity if the data is suitably organized.

It is impossible to count the number of steps of an algorithm on all possible inputs. Because the complexity generally rises with the size of the input, it is commonly represented as a function of the input's size n (in bits), and so the complexity is a function of n. For inputs of the same size, however, the complexity of an algorithm can vary substantially. As a result, a variety of complexity functions are routinely employed.

The worst-case complexity is the maximum of the complexities over all inputs of size n, while the average-case complexity is the average of the complexities over all inputs of size n (this makes sense, as the number of possible inputs of a given size is finite). When the term "complexity" is used without being further specified, the worst-case time complexity is meant.

The worst-case and average-case complexity are notoriously difficult to calculate exactly. Furthermore, these exact values have little practical application, because any change of machine or computation paradigm would vary the complexity slightly. Moreover, resource usage is not crucial for small values of n, so for small n ease of implementation is often more appealing than low complexity.

For these reasons, most attention is paid to the complexity's behavior for large n, that is, its asymptotic behavior as n approaches infinity. As a result, big O notation is commonly used to express complexity.

Computational models

The choice of a computation model, which consists of specifying the essential operations that are performed in a unit of time, is crucial in determining the complexity. When the computation paradigm is not specifically described, a multitape Turing machine is usually meant.

A deterministic model of computation is one in which the machine’s subsequent states and the operations to be performed are entirely defined by the previous state. Recursive functions, lambda calculus, and Turing machines were the first deterministic models. Random-access machines (also known as RAM-machines) are a popular paradigm for simulating real-world computers.

When the computation model isn't specified, a multitape Turing machine is usually assumed. On multitape Turing machines, the time complexity is the same as on RAM machines for most algorithms, although considerable attention to how data is stored in memory may be required to achieve this equivalence.

In a non-deterministic model of computation, such as non-deterministic Turing machines, various choices may be made at some steps of the computation. In complexity theory, all feasible options are considered at the same time, and the non-deterministic time complexity is the amount of time required when the best choices are always made. To put it another way, the computation is done concurrently on as many (identical) processors as are required, and the non-deterministic computation time is the time taken by the first processor to complete the computation. A loosely related kind of parallelism appears in quantum computing, which exploits superposed entangled states when running specialized quantum algorithms, such as Shor's algorithm for integer factorization (so far demonstrated only on tiny integers).

Even if such a computation model is not currently practicable, it has theoretical significance, particularly in relation to the P = NP problem, which asks whether the complexity classes produced by using "polynomial time" and "non-deterministic polynomial time" as upper bounds are the same. On a deterministic computer, simulating an NP-algorithm requires "exponential time." If a task can be solved in polynomial time on a non-deterministic machine, it belongs to the complexity class NP. A problem is said to be NP-complete if it is in NP and every other NP problem reduces to it, i.e., it is at least as hard as any other NP problem. The Knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem are all NP-complete combinatorial problems. For all of these problems, the best known algorithms have exponential complexity. If any of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could be solved in polynomial time as well, and P = NP would be established. As of 2017, it is widely assumed that P ≠ NP, implying that the worst cases of NP problems are fundamentally difficult to solve, i.e., take far longer than any feasible time span (decades) given interesting input lengths.
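
To make the verification/search distinction concrete, the sketch below uses the subset-sum problem (a classic NP-complete problem closely related to the Knapsack problem) on a small hypothetical instance: checking a proposed certificate takes polynomial time, while the obvious search inspects all 2^n subsets:

from itertools import combinations

# Subset sum: given numbers and a target, is there a subset summing to the target?
numbers = [3, 34, 4, 12, 5, 2]        # hypothetical instance
target = 9

def verify(certificate, target):
    """Polynomial-time check of a proposed solution (an NP certificate)."""
    return sum(certificate) == target

def brute_force(numbers, target):
    """Exhaustive search over all 2**n subsets -- exponential time in general."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

solution = brute_force(numbers, target)      # finds e.g. (4, 5)
print(solution, verify(solution, target))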

Parallel and distributed computing

Parallel and distributed computing involve dividing processing across multiple processors that all operate at the same time. The fundamental distinction between the various models is the method of sending data between processors. Data transmission between processors is typically very quick in parallel computing, whereas data transfer between processors in distributed computing is done across a network and is thus substantially slower.

A computation on N processors takes at least the quotient by N of the time it takes on a single processor. In reality, because some subtasks cannot be parallelized and some processors may need to wait for a result from another processor, this theoretically ideal bound will never be attained.

The key complexity issue is thus to develop algorithms so that the product of computing time by the number of processors is as close as possible to the time required to perform the same computation on a single processor.

Quantum computation

A quantum computer is a computer whose model of computation is based on quantum mechanics. The Church–Turing thesis holds true for quantum computers, implying that any problem that a quantum computer can solve can also be solved by a Turing machine. However, some problems could theoretically be solved on a quantum computer with a significantly lower time complexity than on a classical computer. For the time being, this is largely theoretical, as no one knows how to build a practical large-scale quantum computer.

Quantum complexity theory was created to investigate the different types of problems that can be solved by quantum computers. It is used in post-quantum cryptography, which is the process of creating cryptographic protocols that are resistant to attacks by quantum computers.

Complexity of the problem (lower bounds)

The complexity of a problem is the infimum of the complexities of the algorithms that may solve it, including undiscovered algorithms. As a result, the complexity of a problem is not greater than the complexity of any algorithm that solves it.

As a result, any complexity of an algorithm given in big O notation is also an upper bound on the complexity of the corresponding problem.

On the other hand, obtaining nontrivial lower bounds on problem complexity is often difficult, and there are few strategies for doing so.

In order to solve most problems, all input data must be read, which takes time proportional to the size of the data. As a result, such problems have at least linear complexity, or, in big omega notation, a complexity of Ω(n).

Some problems, such as those in computer algebra and computational algebraic geometry, have very large solutions. Because the output must be written, the complexity is bounded from below by the size of the output.

The number of comparisons required by a comparison-based sorting algorithm has a nonlinear lower bound of Ω(n log n). As a result, sorting algorithms whose complexity is O(n log n) are optimal in this respect. This lower bound follows from the fact that there are n! ways to order n items. Because each comparison splits this collection of n! orders into two parts, the number N of comparisons required to distinguish all orders must satisfy 2^N ≥ n!, which implies N = Ω(n log n) by Stirling's formula.

Reducing one problem to another problem is a common strategy for obtaining lower bounds on complexity.

Algorithm development

Evaluating an algorithm’s complexity is an important element of the design process since it provides useful information about the performance that may be expected.

It is a frequent misunderstanding that, as a result of Moore's law, which predicts the exponential growth of modern computer power, evaluating the complexity of algorithms will become less relevant. This is incorrect, because the increased power allows for the processing of massive amounts of data (big data). For example, any algorithm should function well in less than a second when sorting alphabetically a list of a few hundred entries, such as the bibliography of a book. On the other hand, for a million entries (for example, the phone numbers of a large city), the basic algorithms that require O(n^2) comparisons would have to perform a trillion comparisons, which would take more than a day (roughly 28 hours) at a speed of 10 million comparisons per second. Quicksort and merge sort, on the other hand, only require about n log n comparisons (as average-case complexity for the former, as worst-case complexity for the latter). This produces around 30,000,000 comparisons for n = 1,000,000, which would take only about 3 seconds at 10 million comparisons per second.
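
The difference can also be observed empirically by counting comparisons; the sketch below (illustrative only, with exact counts depending on the random input and on implementation details) contrasts insertion sort, which needs on the order of n^2 comparisons, with merge sort, which needs on the order of n log n:

import random

def insertion_sort_comparisons(a):
    a, count = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            count += 1                       # one comparison
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return count

def merge_sort_comparisons(a):
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = merge_sort_comparisons(a[:mid])
    right, cr = merge_sort_comparisons(a[mid:])
    merged, i, j, count = [], 0, 0, cl + cr
    while i < len(left) and j < len(right):
        count += 1                           # one comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, count

data = random.sample(range(100_000), 2000)
print("insertion sort comparisons:", insertion_sort_comparisons(data))   # ~ n^2 / 4
print("merge sort comparisons:", merge_sort_comparisons(data)[1])        # ~ n log2 n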

As a result, assessing complexity may allow for the elimination of many inefficient algorithms prior to implementation. This can also be used to fine-tune complex algorithms without having to test all possible variants. The study of complexity allows focusing the effort for increasing the efficiency of an implementation by determining the most costly steps of a complex algorithm.

To acquaint yourself in detail with the certification curriculum you can expand and analyze the table below.

The EITC/IS/CCTF Computational Complexity Theory Fundamentals Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/ACC Advanced Classical Cryptography

Monday, 03 May 2021 by admin

EITC/IS/ACC Advanced Classical Cryptography is the European IT Certification programme advancing expertise level in classical cryptography, primarily focusing on the public-key cryptography, with an introduction to practical public-key ciphers, as well as digital signatures, public key infrastructure and security certificates widely used in the Internet.

The curriculum of the EITC/IS/ACC Advanced Classical Cryptography focuses on the public-key (asymmetric) cryptography, starting with the introduction to the Diffie-Hellman Key Exchange and the discrete log problem (including its generalization), then proceeding to encryption based on the discrete log problem, covering the Elgamal Encryption Scheme, elliptic curves and the Elliptic Curve Cryptography (ECC), digital signatures (including security services and the Elgamal Digital Signature), hash functions (including the SHA-1 hash function), Message Authentication Codes (including MAC and HMAC), key establishment (including Symmetric Key Establishment, SKE, and Kerberos), finishing with a consideration of the class of man-in-the-middle attacks, along with cryptographic certificates and the Public Key Infrastructure (PKI), within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.

Cryptography refers to ways of secure communication in the presence of an adversary. Cryptography, in a broader sense, is the process of creating and analyzing protocols that prevent third parties or the general public from accessing private (encrypted) messages. Modern classical cryptography is based on several main features of information security such as data confidentiality, data integrity, authentication, and non-repudiation. In contrast to quantum cryptography, which is based on radically different quantum physics rules that characterize nature, classical cryptography refers to cryptography based on classical physics laws. The fields of mathematics, computer science, electrical engineering, communication science, and physics all meet in classical cryptography. Electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications are all examples of cryptography applications.

Prior to the current era, cryptography was almost synonymous with encryption, turning information from readable to unintelligible nonsense. To prevent attackers from gaining access to an encrypted message, the sender only shares the decoding process with the intended receivers. The names Alice (“A”) for the sender, Bob (“B”) for the intended recipient, and Eve (“eavesdropper”) for the adversary are frequently used in cryptography literature.

Cryptographic methods have become increasingly complex, and their applications more diversified, since the development of rotor cipher machines in World War I and the introduction of computers in World War II.

Modern cryptography is strongly reliant on mathematical theory and computer science practice; cryptographic methods are built around computational hardness assumptions, making them difficult for any opponent to break in practice. While it is theoretically possible to break a well-designed system, it is infeasible to do so in practice. Such schemes are referred to as "computationally secure" if they are adequately constructed; nevertheless, theoretical breakthroughs (e.g., improvements in integer factorization methods) and faster computing technology necessitate constant reevaluation and, if required, adaptation of these designs. There are information-theoretically secure systems, such as the one-time pad, that can be proven to be unbreakable even with infinite computing power, but they are significantly more difficult to employ in practice than the best theoretically breakable but computationally secure schemes.

In the Information Age, the advancement of cryptographic technology has produced a variety of legal challenges. Many nations have classified cryptography as a weapon, limiting or prohibiting its use and export due to its potential for espionage and sedition. Investigators can compel the surrender of encryption keys for documents pertinent to an investigation in some places where cryptography is lawful. In the case of digital media, cryptography also plays a key role in digital rights management and copyright infringement conflicts.

The term “cryptograph” (as opposed to “cryptogram”) was first used in the nineteenth century, in Edgar Allan Poe’s short story “The Gold-Bug.”

Until recently, cryptography referred almost exclusively to "encryption," which is the act of turning ordinary data (known as plaintext) into an unreadable format (called ciphertext). Decryption is the opposite of encryption, i.e., going from unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that perform the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key." The key is a secret (ideally known only to the communicants) that is used to decrypt the ciphertext. It is commonly a string of characters (ideally short, so that it can be remembered by the user). In formal mathematical terms, a "cryptosystem" is the ordered collection of finite sets of possible plaintexts, ciphertexts, and keys, together with the encryption and decryption procedures that correspond to each key. Keys are crucial both formally and practically, because ciphers with fixed keys can be easily broken using only knowledge of the cipher, making them useless (or even counter-productive) for most purposes.

Historically, ciphers were frequently used directly for encryption or decryption, without additional procedures such as authentication or integrity checks. Cryptosystems are divided into two categories: symmetric and asymmetric. In symmetric systems, which were the only ones known until the 1970s, the same key (the secret key) is used to encrypt and decrypt a message. Data manipulation in symmetric systems is faster than in asymmetric systems, in part because symmetric systems use shorter key lengths. Asymmetric systems encrypt a communication with a "public key" and decrypt it using a related "private key." The use of asymmetric systems improves communication security, owing to the difficulty of determining the relationship between the two keys. RSA (Rivest–Shamir–Adleman) and ECC (Elliptic Curve Cryptography) are two examples of asymmetric systems. The widely used AES (Advanced Encryption Standard), which superseded the earlier DES (Data Encryption Standard), is an example of a high-quality symmetric algorithm. Examples of low-quality symmetric algorithms include the various children's language tangling schemes, such as Pig Latin or other cant, and indeed virtually all cryptographic schemes, however seriously intended, from any source prior to the introduction of the one-time pad early in the twentieth century.

The term "code" is often used colloquially to refer to any technique of encryption or message concealment. However, in cryptography, code refers to the substitution of a code word for a unit of plaintext (i.e., a meaningful word or phrase); for example, "wallaby" replaces "attack at dawn." In contrast, a ciphertext is created by modifying or substituting elements below such a level (a letter, a syllable, or a pair of letters, for example).

Cryptanalysis is the study of ways for decrypting encrypted data without having access to the key required to do so; in other words, it is the study of how to “break” encryption schemes or their implementations.

In English, some people interchangeably use the terms “cryptography” and “cryptology,” while others (including US military practice in general) use “cryptography” to refer to the use and practice of cryptographic techniques and “cryptology” to refer to the combined study of cryptography and cryptanalysis. English is more adaptable than a number of other languages, where “cryptology” (as practiced by cryptologists) is always used in the second sense. Steganography is sometimes included in cryptology, according to RFC 2828.

Cryptolinguistics is the study of language properties that have some relevance in cryptography or cryptology (for example, frequency statistics, letter combinations, universal patterns, and so on).

Cryptography and cryptanalysis have a long history.
Prior to the modern era, cryptography was primarily concerned with message confidentiality (i.e., encryption), that is, the conversion of messages from an intelligible to an incomprehensible form and back again, rendering them unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption was designed to keep the conversations of spies, military leaders, and diplomats private. In recent decades, the discipline has grown to incorporate techniques such as message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs, and secure computation, among other things.

The two most common classical cipher types are transposition ciphers, which systematically rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one that follows it in the Latin alphabet). Simple versions of either have never provided much privacy from cunning adversaries. The Caesar cipher was an early substitution cipher in which each letter in the plaintext was replaced by a letter a fixed number of positions down the alphabet. According to Suetonius, Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The oldest known usage of cryptography is a carved ciphertext on stone in Egypt (about 1900 BCE), though it is possible that this was done for the enjoyment of literate spectators rather than to conceal information.
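
A Caesar shift of three, matching the example attributed to Julius Caesar above, can be sketched in a few lines of Python (purely illustrative and, of course, offering no real security):

import string

ALPHABET = string.ascii_lowercase

def caesar(text, shift):
    """Shift each letter by 'shift' positions; other characters pass through."""
    shifted = ALPHABET[shift:] + ALPHABET[:shift]
    return text.lower().translate(str.maketrans(ALPHABET, shifted))

ciphertext = caesar("attack at dawn", 3)     # 'dwwdfn dw gdzq'
plaintext = caesar(ciphertext, -3)           # decryption is the inverse shift
print(ciphertext, "->", plaintext)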

Ciphers are reported to have been known to the Classical Greeks (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (the practice of concealing even the presence of a communication in order to keep it private) was also developed in ancient times. An early example, recorded by Herodotus, was a message tattooed on a slave's shaved head and hidden beneath the regrown hair. More recent instances of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.

Kautiliyam and Mulavediya are two types of ciphers mentioned in India's 2000-year-old Kamasutra of Vatsyayana. The cipher letter substitutions in the Kautiliyam are based on phonetic relationships, such as vowels becoming consonants. The cipher alphabet in the Mulavediya consists of pairing letters and using the reciprocal ones.

According to Muslim scholar Ibn al-Nadim, Sassanid Persia had two secret scripts: the šāh-dabīrīya (literally "King's script"), which was used for official correspondence, and the rāz-saharīya, which was used to exchange secret messages with other countries.

In his book The Codebreakers, David Kahn writes that contemporary cryptology began with the Arabs, who were the first to carefully document cryptanalytic procedures. The Book of Cryptographic Messages was written by Al-Khalil (717–786), and it contains the earliest use of permutations and combinations to list all conceivable Arabic words with and without vowels.

Ciphertexts generated by a classical cipher (as well as some modern ciphers) reveal statistical information about the plaintext, which can be utilized to break the cipher. Nearly all such ciphers could be broken by an intelligent attacker after the discovery of frequency analysis, possibly by the Arab mathematician and polymath Al-Kindi (also known as Alkindus) in the 9th century. Classical ciphers are still popular today, albeit largely as puzzles (see cryptogram). Risalah fi Istikhraj al-Mu’amma (Manuscript for the Deciphering Cryptographic Messages) was written by Al-Kindi and documented the first known usage of frequency analysis cryptanalysis techniques.

Plain letter frequencies may not help against some historical encryption approaches, such as the homophonic cipher, which tend to flatten the frequency distribution. For those ciphers, frequencies of letter groups (n-grams) may still provide an attack.

Until the discovery of the polyalphabetic cipher, most notably by Leon Battista Alberti around 1467, virtually all ciphers were vulnerable to cryptanalysis using the frequency analysis technique, though there is some evidence that it was already known to Al-Kindi. Alberti came up with the idea of using separate ciphers (or substitution alphabets) for different parts of a message (perhaps for each successive plaintext letter at the limit). He also created what is thought to be the first automatic cipher device, a wheel that implemented a partial realization of his invention. Encryption in the Vigenère cipher, a polyalphabetic cipher, is controlled by a key word that governs letter substitution depending on which letter of the key word is used. In the mid-nineteenth century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this was first published about ten years later by Friedrich Kasiski.

Despite the fact that frequency analysis is a powerful and general technique against many ciphers, encryption has often remained effective in practice, because many would-be cryptanalysts were unaware of the technique. Breaking a message without using frequency analysis required knowledge of the cipher employed and possibly of the key involved, making espionage, bribery, burglary, defection, and other cryptanalytically uninformed tactics more appealing. It was finally recognized in the 19th century that secrecy of a cipher's algorithm is neither a reasonable nor a practical assurance of message security; in fact, any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. The security of the key alone should be sufficient for a good cipher to maintain confidentiality under attack. Auguste Kerckhoffs first stated this fundamental principle in 1883, and it is known as Kerckhoffs's Principle; alternatively, and more bluntly, Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, restated it as Shannon's Maxim: 'the enemy knows the system.'

To help with ciphers, many physical devices and aids have been used. The scytale of ancient Greece, a rod allegedly employed by the Spartans as a transposition cipher tool, may have been one of the first. Other aids were devised in medieval times, such as the cipher grille, which was also used for steganography. With the development of polyalphabetic ciphers came more sophisticated aids, such as Alberti's cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cipher (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented and patented in the early twentieth century, including rotor machines, famously employed by the German government and military from the late 1920s to World War II. Following WWI, the ciphers implemented by higher-quality examples of these machine designs resulted in a significant rise in cryptanalytic difficulty.

Cryptography was primarily concerned with linguistic and lexicographic patterns prior to the early twentieth century. Since then, the focus has evolved, and cryptography now includes aspects of information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics in general. Cryptography is a type of engineering, but it’s unique in that it deals with active, intelligent, and hostile resistance, whereas other types of engineering (such as civil or chemical engineering) merely have to deal with natural forces that are neutral. The link between cryptography difficulties and quantum physics is also being investigated.

The development of digital computers and electronics aided cryptanalysis, and it also allowed for the creation of considerably more sophisticated ciphers. Furthermore, unlike traditional ciphers, which exclusively encrypted written language texts, computers allowed for the encryption of any type of data that can be represented in binary format; this was novel and crucial. In both cipher design and cryptanalysis, computers have thus supplanted linguistic cryptography. Unlike classical and mechanical methods, which primarily manipulate traditional characters (i.e., letters and numerals) directly, many computer ciphers operate on binary bit sequences (occasionally in groups or blocks). Computers, on the other hand, have also aided cryptanalysis, which has partially compensated for increased cipher complexity. Despite this, good modern ciphers have remained ahead of cryptanalysis; it is often the case that using a good cipher is very efficient (i.e., quick and requiring few resources, such as memory or CPU capability), whereas breaking it requires an effort many orders of magnitude greater, and vastly greater than that required for any classical cipher, effectively rendering cryptanalysis impossible.

The advent of modern cryptography
The new mechanical devices’ cryptanalysis proved to be challenging and time-consuming. During WWII, cryptanalytic activities at Bletchley Park in the United Kingdom fostered the invention of more efficient methods for doing repetitive tasks. The Colossus, the world’s first completely electronic, digital, programmable computer, was developed to aid in the decoding of ciphers created by the German Army’s Lorenz SZ40/42 machine.

Open academic research in cryptography is relatively new, having begun only in the mid-1970s. IBM employees designed the algorithm that became the Federal (i.e., US) Data Encryption Standard; Whitfield Diffie and Martin Hellman published their key agreement algorithm; and the RSA algorithm was published in Martin Gardner's Scientific American column. Cryptography has since grown in popularity as a technique for communications, computer networks, and computer security in general.

There are profound ties with abstract mathematics, since several modern cryptography approaches can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems. There are only a handful of cryptosystems that have been proven to be unconditionally secure. Claude Shannon proved that the one-time pad is one of them. There are a few important algorithms that have been proven secure under certain assumptions. The infeasibility of factoring extremely large integers, for example, is the basis for believing that RSA and some other systems are secure, but a proof of unbreakability is unattainable because the underlying mathematical problem remains open. In practice, these are widely used, and most competent observers believe they are unbreakable in practice. There exist systems similar to RSA, such as one developed by Michael O. Rabin, that are provably secure provided that factoring n = pq is infeasible; however, they are of little practical use. The discrete logarithm problem is the basis for believing that some other cryptosystems are secure, and there are similar, less practical systems that are provably secure relative to the solvability or insolvability of the discrete logarithm problem.

Cryptographic algorithm and system designers must consider possible future advances when working on their ideas, in addition to being cognizant of cryptographic history. For example, as computer processing power has improved, the breadth of brute-force attacks has grown, hence the required key lengths have grown as well. Some cryptographic system designers exploring post-quantum cryptography are already considering the potential consequences of quantum computing; the announced imminence of modest implementations of these machines may make the need for preemptive caution more than just speculative.

Classical cryptography in the modern day

Symmetric (or private-key) cryptography is a type of encryption in which the sender and receiver use the same key (or, less commonly, keys that are different but related in an easily computable way, both kept private). Until June 1976, this was the only type of encryption that was publicly known.

Block ciphers and stream ciphers are both used to implement symmetric key ciphers. A block cipher encrypts input in blocks of plaintext, whereas a stream cipher operates on individual characters or bits.

The US government has designated the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) as cryptography standards (although DES's certification was eventually withdrawn once AES was established). Despite its deprecation as an official standard, DES (especially its still-approved and significantly more secure triple-DES variant) remains popular; it is used in a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with varying degrees of success. Many, including some designed by qualified practitioners, such as FEAL, have been extensively broken.

Stream ciphers, unlike block ciphers, generate an arbitrarily long stream of key material that is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. The output stream of a stream cipher is generated from a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. The stream cipher RC4 is widely used. Block ciphers can be employed as stream ciphers by generating blocks of a keystream (in place of a pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.
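
The keystream pattern can be sketched as follows. This is a toy, not a secure cipher: a hash of the key and a block counter stands in where a real design would use a vetted block cipher such as AES in counter mode, but it shows the general idea of generating keystream blocks and XORing them with the plaintext:

import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: hash(key || counter), generated block by block.
    A real construction would use a vetted block cipher (e.g. AES in CTR mode)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"toy key, for illustration only"
ciphertext = xor_crypt(key, b"attack at dawn")
print(xor_crypt(key, ciphertext))             # XOR with the same keystream decrypts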

Message authentication codes (MACs) are similar to cryptographic hash functions, with the exception that a secret key can be used to validate the hash value upon receipt; this extra complication blocks an attack against bare digest algorithms, and so is regarded as worthwhile. Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input and output a short, fixed-length hash that can be used, for example, in a digital signature. With good hash algorithms, an attacker cannot find two messages that produce the same hash. MD4 is a long-used but now broken hash function; MD5, a strengthened variant of MD4, is likewise widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-1 is widely used and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but shares a similar design. The US standards authority therefore decided it was "prudent" from a security standpoint to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit," and a hash function design competition was held to select a new US national standard, to be known as SHA-3, by 2012. The competition concluded on October 2, 2012, when the National Institute of Standards and Technology (NIST) announced Keccak as the new SHA-3 hash algorithm. Unlike invertible block and stream ciphers, cryptographic hash functions produce a hashed output that cannot be used to recover the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security.

Although a message or group of messages can have a different key than others, symmetric-key cryptosystems employ the same key for encryption and decryption. A significant disadvantage of symmetric ciphers is the key management required to use them securely. Ideally, each distinct pair of communicating parties should share a different key, and perhaps a different key for each ciphertext exchanged as well. The number of keys required grows as the square of the number of network participants, which very quickly necessitates complicated key management schemes to keep them all consistent and secret.
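
The growth in the number of keys is easy to quantify: n participants need n(n-1)/2 pairwise symmetric keys, but only one public/private key pair each in a public-key setting, as the small sketch below illustrates:

# Keys needed so that every pair of n participants can communicate securely.
for n in (10, 100, 1000):
    pairwise_symmetric = n * (n - 1) // 2     # one shared secret per pair
    public_key_pairs = n                      # one (public, private) pair each
    print(n, pairwise_symmetric, public_key_pairs)
# 10 -> 45, 100 -> 4950, 1000 -> 499500 pairwise keys, versus n key pairs.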

Whitfield Diffie and Martin Hellman introduced the concept of public-key (also known as asymmetric key) cryptography in a seminal 1976 paper, in which two distinct but mathematically related keys, a public key and a private key, are employed. Even though they are inextricably linked, a public key system is built in such a way that calculating one key (the "private key") from the other (the "public key") is computationally infeasible. Rather, both keys are generated in secret, as a linked pair. Public-key cryptography, according to historian David Kahn, is "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance."

The public key in a public-key cryptosystem can be freely distributed, but the coupled private key must be kept secret. In a public-key encryption scheme, the public key is used for encryption, whereas the private or secret key is used for decryption. While Diffie and Hellman did not construct such an encryption system themselves, they demonstrated that public-key cryptography was possible by presenting the Diffie–Hellman key exchange protocol, a solution that allows two parties to agree covertly on a shared encryption key. The most widely used format for public key certificates is defined by the X.509 standard.
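
The idea of the Diffie–Hellman key exchange can be sketched with toy parameters (hypothetical, insecure values chosen only for illustration; real deployments use much larger, carefully chosen groups or elliptic curves):

import random

# Toy Diffie-Hellman key exchange over a small prime field (insecure parameters).
p = 2**127 - 1                  # a Mersenne prime, far too small for real use
g = 3                           # assumed generator, for illustration only

a = random.randrange(2, p - 1)  # Alice's secret exponent
b = random.randrange(2, p - 1)  # Bob's secret exponent

A = pow(g, a, p)                # Alice publishes A = g^a mod p
B = pow(g, b, p)                # Bob publishes B = g^b mod p

shared_alice = pow(B, a, p)     # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)       # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob   # both sides now hold the same shared secret
print(shared_alice)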

The publication of Diffie and Hellman sparked widespread academic interest in finding a practical public-key encryption system. Ronald Rivest, Adi Shamir, and Len Adleman eventually succeeded in 1978, and their solution became known as the RSA algorithm.

In addition to being the earliest publicly known instances of high-quality public-key algorithms, the Diffie–Hellman and RSA algorithms have been among the most commonly utilized. The Cramer–Shoup cryptosystem, ElGamal encryption, and numerous elliptic curve approaches are examples of asymmetric-key algorithms.

GCHQ cryptographers anticipated several of these academic developments, according to a document released in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization. Reportedly, asymmetric key cryptography was conceived by James H. Ellis around 1970. Clifford Cocks invented a solution in 1973 that was very similar to RSA in terms of design. Malcolm J. Williamson is credited with inventing the Diffie–Hellman key exchange in 1974.

Digital signature systems are also implemented using public-key cryptography. A digital signature is similar to a traditional signature in that it is simple for the user to create yet difficult for others to forge. Digital signatures can also be permanently tied to the content of the message being signed; this means they cannot be "moved" from one document to another without being detected. There are two algorithms in digital signature schemes: one for signing, which uses a secret key to process the message (or a hash of the message, or both), and one for verification, which uses the matching public key with the message to validate the signature's authenticity. Two of the most widely used digital signature schemes are RSA and DSA. Public key infrastructures and many network security systems (e.g., SSL/TLS, many VPNs) rely on digital signatures to function.

The computational complexity of “hard” problems, such as those arising from number theory, is frequently used to develop public-key methods. The integer factorization problem is related to the hardness of RSA, while the discrete logarithm problem is related to Diffie–Hellman and DSA. The security of elliptic curve cryptography is based on elliptic curve number theoretic problems. Most public-key algorithms include operations like modular multiplication and exponentiation, which are substantially more computationally expensive than the techniques used in most block ciphers, especially with normal key sizes, due to the difficulty of the underlying problems. As a result, public-key cryptosystems are frequently hybrid cryptosystems, in which the message is encrypted with a fast, high-quality symmetric-key algorithm, while the relevant symmetric key is sent with the message but encrypted with a public-key algorithm. Hybrid signature schemes, in which a cryptographic hash function is computed and only the resulting hash is digitally signed, are also commonly used.

Hash Functions in Cryptography

Cryptographic hash functions take a message of any length as input and output a short, fixed-length hash; unlike ciphers, they are typically keyless, and their output cannot be inverted to recover the original input. With good hash algorithms, an attacker cannot find two messages that produce the same hash (collision resistance), nor recover a message from its hash (preimage resistance). MD4 is a long-used but now broken hash function; MD5, a strengthened variant of MD4, is likewise widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-1 is widely used and more secure than MD5, but cryptanalysts have identified attacks against it, and the SHA-2 family improves on SHA-1 while sharing a similar design. A hash function design competition was therefore held to select a new US national standard, SHA-3; it concluded on October 2, 2012, when NIST announced Keccak as the new SHA-3 hash algorithm. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source, or to add a layer of security, for example in digital signatures, where only the hash of a message is signed.
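
A short sketch of these properties using Python's standard hashlib module: the digest has a fixed length regardless of the input size, and a small change in the input produces an unrelated digest:

import hashlib

msg1 = b"attack at dawn"
msg2 = b"attack at dusk"

d1 = hashlib.sha256(msg1).hexdigest()
d2 = hashlib.sha256(msg2).hexdigest()

print(len(d1), len(d2))     # fixed-length output: 64 hex characters each
print(d1)
print(d2)                   # a small change in the input yields an unrelated digest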

Cryptographic primitives and cryptosystems

Much of cryptography's theoretical work focuses on cryptographic primitives (algorithms with basic cryptographic properties) and how they relate to other cryptographic challenges. These basic primitives are then used to create more complex cryptographic tools. They provide fundamental qualities that are used to build more complex tools known as cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. The boundary between cryptographic primitives and cryptosystems, however, is somewhat arbitrary; the RSA algorithm, for example, is sometimes regarded as a cryptosystem and sometimes as a primitive. Pseudorandom functions and one-way functions are common examples of cryptographic primitives.

A cryptographic system, or cryptosystem, is created by combining one or more cryptographic primitives into a more complex algorithm. Cryptosystems (e.g., ElGamal encryption) are meant to provide specific functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). To support the system's security properties, cryptosystems use the properties of the underlying cryptographic primitives. A sophisticated cryptosystem can be derived from a combination of several more rudimentary cryptosystems, as the distinction between primitives and cryptosystems is somewhat arbitrary. In many cases, the cryptosystem's structure involves back-and-forth communication between two or more parties in space (e.g., between the sender and recipient of a secure message) or across time (e.g., cryptographically protected backup data).

To acquaint yourself in detail with the certification curriculum you can expand and analyze the table below.

The EITC/IS/ACC Advanced Classical Cryptography Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.


EITC/IS/CCF Classical Cryptography Fundamentals

Monday, 03 May 2021 by admin

EITC/IS/CCF Classical Cryptography Fundamentals is the European IT Certification programme on theoretical and practical aspects of classical cryptography, including both the private-key and the public-key cryptography, with an introduction to practical ciphers widely used in the Internet, such as the RSA.

The curriculum of the EITC/IS/CCF Classical Cryptography Fundamentals covers an introduction to private-key cryptography, modular arithmetic and historical ciphers, stream ciphers, random numbers, the One-Time Pad (OTP) unconditionally secure cipher (under the assumption that the key distribution problem is solved, e.g. by the Quantum Key Distribution, QKD), linear feedback shift registers, the Data Encryption Standard (DES cipher, including encryption, key schedule and decryption), the Advanced Encryption Standard (AES, introducing Galois fields based cryptography), applications of block ciphers (including modes of their operation), consideration of multiple encryption and brute-force attacks, and an introduction to public-key cryptography covering number theory, the Euclidean algorithm, Euler's Phi function and Euler's theorem, as well as an introduction to the RSA cryptosystem and efficient exponentiation, within the following structure, encompassing comprehensive video didactic content as a reference for this EITC Certification.
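
The efficient exponentiation mentioned at the end of the curriculum can be illustrated by the square-and-multiply method, which computes b^e mod m using a number of multiplications proportional to log e rather than to e; a minimal sketch (Python's built-in pow(b, e, m) performs the same computation):

def square_and_multiply(base, exponent, modulus):
    """Compute base**exponent % modulus with O(log exponent) multiplications."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                  # current binary digit of the exponent is 1
            result = (result * base) % modulus
        base = (base * base) % modulus    # square for the next binary digit
        exponent >>= 1
    return result

print(square_and_multiply(5, 117, 19))    # equals pow(5, 117, 19)
assert square_and_multiply(5, 117, 19) == pow(5, 117, 19)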

Cryptography refers to ways of secure communication in the presence of an adversary. Cryptography, in a broader sense, is the process of creating and analyzing protocols that prevent third parties or the general public from accessing private (encrypted) messages. Modern classical cryptography is based on several main features of information security such as data confidentiality, data integrity, authentication, and non-repudiation. In contrast to quantum cryptography, which is based on radically different quantum physics rules that characterize nature, classical cryptography refers to cryptography based on classical physics laws. The fields of mathematics, computer science, electrical engineering, communication science, and physics all meet in classical cryptography. Electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications are all examples of cryptography applications.

Prior to the current era, cryptography was almost synonymous with encryption, turning information from readable to unintelligible nonsense. To prevent attackers from gaining access to an encrypted message, the sender only shares the decoding process with the intended receivers. The names Alice (“A”) for the sender, Bob (“B”) for the intended recipient, and Eve (“eavesdropper”) for the adversary are frequently used in cryptography literature.

Cryptographic methods have become increasingly complex, and their applications more diversified, since the development of rotor cipher machines in World War I and the introduction of computers in World War II.

Modern cryptography is strongly reliant on mathematical theory and computer science practice; cryptographic methods are built around computational hardness assumptions, making them difficult for any opponent to break in practice. While it is theoretically possible to break a well-designed system, it is infeasible to do so in practice. Such schemes are referred to as "computationally secure" if they are adequately constructed; nevertheless, theoretical breakthroughs (e.g., improvements in integer factorization methods) and faster computing technology necessitate constant reevaluation and, if required, adaptation of these designs. There are information-theoretically secure systems, such as the one-time pad, that can be proven to be unbreakable even with infinite computing power, but they are significantly more difficult to employ in practice than the best theoretically breakable but computationally secure schemes.

In the Information Age, the advancement of cryptographic technology has produced a variety of legal challenges. Many nations have classified cryptography as a weapon, limiting or prohibiting its use and export due to its potential for espionage and sedition. Investigators can compel the surrender of encryption keys for documents pertinent to an investigation in some places where cryptography is lawful. In the case of digital media, cryptography also plays a key role in digital rights management and copyright infringement conflicts.

The term “cryptograph” (as opposed to “cryptogram”) was first used in the nineteenth century, in Edgar Allan Poe’s short story “The Gold-Bug.”

Until recently, cryptography referred almost exclusively to "encryption," which is the act of turning ordinary data (known as plaintext) into an unreadable format (called ciphertext). Decryption is the opposite of encryption, i.e., going from unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that perform the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a "key." The key is a secret (ideally known only to the communicants) that is used to decrypt the ciphertext. It is commonly a string of characters (ideally short, so that it can be remembered by the user). In formal mathematical terms, a "cryptosystem" is the ordered collection of finite sets of possible plaintexts, ciphertexts, and keys, together with the encryption and decryption procedures that correspond to each key. Keys are crucial both formally and practically, because ciphers with fixed keys can be easily broken using only knowledge of the cipher, making them useless (or even counter-productive) for most purposes.

Historically, ciphers were frequently used directly for encryption or decryption, without additional procedures such as authentication or integrity checks. Cryptosystems are divided into two categories: symmetric and asymmetric. In symmetric systems, which were the only ones known until the 1970s, the same key (the secret key) is used to encrypt and decrypt a message. Data manipulation in symmetric systems is faster than in asymmetric systems, in part because symmetric systems use shorter key lengths. Asymmetric systems encrypt a communication with a "public key" and decrypt it using a related "private key." The use of asymmetric systems improves communication security, owing to the difficulty of determining the relationship between the two keys. RSA (Rivest–Shamir–Adleman) and ECC (Elliptic Curve Cryptography) are two examples of asymmetric systems. The widely used AES (Advanced Encryption Standard), which superseded the earlier DES (Data Encryption Standard), is an example of a high-quality symmetric algorithm. Examples of low-quality symmetric algorithms include the various children's language tangling schemes, such as Pig Latin or other cant, and indeed virtually all cryptographic schemes, however seriously intended, from any source prior to the introduction of the one-time pad early in the twentieth century.

The term "code" is often used colloquially to refer to any technique of encryption or message concealment. However, in cryptography, code refers to the substitution of a code word for a unit of plaintext (i.e., a meaningful word or phrase); for example, "wallaby" replaces "attack at dawn." In contrast, a ciphertext is created by modifying or substituting elements below such a level (a letter, a syllable, or a pair of letters, for example).

Cryptanalysis is the study of ways for decrypting encrypted data without having access to the key required to do so; in other words, it is the study of how to “break” encryption schemes or their implementations.

In English, some people interchangeably use the terms “cryptography” and “cryptology,” while others (including US military practice in general) use “cryptography” to refer to the use and practice of cryptographic techniques and “cryptology” to refer to the combined study of cryptography and cryptanalysis. English is more adaptable than a number of other languages, where “cryptology” (as practiced by cryptologists) is always used in the second sense. Steganography is sometimes included in cryptology, according to RFC 2828.

Cryptolinguistics is the study of language properties that have some relevance in cryptography or cryptology (for example, frequency statistics, letter combinations, universal patterns, and so on).

Cryptography and cryptanalysis have a long history.
Prior to the modern era, cryptography was primarily concerned with message confidentiality (i.e., encryption), that is, the conversion of messages from an intelligible to an incomprehensible form and back again, rendering them unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption was designed to keep the conversations of spies, military leaders, and diplomats private. In recent decades, the discipline has grown to incorporate techniques such as message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs, and secure computation, among other things.

The two most common classical cipher types are transposition ciphers, which systematically rearrange the order of letters in a message (e.g., ‘hello world’ becomes ‘ehlol owrdl’ in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., ‘fly at once’ becomes ‘gmz bu podf’ by replacing each letter with the one that follows it in the alphabet). Simple versions of either have never provided much privacy from determined adversaries. An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet. According to Suetonius, Julius Caesar used it with a shift of three to communicate with his generals. Atbash is an example of an early Hebrew cipher. The oldest known use of cryptography is a carved ciphertext on stone in Egypt (circa 1900 BCE), although it may have been created for the amusement of literate observers rather than to conceal information.
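
As an illustration of how such classical schemes operate, the following Python sketch implements a Caesar-style shift substitution and a trivial adjacent-pair transposition. This is a minimal, insecure illustration only; the helper names are invented for this example.

    import string

    def caesar(text, shift):
        # Substitution: shift each letter a fixed number of positions down the alphabet.
        # (Case is discarded for simplicity; non-letters pass through unchanged.)
        alphabet = string.ascii_lowercase
        table = str.maketrans(alphabet, alphabet[shift:] + alphabet[:shift])
        return text.lower().translate(table)

    def swap_pairs(text):
        # Transposition: rearrange order only, here by swapping adjacent characters.
        chars = list(text)
        for i in range(0, len(chars) - 1, 2):
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    print(caesar("fly at once", 1))   # 'gmz bu podf'
    print(swap_pairs("helloworld"))   # 'ehllworodl'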

Ciphers are reported to have been known to the classical Greeks (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (the practice of concealing even the existence of a message in order to keep it private) was also developed in ancient times. Herodotus recounts a message tattooed on a slave's shaved head and concealed under the regrown hair. More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.

Kautiliyam and Mulavediya are two types of ciphers mentioned in the roughly 2000-year-old Kamasutra of Vatsyayana. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.

According to the Muslim scholar Ibn al-Nadim, Sassanid Persia had two secret scripts: the šāh-dabīrīya (literally “King’s script”), which was used for official correspondence, and the rāz-saharīya, which was used to exchange secret messages with other countries.

In his book The Codebreakers, David Kahn writes that contemporary cryptology began with the Arabs, who were the first to carefully document cryptanalytic procedures. The Book of Cryptographic Messages was written by Al-Khalil (717–786), and it contains the earliest use of permutations and combinations to list all conceivable Arabic words with and without vowels.

Ciphertexts generated by a classical cipher (and by some modern ciphers) reveal statistical information about the plaintext, which can be exploited to break the cipher. After the discovery of frequency analysis, possibly by the Arab mathematician and polymath Al-Kindi (also known as Alkindus) in the 9th century, nearly all such ciphers could be broken by an informed attacker. Classical ciphers are still popular today, albeit largely as puzzles (see cryptogram). Al-Kindi wrote Risalah fi Istikhraj al-Mu’amma (Manuscript for the Deciphering of Cryptographic Messages), which documented the first known use of frequency analysis as a cryptanalytic technique.
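
A sketch of the basic idea behind frequency analysis, assuming a ciphertext produced by a simple substitution cipher: count how often each letter appears and compare the counts against the typical letter frequencies of the language (in English, ‘e’ and ‘t’ dominate). The sample ciphertext below is an invented Caesar-shifted sentence used purely for illustration.

    from collections import Counter

    def letter_frequencies(ciphertext):
        # Count only alphabetic characters, ignoring case, and return relative frequencies.
        letters = [c.lower() for c in ciphertext if c.isalpha()]
        counts = Counter(letters)
        total = sum(counts.values())
        return {letter: count / total for letter, count in counts.most_common()}

    # The most frequent ciphertext letters are candidates for plaintext 'e', 't', 'a', ...
    sample = "Lipps asvph, xlmw mw e wyfwxmxyxmsr gmtliv"
    for letter, freq in list(letter_frequencies(sample).items())[:5]:
        print(f"{letter}: {freq:.2f}")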

Letter frequencies of the language may offer little help against some historical encryption approaches, such as the homophonic cipher, that tend to flatten the frequency distribution. For those ciphers, frequencies of letter groups (n-grams) may still provide an avenue of attack.

Until the discovery of the polyalphabetic cipher, most notably by Leon Battista Alberti around 1467, virtually all ciphers remained vulnerable to cryptanalysis by frequency analysis, although there is some evidence that the technique was already known to Al-Kindi. Alberti's idea was to use different ciphers (i.e., substitution alphabets) for different parts of a message (in the limit, perhaps for each successive plaintext letter). He also invented what is thought to be the first automatic cipher device, a wheel that implemented a partial realization of his design. In the Vigenère cipher, a polyalphabetic cipher, encryption is controlled by a key word, which governs the letter substitution depending on which letter of the key word is used. In the mid-nineteenth century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, but this result was first published about ten years later by Friedrich Kasiski.
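
A minimal sketch of Vigenère encryption, in which the key word determines a different shift for each letter position; the function name and sample values are invented for illustration.

    def vigenere(text, key, decrypt=False):
        # Repeat the key over the message; each key letter selects the shift amount.
        result, key = [], key.lower()
        j = 0
        for ch in text.lower():
            if ch.isalpha():
                shift = ord(key[j % len(key)]) - ord('a')
                if decrypt:
                    shift = -shift
                result.append(chr((ord(ch) - ord('a') + shift) % 26 + ord('a')))
                j += 1
            else:
                result.append(ch)   # leave spaces and punctuation untouched
        return "".join(result)

    ct = vigenere("attack at dawn", "lemon")   # 'lxfopv ef rnhr'
    pt = vigenere(ct, "lemon", decrypt=True)   # 'attack at dawn'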

Although frequency analysis can be a powerful and general technique against many ciphers, encryption has often remained effective in practice, since many would-be cryptanalysts were unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher employed, and perhaps of the key involved, making espionage, bribery, burglary, defection, and other cryptanalytically uninformed approaches more attractive. It was finally recognized in the 19th century that secrecy of a cipher's algorithm is neither a sensible nor a practical safeguard of message security; in fact, any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Secrecy of the key alone should be sufficient for a good cipher to maintain confidentiality under attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is known as Kerckhoffs's Principle; alternatively, and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim: 'the enemy knows the system.'

Many physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher. In medieval times, other aids were invented, such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's wheel cipher (not publicly known at the time, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption devices were invented and patented in the early twentieth century, including rotor machines, famously employed by the German government and military from the late 1920s through World War II. The ciphers implemented by the better-quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after World War I.

Prior to the early twentieth century, cryptography was chiefly concerned with linguistic and lexicographic patterns. Since then, the emphasis has shifted, and cryptography now draws on information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics generally. Cryptography is a form of engineering, but an unusual one, since it deals with active, intelligent, and hostile opposition, whereas other kinds of engineering (such as civil or chemical engineering) need deal only with neutral natural forces. The relationship between cryptographic problems and quantum physics is also being actively investigated.

The development of digital computers and electronics aided cryptanalysis and, at the same time, made considerably more sophisticated ciphers possible. Furthermore, unlike classical ciphers, which only encrypted written-language texts, computers allowed the encryption of any kind of data representable in binary format; this was novel and significant. Computers have thus largely supplanted linguistic cryptography, both in cipher design and in cryptanalysis. Many computer ciphers operate on sequences of binary bits (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. At the same time, computers have assisted cryptanalysis, which has partially compensated for the increased complexity of ciphers. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis; it is typically the case that using a good cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so impractical as to be effectively impossible.

The advent of modern cryptography
Cryptanalysis of the new mechanical devices proved to be both difficult and laborious. During World War II, cryptanalytic work at Bletchley Park in the United Kingdom spurred the development of more efficient means of carrying out repetitive tasks. This culminated in the Colossus, the world's first fully electronic, digital, programmable computer, which was developed to assist in decrypting ciphers generated by the German Army's Lorenz SZ40/42 machine.

Open academic research into cryptography is relatively recent, having begun only in the mid-1970s. Around that time, IBM personnel designed the algorithm that became the Federal (i.e., US) Data Encryption Standard; Whitfield Diffie and Martin Hellman published their key agreement algorithm; and the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally.

There are deep connections with abstract mathematics, since many modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or discrete logarithm problems. Only a handful of cryptosystems have been proven unconditionally secure; Claude Shannon proved that the one-time pad is one of them. A few important algorithms have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA and related systems are secure, but a proof of unbreakability is unavailable because the underlying mathematical problem remains open. In practice, these algorithms are widely used, and most competent observers believe them unbreakable in practice. There exist systems similar to RSA, such as one by Michael O. Rabin, that are provably secure provided factoring n = pq is infeasible, but they are rather impractical. The discrete logarithm problem is the basis for believing that some other cryptosystems are secure, and there are likewise similar, less practical systems that are provably secure relative to the solvability or insolvability of the discrete logarithm problem.

Cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs, in addition to being aware of cryptographic history. For instance, as improvements in computer processing power have increased the reach of brute-force attacks, the required key lengths have grown accordingly. Some cryptographic system designers exploring post-quantum cryptography are already considering the potential effects of quantum computing; the announced imminence of small implementations of these machines may make the need for this preemptive caution more than merely speculative.

Classical cryptography in the modern day

Symmetric (or private-key) cryptography refers to encryption methods in which the sender and receiver share the same key (or, less commonly, keys that are different but related in an easily computable way and kept secret). Until June 1976, this was the only kind of encryption publicly known.

Symmetric-key ciphers are implemented as either block ciphers or stream ciphers. A block cipher operates on fixed-size blocks of plaintext, whereas a stream cipher works on individual characters or bits of input.

The US government has designated the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) as cryptography standards (although DES's certification was eventually withdrawn after AES was adopted). Despite its deprecation as an official standard, DES (particularly its still-approved and considerably more secure triple-DES variant) remains quite popular; it is used in a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. Many other block ciphers have been designed and released, with considerable variation in quality. Many, including some designed by qualified practitioners, such as FEAL, have been thoroughly broken.
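
A brief sketch of symmetric encryption with AES, assuming the third-party Python cryptography package is installed (any comparable library would do). AES-GCM is used here because it provides authenticated encryption, i.e., confidentiality plus an integrity check, rather than encryption alone.

    # Requires: pip install cryptography (third-party package; assumed available)
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
    nonce = os.urandom(12)                      # must be unique per message under a given key
    aesgcm = AESGCM(key)

    ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    assert plaintext == b"attack at dawn"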

Stream ciphers, in contrast to block ciphers, produce an arbitrarily long stream of key material that is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is generated from a hidden internal state that changes as the cipher operates; that internal state is initially set up using the secret key material. RC4 is a widely known stream cipher, although it is now considered insecure. Block ciphers can be used as stream ciphers by generating blocks of keystream (in place of a pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.
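
The sketch below illustrates the stream-cipher idea: a keystream is derived from a secret key and XORed with the plaintext, and applying the same operation again with the same key recovers the plaintext. The hash-based keystream generator is a toy construction invented for this illustration, not a secure cipher.

    import hashlib
    from itertools import count

    def keystream(key: bytes):
        # Toy keystream generator: hash the key together with a running block counter.
        for block_index in count():
            yield from hashlib.sha256(key + block_index.to_bytes(8, "big")).digest()

    def xor_with_keystream(key: bytes, data: bytes) -> bytes:
        # XOR each data byte with the next keystream byte; the same call decrypts.
        ks = keystream(key)
        return bytes(b ^ next(ks) for b in data)

    key = b"a shared secret key"
    ciphertext = xor_with_keystream(key, b"attack at dawn")
    assert xor_with_keystream(key, ciphertext) == b"attack at dawn"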

A third type of cryptographic algorithm is the cryptographic hash function. Hash functions take a message of any length as input and output a short, fixed-length hash, which can be used, for example, in a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, although attacks have so far been demonstrated only against reduced-round variants. The US standards authority therefore thought it “prudent” from a security perspective to develop a new standard to “significantly improve the robustness of NIST’s overall hash algorithm toolkit,” and a hash function design competition was held to select a new US national standard, to be called SHA-3. The competition ended on October 2, 2012, when the National Institute of Standards and Technology (NIST) announced Keccak as the new SHA-3 hash algorithm. Unlike invertible block and stream ciphers, cryptographic hash functions produce a hashed output that cannot be used to recover the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source or to add a layer of security. Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key is used to authenticate the hash value upon receipt; this additional complication blocks an attack scheme against bare digest algorithms, and so has been thought worth the effort.
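
The following sketch uses Python's standard hashlib and hmac modules to compute a SHA-256 digest and a keyed message authentication code; the key and message values are placeholders chosen for illustration.

    import hashlib
    import hmac

    message = b"transfer 100 EUR to account 42"
    secret_key = b"shared secret"   # placeholder key for the example

    # Plain hash: anyone can recompute it; it detects accidental corruption only.
    digest = hashlib.sha256(message).hexdigest()

    # HMAC: only holders of the secret key can produce or verify the tag.
    tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

    # Constant-time comparison on verification avoids timing side channels.
    expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
    assert hmac.compare_digest(tag, expected)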

Symmetric-key cryptosystems use the same key for encryption and decryption of a message, although a message or group of messages can have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps a different key for each ciphertext exchanged as well. The number of keys required therefore grows as the square of the number of network participants (a network of n parties needs n(n − 1)/2 pairwise keys; 10 parties already require 45), which quickly calls for complex key management schemes to keep them all consistent and secret.

Whitfield Diffie and Martin Hellman introduced the concept of public-key (also known as asymmetric key) cryptography in a seminal 1976 paper, in which two distinct but mathematically related keys are used: a public key and a private key. A public key system is constructed so that calculation of one key (the 'private key') from the other (the 'public key') is computationally infeasible, even though the two are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. Public-key cryptography, according to historian David Kahn, is “the most revolutionary new notion in the field since polyalphabetic substitution arose in the Renaissance.”

In a public-key cryptosystem, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption scheme, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman did not devise such a system themselves, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that allows two parties to covertly agree on a shared encryption key. The X.509 standard defines the most commonly used format for public key certificates.
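
A toy Diffie–Hellman exchange with deliberately tiny numbers, showing how the two parties arrive at the same shared secret while exchanging only public values; real deployments use carefully chosen primes of 2048 bits or more.

    # Public parameters (toy values for illustration only)
    p, g = 23, 5

    a = 6    # Alice's private exponent
    b = 15   # Bob's private exponent

    A = pow(g, a, p)   # Alice sends A = g^a mod p  -> 8
    B = pow(g, b, p)   # Bob sends   B = g^b mod p  -> 19

    shared_alice = pow(B, a, p)   # (g^b)^a mod p
    shared_bob = pow(A, b, p)     # (g^a)^b mod p
    assert shared_alice == shared_bob == 2   # both derive the same secret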

Diffie and Hellman's publication sparked widespread academic efforts to find a practical public-key encryption system. The race was eventually won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.

In addition to being the earliest publicly known instances of high-quality public-key algorithms, the Diffie–Hellman and RSA algorithms have been among the most commonly utilized. The Cramer–Shoup cryptosystem, ElGamal encryption, and numerous elliptic curve approaches are examples of asymmetric-key algorithms.
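
To make the RSA idea concrete, here is a toy computation with the classic small-prime textbook example; real keys use primes hundreds of digits long together with proper padding schemes.

    p, q = 61, 53
    n = p * q                  # public modulus: 3233
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, coprime to phi
    d = pow(e, -1, phi)        # private exponent: 2753 (modular inverse; Python 3.8+)

    m = 65                     # message encoded as an integer smaller than n
    c = pow(m, e, n)           # encryption:  c = m^e mod n  -> 2790
    assert pow(c, d, n) == m   # decryption:  m = c^d mod n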

A document published in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that GCHQ cryptographers had anticipated several of these academic developments. Reportedly, James H. Ellis had conceived the principles of asymmetric key cryptography around 1970. In 1973, Clifford Cocks invented a solution that was very similar in design to RSA. Malcolm J. Williamson is credited with having developed the Diffie–Hellman key exchange in 1974.

Public-key cryptography is also used to implement digital signature schemes. A digital signature is reminiscent of an ordinary signature in that it is easy for a user to produce but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another without being detected. Digital signature schemes involve two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs).
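
A short sketch of the sign/verify pattern described above, using an Ed25519 key pair from the third-party Python cryptography package (an assumption; RSA or DSA signatures follow the same two-step pattern).

    # Requires: pip install cryptography (third-party package; assumed available)
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    message = b"approve purchase order 1234"
    signature = private_key.sign(message)      # created with the private key

    try:
        public_key.verify(signature, message)  # checked with the public key
        print("signature valid")
    except InvalidSignature:
        print("signature rejected")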

Public-key algorithms are most often based on the computational complexity of 'hard' problems, frequently from number theory. The hardness of RSA is related to the integer factorization problem, while Diffie–Hellman and DSA are related to the discrete logarithm problem. The security of elliptic curve cryptography is based on number theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially at typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which the message is encrypted with a fast, high-quality symmetric-key algorithm, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed and only the resulting hash is digitally signed.
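
A sketch of the hybrid pattern just described, again assuming the third-party Python cryptography package: the bulk data is encrypted with a fast symmetric AES-GCM session key, and only that short session key is encrypted with the recipient's RSA public key.

    # Requires: pip install cryptography (third-party package; assumed available)
    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Recipient's long-term asymmetric key pair
    recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_public = recipient_private.public_key()

    # Sender: encrypt the bulk data with a fresh symmetric session key...
    session_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"a long message", None)

    # ...and encrypt only the short session key with the public-key algorithm.
    wrapped_key = recipient_public.encrypt(session_key, oaep)

    # Recipient: unwrap the session key, then decrypt the bulk data.
    recovered_key = recipient_private.decrypt(wrapped_key, oaep)
    plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
    assert plaintext == b"a long message"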

Hash Functions in Cryptography

Cryptographic hash functions, unlike ciphers, do not use a key at all: they take a message of any length as input and produce a short, fixed-length hash of its contents. A good hash function makes it infeasible for an attacker to find two different messages that produce the same hash (collision resistance) or to recover an input from its hash (preimage resistance). Hash functions are used, for example, inside digital signature schemes, for verifying the integrity of data retrieved from an untrusted source, and as building blocks for message authentication codes, in which a secret key is combined with the hash so that only key holders can produce or verify the authentication tag. As noted above, MD4 and MD5 are broken, SHA-1 is deprecated, and the SHA-2 and SHA-3 (Keccak) families are the currently recommended standards.

Cryptographic primitives and cryptosystems

Much of the theoretical work in cryptography concerns cryptographic primitives (algorithms with basic cryptographic properties) and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. The primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. The distinction between cryptographic primitives and cryptosystems is, however, somewhat arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions and one-way functions.

One or more cryptographic primitives are often combined to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g., ElGamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. As the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems. In many cases, the cryptosystem's structure involves back-and-forth communication among two or more parties in space (e.g., between the sender and receiver of a secure message) or across time (e.g., cryptographically protected backup data).

To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.

The EITC/IS/CCF Classical Cryptography Fundamentals Certification Curriculum references open-access didactic materials in a video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.
