EITC/IS/CCF Classical Cryptography Fundamentals is the European IT Certification programme on theoretical and practical aspects of classical cryptography, including both private-key and public-key cryptography, with an introduction to practical ciphers widely used on the Internet, such as RSA.
The curriculum of the EITC/IS/CCF Classical Cryptography Fundamentals covers an introduction to private-key cryptography, modular arithmetic and historical ciphers, stream ciphers, random numbers, the One-Time Pad (OTP) unconditionally secure cipher (assuming a solution to the key distribution problem, such as the one offered by Quantum Key Distribution, QKD), linear feedback shift registers, the Data Encryption Standard (the DES cipher, including encryption, key schedule and decryption), the Advanced Encryption Standard (AES, introducing Galois-field-based cryptography), applications of block ciphers (including their modes of operation), consideration of multiple encryption and brute-force attacks, and an introduction to public-key cryptography covering number theory, the Euclidean algorithm, Euler’s Phi function and Euler’s theorem, as well as an introduction to the RSA cryptosystem and efficient exponentiation. The following structure encompasses comprehensive video didactic content as a reference for this EITC Certification.
Cryptography refers to ways of secure communication in the presence of an adversary. Cryptography, in a broader sense, is the process of creating and analyzing protocols that prevent third parties or the general public from accessing private (encrypted) messages. Modern classical cryptography is based on several main features of information security such as data confidentiality, data integrity, authentication, and non-repudiation. In contrast to quantum cryptography, which is based on radically different quantum physics rules that characterize nature, classical cryptography refers to cryptography based on classical physics laws. The fields of mathematics, computer science, electrical engineering, communication science, and physics all meet in classical cryptography. Electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications are all examples of cryptography applications.
Prior to the current era, cryptography was almost synonymous with encryption, turning information from readable to unintelligible nonsense. To prevent attackers from gaining access to an encrypted message, the sender only shares the decoding process with the intended receivers. The names Alice (“A”) for the sender, Bob (“B”) for the intended recipient, and Eve (“eavesdropper”) for the adversary are frequently used in cryptography literature.
Cryptography methods have become increasingly complex, and their applications more varied, since the development of rotor cipher machines in World War I and the introduction of computers in World War II.
Modern cryptography is strongly reliant on mathematical theory and computer science practice; cryptographic methods are built around computational hardness assumptions, making them difficult for any opponent to break in practice. While breaking into a well-designed system is theoretically possible, doing so in practice is impossible. Such schemes are referred to as “computationally safe” if they are adequately constructed; nevertheless, theoretical breakthroughs (e.g., improvements in integer factorization methods) and faster computing technology necessitate constant reevaluation and, if required, adaptation of these designs. There are information-theoretically safe systems, such as the one-time pad, that can be proven to be unbreakable even with infinite computing power, but they are significantly more difficult to employ in practice than the best theoretically breakable but computationally secure schemes.
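The unconditional security of the one-time pad mentioned above can be illustrated with a short sketch, assuming messages and keys are byte strings combined with XOR:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The key must be truly random, at least as long as the message,
    # and never reused -- these conditions are what give the one-time
    # pad its information-theoretic security.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # one fresh random key per message
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)  # XOR with the same key is self-inverse
assert recovered == message
```

Reusing or shortening the key destroys the security guarantee, which is why key distribution is the pad’s central practical problem.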
In the Information Age, the advancement of cryptographic technology has produced a variety of legal challenges. Many nations have classified cryptography as a weapon, limiting or prohibiting its use and export due to its potential for espionage and sedition. Investigators can compel the surrender of encryption keys for documents pertinent to an investigation in some places where cryptography is lawful. In the case of digital media, cryptography also plays a key role in digital rights management and copyright infringement conflicts.
The term “cryptograph” (as opposed to “cryptogram”) was first used in the nineteenth century, in Edgar Allan Poe’s short story “The Gold-Bug.”
Until recently, cryptography referred almost exclusively to “encryption,” the process of converting ordinary information (known as plaintext) into an unintelligible form (called ciphertext). Decryption is the reverse, moving from unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that perform the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a “key.” The key is a secret (ideally known only to the communicants), usually a string of characters (ideally short so that it can be remembered by the user), which is needed to decrypt the ciphertext. In formal mathematical terms, a “cryptosystem” is the ordered list of elements of finite possible plaintexts, ciphertexts, keys, and the encryption and decryption algorithms that correspond to each key. Keys are important both formally and in actual practice, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used, and are therefore useless (or even counter-productive) for most purposes.
Historically, ciphers were often used directly for encryption or decryption, without additional procedures such as authentication or integrity checks. Cryptosystems fall into two categories: symmetric and asymmetric. In symmetric systems, the only ones known until the 1970s, the same key (the secret key) is used to encrypt and decrypt a message. Because symmetric systems use shorter key lengths, data manipulation in symmetric systems is faster than in asymmetric systems. Asymmetric systems encrypt a message with a “public key” and decrypt it with a related “private key.” The use of asymmetric systems enhances the security of communication, largely because the relation between the two keys is very hard to discover. Examples of asymmetric systems include RSA (Rivest–Shamir–Adleman) and ECC (Elliptic Curve Cryptography). Examples of high-quality symmetric algorithms include the widely used AES (Advanced Encryption Standard), which replaced the older DES (Data Encryption Standard). Examples of low-quality symmetric algorithms include the assorted children’s language-tangling schemes, such as Pig Latin or other cant, and indeed effectively all cryptographic schemes, however seriously intended, from any source prior to the invention of the one-time pad early in the twentieth century.
Colloquially, the term “code” is often used to mean any method of encryption or concealment of meaning. In cryptography, however, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, “wallaby” replaces “attack at dawn”). A ciphertext, in contrast, is created by modifying or substituting an element below such a level (a letter, a syllable, or a pair of letters, for example).
Cryptanalysis is the study of ways for decrypting encrypted data without having access to the key required to do so; in other words, it is the study of how to “break” encryption schemes or their implementations.
In English, some people interchangeably use the terms “cryptography” and “cryptology,” while others (including US military practice in general) use “cryptography” to refer to the use and practice of cryptographic techniques and “cryptology” to refer to the combined study of cryptography and cryptanalysis. English is more adaptable than a number of other languages, where “cryptology” (as practiced by cryptologists) is always used in the second sense. Steganography is sometimes included in cryptology, according to RFC 2828.
Cryptolinguistics is the study of language properties that have some relevance in cryptography or cryptology (for example, frequency statistics, letter combinations, universal patterns, and so on).
Cryptography and cryptanalysis have a long history.
Prior to the modern era, cryptography was primarily concerned with message confidentiality (i.e., encryption)—the conversion of messages from an intelligible to an incomprehensible form and again, rendering them unreadable by interceptors or eavesdroppers without secret knowledge (namely the key needed for decryption of that message). Encryption was designed to keep the conversations of spies, military leaders, and diplomats private. In recent decades, the discipline has grown to incorporate techniques such as message integrity checking, sender/receiver identity authentication, digital signatures, interactive proofs, and secure computation, among other things.
The two most common classical cipher types are transposition ciphers, which systematically rearrange the order of letters in a message (e.g., ‘hello world’ becomes ‘ehlol owrdl’ in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., ‘fly at once’ becomes ‘gmz bu podf’ by replacing each letter with the one following it in the Latin alphabet). Simple versions of either have never provided much privacy from cunning adversaries. The Caesar cipher was an early substitution cipher in which each letter in the plaintext was replaced by a letter a fixed number of positions down the alphabet. According to Suetonius, Julius Caesar used it with a shift of three to communicate with his generals. Atbash, an early Hebrew cipher, is another example. The oldest known use of cryptography is a carved ciphertext on stone in Egypt (circa 1900 BCE), though it may have been done for the enjoyment of literate spectators rather than to conceal information.
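The Caesar shift described above can be sketched in a few lines of Python (an illustrative toy, not a secure cipher):

```python
def caesar(text: str, shift: int) -> str:
    # Shift each letter by `shift` positions, wrapping around the
    # 26-letter alphabet; non-letters pass through unchanged.
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

assert caesar("attack at dawn", 3) == "dwwdfn dw gdzq"   # encrypt
assert caesar("dwwdfn dw gdzq", -3) == "attack at dawn"  # decrypt by shifting back
```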
Ciphers are reported to have been known to the Classical Greeks (e.g., the scytale transposition cipher claimed to have been used by the Spartan military). Steganography (the practice of concealing even the existence of a message in order to keep it private) was also invented in ancient times. Herodotus relates that a message was tattooed on a slave’s shaved head and hidden beneath the regrown hair. More modern instances of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.
Kautiliyam and Mulavediya are two types of ciphers mentioned in the Kamasutra of Vātsyāyana, an Indian text some 2000 years old. In the Kautiliyam, the cipher letter substitutions are based on phonetic relations, such as vowels becoming consonants. In the Mulavediya, the cipher alphabet consists of pairing letters and using the reciprocal ones.
According to the Muslim scholar Ibn al-Nadim, Sassanid Persia had two secret scripts: the šāh-dabīrīya (literally “King’s script”), which was used for official correspondence, and the rāz-saharīya, which was used to exchange secret messages with other countries.
In his book The Codebreakers, David Kahn writes that contemporary cryptology began with the Arabs, who were the first to carefully document cryptanalytic procedures. The Book of Cryptographic Messages was written by Al-Khalil (717–786), and it contains the earliest use of permutations and combinations to list all conceivable Arabic words with and without vowels.
Ciphertexts generated by a classical cipher (as well as some modern ciphers) reveal statistical information about the plaintext, which can be utilized to break the cipher. Nearly all such ciphers could be broken by an intelligent attacker after the discovery of frequency analysis, possibly by the Arab mathematician and polymath Al-Kindi (also known as Alkindus) in the 9th century. Classical ciphers are still popular today, albeit largely as puzzles (see cryptogram). Risalah fi Istikhraj al-Mu’amma (Manuscript for the Deciphering Cryptographic Messages) was written by Al-Kindi and documented the first known usage of frequency analysis cryptanalysis techniques.
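A minimal sketch of frequency-analysis-style key recovery against a Caesar-shifted English phrase, under the simplifying assumption that the most frequent plaintext letter is ‘e’, might look like this:

```python
from collections import Counter

def most_common_letter(text: str) -> str:
    # Count letter occurrences, ignoring case and non-letters,
    # and return the single most frequent one.
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    return counts.most_common(1)[0][0]

# "meet me at the secret tree" Caesar-shifted by 3:
ciphertext = "phhw ph dw wkh vhfuhw wuhh"
top = most_common_letter(ciphertext)        # 'h' dominates this ciphertext
guessed_shift = (ord(top) - ord('e')) % 26  # assume the top letter was plaintext 'e'
assert guessed_shift == 3
```

Real frequency analysis compares the whole observed distribution against known language statistics rather than a single letter, but the principle is the same.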
Some historical encryption schemes, such as the homophonic cipher, tend to flatten the frequency distribution of the ciphertext and so resist attacks based on single-letter frequencies. For those ciphers, frequencies of letter groups (n-grams) may still provide a line of attack.
Until the discovery of the polyalphabetic cipher, most notably by Leon Battista Alberti around 1467, virtually all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique, though there is some evidence that the polyalphabetic idea was already known to Al-Kindi. Alberti’s innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel that implemented a partial realization of his design. In the Vigenère cipher, a polyalphabetic cipher, encryption is controlled by a key word, which governs the letter substitution depending on which letter of the key word is used. In the mid-nineteenth century Charles Babbage showed that the Vigenère cipher was vulnerable to Kasiski examination, though Friedrich Kasiski, who published his findings about a decade later, is credited with the first published account of the method.
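The Vigenère scheme described above can be sketched as follows, using the classic textbook example of enciphering ATTACKATDAWN with the key LEMON:

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    # Each letter is shifted by the corresponding key letter; the key
    # repeats cyclically, giving a polyalphabetic substitution.
    sign = -1 if decrypt else 1
    out, k = [], 0
    for ch in text:
        if ch.isalpha():
            shift = ord(key[k % len(key)].lower()) - ord('a')
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + sign * shift) % 26 + base))
            k += 1
        else:
            out.append(ch)
    return ''.join(out)

ct = vigenere("attackatdawn", "lemon")
assert ct == "lxfopvefrnhr"
assert vigenere(ct, "lemon", decrypt=True) == "attackatdawn"
```

Because the same key letter recurs every len(key) positions, Kasiski examination looks for repeated ciphertext fragments to deduce the key length and then attacks each position with ordinary frequency analysis.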
Although frequency analysis can be a powerful and general technique against many ciphers, encryption has often still been effective in practice, as many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, and other cryptanalytically uninformed approaches more attractive. It was finally recognized in the 19th century that secrecy of a cipher’s algorithm is neither a sensible nor a practical safeguard of message security; in fact, any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key alone should be sufficient for a good cipher to maintain confidentiality under attack. This fundamental principle was first explicitly stated by Auguste Kerckhoffs in 1883 and is known as Kerckhoffs’s Principle; alternatively, and more bluntly, Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, restated it as Shannon’s Maxim—’the enemy knows the system.’
To help with ciphers, many physical gadgets and assistance have been utilized. The scytale of ancient Greece, a rod allegedly employed by the Spartans as a transposition cipher tool, may have been one of the first. Other aids were devised in medieval times, such as the cipher grille, which was also used for steganography. With the development of polyalphabetic ciphers, more sophisticated aids such as Alberti’s cipher disk, Johannes Trithemius’ tabula recta scheme, and Thomas Jefferson’s wheel cipher became available (not publicly known, and reinvented independently by Bazeries around 1900). Many mechanical encryption/decryption systems were devised and patented in the early twentieth century, including rotor machines, which were famously employed by the German government and military from the late 1920s to World War II. Following WWI, the ciphers implemented by higher-quality instances of these machine designs resulted in a significant rise in cryptanalytic difficulty.
Cryptography was primarily concerned with linguistic and lexicographic patterns prior to the early twentieth century. Since then, the focus has evolved, and cryptography now includes aspects of information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics in general. Cryptography is a type of engineering, but it’s unique in that it deals with active, intelligent, and hostile resistance, whereas other types of engineering (such as civil or chemical engineering) merely have to deal with natural forces that are neutral. The link between cryptography difficulties and quantum physics is also being investigated.
The development of digital computers and electronics made much more sophisticated ciphers possible. Furthermore, unlike traditional ciphers, which only encrypted written language texts, computers allowed the encryption of any kind of data representable in binary format; this was novel and crucial. Computers have thus supplanted linguistic cryptography, both in cipher design and in cryptanalysis. Many computer ciphers operate on sequences of binary bits (sometimes in groups or blocks), unlike classical and mechanical schemes, which generally manipulate traditional characters (i.e., letters and digits) directly. At the same time, computers have also aided cryptanalysis, which has partially compensated for the increased cipher complexity. Nonetheless, good modern ciphers have stayed ahead of cryptanalysis: it is typically the case that using a good cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), whereas breaking it requires an effort many orders of magnitude greater, and vastly greater than that required for any classical cipher, effectively rendering cryptanalysis impossible.
The advent of modern cryptography
Cryptanalysis of the new mechanical devices proved to be both difficult and laborious. During WWII, cryptanalytic activities at Bletchley Park in the United Kingdom fostered the invention of more efficient means of carrying out repetitive tasks. The Colossus, the world’s first fully electronic, digital, programmable computer, was developed to aid in the decryption of ciphers generated by the German Army’s Lorenz SZ40/42 machine.
Cryptography is a relatively new field of open academic research, having only begun in the mid-1970s. IBM employees devised the algorithm that became the Federal (i.e., US) Data Encryption Standard; Whitfield Diffie and Martin Hellman published their key agreement algorithm; and Martin Gardner’s Scientific American column published the RSA algorithm. Cryptography has since grown in popularity as a technique for communications, computer networks, and computer security in general.
There are deep connections with abstract mathematics, since several modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or discrete logarithm problems. Only a handful of cryptosystems have been proven to be unconditionally secure; Claude Shannon proved that the one-time pad is one of them. A few important algorithms have been proven secure under certain assumptions. For example, the infeasibility of factoring extremely large integers is the basis for believing that RSA and some other systems are secure, but a proof of unbreakability is unattainable as long as the underlying mathematical problem remains open. In practice, these are widely used, and most competent observers believe they are unbreakable in practice. There exist systems similar to RSA, such as one developed by Michael O. Rabin, that are provably secure provided that factoring n = pq is intractable, but which are less practical. The discrete logarithm problem is the basis for believing that some other cryptosystems are secure, and there are similar, less practical systems that are provably secure relative to the solvability or insolvability of the discrete logarithm problem.
Cryptographic algorithm and system designers must consider possible future advances when working on their ideas, in addition to being cognizant of cryptographic history. For example, as computer processing power has improved, the breadth of brute-force attacks has grown, hence the required key lengths have grown as well. Some cryptographic system designers exploring post-quantum cryptography are already considering the potential consequences of quantum computing; the announced imminence of modest implementations of these machines may make the need for preemptive caution more than just speculative.
Classical cryptography in the modern day
Symmetric (or private-key) cryptography is a type of encryption in which the sender and receiver share the same key (or, less commonly, in which their keys are different but related in an easily computable way, with both kept secret). Until June 1976, this was the only kind of encryption publicly known.
Symmetric-key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext, as opposed to individual characters or bits, which is the input form used by a stream cipher.
The US government has designated the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) as cryptography standards (albeit DES’s certification was eventually withdrawn once the AES was established). DES (especially its still-approved and significantly more secure triple-DES variation) remains popular despite its deprecation as an official standard; it is used in a wide range of applications, from ATM encryption to e-mail privacy and secure remote access. There have been a slew of different block ciphers invented and released, with varying degrees of success. Many, including some designed by qualified practitioners, such as FEAL, have been extensively broken.
Stream ciphers, unlike block ciphers, generate an arbitrarily long stream of key material that is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. The output stream of a stream cipher is generated from a hidden internal state that changes as the cipher operates. That internal state is initially set up using the secret key material. RC4 is a widely used example of a stream cipher. Block ciphers can be employed as stream ciphers by generating blocks of a keystream (in place of a pseudorandom number generator) and applying an XOR operation to each bit of the plaintext with each bit of the keystream.
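The idea of turning a block primitive into a stream cipher by XORing keystream blocks into the plaintext can be sketched as below. SHA-256 stands in here for a real block cipher such as AES, so this is an illustration of the counter-mode construction, not a secure implementation:

```python
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Counter-mode idea: derive successive keystream blocks from
    # (key, nonce, counter), then XOR the keystream with the data.
    # NOTE: SHA-256 is used as a stand-in block function for
    # illustration only; a real cipher would use AES or similar.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(d ^ k for d, k in zip(data, out))

key, nonce = b"sixteen byte key", b"unique-nonce"
ct = keystream_xor(key, nonce, b"stream from a block cipher")
# XOR with the same keystream decrypts:
assert keystream_xor(key, nonce, ct) == b"stream from a block cipher"
```

As with the one-time pad, reusing the same (key, nonce) pair for two messages leaks their XOR, which is why modes like CTR require a fresh nonce per message.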
Message authentication codes (MACs) are similar to cryptographic hash functions, except that a secret key is used to authenticate the hash value upon receipt; this extra complexity blocks certain attacks that work against bare digest algorithms, and so is regarded as worth the effort. Cryptographic hash functions themselves are a third type of cryptographic algorithm and are discussed in their own section below.
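Using Python’s standard hmac module, the idea of a keyed MAC can be sketched as follows (the key and messages are illustrative placeholders):

```python
import hashlib
import hmac

key = b"shared secret key"
message = b"wire transfer: 100 EUR to Bob"

# The sender computes a MAC tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag and compares in constant time;
# without the key, an attacker cannot forge a valid tag.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)

# Any modification of the message yields a different tag.
tampered = hmac.new(key, b"wire transfer: 9000 EUR to Eve",
                    hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, tampered)
```

The constant-time comparison (hmac.compare_digest) matters in practice: a naive string comparison can leak, via timing, how many leading bytes of a forged tag are correct.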
Although a message or group of messages can have a different key than others, symmetric-key cryptosystems use the same key for encryption and decryption of a message. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Ideally, each distinct pair of communicating parties should share a different key, and perhaps a different key for each message exchanged as well. The number of keys required increases as the square of the number of network participants, which very quickly requires complex key management schemes to keep them all consistent and secret.
Whitfield Diffie and Martin Hellman introduced the concept of public-key (also known as asymmetric-key) cryptography in a seminal 1976 paper, in which two different but mathematically related keys—a public key and a private key—are used. A public-key system is so constructed that calculating one key (the ‘private key’) from the other (the ‘public key’) is computationally infeasible, even though the two are necessarily related. Instead, both keys are generated secretly, as an interrelated pair. Public-key cryptography, according to the historian David Kahn, is “the most revolutionary new notion in the field since polyalphabetic substitution arose in the Renaissance.”
In a public-key cryptosystem, the public key may be freely distributed, while its paired private key must remain secret. In a public-key encryption scheme, the public key is used for encryption, while the private or secret key is used for decryption. While Diffie and Hellman could not themselves construct such a system, they showed that public-key cryptography was indeed possible by presenting the Diffie–Hellman key exchange protocol, a solution that allows two parties to agree on a shared secret key over an insecure channel. The X.509 standard defines the most commonly used format for public key certificates.
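The Diffie–Hellman exchange can be sketched with deliberately tiny, insecure parameters; real deployments use groups of 2048 bits or more, or elliptic curves:

```python
# Toy Diffie-Hellman key exchange (illustrative parameters only).
p = 23   # public prime modulus (far too small for real use)
g = 5    # public generator

a = 6                      # Alice's secret exponent
b = 15                     # Bob's secret exponent
A = pow(g, a, p)           # Alice sends A = g^a mod p over the open channel
B = pow(g, b, p)           # Bob sends B = g^b mod p over the open channel

# Each side combines the other's public value with its own secret:
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob  # both arrive at g^(ab) mod p
```

An eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is believed intractable for properly sized groups.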
The publication of Diffie and Hellman sparked widespread academic efforts to find a practical public-key encryption system. The race was finally won in 1978 by Ronald Rivest, Adi Shamir, and Len Adleman, whose solution has since become known as the RSA algorithm.
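A toy version of RSA key generation, encryption, and decryption, using the small textbook primes 61 and 53 (far too small for real security), can be sketched as:

```python
# Toy RSA; real keys use primes of 1024+ bits each.
p, q = 61, 53
n = p * q                  # public modulus, 3233
phi = (p - 1) * (q - 1)    # Euler's phi of n, 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: d*e = 1 (mod phi); Python 3.8+

m = 65                     # message encoded as a number smaller than n
c = pow(m, e, n)           # encrypt with the public key (n, e)
assert pow(c, d, n) == m   # decrypt with the private exponent d
```

Security rests on the difficulty of recovering p and q (and hence d) from the public n; real implementations additionally use randomized padding such as OAEP, since raw “textbook RSA” as above is malleable and deterministic.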
In addition to being the earliest publicly known instances of high-quality public-key algorithms, the Diffie–Hellman and RSA algorithms have been among the most commonly utilized. The Cramer–Shoup cryptosystem, ElGamal encryption, and numerous elliptic curve approaches are examples of asymmetric-key algorithms.
A document released in 1997 by the Government Communications Headquarters (GCHQ), a British intelligence organization, revealed that GCHQ cryptographers had anticipated several of these academic developments. Reportedly, asymmetric key cryptography had been invented by James H. Ellis around 1970. In 1973, Clifford Cocks invented a solution that was very similar in design to RSA. In 1974, Malcolm J. Williamson is claimed to have developed the Diffie–Hellman key exchange.
Public-key cryptography is also used for implementing digital signature schemes. A digital signature is reminiscent of an ordinary signature in that it is easy for the user to produce but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; this means they cannot be ‘moved’ from one document to another without being detected. Digital signature schemes comprise two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs).
The computational complexity of “hard” problems, such as those arising from number theory, is frequently used to develop public-key methods. The integer factorization problem is related to the hardness of RSA, while the discrete logarithm problem is related to Diffie–Hellman and DSA. The security of elliptic curve cryptography is based on elliptic curve number theoretic problems. Most public-key algorithms include operations like modular multiplication and exponentiation, which are substantially more computationally expensive than the techniques used in most block ciphers, especially with normal key sizes, due to the difficulty of the underlying problems. As a result, public-key cryptosystems are frequently hybrid cryptosystems, in which the message is encrypted with a fast, high-quality symmetric-key algorithm, while the relevant symmetric key is sent with the message but encrypted with a public-key algorithm. Hybrid signature schemes, in which a cryptographic hash function is computed and only the resulting hash is digitally signed, are also commonly used.
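The modular exponentiation at the heart of these operations is made efficient by the square-and-multiply method; a minimal sketch (equivalent to Python’s built-in three-argument pow) is:

```python
def modexp(base: int, exp: int, mod: int) -> int:
    # Square-and-multiply: process the exponent bit by bit, so that
    # computing base^exp mod m takes O(log exp) multiplications
    # instead of exp - 1 repeated multiplications.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                  # current exponent bit is set
            result = (result * base) % mod
        base = (base * base) % mod   # square for the next bit
        exp >>= 1
    return result

assert modexp(5, 117, 19) == pow(5, 117, 19)
assert modexp(2, 10, 1000) == 24   # 1024 mod 1000
```

Reducing modulo m after every multiplication keeps the intermediate values bounded by m², which is what makes exponentiation with thousand-bit operands tractable at all.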
Hash Functions in Cryptography
Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input and output a short, fixed-length hash, which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function that is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The US National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but the US standards authority nonetheless thought it “prudent” from a security perspective to develop a new standard to “significantly improve the robustness of NIST’s overall hash algorithm toolkit.” A hash function design competition was therefore held to select a new US national standard, to be called SHA-3, by 2012. The competition ended on October 2, 2012, when the National Institute of Standards and Technology (NIST) announced Keccak as the new SHA-3 hash algorithm. Unlike block and stream ciphers, which are invertible, cryptographic hash functions produce a hashed output that cannot be used to recover the original input data. Cryptographic hash functions are used to verify the authenticity of data retrieved from an untrusted source, or to add a layer of security.
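The fixed-length output and avalanche behaviour of a modern hash function can be demonstrated with Python’s hashlib:

```python
import hashlib

# A hash function maps input of any length to a fixed-length digest.
digest = hashlib.sha256(b"attack at dawn").hexdigest()
assert len(digest) == 64   # SHA-256 always yields 256 bits (64 hex characters)

# Avalanche effect: a small change in the input produces an
# unrelated-looking digest.
other = hashlib.sha256(b"attack at dusk").hexdigest()
assert digest != other
```

Collision resistance is the stronger claim that no one can feasibly find *any* two inputs with equal digests; it cannot be demonstrated by example, only conjectured and attacked, which is how MD5 and SHA-1 fell.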
Cryptographic primitives and cryptosystems
Much of cryptography’s theoretical work focuses on cryptographic primitives—algorithms with basic cryptographic properties—and how they relate to other cryptographic problems. These basic primitives are then used to build more complex cryptographic tools. The primitives provide fundamental properties that are used to develop more complex tools called cryptosystems, or cryptographic protocols, which guarantee one or more high-level security properties. The boundary between cryptographic primitives and cryptosystems is somewhat arbitrary, however; the RSA algorithm, for example, is sometimes regarded as a cryptosystem and sometimes as a primitive. Common examples of cryptographic primitives include pseudorandom functions and one-way functions.
A cryptographic system, or cryptosystem, is created by combining one or more cryptographic primitives into a more complicated algorithm. Cryptosystems (e.g., El-Gamal encryption) are designed to provide particular functionality (e.g., public key encryption) while guaranteeing certain security properties (e.g., chosen-plaintext attack (CPA) security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system’s security properties. A sophisticated cryptosystem can be derived from a combination of several more rudimentary cryptosystems, as the distinction between primitives and cryptosystems is somewhat arbitrary. In many cases, the cryptosystem’s structure involves back-and-forth communication among two or more parties in space (e.g., between the sender and the receiver of a secure message) or across time (e.g., cryptographically protected backup data).
To acquaint yourself in detail with the certification curriculum, you can expand and analyze the table below.
The EITC/IS/CCF Classical Cryptography Fundamentals Certification Curriculum references open-access didactic materials in video form. The learning process is divided into a step-by-step structure (programmes -> lessons -> topics) covering relevant curriculum parts. Unlimited consultancy with domain experts is also provided.
For details on the Certification procedure check How it Works.
Main lecture notes
Understanding Cryptography by Christof Paar and Jan Pelzl, Online Course in the form of PDF Slides
Understanding Cryptography by Christof Paar and Jan Pelzl, Online Course in the form of Videos
Main classical cryptography book reference
Understanding Cryptography by Christof Paar and Jan Pelzl
Additional applied classical cryptography book reference
Handbook of Applied Cryptography by A. Menezes, P. van Oorschot and S. Vanstone