Domain 3
Eric Conrad, ... Joshua Feldman, in Eleventh Hour CISSP® (Third Edition), 2017

Cornerstone Cryptographic Concepts

Cryptography is secret writing, a type of secure communication understood by the sender and intended recipient only. While it may be known that the data is being transmitted, the content of that data should remain unknown to third parties. Data in motion (moving on a network) and data at rest (stored on a device, such as a disk) may be encrypted for security.

Key Terms

Cryptology is the science of secure communications. Cryptography creates messages with hidden meaning; cryptanalysis is the science of breaking those encrypted messages to recover their meaning. Many use the term cryptography in place of cryptology; however, it is important to remember that cryptology encompasses both cryptography and cryptanalysis. A cipher is a cryptographic algorithm. A plaintext is an unencrypted message. Encryption converts a plaintext to a ciphertext. Decryption turns a ciphertext back into a plaintext.

Confidentiality, Integrity, Authentication, and Nonrepudiation

Cryptography can provide confidentiality (secrets remain secret) and integrity (data is not altered without authorization). It is important to note that it does not directly provide availability. Cryptography can also provide authentication, which proves an identity claim. Additionally, cryptography can provide nonrepudiation, which is an assurance that a specific user performed a specific transaction and that the transaction did not change.

Confusion, Diffusion, Substitution, and Permutation

Diffusion means the order of the plaintext should be “diffused” (or dispersed) in the ciphertext. Confusion means that the relationship between the plaintext and ciphertext should be as confused, or random, as possible. Cryptographic substitution replaces one character with another; this provides confusion. Permutation, also called transposition, provides diffusion by rearranging the characters of the plaintext, anagram-style.
For example, “ATTACKATDAWN” can be rearranged to “CAAKDTANTATW.”

Did You Know?

Strong encryption destroys patterns. If a single bit of plaintext changes, the odds of every bit of resulting ciphertext changing should be 50/50. Any signs of nonrandomness can be clues for a cryptanalyst, hinting at the underlying order of the original plaintext or key.

Cryptographic Strength

Good encryption is strong. For key-based encryption, it should be very difficult (ideally, impossible) to convert a ciphertext back to a plaintext without the key. The work factor describes how long it will take to break a cryptosystem (decrypt a ciphertext without the key). Secrecy of the cryptographic algorithm does not provide strength; in fact, secret algorithms are often proven quite weak. Strong crypto relies on math, not secrecy, to provide strength. Ciphers that have stood the test of time are public algorithms, such as the Triple Data Encryption Standard (TDES) and the Advanced Encryption Standard (AES).

Monoalphabetic and Polyalphabetic Ciphers

A monoalphabetic cipher uses one alphabet, in which a specific letter substitutes for another. A polyalphabetic cipher uses multiple alphabets; for example, “E” substitutes for “X” one round, then “S” the next round. Monoalphabetic ciphers are susceptible to frequency analysis. Polyalphabetic ciphers attempt to address this issue via the use of multiple alphabets.

Exclusive OR (XOR)

Exclusive OR (XOR) is the “secret sauce” behind modern encryption. Combining a key with a plaintext via XOR creates a ciphertext. XORing the same key to the ciphertext restores the original plaintext. XOR math is fast and simple, so simple that it can be implemented with phone relay switches. Two bits are true (or 1) if one or the other (exclusively, not both) is 1. In other words: if two bits are different, the answer is 1 (true). If two bits are the same, the answer is 0 (false). XOR uses a truth table, shown in Table 3.2. This dictates how to combine the bits of a key and plaintext.
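The XOR rule is easy to check in a few lines of Python (an illustrative sketch, not from the book): print the truth table, then encrypt and decrypt one ASCII letter with the same key.

```python
# XOR truth table: the result is 1 if the two bits differ, 0 if they are the same.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {a ^ b}")

plaintext = 0b01000001           # ASCII "A"
key       = 0b01010101           # ASCII "U"
ciphertext = plaintext ^ key     # encrypt: plaintext XOR key
recovered  = ciphertext ^ key    # decrypt: ciphertext XOR the same key
assert recovered == plaintext    # the round trip restores the original
print(f"{ciphertext:08b}")       # 00010100
```

Because XOR is its own inverse, the identical operation serves for both encryption and decryption.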
Table 3.2. XOR Truth Table

Data at Rest and Data in Motion

Cryptography protects data at rest and data in motion, or data in transit. Full disk encryption (also called whole disk encryption) of a magnetic disk drive using software such as BitLocker or PGP Whole Disk Encryption is an example of encrypting data at rest. An SSL or IPsec VPN is an example of encrypting data in motion.

Protocol Governance

Cryptographic protocol governance describes the process of selecting the right method (i.e., cipher) and implementation for the right job, typically on an organization-wide scale. For example, as we will learn later in this chapter, a digital signature provides authentication and integrity, but not confidentiality. Symmetric ciphers are primarily used for confidentiality, and AES is preferable over DES due to its strength and performance.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780128112489000036

Domain 3: Security Engineering (Engineering and Management of Security)
Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Third Edition), 2016

Cornerstone Cryptographic Concepts

Cryptography is secret writing: secure communication that may be understood by the intended recipient only. While the fact that data is being transmitted may be known, the content of that data should remain unknown to third parties. Data in motion (moving on a network) and at rest (stored on a device such as a disk) may be encrypted. The use of cryptography dates back thousands of years, but it is very much a part of our modern world. Mathematics and computers play a critical role in modern cryptography. Fundamental cryptographic concepts are embodied by strong encryption, and must be understood before learning about specific implementations.

Key Terms

Cryptology is the science of secure communications. Cryptography creates messages whose meaning is hidden; cryptanalysis is the science of breaking encrypted messages (recovering their meaning).
Many use the term cryptography in place of cryptology: it is important to remember that cryptology encompasses both cryptography and cryptanalysis. A cipher is a cryptographic algorithm. A plaintext is an unencrypted message. Encryption converts a plaintext to a ciphertext. Decryption turns a ciphertext back into a plaintext.

Confidentiality, Integrity, Authentication and Non-Repudiation

Cryptography can provide confidentiality (secrets remain secret) and integrity (data is not altered in an unauthorized manner): it is important to note that it does not directly provide availability. Cryptography can also provide authentication (proving an identity claim). Additionally, cryptography can provide nonrepudiation, which is an assurance that a specific user performed a specific transaction and that the transaction did not change. The two must be tied together: proving that you signed a contract to buy a car is not useful if the car dealer can increase the cost after you signed the contract. Nonrepudiation means the individual who performed a transaction, such as authenticating to a system and viewing personally identifiable information (PII), cannot repudiate (or deny) having done so afterward.

Confusion, Diffusion, Substitution and Permutation

Diffusion means the order of the plaintext should be “diffused” (or dispersed) in the ciphertext. Confusion means that the relationship between the plaintext and ciphertext should be as confused (or random) as possible. Claude Shannon, the father of information theory, first defined these terms in 1949 in his paper Communication Theory of Secrecy Systems.[17] Cryptographic substitution replaces one character with another; this provides confusion. Permutation (also called transposition) provides diffusion by rearranging the characters of the plaintext, anagram-style. “ATTACKATDAWN” can be rearranged to “CAAKDTANTATW,” for example. Substitution and permutation are often combined.
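Both operations can be sketched in a few lines of Python. This is an illustration only, not the book's example: the Caesar-style shift of 3 and the 3-column split are arbitrary choices, and the transposition shown does not reproduce the specific “CAAKDTANTATW” anagram.

```python
def substitute(text, shift=3):
    """Caesar-style substitution: replace each letter with another (confusion)."""
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in text)

def transpose(text, cols=3):
    """Simple columnar transposition: rearrange letters, anagram-style (diffusion)."""
    return "".join(text[c::cols] for c in range(cols))

plain = "ATTACKATDAWN"
print(substitute(plain))   # DWWDFNDWGDZQ  (every letter replaced)
print(transpose(plain))    # AAAATCTWTKDN  (same letters, reordered)
```

Note that the transposed output is an anagram of the plaintext: nothing is replaced, only rearranged, which is exactly why transposition alone provides diffusion but no confusion.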
While these techniques were used historically (the Caesar Cipher is a substitution cipher), they are still used in combination in modern ciphers such as the Advanced Encryption Standard (AES). Strong encryption destroys patterns. If a single bit of plaintext changes, the odds of every bit of resulting ciphertext changing should be 50/50. Any signs of nonrandomness may be used as clues to a cryptanalyst, hinting at the underlying order of the original plaintext or key.

Note

The dates and names (such as Claude Shannon) associated with cryptographic breakthroughs are generally not testable, unless the inventor’s name appears in the name of the device or cipher. This information is given to flesh out the cryptographic concepts (which are very testable).

Cryptographic Strength

Good encryption is strong: for key-based encryption, it should be very difficult (and ideally impossible) to convert a ciphertext back to a plaintext without the key. The work factor describes how long it will take to break a cryptosystem (decrypt a ciphertext without the key). Secrecy of the cryptographic algorithm does not provide strength: secret algorithms are often proven quite weak. Strong crypto relies on math, not secrecy, to provide strength. Ciphers that have stood the test of time are public algorithms, such as the Triple Data Encryption Standard (TDES) and the Advanced Encryption Standard (AES).

Monoalphabetic and Polyalphabetic Ciphers

A monoalphabetic cipher uses one alphabet: a specific letter (like “E”) is substituted for another (like “X”). A polyalphabetic cipher uses multiple alphabets: “E” may be substituted for “X” one round, and then “S” the next round. Monoalphabetic ciphers are susceptible to frequency analysis. Figure 4.17 shows the frequency of English letters in text. A monoalphabetic cipher that substituted “X” for “E,” “C” for “T,” etc., would be quickly broken using frequency analysis. Polyalphabetic ciphers attempt to address this issue via the use of multiple alphabets.
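Frequency analysis itself takes only a few lines of Python. The sketch below (a toy illustration, not from the book; the sample sentence and shift of 3 are arbitrary) encrypts some English text with a monoalphabetic cipher and counts ciphertext letters: the most frequent ciphertext letter is the image of a high-frequency plaintext letter such as “E.”

```python
from collections import Counter

def substitute(text, shift=3):
    # toy monoalphabetic cipher (a Caesar shift) standing in for any
    # single-alphabet substitution
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
                   for c in text if c.isalpha())

plaintext  = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG THE END".replace(" ", "")
ciphertext = substitute(plaintext)

# Whichever ciphertext letter appears most often almost certainly
# decrypts to one of the most common English letters.
for letter, count in Counter(ciphertext).most_common(3):
    print(letter, count)
```

A polyalphabetic cipher defeats this simple counting because each plaintext letter maps to several different ciphertext letters, flattening the frequency profile.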
Figure 4.17. Frequency of English Letters

Modular Math

Modular math lies behind much of cryptography: simply put, modular math shows you what remains (the remainder) after division. It is sometimes called “clock math” because we use it to tell time: assuming a 12-hour clock, 6 hours past 9:00 PM is 3:00 AM. In other words, 9 + 6 is 15, divided by 12 leaves a remainder of 3. As we will see later, methods like the running-key cipher use modular math. There are 26 letters in the English alphabet; adding the letter “Y” (the 25th letter) to “C” (the third letter) equals “B” (the 2nd letter). In other words, 25 + 3 equals 28, and 28 divided by 26 leaves a remainder of 2. It is like moving in a circle (such as a clock face): once you hit the letter “Z,” you wrap around back to “A.”

Exclusive Or (XOR)

Exclusive Or (XOR) is the “secret sauce” behind modern encryption. Combining a key with a plaintext via XOR creates a ciphertext. XORing the same key to the ciphertext restores the original plaintext. XOR math is fast and simple, so simple that it can be implemented with phone relay switches (as we will see with the Vernam Cipher). Two bits are true (or 1) if one or the other (exclusively, not both) is 1. In other words: if two bits are different, the answer is 1 (true). If two bits are the same, the answer is 0 (false). XOR uses a truth table, shown in Table 4.3. This dictates how to combine the bits of a key and plaintext.

Table 4.3. XOR Truth Table

If you were to encrypt the plaintext “ATTACK AT DAWN” with a key of “UNICORN,” you would XOR the bits of each letter together, letter by letter. We will encrypt and then decrypt the first letter to demonstrate XOR math. “A” is binary 01000001 and “U” is binary 01010101. We then XOR each bit of the plaintext to the key, using the truth table in Table 4.3. This results in a ciphertext of 00010100, shown in Table 4.4.

Table 4.4. 01000001 XORed to 01010101

Now let us decrypt the ciphertext 00010100 with a key of “U” (binary 01010101).
We XOR each bit of the key (01010101) with the ciphertext (00010100), again using the truth table in Table 4.3. We recover our original plaintext of 01000001 (ASCII “A”), as shown in Table 4.5.

Table 4.5. 00010100 XORed to 01010101

Data at Rest and Data in Motion

Cryptography is able to protect both data at rest and data in motion (aka data in transit). Full disk encryption (also called whole disk encryption) of a magnetic disk drive using software such as TrueCrypt or PGP Whole Disk Encryption is an example of encrypting data at rest. An SSL or IPsec VPN is an example of encrypting data in motion.

Protocol Governance

Cryptographic protocol governance describes the process of selecting the right method (cipher) and implementation for the right job, typically at an organization-wide scale. For example: as we will learn later in this chapter, a digital signature provides authentication and integrity, but not confidentiality. Symmetric ciphers are primarily used for confidentiality, and AES is preferable over DES for strength and performance reasons (which we will also discuss later). Organizations must understand the requirements of a specific control, select the proper cryptographic solution, and ensure that factors such as speed, strength, cost, and complexity (among others) are properly weighed.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780128024379000047

Domain 5
Eric Conrad, ... Joshua Feldman, in CISSP Study Guide (Second Edition), 2012

Cornerstone Cryptographic Concepts

Fundamental cryptographic concepts are embodied by all strong encryption and must be understood before learning about specific implementations.

Key terms

Cryptology is the science of secure communications. Cryptography creates messages whose meaning is hidden; cryptanalysis is the science of breaking encrypted messages (recovering their meaning). Many use the term cryptography in place of cryptology, but it is important to remember that cryptology encompasses both cryptography and cryptanalysis. A cipher is a cryptographic algorithm. A plaintext is an unencrypted message. Encryption converts the plaintext to a ciphertext. Decryption turns a ciphertext back into a plaintext.

Confidentiality, integrity, authentication, and nonrepudiation

Cryptography can provide confidentiality (secrets remain secret) and integrity (data is not altered in an unauthorized manner). It is important to note that it does not directly provide availability. Cryptography can also provide authentication (proving an identity claim). Additionally, cryptography can provide nonrepudiation, which is an assurance that a specific user performed a specific transaction and that the transaction did not change. The two must be tied together. Proving that you signed a contract to buy a car is not useful if the car dealer can increase the cost after you signed the contract. Nonrepudiation means the individual who performed a transaction, such as authenticating to a system and viewing personally identifiable information (PII), cannot repudiate (or deny) having done so afterward.

Confusion, diffusion, substitution, and permutation

Diffusion means the order of the plaintext should be “diffused” (or dispersed) in the ciphertext. Confusion means that the relationship between the plaintext and ciphertext should be as confused (or random) as possible.
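The “every output bit flips with 50/50 odds” behavior of strong cryptography can be observed directly. The Python sketch below is an illustration of the principle, not from the book; it uses SHA-256 (a hash, standing in for any strong cryptographic transform) and counts how many of the 256 output bits change when a single input bit is flipped. The count lands close to half.

```python
import hashlib

def bits_changed(a: bytes, b: bytes) -> int:
    """Count differing bits between the SHA-256 digests of two inputs."""
    da = hashlib.sha256(a).digest()
    db = hashlib.sha256(b).digest()
    return sum(bin(x ^ y).count("1") for x, y in zip(da, db))

msg     = b"ATTACKATDAWN"
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip one bit of the first byte

# Roughly half of the 256 digest bits should differ.
print(bits_changed(msg, flipped))
```

Any consistent deviation from the 50% mark would itself be a pattern, and therefore a clue to a cryptanalyst.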
Claude Shannon, the father of information theory, first defined these terms in 1949 in his paper “Communication Theory of Secrecy Systems.” [1] Cryptographic substitution replaces one character with another, which provides confusion. Permutation (also called transposition) provides diffusion by rearranging the characters of the plaintext, anagram-style. “ATTACKATDAWN” can be rearranged to “CAAKDTANTATW,” for example. Substitution and permutation are often combined. Although these techniques were used historically (the Caesar cipher is a substitution cipher), they are still used in combination in modern ciphers, such as the Advanced Encryption Standard (AES). Strong encryption destroys patterns. If a single bit of plaintext changes, the odds of every bit of resulting ciphertext changing should be 50/50. Any signs of nonrandomness may be used as clues to a cryptanalyst, hinting at the underlying order of the original plaintext or key.

Note

The dates and names (such as Claude Shannon) associated with cryptographic breakthroughs are generally not testable, unless the inventor's name appears in the name of the device or cipher. This information is given to flesh out the cryptographic concepts (which are very testable).

Cryptographic strength

Good encryption is strong: for key-based encryption, it should be very difficult (and ideally impossible) to convert a ciphertext back to a plaintext without the key. The work factor describes how long it will take to break a cryptosystem (decrypt a ciphertext without the key). Secrecy of the cryptographic algorithm does not provide strength; in fact, secret algorithms are often proven quite weak. Strong crypto relies on math, not secrecy, to provide strength. Ciphers that have stood the test of time are public algorithms, such as the Triple Data Encryption Standard (TDES) and the Advanced Encryption Standard (AES).

Monoalphabetic and polyalphabetic ciphers

A monoalphabetic cipher uses one alphabet.
A specific letter (e.g., “E”) is substituted for another (e.g., “X”). A polyalphabetic cipher uses multiple alphabets: “E” may be substituted for “X” one round and then “S” the next round. Monoalphabetic ciphers are susceptible to frequency analysis. Figure 6.1 shows the frequency of English letters in text. A monoalphabetic cipher that substituted “X” for “E,” “C” for “T,” etc., would be quickly broken using frequency analysis. Polyalphabetic ciphers attempt to address this issue via the use of multiple alphabets.

Figure 6.1. Frequency of English Letters.

Modular math

Modular math lies behind much of cryptography; simply put, modular math shows you what remains (the remainder) after division. It is sometimes called “clock math” because we use it to tell time. Assuming a 12-hour clock, 6 hours past 9:00 PM is 3:00 AM. In other words, 9 + 6 is 15, divided by 12 leaves a remainder of 3. As we will see later, methods like the running-key cipher use modular math. There are 26 letters in the English alphabet; adding the letter “Y” (the 25th letter) to “C” (the third letter) equals “B” (the 2nd letter). In other words, 25 + 3 equals 28, and 28 divided by 26 leaves a remainder of 2. It is like moving in a circle (such as a clock face): once you hit the letter “Z,” you wrap around back to “A.”

Exclusive Or (XOR)

Exclusive Or (XOR) is the “secret sauce” behind modern encryption. Combining a key with a plaintext via XOR creates a ciphertext. XORing the ciphertext with the same key restores the original plaintext. XOR math is fast and simple, so simple that it can be implemented with phone relay switches (as we will see with the Vernam cipher). Two bits are true (or 1) if one or the other (exclusively, not both) is 1; in other words, if two bits are different, the answer is 1 (true). If two bits are the same, the answer is 0 (false). XOR uses a truth table, shown in Table 6.1. This dictates how to combine the bits of a key and plaintext.

Table 6.1. XOR Truth Table

If you were to encrypt the plaintext “ATTACK AT DAWN” with a key of “UNICORN,” you would XOR the bits of each letter together, letter by letter. We will encrypt and then decrypt the first letter to demonstrate XOR math. “A” is binary 01000001 and “U” is binary 01010101. We then XOR each bit of the plaintext to the key, using the truth table in Table 6.1. This results in a ciphertext of 00010100, shown in Table 6.2.

Table 6.2. 01000001 XORed to 01010101

Now let us decrypt the ciphertext 00010100 with a key of “U” (binary 01010101). We XOR each bit of the key (01010101) with the ciphertext (00010100), again using the truth table in Table 6.1. We recover our original plaintext of 01000001 (ASCII “A”), as shown in Table 6.3.

Table 6.3. 00010100 XORed to 01010101

Types of cryptography

There are three primary types of modern encryption: symmetric, asymmetric, and hashing. Symmetric encryption uses one key; the same key encrypts and decrypts. Asymmetric cryptography uses two keys; if you encrypt with one key, you may decrypt with the other. Hashing is a one-way cryptographic transformation using an algorithm (and no key). Cryptographic protocol governance describes the process of selecting the right method (cipher) and implementation for the right job, typically at an organization-wide scale. For example, as we will learn later in this chapter, a digital signature provides authentication and integrity but not confidentiality. Symmetric ciphers are primarily used for confidentiality, and AES is preferable over DES for strength and performance reasons (which we will also discuss later). Organizations must understand the requirements of a specific control, select the proper cryptographic solution, and ensure that factors such as speed, strength, cost, and complexity, among others, are properly weighed.

Data at rest and data in motion

Cryptography is able to protect both data at rest and data in motion (data in transit).
Full disk encryption (also called whole disk encryption) of a magnetic disk drive using software such as TrueCrypt or PGP® Whole Disk Encryption is an example of encrypting data at rest. An SSL or IPsec VPN is an example of encrypting data in motion.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9781597499613000066

An Introduction To Cryptography
In Next Generation SSH2 Implementation, 2009

Frequently Asked Questions

Q: What is cryptology?
A: Cryptology is the combined study of cryptography and cryptanalysis. Cryptology is concerned with the coding and decoding of messages.

Q: What is cryptography?
A: Cryptography is the science of designing cryptosystems for encryption and decryption.

Q: What is plaintext?
A: The term plaintext refers to the original message prior to encryption.

Q: What is ciphertext?
A: Ciphertext is the name used in cryptography for an encrypted message.

Q: What is an encryption key?
A: An encryption key is a piece of information that controls how a cipher transforms a plaintext into a ciphertext, and back.

Q: What is a symmetric key cryptosystem?
A: A symmetric key encryption system is one where the same key is used for both encrypting and decrypting a message.

Q: What is an asymmetric key cryptosystem?
A: An asymmetric key cryptosystem is one where two separate keys are used for encryption and decryption. These keys are called a public and private key pair.

Q: What is a one-way cryptographic hash?
A: A one-way hash is a mathematical function that generates a fixed-size output from an arbitrarily sized string. It is called a one-way hash because deriving the input string from the fixed-length output is considered to be impossible.

Q: What is a digital signature?
A: A digital signature is a message digest encrypted with the sender's private key. A digital signature is used to verify the authenticity and integrity of the message.

Q: What is cryptographic key management?
A: Cryptographic key management refers to processes related to the secure generation, distribution, storage, and revocation of keys.

Q: What is cryptanalysis?
A: Cryptanalysis is the study of how to break cryptography. It is concerned with deciphering messages without knowledge of the cryptosystem.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9781597492836000039

Public Key Infrastructure
Terence Spies, in Computer and Information Security Handbook, 2009

17. Alternative Key Management Models

PKI systems can be used for encryption as well as digital signatures, but these two applications have different operational characteristics. In particular, systems that use PKIs for encryption require that an encrypting party has the ability to locate certificates for its desired set of recipients. In digital signature applications, a signer only requires access to his own private key and certificate. The certificates required to verify the signature can be sent with the signed document, so there is no requirement for verifiers to locate arbitrary certificates. These difficulties have been identified as factors contributing to the difficulty of practical deployment of PKI-based encryption systems such as S/MIME. In 1984, Adi Shamir [27] proposed an Identity-Based Encryption (IBE) system for email encryption. In the identity-based model, any string can be mathematically transformed into a public key, typically using some public information from a server. A message can then be encrypted with this key. To decrypt, the message recipient contacts the server and requests a corresponding private key. The server is able to mathematically derive a private key, which is returned to the recipient. Shamir disclosed how to perform a signature operation in this model but did not give a solution for encryption. This approach has significant advantages over the traditional PKI model of encryption. The most obvious is the ability to send an encrypted message without locating a certificate for a given recipient. There are other points of differentiation:

• Key recovery.
In the traditional PKI model, if a recipient loses the private key corresponding to a certificate, all messages encrypted to that certificate's public key cannot be decrypted. In the IBE model, the server can recompute lost private keys. If messages must be recoverable for legal or other business reasons, PKI systems typically add mandatory secondary public keys to which senders must also encrypt messages.

• Group support. Since any string can be transformed into a public key, a group name can be supplied instead of an individual identity. In the traditional PKI model, groups are handled either by expanding a group to a set of individuals at encrypt time or by issuing group certificates. Group certificates pose serious difficulties with revocation, since individuals can only be removed from a group as often as revocation information is updated.

In 2001, Boneh and Franklin gave the first fully described, secure, and efficient method for IBE [28]. This was followed by a number of variant techniques, including Hierarchical Identity-Based Encryption (HIBE) and Certificateless Encryption. HIBE allows multiple key servers to be used, each of which controls part of the namespace used for encryption. Certificateless encryption [29] adds the ability to encrypt to an end user using an identity, but in such a way that the key server cannot read messages. IBE systems have been commercialized and are the subject of standards under the IETF (RFC 5091) and IEEE (1363.3).

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780123743541000261

Hardware Obfuscation
Swarup Bhunia, Mark Tehranipoor, in Hardware Security, 2019

14.1.1.1 Definition

The term “obfuscation” refers to the method of obscuring or covering the actual substance of information, or the functional behavior of a product, in order to protect the inherent intellectual property. In cryptology and software, an obfuscator Z is formally characterized as a “compiler” that reconstructs a program P into an obfuscated form Z(P).
Z(P) must have the same functionality as P while being incomprehensible to an attacker aiming to construct P from Z(P). Obfuscation in the context of a hardware design, i.e., “hardware obfuscation,” is concerned with the protection of hardware IPs. These IPs are reusable blocks of logic, memory, or analog circuits, which are owned by their developers and used by themselves or by other SoC design houses. Though the techniques for obfuscating hardware and software differ significantly, the primary goal of obfuscation remains unchanged: protection of IP from bad actors that are capable of piracy, reverse engineering, and malicious modification.

Read full chapter
URL: https://www.sciencedirect.com/science/article/pii/B9780128124772000198

From Algorithms to Architectures
In Top-Down Digital VLSI Design, 2015

3.7.5 Nonlinear or general loops

A nonlinear difference equation implies that the principle of superposition does not hold. The most general case of a first-order recursion is described by

y(k) = f(y(k − 1), x(k))   (3.69)

and can be unfolded an arbitrary number of times. For simplicity we will limit our discussion to a single unfolding step, i.e. to p = 2, where

y(k) = f(f(y(k − 2), x(k − 1)), x(k))   (3.70)

The associated DDG of fig.3.39c shows that loop unfolding per se does not relax the original timing constraint; the only difference is that one can afford two cycles for two operations f instead of one cycle for one operation. As confirmed by fig.3.39d, there is no room for any meaningful retiming in this case.

Figure 3.39. Architectural alternatives for nonlinear time-variant first-order feedback loops. Original DDG (a) and isomorphic architecture (b), DDG after unfolding by a factor of p = 2 (c), same DDG with retiming added on top (d). DDG reorganized for an associative function f (e), pertaining architecture after pipelining and retiming (f), DDG with the two functional blocks for f combined into f″ (g), pertaining architecture after pipelining and retiming (h).
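Equations (3.69) and (3.70) compute the same sequence; unfolding merely regroups the work across iterations. The small Python check below makes this concrete (an illustration with an arbitrary nonlinear f of our own choosing, not from the book; it assumes an even number of input samples):

```python
def f(y, x):
    # an arbitrary nonlinear state-update function, stand-in for any f
    return (y * y + x) % 257

def run_original(x_seq, y0=1):
    # eq. (3.69): one update of the recursion per input sample
    y, out = y0, []
    for x in x_seq:
        y = f(y, x)
        out.append(y)
    return out

def run_unfolded(x_seq, y0=1):
    # eq. (3.70): unfolded by p = 2, consuming two samples per iteration
    y, out = y0, []
    for k in range(0, len(x_seq) - 1, 2):
        y1 = f(y, x_seq[k])        # inner f of eq. (3.70)
        y  = f(y1, x_seq[k + 1])   # outer f
        out.extend([y1, y])
    return out

xs = [3, 1, 4, 1, 5, 9]
assert run_original(xs) == run_unfolded(xs)  # unfolding changes scheduling, not results
```

As the text notes, the unfolded version still performs two evaluations of f in sequence, so by itself it buys no speed; its value lies in exposing structure for the reorganizations discussed next.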
Yet, the unfolded recursion can serve as a starting point for more useful reorganizations. Assume function f is known to be associative. Following an associativity transform, the DDG is redrawn as shown in fig.3.39e. The computation so becomes amenable to pipelining and retiming, see fig.3.39f, which cuts the longest path in half when compared to the original architecture of fig.3.39b. Even more speedup can be obtained from higher unfolding degrees; the price to pay is multiplied circuit size and extra latency, though. In summary, architecture, performance, and cost figures resemble those found for linear computations. The situation is definitely more difficult when f is not associative. Still, it is occasionally possible to relax the loop constraint to some extent by playing a trick. Reconsider fig.3.39c and think of the two occurrences of f being combined into an aggregate computation

y(k) = f″(y(k − 2), x(k − 1), x(k))   (3.71)

as sketched in fig.3.39g. If that aggregate computation can be made to require less than twice as much time as the original computation, then the bottleneck gets somewhat alleviated. This is because it should then be possible to insert a pipeline register into the datapath unit for f″ so that the maximum path length in either of the two stages becomes shorter than the longest delay in a datapath that computes f alone:

tlp(f″) = max(tlp(f″1), tlp(f″2)) < tlp(f)   (3.72)

More methods for speeding up general time-variant first-order feedback loops are examined in [77]. One technique, referred to as expansion or look-ahead, is closely related to aggregate computation. The idea is to process two or more samples in each recursive step so that an integer multiple of the sampling interval becomes available for carrying out the necessary computations. In other terms, the recursive computation is carried out at a lower pace but on wider data words.
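The look-ahead idea is easiest to see where the aggregate of eq. (3.71) can be flattened algebraically. The Python sketch below is our illustration, not the book's: it uses a linear recursion (whereas this section is chiefly about nonlinear loops, where no such closed form exists), with arbitrary coefficient A and modulus M, so that f″ collapses to a single expression with the constant A·A precomputed — two samples advance per loop step.

```python
A, M = 5, 1009  # arbitrary coefficient and modulus for the toy recursion

def f(y, x):
    # first-order recursion y(k) = A*y(k-1) + x(k)  (mod M)
    return (A * y + x) % M

def f2(y, x1, x2):
    # look-ahead aggregate per eq. (3.71): f(f(y, x1), x2) flattened to
    # A*A*y + A*x1 + x2, where A*A is a precomputable constant, so the
    # aggregate costs less than two sequential evaluations of f
    return (A * A * y + A * x1 + x2) % M

y_ref = y_agg = 7
xs = [2, 3, 5, 8, 1, 4]
for k in range(0, len(xs), 2):
    y_ref = f(f(y_ref, xs[k]), xs[k + 1])   # two sequential steps of f
    y_agg = f2(y_agg, xs[k], xs[k + 1])     # one aggregate look-ahead step
    assert y_ref == y_agg
```

For the nonlinear, non-analytical functions discussed next (such as block ciphers), no comparable flattening of f″ is available, which is precisely why those feedback loops resist acceleration.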
This approach should be considered when the combinational logic is not amenable to pipelining, for example because it is implemented as a table lookup in a ROM. The limiting factor is that the size of the lookup table (LUT) tends to increase dramatically. Yet another approach, termed the concurrent block technique, groups the incoming data stream into blocks of several samples and makes the processing of these blocks independent of each other. While data processing within the blocks remains sequential, it so becomes possible to process the different blocks concurrently. The unified algebraic transformation approach promoted in [78] combines both universal and algebraic transforms to make the longest path independent of problem size in computations such as recursive filtering, the recursive least squares algorithm, and singular value decomposition. Any of the various architectural transforms that helps to successfully introduce a higher degree of parallel processing into recursive computations takes advantage of algorithmic properties such as linearity, associativity, fixed coefficients, limited word width, or a small set of register states. If none of these applies, we can't help but agree with the authors of [77].

Observation 3.9
When the state size is large and the recurrence is not a closed-form function of specific classes, our methods for generating a high degree of concurrency cannot be applied.

Example
Cryptology provides us with a vivid example of the implications of nonlinear, nonanalytical feedback loops. Consider a block cipher that works in electronic code book (ECB) mode as depicted in fig.3.41a. The algorithm implements a combinational function

y(k) = c(x(k), u(k))

where u(k) denotes the key and k the block number or time index. However complex the function c may be, there is no fundamental obstacle to pipelining or to replication in the datapath.

Figure 3.41. DDGs for three block ciphering modes. Combinational operation in ECB mode (a) vs.
time-variant nonlinear feedback loop in CBC mode (b), and CBC-8 operation (c). Unfortunately, ECB is cryptologically weak, as two identical blocks of plaintext result in two identical blocks of ciphertext because y(k) = y(m) if x(k) = x(m) and u(k) = u(m). If a plaintext to be encrypted contains sufficient repetition, the ciphertext necessarily carries and betrays patterns from the original plaintext. Fig.3.40 nicely illustrates this phenomenon. Figure 3.40. A computer graphics image in clear text, ciphered in ECB mode, and ciphered in CBC-1 mode (from left to right, Tux by Larry Ewing). To prevent this from happening, block ciphers are typically used with feedback. In cipher block chaining (CBC) mode, the previous ciphertext block gets added to the plaintext before encryption takes place, see fig.3.41b. The improved cipher algorithm so becomes y(k) = c(x(k) ⊕ y(k − 1), u(k)) and is sometimes referred to as CBC-1 mode because y(k − 1) is being used for feedback. From an architectural point of view, however, this first-order recursion is awkward because it offers little room for reorganizing the computation. This is particularly true in ciphering applications, where the nonlinear functions involved are chosen to be complex, labyrinthine, and certainly not analytical. The fact that word width (block size) is on the order of 64 or 128 bits makes everything worse. Inserting pipeline registers into the computational unit for c does not help since this would alter the algorithm and the ciphertext. Throughput in CBC mode is thus limited to a fraction of what is obtained in ECB mode. Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780128007303000034 Theoretical Research. Thomas W. Edgar, David O. Manz, in Research Methods for Cyber Security, 2017. Background: A theory is a proposed model of the inner workings of a system, an explanation of why it behaves in a specific way. The word is often used colloquially to mean a guess.
However, in science, a theory represents a foundational piece of knowledge around which research, and even fields of research, are built. A scientific theory follows the lifecycle of the scientific process. It starts as a belief based on observations. This belief is a cognitive model that can be formulated with language. Through iterations of research, a cognitive model becomes a formal model. A formal definition of a theory is a mathematical representation of the behavior of a system. A theory that has significant empirical support and is widely accepted becomes a law. A law is a theory of a system's behavior that is accepted as accurate. Formal research is one method of formalizing a cognitive model. The mathematical techniques of formal theoretical methods enable you to define and then explore a formal model. Information theory, cryptography, cryptanalysis, and cryptology have strong theoretical underpinnings, supporting advanced mathematical analysis and evaluation. This is not generally true of all the disciplines that fall under the banner of cyber security science, especially disciplines focusing on adversarial behaviors. Dig Deeper: Terminology. Theory, theorems, axioms, lemmas, and so on: each research field develops its own specific terminology. Since cyber security is such a nascent field, there has not been sufficient time to mature a language to go along with the research. The field does, however, borrow from other domains, and for theory it borrows especially from mathematics. In mathematics, theory can be a body of knowledge, but for the purposes of this chapter, the concept of mathematical theory is more akin to a mathematical model, which uses the language and structure of math to describe a system. In mathematics and logic, a theorem is a statement that has been or can be proven true. This is only applicable in deductive contexts. These statements can take the form of a hypothesis that can be (again mathematically, deductively) proven true.
In observational and experimental research, the inductive analog is a scientific principle or law. An axiom is a concept or statement that is meant to be taken as true; this is similar to the explicit assumptions in experimental and observational research. A lemma is an intermediate step or component of a larger theory. There are analogs between theoretical and experimental/observational research because both strive to achieve the shared scientific objectives of reproducibility, validatability, falsifiability, and so on. The first question one should ask in pursuing theoretical research is: why not another research method? Can the topic or problem you are interested in exploring be refined into a testable statement that can lead to an experiment? Or are you curious about behavior or phenomenology in cyberspace that can lend itself to an observational study? If the answer to both is no (because you cannot get the data to study, and do not have enough precision or resources for an experiment), then theoretical research might be the right path. For example, if a researcher wants to explore a setting where the average person has access to a computer capable of 100 petaFLOPS, then theory would be the only practical approach. Simulation would likely not be able to keep up with every person having the computing power of the top supercomputer circa 2016. Another example might be research into engineering techniques for processing and memory storage that far surpass today's engineering limits. This sort of research is highly valuable because, as innovation progresses, the engineering will catch up, and the designs and theory of today can be tested and implemented tomorrow. Another type of research that lends itself to theoretical methods is one where experimentation is all but impossible. For example, extrasolar astronomy and cosmology, for the time being, have no applicable engineering or applied research possibilities.
This leaves only observation and theory, and for that field the cycle of theory development and observational refutation can be a very long one indeed. In cyber security, the challenge with engineering might still apply, but typically the lifecycle can be much quicker. As we have mentioned several times, one of the key challenges in cyber security research is its inherent adversarial nature. Other challenges include the dynamic nature of the environment (new devices, technologies, and configurations are introduced all the time). These characteristics make for a complex system to study, experiment on, or reason over. Science is based on observation, experimentation, mistakes, and guesses. Similarly, the process of constructing a theory can use all of those as well. Often theorists are considered "Ivory Tower" elites who do not need to consider or interact with the real world because their theories are "so far ahead" of or "advanced" beyond the day-to-day. This myth, like many, may have some grounding in reality, but for most theorists the truth is far more prosaic. A key observation might lead to a new theory, or a frustrating experiment might cause the experimentalist to revisit the motivating theories anew. Like real life, science is often messy and ambiguous, and the line between the various research methods can become mutable and blur over time, within one research effort, or for one researcher. The goal of theoretical research is to better understand or predict cyber security. The distinction comes when observational or experimental methods cannot be applied, for whatever reason. The theorist, in the absence of these techniques, must make do with what is at hand, from mathematical formalisms, to software simulation, to social models, to help explain and vet their theories. Characteristics of a Good Theory: A good theory should be a coherent, parsimonious, and systematic explanation or view of phenomena, events, situations, and behaviors. Beyond that, it must be predictive.
It should encourage testing and the expansion and evaluation of hypotheses. A good theory is designed to be testable and should be falsifiable through observations, experiments, and logical reasoning. A theory should be open to refinement and reform in light of additional results, outcomes, and information. Good theory should focus on effects, not on causes; it should inform on the outcomes of the next experiment, and so on. A theory is not a statement of fact but instead speaks to likelihood. Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780128053492000078 Resistance Strategies. Timothy J. Shimeall, Jonathan M. Spring, in Introduction to Information Security, 2014. Introduction: This chapter continues the discussion of resistance strategies with a discussion of encryption and modern cryptography. Cryptography may be defined as the study of rigorous scientific methods of obscuring information from those who are not meant to read it. This is a more colloquial definition than other authors may use. For example, modern cryptography has also been defined as "the scientific study of techniques for securing digital information, transactions, and distributed computations" [1, p. 3]. Encryption and decryption are the two sides of cryptography. Encryption obscures information, and decryption recovers it. Cryptography is a subset of the scientific field of cryptology, which also includes the study of attacking and breaking encryption, called cryptanalysis. A discussion of cryptanalysis is beyond the scope of this book. Additionally, although the history of cryptography goes back over 2,500 years, past ancient Rome, this book focuses on modern cryptography. "Modern" in cryptography begins about 1980 [1]. Cryptography is an extremely useful tool in securing computers and networks. It is not, however, a panacea or a solution to all problems.
Encryption is a particularly useful tool for resisting an adversary who has the ability to read the defender's data, either on the network or on a computer. The first section discusses the general principles of cryptography, as well as some limitations. It also discusses cryptography in contrast to a related but distinct field: steganography, the hiding of information. The older kind of encryption in modern cryptography is symmetric encryption, which the second section focuses on, along with various methods for using it. The newer cryptographic method, asymmetric encryption, is discussed next. Asymmetric encryption is particularly important in the discussion of key management. The final section briefly covers host identification. Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9781597499699000080 What is the difference between ciphertext and plaintext? If you can make sense of what is written, then it is plaintext. Ciphertext, or encrypted text, is a series of seemingly random letters and numbers that humans cannot make any sense of. An encryption algorithm takes in a plaintext message, runs the algorithm on the plaintext, and produces a ciphertext.
What do you call the method of changing from plaintext to ciphertext? Encryption: the process of converting plaintext to ciphertext (occasionally you may see it called 'encipherment'). Decryption: the process of reverting ciphertext to plaintext (occasionally 'decipherment').
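These two definitions can be made concrete with a toy repeating-key XOR cipher (a hypothetical sketch for illustration only; repeating-key XOR is not secure). Because XOR is its own inverse, the same function performs both encipherment and decipherment:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"ATTACKATDAWN"
key = b"SECRET"
ciphertext = xor_cipher(plaintext, key)   # encryption (encipherment)
recovered = xor_cipher(ciphertext, key)   # decryption (decipherment)
```

Applying the key a second time cancels the first application, so `recovered` equals the original plaintext.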
What is confusion in cryptography? Confusion means that each binary digit (bit) of the ciphertext should depend on several parts of the key, obscuring the connection between the two. The property of confusion hides the relationship between the ciphertext and the key.
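Confusion and diffusion together produce the avalanche effect: flipping a single input bit should change roughly half of the output bits. A quick sketch of this property, using SHA-256 as a stand-in (a hash function rather than a cipher, but it is engineered for the same avalanche behavior):

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the bits that differ between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

m1 = b"ATTACKATDAWN"
m2 = b"ATTACKATDAWO"  # 'O' is 'N' with its lowest bit flipped
flipped = bit_diff(hashlib.sha256(m1).digest(),
                   hashlib.sha256(m2).digest())
# flipped is typically close to 128 of the 256 output bits
```

Any consistent deviation from the 50/50 expectation would be a nonrandomness clue of the kind a cryptanalyst looks for.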
What is the term used in cryptography for the message after encryption? Ciphertext. Which of the following shows the word "CAT" encrypted with the Caesar cipher with a key of 1? DBU. True or False: A message in unencrypted form is called ciphertext. False; an unencrypted message is called plaintext.
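The Caesar-cipher answer above can be checked with a few lines (the helper name is our own):

```python
def caesar(text: str, shift: int) -> str:
    """Shift each uppercase letter forward by `shift` places, wrapping at Z."""
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
                   for c in text)

print(caesar("CAT", 1))  # each letter moves one place: C->D, A->B, T->U
```

Decryption is simply the same shift in reverse, i.e. `caesar(ciphertext, -1)`.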
