
The CIA Triad and Its Real-World Application

What is the CIA triad?

Information security revolves around three key principles: confidentiality, integrity and availability (CIA). Depending on the environment, application, context or use case, one of these principles might be more important than the others. For example, for a financial agency, confidentiality of information is paramount, so it would likely encrypt any classified document being transferred electronically to prevent unauthorized people from reading its contents. On the other hand, an organization like an internet marketplace would be severely damaged if its network were out of commission for an extended period, so it might focus on strategies for ensuring high availability over concerns about encrypted data.


Confidentiality

Confidentiality is concerned with preventing unauthorized access to sensitive information. The access could be intentional, such as an intruder breaking into the network and reading the information, or it could be unintentional, due to the carelessness or incompetence of individuals handling the information. The two main ways to ensure confidentiality are cryptography and access control.

Cryptography

Encryption helps organizations meet the need to secure information from both accidental disclosure and internal and external attack attempts. The effectiveness of a cryptographic system in preventing unauthorized decryption is referred to as its strength. A strong cryptographic system is difficult to crack. Strength can also be expressed as a work factor, which is an estimate of the amount of time and effort that would be necessary to break a system.

A system is considered weak if it allows weak keys, has defects in its design or is easily decrypted. Many systems available today are more than adequate for business and personal use, but they are inadequate for sensitive military or governmental applications. Cryptographic algorithms fall into two broad categories: symmetric and asymmetric.

Symmetric Algorithms

Symmetric algorithms require both the sender and receiver of an encrypted message to have the same key and processing algorithms. Symmetric algorithms generate a symmetric key (sometimes called a secret key or private key) that must be protected; if the key is lost or stolen, the security of the system is compromised. Here are some of the common standards for symmetric algorithms (a short code sketch follows the list):

  • Data Encryption Standard (DES). DES has been used since the mid-1970s. For years, it was the primary standard used in government and industry, but it is now considered insecure because of its small key size: the key is 64 bits long, but eight of those bits are used for parity, leaving only 56 bits of actual key. AES is now the primary standard.
  • Triple-DES (3DES). 3DES is a technological upgrade of DES. 3DES is still used, even though AES is the preferred choice for government applications. 3DES is considerably harder to break than many other systems, and it’s more secure than DES. It increases the key length to 168 bits (using three 56-bit DES keys).
  • Advanced Encryption Standard (AES). AES has replaced DES as the standard used by U.S. governmental agencies. It uses the Rijndael algorithm, named for its developers, Joan Daemen and Vincent Rijmen. AES supports key sizes of 128, 192 and 256 bits, with 128 bits being the default.
  • Ron’s Cipher or Ron’s Code (RC). RC is a family of ciphers produced by RSA Laboratories and named for its author, Ron Rivest. The best-known members are RC4, RC5 and RC6. RC5 uses a key size of up to 2,048 bits and is considered a strong system. RC4 is a streaming cipher that works with key sizes between 40 and 2,048 bits; it was widely used in WEP/WPA wireless encryption and in older versions of SSL and TLS. Some BitTorrent clients also use RC4 to obfuscate the header and the stream, which makes it more difficult for service providers that throttle torrent traffic to recognize that torrent files are being moved about.
  • Blowfish and Twofish. Blowfish is an encryption system designed by Bruce Schneier that performs a 64-bit block cipher at very fast speeds. It is a symmetric block cipher that can use variable-length keys (from 32 bits to 448 bits). Twofish, designed by a team led by Schneier, is quite similar but works on 128-bit blocks; its distinctive feature is its complex key schedule.
  • International Data Encryption Algorithm (IDEA). IDEA was developed by a Swiss consortium and uses a 128-bit key. This product is similar in speed and capability to DES, but it’s more secure. IDEA is used in Pretty Good Privacy (PGP), a public domain encryption system many people use for email.
  • One-time pads. One-time pads are the only truly unbreakable cryptographic implementation. They are so secure for two reasons. First, they use a key that is as long as the plaintext message, so there is no pattern in the key application for an attacker to exploit. Second, one-time pad keys are used only once and then discarded. So even if a one-time pad cipher could somehow be broken, that same key would never be used again, and knowledge of it would be useless.
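
To make the shared-key idea concrete, here is a minimal sketch of symmetric authenticated encryption using AES-GCM from the third-party Python cryptography package. The key handling, sample message and variable names are illustrative assumptions, not a prescribed implementation.

```python
# Minimal AES-GCM sketch using the third-party "cryptography" package
# (pip install cryptography). Both parties must already share the same key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the shared secret key
nonce = os.urandom(12)                     # unique per message; never reuse with the same key
plaintext = b"Quarterly report - internal use only"

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# The receiver needs the same key (the nonce may travel alongside the ciphertext).
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```

Anyone who obtains the key can decrypt every message, which is exactly why the text above stresses protecting the symmetric key.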

Asymmetric Algorithms

Asymmetric algorithms use two keys: a public key and a private key. The sender uses the public key to encrypt a message, and the receiver uses the private key to decrypt it. The public key can be truly public, or it can be a secret between the two parties. The private key, however, is kept private; only the owner (receiver) knows it. If someone wants to send you an encrypted message, they use your public key to encrypt it, and you use your private key to decrypt it. If both keys become available to a third party, the encryption system won’t protect the privacy of the message. The real “magic” of these systems is that the public key cannot be used to decrypt a message: if Bob sends Alice a message encrypted with Alice’s public key, it does not matter if everyone else on Earth has Alice’s public key, since that key cannot decrypt the message. Here are some of the common standards for asymmetric algorithms (a short code sketch follows the list):

  • RSA. RSA is named after its inventors, Ron Rivest, Adi Shamir and Leonard Adleman. The RSA algorithm is an early public key encryption system that uses large integers as the basis for the process. It’s widely implemented, and it has become a de facto standard. RSA works with both encryption and digital signatures. RSA is used in many environments, including Secure Sockets Layer (SSL), and it can be used for key exchange.
  • Diffie-Hellman. Whitfield Diffie and Martin Hellman are considered the founders of the public/private key concept. Their Diffie-Hellman algorithm is used primarily to generate a shared secret key across public networks. The process isn’t used to encrypt or decrypt messages; it’s used merely for the creation of a symmetric key between two parties.
  • Elliptic Curve Cryptography (ECC). ECC provides functionality similar to RSA but uses smaller key sizes to obtain the same level of security. ECC encryption systems are based on the idea of using points on a curve combined with a point at infinity and the difficulty of solving discrete logarithm problems.
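
As a rough illustration of the public/private key split described above, here is a minimal RSA encryption sketch using the Python cryptography package. The key size, padding choices and sample message are assumptions made for the example.

```python
# Minimal RSA public-key encryption sketch (third-party "cryptography" package).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Alice generates a key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Bob encrypts with Alice's public key; only Alice's private key can decrypt.
ciphertext = public_key.encrypt(b"Meet at noon", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"Meet at noon"
```

Even if the ciphertext and Alice's public key are both intercepted, the message stays confidential, which is the property the paragraph above calls the "magic" of asymmetric systems.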

Access Control

Encryption is one way to ensure confidentiality; a second method is access control. There are several approaches to access control that help with confidentiality, each with its own strengths and weaknesses (a brief role-based example follows the list):

  • Mandatory access control (MAC). In a MAC environment, all access capabilities are predefined. Users can’t share information unless their rights to share it are established by administrators, so administrators must make any changes that need to be made to such rights. This enforces a rigid model of security, but it is also considered the most secure access control model.
  • Discretionary Access Control (DAC). In a DAC model, users can share information dynamically with other users. The method allows for a more flexible environment, but it increases the risk of unauthorized disclosure of information. Administrators have a more difficult time ensuring that only appropriate users can access data.
  • Role-Based Access Control (RBAC). Role-based access control implements access control based on job function or responsibility. Each employee has one or more roles that allow access to specific information. If a person moves from one role to another, the access for the previous role will no longer be available. RBAC models provide more flexibility than the MAC model and less flexibility than the DAC model. They do, however, have the advantage of being strictly based on job function as opposed to individual needs.
  • Rule-Based Access Control (RBAC). Rule-based access control (sometimes written RuBAC to distinguish it from role-based access control) uses the settings in preconfigured security policies to make decisions about access. These rules can be set up to:
    • Deny all but those who specifically appear in a list (an allow access list)
    • Deny only those who specifically appear in the list (a true deny access list)

Entries in the list can be usernames, IP addresses, hostnames or even domains. Rule-based models are often used in conjunction with role-based models to achieve the best combination of security and flexibility.

  • Attribute-based access control (ABAC). ABAC is a relatively new access control method defined in NIST SP 800-162, Guide to Attribute Based Access Control (ABAC) Definition and Considerations. It is a logical access control methodology in which authorization to perform a set of operations is determined by evaluating attributes associated with the subject, the object, the requested operations and, in some cases, environmental conditions against the security policies, rules or relationships that describe the allowable operations for a given set of attributes.
  • Smartcards are generally used for access control and security purposes. The card itself usually contains a small amount of memory that can be used to store permissions and access information.
  • A security token was originally a hardware device required to gain access, such as a wireless keycard or a key fob. There are now also software implementations of tokens. Tokens often contain a digital certificate that is used to authenticate the user.
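
The role-based example mentioned above can be sketched in a few lines of plain Python. The roles, permissions and users below are invented purely for illustration; a real RBAC system would sit behind your identity and authorization infrastructure.

```python
# Toy role-based access control (RBAC) check; roles and permissions are invented examples.
ROLE_PERMISSIONS = {
    "hr_manager": {"read_payroll", "update_payroll"},
    "developer":  {"read_source", "commit_source"},
    "auditor":    {"read_payroll", "read_source"},
}

def is_allowed(user_roles, permission):
    """Grant access if any of the user's roles carries the requested permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(is_allowed({"auditor"}, "read_payroll"))      # True
print(is_allowed({"developer"}, "update_payroll"))  # False
```

Moving a user from one role to another is just a change to their role set, which is why the text notes that access tied to the previous role disappears automatically.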

Integrity

Integrity has three goals that help to achieve data security:

  • Preventing the modification of information by unauthorized users
  • Preventing the unauthorized or unintentional modification of information by authorized users
  • Preserving internal and external consistency:
    • Internal consistency — Ensures that the data is internally consistent. For example, in an organizational database, the total number of items owned by an organization must equal the sum of the same items shown in the database as being held by each element of the organization.
    • External consistency — Ensures that the data stored in the database is consistent with the real world. For instance, the total number of items physically sitting on the shelf must match the total number of items indicated by the database.

Various encryption methods can help ensure integrity by providing assurance that a message wasn’t modified during transmission. Modification could render a message unintelligible or, even worse, inaccurate. Imagine the serious consequences if alterations to medical records or drug prescriptions weren’t discovered. If a message is tampered with, the encryption system should have a mechanism to indicate that the message has been corrupted or altered.

Hashing

Integrity can also be verified using a hashing algorithm. Essentially, a hash of the message is generated and appended to the end of the message. The receiving party calculates the hash of the message they received and compares it to the hash appended by the sender. If something changed in transit, the hashes won’t match.

Hashing is an acceptable integrity check for many situations. However, if an intercepting party wishes to alter a message intentionally and the message is not encrypted, then a hash is ineffective. The intercepting party can see, for example, that there is a 160-bit hash attached to the message, which suggests that it was generated using SHA-1 (which is discussed below). Then the interceptor can simply alter the message as they wish, delete the original SHA-1 hash, and recalculate a hash from the altered message.
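
The hash-and-compare process can be sketched with Python's standard hashlib module. SHA-256 and the sample messages are just illustrative choices; and, as the paragraph above explains, an unkeyed hash only detects accidental corruption, not deliberate tampering.

```python
# Integrity check by comparing hashes (standard-library hashlib; SHA-256 as an example).
import hashlib

message = b"Transfer $100 to account 12345"
digest_sent = hashlib.sha256(message).hexdigest()  # sender appends this to the message

# ...message and digest travel to the receiver...

received_message = b"Transfer $900 to account 12345"  # altered in transit
digest_calculated = hashlib.sha256(received_message).hexdigest()

if digest_calculated != digest_sent:
    print("Integrity check failed: the message was altered.")
```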

Hashing Algorithms

The hash functions used to store and look up data (in hash tables, for example) are very different from cryptographic hashes. In cryptography, a hash function must have three characteristics:

  1. It must be one-way. Once you hash something, you cannot unhash it.
  2. Variable-length input produces fixed-length output. Whether you hash two characters or two million, the hash size is the same.
  3. The algorithm must have few or no collisions. Hashing two different inputs should not produce the same output.

Here are hashing algorithms and related concepts you should be familiar with:

  • Secure Hash Algorithm (SHA). SHA is a family of one-way hash functions. SHA-1 produces a 160-bit hash value that can be used with an encryption protocol, but practical weaknesses in SHA-1 have been demonstrated, so SHA-2 is now recommended instead. SHA-2 can produce 224-, 256-, 384- and 512-bit hashes. There are no known practical attacks against SHA-2, so it is still the most widely used and recommended hashing algorithm. SHA-3, based on the Keccak algorithm designed by Guido Bertoni, Joan Daemen, Michaël Peeters and Gilles Van Assche, was standardized in 2015; it is widely applicable but not widely used. This is not due to any problems with SHA-3, but rather the fact that SHA-2 is perfectly fine.
  • Message Digest Algorithm (MD). MD is another one-way hash that creates a hash value used to help maintain integrity. There are several versions of MD; the most common are MD5, MD4 and MD2. MD5 is the newest version of the algorithm; it produces a 128-bit hash. Although it is more complex than its MD predecessors and offers greater security, it does not have strong collision resistance, so it is no longer recommended for use; SHA-2 or SHA-3 is the recommended alternative.
  • RACE Integrity Primitives Evaluation Message Digest (RIPEMD). RIPEMD was based on MD4. There were questions regarding its security, and it has been replaced by RIPEMD-160, which uses 160 bits. There are also versions that use 256 and 320 bits (RIPEMD-256 and RIPEMD-320, respectively).
  • GOST is a symmetric cipher developed in the old Soviet Union that has been modified to work as a hash function. GOST processes a variable-length message into a fixed-length output of 256 bits.
  • Prior to the release of Windows NT, Microsoft’s operating systems used the LANMAN protocol for authentication. While functioning only as an authentication protocol, LANMAN used LM Hash and two DES keys.
  • With the release of Windows NT, Microsoft replaced LANMAN with NTLM (NT LAN Manager), which uses the MD4/MD5 hashing algorithms. Several versions of this protocol exist (NTLMv1 and NTLMv2), and it is still in widespread use even though Microsoft has named Kerberos its preferred authentication protocol. Although LANMAN and NTLM both employ hashing, they are used primarily for authentication.
  • A common method of verifying integrity involves adding a message authentication code (MAC) to the message. A MAC is calculated by using a symmetric cipher in cipher block chaining mode (CBC), with only the final block being produced. Essentially, the output of the CBC is being used like the output of a hashing algorithm. However, unlike a hashing algorithm, the cipher requires a symmetric key that is exchanged between the two parties in advance.
  • HMAC (hash-based message authentication code) combines a hashing algorithm with a shared symmetric key. For example, two parties agree on a hash function and a shared key; the key, XORed with fixed inner and outer padding constants, is incorporated into two passes of the hash, and the resulting digest is the HMAC. A brief sketch follows this list.
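
Here is the minimal HMAC computation referenced above, using Python's standard hmac and hashlib modules. The shared key, message and choice of SHA-256 are placeholder assumptions for the sketch.

```python
# Keyed integrity check with HMAC (standard-library hmac/hashlib; SHA-256 as an example).
import hmac
import hashlib

shared_key = b"pre-shared-secret"   # exchanged between the parties in advance
message = b"Invoice 4321: pay $250"

tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag with the same key and compares in constant time.
expected = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True only if key and message both match
```

Unlike the plain hash shown earlier, an attacker who alters the message cannot forge a matching tag without knowing the shared key.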

Baseline

Establishing a baseline (a configuration baseline, systems baseline, activity baseline and so on) is an important strategy for secure networking. Essentially, you define a configuration that you consider secure for a given system, computer, application or service. Certainly, absolute security is not possible; the goal is to be secure enough, based on your organization’s security needs and risk appetite. Any change can then be compared to the baseline to see whether the result is still secure enough. Once a baseline is defined, the next step is to monitor the system to ensure that it has not deviated from that baseline. This process is known as integrity measurement.
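
One way to picture integrity measurement is a script that hashes monitored files and compares them to a stored baseline. The monitored directory and baseline file name below are purely illustrative; real integrity-measurement tooling adds authorization checks, alerting and tamper protection for the baseline itself.

```python
# Sketch of integrity measurement: compare current file hashes against a saved baseline.
import hashlib
import json
from pathlib import Path

MONITORED_DIR = Path("config")         # example directory to watch
BASELINE_FILE = Path("baseline.json")  # example baseline store

def snapshot(directory):
    """Return {relative_path: sha256_hex} for every regular file under the directory."""
    return {
        str(p.relative_to(directory)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in directory.rglob("*") if p.is_file()
    }

current = snapshot(MONITORED_DIR)
baseline = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else {}

for path, digest in current.items():
    if path not in baseline:
        print(f"NEW FILE: {path}")
    elif baseline[path] != digest:
        print(f"MODIFIED: {path}")

# After the first run, or after an approved change, record the new baseline.
BASELINE_FILE.write_text(json.dumps(current, indent=2))
```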

Availability

Availability ensures that a system’s authorized users have timely and uninterrupted access to the information in the system and to the network. Here are the methods of achieving availability:

  • Distributive allocation. Commonly known as load balancing, distributive allocation allows for distributing the load (file requests, data routing and so on) so that no device is overly burdened.
  • High availability (HA). High availability refers to measures that are used to keep services and information systems operational during an outage. The goal of HA is often to have key services available 99.999 percent of the time (known as “five nines” availability); a quick calculation of what that allows appears after this list. HA strategies include redundancy and failover, which are discussed below.
  • Redundancy. Redundancy refers to systems that either are duplicated or fail over to other systems in the event of a malfunction. Failover refers to the process of reconstructing a system or switching over to other systems when a failure is detected. In the case of a server, the server switches to a redundant server when a fault is detected. This strategy allows service to continue uninterrupted until the primary server can be restored. In the case of a network, this means processing switches to another network path in the event of a network failure in the primary path.
    Failover systems can be expensive to implement. In a large corporate network or e-commerce environment, a failover might entail switching all processing to a remote location until your primary facility is operational. The primary site and the remote site would synchronize data to ensure that information is as up to date as possible.
    Many operating systems, such as Linux, Windows Server and Novell Open Enterprise Server, are capable of clustering to provide failover capabilities. Clustering involves multiple systems connected together cooperatively (which provides load balancing) and networked in such a way that if any of the systems fail, the other systems take up the slack and continue to operate. The overall capability of the server cluster may decrease, but the network or service will remain operational. To appreciate the beauty of clustering, contemplate the fact that this is the technology on which Google is built. Not only does clustering allow you to have redundancy, but it also offers you the ability to scale as demand increases.
    Most ISPs and network providers have extensive internal failover capability to provide high availability to clients. Business clients and employees who are unable to access information or services tend to lose confidence.
    The trade-off for reliability and trustworthiness, of course, is cost: Failover systems can become prohibitively expensive. You’ll need to study your needs carefully to determine whether your system requires this capability. For example, if your environment requires a high level of availability, your servers should be clustered. This will allow the other servers in the network to take up the load if one of the servers in the cluster fails.
  • Fault tolerance. Fault tolerance is the ability of a system to sustain operations in the event of a component failure. Fault-tolerant systems can continue operation even though a critical component, such as a disk drive, has failed. This capability involves over-engineering systems by adding redundant components and subsystems to reduce risk of downtime. For instance, fault tolerance can be built into a server by adding a second power supply, a second CPU and other key components. Most manufacturers (such as HP, Sun and IBM) offer fault-tolerant servers; they typically have multiple processors that automatically fail over if a malfunction occurs.
    There are two key components of fault tolerance that you should never overlook: spare parts and electrical power. Spare parts should always be readily available to repair any system-critical component if it should fail. The redundancy strategy “N+1” means that you have the number of components you need, plus one to plug into any system should it be needed. Since computer systems cannot operate in the absence of electrical power, it is imperative that fault tolerance be built into your electrical infrastructure as well. At a bare minimum, an uninterruptible power supply (UPS) with surge protection should accompany every server and workstation. That UPS should be rated for the load it is expected to carry in the event of a power failure (factoring in the computer, monitor and any other devices connected to it) and be checked periodically as part of your preventive maintenance routine to make sure that the battery is operational. You will need to replace the battery every few years to keep the UPS operational.
    A UPS will allow you to continue to function in the absence of power for only a short duration. For fault tolerance in situations of longer duration, you will need a backup generator. Backup generators run on gasoline, propane, natural gas or diesel and generate the electricity needed to provide steady power. Although some backup generators can come on instantly in the event of a power outage, most take a short time to warm up before they can provide consistent power. Therefore, you will find that you still need to implement UPSs in your organization.
  • Redundant Array of Independent Disks (RAID). RAID is a technology that uses multiple disks to provide fault tolerance. There are several RAID levels: RAID 0 (striped disks), RAID 1 (mirrored disks), RAID 3 or 4 (striped disks with dedicated parity), RAID 5 (striped disks with distributed parity), RAID 6 (striped disks with dual parity), RAID 1+0 (or 10) and RAID 0+1.
  • Disaster recovery (DR) plan. A disaster recovery plan helps an organization respond effectively when a disaster occurs. Disasters include system failures, network failures, infrastructure failures, and natural disasters like hurricanes and earthquakes. A DR plan defines methods for restoring services as quickly as possible and protecting the organization from unacceptable losses in the event of a disaster.
    In a smaller organization, a disaster recovery plan can be relatively simple and straightforward. In a larger organization, it could involve multiple facilities, corporate strategic plans and entire departments.
    A disaster-recovery plan should address access to and storage of information. Your backup plan for sensitive data is an integral part of this process.
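
To put availability targets such as “five nines” into perspective, the quick calculation referenced above converts an availability percentage into the downtime it allows per year.

```python
# Convert an availability percentage into the downtime budget it allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (99.0, 99.9, 99.99, 99.999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% availability -> about {downtime_minutes:.1f} minutes of downtime per year")

# 99.999% ("five nines") allows roughly 5.3 minutes of downtime per year.
```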

F.A.Q.

What are the components of the CIA triad?

  • Confidentiality: Systems and data are accessible to authorized users only.
  • Integrity: Systems and data are accurate and complete.
  • Availability: Systems and data are accessible when they are needed.

Why is the CIA triad important to data security?

The ultimate goal of data security is to ensure confidentiality, integrity and availability of critical and sensitive data. Applying the principles of the CIA triad helps organizations create an effective security program to protect their valuable assets.

How can the CIA triad be applied in risk management?

During risk assessments, organizations measure the risks, threats and vulnerabilities that could compromise the confidentiality, integrity and availability of their systems and data. By implementing security controls to mitigate those risks, they satisfy one or more of the CIA triad’s core principles.

How can data confidentiality be compromised?

Confidentiality requires preventing unauthorized access to sensitive information. The access could be intentional, such as an intruder breaking into the network and reading the information, or it could be unintentional, due to the carelessness or incompetence of individuals handling the information.

What measures can help to preserve data confidentiality?

One best practice for protecting data confidentiality is to encrypt all sensitive and regulated data. No one can read the contents of an encrypted document unless they have the decryption key, so encryption protects against both malicious and accidental compromises of confidentiality.

How can data integrity be compromised?

Data integrity can be compromised both through human errors and cyberattacks like destructive malware and ransomware.

What measures can help to preserve data integrity?

To preserve data integrity, you need to:

  • Prevent changes to data by unauthorized users
  • Prevent unauthorized or unintentional changes to data by authorized users
  • Ensure the accuracy and consistency of data through processes like error checking and data validation

A valuable best practice for ensuring data accuracy is file integrity monitoring (FIM). FIM helps organizations detect improper changes to critical files on their systems by auditing all attempts to access or modify files and folders containing sensitive information, and checking whether those actions are authorized.

How can data availability be compromised?

Threats to availability include infrastructure failures like network or hardware issues; unplanned software downtime; infrastructure overload; power outages; and cyberattacks such as DDoS or ransomware attacks.

What measures can help to preserve data availability?

It’s important to deploy safeguards against interruptions to all systems that require continuous uptime. Options include hardware redundancy, failover, clustering and routine backups stored in a geographically separate location. In addition, it’s crucial to develop and test a comprehensive disaster recovery plan.
