Our cybersecurity overview tutorial begins with the notional cybersecurity landscape of the Reference Diagram, which provides a context for introducing the concepts and terminology you will encounter. It is not an accurate representation of any real cybersecurity defense, and as such it can apply equally to physical and virtualized environments. Because we refer to it often, the Reference Diagram is repeated further into our discussion.
Reference Diagram. Notional view of the cybersecurity landscape as a basis of discussion.
“Bad actors” is a general term for entities (individuals, criminal enterprises, nation states, etc.) who act to breach or use an IT system counter to the desire of its operators. Their motivations include theft or stealth control of IT assets. Theft targets could be personal and/or financial information, financial assets, intellectual property, trade or national secrets, etc. Stealth control could be exercised to damage a system, hide evidence of a breach, repurpose system elements for malicious use (e.g. using the compromised asset to attack other systems), etc. Bad actors are skilled, highly motivated, and often well funded. Whether for financial or national security gains, bad actors can devote considerable time and expense in pursuit of a specific target, constituting what is called an advanced persistent threat (APT).
In this cybersecurity overview tutorial we consider only bad actors acting remotely, but internal bad actors pose a different and potentially more dangerous threat. (Sed quis custodiet ipsos custodes? But who will guard the guards themselves?)
Generally speaking, bad actors seek to achieve their goals through the placement and execution of malicious software (malware) that exploits a vulnerability in an information system. Vulnerabilities are flaws in software/code that provide a means for malware to be installed and executed, typically without any overt sign of its presence to system users. (The U.S. Department of Homeland Security (DHS), parent organization of the United States Computer Emergency Readiness Team (US-CERT), releases public alerts of vulnerabilities and, with the MITRE Corporation, co-sponsors the Common Vulnerabilities and Exposures (CVE) dictionary of publicly known information security vulnerabilities and exposures.)
In many regards, the battle in cybersecurity is against bad actors seeking to exploit vulnerabilities that have yet to be repaired (or patched in a security update), or to discover and exploit new ones. The first exploitation of a newly discovered vulnerability for which no patch is available (or even under consideration) is called a zero-day attack, a very dangerous condition in cybersecurity because of its potential for undetected mayhem. The OpenSSL vulnerability known as “Heartbleed” (CVE-2014-0160) went undetected for approximately two years, so only a bad actor who exploited it during that time knows whether a zero-day attack occurred.
Often you will hear malware described by its purpose (e.g. spyware, adware) or its means of propagation (e.g. worms, which largely spread without user interaction, or viruses and trojans (Trojan horses), which do require user interaction to launch). Trojans are a very common category of malware: executable code embedded in an application, launched when the application is launched. For example, a trojan buried in a PDF file attached to an email is launched when the PDF file is opened.
How Malware Is Delivered
Malware approaches systems in many different ways. Perhaps you have received an email with a forged sender address (possibly of a friend) and an attachment that contained a trojan. Many of us have received unsolicited emails (spam), generally with an invitation to click a link to a website. Doing so may lead to the bad actor exploiting a vulnerability in your system. Blanketed, largely untargeted spam is a form of “phishing” (fishing) for victims. The trickier variant, spear phishing, targets a specific victim or victim subset and is often the product of considerable intelligence gathering and research. Malware can even be hidden in online ads in legitimate websites. Sometimes the encounter with malware doesn’t even require that you click on anything; you can be the victim of a means of attack (an attack vector) called a drive-by download.
In a more sophisticated variant, malware may be delivered in a multi-flow attack. As the name suggests, the malware is not delivered as a single file in a single attack, but as file fragments delivered over multiple attacks. The malware delivery is completed when the fragments are recombined.
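A hypothetical Python sketch (the signature bytes and function names are invented for illustration) makes clear why multi-flow delivery is troublesome: a defense that inspects each fragment in isolation never sees the complete malicious pattern, which only appears once the fragments are recombined.

```python
# Invented byte pattern standing in for a known malware signature.
SIGNATURE = b"EVIL_PAYLOAD"

def per_packet_scan(fragments):
    """Scan each fragment independently -- misses a signature that has
    been split across fragments in a multi-flow attack."""
    return any(SIGNATURE in frag for frag in fragments)

def reassembled_scan(fragments):
    """Scan the recombined stream, i.e. what the victim's system will
    actually assemble and execute."""
    return SIGNATURE in b"".join(fragments)
```

For example, with fragments `[b"...EVIL_", b"PAYLOAD..."]`, the per-packet scan finds nothing while the reassembled scan detects the signature, which is why more capable defenses reassemble flows before inspection.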
Bad actors will exploit bad code – and there is no shortage of either.
After delivery, the malware must exploit the vulnerability to be installed on the attacked system (infiltration). From the Common Weakness Enumeration (CWE) section of the National Vulnerability Database, here are definitions of three often-cited types of vulnerabilities – that is, means by which malware can infiltrate:
- Buffer Overflows: Buffer overflows and other buffer boundary errors in which a program attempts to put more data in a buffer than the buffer can hold, or when a program attempts to put data in a memory area outside of the boundaries of the buffer.
- Cross Site Scripting (XSS): Failure of a site to validate, filter, or encode user input before returning it to another user’s web client.
- Code Injection: Causing a system to read an attacker-controlled file and execute arbitrary code within that file. Includes PHP remote file inclusion, uploading of files with executable extensions, insertion of code into executable files, and others.
There are many vulnerabilities in code everywhere, and more are being created (and patched or corrected) all the time. By contrast, there are only a relatively small number of fundamental vulnerability types, such as the three above.
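Of the three types above, code injection is the easiest to sketch in a few lines. The following hypothetical Python example (the function names are ours, not from any real application) shows how passing attacker-controlled input to an evaluator executes arbitrary code, and how a restrictive parser avoids it:

```python
import ast

def vulnerable_calc(expr: str):
    # DANGEROUS: eval() executes arbitrary attacker-controlled code,
    # e.g. expr = "__import__('os').system('...')" runs a shell command.
    return eval(expr)

def safer_calc(expr: str):
    # Safer: ast.literal_eval accepts only literals (numbers, strings,
    # tuples, ...) and raises ValueError on anything executable.
    return ast.literal_eval(expr)
```

The same pattern (interpreting untrusted input as code) underlies PHP remote file inclusion and SQL injection; the general defense is to validate or constrain input rather than hand it to an interpreter.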
Malware Goes to Work
Some malware carries its mission within its code, but in general malware must find and report to its bad actor to seek instructions (establishment of a command and control channel, C2), and then find and deliver (exfiltrate) the goods. This process could happen almost immediately or could extend for months to years.
The process of the malware reaching outward to establish the command and control channel(s) and complete the mission can pose technical challenges, and can expose the attack to detection (of its C2 beacon) and cybersecurity measures. Techniques to establish a C2 and avoid detection are sophisticated and beyond the scope of this discussion.
Once exfiltration has occurred, the malware may cover its tracks and disappear, leaving the victim unaware of the attack. Just as feasible, it may remain in place and continue its malicious activities or await its next mission (possibly after re-establishing a control channel).
As suggested, malicious attacks follow a general process or flow of events from conception to mission success: a target has to be researched; the malware devised, delivered, installed and activated; the control channel has to be established; and the mission itself must be completed. Such a process goes by several names, most of which are a variant of “kill chain”* because of the many points or links in the chain (Figure 1) where defenders can thwart the mission. These many points of intervention suggest layers of defense, called cybersecurity defense-in-depth. Incidentally, the cost of thwarting the mission increases “left to right” along the kill chain.
Figure 1. Lifecycle of a cyber attack, often called a kill-chain.
The Defense Rises
Whether written or assumed, all organizations – corporate, industrial, government – prescribe acceptable use and security policies for their information systems. The intent of these policies is to reduce the cybersecurity risk profile, and often to maintain productivity, reduce liability, and protect organizational property, integrity and confidences. Policies invariably lead to the establishment of controls, for example: password change requirements, restrictions on user control of computer software, or requiring up-to-date anti-virus software on all computers. Controls are thoroughly discussed in Part 2.
Much of policy enforcement ultimately involves the translation of policy goals into actions carried out by information security systems. For example, policies pertaining to productivity, liability, and cybersecurity lead to blocking access to gaming and adult content sites. However, it falls to systems to recognize and block such sites, ideally without error. Similarly, if it were policy that nobody except those in marketing could access Facebook for more than 15 minutes per day – and nobody could use Facebook chat – then the systems enforcing this policy would have to track this site and employees, maintain timers, and allow or deny applications (chat or others).
Building on a foundation of controls, filtering and inspection is the first active level of policy enforcement in the dynamic Internet environment; and the primary agent of this enforcement is the firewall. We will consider the firewall as a distinct security system or appliance, but note that firewall functions can be implemented in a device like a router, or even your home computer.
Filtering and Inspection
Firewalls have been getting more powerful as Internet applications expand and the cybersecurity stakes increase. Indeed, new “next generation” firewalls (NGFW) perform functions (such as deep packet inspection and whitelisting) that have been considered beyond firewalls’ scope. Whitelisting is the set of processes (application control programs) employed so that only specific applications and their components (libraries, configuration files, etc.) are allowed entry to the network and/or authorized to be activated on some or all hosts. However, in this discussion we will consider the role of the firewall as performing filtering and inspection. U.S. government publication NIST SP 800-41 presents an overview of firewalls and their application, and NIST SP 800-167 discusses whitelisting.
In the filtering and inspection process, a firewall looks at incoming (left to right in the Reference Diagram) and outgoing traffic/packets and, based on policy rules, either passes the traffic or blocks it. Blocking usually involves discarding the traffic, but could involve holding it for evaluation.
The most basic level of filtering and inspection involves stateless firewalling, in which traffic processing decisions are made based upon relatively simple criteria. For example, firewalls and routers inspect lists (access control lists, ACLs) of Internet Protocol (IP) addresses and may block or pass the traffic according to its source and/or destination IP address (Figure 2). Often provided as part of a service, ACLs may identify out-of-policy websites or known “spammers”.
Figure 2. Portions of IP and TCP frame headers used in inspection and filtering.
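The stateless ACL decision just described can be sketched in a few lines of Python (the blocked networks and function name are invented for illustration; real devices compile such rules into fast lookup structures):

```python
import ipaddress

# Hypothetical ACL: networks that policy says to block (example values
# drawn from the documentation address ranges).
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # e.g. known spammer range
    ipaddress.ip_network("198.51.100.0/24"),  # e.g. out-of-policy site
]

def stateless_filter(src_ip: str) -> str:
    """Return 'block' or 'pass' based solely on the source IP address.
    No connection state is consulted -- hence 'stateless'."""
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in BLOCKED_NETWORKS):
        return "block"
    return "pass"
```

The same structure extends naturally to destination addresses and ports; the key point is that each packet is judged on its own, with no memory of the packets before it.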
Traffic might also be blocked according to the packet’s Transmission Control Protocol (TCP) port, at layer 4 of the OSI network model, the transport layer. Portions of the headers for IP and TCP frames are recapped in Figure 2 as they are an important part of stateful firewalling.
The function of TCP is to establish, maintain and terminate a reliable data connection between communications end points. In establishing a connection, the 1-bit SYN and ACK flags are set, and in maintaining the reliability of the connection, the sequence and acknowledgement numbers update, i.e. they change state. Thus a connection contains state information, and stateful firewall operation involves tracking a connection’s state and acting on deviations from expected values or states, which can be an indication of malicious activity (e.g. hiding in an existing connection).
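A toy Python sketch of stateful tracking follows (the state names, flow key, and return values are our simplification for illustration, not any real firewall's implementation). The tracker follows the TCP three-way handshake (SYN, then SYN/ACK, then ACK) and flags traffic that deviates from the expected next state:

```python
# Expected flag for each tracked state, and the state that follows it.
EXPECTED = {"NEW": "SYN", "SYN_SEEN": "SYN/ACK", "SYNACK_SEEN": "ACK"}
NEXT     = {"NEW": "SYN_SEEN", "SYN_SEEN": "SYNACK_SEEN",
            "SYNACK_SEEN": "ESTABLISHED"}

def track(flows: dict, flow_key: tuple, flags: str) -> str:
    """Advance the flow's handshake state, or report a deviation.
    flow_key would be (src IP, src port, dst IP, dst port)."""
    state = flows.get(flow_key, "NEW")
    if state == "ESTABLISHED":
        return "pass"                 # data on a known, completed connection
    if flags == EXPECTED[state]:
        flows[flow_key] = NEXT[state]
        return "pass"
    return "deviation"                # e.g. data arriving with no handshake
```

A packet claiming to belong to a connection that was never established (a "deviation" here) is exactly the kind of hiding-in-an-existing-connection behavior a stateful firewall is built to catch.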
Note that a connection request is initiated by a SYN request, and, upon receipt, the receiving end (typically a server) will reserve resources in anticipation of the connection. In a type of denial-of-service (DoS) attack called a SYN flood, a bad actor floods the server with SYN requests in an attempt to overwhelm the server and disrupt service. Depressingly, a SYN flood is only one of many approaches designed to overwhelm the resources of the attacked site. In a distributed denial-of-service (DDoS) attack, bad actors can direct thousands of compromised, very widely distributed systems (a robot or “bot” army) to flood a target. The wide distribution makes combating the attack more difficult. DoS and DDoS attacks often are used as diversions for other attack vectors.
The TCP header also contains the TCP port number. On the receiving (destination) end point, the port number is used to provide application service information to higher layers in the network model. With 16 bits, the TCP header allows for 65,536 port numbers. Port numbers 0-1023 are the “well known” ports, registered and assigned by IANA (Internet Assigned Numbers Authority) for privileged services. These ports are often referenced in cybersecurity discussions. A few of these services and their port numbers are shown below:
- FTP (file transfer): port 21
- SSH (secure shell): port 22
- Telnet: port 23
- SMTP (email transfer): port 25
- DNS (domain name resolution): port 53
- HTTP (web): port 80
- HTTPS (secure web): port 443
Ports 1024-49151 are registered but not privileged, although typically their use is respected. For example, port 1293 is used by IPSec, port 1720 is used in H.323 call set-up, and BitTorrent uses ports 6880-6900. Ports above 49151 are considered dynamic. Many of the port numbers used by TCP are also used by UDP (User Datagram Protocol), a connectionless Layer 4 protocol.
Firewalls continue to block TCP ports (per policy), but the strict use and mapping of applications to TCP ports (and remaining fixed throughout the duration of a connection) is changing. This makes it more difficult for security analysts to understand their networks, and is another factor pushing firewalls to inspect packets more deeply.
Firewalls work in both directions of traffic and therefore can be one of the last lines of defense against malicious exfiltration or even minor policy infractions. An example of the latter might be sending an email containing personally identifiable information (PII) such as U.S. social security numbers, driver's license numbers, user names or passwords, credit card numbers, etc. Action taken to prevent the loss of such information, minor or major, is called data loss (or data leakage) prevention, generally abbreviated DLP.
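A minimal DLP-style sketch in Python follows (the patterns and function name are invented for illustration and are nowhere near production grade; real DLP systems add checksum validation, context, and many more data types). It scans outgoing text for strings shaped like PII:

```python
import re

# Hypothetical, simplified PII patterns (illustrative only).
PII_PATTERNS = {
    "US SSN":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card": re.compile(r"\b(?:\d{4}[- ]){3}\d{4}\b"),
}

def scan_outgoing(text: str) -> list:
    """Return the kinds of PII found in an outgoing message."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(text)]
```

On a match, a real system would block, quarantine, or alert per policy rather than simply report, but the detection step looks broadly like this.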
Reference Diagram (Repeated)
Proxies Provide Physical and Logical Isolation
The Reference Diagram shows proxy servers operating in a network called the DMZ (demilitarized zone), not within the core network infrastructure. One technique in a defense-in-depth framework is to give a higher degree of protection to some highly trafficked, highly targeted applications, such as public-facing website servers. In the example of a proxy website server, the contents of the actual website server, which sits within the internal network, are pushed to the proxy server in the DMZ. Public/external users interact only with the proxy website server, usually without realizing it, and the proxy's ability to reach back into the core network is restricted, thereby isolating the actual website server from users.
Figure 3. Proxies are commonly used to isolate clients and servers.
As suggested in Figure 3, proxies are commonly used to isolate and create restrictions between clients and servers, and a proxy function is not always performed within a DMZ. Proxies can function in both directions, such that local users on the internal network reaching outward to the Internet may themselves interact with a local proxy that manages the communications with the remote, external server – or even its proxy.
In cases of secure communications (e.g. HTTPS/SSL/TLS, IPSec), a common proxy function is the interception and imitation of the secure connection to both parties. If this were not being done in a cybersecurity context, it would be considered a severe man-in-the-middle security breach! However, it is an essential function: bad actors hide malware with encryption, and traffic can only be examined when it is unencrypted.
Intrusion Detection and Prevention Systems (IDPS)
Malware intrusion detection and prevention largely falls to specialized, powerful network appliances (as suggested in the Reference Diagram). IDPS can be considered anti-viral (AV) systems, although the power of network-based intrusion detection systems (NIDS) is vastly greater than that of the AV software on your PC (an endpoint device). The latter is an example of a host-based intrusion detection system (HIDS). HIDS increasingly use more sophisticated “multi-method” techniques to combat malware. Virtually all security-conscious organizations employ both NIDS and HIDS. U.S. government publication NIST SP 800-94 presents an overview of IDPS and their application.
Defining characteristics of IDPS include their abilities to:
- Perform high-speed, deep inspection (all the way to the application layer, Layer 7) of incoming and outgoing packets to discover malware and exfiltrating information.
- Maintain historical perspective to set baselines and detect deviations (anomalies) from them, which may flag improper activity.
- Integrate tightly with the Security Operations Center (SOC) so that security analysts and operators can quickly invoke policy and rule changes in the IDPS.
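The baseline-and-deviation ability above can be sketched with a deliberately simplified statistical test in Python (the threshold rule and the example traffic figures are our invention; real IDPS baselining models many variables over long windows):

```python
import statistics

def is_anomalous(history: list, value: float, k: float = 3.0) -> bool:
    """Flag a new observation that deviates more than k standard
    deviations from the historical baseline -- a common, highly
    simplified anomaly test."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return abs(value - mean) > k * stdev

# Hypothetical baseline: e.g. kilobytes per minute leaving a host.
baseline = [100, 110, 95, 105, 102, 98, 104, 99]
```

A host that normally sends about 100 KB/min suddenly sending 500 KB/min would be flagged; a reading of 103 would not. In practice the flag triggers investigation, not automatic blocking, since anomalies can be benign.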
As mentioned, the nominal functions of firewalls and IDPS are converging in certain security appliances, but we can be less concerned about which device performs them if we focus on the functions themselves.
As noted above, a key activity of IDPS is to inspect traffic/packets for malware. This is largely performed in two ways:
- Scan the packet for known malicious software, seeking specific strings of characters or code called indicators or, commonly, signatures. (According to context, “signature” is often used to mean either the malware code, i.e. the indicator, or the rules to search for it.)
- Actually execute (“explode”) the code carried in packets in a virtual environment (a sandbox) to investigate its purpose. For example, code for a macro in a Windows file would be executed in a virtual Windows environment and its behavior studied.
Signatures and sandboxing are important concepts in cybersecurity. Signatures rely on some level of foreknowledge of malware. For example, once a vulnerability and new malware have been identified, information about the malware’s signature (code) is disseminated and is added to IDPS signature (rule) libraries. (Signature rules are often written in Sourcefire’s Snort®, an open source network-based IDPS that is widely used.) At the same time, SOC analysts attempt to find and disable any of the malware that had passed the defenses – and the owner of the vulnerable software gets to work on a patch. The manufacturers of cybersecurity equipment are very engaged in this battle. They see malicious activity throughout the Internet and continually update their signature libraries for their equipment and customers.
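At its core, signature matching is a search for known byte patterns in traffic. The sketch below is a toy Python version (the second signature is invented; the first is the industry-standard, harmless EICAR anti-virus test string). Real rule languages such as Snort add protocol, port, offset, and flow conditions around such content matches:

```python
# Tiny signature library mapping byte patterns to malware names.
SIGNATURES = {
    b"$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!": "EICAR-Test-File",
    b"\xde\xad\xbe\xef\x13\x37": "Example.Trojan.A",  # hypothetical
}

def scan(payload: bytes) -> list:
    """Return the names of all signatures found in the payload."""
    return [name for sig, name in SIGNATURES.items() if sig in payload]
```

The brittleness is visible immediately: change one byte of the pattern and `scan` finds nothing, which is exactly the weakness polymorphic variants exploit.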
Skilled bad actors will make slight – and frequent – variations to their code (polymorphic malware) to avoid detection by any one specific signature, so security analysts must write rules to detect the variants. Bad actors also will seek to deliver their malware over many packets to disguise the signatures, so the battle rages with little rest.
A weakness of signatures is that they are predicated on foreknowledge and an ability to predict. Bad actors know this; hence they always look to discover new vulnerabilities with the goal of launching dangerous zero-day attacks.
The use of sandboxing to supplement signature-based IDPS approaches is now more common as increasing processing power makes it more practical. The goal with sandboxing is to observe the code's behavior upon execution, rather than speculate about it. As noted above, the execution occurs in a virtual environment, often a supplier's “sandbox in the cloud”. Not surprisingly, bad actors have responded with a number of sandbox detection and evasion techniques.
Upon examining traffic, the IDPS may: pass valid traffic; pass it but with an alert to the SOC; remove the malware (as in an infected file attachment or questionable URL) and pass the remaining traffic; block it; quarantine it for examination by SOC analysts, etc. Virtually all IDPS activities are logged, which leads to large log files.
Defenders also employ decoy servers and other resources known as “honeypots” (more formally, a deception environment) to lure malware to expose itself. Malware captured by honeypots can teach defenders about shortcomings in the defense and potentially serve as the basis to turn the tables on the bad actors.
Associated with Email: SPF and DKIM
Email is a common avenue (attack vector) for malware enticement and delivery, but conceptually it is processed for security along the lines discussed – there's just a lot of it to process in a hurry. Two email-specific terms you will hear in cybersecurity are SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail).
In spoofing email, bad actors make their emails appear as if they are coming from a legitimate domain or address, such that things may look fine to both the receiving email system (which looks at envelope from-addresses) and the receiver (who sees the header from-address). SPF is an anti-spoofing standard in which the owner of a (sending) domain specifies (by their IP addresses) which of its hosts or systems can send email. This information (the SPF record) is placed in the domain holder’s DNS record. The receiving mail system checks the IP addresses of incoming mail and considers the domain legitimate if these addresses correspond to those in the sending domain’s SPF record. If they do not match, they are considered spoofed (forged).
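The receiving side's SPF check can be sketched in Python as follows. This is a deliberate simplification (it handles only `ip4:` mechanisms; real SPF per RFC 7208 also supports `a`, `mx`, `include`, qualifiers, and DNS lookups, all omitted here), and the example record is invented:

```python
import ipaddress

def spf_permits(spf_record: str, sender_ip: str) -> bool:
    """Check a sender's IP against the ip4: mechanisms of a (simplified)
    SPF record fetched from the sending domain's DNS."""
    addr = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith("ip4:"):
            if addr in ipaddress.ip_network(term[4:]):
                return True
    return False

# Example record as it might appear in a sending domain's DNS (invented):
record = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.10/32 -all"
```

Mail arriving from an address outside the listed networks fails the check and, per the record's `-all`, would be treated as spoofed.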
DKIM employs encryption keys. (Key pairs are mathematical formulations for encryption. If one key locks (encrypts) a message, only the other key can unlock it. In this manner, one key can be made public while the other remains private, so communication involving the private key holder remains secure. Note that a man-in-the-middle attack can occur if the private key is compromised.) The sender encrypts the email headers, and sometimes content, with its private key. Its public key is part of its DNS record, which the receiving system uses to validate the sender/contents.
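The lock/unlock relationship of a key pair can be shown with textbook RSA numbers in Python. To be clear about the assumptions: the numbers below are the classic toy example (far too small for real use), and real DKIM signs a hash of canonicalized headers with 1024/2048-bit RSA or Ed25519 keys; this only illustrates the private-key-signs, public-key-verifies idea:

```python
# Toy textbook RSA key pair (illustrative only; never use small keys).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)   # n = 3233, phi = 3120
e, d = 17, 2753                     # chosen so that e*d mod phi == 1

def sign(digest: int) -> int:
    """'Lock' a message digest with the private key d."""
    return pow(digest, d, n)

def verify(digest: int, signature: int) -> bool:
    """'Unlock' with the public key e and compare to the digest."""
    return pow(signature, e, n) == digest
```

The receiving mail system plays the role of `verify`: it fetches `e` and `n` (the public key) from the sender's DNS record, so only someone holding `d` could have produced a signature that checks out.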
Security Operations Center (SOC)
As the name suggests, the SOC is the focal point of an organization’s cybersecurity operations. SOC personnel manage the security aspects of the appliances and software we have mentioned. They range in skill levels from security technicians and junior analysts to computer and cybersecurity scientists.
There are a number of security tools associated with the SOC, but the central SOC tool is the Security Information and Event Management (SIEM, pronounced “SIM”) system. SIEM systems are powerful as they must track, correlate, analyze and suggest actions on a very large number of events, essentially in real time. No less demanding, although sometimes with a very long time horizon, the SIEM also must manage complex analytical tools used to uncover deeply hidden malware and/or the activities of APTs.
Bad actors have their own, well run “malware development and quality control” programs, and they can control the pace of their attacks. So despite measures against them, malware attacks continue to find success. Even when malware penetrates all of the defenses, it must still act to establish the command channel(s) and execute its mission. Cybersecurity analysts and scientists increasingly are turning to “big data analytics” to uncover stealthy, deeply operating malicious activities in oceans of data. Such specialized capabilities are part of the SOC toolkit. Note that as cybersecurity tools represent threats to bad actors, they view the tools themselves as targets.
As part of security operations, essentially every action taken by users and systems, and its outcome, is recorded in system logs (syslogs), with precise network timing. Therefore, SOC tools include systems for log management, access and review. There is a de facto interoperability standard for log and event information called Common Event Format (CEF), which was developed by ArcSight and is supported by virtually all security equipment and systems manufacturers.
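A CEF record is a pipe-delimited header followed by space-separated key=value pairs. The Python sketch below produces such a line (the vendor/product values are invented, and escaping of special characters within fields is omitted for brevity):

```python
def cef_event(vendor, product, version, sig_id, name, severity, **ext):
    """Format an event in Common Event Format (CEF):
    CEF:0|Vendor|Product|Version|SignatureID|Name|Severity|extension
    The extension is space-separated key=value pairs."""
    extension = " ".join(f"{k}={v}" for k, v in ext.items())
    return (f"CEF:0|{vendor}|{product}|{version}|"
            f"{sig_id}|{name}|{severity}|{extension}")
```

For example, `cef_event("ExampleVendor", "ExampleIDS", "1.0", "100", "Port scan", 5, src="192.0.2.1")` yields a single line any CEF-aware SIEM could ingest.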
In many cases, all traffic is stored (packet capture) should it have to be examined for event response, forensic analyses, regulatory compliance, etc. Therefore, depending on retention requirements, storage volumes can be enormous, and rapid search and retrieval of such volumes itself requires specialized tools. If all traffic is stored, then NetFlows, essentially traffic/packet header information or “metadata” used in analysis, are also captured.
As part of a larger security risk management program, SOC operations involve continuous monitoring to maintain awareness of the organization's security state, vulnerabilities, and threats. In this context, “continuous” monitoring means that security controls and organizational risks are assessed and analyzed at a frequency sufficient to support risk-based security decisions that adequately protect the organization's information. Continuous monitoring encompasses a number of areas, including the management of vulnerabilities, patches, events, configurations, licenses and assets. A number of specialized tools, some of which perform automated scans of networks and for vulnerabilities, are available to help with these ongoing efforts. To help security software interoperate in communicating security flaw and configuration information, vendors of automated security tools are coalescing around a set of specifications called the Security Content Automation Protocol (SCAP).
Summary of Part 1
In this cybersecurity overview tutorial, we have seen how some bad actors take a scattershot approach to targets, while others meticulously research, design and operate very specific attacks on high-value targets. The uniqueness and careful execution of the latter make them more difficult to detect and counter.
Layered security provides increasing levels of protection. Filtering and inspection enforce basic policies and address much of the basic defense. Signature-based approaches can address many of the attacks on vulnerabilities that have been noted and quantified, and signature information is updated and disseminated rapidly to contain damage. Advances in processing power are making it easier to inspect and execute all potentially executable code in a (virtual) sandbox environment. Deeper analytical tools provide a further layer of defense against deeply hidden and persistent malware.
In Part 2 we look at topics often discussed for cybersecurity in the context of the civilian side of the U.S. Government, while Part 3 examines cybersecurity and the U.S. Department of Defense. Return to the cybersecurity introduction page where, at the bottom, you can also find video versions of all three parts.
* The concept of a “kill chain” was first expressed by Hutchins et al. of Lockheed Martin Corporation in Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains, 2011.