How to share cyber intelligence


Overview

The goal of information sharing is to raise awareness and understanding within security communities. Timely and relevant information is critical. For individuals and organizations, figuring out the best way to share cyber threat intelligence can be a daunting task. This guide seeks to lessen the learning curve by explaining sharing standards and best practices in relation to common attacks.

If, after reading this guide, you have questions, or would like to provide your first intel report, don’t hesitate to make a post. Friendly feedback helps everyone.

Intelligence sharing guidance

The main obstacle to information sharing is fear: fear of embarrassment, and fear of disclosing what might be confidential data. When considering what and how to share, it is important to remember that the goal of intelligence sharing is to raise awareness about attackers and their methods, not to air dirty laundry. Simply share the details of an attack that you have observed; there is no need to disclose whether the attack was successful. For example, when you receive a phishing email that delivers malware, you can share all of the technical details of the attack without mentioning whether the data came from a sandbox analysis or an active IR investigation.

That said, confidentiality within sharing groups exists so that attackers and the public are not privy to an organization’s capabilities, or lack thereof. Standards such as the Traffic Light Protocol (TLP) govern the sharing of information, so it is clear what can be shared with whom. If information carries a sharing restriction (e.g. TLP: Amber), you should not disclose it beyond that restriction (e.g. by treating it as TLP: Green), even with another private group, without the owner’s permission. However, if you find the same information in an independent source that does not carry that restriction, you are free to share it.

For example, you may be entrusted with classified information or other restricted data. You cannot share that information (or in most cases even directly act on it) with anyone who does not have the proper clearance, signed NDA, and need-to-know. However, if you come across some of the same information in a purely Unclassified source (without directly searching for it, and thus leaking it), such as a researcher’s blog post that is not based on, and is independent of, classified sources, then you can share that information in an Unclassified context, as long as you do not include any additional details from classified knowledge or sources, including any leaked information.

The same principle applies to your own information sources. For example, you may have received an alert based on a log match against a proprietary subscription feed. While you cannot share the details and context offered in the proprietary feed (or in many cases, even the fact that the alert came from that source), you can share the details from your own logs and the results of your own investigations based on those logs, and impose any sharing restrictions that you want, because you own the information that was generated when your controls wrote their logs.

When sharing your own information, you will likely only mark it as TLP: Amber or TLP: Green. Use TLP: Amber when you don’t want the information to move beyond the organizations that you share it with; for example, this is useful when tracking a clandestine actor who targets a specific industry. Use TLP: Green if it is OK for others to pass the information along to other non-public groups. For example, because ransomware campaigns tend to be widespread and opportunistic, you could publish those indicators under TLP: Green, so that the people you share them with can in turn share them with other closed groups, such as ISACs.

Traffic Light Protocol (TLP)

The Traffic Light Protocol (TLP) is a set of designations used to ensure
that sensitive information is shared with the correct audience. It
employs four colors to indicate different degrees of sensitivity and the
corresponding sharing considerations to be applied by the recipient(s).

Red

The highest restriction. Information marked TLP: RED cannot be shared with anyone beyond those to whom the information owner originally disclosed it.

When should it be used?

Sources may use TLP: RED when information cannot be effectively acted upon by additional parties, and could lead to impacts on a party’s privacy, reputation, or operations if misused.

When should it be shared?

Recipients may not share TLP: RED information with any parties outside of the specific exchange, meeting, or conversation in which it was originally disclosed.

Amber

Non-public information that may only be shared within a member’s organization (i.e. employer), and only to those who have a need-to-know (i.e. other security and IT personnel). This information should not be shared on other forums or mailing lists, or with non-member partners or clients.

When should it be used?

Sources may use TLP: AMBER when information requires support to be effectively acted upon, but carries risks to privacy, reputation, or operations if shared outside of the organizations involved.

When should it be shared?

Recipients may only share TLP: AMBER information with members of their
own organization who need to know, and only as widely as necessary to
act on that information.

Green

Non-public information that may be shared on other closed, security-related forums or mailing lists, or with non-member partners or clients on a need-to-know basis.

When should it be used?

Sources may use TLP: GREEN when information is useful for the awareness of all participating organizations, as well as peers within the broader community or sector.

When should it be shared?

Recipients may share TLP: GREEN information with peers and partner organizations within their sector or community, but not via publicly accessible channels.

White

Information that is public, and/or may be shared publicly.

When should it be used?

Sources may use TLP: WHITE when information carries minimal or no foreseeable risk of misuse, in accordance with applicable rules and procedures for public release.

When should it be shared?

TLP: WHITE information may be distributed without restriction, subject to copyright controls.

Best practices

Redact Personally Identifiable Information (PII)

Intelligence sharing involves sharing information about attackers, attack techniques, and general (i.e. industry-level) targeting, not individual targets. For privacy and ethical reasons, redact any PII of targets or victims, including but not limited to:

  • Phishing recipient names and email addresses (individuals and organizations)
  • End recipient mail servers
  • Victim domains and IP addresses (except when they are used by attackers in other attacks)

Use [REDACTED] to replace redacted material.
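
Redaction of common PII patterns can be partially scripted. A minimal Python sketch for email addresses; the pattern is intentionally loose and purely illustrative, so always review the output by hand:

```python
import re

# Loose pattern for email addresses; for illustration only, not exhaustive.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

def redact_emails(text: str) -> str:
    """Replace any email addresses in the text with [REDACTED]."""
    return EMAIL_RE.sub("[REDACTED]", text)

print(redact_emails("Sent to jane.doe@example.com on Monday"))
# Sent to [REDACTED] on Monday
```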

Attachments

Before uploading any potentially malicious attachment, such as a malware sample, place it in an encrypted archive using the industry-standard password infected. The zip format is preferred for the greatest compatibility.

Please note that attachments uploaded to this site are accessible to anyone who has their URL. For sensitive material, use an encrypted archive with a strong password, and place the password in the body of the post.
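
If you script this step, note that Python’s standard zipfile module can read password-protected zips but cannot write them; the third-party pyzipper library can. A sketch, assuming pyzipper is installed and using placeholder file names:

```python
import pyzipper  # third-party: pip install pyzipper

# Package a sample into an AES-encrypted zip with the conventional password.
with pyzipper.AESZipFile("sample.zip", "w",
                         compression=pyzipper.ZIP_DEFLATED,
                         encryption=pyzipper.WZ_AES) as zf:
    zf.setpassword(b"infected")
    zf.write("malware.bin")  # placeholder file name
```

Keep in mind that some tools only handle legacy ZipCrypto encryption, so verify that your recipients can open AES-encrypted archives.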

Defanging

Defanging means replacing components of a malicious email address or URL so they are not automatically converted into links that could be accidentally clicked. Common conventions are to replace . with [.] and/or http with hxxp. This is helpful on platforms such as email, IM, and forums. Do not defang in reports where automatic linking can be disabled, or in intel databases, so that the data can be used directly in signatures and easily copied by other analysts.
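
These substitutions are simple to automate. A minimal sketch; the function names are illustrative, and production defanging tools handle more edge cases:

```python
def defang(ioc: str) -> str:
    """Make a URL or domain non-clickable using common conventions."""
    return ioc.replace("http", "hxxp").replace(".", "[.]")

def refang(ioc: str) -> str:
    """Reverse the defanging so the indicator can be used in signatures."""
    return ioc.replace("hxxp", "http").replace("[.]", ".")

print(defang("http://evil.example/payload.exe"))
# hxxp://evil[.]example/payload[.]exe
```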

Malware samples

Whenever possible, share the actual malware sample rather than just a hash or other metadata, in case a partner is able to glean additional details from it; keep in mind that not everyone has a subscription to VirusTotal Intelligence. However, when you do share, do so with care: store the sample in an encrypted zip file, both to prevent IDS/IPS/AV from alerting on the sample when someone downloads it, and to prevent someone from accidentally opening it.

By convention, zips containing malware should use the password infected. Most sandboxes and other analysis systems will automatically try this password when an encrypted zip is submitted. Please note that many sites, including this one, serve attachments as static files that are accessible to anyone with the URL. If the sample is particularly sensitive or confidential, set a different password and include it in your message.
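
On the receiving side, Python’s standard library can extract a legacy (ZipCrypto) password-protected archive without third-party dependencies; AES-encrypted zips need a library such as pyzipper. A minimal sketch:

```python
import zipfile

# Extract a password-protected sample into an isolated analysis directory.
with zipfile.ZipFile("sample.zip") as zf:
    zf.setpassword(b"infected")
    zf.extractall("analysis_dir")  # placeholder directory name
```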

When working with malware, keep in mind that it may be possible for a sample to compromise one of your analysis tools, even if the sample targets a different OS. For example, there was an exploitable bug in a library used by the strings command, which is included in many Linux distributions and is one of the first tools an analyst or sandbox runs. So keep your malware analysis host up-to-date, and use an environment that is as isolated as possible.

Timestamps

Always include the timezone with timestamps. UTC/GMT is greatly preferred for uniformity and simplicity.
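
For example, Python can produce an unambiguous ISO 8601 timestamp in UTC directly:

```python
from datetime import datetime, timezone

# ISO 8601 timestamp in UTC; the +00:00 suffix makes the zone explicit.
print(datetime.now(timezone.utc).isoformat(timespec="seconds"))
# e.g. 2017-05-04T18:30:00+00:00
```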

Definitions

Event

An individual attack or recon incident.

Tactics, Techniques, and Procedures (TTPs)

A description of a defining characteristic of a campaign. Examples:

  • Uses compromised WordPress sites to host C2
  • Uses AutoIt
  • Scans web apps before attacking
  • Traffic originates from China

Campaign

A series of events that fall on the same timeline and/or have related TTPs. Often given a code name.

Threat actor

A person, group or entity suspected to be responsible for a campaign.

OSINT

Open-source intelligence, not to be confused with open-source software. Intelligence obtained or derived from public, unclassified sources, such as a website or Shodan.

Actionable Intelligence

Actionable intelligence, also known as high-fidelity indicators, consists of details that can be reliably used in signatures and other detection methods, and can reasonably be expected to produce few false positive alerts. While you should document all details of an attack, not every detail will be directly actionable. For example, a malware family may spoof a common user agent string in its communications, such as one belonging to a version of Internet Explorer. A signature for that string would generate many false positives, so the user agent string is not actionable. However, it is still worth documenting, so you can observe if and when it changes over time.

Phishing

A form of email, distinct from spam, that aims to trick the user into taking an action and/or making disclosures that put information at risk. May or may not be opportunistic.

Spear phishing

Phishing that is highly targeted, and uses specific information about the targeted individual or organization in its lure.

Watering hole

A legitimate website that has been compromised to distribute malware to, collect information from, or redirect its specific audience. Not to be confused with drive-by malvertising, which is the opportunistic abuse of advertising networks, and is not targeted at specific sites.

Air gapped

A system or network that has no physical or wireless method of accessing the internet. This is a common or required practice for systems controlling industrial equipment, or systems that contain restricted information or trade secrets.

Sneakernet

A method of distributing malware or other data by using removable media (thus having to move from machine to machine in your sneakers). Frequently used for both legitimate and illegitimate purposes, especially with air gapped networks.

USB phishing

Distributing malicious USB devices such as flash drives, in a conspicuous area such as a parking lot, where they might be picked up and used by unwitting users. A common method to attempt to “bridge” an air gap.

The Lockheed Martin Cyber Kill Chain

The Lockheed Martin Cyber Kill Chain (commonly referred to in information security as The Kill Chain) is a way of modeling cyber attacks. It is described in the paper Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains. The basic concept: in order to properly defend against persistent threats, you should collect and document everything you have on an attack, and adapt your countermeasures to match the tactics, techniques, and procedures (TTPs) at each stage of the attacker’s process.

The attacker will inevitably change tactics in an attempt to bypass your controls. Because the attacker is likely to change a few TTPs rather than their entire approach, you will probably be able to stop, or at least detect, the next attack. Even when an attack has been successfully blocked or detected, gather information and adapt to those methods as well. Each time the attackers try something new at a different phase of the attack, that is your opportunity to evaluate your defenses and update your signatures, so that you can identify all of the combinations of TTPs the attackers have been known to try, successful and unsuccessful. Keep your intelligence up-to-date to account for shifts in TTPs. That way, you are using an attacker’s persistence against them: the more they try, the more you learn.

Phases of the Kill Chain

Reconnaissance

Gathering information about targets. Using public sources, such as company websites, press releases, and social media to learn the organizational structure, communication formatting, strategic partnerships, and commonly used websites and services. Using Shodan to passively gather information on network infrastructure and applications. Running active scans (preferably through some kind of proxy) to look for vulnerabilities in more detail.

To defend against reconnaissance, train your employees to be aware of the risks of posting detailed information about their roles on social media. Perform counter-reconnaissance by examining the logs of your public-facing websites, so you can see whether someone seems particularly interested in a product or service, especially shortly before an attack. To aid your search, start by looking at traffic from areas that you do not service; however, do not limit your searching to those areas, because origins can be masked through proxies. Create detailed social media profiles for fake employees who work with R&D or other valuable data as a social engineering honeypot. Use Intrusion Detection Systems (IDSs) that alert on recon activity.
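
As a starting point for counter-reconnaissance, you can rank clients of a public-facing site by request volume and investigate the noisiest ones. A rough sketch, assuming an Apache/nginx-style access log; the file path and follow-up steps are illustrative:

```python
import re
from collections import Counter

# The client IP is the first field in common/combined log formats.
LOG_RE = re.compile(r"^(\S+) ")

hits = Counter()
with open("access.log") as log:  # placeholder path
    for line in log:
        match = LOG_RE.match(line)
        if match:
            hits[match.group(1)] += 1

# The noisiest clients are candidates for a closer look, e.g. GeoIP
# lookups or comparison against known proxy/VPN ranges.
for ip, count in hits.most_common(10):
    print(f"{ip}\t{count}")
```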

Weaponization

Creating the infrastructure, tools, and payloads necessary to carry out the attack.

This is the hardest phase to gather intelligence on and defend against, because it largely takes place in private, before any attacks occur.

To monitor this phase, you can use a service such as VirusTotal Intelligence to track new samples as attackers test them or victims upload them.

Delivery

Delivering payloads to targets via phishing, watering holes, USB, etc.

This is one of the most important phases in the Kill Chain. There are only a few ways an attacker can deliver malicious payloads, versus many different kinds of exploits. By focusing defensive security controls on this phase (e.g. email sandboxing, web filtering), you make it easier to stop attacks from advancing by stopping them before they can really start, rather than depending solely on controls that are easier to bypass, such as antivirus.

For a phishing email, collect the following (a header-parsing sketch follows this list):

  • Full email headers (redact recipients for privacy); pay special attention to these, as they can make good indicators:
    • From name and/or email address (and whether it was spoofed)
    • Reply-To name and/or email address (and whether it was spoofed)
    • Subject
  • EHLO
  • X-Mailer
  • X-Originating-IP
  • Mail server addresses
  • Date (Include time zone)
  • Message boundary
  • Email body
  • URLs
  • Attachments (in an encrypted zip, password protected)
  • Aggregate targeting information (e.g. one in marketing; three in engineering)
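
Python’s standard email package can pull most of these header fields out of a saved message. A minimal sketch, assuming the phish was saved as sample.eml (a placeholder name); remember to redact recipient addresses before sharing:

```python
from email import policy
from email.parser import BytesParser

# Parse a raw .eml message saved from the mail client or gateway.
with open("sample.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# Headers of particular interest as potential indicators.
for name in ("From", "Reply-To", "Subject", "X-Mailer",
             "X-Originating-IP", "Date"):
    print(f"{name}: {msg[name]}")
```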

Phishing can also occur over text message/SMS (smishing). Collect:

  • The number it was sent from
  • The time it was sent
  • The message body
  • Any URLs or attachments
  • Aggregate targeting information (e.g. one in marketing; three in engineering)

Or social media:

  • The name of the social network
  • Name of the attacker’s profile
  • The URL of the attacker’s profile
  • Date/time of contact (Include time zone)
  • Message content
  • Any URLs or attachments
  • Aggregate targeting information (e.g. one in marketing; three in engineering)

OPSEC Warning: LinkedIn records visitors to a profile for the owner to view.

For phone calls, report:

  • The time of the call
  • The caller ID
  • The conversation
  • The duration
  • Perceived caller characteristics (e.g. gender, accent, and emotional state)
  • Aggregate targeting information (e.g. one in marketing; three in engineering)

When reporting a watering hole attack, include:

  • Time first observed
  • Whether it is still active
  • Selectivity (i.e. whether it only attacks certain configurations)
  • The URL of the compromised page
  • The method of delivery (iframe, plugin, JavaScript, etc.)
  • Sample defanged HTML (wrapped in <pre></pre> tags)
  • Any malicious URLs
  • Attach any malicious files in encrypted zips
  • Aggregate victim information (e.g. one in marketing; three in engineering)

When reporting a USB attack, include:

  • The OS targeted
  • Execution method (e.g. Autorun, .lnk)
  • Picture of the drive (if possible)
  • Malware sample in an encrypted zip

With this information, you can configure rules on proxies, email gateways, and IDS to stop and/or alert on future deliveries using tools like YARA and Suricata.
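
For example, a delivery-stage indicator such as a distinctive string in an attachment can be turned into a YARA rule and checked with the third-party yara-python bindings. A minimal sketch; the rule, string, and file name are made-up placeholders:

```python
import yara  # third-party: pip install yara-python

# A hypothetical rule matching a string seen in a malicious attachment.
RULE_SOURCE = r'''
rule example_dropper
{
    strings:
        $s1 = "placeholder-c2-string" ascii
    condition:
        $s1
}
'''

rules = yara.compile(source=RULE_SOURCE)
for match in rules.match("attachment.bin"):  # placeholder path
    print(match.rule)
```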

Exploitation

Exploit vulnerable software and/or exploit users through social engineering.

Track the use of exploits to prioritize patching:

  • CVE numbers (if available)
  • Affected applications
  • Social engineering styles

Installation

Include:

  • File hashes
  • Method of execution (e.g. run by dropper, injected into a process)
  • Any persistence methods used (e.g. Registry keys, scheduled tasks, services installed, files replaced)
  • Any attempts to open mutexes or named pipes
  • Any files dropped
  • Modified registry keys/values/data
  • YARA rules that match files on the disk or values in memory
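
For the file hashes, Python’s standard library can produce the commonly shared digests consistently. A minimal sketch (the file name is a placeholder):

```python
import hashlib

def hash_sample(path: str) -> dict:
    """Compute MD5, SHA-1, and SHA-256 for a file, reading in chunks."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for digest in digests.values():
                digest.update(chunk)
    return {name: digest.hexdigest() for name, digest in digests.items()}

for name, value in hash_sample("malware.bin").items():  # placeholder path
    print(f"{name}: {value}")
```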

Command and Control (C2)

The malware establishes communication with attacker-controlled infrastructure.

Include:

  • Any IP addresses contacted/DNS queries made
  • The protocols used
  • Full headers (if applicable)
  • HTTP method and path (if applicable)
  • An example beacon or other traffic
  • A Suricata rule to match

Warning: Take care to redact any sensitive information that may be present in the sample traffic. Make redacted locations clear.
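
As an illustration of the Suricata rule item above, here is a sketch that stores a hypothetical HTTP beacon rule and appends it to a local rules file. Every value (URI, message, sid) is a placeholder, not a real indicator; substitute the details observed in your own traffic:

```python
# A hypothetical Suricata rule for an HTTP beacon with a fixed method
# and URI path. All values here are made-up placeholders.
C2_BEACON_RULE = (
    'alert http $HOME_NET any -> $EXTERNAL_NET any ('
    'msg:"Example C2 beacon - placeholder"; '
    'flow:established,to_server; '
    'content:"POST"; http_method; '
    'content:"/update/check.php"; http_uri; '
    'classtype:trojan-activity; sid:1000001; rev:1;)'
)

# Append to a local rules file for Suricata to load.
with open("local.rules", "a") as rules_file:
    rules_file.write(C2_BEACON_RULE + "\n")
```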

Actions on Objectives

Actions that the attackers take after establishing control

Examples:

  • Dropping more tools
  • Lateral movement
  • Sending email
  • Deleting data
  • Encrypting data
  • Keylogging
  • Data exfiltration

Conclusion

Studying all of the phases of an attack can provide insights into an attacker’s goals and intentions, which can help you build a more complete narrative of an attack.

Once you have everything documented, you can create rules based on the intel that you collected.

How to share intel on this site

Create a post detailing your findings in one of the three Intelligence TLP subcategories of your choice (i.e. Amber, Green, or White/OSINT).

