May 4, 2017
Across all of the nation-state targeted attacks, insider thefts, and criminal enterprises that CrowdStrike® has investigated, one thing is clear: logs are extremely important. Event logs from individual computers provide information on attacker lateral movement, firewall logs show the first contact of a particular command and control domain, and Active Directory authentication logs build a timeline of user accounts moving throughout the network. Sounds great, right? It is, as long as all the logs are saved and searchable.
Logs are an effective data source for identifying targeted attacks and determining what actions an attacker has taken, so they need to be protected. Because some attackers attempt to remove all traces of their activity, it is critical that logs be centralized, making complete removal of log data far more difficult.
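One simple way to get a copy of each event off the originating host is to forward it to a central syslog collector as it is written. A minimal sketch using Python's standard library, where the collector hostname and port are placeholders for your own CLM/SIEM ingestion endpoint (UDP syslog is shown for brevity; production deployments typically prefer TCP or TLS transports):

```python
import logging
import logging.handlers

def make_forwarding_logger(host: str = "loghost.example.com", port: int = 514) -> logging.Logger:
    """Return a logger that also ships events to a central syslog collector.

    The hostname and port are illustrative assumptions, not a real endpoint.
    """
    logger = logging.getLogger("central-audit")
    logger.setLevel(logging.INFO)
    # SysLogHandler sends syslog datagrams to the remote collector, so a
    # copy of every event exists off-host even if local logs are wiped.
    handler = logging.handlers.SysLogHandler(address=(host, port))
    logger.addHandler(handler)
    return logger
```

Because the copy leaves the host immediately, an attacker who clears local event logs afterward cannot touch what the collector has already received.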
Enterprises also need to anticipate future investigations and determine where their weaknesses might be. For instance, an organization that does not retain DHCP logs will struggle to trace older activity back to a specific internal system. In the context of targeted attacks, businesses that track failed VPN logins might spot a pattern and receive early warning that an attacker is knocking on the door.
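The failed-VPN-login pattern described above can be surfaced with a simple sliding-window count over authentication events. A minimal sketch, where the threshold, window size, and event tuple layout are illustrative assumptions:

```python
from collections import defaultdict
from datetime import timedelta

def flag_bruteforce(events, threshold=5, window=timedelta(minutes=10)):
    """Return source IPs with >= `threshold` failed VPN logins inside `window`.

    `events` is an iterable of (timestamp, source_ip, success) tuples,
    assumed already sorted by timestamp. Thresholds here are hypothetical;
    tune them to your own baseline of failed logins.
    """
    flagged = set()
    recent = defaultdict(list)  # source_ip -> timestamps of recent failures
    for ts, ip, success in events:
        if success:
            continue
        bucket = recent[ip]
        bucket.append(ts)
        # Drop failures that have fallen out of the sliding window.
        while bucket and ts - bucket[0] > window:
            bucket.pop(0)
        if len(bucket) >= threshold:
            flagged.add(ip)
    return flagged
```

An IP that fails five times in ten minutes is flagged, while occasional scattered failures age out of the window and never trip the alert.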
At a high level, CrowdStrike recommends organizations collect remote access logs, Windows Event Logs, network infrastructure device logs, Unix system logs, Firewall event logs, DHCP logs, and DNS debug logs. Businesses intent on using logs for troubleshooting and investigation should strive to collect and store the items below.
| Log Source | Log Types | Retention Period |
| --- | --- | --- |
| DNS Logs | Requests | 3 months |
| Windows Event Logs | Application, Security, & System | 12 months |
| Web Proxy Logs | Access, Errors | 6 months |
| Active Directory Authentication Logs | Authentication | 6 months |
| Remote Access Authentication Logs | Authentication | 6 months |
| DHCP Lease Logs | Lease information | 12 months |
| Router Logs | Netflow | 3 months |
| IDS/IPS Alert Logs | Connections, Access | 12 months |
| VPN Logs | Connections, Access | 12 months |
| Two-Factor Authentication Logs | Connections, Access | 12 months |
| SNMP Logs | Audit | 6 months |
| Firewall Logs | Connections, Access, Health | 3 months |
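The retention targets in the table can be encoded as a simple purge policy that storage tooling can enforce. A minimal sketch, where the source keys and the 30-days-per-month approximation are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Retention periods from the table above, expressed in months.
# The dictionary keys are hypothetical identifiers, not vendor names.
RETENTION_MONTHS = {
    "dns": 3,
    "windows_event": 12,
    "web_proxy": 6,
    "ad_auth": 6,
    "remote_access_auth": 6,
    "dhcp_lease": 12,
    "router_netflow": 3,
    "ids_ips": 12,
    "vpn": 12,
    "two_factor": 12,
    "snmp": 6,
    "firewall": 3,
}

def is_expired(source: str, log_date: datetime, now: datetime) -> bool:
    """True if a log record from `source` is past its retention window.

    A month is approximated as 30 days to keep the sketch simple.
    """
    months = RETENTION_MONTHS[source]
    return now - log_date > timedelta(days=30 * months)
```

Expressing retention as data rather than ad-hoc cron jobs makes it easy to audit the policy against the table and to adjust a single source without touching the rest.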
Beyond the logs themselves, it is critical for organizations to be able to aggregate, correlate, monitor, and analyze event logs from multiple sources in a network. Many papers have been written about central log management (CLM) and security information and event management (SIEM) systems, and it is impossible to do the topic justice in a short blog post given the importance and complexity of such a project. However, the impact of an efficient CLM/SIEM is immediately evident when an investigation occurs.
Mature companies are also able to identify log sources or individual events that do not need to be kept, because too much information can be its own liability. If data cannot be quickly retrieved from the CLM/SIEM or other log aggregation platform, then collecting it is not helping the company. Choosing and managing a log correlation engine is a difficult but necessary project.
To highlight the usefulness of logs, consider a recent CrowdStrike engagement in which we assisted a client with an investigation into a malicious insider. An employee in IT had decided to delete an entire SAN volume while out of the office. Our analysts used six different sets of logs to build a complete timeline of events for the client. Because the client had robust logging in place and completed timely searches in response to our queries, we were able to quickly sequence the malicious actions. A portion of the timeline appears below to show how combining different log sets can provide a clear view of events.
| Date/Time (UTC) | Description | Source |
| --- | --- | --- |
| 2015-10-15 14:49:33 | Suspected employee from external IP Address 204.32.xx.xx started VPN session as USER-A and was assigned internal IP Address 10.x.xx.101. | VPN Logs |
| 2015-10-15 14:51:20 | IP Address 10.x.xx.101 initiated Remote Desktop session with system 10.x.xx.202. | Netflow Logs |
| 2015-10-15 14:51:25 | Suspected employee logged into the desktop workstation with IP Address 10.x.xx.202 as USER-B. | Active Directory Authentication Logs |
| 2015-10-15 | DHCP logs showed IP Address 10.x.xx.202 was previously assigned to hostname ABC-123, a desktop computer belonging to USER-C. | DHCP Logs |
| 2015-10-15 14:53:46 | IP Address 10.x.xx.202 successfully logged into SAN using the Administrator account. | SAN Logs |
| 2015-10-15 14:54:32 | SAN Volume “ImportantDatastore” deleted. | SAN Logs |
| 2015-10-15 14:55:03 | VPN session for USER-A ended. | VPN Logs |
Through log analysis, we were able to show the actions of an outside user who logged into the VPN, began an RDP session with a desktop computer assigned to a different user, used that desktop to log into the SAN, and deleted an entire SAN volume. Without good log aggregation, the company would not have been able to produce the entries above.
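A timeline like the one above can be produced by normalizing each source's records to (timestamp, source, description) tuples and merging the streams in time order. A minimal sketch, assuming each per-source stream is already sorted:

```python
import heapq

def build_timeline(*sources):
    """Merge already-sorted event streams from multiple log sources.

    Each source is an iterable of (timestamp, source_name, description)
    tuples sorted by timestamp. heapq.merge interleaves the streams
    lazily without re-sorting the combined data.
    """
    return list(heapq.merge(*sources, key=lambda e: e[0]))
```

In practice the hard part is the normalization step: parsing each format and converting every timestamp to a common time zone (UTC, as in the table above) before the merge.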
Aggregating logs is critical to successful troubleshooting and investigation. Businesses need to invest time in deciding what to collect, what not to keep, and how to search older records. Organizations should conduct regular self-assessments to determine whether the current level of logging is sufficient across the enterprise and to identify gaps that can be fixed. Too often, businesses do not see their deficiencies until an investigation occurs and it is too late to fix them. We have found that organizations that conduct tabletop or similar simulation exercises are much better prepared than those that do not undertake periodic assessments.
This guest blog was written by Matt Churchill, who manages the Technical Operations team for CrowdStrike, helping to drive innovation and build a world class forensic lab and services. Matt can be reached at https://www.linkedin.com/in/mchurchill/.
This blog post – part 1 – focuses primarily on logging within a traditional on-prem datacenter. Stay tuned for an upcoming post – part 2 – that will discuss logging within AWS IaaS.