At Elastic, we internally use, test, and provide feedback on all of our products. For example, the Information Security team helps the Product team build a stronger solution for our customers. The InfoSec team is an extremely valuable resource that acts not only as an extension of Quality Assurance/Testing, but also as a data custodian. In fact, our internal detections team uses internal Elastic InfoSec data to help build and test detection rules that ultimately find their way into the Elastic Security product.

Last month, I was afforded the wonderful opportunity of “riding along” with our InfoSec team to better understand how we use Elastic internally. Over the course of three days, spread across three weeks, I saw the underbelly of our internal systems, how they are used, and how the team uses Elastic Security every day.

[At the time of this ride-along, version 7.12 of the Elastic Stack had been released and running in the wild for well over a month. However, the InfoSec team usually runs several minor iterations ahead of general availability.]

Day one: Lay of the land

On day one, I learned about all the tools InfoSec uses to keep Elasticians safe. These include, but are not limited to, Case Management (Hive), Identity Management (Okta), various Threat Intelligence feeds, Slack, and Elastic Endgame.

Internal Elastic data sources being pulled into Elasticsearch range from cloud-specific logs (AWS CloudTrail, Azure Activity/Diagnostic Logs, GCP Stackdriver) to network-specific logs (load balancer, proxy, web server, GitHub, VPC Flow, authentication, and vulnerability) to more host-specific logs (Auditbeat/Filebeat, Endpoint Protection and Telemetry).

With all this data stored and available for searching, one key piece of functionality the InfoSec team relies on every day is cross-cluster search. With this setup, a single cluster serves as the search head, which can query and alert on events across all additional clusters (sketched below, after the list of job types).

All out-of-the-box detection rules (currently numbering 525+) are enabled and running against the corresponding data sources. As a best practice, InfoSec focuses on cloud detections first (AWS, Azure, GCP). Based on industry trends, they also place a specific emphasis on living-off-the-land binary (LOLBin) detections. In addition to the standard machine learning jobs, InfoSec leverages 15+ custom jobs that pinpoint rare environmental occurrences. These machine learning job types include:
Process/Executable (Process by System/Provider/Team)
Process arguments by Process
Process by Execution Location
Login Location (Geo and IP)
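To make this more concrete, here is a minimal sketch of how a search-head cluster might be wired up for cross-cluster search and how a custom job of the “rare process by host” flavor could be defined, using the Python Elasticsearch client. The endpoints, credentials, cluster alias, job name, index patterns, and field choices are illustrative assumptions, not InfoSec’s actual configuration.

```python
from elasticsearch import Elasticsearch

# Connect to the "search head" cluster (endpoint and credentials are placeholders).
es = Elasticsearch("https://search-head.example.internal:9200", api_key="<redacted>")

# Register a remote cluster so it can be queried with cross-cluster search.
# The alias "cloud_logs" and the seed address are illustrative.
es.cluster.put_settings(
    persistent={"cluster.remote.cloud_logs.seeds": ["cloud-logs.example.internal:9300"]}
)

# A cross-cluster search: local indices and remote ones in a single query.
es.search(
    index="filebeat-*,cloud_logs:filebeat-*",
    query={"match": {"event.action": "ConsoleLogin"}},
    size=10,
)

# A custom anomaly detection job in the spirit of the list above:
# flag processes that are rare for a given host.
es.ml.put_job(
    job_id="rare-process-by-host",
    analysis_config={
        "bucket_span": "15m",
        "detectors": [
            {
                "function": "rare",
                "by_field_name": "process.name",
                "partition_field_name": "host.name",
            }
        ],
        "influencers": ["host.name", "user.name"],
    },
    data_description={"time_field": "@timestamp"},
)

# Feed the job from the remote cluster via a cross-cluster index pattern.
es.ml.put_datafeed(
    datafeed_id="datafeed-rare-process-by-host",
    job_id="rare-process-by-host",
    indices=["cloud_logs:logs-*"],
    query={"match_all": {}},
)
es.ml.open_job(job_id="rare-process-by-host")
es.ml.start_datafeed(datafeed_id="datafeed-rare-process-by-host")
```

Because datafeeds accept cross-cluster index patterns, the same search-head cluster that runs detections can also feed the machine learning jobs without copying data between clusters.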
Currently, detection alerts are forwarded into a generalized Slack channel.
In general, each alert consists of:
A high-level description of the event
A hyperlink to the Kibana alert
3-4 key information fields (offending source, acting process, etc.)
While internally at Elastic we leverage Slack for notifications, there are several alternative detection rule notification paths available to customers, such as email, PagerDuty, Jira, and generic webhooks.
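As a hedged illustration of how such a notification gets wired up, the sketch below attaches a Slack action to an existing detection rule through Kibana’s detection engine API. The Kibana URL, API key, rule_id, and connector id are placeholders; swapping the action_type_id (for example to .email or .webhook) would route the same notification elsewhere.

```python
import requests

KIBANA = "https://kibana.example.internal:5601"      # placeholder Kibana endpoint
HEADERS = {
    "kbn-xsrf": "true",                              # required by Kibana APIs
    "Authorization": "ApiKey <redacted>",            # placeholder credentials
}

# Attach a Slack action to an existing detection rule (rule_id is a placeholder).
patch = {
    "rule_id": "example-cloudtrail-rule",
    "actions": [
        {
            "group": "default",
            "id": "<slack-connector-id>",            # pre-configured Slack connector
            "action_type_id": ".slack",
            "params": {
                "message": (
                    "{{context.rule.name}} fired: {{state.signals_count}} alert(s)\n"
                    "{{{context.results_link}}}"
                )
            },
        }
    ],
}

resp = requests.patch(f"{KIBANA}/api/detection_engine/rules", json=patch, headers=HEADERS)
resp.raise_for_status()
```

The mustache variables in the message template are roughly where the high-level description and the Kibana hyperlink described above come from.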
An analyst will pick up the alert, then pivot into Kibana and the case management tool to start the triage process. This triage work is largely a process of elimination: determining what the alert or event actually represents and what needs to be done about it. Analysts pull out indicators of compromise (IoCs) to cross-reference and correlate against external sources such as VirusTotal and URLscan.io.
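A minimal sketch of that pivot, assuming the IoC is a SHA-256 hash taken from the alert, the same hypothetical search-head cluster as above, and a VirusTotal API key; the index patterns, ECS field names, and credentials are placeholders rather than the team’s actual workflow.

```python
import requests
from elasticsearch import Elasticsearch

# Placeholder connection to the search-head cluster used for cross-cluster search.
es = Elasticsearch("https://search-head.example.internal:9200", api_key="<redacted>")

ioc_sha256 = "<sha256-of-suspicious-binary>"         # IoC pulled from the alert

# 1. Scope the indicator: where has this hash been seen across all clusters?
#    The leading "*:" fans the query out to every configured remote cluster.
resp = es.search(
    index="logs-*,*:logs-*",
    query={"term": {"process.hash.sha256": ioc_sha256}},
    size=100,
)
hosts = {hit["_source"].get("host", {}).get("name") for hit in resp["hits"]["hits"]}
print(f"Indicator observed on {len(hosts)} host(s):", sorted(h for h in hosts if h))

# 2. Enrich the indicator against VirusTotal (v3 file lookup; API key is a placeholder).
vt = requests.get(
    f"https://www.virustotal.com/api/v3/files/{ioc_sha256}",
    headers={"x-apikey": "<virustotal-api-key>"},
)
if vt.ok:
    stats = vt.json()["data"]["attributes"]["last_analysis_stats"]
    print("VirusTotal verdicts:", stats)
```

Running the scoping query through the search head means a single lookup covers every data source listed earlier.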