
Monitoring and Alerting

AWS Security Best Practices

© 2022, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Threats are continuously changing. For many organizations, they are increasing in volume and severity as
operations become more and more dependent on digital resources. Visibility of these threats is important to an
organization's ability to respond, and in some cases is a legal requirement. Amazon Web Services provides
several ways that you can monitor your resources, create alerts, and even automate remediation activities.

In this module, you will learn about monitoring and alerting for your AWS environment, based on various best practices, frameworks, and standards.
Module 4

Objectives
By the end of this module, you will be able to do the following:
• Configure service and application logging.
• Analyze logs, findings, and metrics centrally.
• Automate response to events as much as possible.

Agenda
This module is organized into the following sections:
• Logging network traffic
• Logging user and API traffic
• Visibility with Amazon CloudWatch
• Enhancing monitoring and alerting
• Verifying your AWS environment


By the end of this module, you will be able to do the following:

• Configure service and application logging.


• Analyze logs, findings, and metrics centrally.
• Automate response to events as much as possible.
Logging network traffic
Section 1 of 5

VPC Flow Logs

What they are
• VPC Flow Logs capture packet metadata such as the source IP address, destination IP address, ports, protocol, and packet size.
• Flow logs cannot monitor packet contents (payload or application layer data).
• They are not real-time; they use an aggregation interval for capture.
• Some types of traffic traversing your network are NOT captured by flow logs.
• They have no effect on network throughput or latency.

Best practices
• VPC flow logging should be enabled for packet rejects for all VPCs.
• Flow logging is instrumental to network traffic investigations.
• AWS Config has a rule to check whether a VPC has flow logging enabled.


VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from
network interfaces in your VPC. Flow logs can help you with many tasks, such as diagnosing overly restrictive
security group rules or monitoring the traffic that is reaching your instance.

VPC Flow Logs can be turned on per elastic network interface, per subnet, or per virtual private cloud (VPC) to help you with several tasks, such as the following:
• Diagnosing overly restrictive security group rules
• Monitoring the traffic that is reaching your instance and determining the direction of the traffic to and from
the network interfaces

Turning on VPC Flow Logs for an entire VPC or subnet can generate a very large volume of logs, so you should do the following:
• Filter for desired results based on need. Think before turning on VPC Flow Logs for an entire VPC or subnet. (Will you use it?)

Flow logs can be sent to an Amazon Simple Storage Service (Amazon S3) bucket or to Amazon CloudWatch Logs, where you can set up alarms or visualize the data.
• Use S3 Lifecycle policies to manage large amounts of log data by moving logs to the appropriate storage tier
or expiring log files that are no longer needed.
• Query logs in Amazon S3 using Amazon Athena, or analyze data with CloudWatch Logs Insights.

Not all traffic traversing your network is captured by VPC Flow Logs. Types of traffic that are not captured
include:
• Traffic destined to the Amazon Domain Name System (DNS) server
• Windows instance traffic for Amazon Windows license activation
• DHCP traffic
• Mirrored traffic
• Traffic to and from 169.254.169.254 for instance metadata
• Traffic to and from 169.254.169.123 for the Amazon Time Sync Service
• Traffic to the reserved IP address for the default VPC router
• Traffic between an endpoint network interface and a Network Load Balancer network interface
Anatomy of a log

Default format
• You cannot customize or change the default format.

Custom format
• Specify the fields and their order included in flow log records (any number, but at least one field is required).
• Simplify log processing.

Default format example (numbered callouts 1–7 mark the fields described in the notes below)

2 123…0 eni-12ab…9 172.31.9.2 172.31.1.6 49761 3389 6 20 4249 141…1 141..9 REJECT OK

Longer fields in the example above have been truncated using “…” to allow the entire log to be shown on a single line.


Each network interface that produces a flow log is assigned its own unique log stream. Although flow logs do
not capture real-time log streams for your network interfaces, they can still provide valuable information for
security monitoring, alerting, or troubleshooting. Logs can be collected and stored in a default or custom
format. With the default format, the flow log records include version 2 fields, in the order shown in the example
log on the following slide. Later versions added additional fields that can be used with custom logs. You cannot
customize or change the default format. To capture any additional fields or a different subset of the default
fields, you must specify a custom format.

With a custom format, you can specify which fields are included in the flow log records and in which order. This
way, you can create flow logs that are specific to your needs and omit fields that are not relevant. You can
specify any number of the available flow log fields, but you must specify at least one. Additional fields include a
variety of information such as Region, az-id, tcp-flags (set), traffic-path, flow-direction, and more.

The example log on the slide shows Remote Desktop Protocol or RDP traffic (destination port 3389, TCP
protocol 6) sent to network interface eni-12abb8ca123456789 in account 123456789010 was rejected. Some of
the important fields in the log are as follows:
1. account-id: This is the AWS account ID of the owner of the source network interface for which traffic is
recorded. If the network interface is created by an AWS service (for example, when creating a VPC endpoint
or Network Load Balancer), the record may display unknown for this field.
2. interface-id: This is the ID of the network interface for which the traffic is recorded.
3. srcaddr: This is the source address for incoming traffic, or the IPv4 or IPv6 address of the network interface
for outgoing traffic on the network interface.
4. dstaddr: This is the destination address for outgoing traffic, or the IPv4 or IPv6 address of the network
interface for incoming traffic on the network interface.
5. srcport: This is the source port of the traffic.
6. dstport: This is the destination port of the traffic.
7. action: This is the action that is associated with the traffic. It can be either accept or reject, based on
whether the traffic was allowed through filtering mechanisms.
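The field breakdown above can be illustrated with a short parser for the version 2 default format. This is a minimal sketch; the field names follow the default-format order, and the sample timestamps are hypothetical (the account ID and interface ID are taken from the example in the notes):

```python
# Minimal parser for a version 2 (default format) VPC Flow Log record.
# Sample start/end timestamps below are hypothetical, for illustration only.
DEFAULT_FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_flow_log_record(line):
    """Split a space-delimited default-format record into a field dict."""
    values = line.split()
    if len(values) != len(DEFAULT_FIELDS):
        raise ValueError(f"expected {len(DEFAULT_FIELDS)} fields, got {len(values)}")
    return dict(zip(DEFAULT_FIELDS, values))

record = parse_flow_log_record(
    "2 123456789010 eni-12abb8ca123456789 172.31.9.2 172.31.1.6 "
    "49761 3389 6 20 4249 1418530010 1418530070 REJECT OK"
)
# Rejected RDP traffic: destination port 3389, TCP protocol 6.
print(record["dstport"], record["action"])
```

A parser like this is one way to pre-filter records before loading them into an analytics tool; for S3-delivered logs at scale, Athena queries are usually the better fit.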
Traffic Mirroring
Using traffic mirroring provides a detective control that
allows you to send your traffic to out-of-band security
appliances for the following:
• Content inspection
• Threat monitoring
• Troubleshooting


While VPC Flow Logs can be used for basic flow analysis, they lack full packet-level information. Traffic Mirroring allows you to copy traffic from an Amazon Elastic Compute Cloud (Amazon EC2) network interface and send it to a supported target. You can now “sniff” the cloud network traffic traveling in and out of your EC2 instances. The copied traffic can be sent to a security or monitoring device for inspection, threat monitoring, or even troubleshooting.
Reasons for Traffic Mirroring
• Detect network and security anomalies
• You can extract traffic of interest from any workload in a VPC and route it to the detection
tools of your choice. You can detect and respond to attacks more quickly than is possible with
traditional log-based tools.
• Implement compliance and security controls
• You can meet regulatory and compliance requirements that mandate monitoring, logging, and
so forth.


Traffic Mirroring is another way to perform monitoring. From a security perspective, you can use it to deploy out-of-band intrusion detection and analysis tools. Prior to the availability of Traffic Mirroring, there was no way to look at your traffic as a bit-by-bit copy. The only options were to route traffic through another instance, which essentially changes some of the information within the traffic, or to deploy local collection agents on instances.

Target intrusion detection devices or analysis tools can be deployed as individual instances or as a fleet of instances behind a Network Load Balancer. Traffic Mirroring also supports filters and packet truncation, so you extract only traffic of interest.

Note: When turned on, Amazon GuardDuty performs some network threat and anomaly detection using VPC Flow Log data, but it is limited by the contents of a flow log. Remember, flow logs do not capture an exact copy of traffic; application layer data, for example, is not included in a VPC flow log. GuardDuty is still an important tool for layering defenses, providing anomaly detection on AWS API calls through AWS CloudTrail analysis.

Learn more about traffic mirroring at https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html
Traffic Mirroring components
• Target – The destination for mirrored traffic: a single instance, appliance, or a load balancer connecting to a fleet
• Filter – A set of rules that defines the traffic that is of interest and will be copied in the traffic mirror session
• Session – An entity that describes Traffic Mirroring from a source to a target using filters

(Diagram: monitored EC2 instances in a VPC private subnet mirror traffic to target EC2 instances.)


Unlike AWS-native monitoring services, out-of-band, third-party intrusion detection or analysis solutions require the use of Traffic Mirroring. If you are going to implement Traffic Mirroring, you should be familiar with the basic components as shown on the slide.
Logging user and API traffic
Section 2 of 5

AWS CloudTrail functions
• Simplify compliance audits by automatically recording and storing
activity logs for an AWS account.
• Increase visibility into user and resource activity.
• Discover and troubleshoot security and operational issues by
capturing a comprehensive history of changes that occurred in an
AWS account.

AWS CloudTrail tracks the who, what, where, and when of activity that occurs in your AWS environment and records this activity in audit logs.


CloudTrail is turned on in your AWS account when you create it. When activity occurs in your AWS account, that
activity is recorded in a CloudTrail Event. With CloudTrail being turned on by default, you can log into CloudTrail
and review your Event History. In this view, not only do you see the last 7 days of events, you can also select a
specific event to view more information about it.

To access your CloudTrail log files directly or archive them for auditing purposes past the 7-day window, you can
create a specific trail and specify the S3 bucket for log file delivery. Creating a trail (as opposed to just viewing
the default CloudTrail information) also lets you deliver events to CloudWatch Logs and CloudWatch Events for
further action.
Security benefits and uses
• Perform security analysis and detect
behavior patterns by ingesting
CloudTrail API call history into log
management and analytics solutions
• Maintain compliance with internal
policies or regulatory standards
• Detect malicious activities and integrate
other AWS services to automate
remediation


Security analysis: Perform security analysis and detect user behavior patterns by ingesting CloudTrail API call
history into log management and analytics solutions such as CloudWatch Logs, CloudWatch Events, Athena,
Amazon OpenSearch Service, or another third-party solution.

Compliance aid: CloudTrail facilitates compliance with internal policies and regulatory standards by providing a
history of API calls in your AWS account.

Automated remediation: Detect malicious activities such as data exfiltration by collecting activity data on S3
objects through object-level API events recorded in CloudTrail. After data is collected, use other AWS services,
such as Amazon EventBridge and AWS Lambda, to initiate response procedures.
CloudTrail configuration
You can configure two types of “trails”:
1. A trail that applies to one Region
2. A trail that applies to all Regions
• This is the default setting when you create a trail in the CloudTrail console.
• This is a best practice recommendation.


You can configure CloudTrail to deliver log files from multiple Regions to a single S3 bucket for a single account.
When you change an existing single-Region trail to log all Regions, CloudTrail logs events from all Regions in
your account. As long as CloudTrail has permissions to write to the target S3 bucket, the bucket for a multi-
Region trail does not have to be in the trail's home Region.

Logging events in a single Region is not recommended.


Best practice: Multi-Region configuration
{
"IncludeGlobalServiceEvents": true,
"Name": "my-trail",
"TrailARN": "arn:aws:cloudtrail:us-east-2:123456789012:trail/my-trail",
"LogFileValidationEnabled": false,
"IsMultiRegionTrail": true,
"IsOrganizationTrail": false,
"S3BucketName": "my-bucket"
}


Enabling Multi-Region on your CloudTrail configuration ensures that you get a complete record of events taken
by a user, role, or service in AWS accounts. You should ensure that you set up these trails in every AWS account
used by your company or organization. Multi-Region is a default configuration and a best practice because it
allows you to detect unexpected activity in otherwise unused Regions. Global service events (such as AWS
Identity and Access Management) are also included and logged. If you have a multi-account setup through AWS
Organizations, you can create a trail that logs all events for all AWS accounts in that organization. This
centralization is important for thorough and accurate monitoring.

To confirm that a trail applies to all Regions, the "IsMultiRegionTrail" element should show true in the trail's configuration (for example, in the output of the describe-trails command), or the setting should be enabled in the AWS Management Console, as shown in the images on the slide.
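A quick programmatic check of these settings can be sketched as follows. It assumes you already have the trail description as a Python dict (for example, parsed from `aws cloudtrail describe-trails` output); the sample values mirror the configuration shown above:

```python
def audit_trail(trail):
    """Return a list of warnings for trail settings that deviate from
    the best practices discussed in this module."""
    warnings = []
    if not trail.get("IsMultiRegionTrail"):
        warnings.append("trail is single-Region; enable multi-Region logging")
    if not trail.get("LogFileValidationEnabled"):
        warnings.append("log file integrity validation is disabled")
    return warnings

trail = {
    "IncludeGlobalServiceEvents": True,
    "Name": "my-trail",
    "IsMultiRegionTrail": True,
    "LogFileValidationEnabled": False,
    "S3BucketName": "my-bucket",
}
print(audit_trail(trail))
```

Note that the example trail above would be flagged for its disabled log file validation, a setting covered later in this section.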
AWS CloudTrail best practices

Centralizing multi-account CloudTrail logging
Many-to-one centralization
• Use AWS Organizations to centralize logging:
  • From multiple Regions into one S3 bucket (all-Regions/one-account)
  • From multiple accounts into one account's Amazon Simple Storage Service (Amazon S3) bucket
• AWS Control Tower centralizes logging for AWS Organizations by default.


Centralized CloudTrail logging is a generally recommended deployment to ensure the integrity of logs. This is
also the recommended deployment when an organization has a dedicated security team or managed service
provider that will be exclusively handling the logs.

In a multi-account environment using AWS Organizations, you can enable CloudTrail once in the management
account and have it applied to all AWS accounts.
• Log prefix changes from “/AWSLogs/<accountID>/” to “/AWSLogs/<OrganizationID>/”.
• There is no more updating of the S3 bucket policies.

One option for centralizing CloudTrail logging is by using AWS Control Tower. AWS Control Tower provides
enhanced governance and control when you are using AWS Organizations to manage multiple AWS accounts.
AWS Control Tower sets up a new trail when you set up a landing zone (which is a well-architected, multi-
account environment, based on best practices) and configures CloudTrail to enable centralized logging and
auditing. When you enroll a new account into AWS Control Tower, your account is governed by the AWS
CloudTrail trail for the AWS Control Tower organization. If you have an existing deployment of a CloudTrail trail
in that account, you may see duplicate charges unless you delete the existing trail for the account before you
enroll it in AWS Control Tower.

For more information about using AWS Control Tower, visit https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html

For more about the use of AWS Control Tower and other AWS solutions for achieving security governance at scale, explore our 1-day classroom training at https://aws.amazon.com/training/classroom/aws-security-governance-at-scale/
AWS CloudTrail with AWS Organizations
• Turn on CloudTrail for your organization.
• Update the bucket policy.
• Turn on CloudTrail for account 222222222222.
• Turn on CloudTrail for account 333333333333.

Centralized logging solution
without AWS Control Tower


The diagram here presents an architecture you can automatically deploy in about 30 minutes using an
implementation guide and accompanying CloudFormation templates (provided by AWS). This solution contains
log ingestion, log indexing, and visualization. The implementation guide and CloudFormation template are
provided free of charge; however, the customer is responsible for the cost of running and using various services
contained within the solution. See more about the estimated costs for this solution at
https://docs.aws.amazon.com/solutions/latest/centralized-logging/cost.html.

Solution details:
1. Log ingestion: Amazon CloudWatch Logs destinations deploy in the primary account and are created with
the required permissions in each of the selected Regions. CloudWatch Logs subscription filters can be
configured for log groups to be streamed to the Centralized Logging account.
2. Log indexing: A centralized Amazon Kinesis data stream and an Amazon Kinesis Data Firehose delivery stream are provisioned to index log events on the centralized Amazon OpenSearch Service domain. The CloudWatch Logs destinations created to stream log events have Kinesis Data Streams as their target. Once the log events
stream to Kinesis Data Streams, the service invokes an AWS Lambda function to transform each log event to
an Amazon OpenSearch Service document, which is then put into Kinesis Data Firehose. You can monitor
Kinesis Data Firehose while it sends custom CloudWatch Logs containing detailed monitoring data for each
delivery stream.
3. Visualization: Amazon OpenSearch Service and Kibana provide data visualization and exploration support.
An Amazon OpenSearch Service domain is created inside an Amazon VPC, preventing public access to the
Kibana dashboard. Optionally, a Microsoft Windows Jumpbox Server can be launched to access the Amazon
OpenSearch Service cluster and Kibana dashboard.

More resources can be found at https://aws.amazon.com/solutions/implementations/centralized-logging/.


Amazon S3 log storage
Best practice
• Use a dedicated S3 bucket for CloudTrail logs.
• Implement least-privilege access to buckets where you store log files.
• Enable multi-factor authentication (MFA) Delete on the log storage bucket.
• Limit access to the “AWSCloudTrail_FullAccess” policy.


The following are some best practices for the Amazon S3 bucket where you store logs from CloudTrail:

• You can configure CloudTrail to deliver log files from multiple AWS accounts to a single S3 bucket.
• A default descriptive folder structure makes it efficient to store log files from multiple accounts and
Regions in the same S3 bucket.
• A detailed log file name helps identify the contents of the log file.
• A unique identifier in the file name prevents overwriting log files.

• Implement least-privilege access to buckets where you store log files.
• Review the Amazon S3 bucket policy for any buckets where you store log files and adjust it if necessary. This bucket policy will be generated for you if you create a trail using the CloudTrail console, but it can also be created and managed manually.
• Be sure to manually add an aws:SourceArn condition key to the bucket policy. More information can be found at https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-s3-bucket-policy-for-cloudtrail.html

• Enable multi-factor authentication (MFA) Delete on the bucket where you store log files.
• Configuring multi-factor authentication (MFA) ensures that attempts to alter the versioning state of
your bucket or permanently delete an object version require additional authentication. This helps
prevent actions that could compromise the integrity of your log files, even if an IAM user with
permissions to delete Amazon S3 objects is compromised.

• Limit access to the “AWSCloudTrail_FullAccess” policy.


• Users with the “AWSCloudTrail_FullAccess” policy can disable or reconfigure the most sensitive and
important auditing functions in their AWS accounts. Limit application of this policy to as few
individuals as possible to maintain the principle of least privilege and protect the integrity of log files.
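To make the bucket-policy guidance concrete, here is a sketch of constructing the CloudTrail bucket policy with the aws:SourceArn condition key applied to both statements. The bucket name, account ID, and trail ARN are placeholders; compare the result against the policy shown in the CloudTrail documentation before using it:

```python
import json

def cloudtrail_bucket_policy(bucket, account_id, trail_arn):
    """Build an S3 bucket policy that lets CloudTrail write log files,
    restricted to a specific trail via the aws:SourceArn condition key."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringEquals": {"aws:SourceArn": trail_arn}},
            },
            {
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{account_id}/*",
                "Condition": {
                    "StringEquals": {
                        # CloudTrail must grant the bucket owner full control.
                        "s3:x-amz-acl": "bucket-owner-full-control",
                        "aws:SourceArn": trail_arn,
                    }
                },
            },
        ],
    }

policy = cloudtrail_bucket_policy(
    "my-bucket", "123456789012",
    "arn:aws:cloudtrail:us-east-2:123456789012:trail/my-trail")
print(json.dumps(policy, indent=2))
```

Pinning the policy to a specific trail ARN prevents a trail in another account from writing into (or reading the structure of) your log bucket, which supports the least-privilege goal above.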
CloudTrail: Lifecycle management
Best practices
• Configured through Amazon S3
• Available actions:
• Transition to different storage tier
• Expire (delete) object
• Transition and expire


Configure object lifecycle management for the bucket where you store log files: Define retention policies that meet your business and auditing needs. These may require consideration of legal or regulatory requirements to retain logs, and in some cases cost. For example, you might want to archive log files that are more than a year old to Amazon S3 Glacier. You can also delete log files after a certain amount of time has passed to save on costs for storing logs that are no longer needed.

Transition actions define when objects transition to another Amazon S3 storage class. For example, you might move a log object to the S3 Infrequent Access storage class 30 days after creation or archive objects to the Amazon S3 Glacier storage class 1 year after creation.

Expiration actions specify when the objects expire (are deleted on your behalf). Note: This option deletes all objects in the bucket that meet the criteria regardless of the file type. When an object has been expired, it cannot be recovered.

Expiration actions specify when the objects expire (are deleted on your behalf). Note: This option
deletes all objects in the bucket that meet the criteria regardless of the file type. When an object has been
expired, it cannot be recovered.
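The transition-and-expire pattern described above can be expressed as an S3 lifecycle configuration. The sketch below builds the configuration as a Python dict; the prefix and day counts are example values, and you would apply it to the bucket (for example, with the S3 API's put-bucket-lifecycle-configuration operation) after adjusting it to your retention requirements:

```python
# Example S3 lifecycle configuration for a CloudTrail log bucket:
# archive to Glacier after 1 year, delete after roughly 7 years.
RETENTION_DAYS = 2555          # example retention period (about 7 years)
ARCHIVE_AFTER_DAYS = 365       # example archive threshold (1 year)

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-cloudtrail-logs",
            "Status": "Enabled",
            # Only apply to CloudTrail's default delivery prefix.
            "Filter": {"Prefix": "AWSLogs/"},
            "Transitions": [
                {"Days": ARCHIVE_AFTER_DAYS, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": RETENTION_DAYS},
        }
    ]
}
```

Remember the note above: expiration deletes every matching object regardless of file type, and expired objects cannot be recovered, so scope the filter prefix carefully.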
CloudTrail confidentiality: AWS KMS encryption
Best practice
• Create or use an existing AWS Key Management Service (AWS KMS) key and apply a key policy that allows CloudTrail to encrypt and SecOps engineers to decrypt.


Use server-side encryption with AWS Key Management Service (AWS KMS) managed keys: You can encrypt CloudTrail logs through AWS KMS. By default, the files are encrypted using S3 server-side encryption (SSE-S3) and then transparently decrypted when you read them. Optionally, you can specify a KMS key (SSE-KMS), and it will be used to encrypt your log files. See the diagram for an example of this process.

1. Create an AWS KMS key: Create or use an existing AWS KMS key and apply key policy to allow CloudTrail to
encrypt and the SecOps engineers to decrypt the logs.
2. Specify the AWS KMS key: Specify the key to CloudTrail.
3. Retrieve the object: Use the S3 GetObject API call to retrieve the desired log file.
4. Decrypt the log files: The SecOps engineer uses the key to decrypt the log files.
Enable log integrity validation
Best practice
Once you turn on log file integrity validation, CloudTrail will start delivering digest files on an hourly basis to the same S3 bucket where you receive your CloudTrail log files, but with a different prefix.

• CloudTrail log files are delivered to:
/optional_prefix/AWSLogs/AccountID/CloudTrail/*

• CloudTrail digest files are delivered to:
/optional_prefix/AWSLogs/AccountID/CloudTrail-Digest/*


Enable CloudTrail log file integrity: To determine whether a log file was modified, deleted, or unchanged after
CloudTrail delivered it, you can use CloudTrail log file integrity validation. Validated log files are invaluable in
security and forensic investigations.

This feature is built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital
signing. You can use the AWS Command Line Interface or AWS CLI to validate the files in the location where
CloudTrail delivered them.
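In practice you validate files with the AWS CLI (`aws cloudtrail validate-logs`), but the core idea — hashing each delivered log file with SHA-256 and comparing it against the value recorded in the digest file — can be sketched as follows. This is a simplification for illustration: the real digest file format, hash encoding, and RSA signature verification steps are more involved, so treat this as a concept demo, not a replacement for the CLI:

```python
import base64
import gzip
import hashlib

def sha256_b64(data: bytes) -> str:
    """SHA-256 hash of raw bytes, base64-encoded.
    (Encoding details differ in real digest files; simplified here.)"""
    return base64.b64encode(hashlib.sha256(data).digest()).decode()

def log_file_matches_digest(log_bytes: bytes, expected_hash: str) -> bool:
    """Check a delivered (gzipped) log file against a recorded hash.
    Any modification or truncation of the file changes the hash."""
    return sha256_b64(log_bytes) == expected_hash

# Illustration with stand-in content:
payload = gzip.compress(b'{"Records": []}')
recorded = sha256_b64(payload)
print(log_file_matches_digest(payload, recorded))          # unmodified file
print(log_file_matches_digest(payload + b"x", recorded))   # tampered file
```

The value of the scheme is exactly this property: any post-delivery modification, however small, produces a hash mismatch, and the digest files themselves are signed so tampering with them is also detectable.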
Integrate with CloudWatch Logs
Best practices
• Monitor and alert on specific events.
• Simple searching is provided.
• Use AWS Config to ensure CloudTrail is sending
events to CloudWatch Logs.


Integrate with Amazon CloudWatch Logs: CloudWatch Logs allows you to monitor and receive alerts for specific
events captured by CloudTrail. For example, you can monitor key security and network-related management
events, such as failed AWS Management Console sign-in events. You can also configure AWS Config to provide ongoing detection to help ensure that all trails are sending events to CloudWatch Logs using the "cloud-trail-cloud-watch-logs-enabled" rule.
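As an example of the kind of event you would match, here is the failed-console-sign-in check re-implemented in plain Python for illustration (a real deployment would express this as a CloudWatch Logs metric filter pattern instead; the field names follow the CloudTrail ConsoleLogin event format):

```python
def is_failed_console_login(event: dict) -> bool:
    """Mimic a metric filter pattern along the lines of:
    { $.eventName = "ConsoleLogin" && $.responseElements.ConsoleLogin = "Failure" }"""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"
    )

# Simplified sample CloudTrail events:
events = [
    {"eventName": "ConsoleLogin", "responseElements": {"ConsoleLogin": "Success"}},
    {"eventName": "ConsoleLogin", "responseElements": {"ConsoleLogin": "Failure"}},
    {"eventName": "DescribeInstances"},
]
failed = [e for e in events if is_failed_console_login(e)]
print(len(failed))  # 1
```

In CloudWatch, the matching filter would increment a metric, and an alarm on that metric would notify you when failed sign-ins cross your chosen threshold.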
Visibility with Amazon
CloudWatch
Section 3 of 5


In order to detect potential security incidents within your environment, you must be able to comprehensively
monitor your environment. Amazon CloudWatch provides functions to allow for monitoring and alerting. Before
we explore this service, let’s look at some of the things that you can gain visibility over with Amazon
CloudWatch.
Indicators of compromise
• Abnormal CPU utilization
• Significant or sudden increases in database reads
• HTML response sizes
• Mismatched port-application traffic
• Unusual DNS requests
• Unusual outbound network traffic
• Anomalies in privileged user account activity
• Geographical irregularities (source of traffic)
• Unusually high traffic at irregular hours
• Multiple, repeated, or irregular login attempts


Indicators of Compromise or IoC are largely similar in cloud environments to how they are in traditional IT
environments. Logging and alerting on anomalies is helpful in recognizing potential malware, malicious
activities, or other indicators of a compromised system. Some of the types of anomalies that may be recognized
by the use of CloudWatch include the examples noted on the slide.
CloudWatch Alarms best practices
These are just a few examples of areas that should be monitored with
CloudWatch Alarms:
• AWS Management Console sign-in requests without MFA
• IAM policy configuration changes
• Root account usage
• Authorization failures; unauthorized API calls made within your AWS account
• AWS KMS key configuration changes
• AWS CloudTrail configuration changes
• Amazon EC2 instance and S3 changes
• Amazon VPC, route table, internet gateway, network ACL, or security group configuration changes

There are many simple alarms that you can implement to monitor your environment. Recommendations for
creating alarms are usually very specific to an organization's architecture and needs; however, they are the responsibility of the customer (remember the AWS Shared Responsibility Model). CloudWatch alarms come in
two types, which can help you to customize what you are monitoring and ensure that even complex situations
composed of corresponding events or metrics can be captured. Next, you will look at the differences between
metric alarms and composite alarms.
Metric alarms
A metric alarm has the following possible states:
• OK – The metric or expression is within the defined threshold.
• ALARM – The metric or expression is outside the defined threshold.
• INSUFFICIENT_DATA – The alarm has just started, the metric is not available, or
not enough data is available for the metric to determine the alarm state.


Metric alarms watch a single CloudWatch metric or the result of a math expression based on CloudWatch
metrics. The alarm performs one or more actions based on the result, such as sending a notification to an
Amazon Simple Notification Service or SNS topic, performing an Amazon EC2 action or an Amazon EC2 Auto
Scaling action, or creating an OpsItem or incident in AWS Systems Manager.
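The three alarm states can be illustrated with a toy evaluator. This is a deliberate simplification of real CloudWatch evaluation, which also considers evaluation periods, M-out-of-N datapoints-to-alarm, and missing-data treatment; here the comparison is simply "greater than threshold" over the most recent datapoints:

```python
def alarm_state(datapoints, threshold, datapoints_to_alarm=3):
    """Toy static-threshold alarm evaluation.
    Returns one of the three metric alarm states."""
    if len(datapoints) < datapoints_to_alarm:
        # Not enough data yet to determine a state.
        return "INSUFFICIENT_DATA"
    recent = datapoints[-datapoints_to_alarm:]
    breaching = sum(1 for v in recent if v > threshold)
    # All recent datapoints must breach before we go into ALARM.
    return "ALARM" if breaching == datapoints_to_alarm else "OK"

print(alarm_state([10, 20], threshold=80))           # INSUFFICIENT_DATA
print(alarm_state([10, 85, 90, 95], threshold=80))   # ALARM
print(alarm_state([90, 95, 20], threshold=80))       # OK
```

Requiring several consecutive breaching datapoints, as sketched here, is a common way to avoid alarming on a single transient spike.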
Composite alarms
Alarms can be combined and grouped.
• They are hierarchical.
• They use Boolean logic AND, OR, and NOT.
• They can help to alleviate or avoid alarm fatigue by reducing noise.


A single event in a complex environment can generate multiple alarms. A continuous large volume of alarms can
overwhelm you or mislead the triage and investigation process. If this happens, you can end up dealing with
alarm fatigue or wasting time reviewing false positives (a false positive is an alert that incorrectly indicates that
malicious activity is occurring).

With composite alarms, you can combine multiple alarms into alarm hierarchies. This reduces alarm noise by
initiating just once when multiple alarms are initiated at the same time. You can use this to provide an overall
state for a grouping of resources such as an application, AWS Region, or Availability Zone. You can also add logic
and group alarms into a single high-level alarm, initiated when the underlying conditions are met. This means
you can introduce intelligent decisions and minimize false positives. Composite alarms are created using one or
more alarm states combined with Boolean operators AND, OR, and NOT and constants TRUE and FALSE. A
composite alarm is initiated when its expression evaluates to be TRUE.

Note: Currently, composite alarms only support an action of notifying Amazon SNS topics.
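A composite alarm's rule is a Boolean expression over the states of its child alarms. The toy evaluator below shows the idea with a hypothetical rule that suppresses a CPU alarm during deployments; the alarm names and the simple string-rewriting approach are illustrative only (a real implementation would tokenize the rule rather than use text replacement):

```python
def evaluate_composite(states, rule):
    """Evaluate a composite-alarm-style rule such as
    "ALARM('cpu-high') AND NOT ALARM('deploy-in-progress')".
    `states` maps alarm name -> current state string."""
    def ALARM(name):
        return states[name] == "ALARM"
    # Rewrite the Boolean keywords into Python operators. Toy approach:
    # breaks if an alarm name contains AND/OR/NOT as a substring.
    expr = rule.replace("AND", "and").replace("OR", "or").replace("NOT", "not")
    return eval(expr, {"ALARM": ALARM, "__builtins__": {}})

states = {"cpu-high": "ALARM", "deploy-in-progress": "OK"}
rule = "ALARM('cpu-high') AND NOT ALARM('deploy-in-progress')"
print(evaluate_composite(states, rule))  # True
```

The NOT clause is what reduces noise here: while a deployment alarm is firing, the composite stays quiet even though the underlying CPU alarm is in ALARM.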
Using CloudWatch anomaly detection
• The expected range of values is shown as a wide gray band.
• Actual values outside this band are shown as red (the points extending above the wide band).
• Anomaly detection algorithms account for the seasonality and trend changes of metrics.

When you turn on anomaly detection for a metric, CloudWatch applies statistical and machine learning
algorithms. These algorithms continuously analyze metrics of systems and applications, determine normal
baselines, and surface anomalies with minimal user intervention.

The algorithms generate an anomaly detection model. The model generates a range of expected values that
represent normal behavior. With this feature, you can create anomaly detection alarms based on a metric's
expected value. This type of metric alarm doesn't have a static threshold. Instead, the alarm compares the
metric's value to the expected value based on the anomaly detection model. You can initiate an alarm when a
metric value is above or below the band of expected values.
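CloudWatch's anomaly detection model itself is proprietary, but the idea of a band of expected values can be sketched with a simple mean and standard deviation over recent history. This illustrates the concept only; the band width of two standard deviations is an arbitrary choice for the sketch, not the CloudWatch algorithm:

```python
import statistics

def expected_band(history, width=2):
    """Return a (low, high) band of expected values from recent history."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return (mean - width * stdev, mean + width * stdev)

def is_anomalous(value, history, width=2):
    """Flag values falling outside the expected band."""
    low, high = expected_band(history, width)
    return value < low or value > high

# Request latencies (ms) hovering around 100; 180 falls outside the band.
history = [98, 102, 100, 99, 101, 100, 97, 103]
print(is_anomalous(101, history))  # within the band
print(is_anomalous(180, history))  # outside the band -> anomaly
```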
Alerting: Notifications of API activity


You can specify what actions an alarm takes when it changes state between the OK, ALARM, and
INSUFFICIENT_DATA states. These actions can include one or more of the following:
• Notify one or more people by sending a message to an Amazon SNS topic.
• Perform EC2 actions (for alarms based on EC2 metrics).
• Perform actions to scale an Auto Scaling group.
• Initiate a Lambda function.
• Create OpsItems in AWS Systems Manager OpsCenter or create incidents in AWS Systems Manager Incident
Manager (performed only when the alarm goes into the ALARM state).
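As a sketch of how the SNS notification action is wired to an alarm, the following dict mirrors the shape of a CloudWatch PutMetricAlarm request. The alarm name, topic ARN, account ID, and threshold values are illustrative placeholders:

```python
# Hypothetical alarm definition notifying an SNS topic on state change.
# All names, ARNs, and thresholds below are illustrative placeholders.
alarm = {
    "AlarmName": "high-cpu",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,                  # evaluate in 5-minute windows
    "EvaluationPeriods": 2,         # two consecutive breaches before ALARM
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:security-alerts"],
    "OKActions": ["arn:aws:sns:us-east-1:111122223333:security-alerts"],
}

# With boto3, this dict could be passed as:
#   boto3.client("cloudwatch").put_metric_alarm(**alarm)
print(alarm["AlarmActions"][0])
```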
Enhancing monitoring and alerting
Section 4 of 5


Monitoring of your resources can be greatly enhanced by integrating other AWS services. This section will look
at Amazon GuardDuty and AWS Security Hub, and how they can provide even more insight into your
environments’ security.
Detect with: Amazon GuardDuty
• One-click activation without architectural or performance impact
• Continuous monitoring of AWS accounts and resources
• Instant On provides findings in minutes
• No agents, no sensors, no network appliances
• Global coverage, regional results
• Built-in anomaly detection with machine learning
• Partner integrations for additional protections

Logs are also a useful source of information for automated threat detection. GuardDuty is a managed,
continuous security monitoring service that analyzes and processes events from several sources, such as VPC
Flow Logs, CloudTrail management event logs, CloudTrail Amazon S3 data event logs, and DNS logs. It uses
threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify
unexpected and potentially unauthorized and malicious activity within your AWS environment. GuardDuty is
a passive service; however, it can be used in a multi-service workflow to initiate remediation through Lambda
or other AWS services and features.
GuardDuty data sources

Flow Logs:
• Flow logs for VPCs do not need to be turned on to generate findings.
• Data is consumed through an independent, duplicate stream.
• Turn on VPC Flow Logs to augment data analysis (charges apply).

DNS events:
• DNS logs are based on queries made from EC2 instances to known questionable domains.
• DNS logs are in addition to Route 53 query logs.
• Amazon Route 53 is not required for GuardDuty to generate DNS-based findings.

CloudTrail events:
• CloudTrail history of AWS API calls used to access the console, SDKs, AWS Command Line Interface (AWS CLI),
and so on, parsed by GuardDuty.
• Identification of user and account activity, including the source IP address used to make the calls.

GuardDuty: Findings


When GuardDuty detects suspicious or unexpected behavior, it generates a finding. A finding is a notification
that contains the details about a potential security issue that GuardDuty discovers. One very useful piece of
information in the finding details is a finding type. The purpose of the finding type is to provide a concise yet
readable description of the potential security issue.

For example, the GuardDuty UnauthorizedAccess:EC2/SSHBruteForce finding type quickly informs you that
somewhere in your AWS environment, an EC2 instance has been targeted by an attacker trying to gain access.
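Finding types follow a structured naming convention (ThreatPurpose:ResourceTypeAffected/ThreatFamilyName, sometimes with .DetectionMechanism!Artifact suffixes), so they are straightforward to split programmatically. A minimal parsing sketch:

```python
def parse_finding_type(finding_type):
    """Split a GuardDuty finding type into its named parts.

    Format: ThreatPurpose:ResourceTypeAffected/ThreatFamilyName
    (some finding types append .DetectionMechanism!Artifact suffixes,
    which this minimal sketch leaves attached to the family name).
    """
    threat_purpose, rest = finding_type.split(":", 1)
    resource_type, family = rest.split("/", 1)
    return {
        "threat_purpose": threat_purpose,
        "resource_type": resource_type,
        "threat_family": family,
    }

print(parse_finding_type("UnauthorizedAccess:EC2/SSHBruteForce"))
```

This kind of parsing is useful when routing findings, for example sending all UnauthorizedAccess findings to a higher-priority queue.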
Manage and remediate with: AWS Security Hub
• Managed AWS service
• Consolidates and aggregates findings
• Provides controls for the following standards:
  • Center for Internet Security (CIS) AWS Foundations
  • Payment Card Industry Data Security Standard (PCI DSS)
  • AWS Foundational Security Best Practices
• Integrates with ticketing, chat, incident management, investigation, GRC, SOAR, and SIEM tools


Security Hub is a fully managed AWS service that is turned on within a Region and aggregates findings
across all of your accounts within minutes. With Security Hub, you can centrally manage security and
compliance findings in one location, reducing the time spent wrangling data from different locations within the
AWS Management Console.

Security Hub provides automated security checks for the following standards:
• Center for Internet Security (CIS) AWS Foundations
• Payment Card Industry Data Security Standard (PCI DSS)
• AWS Foundational Security Best Practices

Security Hub has integrations with various ticketing, chat, incident management, threat investigation,
governance, risk, and compliance (GRC), security orchestration, automation, and response (SOAR), and
security information and event management (SIEM) tools that can automatically receive findings from Security
Hub. In addition to the default insights that are provided by AWS and AWS Partners, you can also create your
own insights to track issues that are unique to your environment. This benefit provides a level
of customization that comes in handy when dealing with company security requirements and regulations.
Remediation with Security Hub

Manual remediation:
• This is best for anything that has the potential to impact business objectives. This type of intervention is
slower, but notifications can help expedite response.
• This option should also be used to test newly created automatic remediations before they are put into a
production environment.

Automatic remediation:
• This is best when there is a low risk of a negative impact to the workloads in the account.
• For example, you would not use an automatic remediation that stops an EC2 instance responsible for a
business-critical function.

Security Hub integrates with EventBridge, helping you create custom response and remediation workflows.
Response and remediation actions can be fully automated, or they can be initiated manually in the console. You
can also use Systems Manager Automation documents, AWS Step Functions, and Lambda functions to build
automated remediation workflows that can be initiated from Security Hub.
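For example, the event pattern for an EventBridge rule that matches a Security Hub custom action could look like the following, shown here as a Python dict for readability; the Region, account ID, and action name are placeholders:

```python
import json

# Placeholder EventBridge event pattern matching one specific Security Hub
# custom action; the ARN below is illustrative, not a real resource.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action"],
    "resources": [
        "arn:aws:securityhub:us-east-1:111122223333:action/custom/RemediateCloudTrail"
    ],
}

# The JSON form is what you would paste into the EventBridge console or
# pass to the PutRule API as the EventPattern parameter.
print(json.dumps(event_pattern, indent=2))
```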

Even for low-impact workloads, automatic remediation should be thoroughly tested before being deployed
into a production environment. Iterating and evolving automatic remediation is key to ensuring these
activities do not impact production environments.
Auto remediation example


An example of a safe and good use for auto remediation is CloudTrail logging. It is a best practice to have
CloudTrail logging turned on. If it is turned off, whether accidentally or maliciously, an auto remediation task
could be set up to turn CloudTrail logging back on. With CloudTrail logging back on, it can automatically resolve
the finding in the Security Hub workflow status and send an Amazon SNS message to the security team to let
them know it was remediated.

1. Integrated services send their findings to Security Hub.
2. From the Security Hub console, you choose a custom action for a finding. Each custom action is then
emitted as a CloudWatch Event.
3. The CloudWatch Event rule initiates a Lambda function. This function is mapped to a custom action based
on the custom action's Amazon Resource Name (ARN).
4. Depending on the rule, the Lambda function that is invoked performs a remediation action on your behalf.

Read more about how to implement this auto remediation from the AWS security blog Automated Response
and Remediation with AWS Security Hub at
https://aws.amazon.com/blogs/security/automated-response-and-remediation-with-aws-security-hub/.
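As a sketch of the Lambda side of this workflow, the parsing half of such a function is shown below. The event shape follows the Security Hub custom-action format, the assumption that each finding lists the affected trail's ARN as a resource Id is specific to this illustration, and the actual cloudtrail:StartLogging call is left as a comment so the sketch stays self-contained:

```python
def trail_names_from_event(event):
    """Pull CloudTrail trail names out of a Security Hub custom-action event.

    Assumes (for this sketch) that each finding lists the affected trail as a
    resource whose Id is the trail ARN, for example
    arn:aws:cloudtrail:us-east-1:111122223333:trail/main.
    """
    names = []
    for finding in event.get("detail", {}).get("findings", []):
        for resource in finding.get("Resources", []):
            arn = resource.get("Id", "")
            if ":trail/" in arn:
                names.append(arn.split("/", 1)[1])
    return names

def handler(event, context):
    """Lambda entry point: re-enable logging on each affected trail."""
    for name in trail_names_from_event(event):
        # In the real function:
        #   boto3.client("cloudtrail").start_logging(Name=name)
        print(f"would re-enable logging on trail {name}")

sample = {"detail": {"findings": [{"Resources": [
    {"Id": "arn:aws:cloudtrail:us-east-1:111122223333:trail/main"}]}]}}
print(trail_names_from_event(sample))
```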
Auditing your AWS environment
Section 5 of 5


Once you have implemented controls, you must regularly audit your environment to accurately assess risk and
compliance with regulations and industry standards. In many regulated industries, you will be subject to audits
from external parties. In this section, you will explore AWS Audit Manager and how it is used to automate
evidence collection. This reduces the manual effort that often happens in preparation for audits. With Audit
Manager, it is easy to assess if your policies, procedures, and controls are operating effectively.
AWS Audit Manager
AWS Audit Manager provides an automated and continuous process for the following:
• Collects evidence of security controls
• Assesses whether controls are operating effectively
• Provides assessment reports to streamline audit preparation


AWS Audit Manager is an AWS managed service. Using this service, you can establish a framework of choice and
set up an automated and continuous process to review and collect data based on this framework. These
assessments help you to assess whether your controls are operating effectively. Because the process is
automated, it streamlines risk assessments and compliance with regulations and industry standards and helps
you maintain a continuous, audit-ready posture across your compute resources.

Remember: The evidence that is collected through Audit Manager might not include all of the information
about your AWS usage that is needed for an audit performed by an enforcing entity. Audit Manager is a valuable
resource, but it is NOT a substitute for legal counsel or compliance experts.
Choose a framework

• Numerous frameworks are available specific to industry, location-based regulatory guidance, and
international standards.
• Here, you can see the NIST Cybersecurity Framework is selected.


With Audit Manager, you can build custom frameworks, either from scratch to address your organizational
requirements, or based on existing frameworks with your necessary modifications. You can use one of the many
prebuilt frameworks available with the service if you don't require any modifications or additions to the
assessment.
Explore framework controls

• Controls are categorized as standard or custom.
• Data source is the service or artifact from which the evidence is derived.

Each framework has several controls assigned, including the following:
• Controls are categorized by type and data source.
• Data source is the service or artifact from which the evidence is derived.
Define audit scope

Select:
• Accounts
• Services
• Audit owners


Audit owners drive the audit preparation across your organization and have full permission to manage the
assessment they are assigned to. Define the audit scope by selecting the following:
• Accounts in scope
• Services in scope
• Audit owners
Gather evidence

• Evidence is automatically collected and stored in folders with a default name of the date it was collected.
• You can also manually upload evidence (this is required by some control types).

Evidence summary
• The summary section provides a high-level overview of the items in the evidence
folder.


The summary section provides a high-level overview of the items in the evidence folder. This includes the
following:
• The date that the folder was created or the evidence was collected.
• The name of the control associated with the evidence folder
• The number of evidence items that were manually selected for inclusion in the assessment report
• The total number of evidence items in the evidence folder
• The total number of AWS resources that were assessed when generating the evidence in this folder
• The number of evidence items that fall under the user activity category; this evidence is collected from AWS
CloudTrail logs
• The number of evidence items that fall under the configuration data category; this evidence is collected
from configuration snapshots of other AWS services such as Amazon EC2, Amazon S3, or IAM
• The number of evidence items that fall under the manual category; this evidence is uploaded manually
• The number of evidence items that fall under the compliance check category; this evidence is collected from
AWS Config or AWS Security Hub
• The total number of issues that were reported directly from AWS Security Hub, AWS Config, or both
Compile a report
• After you select the evidence to
include in your assessment report,
you can generate the final
assessment report to share with
auditors.
• When you generate an assessment
report, it is placed into the S3 bucket
that you designated as your
assessment report destination.


For more information about generating a report, see the Audit Manager User Guide at
https://docs.aws.amazon.com/audit-manager/latest/userguide/generate-assessment-report.html.

The controls offered by Audit Manager through the prebuilt frameworks do not guarantee that you will pass an
assessment associated with that framework. Instead, they help reduce effort and time in your assessment
preparation and review. In addition to Audit Manager, AWS Artifact should be used to help gather supplemental
evidence to assist in the assessment preparation and review.
Module 4: Monitoring and Alerting

Remember…
• Use service and application logging.
  • AWS CloudTrail
  • VPC Flow Logs
• Automate response to events as much as possible.
• Some key services and features include the following:
  • CloudWatch Alarms
  • Amazon GuardDuty
  • Security Hub
  • AWS Audit Manager

Let's check our knowledge with a few questions.

Question 1
Which services can VPC flow log records be published to? (Select TWO)

A. Amazon S3
B. Amazon RDS
C. Amazon DynamoDB
D. Amazon CloudWatch Logs
E. AWS CloudTrail

Answer 1
Which services can VPC flow log records be published to? (Select TWO)

A. (Correct) Amazon S3
B. Amazon RDS
C. Amazon DynamoDB
D. (Correct) Amazon CloudWatch Logs
E. AWS CloudTrail


A. (Correct) The two destinations that VPC flow logs can be published to are Amazon S3 and Amazon
CloudWatch Logs.
B. (Incorrect) VPC flow logs are not published to Amazon RDS.
C. (Incorrect) VPC flow logs are not published to Amazon DynamoDB.
D. (Correct) The two destinations that VPC flow logs can be published to are Amazon S3 and Amazon
CloudWatch Logs.
E. (Incorrect) VPC flow logs are not published to AWS CloudTrail.
Question 2
AWS CloudTrail log file integrity validation is invaluable in security
and forensic investigations. Which industry-standard algorithm is used
for validation hashing?

A. MD5
B. SHA-256
C. AES-256
D. DES

Answer 2
AWS CloudTrail log file integrity validation is invaluable in security and forensic
investigations. Which industry-standard algorithm is used for validation hashing?

A. MD5
B. (Correct) SHA-256
C. AES-256
D. DES


A. (Incorrect) MD5 is a deprecated hashing algorithm.
B. (Correct) SHA-256 is the industry-standard algorithm used for validation hashing.
C. (Incorrect) AES-256 is a symmetric encryption algorithm, not a hashing algorithm.
D. (Incorrect) DES is a deprecated symmetric encryption algorithm, not a hashing algorithm.
Lab 3: Security Monitoring

By the end of this lab, you will be able to do the following:
• Configure an Amazon Linux 2 instance to send log files to Amazon CloudWatch
• Create Amazon CloudWatch alarms and notifications to monitor for failed login attempts
• Create Amazon CloudWatch alarms to monitor network traffic through a Network Address Translation (NAT)
gateway

Lab duration: 45 minutes


Overview
As a security engineer at AnyCompany, you are responsible for monitoring the company network and Amazon
Elastic Compute Cloud (Amazon EC2) instances for abnormal activity.

In this lab, you configure an Amazon Linux 2 instance to send log files to Amazon CloudWatch. You then create
Amazon CloudWatch alarms and notifications to alert you to a specified number of login failures on your EC2
instances. Finally, you create a CloudWatch alarm and notification to monitor outgoing traffic through a NAT
gateway.

Objectives
By the end of this lab, you will be able to do the following:
• Configure an Amazon Linux 2 instance to send log files to Amazon CloudWatch
• Create Amazon CloudWatch alarms and notifications to monitor for failed login attempts
• Create Amazon CloudWatch alarms to monitor network traffic through a NAT gateway

Duration
This lab requires approximately 45 minutes to complete.
Lab Architecture


Environment overview
The diagram shows the basic architecture of the lab environment.

The following list details the major resources in the diagram:


• A VPC with one public subnet and two private subnets in one Availability Zone, and one public subnet in a
second Availability Zone.
• A Network Load Balancer with two nodes, one in each public subnet.
• An EC2 instance acting as a web server in the first private subnet.
• An EC2 instance acting as a database server in the second subnet.
• Two security groups, one for each instance based on its purpose.

The network traffic flows from an external user, through an internet gateway to one of the two Network Load
Balancer nodes, to the web server. If the URL of the WordPress blog site running on the web server is
requested, traffic flows to the database server as well.
Thank you
Corrections, feedback, or other questions? Contact us at
https://support.aws.amazon.com/#/contacts/aws-training.
All trademarks are the property of their owners.

