
SAP-AWS Certified Solutions Architect - Professional (CSAP)

Q1. You are managing a new team tasked with designing network infrastructures
for clients. You hold a training session to go over how to configure subnets. How
would you explain the rules of associating subnets with a specific network ACL?
(Choose 3 answers)

A. A subnet can be associated with only one network ACL.


B. Subnets not associated with any custom ACL will be associated with the
default network ACL.
C. All subnets associated with a network ACL will have the associated rules
applied.
D. Subnets can be associated with more than one network ACL.

Answer: A,B,C



Explanation: To apply the rules of a network ACL to a particular subnet, you must


associate the subnet with the network ACL. You can associate a network ACL


with multiple subnets; however, a subnet can be associated with only one


network ACL. Any subnet not associated with a particular ACL is associated with
the default network ACL by default.
Reference:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html#NetworkACL

Q47. Your team has found that a client's load balancer needs to be configured with
support for SSL offload using the default security policy. When negotiating the
SSL connections between the client and the load balancer, you want the load
balancer to determine which cipher is used for the SSL connection. Which
actions perform this process on the load balancer? (Choose 3 answers)

A. Select the default security policy.


B. Enable SSL offload.
C. Select a client configuration preference option.
D. Choose the server order preference option.

Answer: A,B,D

Explanation: Elastic Load Balancing uses a Secure Sockets Layer (SSL)


negotiation configuration, known as a security policy, to negotiate SSL
connections between a client and the load balancer. A security policy is a
combination of SSL protocols, SSL ciphers, and the Server Order Preference
option.
Reference:
http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html#config-backend-auth
Q3. You are looking for a simple way to configure fault tolerance for your EC2
instances. You need to create a plan for replacing unhealthy or failed instances. It
is acceptable to have a short amount of downtime in order to keep costs down.
Which process is appropriate for achieving this?

A. Create a custom AMI and use it to create an Auto Scaling Launch


Configuration; then create a "Steady State" AS policy using a min of 2 instances
and a max of 2 instances.
B. Create a custom AMI of your EC2 instance, and configure a CloudWatch alarm
based on StatusCheckFailed_Instance with an EC2 action of "Reboot this
instance."
C. Create a custom AMI of your EC2 instances, use the custom AMI to create a
new EC2 instance if there are issues with a current EC2 instance, and move the
EIP to the new instance.
D. Create a custom AMI of your EC2 instance, and configure a CloudWatch
alarm based on StatusCheckFailed_Instance with an EC2 action of "Recover this


instance."






Answer: C

Explanation: Creating a custom AMI of the instance for which you are trying to
provide HA allows you to bring the instance online quickly with no build time.
Moving the EIP from the instance you are replacing to the new instance will send
all traffic to the new instance without any change to DNS, which would take time
to propagate. Using the AutoRecover option will not replace the unhealthy or
failing instance. It will only try to restart it on another host. Creating a "Steady
State" Auto Scaling Group would also be a good solution, although using 2 as a
minimum would have a higher cost.
Reference:
http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf
Q4. You are planning to deploy storage gateway on-premises. What are the
minimum resources that have to be dedicated to the storage gateway VM?
(Choose 3 answers)
A. 80 GB of free disk space
B. 4 virtual processors
C. 100 GB of free disk space
D. 16 GB of RAM

Answer: A,B,D

Explanation: When deploying your gateway on-premises, you must make sure
that the underlying hardware on which you are deploying the gateway VM is able
to dedicate the following minimum resources:
- Four virtual processors assigned to the VM
- 16 GB of RAM assigned to the VM
- 80 GB of disk space for installation of the VM image and system data
Reference:
http://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html
Q5. You are developing a new application in which you need to transfer files over


long distances between client-side storage and an S3 bucket. You decide to try


sending data to the S3 bucket using S3 Transfer Acceleration. What must you do


to achieve this? (Choose 2 answers)



A. Use the CLI S3 accelerate upload commands.
B. Use the SDK S3 accelerate upload commands.
C. Turn on S3 Transfer Acceleration for the bucket.

D. Use the new accelerate endpoints to transfer your data to S3.

Answer: C,D

Explanation: After you turn on S3 Transfer Acceleration for a bucket, two new
endpoints are created for the bucket: one for IPv4 and one for IPv6. You can use
either the accelerate endpoints or the standard endpoints if you choose not to
use the accelerate feature.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
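As a concrete illustration, the accelerate endpoints follow a predictable naming pattern. The helper below is a minimal sketch (the bucket name is hypothetical) showing the standard endpoint next to the two accelerate endpoints the explanation mentions:

```python
def s3_endpoints(bucket):
    """Return the standard, accelerate, and dual-stack (IPv6-capable)
    endpoint hostnames for a bucket, per the S3 Transfer Acceleration docs."""
    return {
        "standard": f"{bucket}.s3.amazonaws.com",
        "accelerate": f"{bucket}.s3-accelerate.amazonaws.com",
        "accelerate_dualstack": f"{bucket}.s3-accelerate.dualstack.amazonaws.com",
    }

# Example with a hypothetical bucket name:
endpoints = s3_endpoints("example-bucket")
print(endpoints["accelerate"])  # example-bucket.s3-accelerate.amazonaws.com
```

Either accelerate endpoint can be used once the feature is turned on for the bucket; the standard endpoint remains valid as well.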
Q6. An instance is connected to an elastic network interface hosted on a subnet.
The elastic network interface of the instance is then changed to a different elastic
network interface hosted on a different subnet. What changes occur in regards to
the instance and the NACLs assigned at the subnet? (Choose 2 answers)
A. The instance follows the rules of the newer subnet.
B. The instance follows the rules of the original subnet.
C. The NACLs of the new subnet apply to the instance.
D. The instance follows both rules of both subnets.

Answer: A,C

Explanation: NACLs are applied at the subnet level, so the rules that govern an
instance are those of the subnet hosting its elastic network interface. Once the
instance is attached to a network interface in a different subnet, the NACLs
associated with that new subnet apply to it.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
Q7. The CTO of a customer has asked you to plan a move of 100s of TB of data
into AWS. You typically use Amazon Snowball for these types of requests. What
solution would provide the fastest transfer of data to Snowball?



A. Use a server with lots of memory, CPU, and networking capacity to run the
client software.
B. Use a client workstation with lots of memory, CPU, and networking capacity to
run the client software.
C. Use multiple workstations to run the client software.
D. Use a powerful EC2 instance type to run the client software.

Answer: C

Explanation: Uploading data to the Snowball appliance requires a client
application. The upload is CPU, memory, and networking intensive. If you are
uploading large amounts of data, Amazon recommends that you run the client
software on multiple workstations to distribute the load and thereby shorten the
time the upload will take.
Reference:
http://docs.aws.amazon.com/snowball/latest/ug/transfer-petabytes.html
Q8. Your team is setting up DynamoDB for a client. You need to explain to them
how DynamoDB tables are partitioned. Which calculations are used to determine
the number of partitions that will be created? (Choose 2 answers)

A. The total table size divided by 40 GB


B. The total RCU divided by 5000 + total WCU divided by 1000
C. The total RCU divided by 3000 + total WCU divided by 1000
D. The total table size divided by 10 GB

Answer: C,D
Explanation: DynamoDB tables are partitioned based on the following: First,
calculate total RCU/3000 + total WCU/1000. Then calculate total size/10 GB.
Then round up the higher of the two results.
Reference:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
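The calculation above can be sketched as a small helper. The numbers follow the formula in the explanation; in practice DynamoDB manages partitioning internally and this is only an estimate:

```python
import math

def estimate_partitions(rcu, wcu, table_size_gb):
    """Estimate the initial partition count: the higher of the
    throughput-based and size-based calculations, rounded up."""
    by_throughput = rcu / 3000 + wcu / 1000
    by_size = table_size_gb / 10
    return math.ceil(max(by_throughput, by_size))

# e.g. 6000 RCU, 2000 WCU, 25 GB:
# throughput: 6000/3000 + 2000/1000 = 4; size: 25/10 = 2.5 -> 4 partitions
print(estimate_partitions(6000, 2000, 25))  # 4
```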
Q9. You've carefully designed your auto-scaling groups and launch
configurations for application servers based on the recommended specifications
from the developers. The applications will be launched into separate regions. In
US East there are no issues when initializing the application cluster. US West
deployments generate error messages indicating the user request of an auto
scaling group has failed. How can you attempt to solve this problem? (Choose 2
answers)

A. Choose a different region in which to launch the application servers.


B. Update your auto-scaling group with a new launch configuration and new


instance type.


C. Ask the design team for different specifications for the application servers.


D. Create a new launch configuration following the recommendations listed in the


error message.

Answer: B,D

Explanation: Different regions will have different resources available at different
times. In almost all cases, updating your Auto Scaling group with a new
placement group or launch configuration is warranted.
Reference:
http://docs.aws.amazon.com/autoscaling/latest/userguide/CHAP_Troubleshooting.html
Q10. You are designing monitoring and operation management for your
environment on AWS and in the process of deciding which metrics to start with
for your monitoring. Which of the following metrics should be included in your
initial monitoring plan at minimum? (Choose 3 answers)

A. Disk Performance (Read and Write OPS)


B. CPU Utilization
C. Volume Queue Length
D. Memory Utilization

Answer: A,B,D

Explanation: To establish a baseline you should, at a minimum, monitor the


following items:
CPU Utilization, Memory Utilization, Network Utilization, Disk Performance, and
Disk Space.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring_ec2.html
Q11. You've successfully deployed a three-tier application at AWS. One of the
components includes a monitoring instance that monitors key components and
notifies CloudWatch when failures occur. The system works flawlessly; however,
you need to monitor the monitoring instance and be notified when it becomes
unhealthy. How can you quickly achieve monitoring of the monitoring instance?

A. Run an additional monitoring instance that pings the original monitoring


instance and alerts the operations team when failures occur.
B. Have the monitoring instance send messages to an SQS queue, and also
queue these messages on another, backup monitoring instance; when the queue
stops receiving new messages, failover to the backup monitor.
C. Define a CloudWatch alarm based on EC2 instance status checks; if status
checks fail, alert the operations team via email.


D. Create an auto-scaling group with a minimum and maximum of one instance;
set up CloudWatch alerts to scale the auto-scaling group.



Answer: C




Explanation: CloudWatch alarms are the easiest to set up for this example. You
can add the stop, terminate, reboot, or recover actions to any alarm that is set on
an Amazon EC2 per-instance metric, including basic and detailed monitoring
metrics provided by Amazon CloudWatch (in the AWS/EC2 namespace), as well
as any custom metrics that include the "InstanceId=" dimension, as long as the
InstanceId value refers to a valid running Amazon EC2 instance.
Reference:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
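As an illustration, an alarm of this kind can be defined with `aws cloudwatch put-metric-alarm`. The JSON below is a hedged sketch of the input for `--cli-input-json`; the alarm name, instance ID, and SNS topic ARN are placeholders:

```json
{
  "AlarmName": "monitoring-instance-status-check",
  "Namespace": "AWS/EC2",
  "MetricName": "StatusCheckFailed_Instance",
  "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
  "Statistic": "Maximum",
  "Period": 60,
  "EvaluationPeriods": 2,
  "Threshold": 1.0,
  "ComparisonOperator": "GreaterThanOrEqualToThreshold",
  "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:ops-team-email"]
}
```

The SNS topic in `AlarmActions` would have an email subscription for the operations team, matching answer C.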
Q12. While designing network security for your environment on AWS you are
considering the role of Network Access Control Lists (NACLs) and how they will affect
resources. In that context you have created a custom NACL that is intended for
private subnets in your VPC. Which services and resources below are restricted
by this NACL's rules? (Choose 2 answers)
A. Customer gateway attached through VPN connection
B. EC2 instances in any subnet (public or private) that has this NACL associated
with it
C. EC2 instances in private subnets even if the NACL is not applied on it
D. RDS instances created in private subnets with this NACL associated with it

Answer: B,D

Explanation: NACLs control traffic statelessly at the subnet level. Their rules
apply to all resources in the associated subnets, including EC2 and RDS
instances, so you must be careful not to make the rules too permissive.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html#ACLs
Q13. You are working on a plan to mitigate DDoS attacks. You want to make sure
that your front-line EC2 instances can handle the larger volumes of incoming


traffic that would be delivered during an attack. Which EC2 instances would best


provide this functionality?



A. EC2 instances with a very limited number of ports open
B. EC2 instances with multiple ENIs
C. EC2 instances with a higher ratio of CPU to memory
D. EC2 instances that support "Enhanced Networking"

Answer: D

Explanation: EC2 instances with Enhanced Networking provide 10 Gb/s
interfaces, which can handle a much higher volume of traffic into the interface.
You are not charged for inbound traffic. Having a higher CPU-to-memory ratio
would not allow a higher volume of network traffic. Additional ENIs do not
increase network throughput. Limiting the open ports would not help, as the
attack would be directed at only one open port.
Reference:
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
Q14. You are an engineer at a large bank, responsible for managing your firm's
AWS infrastructure. Your finance team has approached you, indicating their
concern over the growing EC2 budget. They have asked you to identify strategies
to reduce the EC2 spend by at least 25% before the next monthly billing cycle.
How could you accomplish this? (Choose 3 answers)

A. Migrate from hvm to pv instances.


B. Reduce or eliminate over-provisioned or unused instances.
C. Look for opportunities to use reserved or spot instances.
D. Consolidate AWS accounts for billing.

Answer: B,C,D
Explanation: You can reduce EC2 spend by migrating to reserved or spot
instances, eliminating or shrinking unused resources, or consolidating AWS
accounts (to qualify for volume discounts). The paying account can benefit from
volume pricing discounts gained through aggregate account usage.
Reference: https://aws.amazon.com/ec2/pricing/
Q15. Your team is developing an application using Elastic Beanstalk and
discussing the most appropriate environment for deployment. What two types of
environments can be created when using Elastic Beanstalk? (Choose 2 answers)

A. Load-balancing and auto-scaling environment


B. Multi-region, multiple-instance environment
C. Web worker environment
D. Single-instance environment

Answer: A,D



Explanation: In Elastic Beanstalk, you can create a load-balancing, autoscaling


environment or a single-instance environment. The type of environment that you


require depends on the application that you deploy. For example, you can


develop and test an application in a single-instance environment to save costs

and then upgrade that environment to a load-balancing, autoscaling environment
when the application is ready for production.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html

Q16. You have created an S3 bucket where project managers can upload their
projects' files. Project files change frequently, so retaining multiple copies of files
when changes occur is essential. Each project is also considered confidential, so
each project file must be encrypted at rest when stored in S3. How could you
meet these requirements for your bucket and contents?

A. Versioning should be enabled on each project file; then client-side or


server-side encryption can be utilized.
B. Delete all prior versions after a certain timestamp alert is met.
C. Versioning should be enabled on the bucket; then client-side or server-side
encryption can be utilized.
D. Server-side encryption should be enabled on the S3 bucket.

Answer: C

Explanation: Versioning provides redundancy as it keeps multiple variants of an


object in the same bucket. You can use versioning to preserve, retrieve, and
restore every version of every object stored in your Amazon S3 bucket. With
versioning, you can easily recover from both unintended user actions and
application failures.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html
Q17. One of your developers is creating an application that must upload large
files to an S3 bucket. You suggest that they use the multipart upload feature.
Which actions are required from the developer to complete a multipart upload?
(Choose 2 answers)

A. Create ordered ETag values to label each part.


B. Upload each part with an upload ID and a part number.
C. Construct the final object from the parts.
D. Send a request to initiate a multipart upload.

Answer: B,D


Explanation: When you send a request to initiate a multipart upload, Amazon S3


returns a response with an upload ID, which is a unique identifier for your


multipart upload. You must include this upload ID whenever you upload parts, list


the parts, complete an upload, or abort an upload. When uploading a part, in


addition to the upload ID, you must specify a part number that uniquely identifies

a part and its position in the object you are uploading. Amazon S3 returns an
使
ETag header in its response. For each part upload, you must record the part

number and the ETag value for use in each subsequent request.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
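The bookkeeping the explanation describes — recording each part number with its returned ETag and replaying them when completing the upload — can be sketched in plain Python. The S3 calls themselves are elided; `recorded` stands in for the values you would collect from each UploadPart response:

```python
def build_complete_request(recorded):
    """Given (part_number, etag) pairs recorded from each UploadPart
    response, build the payload for CompleteMultipartUpload.
    Parts must be listed in ascending part-number order."""
    parts = [
        {"PartNumber": number, "ETag": etag}
        for number, etag in sorted(recorded)
    ]
    return {"Parts": parts}

# Hypothetical ETags recorded while uploading three parts out of order:
recorded = [(2, '"etag-b"'), (1, '"etag-a"'), (3, '"etag-c"')]
payload = build_complete_request(recorded)
print([p["PartNumber"] for p in payload["Parts"]])  # [1, 2, 3]
```

This payload, together with the upload ID from the initiate request, is what the final CompleteMultipartUpload call needs; S3 then constructs the object from the parts (answer C is done by S3, not the developer).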

Q18. A member of your network operations center team needs to find out which
AWS API services the group has utilized over the last month. What is the best
way to access this information?

A. Run the Security Credentials script.
B. Enable flow logs to track traffic flow.
C. Use AWS Inspector.
D. Use CloudTrail logging.

Answer: D

Explanation: Authenticated requests to AWS service APIs are logged by


CloudTrail, and these log entries contain information about who generated the
request. The user identity information helps you determine whether the request
was made with IAM user credentials, with temporary security credentials for a
role or federated user, or by another AWS service.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
Q19. You've deployed your application and database servers at AWS. For the
first month performance was adequate. Now, due to increased customer demand,
you want to change the instance type for new instances that will run in your
application tier. In which area of auto scaling would you change the existing
instance type definition?

A. Auto scaling group


B. Auto scaling launch configuration
C. Auto scaling tags
D. Auto scaling policy

Answer: A

Explanation: The auto scaling group is where instance changes would be made.
The AWS::AutoScaling::LaunchConfiguration type creates an Auto Scaling
launch configuration that can be used by an Auto Scaling group to configure
Amazon EC2 instances in the Auto Scaling group.
Reference:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html


Q20. Jani has just joined your DevOps team. As part of her DevOps credentials


she now has access to list objects in a bucket but she doesn't have access to


download these objects according to bucket policy. You asked Jani to generate


pre-signed URLs to some objects in this bucket and share them with employees.

Will Jani be able to generate working pre-signed URLs to these objects? (Select
the most accurate answer.)

A. Jani can generate working pre-signed URLs only if the bucket policy allows a
generate-url action.
B. No, Jani cannot generate working pre-signed URLs because the bucket is
protected with a bucket policy.
C. Yes, Jani has access to list objects in this bucket, and this inherently grants
her access to generate pre-signed URLs.
D. No, Jani will not be able to generate working pre-signed URLs because she
doesn't have access to download the objects.

Answer: D

Explanation: All objects by default are private. Only the object owner has
permission to access these objects. However, the object owner can optionally
share objects with others by creating a pre-signed URL, using their own security
credentials, to grant time-limited permission to download the objects.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.h
tml
Q21. Your organization is migrating applications to AWS. Your new security policy
mandates that all user accounts be created and managed through IAM. Currently
your corporation is using Active Directory as its on-premises LDAP service.
Once applications go live at AWS, all users must utilize applications using
temporary access credentials, and all IAM users must have passwords that are
rotated on a set schedule. Which of the following actions will allow you to enforce
this security policy? (Choose 3 answers)

A. Create required IAM user accounts.


B. Create and enforce a single-use password policy option for all IAM users.
C. Enable multifactor authentication for all IAM users.
D. Deploy federation services that support the Security Token Service.

Answer: A,B,D

Explanation: STS is the "glue" that supports temporary access credentials for
federation. If you are running code, AWS CLI, or Tools for Windows PowerShell
commands inside an EC2 instance, you can take advantage of roles for Amazon
EC2. Otherwise, you can call an AWS STS API to get the temporary credentials,
and then use them explicitly to make calls to AWS services.


Reference:


http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html


Q22. Your developers have reported that when they launch their Elastic


Beanstalk environments, they are receiving command timeout errors. You review

the commands listed in their configuration file and make suggestions to remove
the timeout errors. What steps should you advise the developers to take?
(Choose 3 answers)

A. Open the Elastic Beanstalk console.
B. Choose the environment and select Configuration.
C. Open the command line interface (CLI) and run create_environment.
D. Change the command timeout to a higher value.

Answer: A,B,D

Explanation: Commands that execute in the configuration file need a higher
timeout to finish executing.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/events.common.commandtimeout.html
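In practice the timeout is raised through the `aws:elasticbeanstalk:command` namespace, either in the environment's Configuration page or in an `.ebextensions` config file. A minimal sketch — the file name and the 900-second value are illustrative:

```yaml
# .ebextensions/timeout.config — raise the deployment command timeout
option_settings:
  aws:elasticbeanstalk:command:
    Timeout: 900
```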
Q23. Your database is under an increased load over 50% of the time. To handle
the higher loads more effectively you've decided to vertically scale your database
instance. You've confirmed that your licensing can handle the increased scale;
next, you must determine when the changes will be applied. When can you apply
the changes? (Choose 2 answers)

A. When manually selecting a larger instance and rebooting


B. During the maintenance window for the database instances
C. When using CloudFront metrics
D. Immediately, when choosing a larger instance size

Answer: B,D

Explanation: When scaling an RDS instance, changes can be applied


immediately or during the maintenance window specified. It is also recommended
that, before you scale, you make sure you have the correct licensing in place for
commercial engines (SQL Server, Oracle), especially if you Bring Your Own
License (BYOL). One important thing to call out is that for commercial engines,
you are restricted by the license, which is usually tied to the CPU sockets or
cores.
Reference:
https://aws.amazon.com/blogs/database/scaling-your-amazon-rds-instance-vertically-and-horizontally/
Q24. You are an engineer at a large bank, responsible for managing your firm's
AWS infrastructure. Your finance team has approached you, indicating their


concern over the growing S3 budget. They have asked you to identify strategies


to reduce the S3 spend by at least 25% before the next monthly billing cycle.


How could you accomplish this? (Choose 2 answers)



A. Utilize Infrequent Access storage for objects that are not requested frequently.
B. Use Snowball for large data objects to avoid data transfer rates.
C. Implement a lifecycle policy to archive older objects to Glacier.
D. Enable object compression to reduce object sizes.

Answer: A,C

Explanation: You can use Infrequent Access storage to reduce the cost of
certain classes of objects, as well as send older objects to Glacier for archiving
at a much lower cost. Note that data transfer into S3 is always free, so while
Snowball might speed up moving large data sets, it will not reduce your cost.
Reference:
https://aws.amazon.com/s3/pricing/
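A lifecycle configuration implementing both strategies might look like the following sketch; the rule ID, prefix, and day counts are illustrative:

```json
{
  "Rules": [
    {
      "ID": "tier-then-archive",
      "Filter": {"Prefix": "logs/"},
      "Status": "Enabled",
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"}
      ]
    }
  ]
}
```

Applied to the bucket (for example via `aws s3api put-bucket-lifecycle-configuration`), objects under the prefix move to Infrequent Access after 30 days and to Glacier after 90.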
Q25. You've deployed an application in a custom AMI image into the Amazon
cloud. It is deployed in a separate VPC. You would like to take advantage of
being able to failover to another instance without having to reconfigure the
application. Which of these solutions could be utilized? (Choose 2 answers)

A. Add a secondary private IP address to the primary network interface that could
then be used to move to a failover instance.
B. Utilize CloudWatch health checks for failover.
C. Use load balancing to balance traffic to additional instances.
D. Use an additional elastic network interface for failover to another instance.

Answer: A,C
Explanation: The ENI can only be attached to an instance hosted in a VPC.
When you move a network interface from one instance to another, network traffic
is redirected to the new instance. Some network and security appliances, such
as load balancers, network address translation (NAT) servers, and proxy servers
prefer to be configured with multiple network interfaces. You can create and
attach secondary network interfaces to instances in a VPC that are running these
types of applications and configure the additional interfaces with their own public
and private IP addresses, security groups, and source/destination checking.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
Q26. Your organization is thinking of running parts of their workload on AWS
while keeping the critical and sensitive servers on-premises with continuous
connectivity between instances in AWS and corporate data center. In such a
hybrid cloud environment what are the minimum requirements to maintain a
resilient continuous connectivity between AWS and corporate data center?


(Choose one)

A. A primary and a backup Direct Connect line and at least two routers in the
corporate data center
B. A primary Direct Connect line connected to at least two routers in the
corporate data center
C. A primary and a backup Direct Connect line connected to the primary router in
the corporate data center
D. A primary Direct Connect line and a VPN connection for backup connected to
the corporate primary router

Answer: A

Explanation: In a hybrid cloud environment where connectivity between AWS
and the corporate data center is critical, redundant active/active or active/passive
configurations should be implemented for connectivity resources. Redundancy
has to be on both sides (AWS and on-premises), so the minimum requirement
for this architecture is two Direct Connect lines and two customer devices
(ideally in two different data centers).
Reference:
https://aws.amazon.com/answers/networking/aws-multiple-data-center-ha-network-connectivity/
Q27. You are setting up DynamoDB for a client and want to understand which
key value pairs are suitable for the partition key. Which choices would be
appropriate to ensure access uniformity? (Choose 2 answers)

A. UserID, where an application has many users


B. Status code, where there are only a few possible status codes
C. Item Creation date rounded to the nearest time period (e.g., hour, day, minute)
D. Device ID, where each device accesses data at a relatively similar interval
Answer: A,D

Explanation: If a single table has only a very small number of partition key values,
consider distributing your write operations across more distinct partition key
values such as a user ID in an application with many users or a device ID where
access is spread relatively uniformly across devices. In other words, structure
the primary key elements to avoid one "hot" (heavily requested) partition key
value that slows overall performance.
Reference:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
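When you are stuck with a low-cardinality key (such as a status code), one common workaround is write sharding: append a calculated suffix so writes spread across several partition key values. A minimal sketch, where the shard count of 10 and the key format are arbitrary choices:

```python
import hashlib

NUM_SHARDS = 10  # arbitrary; more shards spread writes further

def sharded_partition_key(base_key, item_id):
    """Derive a suffix from a stable attribute so the same item always
    maps to the same shard, while distinct items spread across shards."""
    shard = int(hashlib.md5(item_id.encode()).hexdigest(), 16) % NUM_SHARDS
    return f"{base_key}#{shard}"

# Items sharing the same status value land on different partition keys:
print(sharded_partition_key("PENDING", "order-1001"))
print(sharded_partition_key("PENDING", "order-1002"))
```

Reads for a given status then query all ten shards and merge the results, trading some read complexity for avoiding a single hot partition key.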
Q28. You are a DBA setting up a fault-tolerant AWS RDS. You need to recognize
the events that will cause a failover. Which situation will cause a failover to the
standby? (Choose 2 answers)


A. A web server health check has failed.


B. The operating system of the DB instance is undergoing software patching.


C. The secondary DB instance fails.


D. An AZ outage occurs.


Answer: B,D

Explanation: A DB failover will be triggered when an Availability Zone outage
occurs, the primary DB instance fails, the DB instance's server type is changed,
the operating system of the DB instance is undergoing software patching, or a
manual failover of the DB instance is initiated using Reboot with failover.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Q29. You've successfully launched an RDS DB across multiple availability zones.
In order to be efficient, you've decided to schedule a maintenance window on
Sunday at 2 AM for the scaling of instance storage. What system task performed
by AWS may be carried out during the maintenance window timeframe?

A. Database backups
B. Security patching
C. Enabling additional read replicas
D. Adding availability zones

Answer: B

Explanation: Patching RDS is part of the shared security model at AWS. If a


maintenance event is scheduled for a given week, it is initiated during the
30-minute maintenance window you identify.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html
Q30. You have begun your migration into Amazon Web Services using the AWS
database migration service. You are pleased that there are no errors; however,
the migration tasks are running slowly. You review the resources that have been
assigned to the AWS DMS replication instance, and they seem to be adequate.
What other task could you perform to help speed up the initial migration tasks?

A. Increase the IOPS on the replication instance.


B. Reduce the frequency of automatic backups.
C. Turn off all logging on the target database.
D. Enable multi-availability zones on the target database instance.

Answer: C

Explanation: Turning off automatic backups or logging on the target database


during the migration will help increase the speed of the initial migration load.


Reference:


http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting.html


Q31. After reviewing the reports from AWS Trusted Advisor, your company has
decided to enable multi-factor authentication for IAM users and the root account.
Which of the following MFA options can be utilized for both account types?
(Choose 2 answers)

A. Security token-based MFA device
B. AWS Key Management Service
C. SNS Notification Service
D. SMS text messages

Answer: A,D

Explanation: For increased security, AWS recommends that you configure
multi-factor authentication (MFA) to help protect your AWS resources. MFA adds
extra security because it requires users to enter a unique authentication code
from an approved authentication device or SMS text message when they access
AWS websites or services. This type of MFA requires you to assign an MFA
device (hardware or virtual) to the IAM user or the AWS root account.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html
Q32. You are a DBA and are planning a high-availability (HA) solution for
Microsoft SQL Server. You have decided to use AWS RDS. Which option will be
used for HA?

A. Asynchronous replication
B. Data Guard
C. TDE
D. Mirroring
Answer: D

Explanation: When setting up Multi-AZ HA for Microsoft SQL Server, native SQL
Server Mirroring is used. Data Guard is used by Oracle. Asynchronous replication
is used by MySQL. TDE is Transparent Data Encryption, used by Oracle and SQL
Server to encrypt the database.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerMultiAZ.html
Q33. You've decided to use autoscaling in conjunction with SNS to alert you
when your auto-scaling group scales. Notifications can be delivered using SNS in
several ways. Which options are supported notification methods? (Choose 3
answers)

A. Messages posted to an SQS queue
B. HTTP or HTTPS POST notifications
C. Invoking of a Lambda function
D. Email using SMTP or plain text

Answer: B,C,D

使
Explanation: Amazon SNS can deliver notifications as HTTP or HTTPS POST,
email (SMTP, either plain-text or in JSON format), or as a message posted to an
Amazon SQS queue. If you prefer, you can use Amazon CloudWatch Events to
configure a target to invoke a Lambda function when your Auto Scaling group
scales or when a lifecycle action occurs.
Reference:
http://docs.aws.amazon.com/autoscaling/latest/userguide/ASGettingNotifications.html
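The notification types mentioned above are fixed strings in the Auto Scaling API. The following minimal sketch builds the parameters you would hand to a `put-notification-configuration` call; the group name and topic ARN are hypothetical placeholders, not values from this document.

```python
import json

# Hypothetical group name and topic ARN; the notification type strings
# are the fixed values the Auto Scaling API accepts.
notification_config = {
    "AutoScalingGroupName": "my-asg",
    "TopicARN": "arn:aws:sns:us-east-1:123456789012:asg-events",
    "NotificationTypes": [
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
        "autoscaling:EC2_INSTANCE_TERMINATE",
        "autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
    ],
}

print(json.dumps(notification_config, indent=2))
```

In practice these parameters would be passed to the Auto Scaling service (for example via the AWS CLI or an SDK), with SNS then fanning the event out to email, HTTP/HTTPS, or SQS subscribers.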
微 众

Q34. You are in the process of planning your backup strategy between your
on-premises data center and the AWS cloud. You are considering Storage
Gateway technology for backup and restore in your hybrid environment. What
are the primary factors that affect backup and recovery times when using
Storage Gateway? (Choose 2 answers)

A. Amount of data required for backup each day after data deduplication and
compression
B. Net available bandwidth between on premises and AWS over public internet or
Direct Connect line
C. Compute power of on-premises instances that need to be backed up
D. Amount of data required for backup each day before data deduplication and
compression

Answer: A,B
Explanation: Storage gateways interact with AWS over the public Internet or
Direct Connect. You will need to know the amount of data per day that needs to
be transferred to AWS S3 and the net available bandwidth of your network
connection. Storage Gateway performs data compression and uploads only
changed data.
Reference:
https://d0.awsstatic.com/whitepapers/best-practices-for-backup-and-recovery-on-prem-to-aws.pdf
Q35. Your team is developing an Elastic Beanstalk application and discussing
how to design the most appropriate environment. When deploying your Elastic
Beanstalk environment, which of the following must you create? (Choose 2
answers)

A. Tag-based role
B. Instance profile
C. Role-based profile
D. Service role

Answer: B,D



Explanation: When you create an environment, AWS Elastic Beanstalk prompts
you to provide two AWS Identity and Access Management (IAM) roles, a service
role and an instance profile. The service role is assumed by Elastic Beanstalk to
use other AWS services on your behalf. The instance profile is applied to the
instances in your environment and allows them to upload logs to Amazon S3 and
perform other tasks that vary depending on the environment type and platform.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-roles.html
仅 信 号:

Q36. Exchange Server 2013 is the dominant application used within your
company. Due to cost constraints, AWS has been chosen to host your Exchange
servers. You need to test out the environment quickly to understand the cloud in
more detail. What should be your first step?

A. Deploy an exchange server using a Quickstart reference deployment.


B. Deploy reserved instances that match your current Exchange server
environment.
C. Use AWS import services to migrate your VMs to the AWS cloud.
D. Use the AWS server migration service.

Answer: A

Explanation: Quick Start reference deployments allow you to deploy architecture
that has been designed to operate as a "gold standard."
Reference:
https://aws.amazon.com/compliance/pci-data-privacy-protection-hipaa-soc-fedra
mp-faqs/
Q37. You are an engineer at a large bank, responsible for managing your firm's
AWS infrastructure. As the environment and number of active resources continue
to grow, your finance team is having a difficult time attributing your AWS costs to
the various business units. How can you help the finance team assign costs to
the correct business units? (Choose 2 answers)

A. Enable detailed billing reports.


B. Set up consolidated billing.
C. Give them access to TCO calculator.
D. Tag all resources with the appropriate business unit.

Answer: A,D

Explanation: By tagging all resources with the business unit and enabling
detailed billing reports, you would enable the finance team to run cost reports by
business unit.


Reference:


http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-reports.html
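The tagging answer above can be sketched as a request body for EC2's tag API; the resource ID and tag values below are hypothetical examples, not part of the original question.

```python
import json

# Hypothetical resource ID and tag values. A BusinessUnit tag like this,
# once activated as a cost-allocation tag, lets the detailed billing
# reports break costs down per business unit.
tag_request = {
    "Resources": ["i-0123456789abcdef0"],
    "Tags": [
        {"Key": "BusinessUnit", "Value": "Finance"},
        {"Key": "Environment", "Value": "Production"},
    ],
}

print(json.dumps(tag_request, indent=2))
```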


Q38. Your company is planning to deploy a hybrid environment linking their
on-premises database to an application hosted at AWS. What are the concerns
with this design when considering performance between AWS and the on-site
database? (Choose 2 answers)

A. Security of data
B. Network latency
C. Control of data resources
D. Network bandwidth

Answer: B,D

Explanation: Since the database is not located in the same location as the
application servers, network latency and bandwidth could potentially be an issue.
Reference:
https://d0.awsstatic.com/whitepapers/the-path-to-the-cloud-dec2015.pdf
Q39. You need to design secure administrative access to application servers
residing in private subnets in multiple availability zones. You require a select
group of administrators to have access from specific IP addresses. You also want
the solution to be automated and repeatable. Which of the following options
could achieve your goals? (Choose 2 answers)

A. NAT Gateway
B. CloudFormation
C. Third-party AMI NAT solutions
D. Quick Start

Answer: B,D
Explanation: The Linux bastion host Quick Start provides an automated,
repeatable way to deploy administrative access hosts. You can use IAM with
AWS CloudFormation to control what users can do with AWS CloudFormation.
Amazon also provides Amazon Linux AMIs that are configured to run as NAT
instances.
Reference: https://aws.amazon.com/quickstart/architecture/linux-bastion/
Q40. Your company is migrating their environment to AWS. The legacy
environment relied on Chef for automation, and your engineers are comfortable
with that solution. In addition, your compliance officer has indicated that the new
environment needs centralized, auditable configuration management for
regulatory reasons. Which of the following AWS automation tools is most
appropriate for this scenario?

A. OpsWorks for Chef Automate
B. Elastic Beanstalk
C. CloudFormation
D. OpsWorks stacks

Answer: A



Explanation: OpsWorks Stacks will let you utilize your existing Chef recipes, but
only OpsWorks for Chef Automate will provide you with centralized, continuous
configuration management.
Reference: https://aws.amazon.com/opsworks/

46 ze 算狂

Q41. You need to change a deployed Elastic Beanstalk environment to provide
additional performance for EC2 instances that a client is currently running for
their application. To change your instance count for Elastic Beanstalk, select the
necessary steps (in any order). (Choose 3 answers)

A. Change the Minimum Instance Count and then Apply.
B. Choose Configuration and then choose Scaling.
C. Open a command prompt window and execute the command
change-elastic-beanstock -env.
D. Open the Elastic Beanstalk console and navigate to the management page.

Answer: A,B,D

Explanation: Elastic Beanstalk environments can be changed after creation. For
example, if you have a compute-intensive application, you can change the type
of Amazon EC2 instance that is running your application. To do this, you use the
Scaling option in the management page of the Elastic Beanstalk console and
change the minimum instance count.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.html
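The same console steps can be expressed as option settings in the `aws:autoscaling:asg` namespace. A minimal sketch of an `update-environment` request follows; the environment name and instance counts are hypothetical.

```python
import json

# Hypothetical environment name and counts; MinSize/MaxSize live in the
# aws:autoscaling:asg namespace of an Elastic Beanstalk environment.
update_request = {
    "EnvironmentName": "my-env",
    "OptionSettings": [
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "6"},
    ],
}

print(json.dumps(update_request, indent=2))
```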
Q42. AWS Elastic File System (EFS) provides you with the ability to create
scalable file systems for use with Amazon EC2. Although EFS-to-EFS backup is
available, you have been hired to urgently back up an existing AWS EFS file
system, and do not have time to reconfigure it for that automated process. What
three steps should you perform? (Choose 3 answers)

A. Enable EFS backups.


B. Access your EFS backups.
C. Download the AWS Data Pipeline template for backups.
D. Create a data pipeline for backup.

Answer: B,C,D

Explanation: There are three steps to backing up an AWS EFS file system. You
will need to download the AWS Data Pipeline template for backups, as EFS
backups leverage AWS Data Pipeline. Once the template has been downloaded,
create a data pipeline, which will back up the EFS file system on the schedule
you define. Once the backup has run, you will be able to access your EFS
backups.

Reference:
http://docs.aws.amazon.com/efs/latest/ug/efs-backup.html#backup-steps


Q43. Your company is migrating its infrastructure to AWS. When considering the
migration level of effort required, you determine there are a select number of VMs
that fall under the "very low to low" category. After considering third-party
migration options, you decide to utilize available AWS tools. What tools are
available for migrating VMs to the AWS cloud? (Choose 2 answers)

A. AWS Migration Services
B. AWS Snowmobile
C. AWS Snowball
D. AWS VM Import

Answer: C,D

Explanation: Both Snowball and VM Import can be used to migrate VMs.
Snowball mitigates any bandwidth issues that may occur when using VM Import
with standard Internet connections.
Reference:
https://www.slideshare.net/AmazonWebServices/aws-migration-planning-roadmap
Q44. You have been asked to create a new S3 bucket that will be used by the
financial auditing team to store and share files. There are many different
permission requirements, ranging from list-only to read-only and read-write
access. You have decided to use resource-based permissions for their flexibility.
Which statements below are correct with regard to resource-based permissions
on your S3 bucket? (Choose 2 answers)

A. With resource-based permissions you can define permissions for
sub-directories of your bucket separately.
B. S3 buckets are not eligible for resource-based permissions.
C. An explicit deny in resource-based permissions will override an allow
permission the user might have for the same resource.
D. Resource-based permissions are managed policies you can apply directly to
your bucket.

Answer: A,C

Explanation: Resource-based permissions are permissions you attach directly to
a resource. Resource-level permissions refer to the ability to specify not just
what actions users can perform, but which resources they're allowed to perform
those actions on. Resource-based policies are inline only, not managed. You can
specify resource-based permissions for Amazon S3 buckets, Amazon Glacier
vaults, Amazon SNS topics, Amazon SQS queues, and AWS Key Management
Service encryption keys.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/access_permissions.html
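An explicit deny on a sub-directory (key prefix) can be sketched as a bucket policy like the one below; the bucket name, prefix, and principal are hypothetical examples used only for illustration.

```python
import json

# Hypothetical bucket, prefix, and user. The explicit Deny statement
# overrides any Allow the user otherwise has for objects under the
# restricted/ prefix.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/auditor"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::audit-bucket/*",
        },
        {
            "Effect": "Deny",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/auditor"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::audit-bucket/restricted/*",
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```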


Q45. You are planning to set up an on-premises backup solution to the cloud.
The solution must compress and deduplicate the data prior to sending it over
your WAN to storage in S3. You are researching AWS Storage Gateway. Which
features are supported by AWS Storage Gateway? (Choose 2 answers)

A. AES-256 encryption of data at rest
B. AES-128 encryption of data at rest
C. Encryption of downloaded data only via SSL
D. Encryption of uploaded and downloaded data via SSL

Answer: A,D

Explanation: AWS Storage Gateway offers upload and download encryption
using SSL and AES-256 encryption of data at rest, presents data using industry
standard formats such as iSCSI or VTLs, and uploads only changed data, which
is compressed prior to upload or download.
Reference:
https://d0.awsstatic.com/whitepapers/best-practices-for-backup-and-recovery-on-prem-to-aws.pdf
Q46. Due to compliance rules and regulations, your company's workload has to
run on dedicated instances. How can you make sure that all EC2 instances
created for your workload, now and in the future, are dedicated instances?
(Choose 2 answers)

A. Create a golden AMI of dedicated instance and enforce this AMI as the base
for any new instance in your environment. This guarantees that all instances
created based on the AMI will be dedicated
B. Create a dedicated Placement Group and associate all new instances with this
placement group at launch time. All instances launched in dedicated placement
group will be dedicated
C. Create your VPCs with instance tenancy of dedicated. This will ensure that all
instances launched in VPC are dedicated
D. Create CloudFormation template for EC2 instances that has Tenancy attribute
as dedicated and always use it to launch any new instance

Answer: C,D

Explanation: You can create a VPC with an instance tenancy of dedicated to
ensure that all instances launched into the VPC are Dedicated Instances.
Alternatively, you can specify the tenancy of the instance during launch.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-instance.html#dedicated-usage-overview
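Both correct answers can be sketched as resource declarations in a CloudFormation-style template. The logical names, CIDR, and AMI ID below are hypothetical.

```python
import json

# Hypothetical logical names, CIDR, and AMI. InstanceTenancy on the VPC
# makes every instance launched into it dedicated; Tenancy on the
# instance resource does the same per launch.
template = {
    "Resources": {
        "DedicatedVPC": {
            "Type": "AWS::EC2::VPC",
            "Properties": {"CidrBlock": "10.0.0.0/16", "InstanceTenancy": "dedicated"},
        },
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-12345678", "Tenancy": "dedicated"},
        },
    }
}

print(json.dumps(template, indent=2))
```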


Q47. Your company is considering migrating to AWS, but they are concerned
about the initial and mid- to short-term costs due to the complexity of the
migration cycle. To effectively calculate the total cost of ownership, certain costs
must be understood and planned for. What costs should be primary
considerations? (Choose 3 answers)

A. The cost of running migration tools
B. The cost of using a Direct Connect connection
C. The cost of outside consulting services
D. The cost of running duplicate environments

Answer: A,C,D

Explanation: Failing to accurately estimate your costs before migration and
during the migration process is a recipe for disaster. To build a migration model
for optimal efficiency, it is important to accurately understand the current costs of
running on-premises applications, as well as the interim costs incurred during the
transition.
Reference:
https://d0.awsstatic.com/whitepapers/the-path-to-the-cloud-dec2015.pdf
Q48. You are a DBA at a rapidly growing company and you want to shorten
failover time for your AWS SQL Server database. Which strategies will shorten
failover time? (Choose 2 answers)

A. Use smaller transactions.


B. Use larger transactions.
C. Configure the health check using shorter intervals.
D. Ensure you have provisioned sufficient IOPS.

Answer: A,D
Explanation: Recovery takes IOPS; therefore, insufficient IOPS will slow down
failover time. Database recovery relies on transactions; therefore, smaller
transactions will shorten failover time.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractice
s.html
Q49. You wish to perform detailed monitoring on servers that are marked as
unhealthy before they are terminated. After researching the issue, you find that
lifecycle hooks can be deployed as necessary. Which strategies could you use to
perform detailed monitoring? (Choose 2 answers)

A. Define a notification target for the lifecycle hook.


B. Use a Cloud Watch event to invoke a Lambda function.
C. Add additional details to the user data to place the instance into a
Pending:Wait state.
D. Query 169.254.169.254 when instances are first marked as unhealthy.



Answer: A,B



Explanation: Lifecycle hooks allow customization of existing auto-scaling groups
by pausing instances in a wait state, during which you can perform custom
actions such as notifying a target or invoking a Lambda function through a
CloudWatch event.
Reference:
http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
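A termination lifecycle hook with a notification target can be sketched as the following `put-lifecycle-hook` parameters; the group, hook, topic, and role names are hypothetical.

```python
import json

# Hypothetical names and ARNs. A hook on EC2_INSTANCE_TERMINATING
# pauses unhealthy instances in a wait state so detailed monitoring can
# run before termination proceeds.
lifecycle_hook = {
    "AutoScalingGroupName": "my-asg",
    "LifecycleHookName": "collect-diagnostics",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_TERMINATING",
    "NotificationTargetARN": "arn:aws:sns:us-east-1:123456789012:hook-topic",
    "RoleARN": "arn:aws:iam::123456789012:role/asg-hook-role",
    "HeartbeatTimeout": 300,
    "DefaultResult": "CONTINUE",
}

print(json.dumps(lifecycle_hook, indent=2))
```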

Q50. Your storage team is preparing to move a large amount of the company's
on-premises data to AWS. They decide to use Snowball. What solutions are
available for moving the data to the Snowball appliance? (Choose 2 answers)

A. The Amazon S3 Adapter for Snowball
B. The Amazon Snowball Client
C. AWS Database Migration Service
D. The Amazon S3 CLI

Answer: A,B

Explanation: Amazon provides the Amazon Snowball Client, and the Amazon S3
Adapter for Snowball for data transfer between on-premises data center and
Snowball appliance. The Amazon S3 Adapter for Snowball is a programmatic tool
that you can use to transfer data between your on-premises data center and a
Snowball. It replaces the functionality of the Snowball client.
Reference: http://docs.aws.amazon.com/snowball/latest/ug/using-appliance.html
Q51. Your developers have created a sales application that works in tandem with
a NoSQL database. To ensure the fastest response for the application in
production, the developers wish to remove the need to wait for
acknowledgements from the database to the application after data has been
sent. The acknowledgements can be stored and accessed asynchronously.
Which managed service would be the best choice for their design?
A. AWS Config
B. CloudWatch with notifications
C. Simple Workflow Service
D. Simple Queue Service

Answer: D

Explanation: SQS allows you to quickly build hosted and scalable message
queuing applications that can run on any computer. SQS stores messages in
transit between diverse, distributed application components without losing
messages and without requiring each component to be always available.
Reference: https://aws.amazon.com/sqs/details/
Q52. You are a DBA setting up a fault-tolerant AWS RDS. You need to recognize
the events that will cause a failover. Which events will cause a failover to the
standby? (Choose 2 answers)



A. A change in the DB instance's server type
B. Failure of the secondary DB instance
C. Failure of an automated backup
D. Failure of the primary DB instance

Answer: A,D

Explanation: The events that trigger a DB failover are an Availability Zone outage,
failure of the primary DB instance, a change in the DB instance's server type,
ongoing software patching for the operating system of the DB instance, and a
manual failover of the DB instance initiated using Reboot with failover.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
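The manual failover trigger mentioned above can be sketched as the parameters for an RDS reboot-with-failover request; the instance identifier is a hypothetical placeholder.

```python
import json

# Hypothetical instance identifier. Setting ForceFailover reboots a
# Multi-AZ deployment by failing over to the standby replica.
reboot_request = {
    "DBInstanceIdentifier": "prod-db",
    "ForceFailover": True,
}

print(json.dumps(reboot_request, indent=2))
```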

Q53. You are an administrator managing Windows File Servers in Amazon Web
Services. You need to set up a fault-tolerant system to protect your shared files.
You decide to use DFS Namespaces and DFS Replication. Which of the
following are prerequisites? (Choose 2 answers)

A. File servers running Windows Server 2012 R2


B. An Active Directory Schema of 2008 R2 or later
C. File servers running Windows Server 2008 R2
D. The File Server Resource Manager role installed by your domain controllers

Answer: A,B

Explanation: The technical requirements are file servers running Windows Server
2012 R2 and an Active Directory Schema of at least Server 2008 R2.
Reference:
https://d0.awsstatic.com/whitepapers/implementing-windows-file-server-disaster-recovery.pdf
Q54. You are developing a static website using S3 for your company's clients.
Your team is configuring the site, and has made the files world readable. What
other steps must they take? (Choose 2 answers)

A. Create a website page hierarchy.


B. Specify a default index document.
C. Specify a default redirect of all errors.
D. Enable static website hosting.

Answer: B,D

Explanation: To configure a bucket for static website hosting, you add a website
configuration to your bucket. The configuration includes an index document, error
documents, redirects of all requests that are intended for the index page, and any
conditional redirects. Amazon S3 does not support server-side scripting.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/HowDoIWebsiteConfiguration.html
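The website configuration described above can be sketched as follows; the document names are hypothetical defaults rather than values from the question.

```python
import json

# Hypothetical document names. This is the website configuration you
# attach to the bucket after making the objects world readable and
# enabling static website hosting.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

print(json.dumps(website_configuration, indent=2))
```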


Q55. The developers on your project team would like to use Elastic Beanstalk to
deploy web tier applications at AWS without worrying about the underlying
infrastructure. When launching their Elastic Beanstalk environment, which
environment tier should they select?

A. Application tier
B. Worker tier
C. Web server tier
D. Background job tier

Answer: C

Explanation: The environment tier that you choose determines whether Elastic
Beanstalk provisions resources to support a web application that handles
HTTP(S) requests or a web application that handles background-processing
tasks. An environment tier whose web application processes web requests is
known as a web server tier. An environment tier whose web application runs
background jobs is known as a worker tier.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.architecture.html
Q56. After your online sales application is monitored for the last two weeks of
holiday sales, it is apparent that your database tier's current architectural design
is not sufficient. The decision is made to scale your instance to a greater size
based on database recommendations from the vendor. Your current database
storage type is magnetic and, currently, storage usage is at 70%. What
modifications should you consider for improving the performance of your
database? (Choose 3 answers)

A. Change the storage type to general-purpose SSD.


B. Increase the currently allocated storage space.
C. Decrease the currently allocated storage space.
D. Change the storage type to provisioned IOPS SSD.

Answer: A,B,D

Explanation: When scaling a database, you can increase the storage capacity of
a DB instance up to a maximum of 4 to 6 terabytes (TB). Please note that scaling
the storage allocation with Amazon RDS does not incur a database outage.
Performance will be increased as well.
Reference:
https://forums.aws.amazon.com/thread.jspa?messageID=203052
Q57. Due to a recent threat, your company has asked you to implement an
architecture that will minimize the effect of DDoS attacks. Which AWS service or
feature will you need to utilize in your architecture to minimize the effect of layer 6
SSL attacks and layer 4 SYN flood attacks?



A. EC2 Security Groups
B. Firewall on Operating System Level
C. Elastic Load Balancing with Auto Scaling
D. Network Access Control Lists

Answer: C

Explanation: Elastic Load Balancing provides DDoS mitigation at layers 4 and 6.
Larger DDoS attacks can exceed the size of a single Amazon EC2 instance. With
Elastic Load Balancing (ELB), you can reduce the risk of overloading your
application by distributing traffic across many backend instances. ELB can scale
automatically, and accepts only well-formed TCP connections. This means that
many common DDoS attacks, like SYN floods or UDP reflection attacks, will not
be accepted by ELB and will not be passed to your application.
Reference:
https://d0.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf
Q58. A customer requires failover from an application server hosted on a
dedicated subnet to another application server on another dedicated subnet. In
order to test the failover scenario, an additional network interface must also be
added to each instance. What approaches can be applied to this scenario?
(Choose 2 answers)

A. Both subnets must reside in the same region.


B. Both subnets must reside in the same availability zone.
C. Both subnets must be peered.
D. The instance can remain running, as the network interface can be added as a
hot attach.

Answer: B,D

Explanation: You can attach a network interface to an instance when it's running
(hot attach), when it's stopped (warm attach), or when the instance is being
launched (cold attach). You can attach a network interface in one subnet to an
instance in another subnet in the same VPC; however, both the network interface
and the instance must reside in the same Availability Zone.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#best-practices-for-configuring-network-interfaces
Q59. Your company wants to migrate an on-premises SQL Server DB to AWS
RDS SQL Server. As the DBA, you have determined that your database can be
offline while the backup is created, copied, and restored. You decide to use
native backup and restore. Which steps are required during setup? (Choose 3
answers)



A. Turn on compression for your backup files by running "exec
rdsadmin..rds_set_configuration 'S3 backup compression', 'true'."
B. Create an AWS IAM role for access to the S3 bucket.
C. Add the SQLSERVER_BACKUP_RESTORE option to an option group on
your DB instance.
D. Create an AWS S3 bucket for your backup files.

Answer: B,C,D
30 _b
Explanation: There are three components you'll need to set up for native backup
and restore: an Amazon S3 bucket to store your backup files; an AWS Identity
and Access Management (IAM) role to access the bucket; and the
SQLSERVER_BACKUP_RESTORE option added to an option group on your DB
instance.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Procedural.Importing.html
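The option-group step can be sketched as a `modify-option-group` request; the option group name and role ARN below are hypothetical, and the IAM_ROLE_ARN setting ties the SQLSERVER_BACKUP_RESTORE option to the role that can reach the S3 bucket.

```python
import json

# Hypothetical option group name and role ARN. IAM_ROLE_ARN points the
# SQLSERVER_BACKUP_RESTORE option at the role with access to the bucket.
option_group_update = {
    "OptionGroupName": "sqlserver-backup-restore",
    "OptionsToInclude": [
        {
            "OptionName": "SQLSERVER_BACKUP_RESTORE",
            "OptionSettings": [
                {"Name": "IAM_ROLE_ARN",
                 "Value": "arn:aws:iam::123456789012:role/rds-backup-role"},
            ],
        }
    ],
    "ApplyImmediately": True,
}

print(json.dumps(option_group_update, indent=2))
```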
Q60. The developers on your project team have just finished configuring an
application and want to save the Elastic Beanstalk settings that they have applied
to deploy resources in that environment. What two formats for saving
configuration option settings are supported? (Choose 2 answers)

A. JSON
B. YAML
C. PS1
D. XML
Answer: A,B

Explanation: Elastic Beanstalk supports two formats for saving configuration
option settings. Configuration files in YAML or JSON format can be included in
your application's source code in a directory named .ebextensions and deployed
as part of your application source bundle. You create and manage configuration
files locally.
Saved configurations are templates that you create from a running environment
or JSON options file and store in Elastic Beanstalk. Existing saved configurations
can also be extended to create a new configuration.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-methods-before.html
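A JSON-format configuration file of the kind described above can be sketched like this; the file name and option values are hypothetical examples.

```python
import json

# Hypothetical values. Saved as e.g. .ebextensions/scaling.config (JSON
# format) inside the application source bundle, this sets option values
# that Elastic Beanstalk applies when the environment is deployed.
config_file = {
    "option_settings": [
        {"namespace": "aws:autoscaling:asg",
         "option_name": "MinSize", "value": "2"},
        {"namespace": "aws:elasticbeanstalk:environment",
         "option_name": "EnvironmentType", "value": "LoadBalanced"},
    ]
}

print(json.dumps(config_file, indent=2))
```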
Q61. In order to provide adequate compute resources for your company portal
during busy sales cycles, you have your instances assigned to an auto-scaling
group associated with an elastic load balancer and associated health checks.
From time to time the instances within the auto-scaling group are marked as
unhealthy as expected, but the unhealthy instances are not terminated. What
must you change to ensure that instances marked as unhealthy will be
terminated?


A. Add an additional availability zone for failover.
B. Configure the auto-scaling group to use both instance status checks and load
balancer health checks.
C. Enable connection draining.
D. Enable cross-zone replication.

Answer: B

Explanation: Both instance status checks and health checks must be enabled
before unhealthy instances will be terminated. It is critical that you test not only
the saturation and breaking points but also the "normal" traffic profile you expect.
Reference: https://aws.amazon.com/articles/1636185810492479
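Enabling load balancer health checks on the group can be sketched as an `update-auto-scaling-group` request; the group name and grace period are hypothetical.

```python
import json

# Hypothetical group name and grace period. HealthCheckType "ELB" makes
# the group act on load balancer health checks in addition to the
# default EC2 status checks, so ELB-unhealthy instances get replaced.
asg_update = {
    "AutoScalingGroupName": "portal-asg",
    "HealthCheckType": "ELB",
    "HealthCheckGracePeriod": 300,
}

print(json.dumps(asg_update, indent=2))
```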
Q62. You have received a call from a client who attempted to set up a static
website on S3 by himself. He thinks the website is configured properly but is
unable to view any content. You see that the bucket has the same domain name
as the website and also that index and error docs have been created. What could
be missing?

A. Specify an alternate index document.


B. Make all files world readable.
C. Specify a default redirect of all errors.
D. Create a website page hierarchy.

Answer: B
Explanation: When you configure a bucket as a website, you must make the
objects that you want to serve publicly readable. To do so, you write a bucket
policy that grants everyone
s3:GetObject permission. On the website endpoint, if a user requests an object
that does not exist, Amazon S3 returns HTTP response code 404 (Not Found). If
the object exists but you have not granted read permission on the object, the
website endpoint returns HTTP response code 403 (Access Denied).
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/HowDoIWebsiteConfiguration.html
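The `s3:GetObject` grant to everyone that the explanation describes can be sketched as the bucket policy below; the bucket name is a hypothetical placeholder.

```python
import json

# Hypothetical bucket name. This policy grants everyone s3:GetObject on
# all objects, which is what makes a website bucket publicly readable.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*",
        }
    ],
}

print(json.dumps(public_read_policy, indent=2))
```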
Q63. Your web-based application has been launched publicly. Your design has
implemented auto scaling and classic load balancing and your design is
responding to the changes in demand as expected. Over the next few months,
during the holiday season, demand is expected to be quite robust. You estimate
that 100 EC2 instances will be required to meet your customers' demand. How
should you plan properly for growth? (Choose 2 answers)



A. Use AWS Trusted Advisor to analyze your workload requirements.
B. Change your auto scaling configuration, setting a desired maximum capacity
of 100 instances. Verify your limits allow for this capacity.
C. Contact Amazon to pre-warm your elastic load balancer to match the expected
demands.
D. Add a second load balancer for additional redundancy.

Answer: B,C

Explanation: In certain scenarios, such as when flash traffic is expected, or in the
case where a load test cannot be configured to gradually increase traffic, AWS
recommends that you have your load balancer "pre-warmed". They will configure
the load balancer to have the appropriate level of capacity based on the traffic
that you expect. They also need to know the start and end dates of your tests or
expected flash traffic, the expected request rate per second, and the total size of
the typical request/response that you will be testing.
Reference:
https://aws.amazon.com/articles/1636185810492479#pre-warming
Q64. Your company has begun deploying corporate resources to AWS. They
want to ensure AWS compliance levels match their corporate requirements.
Which actions reflect best practices for carrying out a security assessment of
your environment on the cloud? (Choose 2 answers)

A. Request AWS to run a penetration test against your environment and generate
an assessment report with critical findings and best practices to resolve them.
B. Request approval to perform relevant network scans and penetration tests of
your system on AWS.
C. Review applicable third-party AWS compliance reports and attestations and
conduct a gap analysis to find missing controls.
D. Carry out a detailed audit and inventory of on-premises resources and
operations.

Answer: B,C

Explanation: AWS Service Organization Control (SOC) Reports are independent


third-party examination reports available to current AWS customers that provide a
description of the AWS controls environment and external audit of AWS controls
meeting the AICPA Trust Services Security and Availability Principles and Criteria.
They provide customers and users who have a business need with an
independent assessment of AWS' control environment relevant to system
security and availability. Customers may also request authorization for penetration
testing to or originating from any AWS resources (see
aws.amazon.com/security/penetration-testing). Reference:
https://aws.amazon.com/compliance/soc-faqs/
Q65. Your developers need to create security rules to control the inbound traffic
access to their instances on a public subnet. They wish to provide access to port
80 and port 443 but deny access to specific IP addresses. How should they
proceed when creating their security rules?

A. Create an IAM role-based policy for all security rules.
B. Create NACLs to control port access and security groups to deny access from
specific IP addresses.
C. Create security groups to control port access, and deny access from specific
IP addresses.
D. Create security groups to control port access and NACLs to deny access from
specific IP addresses.

Answer: D

Explanation: Security groups cannot deny access because their design is
permissive only; network ACLs, by contrast, support both allow and deny rules,
so they are the right place to block specific IP addresses.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
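The allow-versus-deny distinction can be sketched in a few lines. This is an illustrative model, not an AWS API: the rule numbers, CIDRs, and first-match loop mimic how a network ACL evaluates rules in rule-number order, which is what makes an explicit deny possible (security groups have no deny rules at all).

```python
# Illustrative sketch (not an AWS API): network ACL rules are evaluated in
# rule-number order and the first match wins.
import ipaddress

def evaluate_nacl(rules, source_ip, port):
    """Return 'ALLOW' or 'DENY' using first-match semantics like a NACL."""
    for _, cidr, ports, action in sorted(rules):  # lowest rule number first
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(cidr) \
                and port in ports:
            return action
    return "DENY"  # implicit deny (the '*' rule)

# Deny one bad actor, allow web traffic from everyone else.
nacl_rules = [
    (100, "203.0.113.7/32", {80, 443}, "DENY"),
    (200, "0.0.0.0/0",      {80, 443}, "ALLOW"),
]

print(evaluate_nacl(nacl_rules, "203.0.113.7", 443))  # DENY
print(evaluate_nacl(nacl_rules, "198.51.100.9", 80))  # ALLOW
```

Because rule 100 sorts before rule 200, the blocked address matches the deny rule first; in a security group there is no equivalent, since anything not explicitly allowed is simply dropped.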
Q66. You've designed a public portal (with a MySQL database) featuring popular
scientific journals. Due to its popularity, users are complaining it takes much
longer to review selected journals than in the past. In addition, the popularity of
your site is worldwide. What are the first steps that you should take to resolve
your performance issues? (Choose 2 answers)

A. Place additional read replicas in AWS regions closer to your users.


B. Create a read replica synchronized with your master database.
C. Deploy ElastiCache to improve the performance of your web servers.
D. Redesign your database as a Multi-AZ solution.
Answer: A,B

Explanation: Read replicas synchronized with the master database allow you to
increase performance. Placing your read replicas in different AWS regions closer
to your users will maximize performance and increase the availability of your
database.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
Q67. Your on-premise bare-metal servers are over five years old. You're
considering moving to AWS. The offerings of virtual instances far surpass what
you can access on-premises. Before you choose your test instance at AWS, what
questions should you ask? (Choose 3 answers)

A. What is the cost of over provisioning?


B. What is the average server utilization?


C. When peak load occurs, how much over provisioning is required?


D. How much network bandwidth do you need?



Answer: A,B,C



Explanation: On-premises data centers have costs associated with the servers,
storage, networking, power, cooling, physical space, and IT labor required to
support applications and services running in the production environment. For
servers, these questions are most applicable: What is your average server
utilization? How much do you overprovision for peak load? What is the cost of
over-provisioning?
Reference:
https://d0.awsstatic.com/whitepapers/the-path-to-the-cloud-dec2015.pdf
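The cost-of-over-provisioning question can be made concrete with a back-of-the-envelope calculation. The server count, per-server cost, and utilization figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical numbers: estimating what on-premises over-provisioning costs
# given average utilization across the fleet.
def overprovisioning_cost(server_count, cost_per_server, avg_utilization):
    """Money spent each year on capacity that sits idle on average."""
    idle_fraction = 1.0 - avg_utilization
    return server_count * cost_per_server * idle_fraction

# e.g. 20 servers at $3,000/year each, averaging 30% utilization
wasted = overprovisioning_cost(20, 3000, 0.30)
print(f"${wasted:,.0f} per year pays for idle capacity")
```

With those assumed figures, roughly 70% of the annual server spend is paying for headroom that only matters at peak, which is exactly the waste that pay-as-you-go instances are meant to eliminate.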
Q68. Your developers have been working on a VPC configuration and run across
some connectivity issues. They ask you if they can attach an additional network
interface for additional private network connections within their VPC on select
EC2 instances, and if so, how to go about it. What type of network component
should they add to complete this task?

A. Network ACL
B. Elastic network interface
C. Elastic IP address
D. Multi-homed instance

Answer: B

Explanation: Elastic network interfaces add an additional network interface to


selected EC2 instances. An elastic network interface is a virtual network interface
that you can attach to an instance in a VPC. These network interfaces are
available only for instances running in a VPC. Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
Q69. Due to increased online sales activity, your backend Amazon RDS relational
database has been under heavy loads. The overall performance needs to be
improved. You have decided to increase the size of your instance based on
vendor recommendations. Analysis of both database reads and writes also
shows that your application database has roughly the same number of reads and
writes. What type of scaling solution should you consider for the database?

A. Auto scaling
B. Vertical scaling to vendor recommendations
C. Horizontal scaling
D. Storage resizing

Answer: B


Explanation: Vertical scaling will help address applications that use the same


number of reads and writes. To handle a higher load in your database, you can


vertically scale up your master database by modifying the size of the instance in


the settings pane. Reference:


https://aws.amazon.com/blogs/database/category/rds-mysql/

Q70. You are an AWS Solutions Architect helping a client plan a migration to the
AWS cloud. The client is very cost-conscious and needs to understand the
budget implications of any design decisions prior to signing off. Now that you've
identified the resources that must be created in the AWS environment to support
the migration, what tool could you use to help project future costs given this
information?

A. Detailed Billing Reports
B. Simple Monthly Calculator
C. Cost Explorer
D. TCO Calculator

Answer: B

Explanation: The Simple Monthly Calculator is used to calculate projected costs,


assuming you know what AWS resources you'll be consuming.
Reference: https://aws.amazon.com/pricing/cost-optimization/
Q71. You've taken over management of your company's S3 Buckets. The
buckets are using Access Control Lists for security. But you have some intricate
security requirements and would like more fine-grained control with
cross-account control. What could you use?

A. S3 Bucket policy
B. IAM resource policy
C. IAM ID policy
D. S3 User policy

Answer: A

Explanation: You can use ACLs to grant cross-account permissions to other


accounts, but ACLs support only a finite set of permissions. Although both bucket
and user policies support granting permission for all Amazon S3 operations, the
user policies are for managing permissions for users in your account. For
cross-account permissions to other AWS accounts or users in another account,
you must use a bucket policy.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-alternatives-guidelines.html
Q72. You have been consulted on strategy for deploying and removing web
applications hosted in a test VPC. The application team is using Elastic
Beanstalk to host this application; however, lately they are unable to create new
versions of this application, citing an error related to the application version limit.
What recommendations can you make to help solve this problem now and in the
future? (Choose one)

A. Application version limit will increase automatically when limit is reached.
B. Apply an application version lifecycle policy to your applications.
C. From the management console, delete all versions no longer required.
D. Elastic Beanstalk deletes all versions automatically.

Answer: B

Explanation: You can avoid hitting an application version limit by applying an
application version lifecycle policy to your applications. A lifecycle policy tells
Elastic Beanstalk to delete application versions that are old, or to delete
application versions when the total number of versions for an application exceeds
a specified number. Elastic Beanstalk applies an application's lifecycle policy
each time you create a new application version, and deletes up to 100 versions
each time the lifecycle policy is applied. Elastic Beanstalk deletes old versions
before creating the new version, and does not count the new version towards the
maximum number of versions defined in the policy.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-lifecycle.html
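A lifecycle policy of this kind is just a small configuration structure. The sketch below shows the shape such a policy might take when passed to the Elastic Beanstalk API (for example via boto3's `update_application_resource_lifecycle`); the role ARN and the limits are made-up values, and the exact field names should be checked against the current API reference:

```python
# Sketch of an Elastic Beanstalk application version lifecycle policy.
# The role ARN, MaxCount, and MaxAgeInDays values are hypothetical.
lifecycle_config = {
    "ServiceRole": "arn:aws:iam::123456789012:role/aws-elasticbeanstalk-service-role",
    "VersionLifecycleConfig": {
        # Count-based pruning: keep at most 200 versions, delete the oldest
        # beyond that, and clean up their source bundles in S3.
        "MaxCountRule": {
            "Enabled": True,
            "MaxCount": 200,
            "DeleteSourceFromS3": True,
        },
        # Age-based pruning left disabled here; typically only one rule is on.
        "MaxAgeRule": {
            "Enabled": False,
            "MaxAgeInDays": 180,
            "DeleteSourceFromS3": False,
        },
    },
}

print(lifecycle_config["VersionLifecycleConfig"]["MaxCountRule"]["MaxCount"])
```

The count-based rule addresses the scenario in the question directly: once more than 200 versions accumulate, the oldest ones are deleted automatically instead of blocking new deployments.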
Q73. Your company is migrating into Amazon Web Services and has selected the
US East region in which to operate. The majority of your work involves interfacing
with government departments in the United States and Germany. Your company
expects the deployment into the cloud to be up and running in a minimal amount of time.
Before deploying any resources in the European region, what should be your first
consideration?

A. European privacy laws


B. The number of web servers to deploy
C. Industry rules and standards
D. The pre-warming storage and load balancers

Answer: A

Explanation: Privacy laws in foreign countries will dictate the compliance rules
and regulations. AWS customers remain responsible for complying with
applicable compliance laws and regulations.
Reference:
https://aws.amazon.com/compliance/pci-data-privacy-protection-hipaa-soc-fedra
mp-faqs/
Q74. Your organization is launching a new portal that will be hosted in one of the
US regions but available world-wide. As part of this global portal, clients will be
transferring large files to the primary S3 bucket in the primary region. There is a
concern about delay of file transfer for clients from other continents because of


the distance. Which service or feature could help you with this scenario while


keeping your operation cost at the minimum? (Choose 2 answers).



A. Use Amazon S3 Transfer Acceleration on the primary bucket in the primary region
B. Enable S3 cross-region replication for the primary bucket
C. Utilize CloudFront to allow customers to upload to their closest edge location
D. Deploy your portal in multiple regions and use geo-location routing features of
Route 53 to direct clients to the closest portal deployment to their location

Answer: A,C

Explanation: Amazon S3 Transfer Acceleration enables fast, easy, and secure
transfers of files over long distances between your client and an S3 bucket.
Transfer Acceleration takes advantage of Amazon CloudFront's globally
distributed edge locations. As the data arrives at an edge location, data is routed
to Amazon S3 over an optimized network path. A multi-region deployment would
also be valid; however, it incurs a lot of additional cost and would have a
maintenance overhead.
Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
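Once Transfer Acceleration is enabled on a bucket, clients reach it through the `s3-accelerate` endpoint instead of the regional one. The helper below is a minimal sketch of that endpoint substitution; the bucket name and key are hypothetical:

```python
# Sketch: building the Transfer Acceleration URL for a bucket. The bucket
# name and object key below are made up for illustration.
def accelerate_url(bucket, key, dualstack=False):
    host = ("s3-accelerate.dualstack.amazonaws.com" if dualstack
            else "s3-accelerate.amazonaws.com")
    return f"https://{bucket}.{host}/{key}"

print(accelerate_url("journal-uploads", "papers/2017/q3.pdf"))
# https://journal-uploads.s3-accelerate.amazonaws.com/papers/2017/q3.pdf
```

Uploads sent to this hostname enter AWS at the nearest edge location and then travel to the bucket's region over the optimized network path described above.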
Q75. Due to recent threats, your team has been asked to focus on the prevention
of DDoS attacks in web applications using CloudFront and Route 53. Which AWS
infrastructure feature helps protect your web application from direct attacks?

A. network time protocols


B. edge locations
C. availability zones
D. UDP reflections

Answer: B
Explanation: Services that are available in AWS edge locations, like Amazon
CloudFront, AWS WAF, Amazon Route 53, and Amazon API Gateway, allow you
to take advantage of a global network of edge locations that can provide your
application with greater fault tolerance and increased scale for managing larger
volumes of traffic. Reference:
https://d0.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf
Q76. You are responsible for a web application where the Web server instances
are hosted in auto-scaling group. Monitoring the load of the application over a
period of 12 months reveals that nine servers are required to handle the
minimum load. During a 24-hour period, an average of 14 servers are needed.
Three weeks out of the year, the number of servers needed might increase to 16.
What recommendations would you make to minimize operating costs while
providing the required availability?

A. Nine reserved instances with heavy utilization, 5 reserved instances with
medium utilization, and the rest covered by on-demand instances
B. Nine reserved instances with heavy utilization, 5 on-demand instances, and
the rest covered by on-demand instances
C. Nine reserved instances with heavy utilization, 5 reserved instances with
medium utilization, and the rest covered by spot instances
D. Nine reserved instances with heavy utilization, 5 spot instances, and the rest
covered by on-demand instances

Answer: D

Explanation: Optimizing instance costs involves consideration of all pricing
options as applicable for each requirement. The purchasing option that you
choose affects the lifecycle of the instance. An on-demand instance runs when
you launch it and ends when you terminate it. A spot instance runs as long as its
capacity is available and your bid price is higher than the spot price. Reserved
instances provide you with a significant discount compared to on-demand
instance pricing. Reserved instances are not physical instances, but rather a
billing discount applied to the use of on-demand instances in your account.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html
Q77. Your company will be connecting to Amazon cloud utilizing a hybrid design.
It will be necessary to create an IPsec VPN connection using the Internet as the
pathway. The solution must provide for two separate VPN endpoints for additional
redundancy to be located at AWS. In addition, both BGP and IPsec connections
must be terminated on the same user gateway device. Which options support
these design criteria? (Choose 3 answers)

A. Dual termination of connections


B. Dynamic routing support
C. Hardware VPN solution with IPsec
D. Marketplace HA Solution

Answer: A,B,C

Explanation: Hardware VPN provides dual redundant paths and BGP support by
establishing a hardware VPN connection from your network equipment on a
remote network to AWS-managed network equipment attached to your Amazon
VPC. This allows reuse of existing VPN equipment and processes, reuse of
existing Internet connections, AWS-managed endpoints with multi-data-center
redundancy and automated failover, and support of static routes or dynamic
Border Gateway Protocol (BGP) peering and routing policies.
Reference:
http://media.amazonwebservices.com/AWS_Amazon_VPC_Connectivity_Options.pdf
Q78. You are migrating your Oracle database using the AWS database migration


service. Due to the large amount of data being replicated, you need the


replication process to be continuous. What must be changed on your replication


instance to use ongoing replication?



A. Enable the Multi-AZ option on the replication instance.
B. Increase the number of tables that are cached in RAM.
C. Increase the amount of data written to your database change log.
D. Disable backups on the target instance.

Answer: A

Explanation: Enabling the Multi-AZ option provides high availability and failover
support for the replication instance.
Reference:
http://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html#CHAP_BestPractices.OnGoingReplication
Q79. Due to compliance regulations, network technicians are instructed to begin
logging IP traffic going to and from specific network interfaces on the private
network. They are directed to use flow logs to capture this information. The
network design is a mixture of newer VPCs and older EC2-Classic networks.
What IP traffic information will not be captured by the flow logs? (Choose 2
answers)

A. Metadata requests for 169.254.169.254


B. IP traffic on EC2-Classic networks
C. Amazon WorkSpaces traffic within a VPC
D. IPV4 and IPv6 traffic from elastic network adapters

Answer: A,B
Explanation: Flow logs are not supported for EC2-Classic and have limitations for
all networks. For example, if your network interface has multiple IPv4 addresses
and traffic is sent to a secondary private IPv4 address, the flow log displays the
primary private IPv4 address in the destination IP address field. You also cannot
enable flow logs for VPCs that are peered with your VPC unless the peer VPC is
in your account.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html#flow-logs-limitations
Q80. While monitoring your application servers that are hosted behind an elastic
load balancer, you discover that the servers always operate at between 75 and
80% of their capacity after five minutes of operation. In addition, there is a
constant number of servers being marked as unhealthy very early in their initial
lifecycle. Upon further analysis, you also discover that your servers are taking
between three and four minutes to become operational after launch. What two
tasks should you complete as soon as possible? (Choose 2 answers)



A. Enable detailed CloudWatch monitoring.


B. Increase the length of your grace period.


C. Increase the maximum number of instances in your auto-scaling group.


D. Decrease the length of your grace period.

Answer: B,C

Explanation: A longer grace period and a larger maximum number of instances
will solve these issues. You can use scaling policies to increase or decrease the
number of running EC2 instances in your group automatically to meet changing
conditions. When the scaling policy is in effect, the Auto Scaling group adjusts
the desired capacity of the group and launches or terminates the instances as
needed. If you manually scale or scale on a schedule, you must adjust the
desired capacity of the group in order for the changes to take effect.
Reference:
http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html
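The grace-period failure mode in the question can be sketched as a small timing check. This is an illustrative model, not Auto Scaling itself; the boot and grace durations are the hypothetical ones from the scenario:

```python
# Sketch: an instance that needs 3-4 minutes to boot will fail health checks
# issued before it is ready, so the grace period must exceed the boot time.
def flagged_unhealthy(check_time_s, boot_time_s, grace_period_s):
    """True if a failed check at check_time_s counts against the instance."""
    still_booting = check_time_s < boot_time_s
    grace_expired = check_time_s >= grace_period_s
    return still_booting and grace_expired

# Boot takes 240s. With a 120s grace period, a check at 180s flags the
# instance while it is still starting; a 300s grace period covers the boot.
print(flagged_unhealthy(180, boot_time_s=240, grace_period_s=120))  # True
print(flagged_unhealthy(180, boot_time_s=240, grace_period_s=300))  # False
```

This is why instances were being marked unhealthy early in their lifecycle: health checks arrived after the grace period expired but before the three-to-four-minute boot completed.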
Q81. A backup administrator is designing an offsite backup solution using AWS
Storage Gateway. Which factors below should be evaluated for performance?
(Choose 2 answers)

A. The amount of space required for your data in S3


B. Throughput and volume of data transfer between the backup server and the
storage gateway
C. How long the data will be stored in S3 prior to moving it to Glacier
D. Ratio of data transfer volume to Internet bandwidth between the storage
gateway and Amazon S3

Answer: B,D
Explanation: There are two primary performance factors to consider when
evaluating a storage gateway solution: throughput and volume of data transfer
between the backup server and the storage gateway and the ratio of data
transfer volume to Internet bandwidth between the storage gateway and Amazon
S3.
Reference:
https://d0.awsstatic.com/whitepapers/best-practices-for-backup-and-recovery-on-prem-to-aws.pdf
Q82. A MySQL database currently contains 2 million records. Approximately 2000
new records are added every day, with an average of 80 queries per second. The
database is running on a 4 core, 4 GB dedicated system in the local data center.
Once a week, on average, the system has issues and resets. It is next on the list
to be moved to the cloud. What step should be taken first before it is migrated?

A. Export all database records to S3.


B. Order an on-demand instance matching the on-premises architecture.


C. Use the AWS server migration service to migrate existing records.


D. Fix all MySQL issues in the local data center.



Answer: D



Explanation: Issues that are not first solved locally will not be solved by moving to
the cloud. Moving to the cloud is not a silver bullet. If there are inherent problems
in the existing architecture, there is no guarantee that they will be solved by
moving to the cloud. Try to resolve any issues locally before migrating your
architecture (and its existing issues) to the cloud.
Reference:
http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Troubleshooting.html

Q83. Your team has just launched an application using Elastic Beanstalk and is
in the process of verifying that traffic is being directed correctly. What type of
DNS record will be used to direct the associated Load Balancer to your hosted
environment?

A. CNAME record
B. AAAA record
C. A record
D. MX record

Answer: A

Explanation: Route 53 uses a CNAME record for scalable operations. Every


environment has a CNAME (URL) that points to a load balancer. The
environment has a URL such as myapp.us-west-2.elasticbeanstalk.com. This
URL is aliased in Amazon Route 53 to an Elastic Load Balancing
URL--something like abcdef-123456.us-west-2.elb.amazonaws.com--by using a
CNAME record.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.archite
cture.html
Q84. You are performing some testing of an application in development and wish
to delete the parts of a multipart upload after a mistake is made with part
numbering. Which command would you run to stop the upload and delete all the
parts? (choose one).

A. Cancel multipart upload


B. Delete multipart upload
C. Abort multipart upload
D. Remove multipart upload

Answer: C

Explanation: To delete the parts of a failed multipart upload so that you do not get
charged for storage of those parts, you must use the command "Abort Multipart
Upload" and provide the upload ID of the upload you wish to abort. After aborting
a multipart upload, you cannot upload any part using that upload ID again. All
storage that any parts from the aborted multipart upload consumed is then freed.


Reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html


Q85. You are responsible for maintaining an RDS database deployed in a Multi-AZ
deployment architecture. What would be the downtime and/or failover cycle
associated with maintenance events on your database? (Choose 2 answers)

A. Database will not be available during the maintenance windows
B. Standby instance will become the new primary after maintenance.
C. Maintenance will be applied to the standby instance first.
D. Old primary will be promoted back to new primary after maintenance

Answer: B,C

Explanation: Running a DB instance as a Multi-AZ deployment can further


reduce the impact of a maintenance event, because Amazon RDS will conduct
maintenance by following these steps:
. Perform maintenance on the standby.
. Promote the standby to primary.
. Perform maintenance on the old primary, which becomes the new standby.
Reference:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html
Q86. Multipart upload is a feature that allows you to upload large objects in parts.
Which CLI command can you use to retrieve the parts of a specific multipart
upload or all in-progress multipart uploads?

A. display-multipart-uploads
B. get-multipart-uploads
C. list-multipart-uploads
D. enumerate-multipart-uploads

Answer: C

Explanation: You can list the parts of a specific multipart upload or all in-progress
multipart uploads. The list parts operation returns the parts information that you
have uploaded for a specific multipart upload. For each list parts request,
Amazon S3 returns the parts information for the specified multipart upload, up to
a maximum of 1,000 parts. If there are more than 1,000 parts in the multipart
upload, you must send a series of list part requests to retrieve all the parts. Note
that the returned list of parts doesn't include parts that haven't completed
uploading. Reference:
http://docs.aws.amazon.com/AmazonS3/latest/dev/mpuoverview.html
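The 1,000-part-per-request limit means retrieving a large upload's parts is a paging loop. The sketch below models that behaviour locally (it does not call S3); each simulated request returns up to 1,000 parts plus a marker for the next page, mirroring the truncation flag described above:

```python
# Sketch of the paging behaviour: each simulated "list parts" request
# returns at most 1,000 parts plus a marker for the next request.
def list_parts_page(all_parts, marker=0, max_parts=1000):
    page = all_parts[marker:marker + max_parts]
    next_marker = marker + len(page)
    is_truncated = next_marker < len(all_parts)
    return page, next_marker, is_truncated

parts = list(range(1, 2501))  # a hypothetical 2,500-part upload
collected, marker, truncated, requests = [], 0, True, 0
while truncated:
    page, marker, truncated = list_parts_page(parts, marker)
    collected.extend(page)
    requests += 1

print(requests)  # 3 requests to retrieve all 2,500 parts
```

A 2,500-part upload therefore needs three requests: two full pages of 1,000 parts and a final page of 500.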
Q87. You have created a VPC with both public and private subnets. The VPC has
a CIDR notation of 10.0.0.0/16. The private subnet uses CIDR 10.0.1.0/24 and


the public subnet uses 10.0.0.0/24. Web servers serving traffic on ports 80 & 443


and NAT device will be hosted in the public subnet. The database server will be


hosted in the private subnet and will require internet connectivity for patching and


updates. NAT Device security group controls inbound and outbound traffic


to/from internet for private instances. Which of the entries below are NOT

required when creating the NAT security group? (Choose 2 answers)

A. For Inbound access, allow Source: 10.0.1.0/24 on port 80
B. For Inbound access, allow Source: 10.0.0.0/24 on port 80
C. For Inbound access, allow Source: 10.0.0.0/24 on port 443
D. For Outbound access, allow Destination: 0.0.0.0/0 on port 80

Answer: B,C

Explanation: The NAT device will be used by private instances in the private
subnet, so the private subnet CIDR has to be allowed inbound on both ports 80
and 443. The public subnet with CIDR block 10.0.0.0/24 already has internet
access and doesn't need to use the NAT device, so this range doesn't need to be
opened on the security group. The NAT device will initiate outbound traffic to the
internet, usually on ports 80 and 443 for regular patches and updates, so
outbound traffic on these ports has to be open for all destinations.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
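The subnet reasoning can be checked mechanically with the CIDRs from the question; the two host addresses below are hypothetical examples of a database server and a web server:

```python
# Checking which source ranges the NAT security group actually needs,
# using the subnets from the question. Host addresses are hypothetical.
import ipaddress

private = ipaddress.ip_network("10.0.1.0/24")  # private subnet
public = ipaddress.ip_network("10.0.0.0/24")   # public subnet

db_server = ipaddress.ip_address("10.0.1.25")   # database host (private)
web_server = ipaddress.ip_address("10.0.0.10")  # web host (public)

# Only private-subnet hosts route through the NAT, so only 10.0.1.0/24
# belongs in its inbound rules for ports 80 and 443.
print(db_server in private)   # True  -> must traverse the NAT
print(web_server in private)  # False -> reaches the internet directly
```

Since the web server's address falls outside 10.0.1.0/24, inbound rules for 10.0.0.0/24 (options B and C) serve no purpose on the NAT security group.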
Q88. You are planning to implement a system to mitigate DDoS attacks for a
client in light of some recent suspected threats. You are aware that some DDoS
attacks are more common than others, so your priority is to make sure that your
first action provides protection for the most common attacks. Which of these
architecture layers are more vulnerable to DDoS attacks? (Choose 2 answers)
A. Data layer is vulnerable because DDoS attacks cause data leakage
B. Infrastructure layer is vulnerable because DDoS attacks over-utilize
infrastructure resources
C. Network layer is vulnerable because DDoS attacks aim to break network
backbone and cause a global application outage
D. Application layer is vulnerable because DDoS attacks flood application
services with overwhelming load which causes application to become less
responsive or break completely

Answer: B,D

Explanation: DDoS attacks are usually targeted at the infrastructure and
application layers in your architecture. The first objective of DDoS attacks is to
consume all available resources with fake load, causing the whole system to
become unresponsive because of over-utilized resources. The second type of
DDoS attack takes aim at application services by overwhelming the application
with fake load that will cause the application to break even if there is enough
capacity in the infrastructure to handle the load. DDoS mitigation has to be
implemented in both layers to ensure elastic capacity growth and application
protection against layer 7 attacks.
Reference:
https://d0.awsstatic.com/whitepapers/DDoS_White_Paper_June2015.pdf
Q89. Your client wants to implement scalable but cost-effective storage while
maintaining data security. You recommend using AWS Storage Gateway and set
up an instance with Gateway Cached Volumes for low latency. You immediately
realize that you need to access the storage on the gateway. Which method would
you use?

A. NFS
B. Fibre
C. iSCSI
D. CIFS

Answer: C

Explanation: AWS Storage Gateway enables applications that are clustered using
Windows Server Failover Clustering (WSFC) to use the iSCSI initiator to access
a gateway's volumes. However, connecting multiple hosts to the same iSCSI
target is not supported. When using Red Hat Enterprise Linux (RHEL), you use
the iscsi-initiator-utils RPM package to connect to your gateway iSCSI targets
(volumes or VTL devices).
Reference:
http://docs.aws.amazon.com/storagegateway/latest/userguide/GettingStarted-use-volumes.html
Q90. A legacy application in your company has been deployed in a single
availability zone. It is used only sporadically but must be available nevertheless.
Due to your heavy administrative workload, you wish to automate a process that
will rebuild the application automatically if it fails. What action should you take?

A. Configure the server in an auto scaling group with a minimum and maximum
size of one.
B. Add an additional availability zone and a load balancer.
C. Monitor the server availability using Cloud Watch metrics to receive alerts of
health check failures.
D. Perform snapshots of EBS volumes on a set schedule.

Answer: A

Explanation: Auto scaling provides redundancy for a single server with the option
of launching a configuration using the attributes from a running instance. When
you use this option, auto scaling copies the attributes from the specified instance
into a template from which you can launch one or more auto scaling groups.


Note that if the specified instance has properties that are not currently supported


by auto scaling, instances launched by auto scaling using the launch


configuration created from the identified instance might not be identical to the


identified instance.


Reference:
http://docs.aws.amazon.com/autoscaling/latest/userguide/LaunchConfiguration.html

97 oo 魔

Q91. You are an AWS Solutions Architect helping a client plan a migration to the
AWS Cloud. The client has provided you with a detailed inventory of their current
on-premises compute infrastructure, including metrics related to storage and
bandwidth. What tool would you use to estimate the amount of money they will
save by migrating to the AWS Cloud?

A. TCO Calculator
B. Trusted Advisor
C. Simple Monthly Calculator
D. Detailed Billing Reports

Answer: A

Explanation: The TCO (Total Cost of Ownership) Calculator is used to calculate


savings given information about the user's current environment.
Reference: https://aws.amazon.com/tco-calculator/
Q92. You are an engineer at a large bank, responsible for managing your firm's
AWS infrastructure. The finance team has approached you, indicating their
concern over the growing AWS budget, and has asked you to investigate ways to
lower it. Since your firm has enterprise-level support, you decide to use the AWS
Trusted Advisor tool for this effort. What are some of the cost optimization checks
that Trusted Advisor will perform? (Choose 3 answers)
A. Idle Load Balancers
B. Underutilized Amazon EBS Volumes
C. Underutilized Amazon Redshift Clusters
D. EC2 Spot Instance Optimization

Answer: A,B,C

Explanation: Trusted Advisor checks for Amazon EC2 reserved instances


optimization, low utilization of Amazon EC2 instances, idle Load Balancers,
underutilized Amazon EBS Volumes, unassociated Elastic IP Addresses, Amazon
RDS idle DB instances, Amazon Route 53 Latency Resource Record Sets,
Amazon EC2 reserved instance lease expiration, and underutilized Amazon
Redshift Clusters.
Reference:
https://aws.amazon.com/premiumsupport/trustedadvisor/best-practices/
Q93. ExamKiller has developed a sensor intended to be placed inside of people's


shoes, monitoring the number of steps taken every day. ExamKiller is expecting


thousands of sensors reporting in every minute and hopes to scale to millions by


the end of the year. A requirement for the project is it needs to be able to accept


the data, run it through ETL to store in warehouse and archive it on Amazon


Glacier, with room for a real-time dashboard for the sensor data to be added at a
later date.
What is the best method for architecting this application given the requirements?
Choose the correct answer:

A. Use Amazon Cognito to accept the data when the user pairs the sensor to the
phone, and then have Cognito send the data to DynamoDB. Use Data Pipeline to
create a job that takes the DynamoDB table and sends it to an EMR cluster for
ETL, then outputs to Redshift and S3, using S3 lifecycle policies to archive on
Glacier.
微 众

B. Write the sensor data directly to a scaleable DynamoDB; create a data


限 号

pipeline that starts an EMR cluster using data from DynamoDB and sends the
data to S3 and Redshift.
C. Write the sensor data directly to Amazon Kinesis and output the data into
Amazon S3 creating a lifecycle policy for Glacier archiving. Also, have a parallel
processing application that runs the data through EMR and sends to a Redshift
data warehouse.
D. Write the sensor data to Amazon S3 with a lifecycle policy for Glacier, create
an EMR cluster that uses the bucket data and runs it through ETL. It then outputs
that data into Redshift data warehouse.

Answer: C

Explanation: Amazon Kinesis is used for accepting real-time data, and can have
parallel applications reading the raw data for different purposes, including
building a custom real-time dashboard to read shared output data.
Q94. ExamKiller has a library of on-demand MP4 files needing to be streamed
publicly on their new video webinar website. The video files are archived and are
expected to be streamed globally, primarily on mobile devices.
Given the requirements what would be the best architecture for ExamKiller to
design? Choose the correct answer:

A. Upload the MP4 files to S3 and create an Elastic Transcoder job that
transcodes the MP4 source into HLS chunks. Store the HLS output in S3 and
create a media streaming CloudFront distribution to serve the HLS files to end
users.
B. Upload the MP4 files to S3 and create an Elastic Transcoder job that
transcodes the MP4 source into HLS chunks. Store the HLS output in S3 and
create a CloudFront download distribution to serve the HLS files to end users.
C. Provision WOWZA streaming EC2 instances which use S3 as the source for
the HLS on-demand transcoding on the WOWZA servers. Provision a new
CloudFront download distribution with the WOWZA streaming server as the
origin.
D. Provision WOWZA streaming EC2 instances which use S3 as the source for
the HLS on-demand transcoding on the WOWZA servers. Provision a new
CloudFront streaming distribution with the WOWZA streaming server as the
origin.

Answer: B

Explanation: CloudFront streaming distributions only support the Adobe RTMP
streaming protocol; HLS is a progressive download protocol. Configuring the
output of the HLS chunks to be an S3 bucket and using the S3 bucket as the
origin for streaming would be the most scalable way to meet the requirements.
There is no requirement to protect the digital content.


Q95. You have a legacy application running that uses an m4.large instance size
and cannot scale with Auto Scaling, but only has peak performance 5% of the
time. This is a huge waste of resources and money, so your Senior Technical
Manager has set you the task of trying to reduce costs while still keeping the
legacy application running as it should. Which of the following would best
accomplish the task your manager has set you?
Choose the correct answer:

A. Use a C4.large instance with enhanced networking.


B. Use a T2 burstable performance instance.
C. Use two t2.nano instances that have single Root I/O Virtualization.
D. Use t2.nano instance and add spot instances when they are required.

Answer: B

Q96. ExamKiller is running Oracle DB workloads on AWS. Currently, they are
running the Oracle RAC configuration on the AWS public cloud. You've been
tasked with configuring backups on the RAC cluster to enable durability. What is
the best method for configuring backups? Choose the correct answer:

A. Enable Multi-AZ failover on the RDS RAC cluster to reduce the RPO and RTO
in the event of disaster or failure.
B. Create a script that runs snapshots against the EBS volumes to create
backups and durability.
C. Enable automated backups on the RDS RAC cluster; enable auto snapshot
copy to a backup region to reduce RPO and RTO.
D. Create manual snapshots of the RDS backup and write a script that runs the
manual snapshot.

Answer: B

Explanation: RAC is not supported by RDS but can be run on EC2. To back up
EC2 instances, you can briefly suspend I/O while the snapshot creation starts.
Data Guard on Oracle is also an acceptable solution to extend high availability
to a RAC cluster running on EC2.


Q97. ExamKiller has a Redshift cluster for petabyte-scale data warehousing. The
data within the cluster is easily reproducible from additional data stored on
Amazon S3. ExamKiller wants to reduce the overall total cost of running this
Redshift cluster. Which scenario would best meet the needs of the running
cluster, while still reducing total overall ownership of the cluster? Choose the
correct answer:

A. Disable automated and manual snapshots on the cluster.
B. Implement daily backups, but do not enable multi-region copy to save data
transfer costs.
C. Instead of implementing automatic daily backups, write a CLI script that
creates manual snapshots every few days. Copy the manual snapshot to a
secondary AWS region for disaster recovery situations.
D. Enable automated snapshots but set the retention period to a lower number to
reduce storage costs

Answer: A

Explanation: The cluster data is easily repopulated from Amazon S3. The best
overall method for this cluster would be not to enable backups at all, reducing
storage costs on the cluster.
Keep in mind this is not a likely production setup scenario, but is meant to test on
understanding where the costs are incurred in a Redshift environment.
Q98. Your CIO has become very paranoid recently after a series of security
breaches and wants you to start providing additional layers of security to all your
company's AWS resources. First up he wants you to provide additional layers of
protection to all your EC2 resources. Which of the following would be a way of
providing that additional layer of protection to all your EC2 resources?
Choose the correct answer:

A. Ensure that the proper tagging strategies have been implemented to identify
all of your EC2 resources.
B. All actions listed here would provide additional layers of protection.
C. Add an IP address condition to policies that specify that requests to EC2
instances should come from a specific IP address or CIDR block range.
D. Add policies which have deny and/or allow permissions on tagged resources

Answer: B

Q99. ExamKiller has developed a viral marketing website that specializes in
posting blog posts that go viral. The posts usually receive 90% of the viral traffic
within 24 hours of being posted and often need to be updated with corrections
during the first 24 hours. What would be the best method for implementing a
solution to help handle the scale of requests given the behavior of the blog
posts?
Choose the correct answer:

A. Create an ElastiCache cluster and use write-through caching strategies to
quickly update the content when blog posts require it.
B. Use a CloudFront CDN and configure a 0 TTL and enable URL parameter
forwarding to the origin.
C. Use a CloudFront CDN and configure a lower TTL, using CloudFront
invalidation mechanisms to clear the cache when updates are required.
D. Create an ElastiCache cluster and use lazy loading for the caching strategies.

Answer: B

Explanation: While ElastiCache with write-through would technically work, since
90% of the requests occur in the first 24 hours, using write-through will hog a lot
of resources and be an expensive caching strategy. Given the requirements of
short-lived caching, a TTL of 0 allows the edge location to keep a TCP
connection open and use headers to determine if the dynamic object has been
updated.
Q100. You have two different groups using Redshift to analyze data of a
petabyte-scale data warehouse. Each query issued by the first group takes
approximately 1-2 hours to analyze the data while the second group's queries
only take between 5-10 minutes to analyze data. You don't want the second
group's queries to wait until the first group's queries are finished. You need to
design a solution so that this does not happen. Which of the following would be
the best and cheapest solution to deploy to solve this dilemma?
Choose the correct answer:

A. Create two separate workload management groups and assign them to the
respective groups.
B. Start another Redshift cluster from a snapshot for the second team if the
current Redshift cluster is busy processing long queries.
C. Create a read replica of Redshift and run the second team's queries on the
read replica.
D. Pause the long queries when necessary and resume them when there are no
queries happening.

Answer: A
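Answer A refers to Redshift workload management (WLM). As an illustrative sketch (the user group names and concurrency values here are hypothetical), a two-queue configuration could be assembled as the JSON that is set on the cluster parameter group's wlm_json_configuration parameter:

```python
import json

# Hypothetical WLM configuration: two queues so short queries from the
# "analysts" group never wait behind the "research" group's long queries.
wlm_config = [
    {
        "query_group": [],
        "user_group": ["research"],   # long-running queries (1-2 hours)
        "query_concurrency": 2,
    },
    {
        "query_group": [],
        "user_group": ["analysts"],   # short queries (5-10 minutes)
        "query_concurrency": 5,
    },
]

# This JSON string is what would be stored in the Redshift parameter
# group as the value of wlm_json_configuration.
wlm_json = json.dumps(wlm_config)
print(wlm_json)
```

Queries submitted by members of each user group are routed to their own queue, so the two workloads no longer contend for the same concurrency slots.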

Q101. ExamKiller has two batch processing applications that consume financial
data about the day's stock transactions. Each transaction needs to be stored
durably and guarantee that a record of each application is delivered so the audit
and billing batch processing applications can process the data. However, the two
applications run separately and several hours apart and need access to the same
transaction information. After reviewing the transaction information for the day,
the information no longer needs to be stored.


What is the best way to architect this application?
Choose the correct answer:

A. Use SQS for storing the transaction messages; when the billing batch process
performs first and consumes the message, write the code in a way that does not
remove the message after it is consumed, so it is available for the audit
application several hours later. The audit application can consume the SQS
message and remove it from the queue when completed.
B. Store the transaction information in a DynamoDB table. The billing application
can read the rows, while the audit application will read the rows then remove the
data.
C. Use SQS for storing the transaction messages. When the billing batch
process consumes each message, have the application create an identical
message and place it in a different SQS queue for the audit application to use
several hours later.
D. Use Kinesis to store the transaction information. The billing application will
consume data from the stream, the audit application can consume the same data
several hours later.

Answer: D

Explanation: Kinesis streams store a rolling "buffer" of data. That data is only
removed after the timeout on the Kinesis stream (now customizable). This is ideal
because no additional costs or management is required to make the data
available and remove the data after the last application consumes it.
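The rolling-buffer behavior described above can be sketched with a toy in-memory model (an illustration of the concept only, not the Kinesis API): records expire by age rather than by being read, so both applications can consume the same data hours apart.

```python
class RollingBuffer:
    """Toy model of a Kinesis-style stream: records expire by age,
    not by being read, so multiple consumers can read the same data."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self.records = []  # list of (timestamp, payload)

    def put(self, payload, now):
        self.records.append((now, payload))

    def read_all(self, now):
        # Reading does NOT remove records; expiry is purely time-based.
        self.records = [(t, p) for t, p in self.records
                        if now - t < self.retention]
        return [p for _, p in self.records]

buf = RollingBuffer(retention_seconds=86400)  # 24-hour retention
buf.put("txn-1", now=0)
billing_view = buf.read_all(now=100)     # billing app reads early
audit_view = buf.read_all(now=7200)      # audit app reads hours later
expired_view = buf.read_all(now=90000)   # past retention: record is gone
print(billing_view, audit_view, expired_view)
```

Contrast this with SQS, where a consumed-and-deleted message is gone for every other consumer, which is why options A and C need workarounds.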
Q102. You have just set up your first AWS Data Pipeline. AWS Data Pipeline is a
web service that you can use to automate the movement and transformation of
data. With AWS Data Pipeline, you can define data-driven workflows, so that
tasks can be dependent on the successful completion of previous tasks. You are
pretty excited that it is about to run; however, when it finally kicks off, you receive
a "400 Error Code: PipelineNotFoundException." Which of the following
explanations is the most accurate in describing what this error probably means?
Choose the correct answer:

A. This error means that your IAM default roles might not have the required
permissions necessary for AWS Data Pipeline to function correctly.
B. This error means that the security token included in the request is invalid.
C. This error means that you have not set a valid value for either the runsOn or
workerGroup fields for those tasks.
D. This error means that you need to increase your AWS Data Pipeline system
limits.

Answer: A

Q103. You work for a large university whose AWS infrastructure has grown
significantly over the last year and consequently the IT department has hired four
new AWS System Administrators who will each manage a different Availability
Zone in your infrastructure. You have 4 AZs. You have been given the task of
giving these new staff access to be able to launch and manage instances in their
zone only; they should not be able to modify any of the other administrators'
zones. Which of the following options is the best solution to accomplish your
task?
Choose the correct answer:

A. Create four AWS accounts and give each user access to a separate account.
B. Create a VPC with four subnets and allow access to each subnet for the
individual IAM user.
C. Create four IAM users and four VPCs and allow each IAM user to have
access to separate VPCs.
D. Create an IAM user and allow them permission to launch an instance of a
different size only.

Answer: B

Q104. A few weeks into your dream job with the large scientific institution, a
group of EC2 instances that you set up in a Placement Group doesn't seem to
run as efficiently as you expected it to and seems to be suffering from low
performance of packets, high latency and lots of jitter. Consequently, you have
started to look at ways to fix this. Which of the following solutions would create
enhanced networking capabilities on instances that would result in higher
instances of packets per second, lower latency, and reduced jitter? Choose the
correct answer:

A. Adding more instances to the Placement Group. Making sure you stop and
restart all the other instances at the same time.
B. Using Single Root I/O Virtualization (SR-IOV) on all the instances.
C. Increasing the size of all the instances.
D. Splitting the instances across two Placement Groups in the same Availability
Zone.

Answer: B

Q105. ExamKiller is building out an AWS Cloud Environment for a financial
regulatory firm. Part of the requirements is being able to monitor all changes in
an environment and all traffic sent to and from the environment.
What suggestions would you make to ensure all the requirements for monitoring
the financial architecture are satisfied? (Choose Two)
Choose the 2 correct answers:

A. Configure an IPS/IDS system, such as Palo Alto Networks, that monitors,
filters, and alerts on all potentially hazardous traffic leaving the VPC.
B. Configure an IPS/IDS system, such as Palo Alto Networks, using promiscuous
mode that monitors, filters, and alerts on all potentially hazardous traffic leaving
the VPC.
C. Configure an IPS/IDS in promiscuous mode, which will listen to all packet
traffic and API changes.
D. Configure an IPS/IDS to listen and block all suspected bad traffic coming into
and out of the VPC. Configure CloudTrail with CloudWatch Logs to monitor all
changes within an environment.

Answer: A,D

Q106. You're building a mobile application game. The application needs
permissions for each user to communicate and store data in DynamoDB tables.
What is the best method for granting each mobile device that installs your
application access to DynamoDB tables for storage when required?
Choose the correct answer:

A. During the install and game configuration process, have each user create an
IAM credential and assign the IAM user to a group with proper permissions to
communicate with DynamoDB.
B. Create an IAM group that only gives access to your application and to the
DynamoDB tables.
Then, when writing to DynamoDB, simply include the unique device ID to
associate the data with that specific user.
C. Create an Active Directory server and an AD user for each mobile application
user. When the user signs in to the AD sign-on, allow the AD server to federate
using SAML 2.0 to IAM and assign a role to the AD user, which is then assumed
with AssumeRoleWithSAML.
D. Create an IAM role with the proper permission policy to communicate with the
DynamoDB table. Use web identity federation, which assumes the IAM role using
AssumeRoleWithWebIdentity, when the user signs in, granting temporary
security credentials using STS.
Answer: D
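As a sketch of the flow in answer D, these are the parameters an application would pass to sts:AssumeRoleWithWebIdentity after the user signs in with a web identity provider; the role ARN, account ID, region, and table name below are placeholders:

```python
import json

# Placeholder identifiers for illustration only.
ROLE_ARN = "arn:aws:iam::123456789012:role/MobileGameDynamoDBRole"
ID_TOKEN = "<token returned by the identity provider after sign-in>"

# Parameters the mobile app would pass to sts:AssumeRoleWithWebIdentity.
assume_role_params = {
    "RoleArn": ROLE_ARN,
    "RoleSessionName": "player-session",
    "WebIdentityToken": ID_TOKEN,
    "DurationSeconds": 3600,  # temporary credentials, not long-lived IAM keys
}

# An optional session policy can scope the temporary credentials down
# to this game's table for the duration of the session.
session_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/GameState",
    }],
}
assume_role_params["Policy"] = json.dumps(session_policy)
print(sorted(assume_role_params))
```

STS returns temporary security credentials for the assumed role, so no per-user IAM accounts ever need to be created or distributed to devices.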

Q107. Amazon ElastiCache currently supports two different in-memory key-value
engines, Memcached and Redis. You are launching your first ElastiCache cache
cluster, and you have to choose which engine you prefer. You are not 100% sure
but you decide on Memcached, which you know has a few key features different
to Redis. Which of the following is NOT one of those key features? Choose the
correct answer:

A. Object caching is your primary goal to offload your database.


B. You need as simple a caching model as possible.
C. You use more advanced data types, such as lists, hashes, and sets.
D. You need the ability to scale your cache horizontally as you grow.

Answer: C



Q108. A third-party auditor is being brought in to review security processes and
configurations for all of ExamKiller's AWS accounts. Currently, ExamKiller does
not use any on-premise identity provider. Instead, they rely on IAM accounts in
each of their AWS accounts. The auditor needs read-only access to all AWS
resources for each AWS account. Given the requirements, what is the best
security method for architecting access for the security auditor?
Choose the correct answer:

A. Create an IAM role with read-only permissions to all AWS services in each
AWS account.
Create one auditor IAM account and add a permissions policy that allows the
auditor to assume the ARN role for each AWS account that has an assigned role.
B. Configure an on-premise AD server and enable SAML and identity federation
for single sign-on to each AWS account.
C. Create a custom identity broker application that allows the auditor to use
existing Amazon credentials to log into the AWS environments.
D. Create an IAM user for each AWS account with read-only permission policies
for the auditor, and disable each account when the audit is complete.

Answer: A

Q109. ExamKiller has hired a third-party security auditor, and the auditor needs
read-only access to all AWS resources and logs of all VPC records and events
that have occurred on AWS. How can ExamKiller meet the auditor's requirements
without compromising security in the AWS environment?
Choose the correct answer:

A. Create a role that has the required permissions for the auditor.
B. Create an SNS notification that sends the CloudTrail log files to the auditor's
email when CloudTrail delivers the logs to S3, but do not allow the auditor access
to the AWS environment.
C. ExamKiller should contact AWS as part of the shared responsibility model, and
AWS will grant required access to the third-party auditor.
D. Enable CloudTrail logging and create an IAM user who has read-only
permissions to the required AWS resources, including the bucket containing the
CloudTrail logs.

Answer: D

Q110. Your company has just set up a new document server on its AWS VPC,
and it has four very important clients that it wants to give access to. These clients
also have VPCs on AWS, and it is through these VPCs that they will be given
access to the document server. In addition, each of the clients should not
have access to any of the other clients' VPCs.
Choose the correct answer:

A. Set up VPC peering between your company's VPC and each of the clients'
VPCs.
B. Set up all the VPCs with the same CIDR but have your company's VPC as a
centralized VPC.
C. Set up VPC peering between your company's VPC and each of the clients'
VPCs, but block the IPs from the CIDR ranges of the clients' VPCs to deny them
access to each other.
D. Set up VPC peering between your company's VPC and each of the clients'
VPCs. Each client should have VPC peering set up between each other to speed
up access time.

Answer: A
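VPC peering is not transitive, which is why answer A keeps the clients isolated from one another while still reaching the document server. A toy model of the reachability rule (the VPC names are hypothetical):

```python
# Toy model of VPC peering reachability: peering is non-transitive,
# so two VPCs can communicate only over a direct peering connection.
peerings = {
    frozenset({"company", "client-a"}),
    frozenset({"company", "client-b"}),
    frozenset({"company", "client-c"}),
    frozenset({"company", "client-d"}),
}

def can_reach(src, dst):
    # Traffic never transits an intermediate VPC in this hub-and-spoke layout.
    return frozenset({src, dst}) in peerings

print(can_reach("client-a", "company"))   # direct peering exists
print(can_reach("client-a", "client-b"))  # no direct peering: unreachable
```

Because the company VPC never forwards traffic between peers, no extra routing or blocking is needed to keep the clients apart.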

Q111. You've been tasked with creating file level restore on your EC2 instances.
You need to be able to restore an individual lost file on an EC2 instance within 15
minutes of a reported loss of information. The acceptable RPO is several hours.
How would you perform this on an EC2 instance?
Choose the correct answer:

A. Take frequent snapshots of EBS volumes, create a volume from an EBS
snapshot, attach the EBS volume to the EC2 instance at a different mount
location, cutover the application to look at the new backup volume and remove
the old volume
B. Take frequent snapshots of EBS volumes, create a volume from an EBS
snapshot, attach the EBS volume to the EC2 instance at a different mount
location, browse the file system to the file that needs to be restored on the new
mount, copy from the new volume to the backup volume
C. Setup a cron that runs aws s3 cp on the files and copy the files from the EBS
volume to S3
D. Enable auto snapshots on Amazon EC2 and restore the EC2 instance upon
single file failure

Answer: B

Explanation: The question asks how you restore a "single" file. Restoring a whole
volume would actually cause data loss if those other files were being updated.
Q112. Your job at a large scientific institution is moving along nicely. It is at the
forefront of the latest research on nano-technology, of which you have become
very passionate. You have been put in charge of scaling up some existing
infrastructure which currently has 9 EC2 instances running in a Placement Group.
All these 9 instances were initially launched at the same time and seem to be
performing as expected. You decide that you need to add 2 new instances to the
group; however, when you attempt to do this you receive a 'capacity error'. Which
of the following actions will most likely fix this problem?
Choose the correct answer:

A. Stop and restart the instances in the Placement Group and then try the launch
again.
B. Request a capacity increase from AWS as you are initially limited to 10
instances per Placement Group.
C. Make sure all the instances are the same size and then try the launch again.
D. Make a new Placement Group and launch the new instances in the new group.
Make sure the Placement Groups are in the same subnet.

Answer: A
Q113. ExamKiller has placed a set of on-premise resources with an AWS Direct
Connect provider. After establishing connections to a local AWS region in the US,
ExamKiller needs to establish a low-latency dedicated connection to an S3 public
endpoint over the Direct Connect dedicated low-latency connection.
What steps need to be taken to accomplish configuring a direct connection to a
public S3 endpoint?
Choose the correct answer:

A. Add a BGP route as part of the on-premise router; this will route S3 related
traffic to the public S3 endpoint to dedicated AWS region.
B. Configure a private virtual interface to connect to the public S3 endpoint via
the Direct Connect connection.
C. Configure a public virtual interface to connect to a public S3 endpoint
resource.
D. Establish a VPN connection from the VPC to the public S3 endpoint.

Answer: C
Q114. Due to cost-cutting measurements being implemented by your
organization, you have been told that you need to migrate some of your existing
resources to another region. The first task you have been given is to copy all of
your Amazon Machine Images from Asia Pacific (Sydney) to US West (Oregon).
One of the things that you are unsure of is how the PEM keys on your Amazon
Machine Images need to be migrated. Which of the following best describes how
your PEM keys are affected when AMIs are migrated between regions? Choose
the correct answer:

A. The PEM keys will also be copied across so you don't need to do anything
except launch the new instance.
B. The PEM keys will also be copied across; however, they will only work for
users who have already accessed them in the old region. If you need new users
to access the instances then new keys will need to be generated.
C. Neither the PEM key nor the authorized key is copied and consequently you
need to create new keys when you launch the new instance.
D. The PEM keys will not be copied to the new region but the authorization keys
will still be in the operating system of the AMI. You need to ensure when the new
AMI is launched that it is launched with the same PEM key name.

Answer: D
Q115. After having created a VPC with CIDR block 10.0.0.0/24 and launching it
as a working network, you decide a few weeks later that it is too small and you
wish to make it larger. Which of the below options would accomplish this
successfully?
Choose the correct answer:

A. Re-allocate the VPC with CIDR 10.1.1.1/16
B. Re-allocate the VPC with CIDR 10.0.0.0/28
C. Re-allocate the VPC with CIDR 10.0.0.0/16


D. You cannot change a VPC's size. Currently, to change the size of a VPC you
must terminate your existing VPC and create a new one.

Answer: D
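The relative sizes of the CIDR blocks in these options can be checked with Python's ipaddress module:

```python
import ipaddress

# Total address counts for the CIDR sizes discussed in the options.
for cidr in ["10.0.0.0/28", "10.0.0.0/24", "10.0.0.0/16"]:
    net = ipaddress.ip_network(cidr)
    print(cidr, net.num_addresses)
```

Note that within each VPC subnet, AWS reserves the first four addresses and the last address, so the usable counts are slightly lower than these totals.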

Q116. One of your work colleagues has just left and you have been handed
some of the infrastructure he set up. In one of the setups you start looking at, he
has created multiple components of a single application and all the components
are hosted on a single EC2 instance (without an ELB) in a VPC. You have been
told that this needs to be set up with two separate SSLs for each component.
Which of the following would best achieve setting up the two separate
SSLs while still only using one EC2 instance?
Choose the correct answer:
A. Create an EC2 instance which has multiple subnets attached to it and each
will have a separate IP address.
B. Create an EC2 instance with a NAT address.
C. Create an EC2 instance which has multiple network interfaces with multiple
elastic IP addresses.
D. Create an EC2 instance which has both an ACL and the security group
attached to it and have separate rules for each IP address.

Answer: C

Q117. ExamKiller is running a MySQL RDS instance inside of AWS; however, a
new requirement for disaster recovery is keeping a read replica of the production
RDS instance in an on-premise data center. What is the most secure way of
performing this replication? Choose the correct answer:

A. Configure the RDS instance as the master and enable replication over the
open internet using a secure SSL endpoint to the on-premise server.
B. Create a Data Pipeline that exports the MySQL data each night and securely
downloads the data from an S3 HTTPS endpoint.
C. RDS cannot replicate to an on-premise database server. Instead, first
configure the RDS instance to replicate to an EC2 instance with core MySQL,
and then configure replication over a secure VPN/VPG connection.
D. Create a secure VPN connection using either OpenVPN or VPN/VGW
through the Virtual Private Cloud service.



Answer: D

Explanation: RDS instances can replicate to on-premise database servers. It is
best practice to first create a dump of the database and copy it down, then
enable replication, since this uses the MySQL asynchronous replication feature.
Latency is an issue when using replication, so consider using a Direct Connect
connection depending on the use case for this situation.


Q118. ExamKiller has an employee that keeps terminating EC2 instances on the
production environment. You've determined the best way to ensure this doesn't
happen is to add an extra layer of defense against terminating the instances.
What is the best method to ensure the employee does not terminate the
production instances? Choose the 2 correct answers:

A. Tag the instance with a production-identifying tag and modify the employee's
group to allow only start, stop, and reboot api calls and not the terminate instance
call.
B. Modify the IAM policy on the user to require MFA before deleting EC2
instances
C. Tag the instance with a production-identifying tag and add resource-level
permissions to the employee user with an explicit deny on the terminate API call
to instances with the production tag.
D. Modify the IAM policy on the user to require MFA before deleting EC2
instances and disable MFA access to the employee

Answer: A,C

Explanation: The best method is to add resource level tags to the production EC2
instances and either grant or deny the allowed actions in an IAM policy. An
explicit deny will always override an allow. A and C either deny or allow and
unless explicitly allowed, it is denied, which is why both are correct.
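Answer C's explicit deny can be sketched as an IAM policy document; the tag key, tag value, region, and account ID below are placeholders:

```python
import json

# Hypothetical policy: explicitly deny TerminateInstances on any EC2
# instance carrying the tag Environment=production. In IAM policy
# evaluation, an explicit Deny always overrides any Allow.
deny_terminate_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:TerminateInstances",
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "production"}
        },
    }],
}
print(json.dumps(deny_terminate_policy, indent=2))
```

Because the Deny is explicit, the employee cannot terminate production-tagged instances even if another attached policy grants ec2:TerminateInstances.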
Q119. You are building a large-scale confidential documentation web server on
AWS and all of the documentation for it will be stored on S3. One of the
requirements is that it cannot be publicly accessible from S3 directly, and you will
need to use CloudFront to accomplish this. Which of the methods listed below
would satisfy the requirements as outlined? Choose the correct answer:

A. Create individual policies for each bucket the documents are stored in and in
that policy grant access to only CloudFront.
B. Create an S3 bucket policy that lists the CloudFront distribution ID as the
Principal and the target bucket as the Amazon Resource Name (ARN).
C. Create an Origin Access Identity (OAI) for CloudFront and grant access to the
objects in your S3 bucket to that OAI.
D. Create an Identity and Access Management (IAM) user for CloudFront and
grant access to the objects in your S3 bucket to that IAM User.

Answer: C

Q120. You have been given a new brief from your supervisor for a client who
needs a web application set up on AWS. The most important requirement is that
MySQL must be used as the database, and this database must not be hosted in
the public cloud, but rather at the client's data center due to security risks. Which
of the following solutions would be the best to assure that the client's
requirements are met?
Choose the correct answer:

A. Use the public subnet for the application server and use RDS with a storage
gateway to access and synchronize the data securely from the local data center.
B. Build the application server on a public subnet and build the database in a
private subnet with a secure ssh connection to the private subnet from the client's
data center.
C. Build the application server on a public subnet and the database at the client's
data center.
Connect them with a VPN connection which uses IPsec.
D. Build the application server on a public subnet and the database on a private
subnet with a NAT instance between them.

Answer: C
Q121. Once again your security officer is on your case and this time is asking
you to make sure the AWS Key Management Service (AWS KMS) is working as it
is supposed to. You are initially not too sure how KMS even works, however after
some intense late night reading you think you have come up with a reasonable
definition. Which of the following best describes how the AWS Key Management
Service works?
Choose the correct answer:

A. AWS KMS supports two kinds of keys -- master keys and data keys. Master
keys can be used to directly encrypt and decrypt up to 4 kilobytes of data and
can also be used to protect data keys. The data keys are then used to decrypt
the customer data, and the master keys are used to encrypt the customer data.
B. AWS KMS supports two kinds of keys -- master keys and data keys. Master
keys can be used to directly encrypt and decrypt up to 4 kilobytes of data and
can also be used to protect data keys. The data keys are then used to encrypt
the customer data and the master keys are used to decrypt the customer data.
C. AWS KMS supports two kinds of keys -- master keys and data keys. Master
keys can be used to directly encrypt and decrypt up to 4 kilobytes of data and
can also be used to protect data keys. The master keys are then used to encrypt
and decrypt customer data.
D. AWS KMS supports two kinds of keys -- master keys and data keys. Master
keys can be used to directly encrypt and decrypt up to 4 kilobytes of data and
can also be used to protect data keys. The data keys are then used to encrypt
and decrypt customer data.

Answer: D
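The envelope-encryption pattern described in answer D can be sketched in plain
Python. This is a toy illustration only: the XOR "cipher" below stands in for
the real AES encryption KMS performs, and in KMS the master key never leaves
the service. What matters is the flow: the master key wraps (protects) the data
key, and the data key encrypts and decrypts the customer data.

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256-based keystream. A stand-in for a real
    cipher, used only to illustrate the envelope pattern -- not secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# 1. The master key (in real KMS this never leaves the service).
master_key = os.urandom(32)

# 2. Generate a data key and protect it under the master key; KMS's
#    GenerateDataKey returns both the plaintext and the wrapped copy.
data_key = os.urandom(32)
encrypted_data_key = _keystream_xor(master_key, data_key)

# 3. Encrypt the customer data with the *data* key, then discard the
#    plaintext data key, keeping only its wrapped copy alongside the data.
plaintext = b"customer record"
ciphertext = _keystream_xor(data_key, plaintext)

# 4. To decrypt: unwrap the data key with the master key, then use it
#    on the ciphertext.
recovered_key = _keystream_xor(master_key, encrypted_data_key)
recovered = _keystream_xor(recovered_key, ciphertext)
assert recovered == plaintext
```

Because the same symmetric operation both wraps and unwraps, step 4 recovers
exactly the data key from step 2, which is why the data keys handle the
customer data in both directions (answer D) while the master key only
protects the data keys.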
30 _b
仅 信 号:

Q122. The Dynamic Host Configuration Protocol (DHCP) provides a standard for
passing configuration information to hosts on a TCP/IP network. You can have
multiple sets of DHCP options, but you can associate only one set of DHCP
options with a VPC at a time. You have just created your first set of DHCP
options and associated it with your VPC, but you now realize that you made an
error in setting them up and need to change the options. Which of the
following actions do you need to take to achieve this?
Choose the correct answer:

A. You must create a new set of DHCP options and associate them with your
VPC.
B. You can modify the options from the console or the CLI.
C. You need to stop all the instances in the VPC. You can then change the
options, and they will take effect when you start the instances.
D. You can modify the options from the CLI only, not from the console.

Answer: A
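Answer A reflects the fact that DHCP option sets are immutable: you create a
replacement set and associate it with the VPC. A sketch of this with the AWS
CLI might look like the following (the option values and resource IDs here are
hypothetical placeholders, not values from the question):

```shell
# Create a replacement DHCP options set with the corrected values.
aws ec2 create-dhcp-options \
    --dhcp-configurations \
        "Key=domain-name,Values=example.internal" \
        "Key=domain-name-servers,Values=10.0.0.2"

# Associate the new set with the VPC; the old set can be deleted once
# no VPC references it. Instances pick up the new options as their
# DHCP leases renew.
aws ec2 associate-dhcp-options \
    --dhcp-options-id dopt-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0
```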
Q123. An auditor needs access to logs that record all API events on AWS. The
auditor only needs read-only access to the log files and does not need access to
each AWS account. ExamKiller has multiple AWS accounts, and the auditor
needs access to all the logs for all the accounts. What is the best way to
configure access for the auditor to view event logs from all accounts? Given
the current requirements, assume the method of "least privilege" security
design and only allow the auditor access to the minimum amount of AWS
resources possible.
Choose the correct answer:

A. Configure the CloudTrail service in the primary AWS account and configure
consolidated billing for all the secondary accounts. Then grant the auditor
access to the S3 bucket that receives the CloudTrail log files.
B. Configure the CloudTrail service in each AWS account and enable
consolidated logging inside of CloudTrail.
C. Configure the CloudTrail service in each AWS account, and have the logs
delivered to an AWS bucket on each account, while granting the auditor
permissions to the bucket via roles in the secondary accounts and a single
primary IAM account that can assume a read-only role in the secondary AWS
accounts.
D. Configure the CloudTrail service in each AWS account and have the logs
delivered to a single AWS bucket in the primary account and grant the auditor
access to that single bucket in the primary account.

Answer: D
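For answer D to work, the central bucket in the primary account needs a policy
that lets CloudTrail in every account write to it. A minimal sketch follows;
the bucket name and the two account IDs are hypothetical placeholders. The
auditor then only needs read access to this one bucket.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSCloudTrailAclCheck",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::central-audit-logs"
    },
    {
      "Sid": "AWSCloudTrailWrite",
      "Effect": "Allow",
      "Principal": {"Service": "cloudtrail.amazonaws.com"},
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::central-audit-logs/AWSLogs/111111111111/*",
        "arn:aws:s3:::central-audit-logs/AWSLogs/222222222222/*"
      ],
      "Condition": {
        "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
      }
    }
  ]
}
```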

Q124. You're working as a consultant for a company designing a new hybrid
architecture to manage part of their application infrastructure in the cloud
and part on-premise. As part of the infrastructure, they need to consistently
transfer high amounts of data. They require low latency and high consistency
in their traffic to AWS. The company is looking to keep costs as low as
possible and is willing to accept slow traffic in the event of primary
failure. Given these requirements, how would you design a hybrid architecture?
Choose the correct answer:
A. Create a dual VPN tunnel for private connectivity, which increases network
consistency and reduces latency. The dual tunnel provides a backup VPN in the
case of primary failover.
B. Provision a Direct Connect connection to an AWS region using a Direct
Connect provider.
Provision a secondary Direct Connect connection as a failover.
C. Provision a Direct Connect connection to an AWS region using a Direct
Connect partner.
Provision a VPN connection as a backup in the event of Direct Connect
connection failure.
D. Provision a Direct Connect connection which has automatic failover and
backup built into the service.

Answer: C

Explanation: The company's requirement was to have a primary connection with
durable and consistent traffic. Direct Connect meets this requirement. In the
event of a failover, a slower, less consistent private connection is
acceptable. A VPN connection meets this requirement.


Q125. Your company has just purchased some very expensive software which also
involved the addition of a unique license for it. You have been told to set
this up on an AWS EC2 instance; however, one of the problems is that the
software license has to be tied to a specific MAC address, and from your
experience with AWS you know that every time an instance is restarted it will
almost certainly lose its MAC address. What would be a possible solution to
this, given the options below?
Choose the correct answer:

A. Use a VPC with a private subnet and configure the MAC address to be tied to
that subnet.
B. Use a VPC with an elastic network interface that has a fixed MAC Address.
C. Make sure any EC2 Instance that you deploy has a static IP address that is
mapped to the MAC address.
D. Use a VPC with a private subnet for the license and a public subnet for the
EC2.

Answer: B

Q126. You have just developed a new mobile application that handles analytics
workloads on large scale datasets that are stored on Amazon Redshift.
Consequently, the application needs to access Amazon Redshift tables. Which of
the below methods would be the best, both practically and security-wise, to
access the tables?
Choose the correct answer:
A. Make a new user and generate encryption keys for that user. Create a policy
for RedShift read-only access. Embed the keys in the application.
B. Use roles that allow a web identity federated user to assume a role that allows
access to the RedShift table by providing temporary credentials.
C. Create a HSM client certificate in Redshift and authenticate using this
certificate.
D. Create a RedShift read-only access policy in IAM and embed those
credentials in the application.

Answer: B

Q127. You've been working on a CloudFront whole site CDN for ExamKiller client.
After configuring the whole site CDN with a custom CNAME and supported
HTTPS custom domain (i.e., https://example.com) you open example.com and
are receiving the following error: CloudFront wasn't able to connect to the origin.
What might be the most likely cause of this error and how would you fix it?


Choose the correct answer:



A. The HTTPS certificate is expired or missing a third party signer. To resolve this


purchase and add a new SSL certificate.


B. The Origin Protocol Policy is set to Match Viewer and HTTPS isn't configured
on the origin. 用
使
C. TCP HTTPS isn't configured on the CloudFront distribution but is configured

97 oo 魔

on the CloudFront origin.



46 ze 算狂

D. The origin on the CloudFront distribution is the wrong origin.


13 k
]号
[8 : 云计

Answer: B
30 _b
仅 信 号:

Q128. You have been given the task of designing a backup strategy for your
organization's AWS resources, with the only caveat being that you must use the
AWS Storage Gateway. Which of the following is the most correct statement
surrounding the backup strategy on the AWS Storage Gateway?
Choose the correct answer:

A. You should use Gateway-Cached Volumes. You will have quicker access to
the data, and it is a more preferred backup solution than Gateway-Stored
Volumes.
B. You should use the Gateway-Virtual Tape Library (VTL) as Gateway-Cached
Volumes and Gateway-Stored Volumes cannot be used for backups.
C. It doesn't matter whether you use Gateway-Cached Volumes or
Gateway-Stored Volumes as long as you also combine either of these solutions
with the Gateway-Virtual Tape Library (VTL).
D. You should use Gateway-Stored Volumes as it is preferable to
Gateway-Cached Volumes as a backup storage medium.

Answer: D
Q129. You're running a financial application on an EC2 instance. Data being
stored in the instance is critical and in the event of a failure of an EBS volume the
RTO and RPO are less than 1 minute. How would you architect this application
given the RTO and RPO requirements? Choose the correct answer:

A. Stripe multiple EBS volumes together with RAID 1, which provides fault
tolerance on EBS volumes.
B. Nothing is required since EBS volumes are durability backed up to additional
hardware in the same availability zone.
C. Write a script to create automated snapshots of the EBS volumes every
minute. In the event of failure have an automated script that detects failure and
launches a new volume from the most recent snapshot.
D. Stripe multiple EBS volumes together with RAID 0, which provides fault
tolerance on EBS volumes.


Answer: A



Explanation: RAID 1 provides additional fault tolerance but no increase in
performance. When higher availability and fault tolerance are required in the
event of volume failure, RAID 1 accomplishes this because the mirrored EBS
volume is already running and has the most recent data available.
使
Q130. An online gaming server whose IOPS performance you recently increased by
creating a RAID 0 configuration has started to have bottleneck problems due to
your instance bandwidth. Which of the following would be the best solution to
increase throughput? Choose the correct answer:

A. Use a RAID 1 configuration instead of RAID 0.
B. Use instance store backed instances, stripe the attached ephemeral storage
devices, and use DRBD Asynchronous Replication.
C. Move all your EC2 instances to the same availability zone.
D. Use Single Root I/O Virtualization (SR-IOV) on all the instances.

Answer: B

Q131. ExamKiller has developed a Ruby on Rails content management platform.


Currently, ExamKiller is using OpsWorks with several stacks for dev, staging, and
production to deploy and manage the application.
ExamKiller is about to implement a new feature on the CMS application using
Python instead of Ruby.
How should ExamKiller deploy this new application feature? Choose the correct
answer:
A. ExamKiller should create a new stack that contains the Python application
code and manages separate deployments of the application via the secondary
stack using the deploy lifecycle action to implement the application code.
B. ExamKiller should create a new stack that contains the Python application
code and manages separate deployments of the application via the secondary
stack.
C. ExamKiller should create a new stack that contains the Python application
code and manage separate deployments of the application via the initial stack
using the deploy lifecycle action to implement the application code.
D. ExamKiller should create a new stack that contains a new layer with the
Python code. To cut over to the new stack ExamKiller should consider using
Blue/Green deployment

Answer: D

Q132. Due to a lot of your EC2 services going offline at least once a week for
no apparent reason, your security officer has told you that you need to
tighten up the logging of all events that occur on your AWS account. He wants
to be able to access all events that occur on the account across all regions
quickly and in the simplest way possible. He also wants to make sure he is the
only person that has access to these events in the most secure way possible.
Which of the following would be the best solution to ensure his requirements
are met? Choose the correct answer:

A. Use CloudTrail to send all API calls to CloudWatch and send an email to the
security officer every time an API call is made. Make sure the emails are
encrypted.
B. Use CloudTrail to log all events to an Amazon Glacier Vault. Make sure the
vault access policy only grants access to the security officer's IP address.
C. Use CloudTrail to log all events to one S3 bucket. Make this S3 bucket only
accessible by your security officer with a bucket policy that restricts access
to his user only, and also add MFA to the policy for a further level of
security.
D. Use CloudTrail to log all events to a separate S3 bucket in each region, as
CloudTrail cannot write to a bucket in a different region. Use MFA and bucket
policies on all the different buckets.

Answer: C

Q133. You are setting up a VPN for a customer to connect his remote network to
his Amazon VPC environment. There are a number of ways to accomplish this
and to help you decide you have been given a list of the things that the customer
has specified that the network needs to be able to do. They are as follows:
- Predictable network performance
- Support for BGP peering and routing policies
- A secure IPsec VPN connection but not over the Internet Which of the following
VPN options would best satisfy the customer's requirements? Choose the correct
answer:

A. AWS Direct Connect and IPsec Hardware VPN connection over private lines
B. Software appliance-based VPN connection with IPsec
C. AWS Direct Connect with AWS VPN CloudHub
D. AWS VPN CloudHub

Answer: A

Q134. When you create a subnet, you specify the CIDR block for the subnet. The
CIDR block of a subnet can be the same as the CIDR block for the VPC (for a
single subnet in the VPC), or a subset (to enable multiple subnets). The
allowed block size is between a /28 netmask and a /16 netmask. You decide to
create a VPC with CIDR block 10.0.0.0/24. What is the maximum allowed number
of IP addresses, what is the minimum allowed number of IP addresses according
to AWS, and how many IP addresses are supported by the VPC you created?
Choose the correct answer:

A. Maximum is 256 and the minimum is 16, and the one created supports 24 IP
addresses.
B. Maximum is 65,536 and the minimum is 16, and the one created supports 256
IP addresses.
C. Maximum is 28 and the minimum is 16, and the one created supports 24 IP
addresses.
D. Maximum is 65,536 and the minimum is 24, and the one created supports 28 IP
addresses.
Answer: B
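The arithmetic behind answer B can be checked with Python's standard
`ipaddress` module: a /16 is the largest block AWS allows for a VPC, a /28 is
the smallest, and the /24 in the question sits in between.

```python
import ipaddress

# AWS VPC CIDR blocks must fall between a /16 and a /28 netmask.
largest = ipaddress.ip_network("10.0.0.0/16")
smallest = ipaddress.ip_network("10.0.0.0/28")
created = ipaddress.ip_network("10.0.0.0/24")

print(largest.num_addresses)   # 65536 -> maximum allowed
print(smallest.num_addresses)  # 16    -> minimum allowed
print(created.num_addresses)   # 256   -> the VPC in the question
```

In general a /n block contains 2^(32-n) addresses, which is where the 65,536,
16, and 256 figures come from.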
微 众
限 号

Q135. You've created a mobile application that serves data stored in an Amazon
DynamoDB table. Your primary concern is scalability of the application and being
able to handle millions of visitors and data requests. As part of your application,
the customer needs access to the data located in the DynamoDB table. Given
the application requirements, what would be the best method for designing the
application?
Choose the correct answer:

A. Let the users sign in to the app using a third party identity provider such as
Amazon, Google, or Facebook. Use the AssumeRoleWithWebIdentity API call to
assume the role containing the proper permissions to communicate with the
DynamoDB table. Write the application in JavaScript and host the JavaScript
interface in an S3 bucket.
B. Let the users sign into the app using a third party identity provider such as
Amazon, Google, or Facebook. Use the AssumeRoleWithWebIdentity API call to
assume the role containing the proper permissions to communicate with the
DynamoDB table. Write the application in a server-side language using the AWS
SDK and host the application in an S3 bucket for scalability.
C. Let the users sign into the app using a third party identity provider such as
Amazon, Google, or Facebook. Use the AssumeRoleWith API call to assume the
role containing the proper permissions to communicate with the DynamoDB table.
Write the application in JavaScript and host the JavaScript interface in an S3
bucket.
D. Configure an on-premise AD server utilizing SAML 2.0 to manage the
application users inside of the on-premise AD server and write code that
authenticates against the LD serves. Grant a role assigned to the STS token to
allow the end-user to access the required data in the DynamoDB table.

Answer: A

Explanation: AWS provides a JavaScript SDK, which allows JavaScript to
integrate with AWS services such as STS and DynamoDB. Since JavaScript is a
client-side programming language, using it and hosting the interface in an S3
bucket allows the web application to scale. Using a web identity provider, you
will not have to manage any user accounts or user databases.


Q136. A large multi-national corporation has come to you and asked if you can
provide a high availability and disaster recovery plan for their organization.
Their primary concern is not to lose any data, so they are fine with a longer
recovery time as it will presumably save on cost. Which of the following
options would be the best one for this corporation, given the concerns they
have outlined to you above?
Choose the correct answer:

A. Set up pre-configured servers using Amazon Machine Images. Use an Elastic
IP and Route 53 to quickly switch over to your new infrastructure if there are
any problems when you run your health checks.
B. Set up a number of smaller instances in a different region, which all have Auto
Scaling and Elastic Load Balancing enabled. If there is a network outage, then
these instances will auto scale up. As long as spot instances are used and the
instances are small this should remain a cost effective solution.
C. Make sure you have RDS set up as an asynchronous Multi-AZ deployment,
which automatically provisions and maintains an asynchronous "standby" replica
in a different Availability Zone.
D. Backup and restore with S3 should be considered due to the low cost of S3
storage. Back up frequently; the data can be sent to S3 using either Direct
Connect or Storage Gateway, or over the Internet.

Answer: D

Q137. You are excited to have just been employed by a large scientific institution
that is at the cutting edge of high-performance computing. Your first job is to
launch 10 Large EC2 instances which will all be used to crunch huge amounts of
data and will also need to pass this data back and forth between each other.
Which of the following would be the most efficient setup to achieve this?
Choose the correct answer:

A. Use Placement Groups and launch the 10 instances at the same time.
B. Use Placement Groups. Make sure the 10 Instances are spread evenly across
Availability Zones.
C. Use the largest EC2 instances currently available on AWS, but make sure they
are all in the same Availability Zone
D. Use the largest EC2 instances currently available on AWS, but make sure they
are all in the same region.

Answer: A

Q138. ExamKiller has a legacy application with licensing that is attached to a
single MAC address. Since an EC2 instance can receive a new MAC address when
launching new instances, how can you ensure that your EC2 instance can
maintain a single MAC address for licensing? Choose the correct answer:

A. Private subnets have static MAC addresses. Launch the EC2 instance in a
private subnet and, if required, use a NAT to serve data over the internet.
B. Configure a manual MAC address for each EC2 instance and report that to the
licensing company.
C. AWS cannot have a fixed MAC address; the best solution is to create a
dedicated VPN/VGW gateway to serve data from the legacy application.
D. Create an ENI and assign it to the EC2 instance. The ENI will have a static
MAC address and can be detached and reattached to a new instance if the
current instance becomes unavailable.

Answer: D
限 号

Explanation: MAC addresses are assigned to an ENI. EC2 allows the creation of
an ENI that maintains its MAC address for its lifetime, independent of any one
instance; in this respect it works much like an Elastic IP address.
Q139. You are setting up a video streaming service with the main components of
the set up being S3, CloudFront and Transcoder. Your video content will be
stored on AWS S3, and your first job is to upload 10 videos to S3 and make sure
they are secure before you even begin to start thinking of streaming the videos.
The 10 videos have just finished uploading to S3, so you now need to secure
them with encryption at rest. Which of the following would be the best way to do
this? Choose the correct answer:

A. Set an API flag, or check a box in the AWS Management Console, to have
data encrypted in Amazon S3.
B. Encrypt your data using AES-256. After the object is encrypted, the encryption
key you used needs to be stored on AWS CloudFront.
C. Use AWS CloudHSM appliance with both physical and logical tamper
detection and response mechanisms that trigger zeroization of the appliance.
D. Use KMS to decrypt source data and encrypt resulting output. Also, use Origin
Access Identity on your CloudFront distribution, so content is only able to be
served via CloudFront, not S3 URLs.

Answer: D

Q140. You've recently migrated an application from a customer's on-premise data


center to the AWS cloud. Currently, you're using the ELB to serve traffic to the
legacy application. The ELB is also using HTTP port 80 as the health check ping
port. The application is currently responding by returning a website on port 80
when you test the IP address directly. However, the instance is not registering as
healthy even though the appropriate amount of time has passed for the health
check to register as healthy.
How might the issue be resolved?
Choose the correct answer:

A. Change the ELB listener port from HTTP port 80 to TCP port 80 for the
instance to register as healthy.
B. Change the ELB listener port from HTTP port 80 to HTTPS port 80 for the
instance to register as healthy.
C. Change the ELB listener port from ping port 80 to HTTPS port 80 for the
instance to register as healthy.
D. Change the ELB listener port from HTTP port 80 to TCP port 443 for the
instance to register as healthy.

Answer: A
微 众
限 号

Q141. Your final task that will complete a cloud migration for a customer is to set
up an Active Directory service for him so that he can use Microsoft Active
Directory with the newly-deployed AWS services. After reading the AWS
documentation for this, you discover there are 3 options available to set up the
AWS Directory Service. You call the customer for more information about his
requirements, and he tells you he has 5,000 users on his AD service and wants
to be able to use his existing on-premises directory with AWS services. Which of
the following options for setting up the AWS Directory Service would be the most
appropriate for your customer? Choose the correct answer:

A. Any of these options are acceptable to use as long as they are configured
correctly for 10,000 customers.
B. Simple AD
C. AWS Directory Service for Microsoft Active Directory (Enterprise Edition)
D. AD Connector
Answer: D

Explanation: AD Connector is your best choice when you want to use your
existing on-premises directory with AWS services. A large AD Connector can
support up to 5,000 users.
Q142. ExamKiller is running a production load Redshift cluster for a client. The
client has an RTO objective of one hour and an RPO of one day. While
configuring the initial cluster what configuration would best meet the recovery
needs of the client for this specific Redshift cluster configuration?
Choose the correct answer:

A. Enable automatic snapshots and configure automatic snapshot copy from the
current production cluster to the disaster recovery region.
B. Create the cluster configuration and enable Redshift replication from the
cluster running in the primary region to the cluster running in the secondary
region. In the event of a disaster, change the DNS endpoint to the secondary
cluster's leader node.
C. Enable automatic snapshots on a Redshift cluster. In the event of a
disaster, a failover to the backup region is needed. Manually copy the
snapshot from the primary region to the secondary region.
D. Enable automatic snapshots on the cluster in the production region FROM the
disaster recovery region so snapshots are available in the disaster recovery
region and can be launched in the event of a disaster.

Answer: A

Explanation: Copying a snapshot from the current region to the disaster region
after a disaster occurs isn't possible; one assumes the region or AZ will be
having issues, which is why a failover is required. Enabling automatic
snapshots and automatic snapshot copy ensures that daily snapshots meeting
your RPO are available in the disaster recovery region. If the snapshots are
available in the event of a disaster, the RTO will be less than one hour, or
equal to the amount of time it takes for AWS to launch the cluster and copy
the data from the snapshot to the cluster.
Q143. You have acquired a new contract from a client to move all of his existing
infrastructure onto AWS. You notice that he is running some of his applications
using multicast, and he needs to keep it running as such when it is migrated to
AWS. You discover that multicast is not available on AWS, as you cannot
manage multiple subnets on a single interface on AWS and a subnet can only
belong to one availability zone. Which of the following would enable you to
deploy legacy applications on AWS that require multicast?
Choose the correct answer:

A. Create all the subnets on a different VPC and use VPC peering between them.
B. Create a virtual overlay network that runs on the OS level of the instance.
C. Provide Elastic Network Interfaces between the subnets.
D. All of the answers listed will help in deploying applications that require
multicast on AWS.

Answer: B

Q144. You are designing multi-region architecture and you want to send users to
a geographic location based on latency- based routing, which seems simple
enough; however, you also want to use weighted-based routing among resources
within that region. Which of the below setups would best accomplish this?
Choose the correct answer:

A. You will need to use AAAA - IPv6 addresses when you define your weighted
based record sets.
B. You will need to use complex routing (nested record sets) and ensure that
you define the latency based records first.
C. This cannot be done. You can't use different routing records together.
D. You will need to use complex routing (nested record sets) and ensure that
you define the weighted resource record sets first.

Answer: D
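Answer D's nesting can be made concrete with a Route 53 change batch: the
weighted record sets are defined first under a regional subdomain, and a
latency-routed alias record then points at that subdomain. The domain, IP
addresses, and hosted zone ID below are hypothetical placeholders.

```json
{
  "Comment": "Weighted records nested under a latency-routed alias",
  "Changes": [
    {"Action": "CREATE", "ResourceRecordSet": {
      "Name": "us-east.example.com", "Type": "A",
      "SetIdentifier": "us-east-web1", "Weight": 70,
      "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}]}},
    {"Action": "CREATE", "ResourceRecordSet": {
      "Name": "us-east.example.com", "Type": "A",
      "SetIdentifier": "us-east-web2", "Weight": 30,
      "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.11"}]}},
    {"Action": "CREATE", "ResourceRecordSet": {
      "Name": "www.example.com", "Type": "A",
      "SetIdentifier": "us-east-latency", "Region": "us-east-1",
      "AliasTarget": {
        "HostedZoneId": "Z3EXAMPLE",
        "DNSName": "us-east.example.com",
        "EvaluateTargetHealth": false}}}
  ]
}
```

A matching latency alias would be created for each other region, each pointing
at that region's own set of weighted records.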

使
Q145. ExamKiller is consulting for a company that runs their current
application entirely on-premise. However, they are expecting a big boost in
traffic tomorrow and need to figure out a way to decrease the load to handle
the scale. Unfortunately, they cannot migrate their application to AWS in the
period required. What could they do with their current on-premise application
to help offload some of the traffic and scale to meet the demand expected in
24 hours?
Choose the correct answer:

A. Deploy OpsWorks on-premise to manage the instances in order to configure
on-premise auto scaling to meet the demand.
B. Create a CloudFront CDN, enable query string forwarding and a TTL of zero
on the origin. Offload the DNS to AWS to handle CloudFront CDN traffic, but
use on-premise load balancers as the origin.
C. Duplicate half your web infrastructure on AWS, offload the DNS to Route 53,
and configure weighted-based DNS routing to send half the traffic to AWS.
D. Upload all static files to Amazon S3 and create a CloudFront distribution
serving those static files.

Answer: B

Explanation: The company cannot send or migrate any data to AWS. However,
DNS changes and a CloudFront distribution can be provisioned in enough time to
help offload some of the demand onto AWS edge locations by creating a whole
site CDN.
Q146. ExamKiller is hosting an Nginx web application. They want to use EMR to
create EMR jobs that shift through all of the web server logs and error logs to pull
statistics on click stream and errors based off of client IP address.
Given the requirements what would be the best method for collecting the log data
and analyzing it automatically?
Choose the correct answer:

A. Configure ELB access logs then create a Data Pipeline job which imports the
logs from an S3 bucket into EMR for analyzing and output the EMR data into a
new S3 bucket.
B. If the application is using HTTP, configure proxy protocol to pass the
client IP address in a new HTTP header. If the application is using TCP,
modify the application code to pull the client IP into the x-forwarded-for
header so the web servers can parse it.
C. If the application is using TCP, configure proxy protocol to pass the
client IP address in a new TCP header. If the application is using HTTP,
modify the application code to pull the client IP into the x-forwarded-for
header so the web servers can parse it.
D. Configure ELB error logs, then create a Data Pipeline job which imports the
logs from an S3 bucket into EMR for analyzing and outputs the EMR data into a
new S3 bucket.

Answer: C
13 k
]号
[8 : 云计

Q147. ExamKiller is running a web application that has a high amount of
dynamic content. ExamKiller is looking to reduce load time by implementing a
caching solution that will help reduce load times for clients requesting the
application. What is the best possible solution and why? Choose the correct
answer:

A. Create a CloudFront distribution, enable query string forwarding, set the TTL
to 0: This will keep TCP connections open from CloudFront to origin, reducing the
time it takes for TCP handshake to occur.
B. Create an ElastiCache cluster, write code that caches the correct dynamic
content and places it in front of the RDS dynamic content. This will reduce the
amount of time it takes to request the dynamic content since it is cached.
C. Offload the DNS to Route 53; Route 53 has DNS servers all around the world
and routes the request to the closest region which reduces DNS latency.
D. Create a CloudFront distribution; disable query string forwarding, set the TTL
to 0. This will keep TCP connections open from CloudFront to origin, reducing the
time it takes for TCP handshake to occur

Answer: A
Explanation: CloudFront uses KeepAlive features to keep TCP connections open
from the edge location to the CloudFront origin. This reduces the time it takes for
the TCP handshake to occur. Only the initial requests have to perform the full
TCP handshake. This will substantially reduce load time for thousands of
requests per minute or greater.
Q148. You have created a VPC with CIDR block 10.0.0.0/24, which supports 256
IP addresses. You want to now split this into two subnets, each supporting 128 IP
addresses and allowing for 123 hosts addresses. Can this be done and if so how
will the allocation of IP addresses be configured? Choose the correct answer:

A. One subnet will use CIDR block 10.0.0.0/127 (for addresses 10.0.0.0 -
10.0.0.127) and the other will use CIDR block 10.0.0.128/255 (for addresses
10.0.0.128 - 10.0.0.255).
B. One subnet will use CIDR block 10.0.0.0/25 (for addresses 10.0.0.0 -
10.0.0.127) and the other will use CIDR block 10.0.0.128/25 (for addresses
10.0.0.128 - 10.0.0.255).


C. No. This can't be done.
D. One subnet will use CIDR block 10.0.0.0/25 (for addresses 10.0.0.0 -
10.0.0.127) and the other will use CIDR block 10.0.1.0/25 (for addresses
10.0.1.0 - 10.0.1.127).

Answer: B
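The split in answer B can be verified with Python's standard `ipaddress`
module: one /24 divides evenly into two /25 subnets of 128 addresses each.

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/24")

# Splitting one /24 yields two /25 subnets.
subnets = list(vpc.subnets(new_prefix=25))

for s in subnets:
    print(s, s.num_addresses)
# 10.0.0.0/25 128
# 10.0.0.128/25 128
```

Each /25 holds 128 addresses; AWS reserves the first four addresses and the
last address in every subnet, leaving 128 - 5 = 123 usable host addresses, as
stated in the question.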
使

97 oo 魔

Q149. A new client may use your company to move all their existing Data Center
applications and infrastructure to AWS. This is going to be a huge contract
for your company, and you have been handed the entire contract and need to
provide an initial scope to this possible new client. One of the things you
notice concerning the existing infrastructure is that it has a small number of
legacy applications that you are almost certain will not work on AWS. Which of
the following would be the best strategy to employ regarding the migration of
these legacy applications? Choose the correct answer:

A. Move the legacy applications onto AWS first, before you build any
infrastructure. There is sure to be an AWS Machine Image that can run this
legacy application.
B. Convince the client to look for another solution by de-commissioning these
applications and seeking out new ones that will run on AWS.
C. Create a hybrid cloud by configuring a VPN tunnel to the on-premises location
of the Data Center.
D. Create two VPCs. One containing all the legacy applications and the other
containing all the other applications. Make a connection between them with VPC
peering.

Answer: C
Q150. You have been told by your security officer that you need to give a
presentation on encryption on data at rest on AWS to 50 of your co-workers. You
feel like you understand this extremely well regarding data stored on AWS S3 so
you aren't too concerned, but you begin to panic a little when you realize you also
probably need to talk about encryption on data stored on your databases, namely
Amazon RDS. Regarding Amazon RDS encryption, which of the following
statements is most accurate?
Choose the correct answer:

A. Encryption can be enabled on RDS instances to encrypt the underlying
storage, and this will by default also encrypt snapshots as they are created.
However, some additional configuration needs to be made on the client side for
this to work.
B. Encryption can be enabled on RDS instances to encrypt the underlying
storage, and this will by default also encrypt snapshots as they are created. No
additional configuration needs to be made on the client side for this to work.
C. Encryption cannot be enabled on RDS instances unless the keys are not
managed by KMS.
D. Encryption can be enabled on RDS instances to encrypt the underlying
storage, but you cannot encrypt snapshots as they are created.

Answer: B
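A minimal sketch of how answer B plays out in practice: encryption is enabled at instance creation time with a single flag, and snapshots of the encrypted instance are then encrypted automatically (all identifiers and values below are illustrative, not from the question):

```shell
# Create an RDS instance with storage encryption enabled; snapshots
# taken of this instance will be encrypted with the same KMS key,
# with no client-side configuration required.
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.m4.large \
    --engine mysql \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password 'example-password' \
    --storage-encrypted
```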

Q151. A new startup company wants to set up a highly scalable application on
AWS with an initial deployment of 100 EC2 instances. Your CIO has given you
the following scope for the job:
- All the Amazon EC2 instances must have a private IP address.
- You need to use Elastic Beanstalk to deploy this infrastructure.
Which of the following criteria is needed to ensure the scope is followed as
outlined above? Choose the correct answer:

A. You should create routing rules which will route all outbound traffic from the
EC2 instances through NAT.
B. All of the listed criteria are needed to ensure the scope is followed.
C. You should create a public and private subnet for the VPC in each Availability
Zone.
D. You should route all outbound traffic from EC2 instances through NAT.

Answer: B

Q152. After the government organization you work for suffers its third DDoS
attack of the year, you have been handed one part of a strategy to try to stop
this from happening again. You have been told that your job is to minimize the
attack surface area, and you have a vague idea of some of the things you need to
put in place to achieve this. Which of the following is NOT one of the ways to
minimize the attack surface area as a DDoS mitigation strategy? Choose the
correct answer:

A. Configure services such as Elastic Load Balancing and Auto Scaling to
automatically scale.
B. Eliminate non-critical Internet entry points.
C. Separate end user traffic from management traffic.
D. Reduce the number of necessary Internet entry points.

Answer: A

Q153. ExamKiller has three consolidated billing accounts: dev, staging, and
production. The dev account has purchased two reserved instances with instance
type m4.large in Availability Zone 1a. However, no instances are running on the
dev account, but an m4.large is running in the staging account inside Availability
Zone 1a. Which account receives the reserved pricing? Choose the correct
answer:

A. No account will receive the reservation pricing because the reservation was
purchased on the dev account and no instances that match the reservation are
running in the dev account.
B. The reserved instance pricing will still be applied because the staging account
is running an instance that matches the reservation.
C. All accounts running the m4.large will receive the pricing even if there is only
one reserved instance purchase.
D. Only the primary account (the consolidated billing primary account) will
receive discounted pricing if the instance is running in the primary billing account.

Answer: B

Explanation: Like volume discounts, reserved instances work across all accounts
that are connected to consolidated billing. Since billing is at the payer level,
consolidated billing does not care which account purchases or uses a reserved
instance. This is a consideration if ExamKiller wants to host customer accounts
as part of their consolidated billing.
Q154. You are the administrator for a new startup company which has a
production account and a development account on AWS. Up until this point, no
one has had access to the production account except yourself. There are 20
people on the development account who now need various levels of access
provided to them on the production account. 10 of them need read-only access to
all resources on the production account, 5 of them need read/write access to
EC2 resources, and the remaining 5 only need read-only access to S3 buckets.
Which of the following options would be the best way, both practically and
security-wise, to accomplish this task? Choose the correct answer:

A. Create encryption keys for each of the resources that need access and
provide those keys to each user depending on the access required.
B. Copy the 20 users' IAM accounts from the development account to the
production account, then change the access levels for each user on the
production account.
C. Create 3 new users on the production account with the various levels of
permissions needed. Give each of the 20 users the login for whichever of the 3
accounts they need, depending on the level of access required.
D. Create 3 roles in the production account with a different policy for each of the
access levels needed. Add permissions for each IAM user on the development
account to assume the appropriate role.

Answer: D

Q155. You're consulting for a company that is migrating its legacy application to
the AWS cloud. To provide high availability, you've decided to implement the
Elastic Load Balancing and Auto Scaling services to serve traffic to this legacy
application. The legacy application is not a standard HTTP web application but a
custom application with custom code that is run internally for the employees of
the company you are consulting. The ports required to be open are port 80 and
port 8080. What listener configuration would you create?
Choose the correct answer:

A. Configure the load balancer with the following ports: HTTP:80 and HTTP:8080
and the instance protocol to HTTP:80 and HTTP:8080
B. Configure the load balancer with the following ports: HTTP:80 and HTTP:8080
and the instance protocol to HTTPS:80 and HTTPS:8080
C. Configure the load balancer with the following ports: HTTP:80 and HTTP:8080
and the instance protocol to TCP:80 and TCP:8080
D. Configure the load balancer with the following ports: TCP:80 and TCP:8080
and the instance protocol to TCP:80 and TCP:8080

Answer: D

Explanation: The ELB will not work correctly if it uses Layer 7 HTTP and the
application does not respond with standard HTTP response codes. To support
this type of application, TCP listeners are required.
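With the classic load balancer CLI, answer D's listener configuration might be created as follows (the load balancer name and subnet ID are illustrative placeholders):

```shell
# Classic load balancer with two TCP listeners, forwarding
# TCP:80 -> TCP:80 and TCP:8080 -> TCP:8080 on the instances.
aws elb create-load-balancer \
    --load-balancer-name legacy-app-lb \
    --listeners "Protocol=TCP,LoadBalancerPort=80,InstanceProtocol=TCP,InstancePort=80" \
                "Protocol=TCP,LoadBalancerPort=8080,InstanceProtocol=TCP,InstancePort=8080" \
    --subnets subnet-12345678
```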
Q156. ExamKiller has many employees who need to run internal applications
that access the company's AWS resources. These employees already have user
credentials in the company's current identity authentication system, which does
not support SAML 2.0. The company does not want to create a separate IAM
user for each company employee.
How should the SSO setup be designed?
Choose the 2 correct answers:
A. Create a custom identity broker application which authenticates the
employees using the existing system, uses the GetFederationToken API call and
passes a permission policy to gain temporary access credentials from STS.
B. Create a custom identity broker application which authenticates employees
using the existing system and uses the AssumeRole API call to gain temporary,
role-based access to AWS.
C. Create an IAM user to share based off of employee roles in the company.
D. Configure an AD server which synchronizes from the company's current
identity provider and configure SAML-based single sign-on, which will then use
the AssumeRoleWithSAML API call to generate credentials for the employees.

Answer: A,B

Q157. After configuring a whole-site CDN on CloudFront you receive the
following error: "This distribution is not configured to allow the HTTP request
method that was used for this request. The distribution supports only cachable
requests."
What is the most likely cause of this?
Choose the correct answer:

A. Allowed HTTP methods on that specific origin are only accepting GET, HEAD
B. The CloudFront distribution is configured to the wrong origin
C. Allowed HTTP methods on that specific origin are only accepting GET, HEAD,
OPTIONS
D. Allowed HTTP methods on that specific origin are only accepting GET, HEAD,
OPTIONS, PUT, POST, PATCH, DELETE

Answer: A

Explanation: CloudFront always caches responses to GET and HEAD requests,
and you can also configure CloudFront to cache responses to OPTIONS
requests. Responses to requests which use other methods are not cached by
CloudFront.
Q158. ExamKiller is running a data application on-premises that requires large
amounts of data to be transferred to a VPC containing EC2 instances in an AWS
region. ExamKiller is concerned about the total overall transfer costs required for
this application and may not deploy a hybrid environment for the customer-facing
part of the application to run in a VPC. Given that the data transferred to AWS is
new data every time, what suggestion could you make to ExamKiller to help
reduce the overall cost of data transfer to AWS? Choose the correct answer:

A. Suggest provisioning a Direct Connect connection between the on-premises
data center and the AWS region.
B. Provision a VPN connection between the on-premises data center and the AWS
region using the VPN section of a VPC.
C. Suggest using AWS Import/Export to transfer the TBs of data while
synchronizing the new data as it arrives.
D. Suggest leaving the data required for the application on-premises and use a
VPN to query the on-premises database data from EC2 when required.

Answer: A

Q159. You are running an online gaming server, one of whose requirements is
100,000 IOPS of write performance on its EBS volumes. Given that EBS volumes
can only provision a maximum of 20,000 IOPS, which of the following would be a
reasonable solution if instance bandwidth is not an issue?
Choose the correct answer:

A. Create a Placement Group with five 20,000 IOPS EBS volumes.
B. Use ephemeral storage, which gives a much larger IOPS write performance.
C. Use Auto Scaling with spot instances to increase the IOPS write performance
when required.
D. Create a RAID 0 configuration from five 20,000 IOPS EBS volumes.

Answer: D
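Striping five provisioned-IOPS volumes into one logical device, as answer D describes, can be sketched on Linux with mdadm (device names are illustrative and vary by instance type):

```shell
# Stripe five attached EBS volumes into a single RAID 0 array;
# aggregate write performance is roughly 5 x 20,000 IOPS.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=5 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj

sudo mkfs.ext4 /dev/md0    # create a filesystem on the array
sudo mount /dev/md0 /data  # mount it for application use
```

Note that RAID 0 offers no redundancy: losing any single volume loses the whole array, so this trade-off only suits workloads where performance outweighs durability.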

Q160. DDoS attacks that happen at the application layer commonly target web
applications with lower volumes of traffic compared to infrastructure attacks. To
mitigate these types of attacks, you will probably want to include a WAF (Web
Application Firewall) as part of your infrastructure. To inspect all HTTP requests,
WAFs sit in-line with your application traffic. Unfortunately, this creates a scenario
where WAFs can become a point of failure or bottleneck. To mitigate this problem,
you need the ability to run multiple WAFs on demand during traffic spikes. This
type of scaling for WAF is done via a "WAF sandwich." Which of the following
statements best describes what a "WAF sandwich" is?
Choose the correct answer:

A. The EC2 instance running your WAF software is placed between your public
subnets and your private subnets.
B. The EC2 instance running your WAF software is included in an Auto Scaling
group and placed in between two Elastic load balancers.
C. The EC2 instance running your WAF software is placed between your public
subnets and your Internet Gateway.
D. The EC2 instance running your WAF software is placed between your private
subnets and any NATed connections to the Internet.

Answer: B

Q161. In an attempt to cut costs, your accounts manager tells you that he thinks
the company will save money if it starts to use consolidated billing. He also wants
the billing set up in such a way that it is relatively simple and gives insight into the
environment regarding utilization of resources. Which of the following
consolidated billing setups would satisfy your account manager's needs?
Choose the 2 correct answers:

A. Use one master account and many sub accounts.
B. Use one master account and no sub accounts.
C. Use roles for IAM account simplicity across multiple AWS linked accounts.
D. Use one account but multiple VPCs to break out environments.

Answer: A,D

Q162. ExamKiller is designing a high availability solution for a customer. The
customer's requirements are that their application needs to be able to handle an
unexpected amount of load and allow site visitors to read data from a DynamoDB
table, which contains the results of an online polling system. Given this
information, what would be the best and most cost-effective method for
architecting and developing this application? Choose the correct answer:

A. Create a CloudFront distribution that serves the HTML web page, but send the
visitors to an Auto Scaling ELB application pointing to EC2 instances.
B. Deploy an Auto Scaling application with an Elastic Load Balancer pointing to
EC2 instances that use a server-side SDK to communicate with the DynamoDB
table.
C. Use the JavaScript SDK and build a static HTML page, hosted inside of an
Amazon S3 bucket; use CloudFront and Route 53 to serve the website, which
uses client-side JavaScript to communicate with DynamoDB. Enable DynamoDB
auto scaling and DynamoDB Accelerator (DAX) for the DynamoDB table to
handle sudden bursts in read traffic.
D. Create a Lambda function, which pulls the most recent DynamoDB polling
results and creates a custom HTML page, inside of Amazon S3 and use
CloudFront and Route 53 to serve the static website.

Answer: C

Explanation: DynamoDB can automatically scale the read capacity to handle
increased load, and DAX provides an in-memory cache that offloads the reads.
A static HTML page hosted in an S3 bucket is a more cost-efficient option than
serving the web page from EC2 instances. The Lambda function answer would
work, but adds cost and complexity that is not necessary.
Q163. You want to set up a public website on AWS. The things that you require
are as follows:
- You want the database and the application server running on AWS VPC.
- You want the database to be able to connect to the Internet, specifically for any
patch upgrades.
- You do not want to receive any incoming requests from the Internet to the
database.
Which of the following solutions would be the best to satisfy all the above
requirements for your planned public website on AWS?
Choose the correct answer:

A. Set up the database in a local data center and use a private gateway to
connect the application to the database.
B. Set up the public website on a public subnet and set up the database in a
private subnet which connects to the Internet via a NAT instance.
C. Set up the database in a private subnet with a security group which only
allows outbound traffic.
D. Set up the database in a public subnet with a security group which only allows
inbound traffic.

Answer: B


Q164. ExamKiller needs to configure a NAT gateway for its internal AWS
applications to be able to download patches and package software. Currently,
they are running a NAT instance that uses a floating IP scripting configuration to
create fault tolerance for the NAT. The NAT gateway needs to be built with fault
tolerance in mind to meet the needs of ExamKiller. What is the best way to
configure the NAT gateway with fault tolerance? Choose the correct answer:

A. Create one NAT gateway in a public subnet; create a route from the public
subnet to the NAT gateway.
B. Create two NAT gateways in a public subnet; create a route from the private
subnet to each NAT gateway for fault tolerance.
C. Create two NAT gateways in a public subnet; create a route from the private
subnet to one NAT gateway for fault tolerance.
D. Create one NAT gateway in a public subnet; create a route from the private
subnet to the NAT gateway.

Answer: D

Explanation: NAT Gateways already have built-in fault tolerance. From the docs:
"Each NAT gateway is created in a specific Availability Zone and implemented
with redundancy in that zone." Granted, fault tolerance != redundancy, but in this
case the redundancy is a building block for creating a fault tolerant gateway. So
creating multiple NAT gateways in the same subnet doesn't make sense. Instead,
you might argue that you need to create multiple gateways in different AZs, so
that if one AZ goes down, you have a backup - but none of the possible answers
provide this solution so you need to pick the best answer out of the available
options.
Q165. You're working as a consultant for a company that has a three-tier
application. The application layer of this architecture sends over 20 Gbps of data
during peak hours to and from Amazon S3. Currently, you're running two NAT
gateways in two subnets to transfer the data from your private application layer
to Amazon S3. You will also need to ensure that the instances receive software
patches from a third-party repository.
What architecture changes should be made, if any?
Choose the correct answer:

A. Remove the NAT gateway and create a VPC S3 endpoint which allows for
higher bandwidth throughput as well as tighter security.
B. Keep the NAT gateway and create a VPC S3 endpoint which allows for higher
bandwidth throughput as well as tighter security.
C. NAT gateways support 10 Gbps and two are running: no changes are required
to improve this architecture.
D. NAT gateways support 10 Gbps and two are running: add a third in a third
subnet to allow for any increase in demand.

Answer: B

Explanation: S3 endpoints use the private AWS network for data transfer. These
endpoints do not have the same bandwidth limitations as NAT gateways, since
the transfer is done entirely through the internal network; this is also an
additional layer of security. To ensure that the instances can reach a third-party
repository, a NAT gateway is still required for communication over the internet.

Q166. The company you work for has a huge amount of infrastructure built on
AWS. However, there have been some concerns recently about the security of
this infrastructure, and an external auditor has been given the task of running a
thorough check of all of your company's AWS assets. The auditor will be in the
USA while your company's infrastructure resides in the Asia Pacific (Sydney)
region on AWS. Initially, he needs to check all of your VPC assets, specifically
security groups and NACLs. You have been assigned the task of providing the
auditor with a login to be able to do this. Which of the following would be the best
and most secure solution to provide the auditor with so he can begin his initial
investigations? Choose the correct answer:

A. Create an IAM user who will have read-only access to your AWS VPC
infrastructure and provide the auditor with those credentials.
B. Create an IAM user tied to an administrator role. Also provide an additional
level of security with MFA.
C. Create an IAM user with full VPC access but set a condition that will not allow
him to modify anything if the request is from any IP other than his own.
D. Give him root access to your AWS Infrastructure, because he is an auditor he
will need access to every service.

Answer: A
Q167. You've configured an AWS VPC and several EC2 instances running
MongoDB with an internal IP address of 10.0.2.1. To simplify failover and
connectivity to the instance, you create an internal Route 53 A record called
mongodb.example.com. You have a VPN connection from on-premise to your
VPC and are attempting to connect an on-premise VMWare instance to
mongodb.example.com, but the DNS will not resolve.
Given the current design, why is the internal DNS record not resolving
on-premise? Choose the correct answer:

A. Route 53 internal DNS records only work if the DNS request originates from
within the VPC.
B. The on-premise VM instance needs to have an /etc/resolv.conf record pointing
to the Route53 internal DNS server.
C. A public Route 53 resource record was created using the private IP address
instead of an internal DNS record.
D. The VPN is not configured to use BGP dynamic routing, and a static route is
not configured from the on-premise subnet to the VPC subnet with the MongoDB
server.

Answer: A

Explanation: Internal Route 53 resource record sets only work if the originating
request is made from within the VPC. Internal Route 53 record sets cannot be
extended to on-premise usage.



Q168. You are setting up a website for a small startup company. You have built
them what you believe to be a great solution on AWS for the money they wanted
to spend. It is a very image-intensive site, so you have utilized CloudFront to help
with the serving of images. The client complains to you, however, that he requires
a custom domain name when serving up this content that should work with
HTTPS from CloudFront; rather than being provided with an xxxx.cloudfront.net
domain, he wants a custom domain such as ssuc.com. What would you need to
do to accomplish what the customer is asking?
Choose the correct answer:

A. You must provision and configure your own SSL certificate in IAM and
associate it with your CloudFront distribution.
B. You must provision and configure an ALIAS in Route 53 and associate it with
your CloudFront distribution.
C. You must provision and configure your own SSL certificate in Route 53 and
associate it with your CloudFront distribution.
D. You must create an Origin Access Identity (OAI) for CloudFront and grant
access to the objects in your S3 bucket where the images are stored.

Answer: A
Explanation: To serve content over HTTPS using a custom (alternate) domain
name with CloudFront, you must provision your own SSL/TLS certificate and
associate it with the distribution; custom certificates for CloudFront can be
uploaded to IAM or issued through AWS Certificate Manager.

Explanation: A user can increase the desired capacity of the Auto Scaling group,
and Auto Scaling will launch new instances to meet the new capacity. The newly
launched instances will be registered with the ELB if the Auto Scaling group is
configured with an ELB. If the user decreases the minimum size, instances will be
removed from the Auto Scaling group. Increasing the maximum size will not add
instances; it only sets the maximum instance cap.


Reference:
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-manual-scali
ng.html

Q505. You have been charged with investigating cloud security options for your
company in light of recent suspected threats. You are aware that certain AWS
services can help mitigate DDoS attacks, and you are specifically looking for a
service that provides protection from counterfeit requests (Layer 7) or SYN floods
(Layer 3). Which services provide this protection? (Choose 2 answers)

A. Elastic Load Balancing
B. Amazon API Gateway
C. Network Access Control Lists
D. CloudFront with AWS Shield

Answer: B,D

Explanation: Amazon API Gateway automatically protects your backend systems
from distributed denial-of-service (DDoS) attacks, whether attacked with
counterfeit requests (Layer 7) or SYN floods (Layer 3). Additional reading is
available here (https://aws.amazon.com/api-gateway/faqs/).
AWS Shield Advanced includes intelligent DDoS attack detection and mitigation
not only for network layer (Layer 3) and transport layer (Layer 4) attacks, but
also for application layer (Layer 7) attacks.
NACLs do not provide DDoS mitigation. They are used to control ingress and
egress into a subnet based on port and source IP range. Route 53 can protect
against DNS-based DDoS attacks, but does not protect entire layers. Elastic
Load Balancing can protect EC2 instances specifically, but not other AWS
resources.
Reference:
http://docs.aws.amazon.com/waf/latest/developerguide/ddos-overview.html


Q506. You are utilizing the AWS CLI to link a CloudWatch alarm to an Auto
Scaling policy. The command you are using is mon-put-metric-alarm. You wish to
monitor average CPU utilization over a period of 60 seconds with an evaluation
period of 3 intervals, alarming at a level of 80% or greater. What is the proper
syntax for your command? (Fill in the blank) mon-put-metric-alarm --alarm-name
web-scale-up --metric-name CPUUtilization __________

A. --period 80 --threshold 60 --evaluation-periods 3 --unit Average
B. --period 80 --threshold 60 --evaluation-periods 3 --unit Percent
C. --period 60 --threshold 80 --evaluation-periods 3 --unit Percent
D. --period 60 --threshold 80 --evaluation-periods 3 --unit Average

Answer: D
限 号

Explanation: When this operation creates an alarm, the alarm state is
immediately set to INSUFFICIENT_DATA. The alarm is then evaluated and its
StateValue is set appropriately. Any actions associated with the StateValue are
then executed. The syntax is as follows:
mon-put-metric-alarm AlarmName --comparison-operator value
--evaluation-periods value --metric-name value --namespace value --period value
[--statistic value] [--extendedstatistic value] --threshold value [--actions-enabled
value] [--alarm-actions value[,value...] ] [--alarm-description value] [--dimensions
"key1=value1,key2=value2..."] [--ok-actions value[,value...] ] [--unit value]
[--insufficient-data-actions value[,value...]] [Common Options] Reference:
http://docs.aws.amazon.com/autoscaling/latest/userguide/WhatIsAutoScaling.ht
ml#access- as
Q507. Your engineers are concerned about application availability during in-place
updates to a live Elastic Beanstalk stack. You advise them to consider a
blue/green deployment and then list the necessary steps to carry out this
deployment. Which steps are valid to include? (Choose 3 answers)

A. From the new environment's dashboard, choose Restart Environment.
B. Clone your current environment.
C. Deploy the new version of the application.
D. From the new environment's dashboard, swap environment URLs and click
Swap.

Answer: B,C,D







Explanation: Blue/green deployments resolve application unavailability during
updates. To perform a blue/green deployment:
1. Open the Elastic Beanstalk console.
2. Clone your current environment, or launch a new environment running the
desired configuration.
3. Deploy the new application version to the new environment.
4. Test the new version on the new environment.
5. From the new environment's dashboard, choose Actions and then choose
Swap Environment URLs.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESw
ap.html
Q508. You are designing a new VPC for the HR team in your company to run
their workload. This VPC should be able to host around 200 EC2 instances, and
the workload is not expected to exceed 240 EC2 instances at any time. As the
HR team is broken into 8 smaller sub-groups, you would like to create a private
subnet for each group, in addition to one public subnet for any public-facing
requirements. This VPC will be peered with your account management VPC with
the address block of 172.27.1.0/24 for common or shared resources. Which
CIDR combination is valid given these requirements?

A. Use 172.27.0.0/24 for the VPC and /28 netmask for all subnets in the VPC.
B. Use 172.27.0.0/24 for the VPC and /27 netmask for all subnets in the VPC.
C. Use 172.27.0.0/23 for the VPC and /27 netmask for all subnets in the VPC.
D. Use 172.27.0.0/16 for the VPC and /24 netmask for all subnets in the VPC.

Answer: A

Explanation: Given the requirements, you need to create 9 subnets in this VPC.
- Using 172.27.0.0/24 for the VPC and a /27 netmask for all subnets is incorrect
because each /27 reserves 32 addresses, so you would not be able to create 9
subnets in your 256-address VPC.
- Using 172.27.0.0/24 for the VPC and a /28 netmask for all subnets is correct,
although it doesn't consume all of the 256 addresses in your VPC.
- Using 172.27.0.0/16 for the VPC and a /24 netmask for all subnets is incorrect
because you need to peer the VPC with the management VPC and the /16 range
will create a conflict.
- Using 172.27.0.0/23 for the VPC and a /27 netmask for all subnets is incorrect
because it will conflict with the management VPC range.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
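The CIDR arithmetic in this explanation can be verified with Python's standard ipaddress module:

```python
import ipaddress

vpc = ipaddress.ip_network("172.27.0.0/24")

# A /27 carves the /24 into only 8 subnets, one short of the 9 required.
print(len(list(vpc.subnets(new_prefix=27))))  # 8

# A /28 yields 16 subnets of 16 addresses each, so 9 of them fit easily.
subnets = list(vpc.subnets(new_prefix=28))
print(len(subnets))  # 16
print(subnets[0])    # 172.27.0.0/28

# The management VPC (172.27.1.0/24) does not overlap the /24 choice,
# but it would overlap a /23 (or /16) allocation.
mgmt = ipaddress.ip_network("172.27.1.0/24")
print(vpc.overlaps(mgmt))                                    # False
print(ipaddress.ip_network("172.27.0.0/23").overlaps(mgmt))  # True
```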


Q509. An international web journal hosted at AWS handles requests for technical
publications. The front-end tier is hosted in a VPC with multiple Availability
Zones, Auto Scaling groups, and cross-zone load balancing. RDS hosts the
application database responsible for indexing and searching the content, and
technical journals are served from S3 buckets. At certain times, specific technical
journals become quite popular, causing viewing delays. What additional
components could be utilized in your architecture to help improve performance?
(Choose 2 answers)

A. Utilize the SQS service to speed up response to requests
B. Utilize ElastiCache with lazy loading to cache popular searches and indexes
C. Add cross-region replication to the S3 buckets hosting your journals; this will
speed up content delivery
D. Deploy CloudFront to help with content delivery to users in remote geographic
locations

Answer: B,D

Explanation: Lazy loading keeps the cache up to date based only on requests.
This avoids filling up the cache needlessly, but if data is only written to the cache
when there is a cache miss, data in the cache can become stale since there are
no updates to the cache when data is changed in the database. This issue is
addressed by using lazy loading in conjunction with write through, which adds
data or updates data in the cache whenever data is written to the database. You
may also use CloudFront to optimize your caching. By default, CloudFront
doesn't consider headers when caching your objects in edge locations. If your
origin returns two objects and they differ only by the values in the request
headers, CloudFront caches only one version of the object.
Reference:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/Strategies.ht
ml
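The lazy-loading pattern described above can be sketched in a few lines. This is an illustrative example only: a plain dictionary stands in for the ElastiCache cluster, and query_database is a hypothetical stand-in for the expensive RDS search query.

```python
# Lazy loading: check the cache first; on a miss, read from the
# database and populate the cache so subsequent requests are fast.
cache = {}  # stands in for an ElastiCache (Memcached/Redis) cluster


def query_database(journal_id):
    # Hypothetical placeholder for the RDS index/search query.
    return f"index-data-for-{journal_id}"


def get_journal_index(journal_id):
    if journal_id in cache:             # cache hit: no database round trip
        return cache[journal_id]
    value = query_database(journal_id)  # cache miss: go to the database
    cache[journal_id] = value           # populate the cache for next time
    return value


def update_journal_index(journal_id, value):
    # Write-through: update the database AND the cache together,
    # so cached entries never go stale after a write.
    # (the database write would happen here)
    cache[journal_id] = value
```

Combining the two functions gives the lazy-loading-plus-write-through strategy the explanation recommends: reads fill the cache on demand, and writes keep it fresh.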
Q510. You are running a group of web servers behind a load balancer in a VPC. The health check configuration of your target hosts is defined as below:
Healthy threshold 2
Unhealthy threshold 2
Timeout 25
Interval 30
Success codes 200
With this configuration, what is the minimum time taken by the load balancer to
mark a failed instance (not responding) as OutOfService and stop sending traffic
to it?

A. 55 seconds minimum
B. 110 seconds minimum
C. 50 seconds minimum
D. 60 seconds minimum

Answer: A



Explanation: The load balancer sends a request to each registered instance at the ping port and ping path every Interval seconds. An instance is considered healthy if it returns a 200 response code within the health check interval. If the health checks exceed the threshold for consecutive failed responses, the load balancer takes the instance out of service. In this example, the load balancer sends a health check request every 30 seconds and the timeout is 25 seconds. The failed instance takes 25 seconds to fail the first health check; the load balancer sends the next request 30 seconds after the previous attempt started, and after another 25 seconds that second attempt is also marked failed and the instance is taken out of service. So the correct answer for this question is 55 seconds at minimum. It could take longer than that, for example if the instance fails in the middle or at the beginning of a health check interval.
Reference:
http://docs.aws.amazon.com/elasticloadbalancing/2012-06-01/APIReference/API_HealthCheck.html
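The arithmetic generalizes to any health check configuration: the last failed check begins (threshold - 1) intervals after the first one, and fails one timeout later. A minimal sketch of that calculation (the formula only, not an AWS API call):

```python
def min_time_to_out_of_service(unhealthy_threshold, interval, timeout):
    """Minimum seconds before an unresponsive instance is marked
    OutOfService: the final failed check starts at
    (unhealthy_threshold - 1) * interval and fails after `timeout`."""
    return (unhealthy_threshold - 1) * interval + timeout


# Values from the question: threshold 2, interval 30s, timeout 25s.
print(min_time_to_out_of_service(2, 30, 25))  # 55
```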
Q511. You are creating a custom Virtual Private Cloud (VPC) which will host a
mix of public and private instances. An EC2 instance has been deployed to one
of the public subnets in this VPC. What are the configurations that have to be
implemented to make this instance accessible from the internet? (Choose 3
answers)

A. Create new record set and custom domain name for the new instance in
Route 53
B. Make sure the instance has a public IP address
C. Edit security group to allow traffic to and from the internet on required ports
D. Attach an Internet gateway to the VPC and add default route to the IGW in
public subnet's route table

Answer: B,C,D

Explanation: Public and private subnets, with the exception of the default VPC, do not automatically have Internet access enabled. To enable access to or from the Internet for instances in a VPC subnet, you must do the following:
- Attach an Internet gateway to your VPC.
- Ensure that your subnet's route table points to the Internet gateway.
- Ensure that instances in your subnet have a globally unique IP address (public IPv4 address, Elastic IP address, or IPv6 address).
- Ensure that your network access control and security group rules allow the relevant traffic to flow to and from your instance.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html
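The Internet gateway and default route from the steps above can be expressed in a CloudFormation template. This is an illustrative sketch: the logical IDs and the MyVPC/PublicRouteTable references are assumptions standing in for your own resources.

```json
{
  "Resources": {
    "InternetGateway": { "Type": "AWS::EC2::InternetGateway" },
    "AttachGateway": {
      "Type": "AWS::EC2::VPCGatewayAttachment",
      "Properties": {
        "VpcId": { "Ref": "MyVPC" },
        "InternetGatewayId": { "Ref": "InternetGateway" }
      }
    },
    "DefaultRoute": {
      "Type": "AWS::EC2::Route",
      "DependsOn": "AttachGateway",
      "Properties": {
        "RouteTableId": { "Ref": "PublicRouteTable" },
        "DestinationCidrBlock": "0.0.0.0/0",
        "GatewayId": { "Ref": "InternetGateway" }
      }
    }
  }
}
```

The security group rules and public IP assignment (answers B and C) are configured on the instance and its security group separately.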


Q512. You are architecting a hybrid cloud solution that will require continuous connectivity between on-premises servers and instances on AWS. You found that the connectivity between on-premises and AWS does NOT require high bandwidth for now, so you decided to go with resilient VPN connectivity. Which of the following services are part of establishing a resilient VPN connection between on-premises networks and AWS? (Choose 3 answers)

A. Virtual private gateway
B. Customer gateway
C. VPN connection
D. Direct Connect

Answer: A,B,C

Explanation: A customer gateway (CGW) is the anchor on the customer side of the connection. It can be a physical or software appliance. The anchor on the AWS side of the VPN connection is called a virtual private gateway (VGW). There are two lines between the customer gateway and virtual private gateway because the VPN connection consists of two tunnels in case of device failure. When you configure your customer gateway, it's important that you configure both tunnels. The VPN connection connects the VGW attached to a certain VPC with the CGW. Public and private virtual interfaces (VIFs) are part of configuring a Direct Connect connection between on-premises and AWS.
Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/NetworkAdminGuide/Introduction.html#CustomerGateway
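The three components (CGW, VGW, VPN connection) can be declared together in CloudFormation. This sketch is illustrative only: the public IP, BGP ASN, and the MyVPC reference are placeholder assumptions. Note that the single VPNConnection resource automatically provides the two redundant tunnels mentioned above.

```json
{
  "Resources": {
    "CustomerGateway": {
      "Type": "AWS::EC2::CustomerGateway",
      "Properties": {
        "Type": "ipsec.1",
        "BgpAsn": 65000,
        "IpAddress": "203.0.113.10"
      }
    },
    "VPNGateway": {
      "Type": "AWS::EC2::VPNGateway",
      "Properties": { "Type": "ipsec.1" }
    },
    "AttachVpnGateway": {
      "Type": "AWS::EC2::VPCGatewayAttachment",
      "Properties": {
        "VpcId": { "Ref": "MyVPC" },
        "VpnGatewayId": { "Ref": "VPNGateway" }
      }
    },
    "VPNConnection": {
      "Type": "AWS::EC2::VPNConnection",
      "Properties": {
        "Type": "ipsec.1",
        "CustomerGatewayId": { "Ref": "CustomerGateway" },
        "VpnGatewayId": { "Ref": "VPNGateway" }
      }
    }
  }
}
```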
Q513. Host Manager (HM) is a software component that runs on each EC2
instance provisioned as part of an Elastic Beanstalk environment. Which of the
following actions is a responsibility of Host Manager? (Choose 3 answers)

A. Creating instance-level events
B. Rotating application log files
C. Patching instance components
D. Deploying the application server

Answer: A,B,C

Explanation: A host manager (HM) runs on each Amazon EC2 server instance. The host manager is responsible for:
- Deploying the application
- Aggregating events and metrics for retrieval via the console, the API, or the command line
- Generating instance-level events
- Monitoring the application log files for critical errors
- Monitoring the application server
- Patching instance components
- Rotating your application's log files and publishing them to Amazon S3
The host manager reports metrics, errors and events, and server instance status, which are available via the AWS Management Console, APIs, and CLIs.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.architecture.html

Q514. You're migrating an existing application to the AWS cloud. The application will be primarily using EC2 instances. This application needs to be built with the highest availability architecture available. The application currently relies on hardcoded hostnames for intercommunication between the three tiers. You've migrated the application and configured the multi-tier setup using an internal Elastic Load Balancer to serve the traffic. The load balancer hostname is example-app.us-east-1.elb.amazonaws.com. The current hardcoded hostname used in your application to communicate between the tiers is applayer.example.com. What is the best method for architecting this setup to have as much high availability as possible?
Choose the correct answer:

A. Add a cname record to the existing on-premise DNS server with a value of
example-app.us-east-1.elb.amazonaws.com. Create a public resource record set
using Route 53 with a hostname of applayer.example.com and an alias record to
example-app.us-east-1.elb.amazonaws.com.
B. Create a public resource record set using Route 53 with a hostname of
applayer.example.com and an alias record to
example-app.us-east-1.elb.amazonaws.com
C. Create a private resource record set using Route 53 with a hostname of
applayer.example.com and an alias record to
example-app.us-east-1.elb.amazonaws.com
D. Create an environment variable passed to the EC2 instances using user-data
with the ELB hostname, example-app.us-east-1.elb.amazonaws.com.

Answer: C

Explanation: Route 53 is highly available by design and serves DNS results from
the closest region within AWS. If the application is moved entirely to AWS, then
requests are originated from within the VPC for the application and Route 53 will
be able to serve the internal DNS. A public resource record set is not needed
because in this architecture there is no reason to send the traffic out to the
internet and back in, leaving security holes open.
Q515. Big Brother Bank has been acquiring smaller banks. BBB has a security requirement that all bank employees are required to log into a central identity solution, so that when they log on they gain access to central bank resources. Given that each bank has its own AWS account, and existing application instances with which to run their bank software, how would BBB connect each bank's AWS network to the central VPC, so as to allow each bank to use the central identity solution?
Each bank runs its VPC in the us-west-1 region, requires a high availability solution, and regulation does not allow each bank access to the others' resources.
How would you best design this solution?
Choose the correct answer:

A. Create a Direct Connect connection from each VPC endpoint to the main BBB VPC.
B. Create an OpenVPN instance in BBB's VPC and establish an IPSec tunnel between VPCs.
C. Create a VPC peering connection with BBB's VPC peered to each branch's AWS account, ensuring that the peered subnets do not have an overlapping CIDR block range.
D. Migrate the acquired banks' AWS accounts to the main BBB account using migration tools such as Import/Export, Snapshot, AMI Copy, and S3 sharing.

Answer: C

Q516. You've created a temporary application that accepts image uploads, stores
them in S3, and records information about the image in RDS. After building this
architecture and accepting images for the duration required, it's time to delete the
CloudFormation template. However, your manager has informed you that for
archival reasons the RDS data needs to be stored and the S3 bucket with the
images needs to remain. Your manager has also instructed you to ensure that
the application can be restored by a CloudFormation template and run next year
during the same period.
Knowing that when a CloudFormation template is deleted, it will remove the
resources it created, what is the best method for achieving the desired goals?
Choose the correct answer:

A. Set the DeletionPolicy on the S3 resource declaration in the CloudFormation template to Retain; set the RDS resource declaration's DeletionPolicy to Snapshot.
B. Set the DeletionPolicy on the S3 resource to Snapshot and the DeletionPolicy on the RDS resource to Snapshot.
C. For both the RDS and S3 resource types in the CloudFormation template, set the DeletionPolicy to Retain.
D. Enable S3 bucket replication on the source bucket to a destination bucket to maintain a copy of all the S3 objects; set the DeletionPolicy for the RDS instance to Snapshot.

Answer: A

Explanation: Setting the DeletionPolicy on the S3 bucket to Retain ensures the S3 bucket is not removed. Keeping the S3 bucket, and the name of the S3 bucket, makes it easy to relaunch the application later with a template. Setting the RDS DeletionPolicy to Snapshot ensures the data can be restored when the application needs to be run again later. Setting the DeletionPolicy on RDS to Retain would leave the RDS instance running when it would not be used, thus increasing costs when not required.
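The correct answer maps directly onto template syntax. In this illustrative sketch the logical IDs and the RDS properties are placeholders; the point is that DeletionPolicy sits at the resource level, alongside Type, not inside Properties.

```json
{
  "Resources": {
    "ArchiveBucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain"
    },
    "AppDatabase": {
      "Type": "AWS::RDS::DBInstance",
      "DeletionPolicy": "Snapshot",
      "Properties": {
        "Engine": "mysql",
        "DBInstanceClass": "db.t2.micro",
        "AllocatedStorage": "20",
        "MasterUsername": "admin",
        "MasterUserPassword": "change-me"
      }
    }
  }
}
```

On stack deletion, the bucket is left in place under its original name and the database is snapshotted, so next year's stack can be launched against the retained bucket and restored from the snapshot.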

Q517. You are excited that your company has just purchased a Direct Connect link from AWS, as everything you now do on AWS should be much faster and more reliable. Your company is based in Sydney, Australia, so the Direct Connect link to AWS will go into the Asia Pacific (Sydney) region. Your first job after the new link purchase is to create a multi-region design across the Asia Pacific (Sydney) region and the US West (N. California) region. You soon discover that all the infrastructure you deploy in the Asia Pacific (Sydney) region is extremely fast and reliable; however, the infrastructure you deploy in the US West (N. California) region is much slower and unreliable. Which of the following would be the best option to make the US West (N. California) region a more reliable connection?
Choose the correct answer:

A. Create a private virtual interface to the US West region's public end points and
use VPN over the public virtual interface to protect the data.
B. Create a private virtual interface to the Asia Pacific region's public end points
and use VPN over the public virtual interface to protect the data.
C. Create a public virtual interface to the Asia Pacific region's public end points
and use VPN over the public virtual interface to protect the data.
D. Create a public virtual interface to the US West region's public end points and
use VPN over the public virtual interface to protect the data.

Answer: D

Explanation: A public virtual interface on the Direct Connect link provides access to AWS public endpoints, including those in other regions such as US West (N. California), so traffic rides the Direct Connect link and the AWS backbone instead of the unreliable public internet. Running a VPN over that public virtual interface protects the data in transit. A private virtual interface only reaches resources inside a VPC in the Direct Connect region.

Q518. ExamKiller is managing a customer's application which currently includes
a three-tier application configuration. The first tier manages the web instances
and is configured in a public subnet. The second layer is the application layer. As
part of the application code, the application instances upload large amounts of
data to Amazon S3. Currently, the private subnets that the application instances
are running on have a route to a single t2.micro NAT instance. The
application, during peak loads, becomes slow and customer uploads from the
application to S3 are not completing and taking a long time.
Which steps might you take to solve the issue using the most cost efficient
method? Choose the correct answer:

A. Increase the NAT instance size; network throughput increases with an increase in instance size
B. Create a VPC S3 endpoint
C. Configure Auto Scaling for the NAT instance in order to handle the increase in load
D. Launch an additional NAT instance in another subnet and replace one of the routes in a subnet to the new instance

Answer: B

Explanation: Creating a VPC endpoint will reduce the need for the S3 uploads to
be sent through a NAT instance. It is the most cost efficient method and the most
scalable method as well. The following answers will also get the job done but at
additional costs. NAT instances cannot be autoscaled since the traffic is sent
through the route table: "Increase the NAT instance size; network throughput
increases with an increase in instance size" and "launch an additional NAT
instance in another subnet and replace one of the routes in a subnet to the new
instance."
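A gateway endpoint for S3 is attached to route tables rather than subnets. This illustrative CloudFormation sketch assumes the MyVPC and PrivateRouteTable references and the us-east-1 region; substitute your own resources and region.

```json
{
  "Resources": {
    "S3Endpoint": {
      "Type": "AWS::EC2::VPCEndpoint",
      "Properties": {
        "VpcId": { "Ref": "MyVPC" },
        "ServiceName": "com.amazonaws.us-east-1.s3",
        "RouteTableIds": [ { "Ref": "PrivateRouteTable" } ]
      }
    }
  }
}
```

Once the endpoint is associated with the private route table, S3 traffic from the application instances bypasses the NAT instance entirely, and the endpoint itself carries no hourly or data processing charge.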
Q519. You're consulting for a new customer, who is attempting to create a hybrid
network between AWS and their on-premise data centers. Currently, they have
internal databases running on-premise that, due to licensing reasons, cannot be
migrated to AWS. The front end of the application has been migrated to AWS and
uses the DB hostname "db.internalapp.local" to communicate with the
on-premise database servers. Hostnames provide an easy method for updating
IP addresses in event of failover instead of having to update the IP address in the
code. Given the current architecture, what is the best way to configure internal
DNS for this hybrid application? (Choose Two)
Choose the 2 correct answers:

A. Create an EC2 instance DNS server to configure hostnames for internal DNS
records, Create a new Amazon VPC DHCP option set with the internal DNS
server's IP address.
B. Configure the database to have a public-facing IP address and use Route 53
to create a domain name
C. Use an existing on-premise DNS server to configure hostnames for internal
DNS records.
Create a new Amazon VPC DHCP Option Set with the internal DNS server's IP
address.
D. Use an existing on-premise DNS server to configure hostnames for internal DNS records.
Create a new Amazon VPC route table with the internal DNS server's IP address.

Answer: A,C



Explanation: The application is an internal application. Using a public IP address would cause the application to route externally, which is not part of the desired architecture. Internal Route 53 record sets would not work, since Route 53 internal resource record sets only work for requests originating from within the VPC and currently cannot extend to on-premise.
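Both correct answers end by pointing the VPC at the internal DNS server through a DHCP option set. This illustrative sketch assumes placeholder DNS server IPs (one in the VPC, one on-premise) and a MyVPC reference; the internal domain name comes from the question.

```json
{
  "Resources": {
    "InternalDhcpOptions": {
      "Type": "AWS::EC2::DHCPOptions",
      "Properties": {
        "DomainName": "internalapp.local",
        "DomainNameServers": [ "10.0.0.2", "192.168.1.10" ]
      }
    },
    "DhcpOptionsAssociation": {
      "Type": "AWS::EC2::VPCDHCPOptionsAssociation",
      "Properties": {
        "VpcId": { "Ref": "MyVPC" },
        "DhcpOptionsId": { "Ref": "InternalDhcpOptions" }
      }
    }
  }
}
```

Instances in the VPC then resolve db.internalapp.local through the configured DNS servers instead of the Amazon-provided resolver.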


Q520. ExamKiller is running an Amazon Redshift cluster with four nodes running 24/7/365 and expects, potentially, to add one on-demand node for one to two days once during the year. Which architecture would have the lowest possible cost for the cluster requirement?
Choose the correct answer:

A. Purchase 4 reserved nodes and bid on spot instances for the extra node
usage required
B. Purchase 4 reserved nodes and rely on on-demand instances for the fifth node,
if required
C. Purchase 5 reserved nodes to cover all possible node usage during the year
D. Purchase 2 reserved nodes and utilize 3 on-demand nodes only for peak
usage times

Answer: B

Explanation: The fifth node is expected to run for, at most, a couple of days. In this situation, purchasing four reserved nodes will reduce overall costs, since those four nodes run continuously. Relying on on-demand instances for the fifth node is the best possible cost option relative to reserving it. The problem with spot instances is that they have no guarantee to run. The question says that on a few days per year, demand might increase to the point that another node is needed. The only way to guarantee that the extra node will launch is to use on-demand. Remember, with spot instances you bid on unused capacity: the instance launches only if your bid exceeds the current spot price, and if the price rises above your bid, AWS will actually take the instance away. Spot instances are not great for workloads that cannot be interrupted. Using spot instances versus on-demand will usually lead to cost savings, but we also have to take into account the scenario outlined in the question. This scenario can't use spot instances.

Q573. Given the following IAM policy assigned to user "jeff":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/env": "production"
        }
      },
      "Resource": [
        "arn:aws:ec2:us-east-1:account-id:instance/*"
      ],
      "Effect": "Deny"
    }
  ]
}
Choose the correct answer:

A. IAM accounts tagged with "production" will be able to terminate instances
B. EC2 instances tagged "env:production" cannot have the Terminate|Start|Stop|Reboot instance actions performed against them
C. EC2 instances tagged "env:production" will not have the Terminate|Start|Stop|Reboot instance actions performed against them
D. EC2 instances tagged "env:production" can have the Terminate|Start|Stop|Reboot instance actions performed against them

Answer: B

Explanation: Resource tagging applies the Deny to the instances that carry the associated tag values. Resource tagging can also help prevent instances from being terminated by accident.
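The policy's effect can be simulated with a tiny evaluator. This is a simplified sketch, not the real IAM evaluation engine: it models only this one Deny statement and assumes an Allow for these actions exists elsewhere for user "jeff".

```python
# The four actions listed in the Deny statement above.
DENIED_ACTIONS = {
    "ec2:StartInstances", "ec2:StopInstances",
    "ec2:RebootInstances", "ec2:TerminateInstances",
}


def is_allowed(action, instance_tags):
    """Simplified model: the explicit Deny wins when the action is in the
    statement's list AND the instance carries the tag env=production."""
    deny = (action in DENIED_ACTIONS
            and instance_tags.get("env") == "production")
    return not deny  # assumes an Allow exists elsewhere in the account


# Tagged env=production: the four actions are denied.
print(is_allowed("ec2:TerminateInstances", {"env": "production"}))  # False
# Any other tag value: the Deny condition does not match.
print(is_allowed("ec2:TerminateInstances", {"env": "staging"}))     # True
```

This mirrors why B is correct: the Deny blocks the actions against production-tagged instances, but it does not by itself grant anything against other instances.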






