POWER9 IC922 Level 2 Quiz

Mark Lumiti
Started on Monday, February 21, 2022, 2:52 AM
State Finished
Completed on Monday, February 21, 2022, 7:38 AM
Time taken 4 hours 46 mins
Grade 22.00 out of 25.00 (88%)
Feedback A minimum of 19 correct answers is required to pass.
Congratulations, you passed the quiz for the POWER9 IC922 Level 2!
Question 1
Correct
1.00 points out of 1.00

Question text
Elastic Distributed Inference (EDI) is a component of Watson Machine Learning Accelerator
(either in Technical Preview or GA depending on when you're taking this quiz). What is a
benefit of EDI?
Select one:

EDI is meant as a high-availability feature that allows a server in the public cloud to act as a hot
standby for an on-premises inference server.

EDI enables you to publish inference models as services across a scalable cluster of servers, from
which clients can consume the services.

EDI takes a pre-trained model and optimizes it for inferencing for the specific hardware it is
running on.
EDI allows a single inference request to be partitioned and distributed across GPUs running on
multiple servers, speeding up execution of that one request.

Question 2
Correct
1.00 points out of 1.00

Question text

What is the memory capacity of the IC922 server?

Select one:

1 TB

4 TB

2 TB

512 GB

Question 3
Correct
1.00 points out of 1.00

Question text

What value-based framework puts emphasis on methods that make AI effective and helps guide
how AI models are created and applied to real-life problems?
Select one:

Discover-Derive-Deploy

Ingest-Train-Score

Develop-Deploy-Infer

Data-Train-Inference

Question 4
Correct
1.00 points out of 1.00

Question text

As of the initial GA (February 2020), what is the maximum number of GPUs that can be
configured in the IC922 server?

Select one:

12
10

Question 5
Incorrect
0.00 points out of 1.00

Question text
In Nvidia’s testing of the T4 GPU versus the V100 GPU, what was the difference in power
utilization? (Choose the closest number.)
Select one:

T4 is approximately 1.5x the wattage of the V100

T4 is approximately 1/3 the wattage of the V100

T4 is approximately 2x the wattage of the V100

T4 is approximately 2/3 the wattage of the V100

Question 6
Correct
1.00 points out of 1.00

Question text

Which of the following is *NOT* an attribute of the IC922 server?

Select one:
Enterprise security

Fast insights

Future-ready

Engineered for training

Question 7
Correct
1.00 points out of 1.00

Question text

What type of GPUs are found in the IC922 server?

Select one:

Intel Xe

Nvidia T4

Intel V100

Nvidia GTX 1050

Question 8
Correct
1.00 points out of 1.00

Question text
Which of the following is *NOT* a type of accelerator for machine learning and deep learning
workloads?
Select one:

ASIC

ESLC

FPGA

GPU

Question 9
Correct
1.00 points out of 1.00

Question text

What is the form factor of the IC922 server?

Select one:

19” rack 1U
19” rack 2U

24” rack 2U

24” rack 4U

Question 10
Incorrect
0.00 points out of 1.00

Question text

Traditional infrastructure isn’t well-suited for AI workloads, putting enterprise AI projects at
risk. Which of the following statements regarding the suitability of traditional infrastructure for
AI is *NOT* correct?

Select one:

Systems don’t scale easily to meet AI demands.

CPU processors are not optimized for AI workloads.

The data pipeline is too slow, causing bottlenecks.

All AI software, regardless of vendor, simply does not run on non-GPU systems.

Question 11
Correct
1.00 points out of 1.00

Question text

According to a May 2018 report by Forrester Research Inc., what is the fastest growing workload
type?

Select one:

Decision Support (DS)

Object Detection (OD)

Image Classification (IC)

Artificial Intelligence (AI)

Question 12
Correct
1.00 points out of 1.00

Question text

What are the core configurations (per CPU) available for the IC922 server?

Select one:

10, 20 or 30 cores
12, 20 or 24 cores

10, 12 or 16 cores

12, 16 or 20 cores

Question 13
Correct
1.00 points out of 1.00

Question text

What is “Quantization”?

Select one:

It is a compression technique that allows for fast and lossless transfer of neural networks
between AI software running on different server architectures.

During model training, it is the act of analyzing the distribution of data values within the dataset
to determine how many categories to break the dataset into.

It is a machine learning algorithm popular with data scientists that is commonly used in
classification problems.

It is the reduction of the precision of numeric data values in a trained model, making it smaller
and more efficient for inferencing.
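
As an aside, the correct option above (reducing the precision of numeric values in a trained model) can be illustrated with a minimal NumPy sketch of symmetric int8 quantization. The weight values here are hypothetical and not taken from any real model or IC922 software:

```python
import numpy as np

# Hypothetical float32 "weights" from a trained model.
weights = np.array([-0.42, 0.07, 0.31, -0.15, 0.58], dtype=np.float32)

# Symmetric int8 quantization: map the float range onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)  # 1 byte per value instead of 4
deq = q.astype(np.float32) * scale             # approximate reconstruction

# Rounding error is bounded by one quantization step (the scale).
assert float(np.abs(weights - deq).max()) <= scale
```

Each weight shrinks from 4 bytes (float32) to 1 byte (int8), which is why quantized models are smaller and faster to move through memory at inference time, at the cost of a small, bounded rounding error.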
Question 14
Correct
1.00 points out of 1.00

Question text

In addition to the IC922 being a great server for inference workloads, its storage characteristics
make it a great fit for data and cloud workloads as well. In a test involving MongoDB running on
Red Hat OpenShift, how well did the IC922 outperform the similarly configured Intel-based
system?

Select one:

2x more containers; 2.35x better price/performance

3x more containers; 1.6x better price/performance

1.4x more containers; 1.7x better price/performance

2.3x more containers; 2.15x better price/performance

Question 15
Correct
1.00 points out of 1.00

Question text
Within the AI workflow, what does the Inference stage represent?
Select one:
Inference is a method of interpreting and understanding the decision-making process within a
complex machine learning model.

Inference is the earliest stage where data sources are discovered and cataloged, and the meaning
of the data is inferred from the column names within those data sources.

Inference is where the model is deployed into production and new, never-before-seen data is
passed into it for the purposes of making a prediction.

Inference is where the model learns from historic business data, adjusting parameter values
within the model to make it as accurate as possible.

Question 16
Correct
1.00 points out of 1.00

Question text

Typically, how fast is an inferencing operation expected to run?

Select one:

Minutes

Hours

Sub-second
Days

Question 17
Correct
1.00 points out of 1.00

Question text

Security is always top of mind for clients. IC922 has various security features built right into the
hardware and software stack. Which of the following is *NOT* a security feature or capability of
IC922?

Select one:

Trusted Boot

Security Screener Module 

Secure Boot

Trusted Platform Module

Question 18
Correct
1.00 points out of 1.00

Question text

On which server(s) is IBM Visual Insights (formerly PowerAI Vision) software supported?
Select one:

It is supported only on IC922, but not AC922.

It is supported on both the AC922 and the IC922.

It is supported only on AC922, but not IC922.

It is not supported on either the AC922 or the IC922.

Question 19
Correct
1.00 points out of 1.00

Question text

What is the peak memory bandwidth of the IC922 server?

Select one:

235 GB/s per CPU; 470 GB/s total

170 GB/s per CPU; 340 GB/s total

125 GB/s per CPU; 250 GB/s total

190 GB/s per CPU; 380 GB/s total


Question 20
Correct
1.00 points out of 1.00

Question text

What does “IC” in the name IC922 refer to?

Select one:

Inference Cloud

Integrated Cloud

Inferencing Cognition

IBM Cognitive

Question 21
Correct
1.00 points out of 1.00

Question text

The IC922 server is storage-dense with strong I/O characteristics, making it an ideal server for
data and cloud needs. How many drives can be supported in the server (through the local drive
bays)?

Select one:
24

18

30

12

Question 22
Correct
1.00 points out of 1.00

Question text

While the IC922 is not intended to be a direct replacement for LC922, it does share some similar
characteristics. However, the IC922 does have advantages over the LC922. Which of the
following statements about these advantages is *NOT* correct?

Select one:

The maximum amount of memory supported is higher.

The peak memory bandwidth per CPU is higher.

There are more PCIe slots in the server.


There are more cores per CPU.

Question 23
Correct
1.00 points out of 1.00

Question text

Clients who already own or are considering purchasing AC922 servers for training may ask why
these servers can’t also be used for inference. What should you tell them?

Select one:

The IC922 is just a rebranding of the AC922 server, meaning that the specifications are identical
between them and the client can in fact run inference workloads equally on either of the AC922
or IC922 servers.

Inference software is not supported on the GPUs that are available in the AC922 server, which
means that they must purchase a separate IC922 server for inference purposes.

This is possible, but training and inference workloads have different characteristics and the
AC922 might not be the most energy efficient and cost-effective option for inferencing.

The AC922 can be used for inference, but not by default. It must be configured at manufacturing
time with the inference-specific GPUs that are shipped with the IC922 server.

Question 24
Correct
1.00 points out of 1.00

Question text

In the context of building a machine learning model, what is meant by “training”?

Select one:

Training is the stage of the machine learning workflow where auditors are educated on the
internal workings of a “black box” model.

Training is the building of a model by learning from the vast amounts of input data presented to
it.

Training is where an existing model is used to make predictions against data it has never seen
before.

Training is the act of choosing an appropriate algorithm to use based on the type of problem
being solved and inspecting a sample of the input data.

Question 25
Incorrect
0.00 points out of 1.00

Question text

According to analysts, how large is the accelerated inferencing market expected to be by 2023?

Select one:

$7 billion USD

$15 billion USD


$11 billion USD

$3 billion USD
