
Introduction
Suppose you want to help your team understand the latest artificial intelligence (AI) innovations in the
news. Your team would like to evaluate the opportunities these innovations create and understand
what is being done to keep AI advancements ethical.

You share with your team that today, stable AI models are regularly put into production and used
commercially around the world. For example, Microsoft's existing Azure AI services have been serving
the needs of businesses for many years. In 2022, OpenAI, an AI research company, created a
chatbot known as ChatGPT and an image generation application known as DALL-E. These technologies
were built with AI models that can take natural language input from a user and return a
machine-created, human-like response.

You share with your team that Azure OpenAI Service enables users to build enterprise-grade solutions
with OpenAI models. With Azure OpenAI, users can summarize text, get code suggestions, generate
images for a web site, and much more. This module dives into these capabilities.

Capabilities of OpenAI AI models


There are several categories of capabilities found in OpenAI AI models; three of them are:

- Generating natural language: for example, summarizing complex text for different reading levels, suggesting alternative wording for sentences, and much more.

- Generating code: for example, translating code from one programming language into another, identifying and troubleshooting bugs in code, and much more.

- Generating images: for example, generating images for publications from text descriptions, and much more.


What is generative AI?
OpenAI makes its AI models available to developers to build powerful software applications, such as
ChatGPT. There are many other examples of OpenAI applications on the OpenAI site, ranging from
practical, such as generating text from code, to purely entertaining, such as making up scary stories.

Let's identify where OpenAI models fit into the AI landscape.

- Artificial intelligence imitates human behavior by relying on machines to learn and execute tasks without explicit directions on what to output.

- Machine learning models take in data like weather conditions and fit the data to an algorithm, to make predictions like how much money a store might make in a given day.

- Deep learning models use layers of algorithms in the form of artificial neural networks to return results for more complex use cases. Many Azure AI services are built on deep learning models. You can check out this article to learn more about the difference between machine learning and deep learning.

- Generative AI models can produce new content based on what is described in the input. The OpenAI models are a collection of generative AI models that can produce language, code, and images.

Next, you'll learn how Azure OpenAI gives users the ability to combine Azure's enterprise-grade solutions
with many of OpenAI's same generative AI models.

Deep learning, machine learning, and AI

Consider the following definitions to understand deep learning vs. machine learning vs. AI:

- Deep learning is a subset of machine learning that's based on artificial neural networks. The learning process is deep because the structure of artificial neural networks consists of multiple input, output, and hidden layers. Each layer contains units that transform the input data into information that the next layer can use for a certain predictive task. Thanks to this structure, a machine can learn through its own data processing.

- Machine learning is a subset of artificial intelligence that uses techniques (such as deep learning) that enable machines to use experience to improve at tasks. The learning process is based on the following steps (a code sketch of these steps follows this list):

  1. Feed data into an algorithm. (In this step you can provide additional information to the model, for example, by performing feature extraction.)

  2. Use this data to train a model.

  3. Test and deploy the model.

  4. Consume the deployed model to do an automated predictive task. (In other words, call and use the deployed model to receive the predictions returned by the model.)

- Artificial intelligence (AI) is a technique that enables computers to mimic human intelligence. It includes machine learning.

- Generative AI is a subset of artificial intelligence that uses techniques (such as deep learning) to generate new content. For example, you can use generative AI to create images, text, or audio. These models leverage massive pre-trained knowledge to generate this content.
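To make the four machine learning steps above concrete, here's a minimal sketch using the open-source scikit-learn library, which isn't covered in this module; the dataset and algorithm are arbitrary illustrative choices.

Python

# Illustrative walk-through of the four machine learning steps
# using scikit-learn; any dataset and algorithm would do.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Feed data into an algorithm (features X, labels y).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Use this data to train a model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 3. Test the model before deploying it.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# 4. Consume the (deployed) model to do an automated predictive task.
print("prediction:", model.predict(X_test[:1]))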

By using machine learning and deep learning techniques, you can build computer systems and
applications that do tasks that are commonly associated with human intelligence. These tasks include
image recognition, speech recognition, and language translation.

Techniques of deep learning vs. machine learning

Now that you have the overview of machine learning vs. deep learning, let's compare the two
techniques. In machine learning, the algorithm needs to be told how to make an accurate prediction by
consuming more information (for example, by performing feature extraction). In deep learning, the
algorithm can learn how to make an accurate prediction through its own data processing, thanks to the
artificial neural network structure.

The following table compares the two techniques in more detail:

- Number of data points: Machine learning can use small amounts of data to make predictions. Deep learning needs to use large amounts of training data to make predictions.

- Hardware dependencies: Machine learning can work on low-end machines and doesn't need a large amount of computational power. Deep learning depends on high-end machines, because it inherently does a large number of matrix multiplication operations, which a GPU can efficiently optimize.

- Featurization process: Machine learning requires features to be accurately identified and created by users. Deep learning learns high-level features from data and creates new features by itself.

- Learning approach: Machine learning divides the learning process into smaller steps, then combines the results from each step into one output. Deep learning moves through the learning process by resolving the problem on an end-to-end basis.

- Execution time: Machine learning takes comparatively little time to train, ranging from a few seconds to a few hours. Deep learning usually takes a long time to train, because a deep learning algorithm involves many layers.

- Output: Machine learning output is usually a numerical value, like a score or a classification. Deep learning output can have multiple formats, like a text, a score, or a sound.

What is transfer learning?

Training deep learning models often requires large amounts of training data, high-end compute
resources (GPU, TPU), and long training times. In scenarios where these aren't available to you, you
can shorten the training process by using a technique known as transfer learning.

Transfer learning is a technique that applies knowledge gained from solving one problem to a different
but related problem.

Due to the structure of neural networks, the first set of layers usually contains lower-level features,
whereas the final set of layers contains higher-level features that are closer to the domain in question. By
repurposing the final layers for use in a new domain or problem, you can significantly reduce the amount
of time, data, and compute resources needed to train the new model. For example, if you already have a
model that recognizes cars, you can repurpose that model using transfer learning to also recognize
trucks, motorcycles, and other kinds of vehicles.

Learn how to apply transfer learning for image classification using an open-source framework in Azure
Machine Learning: Train a deep learning PyTorch model using transfer learning.
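The linked tutorial covers the full workflow; as a rough sketch of the core idea only, the following assumes PyTorch and torchvision are installed and retrains just the final layer of a pretrained image model.

Python

# Minimal transfer-learning sketch with PyTorch/torchvision.
# (Newer torchvision versions use a weights= argument instead of pretrained=True.)
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)   # reuse the learned lower-level features

# Freeze all existing layers so their weights stay fixed.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for a new 3-class task,
# for example: cars, trucks, motorcycles.
model.fc = nn.Linear(model.fc.in_features, 3)
# Only model.fc's parameters are then trained on the new dataset.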

Deep learning use cases

Because of the artificial neural network structure, deep learning excels at identifying patterns in
unstructured data such as images, sound, video, and text. For this reason, deep learning is rapidly
transforming many industries, including healthcare, energy, finance, and transportation. These industries
are now rethinking traditional business processes.

Some of the most common applications for deep learning are described in the following paragraphs. In
Azure Machine Learning, you can use a model you built from an open-source framework or build the
model using the tools provided.

Named-entity recognition

Named-entity recognition is a deep learning method that takes a piece of text as input and transforms it
into a pre-specified class. This new information could be a postal code, a date, or a product ID. The
information can then be stored in a structured schema to build a list of addresses or serve as a
benchmark for an identity validation engine.
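For illustration only, here's a sketch using the open-source Hugging Face transformers library, which isn't part of Azure OpenAI; the sample text is hypothetical, and the exact entities returned depend on the underlying model.

Python

# Illustrative named-entity recognition with an open-source pipeline.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")
text = "Contoso shipped the package to Seattle on March 3."
for entity in ner(text):
    # Each result includes the recognized span and its predicted class.
    print(entity["word"], "->", entity["entity_group"])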

Object detection
Deep learning has been applied in many object detection use cases. Object detection comprises two
parts: image classification and then image localization. Image classification identifies the image's objects,
such as cars or people. Image localization provides the specific location of these objects.

Object detection is already used in industries such as gaming, retail, tourism, and self-driving cars.

Image caption generation

Image captioning is related to image recognition: for a given image, the system must generate a caption
that describes the contents of the image. When you can detect and label objects in photographs, the
next step is to turn those labels into descriptive sentences.

Usually, image captioning applications use convolutional neural networks to identify objects in an image
and then use a recurrent neural network to turn the labels into consistent sentences.

Machine translation

Machine translation takes words or sentences from one language and automatically translates them into
another language. Machine translation has been around for a long time, but deep learning achieves
impressive results in two specific areas: automatic translation of text (and translation of speech to text)
and automatic translation of images.

With the appropriate data transformation, a neural network can understand text, audio, and visual
signals. Machine translation can be used to identify snippets of sound in larger audio files and transcribe
the spoken word or image as text.

Text analytics

Text analytics based on deep learning methods involves analyzing large quantities of text data (for
example, medical documents or expenses receipts), recognizing patterns, and creating organized and
concise information out of it.

Companies use deep learning-based text analysis to detect insider trading and check compliance with
government regulations. Another common example is insurance fraud: text analytics has often been
used to analyze large volumes of documents and recognize the chances of an insurance claim being
fraudulent.

Artificial neural networks

Artificial neural networks are formed by layers of connected nodes. Deep learning models use neural
networks that have a large number of layers.

The following sections explore the most popular artificial neural network topologies.

Feedforward neural network

The feedforward neural network is the simplest type of artificial neural network. In a feedforward
network, information moves in only one direction, from input layer to output layer. Feedforward neural
networks transform an input by putting it through a series of hidden layers. Every layer is made up of a
set of neurons, and each layer is fully connected to all neurons in the layer before. The last fully
connected layer (the output layer) represents the generated predictions.
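Here's a minimal NumPy sketch of that one-directional flow; the layer sizes and random weights are arbitrary illustrations.

Python

# Forward pass through a tiny fully connected network: input -> hidden -> output.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # input layer: 4 features
W1 = rng.normal(size=(4, 5))           # weights into a hidden layer of 5 units
W2 = rng.normal(size=(5, 2))           # weights into an output layer of 2 units

hidden = np.tanh(x @ W1)               # each layer transforms the previous one
output = hidden @ W2                   # information moves in one direction only
print(output)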

Recurrent neural network (RNN)

Recurrent neural networks are a widely used type of artificial neural network. These networks save the
output of a layer and feed it back to the input layer to help predict the layer's outcome. Recurrent neural
networks have great learning abilities. They're widely used for complex tasks such as time series
forecasting, learning handwriting, and recognizing language.
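A minimal NumPy sketch of that feedback loop, with illustrative sizes and untrained weights, might look like this.

Python

# One unrolled recurrent loop: the hidden state is fed back at every step.
import numpy as np

rng = np.random.default_rng(0)
Wx = rng.normal(size=(3, 5))           # input-to-hidden weights
Wh = rng.normal(size=(5, 5))           # hidden-to-hidden (feedback) weights
h = np.zeros(5)                        # initial hidden state

sequence = rng.normal(size=(4, 3))     # 4 time steps, 3 features each
for x_t in sequence:
    # The saved state h from the previous step influences the current output.
    h = np.tanh(x_t @ Wx + h @ Wh)
print(h)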

Convolutional neural network (CNN)

A convolutional neural network is a particularly effective artificial neural network, and it presents a
unique architecture. Layers are organized in three dimensions: width, height, and depth. The neurons in
one layer connect not to all the neurons in the next layer, but only to a small region of the layer's
neurons. The final output is reduced to a single vector of probability scores, organized along the depth
dimension.

Convolutional neural networks have been used in areas such as video recognition, image recognition,
and recommender systems.
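As a small sketch of that local connectivity, assuming PyTorch is installed: each output unit of a convolutional layer sees only a small region of the input rather than the whole image.

Python

# A single convolutional layer over a dummy image tensor.
import torch
import torch.nn as nn

image = torch.randn(1, 3, 32, 32)      # batch, channels (depth), height, width
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
features = conv(image)                 # each unit sees only a 3x3 region
print(features.shape)                  # torch.Size([1, 8, 30, 30])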

Generative adversarial network (GAN)

Generative adversarial networks are generative models trained to create realistic content such as
images. A GAN is made up of two networks known as the generator and the discriminator. Both networks
are trained simultaneously. During training, the generator uses random noise to create new synthetic
data that closely resembles real data. The discriminator takes the output from the generator as input
and uses real data to determine whether the generated content is real or synthetic. The two networks
compete with each other: the generator tries to generate synthetic content that is indistinguishable
from real content, while the discriminator tries to correctly classify inputs as real or synthetic. The output
is then used to update the weights of both networks to help them better achieve their respective goals.

Generative adversarial networks are used to solve problems like image to image translation and age
progression.
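The following is a compressed, illustrative PyTorch sketch of a single GAN training step on toy one-dimensional data; real GAN training runs many such steps with careful tuning, and the network shapes here are arbitrary.

Python

# One illustrative GAN training step on toy 1-D data (untuned, minimal).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

real = torch.randn(32, 1) * 0.5 + 2.0          # stand-in "real" data
fake = generator(torch.randn(32, 8))           # generator starts from random noise

# The discriminator learns to classify real vs. synthetic.
d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
          + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# The generator learns to make the discriminator call its output "real".
g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()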

Transformers

Transformers are a model architecture that is suited for solving problems containing sequences such as
text or time-series data. They consist of encoder and decoder layers. The encoder takes an input and
maps it to a numerical representation containing information such as context. The decoder uses
information from the encoder to produce an output such as translated text. What makes transformers
different from other architectures containing encoders and decoders are the attention sub-layers.
Attention is the idea of focusing on specific parts of an input based on the importance of their context in
relation to other inputs in a sequence. For example, when summarizing a news article, not all sentences
are relevant to describe the main idea. By focusing on key words throughout the article, summarization
can be done in a single sentence, the headline.
Transformers have been used to solve natural language processing problems such as translation, text
generation, question answering, and text summarization.
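Here's a minimal NumPy sketch of the scaled dot-product attention computation at the heart of this idea; the dimensions and random values are arbitrary illustrations.

Python

# Scaled dot-product attention: weight each position of the input
# by its relevance (softmax of query-key similarity), then mix values.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8                       # 4 tokens, 8-dimensional representations
Q = rng.normal(size=(seq_len, d))       # queries
K = rng.normal(size=(seq_len, d))       # keys
V = rng.normal(size=(seq_len, d))       # values

scores = Q @ K.T / np.sqrt(d)           # how much each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
attended = weights @ V                  # context-weighted mixture of values
print(attended.shape)                   # (4, 8)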

Some well-known implementations of transformers are:

- Bidirectional Encoder Representations from Transformers (BERT)

- Generative Pre-trained Transformer 2 (GPT-2)

- Generative Pre-trained Transformer 3 (GPT-3)

Microsoft has partnered with OpenAI to deliver on three main goals:

- To utilize Azure's infrastructure, including security, compliance, and regional availability, to help users build enterprise-grade applications.

- To deploy OpenAI AI model capabilities across Microsoft products, including and beyond Azure AI products.

- To use Azure to power all of OpenAI's workloads.

Introduction to Azure OpenAI Service

Azure OpenAI Service is a result of the partnership between Microsoft and OpenAI. The service
combines Azure's enterprise-grade capabilities with OpenAI's generative AI model capabilities.

Azure OpenAI is available for Azure users and consists of four components:

- Pre-trained generative AI models

- Customization capabilities: the ability to fine-tune AI models with your own data

- Built-in tools to detect and mitigate harmful use cases so users can implement AI responsibly

- Enterprise-grade security with role-based access control (RBAC) and private networks

Using Azure OpenAI allows you to transition between your work with Azure services and OpenAI, while
utilizing Azure's private networking, regional availability, and responsible AI content filtering.

Understand Azure OpenAI workloads

Azure OpenAI supports many common AI workloads and solves for some new ones.

Common AI workloads include machine learning, computer vision, natural language processing,
conversational AI, anomaly detection, and knowledge mining.

Other AI workloads Azure OpenAI supports can be categorized by the tasks they support:

- Generating Natural Language

  - Text completion: generate and edit text

  - Embeddings: search, classify, and compare text (see the sketch after this list)

- Generating Code: generate, edit, and explain code

- Generating Images: generate and edit images
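As a hedged sketch of the embeddings workload, using the pre-1.0 openai Python package; the endpoint, key, and deployment name are placeholders you would replace with your own resource details.

Python

# Comparing two pieces of text with embeddings and cosine similarity.
import numpy as np
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"
openai.api_key = "<your-key>"                                  # placeholder

def embed(text):
    response = openai.Embedding.create(engine="<embedding-deployment>", input=text)
    return np.array(response["data"][0]["embedding"])

a = embed("How do I reset my password?")
b = embed("I forgot my login credentials")
similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))   # cosine similarity
print(similarity)   # closer to 1.0 means more semantically similar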

Azure OpenAI's relationship to Azure AI services

Azure's AI services are tools for solving AI workloads and can be categorized into three groupings:
Azure's Machine Learning platform, Cognitive Services, and Applied AI Services.

Azure Cognitive Services has five pillars: vision, speech, language, decision, and the Azure OpenAI
Service. The services you choose to use depend on what you need to accomplish. In particular, there are
several overlapping capabilities between Cognitive Services' Language service and Azure OpenAI's
service, such as translation, sentiment analysis, and keyword extraction.

While there's no strict guidance on when to use a particular service, Azure's existing Language service
can be used for widely known use-cases that require minimal tuning (the process of optimizing a model's
performance). Azure OpenAI's service may be more beneficial for use-cases that require highly
customized generative models, or for exploratory research.

Note

Pricing is different for Azure OpenAI and Azure Cognitive Service for Language. Learn more here.
When making business decisions about what type of model to use, it's important to understand how
time and compute needs factor into machine learning training. In order to produce an effective machine
learning model, the model needs to be trained with a substantial amount of cleaned data. The 'learning'
portion of training requires a computer to identify an algorithm that best fits the data. The complexity of
the task the model needs to solve and the desired level of model performance both factor into the time
required to run through possible solutions to find a best-fit algorithm.


How to use Azure OpenAI


Currently you need to apply for access to Azure OpenAI. Once granted access, you can use the service by
creating an Azure OpenAI resource, like you would for other Azure services. Once the resource is
created, you can use the service through REST APIs, Python SDK, or the web-based interface in the Azure
OpenAI Studio.
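For example, a call through the Python SDK might look like the following hedged sketch, written against the pre-1.0 openai package; every angle-bracketed value is a placeholder for your own resource details.

Python

# Hedged sketch of calling Azure OpenAI through the openai Python package.
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"
openai.api_key = "<your-key>"                                  # placeholder

response = openai.Completion.create(
    engine="<your-deployment-name>",   # the model deployment you created
    prompt="Summarize this text into a short blurb: ...",
    max_tokens=100,
)
print(response["choices"][0]["text"])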

Note

To learn more about the basics of APIs, check out this infographic on how Azure APIs work.

Azure OpenAI Studio

In the Azure OpenAI Studio, you can build AI models and deploy them for public consumption in
software applications. Azure OpenAI's capabilities are made possible by specific generative AI models.
Different models are optimized for different tasks; some models excel at summarization and providing
general unstructured responses, and others are built to generate code or unique images from text input.

These Azure OpenAI models fall into a few main families:

- GPT-4

- GPT-3

- Codex

- Embeddings

- DALL-E

Azure OpenAI's AI models can all be trained and customized with fine-tuning. We won't go into custom
models here, but you can learn more in the Azure documentation on fine-tuning your model.

Important

Generative AI models only ever have a probability of reflecting true values; they aren't guaranteed to be
accurate. Higher performing models, such as models that have been fine-tuned for specific tasks, do a
better job of returning responses that reflect true values. It's important to review the output of
generative AI models.

Playgrounds

In the Azure OpenAI Studio, you can experiment with OpenAI models in playgrounds. In
the Completions playground, you can type in prompts, configure parameters, and see responses without
having to code.

In the Chat playground, you can use the assistant setup to instruct the model about how it should
behave. The assistant will try to mimic the tone, rules, and format you've defined in your system
message.
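The same assistant setup can be expressed in code. Here's a hedged sketch using the pre-1.0 openai Python package, where the system message plays the role of the playground's assistant setup; angle-bracketed values are placeholders.

Python

# Chat request where the system message defines the assistant's behavior.
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"
openai.api_key = "<your-key>"                                  # placeholder

response = openai.ChatCompletion.create(
    engine="<your-gpt-35-turbo-deployment>",
    messages=[
        {"role": "system", "content": "You answer in a friendly tone, in one sentence."},
        {"role": "user", "content": "What does Azure OpenAI do?"},
    ],
)
print(response["choices"][0]["message"]["content"])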

Understand OpenAI's natural language capabilities


Azure OpenAI's natural language models are able to take in natural language and generate responses.

Natural language learning models are trained on words or chunks of characters known as tokens. For
example, the word "hamburger" gets broken up into the tokens ham, bur, and ger, while a short and
common word like "pear" is a single token. These tokens are mapped into vectors for a machine learning
model to use for training. When a trained natural language model takes in a user's input, it also breaks
down the input into tokens.
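For illustration, the open-source tiktoken library, which isn't part of this module, exposes one such tokenizer; the exact splits depend on the encoding and may differ from the hamburger example above.

Python

# Inspecting how a tokenizer splits words into tokens.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")
for word in ["hamburger", "pear"]:
    token_ids = encoding.encode(word)
    pieces = [encoding.decode([t]) for t in token_ids]
    print(word, "->", pieces)   # short, common words may map to a single token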

Understanding GPT models for natural language generation

Generative pre-trained transformer (GPT) models are excellent at both understanding and creating
natural language. If you've seen recent news around AI answering questions or writing a paragraph
based on a prompt, it likely could have been generated by a GPT model. GPT models often have the
version appended to the end, such as GPT-3 or GPT-4. Azure OpenAI offers preview access to ChatGPT
powered by gpt-35-turbo and to GPT-4. You can apply for access to GPT-4 here.

What does a response from a GPT model look like?

A key aspect of OpenAI's generative AI is that it takes an input, or prompt, to return a natural language,
visual, or code response. GPT tries to infer, or guess, the context of the user's question based on the
prompt.

GPT models are great at completing several natural language tasks, some of which include:

- Summarizing text: "Summarize this text into a short blurb"

- Classifying text: "What genre of book is this?"

- Generating names or phrases: "Write a tagline for my flower company"

- Translation: "Translate 'How are you' to French"

- Answering questions: "What does Azure OpenAI do?"

- Suggesting content: "Give me the five best wedding songs"

For example, given a prompt where the user types in text asking for a cooking recipe:

Write a recipe based on these ingredients and instructions:

- Fruit Pie
- Ingredients:
- Strawberries
- Blueberries
- Flour
- Eggs
- Milk
Here's an example of the OpenAI API generated response:

Instructions:

- Preheat your oven to 350 degrees Fahrenheit.
- In a medium bowl, mix together 1 cup of flour and 1 egg until it forms a dough.
- Grease a 9-inch pie dish with butter or non-stick cooking spray.
- Roll out the dough and place it in the pie dish.
- In a separate bowl, mix together 1 cup of strawberries, 1 cup of blueberries, and 1 cup of milk.
- Pour the fruit mixture into the pie dish.
- Bake
Delicious - maybe! It's important to understand that the generated responses are best guesses from a
machine. In this case, the generated text may be useful for cooking something that tastes good in real
life, or not.

How models are applied to new use cases

You may have tried out ChatGPT's predictive capabilities in a chat portal, where you can type prompts
and receive automated responses. The portal consists of the front-end user interface (UI) users see, and
a back-end that includes a generative AI model. The combination of the front and back end can be
described as a chatbot. The model provided on the back end is what is available as a building block with
both the OpenAI API and Azure OpenAI API. You can utilize ChatGPT's capabilities on Azure OpenAI via
the GPT-35-turbo model. When you see generative AI capabilities in other applications, developers have
taken the building blocks, customized them to a use case, and built them into the back end of new front-
end user interfaces.


Understand OpenAI code generation capabilities


Code generation AI models are able to take natural language or code snippets and translate them into
code. The OpenAI code generation model family, Codex, is proficient in more than a dozen languages,
such as C#, JavaScript, Perl, and PHP, and is most capable in Python.

Codex

Codex models are based on GPT-3 and are optimized to understand and write code. These models
have been trained on both natural language and billions of lines of code from public repositories. Codex
is able to generate code from natural language instructions such as code comments, and can suggest
ways to complete code functions.

For example, given the prompt "Write a for loop counting from 1 to 10 in Python," the following answer
is provided:

Python

for i in range(1, 11):
    print(i)

Code generation models can help developers code faster, understand new coding languages, and focus
on solving bigger problems in their application. Developers can break down their goal into simpler tasks
and use Codex to help build out those tasks using known patterns.

Examples of code generation

Part of the training data for GPT-3 included programming languages, so it's no surprise that GPT models
can answer programming questions if asked. What's unique about the Codex model family is that it's
more capable across more languages than GPT models.

Codex goes beyond just writing code from natural language prompts. Given the following code, it can
generate unit tests:

Python

# Python 3

def mult_numbers(a, b):
    return a * b

# Unit test
def

Codex builds out unit tests for our function:

Python

# Python 3

def mult_numbers(a, b):
    return a * b

# Unit test
def test_mult_numbers():
    assert mult_numbers(3, 4) == 12
    assert mult_numbers(0, 10) == 0
    assert mult_numbers(4, 0) == 0

# Unit test
def test_mult_numbers_negative():
    assert mult_numbers(-1, 10) == -10
    assert mult_numbers(10, -1) == -10

Codex can also summarize functions that are already written, explain SQL queries or tables, and convert
a function from one programming language into another.

When interacting with Codex models, you can specify libraries or language-specific tags to make it clear
to Codex what you want. For example, you can provide this prompt formatted as an HTML comment: <!--
build a page titled "Let's Learn about AI" -->, and get this as a result:

HTML

<html>
<head>
    <title>Let's Learn about AI</title>
    <link rel="stylesheet" type="text/css" href="css/styles.css">
</head>
<body>
    <div class="wrapper">
        <div class="header">
            <h1>Let's Learn about AI</h1>
        </div>
        <div class="nav">
            <ul>
                <li><a href="index.php">Home</a></li>
                <li><a href="about.php">About</a></li>
                <li><a href="resources.php">Resources</a></li>
                <li><a href="contact.php">Contact</a></li>
            </ul>
        </div>
        <div class="content">
            <h2>Contact</h2>
            <p>
                <!-- build a form to contact the site owner -->
                <form action="contact.php" method="post">
                    <label for="name">Name:</label>
                    <input type="text" name="name" id="name" required>
                    <label for="email">Email:</label>
                    <input type="email" name="email" id="email" required>
                    <label for="subject">Subject:</label>
                    <input type="text" name="subject" id="subject" required>
                    <label for="message">Message:</label>
                    <textarea name="message" id="message" required></textarea>
                    <input type="submit" value="Send">
                </form>
            </p>
        </div>
        <div class="footer">
            <p>Let's Learn about AI</p>
        </div>
    </div>
</body>
</html>

GitHub Copilot

OpenAI partnered with GitHub to create GitHub Copilot, which they call an AI pair programmer. GitHub
Copilot integrates the power of OpenAI Codex into a plugin for developer environments like Visual Studio
Code.

Once the plugin is installed and enabled, you can start writing your code, and GitHub Copilot starts
automatically suggesting the remainder of the function based on code comments or the function name.
For example, given only a function name in a file, GitHub Copilot suggests gray text to complete the
function.

GitHub Copilot offers multiple suggestions for code completion, which you can tab through using
keyboard shortcuts. When given informative code comments, it can even suggest a function name along
with the complete function code.

Understand OpenAI's image generation capabilities


Image generation models can take a prompt, a base image, or both, and create something new. These
generative AI models can create both realistic and artistic images, change the layout or style of an image,
and create variations on a provided image.

DALL-E

In addition to natural language capabilities, generative AI models can edit and create images. The model
that works with images is called DALL-E. Much like GPT models, subsequent versions of DALL-E are
appended onto the name, such as DALL-E 2. Image capabilities generally fall into the three categories of
image creation, editing an image, and creating variations of an image.

Image generation

Original images can be generated by providing a text prompt describing what you would like the image
to be. The more detailed the prompt, the more likely it is that the model will produce the desired result.

With DALL-E, you can even request an image in a particular style, such as "a dog in the style of Vincent
van Gogh". Styles can be used for edits and variations as well.

For example, given the prompt "an elephant standing with a burger on top, style digital art", the model
generates digital art images depicting exactly what is asked for.
When asked for something more generic like "a pink fox", the generated images are more varied and
simpler while still fulfilling what is asked for.

However, when the prompt is made more specific, such as "a pink fox running through a field, in the
style of Monet", the model generates images that are more detailed and more consistent with one
another.

Editing an image

When provided an image, DALL-E can edit the image as requested by changing its style, adding or
removing items, or generating new content to add. Edits are made by uploading the original image and
specifying a transparent mask that indicates what area of the image to edit. Along with the image and
mask, a prompt indicating what is to be edited instructs the model to then generate the appropriate
content to fill the area.

When given one of the above images of a pink fox, a mask covering the fox, and the prompt "blue
gorilla reading a book in a field", the model creates edits of the image based on the provided input.

Image variations

Image variations can be created by providing an image and specifying how many variations of the image
you would like. The general content of the image stays the same, but aspects such as where subjects are
located or looking, the background scene, and the colors may change.

For example, if I upload one of the images of the elephant wearing a burger as a hat, I get variations of
the same subject.
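In code, generation and variations might look like the following hedged sketch against the OpenAI API (pre-1.0 openai package); Azure access and endpoints differ, the key is a placeholder, and the local file name is hypothetical.

Python

# Illustrative image generation and variations with the openai package.
import openai

openai.api_key = "<your-key>"   # placeholder

# Generate a new image from a text prompt.
generated = openai.Image.create(
    prompt="a pink fox running through a field, in the style of Monet",
    n=1,
    size="512x512",
)
print(generated["data"][0]["url"])

# Request two variations of an existing image.
variations = openai.Image.create_variation(
    image=open("elephant.png", "rb"),   # hypothetical local file
    n=2,
    size="512x512",
)
print([item["url"] for item in variations["data"]])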

Note

Access to DALL-E is currently granted on an invite basis only.


Describe Azure OpenAI's access and responsible AI policies


It's important to consider the ethical implications of working with AI systems. Azure OpenAI provides
powerful natural language models capable of completing various tasks and operating in several different
use cases, each with its own considerations for safe and fair use. Teams or individuals tasked with
developing and deploying AI systems should work to identify, measure, and mitigate harm.

Usage of Azure OpenAI should follow the six Microsoft AI principles:

- Fairness: AI systems shouldn't make decisions that discriminate against or support bias of a group or individual.

- Reliability and Safety: AI systems should respond safely to new situations and potential manipulation.

- Privacy and Security: AI systems should be secure and respect data privacy.

- Inclusiveness: AI systems should empower everyone and engage people.

- Accountability: People must be accountable for how AI systems operate.

- Transparency: AI systems should have explanations so users can understand how they're built and used.
Responsible AI principles guide Microsoft's Transparency Notes on Azure OpenAI, as well as explanations
of other products. Transparency Notes are intended to help you understand how Microsoft's AI
technology works, the choices system owners can make that influence system performance and
behavior, and the importance of thinking about the whole system, including the technology, the people,
and the environment.

If you haven't completed the Get started with AI on Azure module, it's worth reviewing its unit
on responsible AI.

Limited access to Azure OpenAI


As part of Microsoft's commitment to using AI responsibly, access to Azure OpenAI is currently limited.
Customers that wish to use Azure OpenAI must submit a registration form for both initial
experimentation access, and again for approval for use in production.

Additional registration is required for customers who want to modify content filters or modify abuse
monitoring settings.

To apply for access and learn more about the limited access policy, see the Azure OpenAI limited
access documentation.


Knowledge check
1. How are ChatGPT, OpenAI, and Azure OpenAI related?

- Azure OpenAI is Microsoft's version of ChatGPT, a chatbot that uses generative AI models.

- ChatGPT and OpenAI are chatbots that generate natural language, code, and images. Azure OpenAI provides access to these two chatbots.

- OpenAI is a research company that developed ChatGPT, a chatbot that uses generative AI models. Azure OpenAI provides access to many of OpenAI's AI models.

2. You would like to summarize a paragraph of text. Which generative AI model family would you use to solve for this workload?

- GPT.

- Codex.

- DALL-E.

3. What is one action Microsoft takes to support ethical AI practices in Azure OpenAI?

- Provides Transparency Notes that share how technology is built and asks users to consider its implications.

- Logs users out of Azure OpenAI Studio after a period of inactivity to ensure it's only used by one user.

- Allows users to build any application, regardless of harmful effects, to ensure fairness.


Summary
This module introduced you to the concept of generative AI and how Azure OpenAI Service provides
access to generative AI models.
In this module, you also learned how to:

- Describe Azure OpenAI workloads and how to access the Azure OpenAI Service

- Understand generative AI models

- Understand Azure OpenAI's language, code, and image capabilities

- Understand Azure OpenAI's Responsible AI practices and Limited Access Policy
To continue learning about Azure OpenAI and find resources for implementation, you can check out
the documentation on Azure OpenAI and the Develop AI solutions with Azure OpenAI Learning Path.
