
Once this transcoding is complete, the transcoded video will be stored in a Cloud Storage bucket in both of these options. If they are not interested in using the Transcoder API, a managed service,
- can use a third-party transcoding library (for example, ffmpeg), both in Cloud Run and in an App Engine application here.

The next requirement is to expose their prediction model to their partners.


- use Apigee; they can expose an API that serves information from their prediction model, which they can
expose to their partners. Their last requirement relates to minimizing the operational complexity.
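As a sketch of what a partner-facing call could look like once the API is published through Apigee (the hostname, path, and API-key header below are purely illustrative assumptions, not from the course):

```python
# Sketch of how a partner might call the prediction API exposed via
# Apigee; the hostname, path, and API key are all hypothetical.
import urllib.request


def build_prediction_request(race_id, api_key):
    """Build (but do not send) the HTTPS request for a race prediction."""
    return urllib.request.Request(
        f"https://api.example.com/v1/predictions/{race_id}",
        headers={"x-api-key": api_key},
    )
```

Apigee would sit in front of this endpoint to handle API keys, quotas, and partner onboarding.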

They can use managed, serverless services from Google Cloud, like Cloud Logging and Cloud Monitoring.

This solution represents:


- a data processing pipeline, for both batch and real-time processing
- AI Platform is used for the race predictions, alongside BigQuery and BigQuery ML,
- covering both the data analysis and the machine learning use cases

- the video transcoding flow, which uses Cloud Storage, Cloud Functions, and the Transcoder API.
It uses the Transcoder API to do the actual transcoding
- once transcoding is complete, the transcoded
video will be stored back in a Cloud Storage bucket
- the Video Intelligence API is used for video analysis
- a real-time messaging server running inside App Engine allows live video streaming ingestion
from the truck data center.
- can expose their prediction model to their partners by exposing a REST API via Apigee, and serve the data from the
location nearest to the customer; data is served to the customers from the nearest point-of-presence location.

Helicopter Racing League wants to stream its races either on demand or through a live stream. They are already in the cloud.

Main business requirements:


- expand the predictive capabilities and reduce latency
- reduce the latency for their viewers in emerging markets
- ability to expose the predictive models to partners

- increase predictive capabilities during and before races, for things like race results or mechanical failures
- increase telemetry and create additional insights

- measure fan engagement with new predictions, and increase the number of concurrent viewers


- scale as much as possible based on the concurrent viewers who are watching the stream or the
on-demand video
-minimize operational complexity
-ensure compliance with regulations.

technical requirements
- maintain or increase prediction throughput and accuracy
- reduce viewer latency
- increase the transcoding performance, and create real-time analytics

Diagram: on the left side you see what happens at the track; on the right side, you see the things that are happening in their cloud
- they record the race using cameras, and you have your media production people who are going
to sit near the race
- they make any changes to the files and drop those files into the storage bucket,

Encoding and transcoding to stream, or to put those media files into an on-demand space
- to encode and transcode (reduce the size of the file, for example from 1080p to 720p)
- to adaptively stream based on the internet connection of the user and on the size of the device, whether it is a mobile
phone or a laptop or something like that.
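The 1080p-to-720p step above is one rung of an adaptive bitrate ladder. A minimal sketch of such a ladder, with illustrative resolutions and bitrates (not taken from the case study):

```python
# Sketch of an adaptive-bitrate "ladder": given the source height,
# list the renditions a player can switch between based on the
# viewer's bandwidth and device. Bitrates here are illustrative.

LADDER = [
    (1080, 5000),  # height in pixels, video bitrate in kbps
    (720, 2800),
    (480, 1400),
    (360, 800),
]


def renditions_for(source_height):
    """Return every ladder rung at or below the source resolution."""
    return [(h, kbps) for h, kbps in LADDER if h <= source_height]
```

The player then picks the highest rendition the current connection can sustain.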

- TensorFlow runs in VMs, and the models are used for the predictions.

What do they do today?


- all of this stuff runs in a data center on wheels (a huge truck)
- connectivity from the truck to the cloud goes through a high-bandwidth network
- they drop the files into object storage; and from there

- they use VMs again to do the encoding and transcoding (this increases their cloud bill
tremendously; it is also not an efficient way of doing things)
- if there are a thousand jobs in the future, or a hundred thousand jobs, then you would have to create
a hundred thousand VMs, which is not operationally sustainable in the cloud; the same applies to TensorFlow.

They are using VMs for TensorFlow to train the models; that is the setup at a very high level.

Let's focus more on the video transcoding


- they want to improve the performance of the transcoding and want to reduce the latency.
They drop the file into Cloud Storage, and Pub/Sub is notified whenever there is
a new object in the Cloud Storage bucket; then an App Engine worker starts the transcoding and
ultimately puts the output files into the Cloud Storage destination.
We are trying to use App Engine workers, which are serverless compute; that way, we are reducing the
operational complexity. It's more like a NoOps model.
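The notification-driven flow above can be sketched as a small planning step. The bucket names are hypothetical, and the event shape assumed here follows a Cloud Storage object-finalize notification payload:

```python
# Sketch of the event-driven flow: Cloud Storage publishes an
# object-finalize notification, and a worker derives the transcoding
# job from the event payload. Bucket names are hypothetical.

def plan_transcode(event):
    """Map a GCS object-finalize event to a transcode job description."""
    source = f"gs://{event['bucket']}/{event['name']}"
    # Write outputs to a separate destination bucket (hypothetical name)
    # so finished files do not re-trigger the pipeline.
    dest = f"gs://hrl-transcoded/{event['name'].rsplit('.', 1)[0]}/"
    return {"input_uri": source, "output_uri": dest}
```

An App Engine worker (or a Cloud Function) subscribed to the Pub/Sub topic would run this kind of logic for every new object.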

The second thing to note here, on the top right-hand side: we're using ffmpeg, which is, I think, an open-source
transcoding tool. ffmpeg is available as a Docker image.

If the exam asks you about App Engine standard versus flexible: if a customer is using ffmpeg
for transcoding, then this architecture makes sense, or you can also use Kubernetes depending on the exam options.

You can either go with Kubernetes or with App Engine flexible, because
App Engine flexible supports Docker containers.
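As a sketch of what that could look like: `runtime: custom` tells App Engine flexible to build from a Dockerfile, which would install ffmpeg. This is an illustrative fragment, not the course's actual config:

```yaml
# app.yaml: a minimal App Engine flexible config using a custom
# (Docker) runtime; the accompanying Dockerfile would install ffmpeg.
runtime: custom
env: flex
```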

So,
possible solution (1)
- the ffmpeg software will be installed, or its image will be available, as a Docker image
- the transcoding will happen; it will scale almost infinitely based on the number of objects in the bucket
- once the transcoding is completed, it will create the different output files and store them in the storage bucket.

- use another application to serve that content on demand to the user, or to stream it, and so on.

possible solution (2)


- the Transcoder API is a service in Google Cloud that will help you transcode the video files
- choose the destination storage bucket, and it will drop the files there
(not sure whether the exam will give you the Transcoder API; if it is given, you don't have to use App Engine flexible,
you can just use App Engine standard, for that matter).

- you can also use Cloud Functions; anything will work, as long as you are consuming the API through your
service account
- then the transcoding happens, and then
- the files are dropped into Cloud Storage (the only change here is that the option would be the Transcoder API).
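A hedged sketch of the request body such a Cloud Function might send when creating a Transcoder API job, using the built-in web-hd preset (the bucket names are hypothetical):

```python
# Sketch of a Transcoder API job request body. "preset/web-hd" is a
# built-in job template; bucket names here are hypothetical.

def transcoder_job(input_uri, output_dir):
    """Build the job body for POST .../locations/{loc}/jobs."""
    return {
        "inputUri": input_uri,
        "outputUri": output_dir,        # destination prefix, ends with "/"
        "templateId": "preset/web-hd",  # built-in HD transcoding preset
    }
```

The service account running the function would need permission to call the Transcoder API and to read/write the buckets.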

- on-demand content: use GCS to transfer the files, and after transcoding the files are stored in Cloud Storage
- Cloud CDN: Google has a lot of points of presence across the world (very low latency)
- the media files are cached across the different PoPs, and from there the users can view them.

They improve the user experience by giving predictions about the races and all those things.
- currently the company has been using TensorFlow

- they can use the same TensorFlow model, spin up TensorFlow VMs on AI Platform in Google
Cloud, and use notebooks to continuously train the models from the right side
- once the model is ready, deploy the model to AI Platform

- expose the model as a REST API.
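A minimal sketch of the JSON body such a REST prediction call could carry; the feature names below are hypothetical, not from the case study:

```python
# Sketch of an online-prediction request body in the common
# {"instances": [...]} shape; the feature names are hypothetical.

def predict_request(lap_times, weather_code):
    """Build the JSON body for one prediction instance."""
    return {"instances": [{"lap_times": lap_times, "weather": weather_code}]}
```

The deployed model endpoint would return a matching `{"predictions": [...]}` list, one entry per instance.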


This has the capability to expose the model to their partners via a REST API, and
it has the capability of providing very low latency to the customers.
