in both these options. If they are not interested in using the Transcoder API, a managed service,
-they can use a third-party transcoding library, both in other compute services and in an App Engine application here.
-They can also use Cloud operations products such as Cloud Logging and Cloud Monitoring.
-The diagram represents the video transcoding flow, which uses Cloud Storage, Cloud Functions, and the Transcoder API.
-It then calls the Transcoder API to do the actual transcoding.
-Once transcoding is complete, the video is stored back in a Cloud Storage bucket.
-The Video Intelligence API is used for video analysis.
-A real-time messaging server runs inside App Engine, which enables live video streaming, in essence, from their data center.
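The bucket → Cloud Function → Transcoder API flow above can be sketched as a small handler that turns a Cloud Storage event into a Transcoder job request. This is a minimal sketch under assumptions: the bucket names are hypothetical, `preset/web-hd` is one of the built-in job presets, and in a real Cloud Function the returned body would be submitted with the `google-cloud-video-transcoder` client rather than just returned.

```python
def build_transcode_job(event):
    """Build a Transcoder API job body (REST shape) from a GCS finalize event.

    `event` is the dict a storage-triggered Cloud Function receives;
    the bucket and object names used here are hypothetical.
    """
    src = f"gs://{event['bucket']}/{event['name']}"
    # Write renditions to a sibling bucket, one folder per source video.
    stem = event["name"].rsplit(".", 1)[0]
    dst = f"gs://{event['bucket']}-renditions/{stem}/"
    return {
        "inputUri": src,
        "outputUri": dst,
        "templateId": "preset/web-hd",  # built-in preset for HD web delivery
    }
```

Because the Transcoder API is the managed service doing the work, the function itself stays tiny; it only describes the job.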
-They can expose their prediction model to partners through a REST API (e.g., via Apigee), serving the data from the location nearest to the customer; data reaches customers from the nearest point of presence.
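Exposing the prediction model as a REST API means partners POST JSON to a predict endpoint. A minimal sketch of building such a request body, assuming an AI Platform-style `{"instances": [...]}` envelope; the telemetry field names are made up for illustration.

```python
import json

def build_prediction_request(telemetry):
    # Wrap one telemetry record in the {"instances": [...]} envelope
    # that AI Platform-style predict endpoints expect.
    # The keys inside `telemetry` are hypothetical example fields.
    return json.dumps({"instances": [telemetry]})
```

A partner-facing gateway in front of this endpoint would handle authentication and route the call to the nearest point of presence.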
Heli2.mp4
Helicopter Racing League wants to stream its races either on demand or as a live stream. They want to:
-increase predictive capabilities during and before races, e.g., race results or mechanical failures
-increase telemetry and create additional insights
Technical requirements:
-maintain or increase prediction throughput and accuracy
-reduce viewer latency
-increase transcoding performance and create real-time analytics
Diagram: on the left side you see what happens at the race venue; on the right side, what happens in their cloud.
-The race is recorded using cameras, and the media production people sit near the race track.
-They make any changes to the files and then drop those files into the storage bucket.
Encoding and transcoding, to stream or to put those media files into an on-demand space:
-encode and transcode (e.g., reduce the file size from 1080p to 720p)
-adaptively stream based on the user's internet connection and on the size of the device, whether it is a mobile phone, a laptop, or something like that
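Adaptive streaming essentially means picking the best rendition each viewer can handle. A minimal sketch of that selection logic, using an illustrative bitrate ladder (not HRL's actual one); in practice the player does this client-side by switching between variants in an HLS/DASH manifest.

```python
def pick_rendition(bandwidth_kbps, screen_height):
    """Pick the highest rendition both the connection and the screen support.

    The (height, minimum bandwidth) ladder below is a typical illustrative
    one, not taken from any real service.
    """
    ladder = [
        (1080, 5000),
        (720, 2800),
        (480, 1400),
        (360, 700),
    ]
    for height, min_bw in ladder:
        if screen_height >= height and bandwidth_kbps >= min_bw:
            return height
    return 240  # fallback for very slow connections or tiny screens
```

So a laptop on fast Wi-Fi gets 1080p, while the same laptop on a congested connection drops to 720p or lower.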
-Today they use VMs to do the encoding and transcoding. This increases their cloud bill tremendously, and it is not an efficient way of doing things.
-If there are, say, a thousand jobs in the future, or a hundred thousand jobs, you would have to create a hundred thousand VMs, which is not operationally sustainable in the cloud. The same applies to TensorFlow: they also use VMs to train the TensorFlow models, at a very high level.
The second thing to note, on the top right-hand side, is ffmpeg, which is open-source transcoding software. ffmpeg is available as a Docker image.
If the exam asks about App Engine standard versus flexible: if a customer is using ffmpeg for transcoding, this architecture makes sense. Depending on the exam options, you can go either with Kubernetes or with App Engine flexible, because App Engine flexible supports Docker containers.
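Whether the container runs in App Engine flexible or on Kubernetes, it ultimately invokes ffmpeg. A sketch of building the command line for the 1080p → 720p case mentioned earlier; the filenames are placeholders, and in the container you would pass this argument list to `subprocess.run`.

```python
def ffmpeg_downscale_args(src, dst, height=720):
    """Build the ffmpeg command line to downscale a video, e.g. 1080p -> 720p.

    `-vf scale=-2:<h>` keeps the aspect ratio with an even width;
    `-c:a copy` copies the audio stream unchanged instead of re-encoding it.
    """
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",
        "-c:a", "copy",
        dst,
    ]
```

Keeping the invocation as an argument list (rather than a shell string) avoids quoting issues when filenames contain spaces.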
Possible solution (1)
-The ffmpeg software is installed, i.e., its image is available as a Docker image.
-Transcoding happens there, and it scales out based on the number of objects in the bucket.
-Once transcoding is completed, it creates the different output files and stores them in the storage bucket.
-Another application serves that content to the user on demand, or streams it.
-You can also use Cloud Functions; anything will work as long as you are consuming the API through your service account.
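The "scales with the number of objects" point can be made concrete: each upload to the bucket triggers one invocation, and each invocation plans its own set of output renditions, so a hundred thousand uploads simply mean a hundred thousand independent invocations. A minimal sketch; the output paths and rendition heights are assumptions.

```python
def plan_renditions(event, heights=(720, 480, 360)):
    """For one new upload, plan one output object per target rendition.

    `event` is the storage-trigger event dict; the `transcoded/` prefix
    and the default rendition heights are illustrative choices.
    """
    stem = event["name"].rsplit(".", 1)[0]
    return [f"transcoded/{stem}_{h}p.mp4" for h in heights]
```

Each planned output would then be produced by an ffmpeg run or a Transcoder API job, depending on which option the question offers.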
-Then transcoding happens, and
-the files are dropped into Cloud Storage. (The only change here is whether the option would be the Transcoder API.)
-On-demand content: use GCS to transfer the files, and after transcoding the files are stored in Cloud Storage.
-Cloud CDN: Google has many points of presence (POPs) across the world, so latency is very low.
-Media files are cached across the different POPs, and users view the content from there.
Improve the user experience by giving predictions about the races and so on:
-Currently the company has been using TensorFlow.
-They can use the same TensorFlow model, spin up TensorFlow VMs in AI Platform on Google Cloud, and use notebooks to continuously train the models on the data coming in from the race side.
-Once the model is ready, deploy it to AI Platform.
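At a very high level, the continuous-training loop those notebooks run amounts to: periodically refit the model on whatever data has arrived so far. The sketch below uses plain-Python gradient descent on a toy linear model as a stand-in for the TensorFlow job; every name and number in it is illustrative, not HRL's actual model.

```python
def train_linear(samples, epochs=200, lr=0.01):
    """Fit y = w*x + b by gradient descent on the samples seen so far.

    A toy stand-in for a TensorFlow training loop: `samples` is a list
    of (x, y) pairs, and the returned (w, b) is the fitted model.
    """
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in samples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in samples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Rerunning this whenever new race data lands is the essence of "continuously train"; deploying the resulting model to AI Platform then serves predictions behind the REST API discussed earlier.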