
Editing for Multiple Screens

Learning Outcomes:

• The Changing Shape of Cinema: Aspect Ratio
• Digital Media and alternative viewing spaces
• Immersive technologies of the new digital reality: 360, VR, AR & MR
• Editing pipeline
• 360 4K Video Editing

The Changing Shape of Cinema: Aspect Ratio

Before we begin discussing multiple screens and their nuances, it is important to
look back and see how cinema has changed over time with its varying aspect
ratios and formats. In simple words, aspect ratio can be defined as the shape of the
screen onto which the image is projected or displayed, depending on the theatre,
screen or, in today's day and age, device. For a film-maker it can be considered a
canvas; always remember that it is not the size of your canvas that matters but
what you portray on it.

The very first aspect ratio, from the very first motion picture shown to a public
audience, "Dickson Greeting" (1891), is credited to Thomas Edison's staff
photographer William Kennedy Dickson, and it was a 4 by 3 ratio. In 1909 the
Motion Picture Patents Company declared 35mm film with Edison perforations
and a 4 by 3 image the standard across the United States. This stayed the same for
nearly two decades until synchronized sound arrived in 1929; audio was recorded
simultaneously on one side of the film strip, shifting the aspect ratio from 1.33 to
1.37. This ratio was then made standard in 1932 by the Academy of Motion
Picture Arts and Sciences to make room for the soundtrack.

In the 1950s the motion picture industry was forced to restructure to compete
with television, which pushed films to offer something more than television
could, and thus began a decade-long war over widescreen film formats and the
beginning of a new cinema aesthetic. This new wave began with Fred Waller,
who pioneered multi-camera projection with "Cinerama", projecting
simultaneously with three projectors and boasting a seven-track surround sound
system. Although expensive, it ran for two years at the Warner Theatre in New
York City. It was only after a decade that Cinerama was used in dramatic films,
The Wonderful World of the Brothers Grimm (1962) and How the West Was
Won (1962).

After seeing the impact of Cinerama, executives at 20th Century Fox approached
Professor Henri Chrétien, who had invented the "Anamorphoscope". Using a
2-to-1 anamorphic lens, this method, which Fox called CinemaScope, delivered a
2.35 aspect ratio; the very first feature released in the format was the 1953 film
"The Robe", which became a blockbuster. Later Paramount developed
"VistaVision", which took traditional 35mm film and turned it on its side, once
again changing the aspect ratio, this time to 1.85. Alfred Hitchcock shot several
films on VistaVision.

Various other formats became mainstream during the 1950s, such as Vistarama,
Superscope and Technirama, although further progression with 35mm did not
have much scope, which led film engineers to go bigger. Mike Todd, together
with the American Optical Company, developed the 70mm Todd-AO format and
created a new aspect ratio of 2.20. In 1954 Panavision began manufacturing
anamorphic lenses, which eventually replaced CinemaScope. MGM
(Metro-Goldwyn-Mayer Studios) used its 65/70mm system, MGM Camera 65, to
capture the famous chariot race of Ben-Hur (1959) in a super-wide aspect ratio of
2.76; the related Super Panavision 70 format was later used for the Oscar-winning
film "Lawrence of Arabia" (1962). But 70mm, being expensive, was used only for
special purposes.

In the 1970s came IMAX, with an aspect ratio of 1.43:1. It was similar in spirit to
VistaVision, where 35mm was run horizontally: in the IMAX format 70mm film
is run horizontally, giving theoretically around three times the resolution of
conventional 70mm. Christopher Nolan has been a long-time supporter of the
IMAX 70mm format since the 2000s; his film The Dark Knight (2008) features
around 28 minutes of sequences shot on IMAX.

So far, we have discussed the 1.33 or 4x3 ratio associated with the silent era, the
1.37 Academy ratio, the 2.59 Cinerama ratio, the 2.35 CinemaScope ratio, the
1.85 VistaVision ratio, Todd-AO and Super Panavision 70 with their 2.20 ratio
(Lawrence of Arabia), the super-wide 2.76 ratio of Ben-Hur (1959), and the
1.43:1 IMAX ratio.

The 1980s brought one of the most popular aspect ratios, still dominant today:
the 16 x 9 ratio, or 1.77, which is credited to television. In the late 1980s Dr
Kerns H. Powers, an engineer, suggested this new aspect ratio, and it more or less
became the default standard for widescreen, from DVDs to Ultra High Definition
4K computer screens and digital television. 16 x 9 is the geometric mean between
4x3 and 2.35, the two most extreme ratios of the time.
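The geometric-mean relationship is easy to verify with a quick calculation; the short Python sketch below (illustrative only, not from the original text) shows that the mean of 4:3 and 2.35:1 lands almost exactly on 16:9.

    # Verify that 16:9 is (approximately) the geometric mean of 4:3 and 2.35:1.
    import math

    academy_tv = 4 / 3       # ~1.33, the narrow extreme of the era
    cinemascope = 2.35       # the wide extreme of the era

    mean_ratio = math.sqrt(academy_tv * cinemascope)
    print(round(mean_ratio, 2))   # 1.77
    print(round(16 / 9, 2))       # 1.78, so the two are nearly identical

A 16:9 frame therefore accommodates both extremes with roughly equal letterboxing or pillarboxing, which helps explain why it became the compromise widescreen standard.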

Digital Media and alternative viewing spaces

The early 21st century was the dawn of a digital era that evolved at lightning
pace, rapidly changing the function of media in our society. During the last two
decades both the delivery mechanisms and the channels through which we receive
media have changed drastically, providing an array of new media platforms.

This phase is often referred to as new media technology and began around the
same time that computers became mainstream. At present, time and geographical
location have become more or less irrelevant with the advent of satellite and
computer networks, creating what McLuhan describes as the "Global Village",
where there are no barriers and information flows freely.

Now most television manufacturers offer internet-enabled screens, giving access
to YouTube, Amazon Prime, Hulu and Netflix directly on the television set. One
of the major channels for on-demand media consumption is the smartphone;
during the last five years there has been unprecedented growth in the smartphone
market, which has challenged the supremacy of television as the mainstream
entertainment hub. With the increase in media-enabled devices and rising internet
speeds, consumers have access to media 24/7 and an ever-wider range of choices.
The evolving media landscape, both locally and globally, brings opportunities as
well as challenges. Media is no longer an exclusive club; it shapes individual
identity and culture as technology changes how we see ourselves and the world
around us.

Immersive technologies of the new digital reality: 360, VR, AR & MR

These alternative viewing spaces, which are slowly becoming mainstream, are not
limited to passive consumption of media content. There are new technologies at
play now that are immersive. Let us define what immersive means when it comes
to digital technologies: it is a multisensory, deeply engaging digital experience,
and it can be delivered using VR, AR, 360, mixed reality and other such formats.

Now, these technologies differ from one another. AR, or Augmented Reality,
places content into the real world by using trackers in the environment, with the
content viewed through transparent optics. VR, or Virtual Reality, on the other
hand replaces the user's environment with a virtual world; both of these
technologies feature motion tracking to make the experience more immersive.
MR, or Mixed Reality, combines both worlds in a seamless manner using spatial
awareness and gesture recognition systems. 360-degree videos are often confused
with VR and AR; although a 360-degree video can be viewed on a VR headset,
which enables better interaction, it lacks true interactivity and freedom of
movement, and it can also be viewed on other screens, such as a smartphone
using its built-in gyroscope, or on a computer screen using the mouse to toggle
between viewing angles. In the next topic we will discuss how to edit these new
media formats and screens.
Editing Pipeline

Video editing for the new media technologies discussed above remains largely
the same as for traditional media, except for the way immersive media is
imported and some tools and transitions that are unique to each format, which we
will address today. We will start with the process of editing a typical video, then
discuss how immersive media content is imported, and elaborate on the toolset
unique to each one.

It is important to understand the post-production workflow, or pipeline.
Post-production begins after the required shots, scenes or sequences decided
during the pre-production phase have been shot.

Editing Script:

In post-production, the editing script keeps the storytelling organized and true to
how it was initially visualized. Any script for a media project is essentially
written three times: first by the screenwriter, then during the production stage,
and a third time in the editing room. This third script is called the "paper edit"
and is essential for deciding the pace of the film, because it is only after
everything is shot and placed on the timeline that one realizes which shots are
crucial to push the story forward and which shots slow it down, are no longer
required and go straight into the bin, although they can be released online or on
DVDs as deleted scenes.

Organization and backup are two important things to keep in mind in any kind of
video editing work: an organized interface will increase efficiency and speed up
the whole workflow, while backup is crucial so that you do not lose data and end
up paying for costly recovery or, worse, losing the data for good.

The first step in the post pipeline is storage, which can vary depending on how
you work: a solid-state drive, a hard drive, DAS (Direct-Attached Storage) or a
RAID-based array. RAID, or Redundant Array of Inexpensive Disks, is a
technology that increases performance and offers reliable mirroring, backing up
all the data automatically onto another drive, and it typically uses faster drives
than traditional consumer hard drives. You might work on footage dumped onto
centralized storage, where several people work on the same footage at different
levels, such as visual effects artists, colourists, editors and sound editors; a SAN,
or Storage Area Network, is used by big production houses so that different
people can access the same data seamlessly, as if from their own drives.

Importing and organizing files into bins or folders is the second step, along with
backing up all the data. Then comes assembling, organizing all the footage into a
logical order inside the project panel of the editing software being used. The next
step is interpreting footage, that is, getting the editing application to read your
footage correctly. For example, if you have shot footage at 24 frames per second
you want it read that way, not at 25 frames per second; if it is interlaced, the
editing application should recognize it as such. At times the footage is too
processor-intensive to edit in real time and has to be transcoded; if that is not
viable, you can follow a proxy workflow, transcoding to a much lower resolution
and later relinking the high-resolution footage, which also makes it possible to
edit on a laptop. Finally, in the assembly process you need to sync all the audio
files that were recorded separately from the video; usually an assistant editor does
these tasks. Industry standards for video editing are Final Cut Pro, Adobe
Premiere Pro and Avid.
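As a concrete illustration of the proxy workflow, the sketch below uses Python to call ffmpeg (an assumed, commonly available tool; file names, resolution and codec choices are purely illustrative, not a studio standard) to generate a small offline proxy from a camera original.

    # Minimal proxy-transcode sketch: make a small, easy-to-decode copy for offline editing.
    # Assumes ffmpeg is installed and on the PATH; names and settings are hypothetical.
    import subprocess

    def make_proxy(source: str, proxy: str, height: int = 540) -> None:
        subprocess.run([
            "ffmpeg", "-i", source,
            "-vf", f"scale=-2:{height}",   # downscale, keep aspect ratio, width stays even
            "-c:v", "libx264", "-crf", "23",
            "-c:a", "aac",
            proxy,
        ], check=True)

    make_proxy("A001_C001.mov", "proxies/A001_C001_proxy.mp4")

Once the creative cut is finished, the editing application relinks the timeline back to the original high-resolution files for finishing.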

Then we can begin editing, dragging and dropping footage into the timeline and
being creative, until the edit is locked, that is, the final timeline has all the audio
and video in place. At this point we divide the workflow into audio and visuals,
which follow completely different paths.

For visuals, the second stage of editing is motion graphics, which involves
creating titles, overlays and inlays, animation, idents, logos and basic effects.
Motion graphics can be complex, as it also involves 3D graphics; world over,
Adobe After Effects and Nuke are the de facto standards for motion graphics.

The 3rd stage of the post pipeline is visual effects. VFX, or visual effects, is the
process of integrating special effects with live-action footage. The typical
workflow is to pre-select or identify the sequences that need any kind of VFX
work and export them in the format the visual effects team wants, which is
usually a TIFF image sequence. After the VFX work is finished it comes back to
editorial, which can be considered a hub: everything eventually has to come back
to it.
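For illustration, the Python sketch below again shells out to ffmpeg (an assumed tool; the in-point, duration and file names are hypothetical) to pull one identified shot out of the locked edit and hand it over as a numbered TIFF image sequence.

    # Sketch: export a selected VFX shot as a numbered TIFF image sequence for the VFX team.
    # Assumes ffmpeg is available; timecodes and paths are hypothetical examples.
    import subprocess

    def export_tiff_sequence(source: str, start: str, duration: str, out_pattern: str) -> None:
        subprocess.run([
            "ffmpeg",
            "-ss", start,        # in-point of the shot, e.g. "00:01:12.000"
            "-t", duration,      # length of the shot, e.g. "00:00:05.000"
            "-i", source,
            out_pattern,         # e.g. "vfx/shot010/shot010_%06d.tif"
        ], check=True)

    export_tiff_sequence("locked_edit.mov", "00:01:12.000", "00:00:05.000",
                         "vfx/shot010/shot010_%06d.tif")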

The 4th stage is moving the entire timeline into another application; this stage is
called conforming. It is a complex stage, as at times you might have footage from
different cameras, or even files with different resolutions or aspect ratios.

The 5th stage is colour grading, which comes after conforming; short-format
films and features are not mastered in an editing application. An editing
application is not specialized for colour grading, so we need to move the footage
into a colour-grading application. DaVinci Resolve is an industry-standard
application for colour grading; for high-end grading, Baselight by FilmLight is
used, packed with powerful tools for colourists. Colour grading involves colour
correction, shot matching, texture equalization and LUTs, or Lookup Tables. A
LUT is basically a modifier between two images, the original and the displayed
image, and is typically used for viewing, transforming and calibrating, based on a
mathematical formula.
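To make the LUT idea concrete, here is a toy Python/NumPy sketch (not part of any grading application, and the gamma-style curve is just an illustrative choice): a 1D table maps every possible input code value to a precomputed output value, so applying the LUT to an image is nothing more than an array lookup per pixel.

    # Toy 1D LUT: precompute an output value for each 8-bit input level, then look pixels up.
    import numpy as np

    levels = np.arange(256, dtype=np.float32) / 255.0      # normalized input code values 0..1
    lut = (levels ** (1 / 2.2) * 255).astype(np.uint8)     # illustrative gamma-style transform

    image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in "image"
    graded = lut[image]        # applying the LUT is just indexing the table with the pixels
    print(graded)

Real grading LUTs are usually 3D (they remap red, green and blue together), but the principle of a precomputed mapping between the original and the displayed image is the same.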

The 6th stage is finishing and mastering. Once the project has moved out of
editorial you do not have to bring it back after colour grading is finished. This
stage is primarily a wrap-up session with the producer, cinematographer, director
and editor, and it involves real-time finishing, minor edits, minor VFX, final
colour grading, final motion graphics and titles. All of this can be done in a
Hero-Box application in real time. Some industry-standard Hero-Box applications
used for final finishing and mastering are Autodesk Flame, Lustre, SAM Quantel
Pablo Rio and SGO Mistika; all of them have motion graphics, visual effects and
editing capabilities and can run in real time, provided you have the hardware to
support real-time rendering. On a low budget you can assemble a Hero-Box from
a combination of two or three applications, such as Adobe Premiere Pro, After
Effects and SpeedGrade, or DaVinci Resolve and Fusion.

Mastering is the last step; it is all about getting the highest-quality file possible
from your project, essentially rendering it into a single video file or into DPX,
OpenEXR or TIFF image sequences plus audio files. This decision depends on
where you are going to broadcast your media; for a YouTube video you might
just export a ProRes or H.264 file directly. This final stage is called creating
deliverables, such as a DCP (Digital Cinema Package), DVD/Blu-ray, VOD
(Video on Demand) or others.

Let us go back to editorial and look at the audio pipeline. All the audio files used
in the production are sent, in uncompressed WAV format and along with a render
of the final edit, to the sound-editing team, where they clean the audio, remove
any artefacts, sync it, and add Foley, dubbing and sound effects. Music is
recorded separately and sent in for the final sound mix. Pro Tools is an
industry-standard application for sound mixing; this is where sound is
normalized, equalized and adjusted to multiple environments and standards.
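As a tiny illustration of what normalization means at the signal level, the Python/NumPy sketch below scales samples so their loudest peak hits a target level (real broadcast mixes are measured against loudness standards such as LUFS; this is only the basic idea, not Pro Tools' method, and the numbers are illustrative).

    # Toy peak normalization: scale float samples (-1.0..1.0) so the peak sits near -1 dBFS.
    import numpy as np

    def peak_normalize(samples: np.ndarray, target_peak: float = 0.89) -> np.ndarray:
        peak = np.max(np.abs(samples))
        if peak == 0:
            return samples                      # silence: nothing to scale
        return samples * (target_peak / peak)   # 0.89 is roughly -1 dBFS

    audio = np.array([0.1, -0.4, 0.25, -0.05], dtype=np.float32)  # stand-in mono samples
    print(peak_normalize(audio))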

360 4K Video Editing

The pipeline for editing immersive media is pretty much the same as discussed
earlier, although the set of tools, the transitions and the way we import the
footage into an application differ. Things get a little different when editing 360
videos. 360 resolutions are non-standard and the files are called equirectangular
video files: flattened versions of a 360 video. To get your 360 videos into an
editing application you need to convert and optimize them into an equirectangular
video file; exactly how depends on the camera you have. Viable options for
editing 360 videos are Adobe Premiere Pro, CyberLink PowerDirector, Final Cut
Pro X, Autopano Video Pro, Mocha VR and Molanis VR. Besides these paid
applications there are several mobile apps, such as VeeR Editor, V360, Collect
and Theta+, as well as apps that come with the camera, such as Mi Sphere
Camera for cameras by Mi and Tiny Planets, custom-built for iPhones, to name a
few.

Monoscopic and stereoscopic are the two types of 360 video. Monoscopic is the
most commonly supported type and can be uploaded to YouTube and Facebook;
it is typically a flat-rendered 360 file with no real depth, such as Google Street
View. Stereoscopic video files, on the other hand, are immersive and require a
virtual reality headset. For editing, footage can be exported as an equirectangular
video file, fisheye or dual fisheye.

Equirectangular video files are just like a world map: a flattened, rectangular
rendering of a spherical experience in which you can see both front and back.
This is also the native format for editing 360 videos.

Next is fisheye. This format places the camera at the centre and puts everything
around it in a circle; although it looks distinctly different, it is converted for
editing and playback.

The dual fisheye format comes into play when your camera has two lenses, each
capturing 180 degrees; this format is also converted into an equirectangular video
file for editing and playback.

Equirectangular video files are then imported into an editing application, where
they are merged and trimmed and titles are added. These titles differ from
traditional video titles in that they are 360-responsive and are used to guide the
audience towards the action. One can crop or trim a 360 video in the same
manner as any standard video and add soundtracks suited to the genre.

Equirectangular video files in 4K are 3840 by 1920 pixels; be careful to use the
same resolution in the editing application as your camera delivers. In Adobe
Premiere one can enable the VR video display to change the viewing option from
equirectangular; this gives a 360 view in which you can click and drag to look
around the 360 space. We can add colour corrections, trim and edit our footage.
For adding titles, we can switch back to the normal view to place them, then
return to the VR view to check that they are correctly positioned.

When exporting a 360 file in Adobe Premiere, make sure you match the
resolution and select VR Video in the advanced settings; if this is not selected the
metadata will not be written and devices will not recognize the file as a 360
video. Quality is important when exporting 360 video, because viewers may
zoom in while watching and the image can pixelate if the bitrate is too low. One
rule of thumb is to choose a YouTube 360 video preset while exporting and set
40 megabits per second for a 4K 360 video; going lower than that will give a less
immersive experience.
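For reference, the Python sketch below shows what such a high-bitrate export might look like outside Premiere, using ffmpeg (an assumed tool; file names are hypothetical, and the 40 Mb/s figure simply follows the rule of thumb above). The 360 metadata would still need to be added separately, for example through the editing application's VR export option.

    # Sketch of a high-bitrate H.264 export for a 4K (3840x1920) equirectangular file.
    # Assumes ffmpeg is installed; input/output names are hypothetical.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "edit_equirectangular_3840x1920.mp4",
        "-c:v", "libx264",
        "-b:v", "40M",                          # target ~40 megabits per second
        "-maxrate", "40M", "-bufsize", "80M",
        "-c:a", "aac", "-b:a", "320k",
        "360_master.mp4",
    ], check=True)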

References/Reading List:
Nowell-Smith, Geoffrey. "The Oxford History of World Cinema". OUP Oxford, 1997.
Dancyger, Ken. "The Technique of Film and Video Editing: History, Theory, and Practice", 4th edition. Focal Press, 2006.
Crittenden, Roger. "Film and Video Editing". Routledge, 1996.
Arnaldi, B., Guitton, P., & Moreau, G. "Virtual Reality and Augmented Reality: Myths and Realities". Wiley-ISTE, 2018.
Mealy, Paul. "Virtual & Augmented Reality For Dummies". For Dummies, 2018.
Web links:
https://www.colesclassroom.com/aspect-ratio-definition/
https://www.wired.com/brandlab/2018/02/digital-reality-focus-shifts-technology-opportunity/
https://arvrjourney.com/
https://nofilmschool.com/2016/10/10-stages-post-production-data-storage-deliverables
http://ucafilm.org/digital-filmmaking-post-workflow-pipeline/
https://www.explainthatstuff.com/virtualreality.html
