
This video is about how we will write our stories in the future.

In the past, journalists would type into a typewriter, hand the script over to the news editor, and the news editor would either approve or reject the script, and then it would be versioned with different colours on the paper. The scripting and versioning process was not optimal, because there was no single source of truth for scripts.
When I was doing this as a graphics op, you used to get something in the script that said "host aston". To a graphics op, an aston is a graphic; in America I think it's called a Chyron. So I know that I have to make a graphic for Brian Davidson's name. I would type that into the system, and then in my script I would make a cue. I would write a page number above it, because I know that when I store it in the system it's on page 101.
So we have the whole script here: here are the VTs, here are the bumpers, here are the camera cues, here are the timings, so we know what time things happen. Everything is here, all the information we need. And what makes that interesting is that, fundamentally, these scripts are what today's workflows have been born from. The information in these scripts is just as good today.
In the 90s we were introduced to something new called the newsroom control system, which was actually invented in the 70s. What the newsroom control system did was change this workflow, typewriter to Xerox to piles of paper handed out to everybody, the whole department, into something much more collaborative.
The first newsroom system was called BASYS, originally built in the 70s. BASYS was responsible for what made CNN so successful in the 80s, because the journalists were able to collaborate and make changes without committing to paper. They could all log into the same server, in this case a VAX 6200. If you ever want to heat a room, get one of these; it will heat the room very efficiently and quickly, they run very hot. That system allowed them to collaborate before they hit the paper, without piles of paper again. So things didn't change that much, but at least there was some collaboration. Then, back in 94, BASYS was bought by Avid and turned into iNEWS. iNEWS has iterated over the years and had things added and bolted onto it, and other vendors have released similar software, but iNEWS is the market leader, and it was formed from BASYS when it was bought by Avid all those years ago.

Now, because I'm talking about the market leader, that brings us up to the present, because fundamentally the journey from script to screen hasn't changed in decades. It's been very much the same workflow: very rundown-led, very linear-TV-led. This is what an average rundown in any NRCS looks like: a spreadsheet, with all the stories, the slugs as they call them, when each was created, what its status is, what its MOS ID is, lots of complicated stuff. And then when you add the script onto this, the script has these strange commands that sit inline with all of the text, so the story and the commands are all mixed up. I'm being a little simplistic here, because quite a lot of NRCSs do have a column, very similar to the script you saw earlier, that breaks these things down, but the MOS codes are actually stuck in here. And I'm saying MOS quite a lot, because nothing has changed. Why has nothing changed? Why is everything frozen like this? The reason is workflow, workflow, workflow: it's totally baked into the broadcast linear television workflow, how this stuff works, and that was brought about through automation in the early 2000s.
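The rundown described above is essentially tabular data: one row per story, with a few key columns. Here is a minimal sketch of such a row in Python; the field names are hypothetical illustrations, not any vendor's actual schema.

```python
# A toy model of a rundown row as described above.
# Field names are hypothetical, not a real NRCS schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RundownItem:
    slug: str           # the story's short working title
    status: str         # e.g. "DRAFT", "APPROVED", "ON AIR"
    mos_id: str         # which media object server owns the item
    created: datetime
    script: str = ""    # story text with inline MOS commands mixed in

rundown = [
    RundownItem("FLOODS-LEAD", "APPROVED", "GFX.SERVER.01", datetime(2022, 5, 1, 9, 0)),
    RundownItem("SPORT-WRAP", "DRAFT", "VID.SERVER.02", datetime(2022, 5, 1, 9, 30)),
]

# The NRCS view is just this list rendered as a spreadsheet-like grid.
for item in rundown:
    print(f"{item.slug:<12} {item.status:<9} {item.mos_id}")
```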
When automation came in, it was effectively being driven by the NRCS. What that means is that the automation is a bit like the conductor of an orchestra: it's telling each individual machine, tape deck, prompter or whatever to do something, a bit like a conductor. And if automation is the conductor, then the musical score is the NRCS: all the notes on the page that the conductor is reading are actually coming from the NRCS. Again, as I've mentioned, MOS codes: this is the MOS protocol, the Media Object Server communications protocol, an open protocol that allows a script to tell an automation system "I want a graphic, I want to play a video, I want to do something with the audio, I want to change the cameras". This is all done with these MOS codes. So, hopefully I can do this without double-cueing, still getting used to having a screen at home: over here on this side there's a little funny symbol that represents a MOS code, and what follows it are the actual MOS commands that will be sent to the automation. A good example in this script is here: this is a command for a CG, i.e. for a graphic. It's got its timing information, which template it's going to use, and then here we've actually got the name of the person we want the graphic to show, and their job title. It's metadata; we're sending the metadata. The graphics machine itself has the template. In this analogy, the template is the piano: all we're doing is telling the automation to put these notes on the script, and then the piano, which would be the graphics machine, plays that note at that particular time according to the automation. So we're basically handing the automation what's going to happen, using this MOS protocol.
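To make that CG example concrete, here is a sketch in Python of what such a message might carry: a template ID, timing, and the name and title metadata. The element names are illustrative only, not the exact MOS schema; real MOS messages follow the published MOS protocol specification.

```python
# A simplified, MOS-style CG command built in Python.
# Element names here are illustrative, NOT the real MOS schema.
import xml.etree.ElementTree as ET

def build_cg_command(template, name, title, in_time, duration):
    """Build a hypothetical CG (graphic) command: which template to
    use, when to fire it, and the metadata to fill it with. The
    graphics machine already holds the template; we only send data."""
    cmd = ET.Element("mosPayload")
    ET.SubElement(cmd, "template").text = template
    ET.SubElement(cmd, "inTime").text = str(in_time)      # seconds into the item
    ET.SubElement(cmd, "duration").text = str(duration)   # seconds on air
    ET.SubElement(cmd, "name").text = name                # lower-third line 1
    ET.SubElement(cmd, "title").text = title              # lower-third line 2
    return ET.tostring(cmd, encoding="unicode")

xml_cmd = build_cg_command("lower_third_01", "Brian Davidson", "News Editor", 5, 8)
print(xml_cmd)
```

The point of the sketch is the division of labour: the script carries only metadata and timing, while the template itself, the "piano", lives on the graphics machine.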
Excuse me one moment, let's clear that. Just another example, broken into a sort of flow chart: the NRCS says "I want to play Video A"; the MOS codes go into the automation; the automation can then set ready the video on the video server; and when the time is right, the automation can cue the vision mixer and the audio mixer at the same time to play Video A on air. But the fundamentals, where Video A is going to play and how we trigger it, come from the MOS codes in the NRCS. That's the role the NRCS is playing. Now, this stuff is all very well and good, but there are some issues here. For instance, if you accidentally delete a character in this graphic command, it won't come on; it won't happen. You can be working on a script, accidentally delete something, and if you delete it inside a MOS code you'll break it. At the same time, it's also not a very user-friendly way of working, and if you show this to a millennial, this is what will happen. I know, I've seen it, I've done it myself: they get terrified of it. So the trick is, we have to try and do this in a better way. How can we do that?
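The flow just described, NRCS to automation to devices, can be sketched as a short sequence. The device names and the inline code format below are hypothetical, purely to show the fragility point: delete a single character from an inline code and the cue simply never fires.

```python
# A toy sketch of the NRCS -> automation -> device flow described above.
# Device names and the inline-code format are hypothetical illustrations.
import re

def run_item(mos_code):
    """Parse a (made-up) inline code like 'PLAY:VIDEO_A' and drive the
    'devices'. One deleted character makes the code unparseable, so
    nothing is cued: the silent-failure mode described above."""
    match = re.fullmatch(r"PLAY:(\w+)", mos_code)
    if not match:
        return ["nothing happens"]          # broken code: item silently skipped
    clip = match.group(1)
    return [
        f"video server: cue {clip}",        # automation sets the clip ready
        f"vision mixer: take {clip}",       # then cuts it to air...
        f"audio mixer: fade up {clip}",     # ...with sound, at the same time
    ]

print(run_item("PLAY:VIDEO_A"))   # the happy path
print(run_item("PLAYVIDEO_A"))    # one deleted character: nothing fires
```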
So if we're going to try and do bigger things with this stuff, we need better tools. One of the problems we suffer from in the broadcast industry is that we tend to look at new technology in the same way that we look at old technology: we try to make the new technology do what the old technology did, and that's not going to get the full potential out of the new technology. It's just going to make the new technology do what the old technology did. I've got a little analogy for this; it's kind of fun. This is Bob. Bob owns a company called Train Express, and he's been running it for years. He's got all sorts of different kinds of trains; he knows lots about trains. We all know that trains run on rails and have been around for hundreds of years; everybody knows what a train is and how it works. Let's say Bob's train company runs a particular service each day, from Paris all the way down to Lisbon in Portugal, every single day, one day down and one day up. It takes 33 hours, 33.5 hours in actual fact, to get down there. They have the sleeper cars on there, they have the restaurant cars on there, all the things they need, but fundamentally, after 33 hours, this train will arrive. Now, this uses old technology, old rails, but it gets the job done, it's reliable and it works. Then I come along and say: "Hi Bob, I work for an airline company. I want to sell you this fancy new airliner. It can carry just as many people as your train, but here's the difference: even if your train leaves two hours earlier than my aircraft, so it gets a head start and gets halfway into France, the aircraft makes it all the way to Lisbon in two and a half hours, while the train is still chugging along. So we could do this journey on the plane four or five times in a day; we could move four or five times more passengers than a single train journey." The new technology here far outweighs the old in terms of business opportunity, but Bob says: "Well, that's amazing, but where's the sleeper car?" Bob is comparing the train and the plane, and you can't compare them; they're two very different things. He thinks passengers need to sleep on this two-and-a-half-hour journey; they don't, but he's comparing. We shouldn't compare technology. What we should do is reflect on the technology: how can we use the technology we have now better than the technology we've been using before? It's a fun analogy, a bit simplistic, but you get what I'm saying.

So by doing that, we can create this sort of future where we have reactive storytelling coming from an agile foundation. Instead of having a sleeper car, we just get there faster. Speed is definitely something you need if you want to be able to react to a story as it's evolving fast. So, I found this article the other day, called "The future of journalism: new skills, old values". I'll put the link underneath after I've finished. I'm not really one for bullet points, so I'm not going to read them all out, but there were some really interesting points in it, I thought. Let me try this; hopefully I can get through it without double-cueing, if I'm very lucky. Yes. "A simple print news story is still a valid and valuable template": I love that, because it's absolutely true. The story itself, the simple story, is always as valuable and as valid as it was in the past, from a newspaper then to modern broadcast news, or even the web, now. This third bullet is very interesting as well. It says that whether the news is delivered on a sheet of paper, a laptop or a mobile device is of secondary importance. Now, I don't agree with that. I don't think it's of secondary importance; I think it's probably of equal if not more importance, but I'm going to come back and address that a bit later on. The last bullet is also very nice: the fundamentals are critical to the quality and eventual survival of professional journalism in the digital age. That's something we should be building tools to enhance, to help the profession of journalism in the digital age. That's a very important thing we need to do moving forward with this technology: make sure it helps the journalists. That's a very key point. OK, so, bearing all that in mind:

Newsrooms now need solutions that deal with big data as well. As a journalist, you're overwhelmed with a lot of stuff coming in, and you're not looking at just the NRCS screen anymore; you're looking at the wider picture, lots of screens, lots of data coming in. How do you aggregate that data? How do you search it? How do you find the bits that are relevant to the story you're writing? Of course, we could turn to our very fashionable friend, artificial intelligence. Artificial intelligence has some pros and cons in a newsroom environment. I might get a bit controversial here, and I'm slightly off topic, but bear with me: there is a bit of a problem when it comes to future-proofing and the legality of things. If we go around teaching our AIs the faces of people who haven't given us permission, and then the AI starts searching through our archive and finding these people willy-nilly, they may not approve of that. And there's some precedent for it: Facebook has already been sued, and is in the process of being sued by the state of Texas, because they've been capturing and commercializing biometric data in photos and videos for more than a decade without users' permission. When we're in this litigious environment, where people can retrospectively apply privacy laws, we have to be very careful about how we use this stuff, particularly in a news environment. And I don't think it's just the biometric side of it; it's also the geolocation aspect. People don't know, when they send a photo, or when the photo's taken, that the geolocation is actually on the photograph. So the use of biometrics and geolocation in a newsroom is likely to be a big problem in the future. Maybe not now, it's a bit Wild West, but just so you know.

Now for the pros of AI: agility and story versioning. You don't actually have to use AI for this, but it's very good at it. Let's just say that Boomers and Generation X want the long form, the long version; they want everything. But Millennials and Generation Z just want the short version, the TikTok kind of style. So how can we deal with that? One thing we can't do is the simplistic approach of taking the story and putting it on every device for everybody: one version of a story, every device. It just doesn't work. These two here are gone straight away, because if you put the long-form story on their mobile phones, it doesn't work. So we need a smarter way to do this, and I've come across a really great one. I love this: modularjournalism.com. These companies have come up with what they call an algebra for news modules, to support a new kind of storytelling that's more focused on user needs. Now, how do you do that? How do you create an algebra for that? It gets quite interesting. What they've done, and that's why I have DNA in the background here, is break the story up into its DNA pieces. So we have: the summary of the story, the short version that the Millennials like; the headline, just the headline; the lead story, what you'd expect on the front of a website, for instance; the situation, what's going on around the story; the background, how we got here; the main event; the consequences of the story; the history of the story; the context of the story; the editorial comment; the journalistic comment; the verbal reaction from other people; and the conclusion and a reality check. Each one of these individual modules can be interchanged with the others to tell the same story in a slightly different way, from a slightly different angle. And of course, with AI we could actually learn people's preferences and start bolting the story together to match them. By way of showing you that as a reality, I'm coming back to my generations here, taking devices out of the equation. Gen Z, for instance, might want the short version, which is the summary; they might also want the context, because they don't understand how this all started or what caused the story to happen; and they want the history to back up that context. All of that could be Gen Z's story, generated from this pool of assets. On the other hand, you've got the Baby Boomer. The Baby Boomer wants the main event, the long-form version, and they also want the situation, the current situation regarding the story. So we have two different needs served by the same set of modules. My personal favourite: you could even say the headline is the tweet. We just tweet the headline, because that is the tweet, and you get that from these pieces of DNA. So this is very exciting, and we're quite keen on this ourselves, actually.
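The module-assembly idea above can be sketched in a few lines of Python. The module and profile names follow the talk (summary, context, history for Gen Z; main event and situation for the Baby Boomer; headline as the tweet), but the assembly function and the sample story text are hypothetical illustrations, not the modularjournalism.com implementation.

```python
# A toy sketch of modular journalism: one story kept as a pool of
# interchangeable modules, assembled differently per audience.
# Sample text and the assemble() function are illustrative only.
story = {
    "headline":   "River bursts its banks",
    "summary":    "Flooding hits the town centre; hundreds evacuated.",
    "main_event": "A long-form account of the flood as it unfolded...",
    "situation":  "Water levels are still rising tonight.",
    "context":    "The flood defences were last upgraded in 1998.",
    "history":    "The town has flooded three times in twenty years.",
}

# Each audience profile picks a different cut of the same DNA pool.
profiles = {
    "gen_z":       ["summary", "context", "history"],
    "baby_boomer": ["main_event", "situation"],
    "tweet":       ["headline"],   # the headline *is* the tweet
}

def assemble(story, profile):
    """Bolt the requested modules together, in order, into one story."""
    return "\n\n".join(story[m] for m in profiles[profile])

print(assemble(story, "gen_z"))
print("---")
print(assemble(story, "tweet"))
```

The same lookup could key on platform instead of generation, which is exactly the personalization-versus-platform point made next.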
OK, so, modular journalism: breaking a story into modules so that AI can process it. You can do that for personalization, but also for platform. We could say that long form doesn't work on a mobile, but the lead story does, so we only deliver the lead story. Interesting stuff like that.

So, in summary, what's the future of the NRCS? To set everything we've seen and talked about in context, along with the historical side of things: what's happening now is that the broadcast industry is beginning to move to a software base. Instead of hardware boxes, we're now running software, and it's in the cloud, or on-prem, or a hybrid of both; either way, it's mostly driven by software. Because of that, we're looking at elemental, best-of-breed solutions. Each element of the best-of-breed solution is bolted in, so different manufacturers' or software developers' products can be put into this consolidated hub. That consolidated hub could be anything from a vendor creating a universe of suppliers that they bring together and consolidate, to AWS, where the marketplace brings all the pieces in, or even an SI that puts it all together. But fundamentally, you're going to feed your story into this, and all of your tools will be there for you to push the story out, either to transmission or to the cloud, for instance websites and such. The NRCS in all of this is really just one part of a large jigsaw; it's a service within the jigsaw. So when I talk about the future of the NRCS, I think it's going to be more of a feature than a standalone product. I think you're going to see it basically become part of this ecosphere of different applications that come together to allow journalists to tell a story using much more of the facilities available to them. They're basically doing more with less, and that's what the consolidation aspect will do. So I do believe it will become much more of a feature than an application.

OK, so, if you want to see something like that, plug plug, we're actually showing it next at CABSAT; we'll be at CABSAT in Hall 5, Booth 503. Please do come along if you're there; I would love to see you. Me and Roger will be there, so you can shake hands with us both, meet us in the flesh, so to speak. If not, it's Wobbly Wednesday next week; I'm not sure what we're doing yet, but we will post it after this. I hope you've enjoyed today, I hope you've enjoyed being in my palm, I know I have, and I hope to see you next week. Have a great day.
