Give Your Content Presentation the MVP Treatment

Spotlight, a unique digital experience builder fully integrated with MediaSilo’s asset management, allows content owners to present assets in professional, branded microsites and presentations. With enterprise-grade security, in-depth activity tracking and analytics, and the ability to live stream events, Spotlight gives sports teams and networks the power to deliver content to fans instantly.

Better Presentation Tools

Spotlight connects to all your MediaSilo projects and assets, enabling your team to dynamically create and customize a branded, secure digital experience for your fans and media contacts. No need to shuffle files between various systems – just drag and drop assets directly into your Spotlight design. Your Spotlight page updates instantly as you add or remove files from synced playlists, removing the need to generate new playlists with fresh assets after every update. This means your audience will always have instant access to the latest press releases or highlight clips.

Go Live for Special Events

You can live stream any event to fans directly from Spotlight, such as post-game press conferences, court-side commentary or even off-season training camp updates. Just add our livestream element to your Spotlight template, allowing audiences to stream your event without leaving the customized viewing experience of your Spotlight. You can even include a library of past events to keep fans engaged and updated on all the latest news.

Measure Performance with Insights

Spotlight allows you to see how your content is performing in real time from one central Insights dashboard. Measure engagement, performance and drop-off points so you know which plays are hitting home with your audience and which have been banished to the bench. You can also review user access logs to see which viewers are most active and what content keeps bringing them back, helping you better understand your audience’s behavior and plan future content that will keep them engaged. Search Spotlight insights by date range, title, URL, viewer or file type, and export that data in a variety of formats.

Enterprise-Level Security

Spotlight also inherits the studio-grade security practices of MediaSilo. For content owners, a platform that protects against cyberattacks and intellectual property theft is mission-critical. Spotlight provides multiple security options, such as custom user access policies, password protection, dynamic personalized visible and forensic watermarking, and customizable workspace permissions, ensuring your content is seen only by those who are supposed to see it, when they’re supposed to see it.

No Coding Required

Unlike other platforms that require coding and UI/UX experience, Spotlight provides no-code, professionally designed and completely customizable templates. Either start with a blank canvas or use one of our premade templates, which you can quickly update to include your team’s unique font, colors and logo. You and your team can reuse the same template multiple times or create one-off templates for occasions like team announcements or promo packages.

Available exclusively to MediaSilo customers, Spotlight lets you present work-in-progress projects professionally, simply and securely.

To learn more or get started with Spotlight today, dive into our Spotlight knowledge center and sign up for a free trial of MediaSilo.

MediaSilo allows for easy management of your media files, seamless collaboration for critical feedback and out of the box synchronization with your timeline for efficient changes. See how MediaSilo is powering modern post production workflows with a 14-day free trial.

MediaSilo offers full-fledged review and approval for multiple file and document types to accelerate the feedback process. Watch our Account Executive, Lindsay, demonstrate how users can generate branded links for internal and external users, version assets and set share preferences. She also discusses the various ways users can use comments effectively.

Hey, everyone. I’m Lindsay with Shift Media, here to go over review and approval workflows within MediaSilo today. Here we have our MediaSilo workspace with all of our projects. I’ll go ahead and click into a project here. You can see you have your basic folders, subfolder organization, all of your different files, and all different file types supported, from images to video to documents, and we’ll go ahead and go through a few different share paths.

We also have the option to version your assets within MediaSilo, as you can see by this little layered icon here, which is achieved simply by dragging and dropping to layer files one on top of the other. We have a few different ways to approach review and approval within MediaSilo. You can jump right into review and approval directly within the app. For example, if you’re going through review and approval with your team members, with other users within the workspace, and you also have the option to generate a review link and share that out externally as well.

I’ll start by capturing a few assets here to share out. So we’ll capture some folders and go ahead and drop that into my collection bin as well as single assets, and I’ll share this out as a review link. You have a few settings here when it comes to generating your link. You can set your access preferences, whether you want that link to be accessible via users within the workspace only, if you want that link to be publicly accessible or if you want that link to have password protection for added security. You also have the option to expire the link, taking that offline within a certain timeframe if you’re working with a deadline or for security purposes. We’ll go ahead and make this link public for now. You have the option to toggle on and off whether you want recipients to be able to download the content on that link and then enable your feedback, which refers to the commenting for review and approval. When you’re ready, you’ll go ahead and create this link which will be copied to your clipboard.

And here you have your review link within MediaSilo. So here you can see this [page] is MediaSilo branded. You would, of course, be able to set your branding preferences within the administration panel, and when you’re sharing out review links, this would reflect your own branding, color scheme, everything to match your preferences.

And again, you can see we have the folders here that are shared as part of the review link. We have all of the different individual files. I’ll jump into this versioned asset here just so you can see what that looks like. So here we have the different versions, which you would be able to toggle on and off. The most recent version is the one that’s going to appear first.

And then, when you’re ready to begin your annotation, go ahead and pop your comments in there. All of your comments will appear on the side panel to the right here. You have the option to edit those comments, reply to them, delete them, and resolve them as well. And if it’s useful, you can also export those comments so you can view, organize and act on all of that feedback in one place.

And then, when the asset is approved, we also have this little thumbs up here so that you can track that the asset has been approved by the recipient and it’s good to go. So that is sharing our review link through MediaSilo, and that’s what the workflow would look like on the recipient side.

Moving on to another way to approach review, with people directly within the platform, I’ll hop into this asset and go into review mode. Again, your time-based commenting will pop out the comments bar here, and then you can see all of the comments that were previously made by your team and all of their time codes as well. So that’s what it looks like directly within the app.

Again, you can see all of the comments and all of the work from your previous team members, whereas those review links that you share out externally will be unique to that user, and the comments will start fresh. So that’s the review and approval workflow within MediaSilo. For more information or to schedule a demo or get into a trial, check out MediaSilo on our website at MediaSilo.com.

Miss our interview with Mark Turner, Project Director of Production Technology at MovieLabs? Watch it now to learn more about their 2023 Vision.

For tips on post-production, check out MediaSilo’s guide to Post Production Workflows.

When asked what pain points the new Komodo X camera fills on set, Jarred Land, president of RED Digital Cinema, replied, “Komodo X really is filling the gap between our utility camera (Komodo) and our Raptor. Komodo was designed as a C, D, E, F crash camera, but a lot of Komodo users wanted a little more, for it to be more of an A camera or B camera but not at the level or price of what Raptor is.” Let’s dig into this new release and see how RED responded to the market’s request for a tiny camera that can work as your “A” cam.

What makes the RED Komodo unique?

In 2020, RED introduced the Komodo, a tiny “crash cam” designed for shooting action sequences. It packed a groundbreaking 6K sensor and a global shutter into a very small body. The camera was designed in response to the need for a more professional alternative to using GoPros in action scenes. One key factor was producing a camera at a low enough price point that a big production could wreck it in an action sequence without taking too big of a financial hit. So even though it wasn’t intended to be a main camera, Jarred realized that “the image is just so good, and [it’s] so romantic to hold,” that people would use it as their main camera. He confessed, “I myself am guilty of using it as an A cam too.”

Using Komodo as an “A” camera can be a pain

As “romantic” as it is to hold the little RED camera, the pain points are less romantic. Komodo uses small Canon batteries instead of standard V-lock batteries. Its RF mount doesn’t lock down and flexes with a lens motor attached. And most significantly, it has only a single 12G SDI output, which proved to be a single point of failure. On any brand of camera, a 12G SDI port can blow out when power and SDI cables are plugged or unplugged out of order (the safe sequence: power in, then video in; video out, then power out). And even though Komodo can output a feed to your mobile device, this isn’t a true replacement for a hardwired monitor in most cases. These shortcomings led companies like Mutiny to create ingenious accessories that let the little Komodo level up as an A cam.

RED realized there was a hole in their lineup

Land recognized that this situation wasn’t ideal. He said, “The work-arounds were people using the camera as it wasn’t ever intended. [It was] our fault for not filling that hole earlier.” So in response to the way customers were using the camera, RED began to design a new version that wouldn’t replace their original “utility” camera but rather work as an “A” camera to the original Komodo or a “B” camera to their high-end V-Raptor. And that’s how the Komodo X was conceived.

Komodo X improvements tackle the original’s limitations

Once the team at RED nailed their focus down to making an “A” camera with the DNA of the original camera, they got to work on making improvements that would enhance and streamline the experience of shooting with Komodo.

Multiple monitor outputs

At the top of the list was giving more options for the monitor output. The new Komodo X features the same “pogo pins” as the original Komodo; however, this time they can drive a monitor, just like on the higher-end V-Raptor. This connection allows the RED/SmallHD 7” monitor with the “RMI” cable to be used on both Komodo X and V-Raptor, leaving the 12G SDI output free for accessories like a Teradek Bolt wireless transmitter. RED also released a more compact top handle for attaching the DSMC3 monitor, which addresses significant ergonomic challenges with rigging the original Komodo.

Improved frame rates

Komodo X offers frame rates up to 80 fps at 6K and 120 fps at 4K, double the speed of the original Komodo. The original’s 40 fps at 6K was fine for its intended use as a “crash cam,” but a main camera needs to hit 60 frames per second without windowing down the sensor. For many shooters, 60 fps is the magic number for usable slow-motion shots in commercials, so the original Komodo always felt like it was just missing the mark.

CFexpress media

Komodo X utilizes CFexpress Type B media rather than the CFast cards of the original Komodo, bringing it in line with the media used by the V-Raptor. CFexpress cards feel more robust, offload data faster and offer higher capacities. The change means shooters can condense their array of card readers, and DITs can bring uniformity to their workflows.

Improved batteries

Physically speaking, the biggest improvement is the type of battery the new camera employs. The Micro V-lock battery aligns it with its big brother, the V-Raptor, which simplifies things for productions using the two cameras side by side. In Scott Balkum’s launch-day live stream, Land mentioned that most people using the Komodo as an A cam were using V-lock adapters with their camera instead of the stock Canon batteries, so this improvement alone will substantially streamline camera rigs for most users. RED also released the REDVOLT Nano-V, a tiny 49 Wh battery for shooters looking for the most compact power solution possible.

Locking RF lens mount

RED introduced a locking EF mount with the DSMC2 system years ago, while Komodo introduced the Canon RF mount to the lineup. However, many users struggled with lens mount flex when trying to use Komodo with cine-style lenses, a problem that became more acute when a focus motor was included in the setup. RED eventually addressed this by releasing a sturdy RF-to-PL adapter, but that didn’t resolve the issue for those using EF or RF glass. The new locking ring adds much-needed rigidity to the lens mount, allowing a greater selection of lenses and motors to be used on the system, and it reduces the amount of hardware needed to stabilize lens adapters.

Improved audio

It is no secret that audio has played second fiddle to image quality on many RED cameras. The Komodo has a particularly weak pre-amp and offers no phantom power for microphones. This shortcoming makes sense on a “utility” camera. But the moment you try to use Komodo as an A camera, you start a journey down the road of how to incorporate proper audio and timecode without creating a rig so unwieldy that it defeats the purpose of buying a small camera.

On Komodo X, RED has included a 5-pin Lemo connector with an improved pre-amp, aligning it with the V-Raptor and ARRI’s ALEXA lineup. Users will need to make sure they purchase the proper adapter for their audio gear (3.5mm or XLR). By using the 5-pin Lemo connection, RED can offer improved audio while keeping the camera smaller than if full-size XLRs were built into the body itself. There is a good chance this will be the most critical improvement for documentary shooters.

Integrated USB-C

A USB-C output module is available as an add-on for the original Komodo, but Komodo X incorporates the port right into the body of the camera. Again, this simplifies rigging and provides a connection for wired control over IP. Through its RED Control Pro app, RED has worked hard to provide advanced tools for controlling multi-camera arrays, and the integrated USB-C port will make it much easier to set up those rigs. However, most users will find that the free RED Control app meets most of their needs.

Key accessories

Alongside the Komodo X, RED is offering an advanced RF-to-PL adapter with an electronic ND cartridge system. It features two cartridges, one clear and one ND, and the level of the ND can be controlled via buttons on the lens mount or from within the menu system. This option is especially attractive to users mounting the camera on gimbals, and it eliminates the need for a matte box in many situations.

RED teased an upcoming I/O module featuring dedicated connections for genlock, timecode and more; it will allow for full-size V-mount batteries and sports a unique V-notch for improved cable routing. Finally, RED revealed that an EVF (electronic viewfinder) and additional monitors are in the works.

Pricing and availability

RED released a batch of limited-edition white (a.k.a. “Stormtrooper”) Komodo X cameras; that run sold out in two hours. (Other resellers may still have some stock at the time this article goes live.) RED has now begun production of the black Komodo X, which, according to RED, will ship in June.

Komodo X retails for $9,995, which places it between RED’s other Super 35 cameras, the Komodo ($5,995) and the V-Raptor S35 ($19,500), while leaning toward the lower end of the pricing scale.

Conclusion

RED should be commended for crafting a camera based on user feedback. The improvements all address the challenges of using the Komodo in the field “not as intended.” But instead of telling people they were “using the camera wrong” or telling them to step up to a V-Raptor, RED made a camera for them. From the monitor, lens mount, power, media, audio and handle down to the placement of the record button, RED has shown that it is listening to its customers. Now it’s time for users to test the camera in the field and see if its image, functionality and stability live up to the physical improvements in this new camera.

At midnight on Tuesday, May 2, what had been feared for months happened. For the first time in fifteen years, the Writers Guild of America (WGA) went on strike against the Alliance of Motion Pictures and Television Producers (AMPTP). At stake is the livelihood of thousands of people throughout the industry who will be impacted by the fact that all narrative, late-night, and other written film and television productions have halted.

We had the opportunity to connect with working post-production professionals to get their take on the strike and how they feel it will impact their corner of the film and television world. Due to the sensitive nature of the topic and because, as one person we contacted said, “…retribution is real in this industry,” the respondents chose to remain anonymous.

Before we get into their responses, let’s briefly cover what the strike is about and why this one is so different from the last industry strike of 2007-2008.

What’s at the core of the WGA strike?

Every three years, the WGA and the AMPTP negotiate over contracted terms to arrive at what’s called a Minimum Basic Agreement (MBA). If the two organizations are unable to agree, the union calls for a strike. These negotiations happen with all the major unions (e.g., DGA, SAG, etc.).

Conflict typically arises from disagreements in compensation and/or working conditions—and they can cost the entertainment industry hundreds of millions of dollars. The longest strike in the WGA’s history was back in 1988. It lasted for 21 weeks and cost an estimated $500 million. The strike of 2007-2008 lasted 100 days and cost $1.5 BILLION!

A recurring theme in WGA strikes

Whenever there’s a new technology that changes how television shows and movies are delivered to the masses, residual compensation becomes a key sticking point.

When VHS and other home video formats became prominent in the late ’80s, payment to writers for their work on these media was the issue.

In the ‘07-’08 strike, a key driver in the disagreement between the WGA and the AMPTP was compensation and residual payments for projects distributed via emerging “new media” channels. These included digital downloads from sites like the iTunes store and streamers like Netflix.

Not unlike the last WGA strike, this one is also closely tied to the impact streamers like Netflix have had. But a key difference between then and now is that while the WGA’s overall objectives are unified, the make-up of distributors today means the needs and objectives of the AMPTP’s members diverge.

New vs. old models of distribution

In the previous era of film and television distribution, the overwhelming majority of AMPTP members were representatives of traditional studios like Paramount, Universal, Sony, etc. The primary business models for all of these entities were the same.

The entertainment landscape today has evolved significantly. Companies like Apple and Amazon are now part of the game, and frankly, a protracted strike will not impact them as much as traditional studios. Whether a WGA strike lasts for 100 days or even 100 months, companies of this size—with revenue sources significantly broader and larger than traditional studios—could hold strong.

Streamers are probably well suited for a longer hold-out as well due to their large number of non-scripted shows (e.g., documentaries and reality TV).

Could these disparate business models and media categories motivate the AMPTP to be more cooperative? Perhaps. However, the gulf between the WGA and the AMPTP—which relates to myriad issues like staffing numbers, working period, Artificial Intelligence, and residual payments for hit shows—suggests we could be in for a strike that lasts well into the fall.

And that is where we come to the central theme of this article.

The impact of the WGA strike on post-production

The professional post-production world spans a wide variety of industries. In addition to film and television, there are corporate, gaming and event professionals. The overwhelming majority of people who responded to our inquiries were in film and television.

Here’s what they had to say.

How the pros think the WGA strike will affect post-production

“I was living in LA during the WGA strike in the mid-2000s. I had just moved to LA and was establishing my network. Work dropped off at the top level, feature film jobs and the like. Since no work was being done at that level, those working there took the B- and C-level jobs. That really closed the door on a lot of potential gigs I could get. I had to rely on my Plan B, which was teaching editing and consulting gigs.

I eventually had to take a job in video engineering [major color house]. Though I appreciated the money, it was a job I wasn’t exactly suited for. I was rather desperate for work, and when that gig went away, I had to go into survival mode. Essentially, I went broke.

Fortunately, my connections at Apple (from an earlier gig on Final Cut Studio 3) had a job for me back up in Cupertino as a QE on FCP 7 and Motion 4. I bailed out of LA and moved back to San Francisco. I’ve been there ever since.

Yes, the WGA strike and the diving US economy crushed my LA dreams to dust. My advice is to be prepared for a long haul. Set up Plans B and C, and cut your budget, especially if you are not well established with your network. For those in LA high-end post, I wish you luck!”

“My prediction is mild impact, varying from slightly less work to slightly more. There might be more packages rolling into live shoots, repurposed/remixed existing footage, clip shows, vérité-style reality or docs. But corpo and ad work will be the same, and features are on such a long post-production timeline that editors can be kept busy in their dungeons for a month without letting them into the sunlight. Some shows that might just now be kicking off will be on pause, and that will cascade down to editors being put on pause.”

Editor of trailers, promos, and ads for games

“I work in documentary and unscripted, so I am largely unaffected. If anything, I have more work. I think the writers’ demands are more than fair, and I’ve seen all the same exact problems they have with streaming giants, so I fully support them—same as I supported the movement within IATSE (International Alliance of Theatrical Stage Employees) to strike. Even before the strike, my colleagues and I had started calling this the era of ‘Insta-Docs’—where we take documentaries from concept to air in just 2-3 months max.

Have you noticed the streaming giants just produce so many similar-looking documentaries? They have their moment in the sun and then are gone, never to be spoken of again. When was the last time we had a Man on Wire or Hoop Dreams that transcended its original platform? I’m not saying good documentaries aren’t still being made, but we’ve been explicitly told that all these streaming giants care about is ‘length to profitability’—how fast can we get enough viewers to show profitability on the project so we can get greenlit for another, and another. Anything after the profitability mark is just a bonus, but really they don’t care about the longevity of their products. So for me personally, the most I’ll be affected is likely just being asked to work on lower-quality content than I’d prefer until the strike is over and things settle down.

All of this feels so very reminiscent of the 2007/8 strike. The networks and studios believe cheap content will be good ammo against the writers, but once again, they are wrong. The public and the industry as a whole are on the writers’ side. If the writers can hold out, they will ice the networks out of top-tier content, and the networks will eventually cave.”

Documentary editor

“I’d just say I support the writers wholeheartedly and hope they’re able to get everything they’re negotiating for. A lot of their demands have heavy implications for post-production, especially those regarding artificial intelligence, so I hope they’re able to make big strides and set a precedent for protecting human jobs that the other guilds can follow. A rising tide lifts all boats, as they say.”

Assistant Editor

“We’ve been planning this since January. Nobody can really start shooting again until at least mid-August, because the bond companies stop bonding on July 1 for at least six weeks. And nobody still firming up a script can do a deal, not even for distribution, due to WGA strike rules. Most of the international community is in solidarity. Post will have a major bubble upon return, which will cause all sorts of delivery issues. The most we can hope for is what is suggested in Deadline’s Strike Talk podcast: the execs and writers (not the negotiators) getting in the room to do the right thing within the coming month. But since Wall Street, not humans, is so in control of Hollywood these days, it’s hard to know how this will come together. There are sane people at the smaller AMPTP companies who might broker their own deal with the WGA if it comes to it.”

Producer

“Studios and networks have seen this coming for months, so there has been pre-planning to get shows done early or simply not start up new ones. Next seasons are already on hold if not already shot. Upfronts will be awkward in a few weeks, as most of the new shows can’t go into summer production. Late night is gone, so those editors are out. Reality shows will fill a lot of the summer, and new shows will depend on how things play out and how long this goes. Different edit sectors will feel it differently, and it will be a bit before the full effects hit post.”

Post Supervisor for a Promo/trailer house

One industry veteran we spoke with who didn’t mind being mentioned was Zack Arnold (ACE), editor and associate producer of Netflix’s Cobra Kai:

This is a once-in-a-generation strike that goes far beyond writers fighting for their slice of the pie. This is about ensuring the future of all creative professionals in the entertainment industry, setting boundaries that protect our livelihood outside of the work, and being valued for the creative contributions & ideas we bring to each project. As much as we’d all love to go back to work as soon as possible, this fight now will protect future generations from the rampant exploitation of Hollywood creatives. We have to do this right before doing it fast.

A word about Artificial Intelligence

It’s worth noting the WGA’s request that producers not turn to AI-generated scripts as a replacement for human writers, and that AI-generated material neither share screen credit nor affect writers’ compensation. Rest assured that whatever agreement the WGA reaches with respect to AI will be emulated by other areas of production that can be affected by it.

It’s becoming more apparent that generative AI will impact post-production. Programs like Synthesia and Runway’s Gen-2 text-to-video model are opening new ways for post-production work to be aided (and in some cases replaced).

Arnold has some thoughts about AI as well:

With the rapid progression of A.I., not only in post-production but all creative fields, the days of making a living as a specialist with one very specific skillset are over. The AI revolution will be the rise of the generalist with a broad range of knowledge in a multitude of crafts & skill sets. If we don’t protect our creative work from A.I. right now – if we don’t regulate what is and is not acceptable for using A.I. in generating original creative material – there is no future discussion to be had. The can cannot be kicked down the road the way we did with streaming as “new media.” This fight over the future of our creative ideas having value is now or never.

It’s unlikely these programs are ready to edit a Christopher Nolan opus or a 12-episode series on a major streamer. But it’s not too far-fetched to see AI tools like these acting as virtual assistant editors, creating stringouts based on descriptions of the kinds of scenes and soundbites you want. It would be short-sighted for the Motion Picture Editors Guild (MPEG) not to factor AI into its negotiations.

All opinions expressed by named or unnamed participants are their own and do not imply an endorsement by Shift Media or any of its employees.

Header image credit: Jacob Owens on Unsplash. WGA strike image courtesy of Jorge Mir (CC BY).

Like many of you, I’m taking a big breath following an exciting week in Vegas for this year’s National Association of Broadcasters (NAB) conference. As always, NAB provided a unique opportunity for us to connect with industry experts, showcase our latest products, get together as a globally-distributed team, and gather valuable feedback from our esteemed customers and partners. It was a great show – and we enjoyed seeing everyone who made the trip to our booth.

A few key themes dominated conversations during the show – here’s what kept coming up at NAB:

Collaborative Workflows: The importance of collaborative workflows in the media and entertainment industry has never been more evident. At NAB, we highlighted our latest innovations in collaborative workflows and shared storage solutions. Our new features, such as universal project sharing, enhanced metadata management, multi-site support and remote editing capabilities, were met with overwhelmingly positive feedback. We are proud to continue our commitment to providing cutting-edge collaborative tools that streamline media production workflows and foster creativity among teams. We continue to drive forward our strategy of Creating Amazing Everywhere.

Hybrid Cloud-Based Solutions: As our CTO, Stephen Tallamy, puts it, “everything seems destined for the cloud… eventually.” That’s still true here in 2023, but the pace and sequencing of that move is different for every team. While some teams are dipping their toes in the water, others are ready to take the plunge but aren’t quite ready to commit to moving their entire workflow to AWS just yet. As a provider of cloud-based solutions, we want to support customers who are ready to start their cloud journey while acknowledging that the right first step looks different for every team.

At NAB, we showcased our latest advancements in hybrid cloud-based editing, media management, and storage solutions. Our hybrid cloud offerings give customers the flexibility, scalability, and cost-efficiency they need to meet the evolving demands of modern media production – sometimes that means a mix of on-prem and cloud, both in storage and media asset management. We’re excited about the possibilities that hybrid cloud-based technologies bring to the industry, and we’re committed to expanding our solutions to help customers stay ahead of the curve. If you’re thinking about a potential hybrid cloud strategy, we have more examples than ever about ‘what good looks like’ that we’d be happy to share.

We were also surprised by the number of customers who have multiple EditShare deployments and are interested in connecting those workflows to create global efficiencies. This is an area where we are innovating and making investments, and we’re pleased that these investments were validated by the customers we spoke with. We’re going to continue investing here – check out our CTO, Stephen Tallamy, discussing our thinking on where hybrid is headed from the NAB floor here.

AI-Driven Media Management: Artificial intelligence (AI) has reached peak hype status, but it’s also transforming the way media assets are managed and monetized. At NAB, we demonstrated our latest AI-driven media management tools that leverage machine learning and automation to streamline media workflows, enhance search capabilities, and optimize media asset organization. Our customers were impressed with the increased efficiency and productivity that our AI-powered solutions bring to their operations.

While I was at NAB, I also participated in the SET Future of Broadcast panel, moderated by Fernando Bittencourt, former CTO of Globo. He kicked off the panel by reading the response he got when he asked ChatGPT, “What is the future of broadcast?” We can debate the quality of ChatGPT’s answer, but the fact that this is even possible should cause us to stop and recognize two things: (1) the world has changed, and (2) our industry is not exempt. The possibilities are enormous – from search to documentation to customer support to how we test our products, there is almost no limit to how we can apply AI and machine-learning technology to the problems media creators face. And I’ll leave it to smarter people than me to talk about the limits and governance that should be placed on it.

The most encouraging part of NAB? Our industry is back. We had 120 channel partners in attendance from all around the world. We had more than double the product demos vs. 2022.  Leads and opportunities coming out of the show were also up. Things are moving in the right direction.

As we reflect on this year’s NAB conference, we are energized by the opportunities and challenges that lie ahead. We remain committed to our mission of Creating Amazing Everywhere by empowering media professionals to create, collaborate on, and deliver exceptional content.

Thank you for your continued support of EditShare. We look forward to spending more time together in 2023.


MediaSilo_Camera_Choice_Post_Production

In the earliest days of filming, the choice of camera (or film stock) didn’t affect the post team; for a long time, it was a relatively settled workflow. In the film days, and even in the tape days of video, there was really only one way of doing things, and much of it was outsourced to a specialized lab. If the camera team decided to shoot Panavision instead of Arriflex, or even Moviecam, it didn’t matter much to the assistant editor. Shooting on Fuji or Kodak film stock might matter to the lab and the final dailies colorist, but the edit team didn’t need to worry. The major issue was whether they shot spherical or anamorphic lenses – one box to tick on a camera report.


With the digital video explosion of the 2000s, however, every camera now comes with its own set of logistical problems and issues, requiring post-production teams to keep up with a wide variety of plugins, file formats and special software that can change with every job.

Even within a single camera, several major decisions can affect how the post pipeline will go, which often means it’s best to have a workflow conversation with the camera team before production begins to get everyone on the same page.

Download our free Guide to Major Camera Platforms now.


RAW Video

The first thing a post team should get a handle on with camera choice is whether the camera is capable of shooting RAW video and, if it is, whether the production is choosing to shoot RAW.

RAW video records the RAW data coming off the sensor before it’s processed into a usable video signal. Depending on the RAW format, camera settings like ISO and White Balance can then be changed in post-production with the same image quality as if you had made the changes in the camera, which can be a great benefit if there were errors on set. RAW video has become incredibly popular over the last decade and is increasingly the default workflow of choice for many productions.

However, there are drawbacks to RAW that cause some productions to continue shooting to a traditional video format, even in a camera that is capable of RAW. First off, the files are often harder for the post team to handle and require processing. If you are shooting something with an exceptionally tight turnaround or with a small post team, it might make more sense to work with a traditional video format.

RAW is primarily beneficial for the flexibility you get in post. If the white balance is off in-camera, you can more easily change it in post with a RAW capture format. With traditional video, settings like white balance and ISO get “baked” into the footage. Some cinematographers prefer to bake exactly the look they want into the camera file and then let the post-production team work with those files without the flexibility of RAW.

RAW cameras are also increasingly capable of shooting two formats at once, or “generating their own proxies.” However, while cameras can do this, it’s not a particularly common practice, for one key reason: it doubles your card download time. If the camera is shooting both an 8K RAW file and a 1080p ProRes file, you need to download both from the camera card to the on-set backup, which increases your download time. Additionally, you need to duplicate everything on the camera card to multiple copies for insurance purposes. In-camera proxies end up eating more time and hard drive space than they save.
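The math behind that tradeoff is easy to sketch. Here is a back-of-the-envelope calculation in Python; the bitrates, shoot length and copy counts are made-up examples for illustration, not measurements from any real camera:

```python
# Rough sketch of why in-camera proxies slow down card offloads:
# every extra recorded stream gets multiplied by the number of backups.

def offload_gb(hours, raw_gbph, proxy_gbph=0.0, backup_copies=2):
    """Total gigabytes copied from camera cards across all backup copies."""
    per_copy = hours * (raw_gbph + proxy_gbph)
    return per_copy * backup_copies

# Hypothetical day: 4 hours of 8K RAW at ~400 GB/hour,
# optionally with a 1080p ProRes proxy at ~30 GB/hour.
print(offload_gb(4, raw_gbph=400))                 # 3200.0
print(offload_gb(4, raw_gbph=400, proxy_gbph=30))  # 3440.0
```

Even a lightweight proxy stream adds hundreds of gigabytes once it is duplicated to every insurance copy, which is why many sets skip in-camera proxies.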

There are a few cameras, however, that offer a new workflow that records the RAW to one card and the proxy to another. This workflow seems like it might take off on sets, since the proxy is immediately available for the editor while the RAW files are still being downloaded to multiple backup copies.

MediaSilo_Camera_Choice_Post_Production

LOG
Once you’ve left the world of RAW capture behind – whether because the camera couldn’t record RAW or because the production chose not to record it – the next decision made on set is whether to capture in LOG or linear video.

Linear video is the world we live in most of the time. When you edit in your NLE, it shows you linear video. Your phone shoots in linear video, and it displays linear video. But the file format created for linear video is only capable of handling a certain amount of dynamic range. For a standard 10-bit video file, that is usually considered to be 7-9 stops of latitude, depending on how you measure dynamic range.

But a 12-bit video sensor, or the incoming 14- and 16-bit sensors, can record a much, much wider range of brightness values. To squeeze that larger dynamic range into a smaller video package, LOG video was created. This process takes the 12-bit linear data coming off a sensor and uses logarithmic encoding to “squeeze” it into a 10-bit video package.
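As a rough illustration of the idea – this is a toy curve, not any real camera’s published formula (S-Log3, ARRI Log C and the rest each define their own) – a logarithmic encode can fold 12-bit linear values into 10-bit code values like this:

```python
import math

# Toy logarithmic encode: squeeze 12-bit linear sensor values (0..4095)
# into 10-bit code values (0..1023). Illustrative only; real LOG curves
# use published formulas, often with a linear segment near black.

def log_encode(linear_12bit):
    x = linear_12bit / 4095.0                    # normalize to 0..1
    y = math.log2(1 + 255 * x) / math.log2(256)  # log curve, output 0..1
    return round(y * 1023)                       # quantize to 10 bits

def log_decode(code_10bit):
    y = code_10bit / 1023.0
    x = (2 ** (y * math.log2(256)) - 1) / 255
    return round(x * 4095)

print(log_encode(0), log_encode(16), log_encode(4095))  # 0 128 1023
```

The point to notice is the allocation: a linear value of 16 out of 4095 (deep shadow detail) lands around code 128 of 1023, so shadows get many code values while highlights share fewer – which is exactly the detail a colorist wants preserved.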

This is a huge benefit for a post-production team that wants to preserve all that light-value detail in the post pipeline for the most flexible color grade possible. However, standard 10-bit video is made to display 10-bit linear images, so footage encoded in LOG tends to look very “flat” or “milky” when displayed without processing.

To overcome this, we use either a LUT (a discrete file you can load into your software and apply to footage) or a transform (a mathematical equation that converts footage from one format to another) to process logarithmic footage so it looks correct in a video space. LUTs have been the default for a long time, but the industry is increasingly moving to transforms for their higher precision and flexibility. The most common transform-based workflows are ACES and the RCM (Resolve Color Management) system built into Blackmagic DaVinci Resolve. For both RCM and ACES, a transform must exist for the camera’s color profile.
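Conceptually, a LUT is just a table you interpolate into. A minimal 1D sketch in Python (real display LUTs are usually 3D lattices shipped as .cube files, and the five sample points below are invented for illustration):

```python
# Minimal sketch of applying a 1D LUT with linear interpolation -- the
# same idea, in one dimension, that a display LUT uses to map LOG code
# values to display values. The 5-point LUT here is made up.

def apply_lut_1d(value, lut):
    """Map a value in [0, 1] through a 1D LUT (a list of output samples)."""
    pos = value * (len(lut) - 1)       # fractional position in the table
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1 - frac) + lut[hi] * frac

# A toy "contrast" LUT sampled at 5 points:
lut = [0.0, 0.1, 0.5, 0.9, 1.0]
print(apply_lut_1d(0.0, lut))    # 0.0
print(apply_lut_1d(0.5, lut))    # 0.5 (lands exactly on the middle sample)
print(apply_lut_1d(0.625, lut))  # halfway between 0.5 and 0.9 -> 0.7
```

A transform, by contrast, is an exact equation rather than a sampled table, which is where its extra precision comes from.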

It is generally considered a good idea to check in with the production to see if they have a preferred workflow for you to use. Whether it’s the camera manufacturer’s LUT, a custom LUT built by the production, or the ACES or RCM systems, make sure you can properly view the footage the production creates. No self-respecting post team should ever be working on an edit with footage in its LOG form.

Timecode & Audio
Another essential factor of camera choice that often gets neglected in post-production conversations is how the camera handles timecode and audio. If you are working on a multi-camera job, a camera with good timecode inputs that can maintain steady timecode will make your life infinitely easier than one that lacks those functions. On the audio side, while we generally still prefer to run dual-system audio, many productions like to run a mix to the camera for backup purposes and to get the edit workflow started more quickly. Ideally, you want a camera with robust, industry-standard audio inputs and outputs.

A final issue to consider is the somewhat obscure but increasingly vital area of file metadata over SDI or HDMI. While this sounds confusing, it’s actually pretty simple: some cameras can pass along certain metadata, including things like the filename, over their HDMI or SDI ports. This can be a huge benefit with camera-to-cloud workflows, where an external box, like a Teradek Cube, encodes real-time proxies for the edit team to receive over the web. If the camera can send the filename out over SDI into that Cube, the proxy files get the right names, making relinking to the full-res files later a snap. Without that output, the camera-to-cloud workflow makes much less sense.

MediaSilo_Camera_Choice_Post_Production

Lens Squeeze
The final issue to worry about is one we worried about in the film days as well: the squeeze of the lenses. The vast majority of productions shoot with spherical lenses, where you don’t need to worry about any squeeze. But there are also “anamorphic” lenses, which take a wide image and squeeze it down to fit a narrower sensor. This is how “widescreen” movies were made in the analog film days: a 2x anamorphic lens would take a roughly 2.39:1 image and squeeze it down onto a 1.33:1 piece of motion picture film. Then on the projector, you’d put a 2x de-anamorphoser to get a “normal”-looking image that filled the widescreen.
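The de-squeeze itself is simple arithmetic – multiply the captured width by the lens’s squeeze factor. (The image area on 35mm film with an optical soundtrack is roughly 1.2:1, which is how a 2x squeeze arrives at 2.39:1.) A quick sketch:

```python
# De-squeeze arithmetic for anamorphic footage: the displayed aspect
# ratio is the captured aspect ratio times the lens squeeze factor.

def desqueezed_aspect(width, height, squeeze):
    """Displayed aspect ratio after expanding the image horizontally."""
    return (width * squeeze) / height

# Classic film case: 2x anamorphic on a ~1.195:1 capture area
print(round(desqueezed_aspect(1.195, 1.0, 2.0), 2))  # 2.39

# Modern case: 1.5x anamorphic on a 16x9 (1.78:1) digital sensor
print(round(desqueezed_aspect(16, 9, 1.5), 2))       # 2.67
```

The same multiplication is what your dailies software applies when you set the anamorphic de-squeeze factor on a clip.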

In the digital era, we tend to do our de-anamorphosing in post-production, often during the dailies stage, expanding the image to look correct. You need to get word from production on whether they shot spherical or anamorphic, and if they shot anamorphic, it’s vital to ask them to shoot a framing chart with each lens they are working with so that you have a reference. Ideally, that framing chart would be taped out with frame lines and also include some recognizable elements, such as perfectly drawn circles and pictures of people, to help troubleshoot any issues in post.

In addition to the standard 2x anamorphic lenses, lens makers have released 1.5x anamorphic lenses designed to work with the wider 16×9 sensors of modern digital cameras. Since the sensor is already wider than the old 1.33:1 film frame (roughly 4×3), the lenses don’t need to squeeze as strongly, so a few vendors have released 1.5x lenses to help cinematographers craft wider images that use the full sensor while offering some of the qualities users love about shooting anamorphic. As you can see, when a production settles on a camera and lens combination, it can majorly affect your post-production workflow.

Download our full guide to the major camera platforms and what features they offer to be helpful to post-production teams.
Get the Guide

For tips on post-production, check out MediaSilo’s guide to Post Production Workflows.

MediaSilo allows for easy management of your media files, seamless collaboration for critical feedback and out of the box synchronization with your timeline for efficient changes. See how MediaSilo is powering modern post production workflows with a 14-day free trial.

Leaked content is a multi-billion dollar problem in our industry, robbing people of revenue and jobs. Watch our Solutions Engineer, Nick, demonstrate how you can securely and confidently screen your pre-released content with Screeners.com.

Hi, my name is Nick Ciccantelli with Shift Media, and today we’re going to talk about our Screeners.com platform. Screeners is a secure OTT-style preview and screening room for your pre-release content, leveraging our watermarking technology. So as a reviewer, I will have been given access to a number of different titles within a certain network. So you can see here that I have access to my Shift Media network.

And when I click into it as a reviewer, I get a very, very straightforward and simple experience where I see the titles I have been given access to. This may be episodic content, or it may be full-length feature film content. There is no implication to storage with the Screeners.com platform. So as you can see, I’ve got access to a few titles here, and if I click into one of them, as I mentioned, I get an OTT-style experience where I can see the episodes that I have been given access to as well as some basic information about those assets.

So we see the name of our asset here, a description. If it is episodic content, we will see that information here, as well as some basic contact information and external links if you’d like to include that in your screening room. When I hit play here, you’ll see that we are generating a personalized watermark for the content that you will be viewing.

As you can see, we’ve got our opaque text that appears destructively on the screen. This can pull in the user’s first and last name if you like. It can pull in their email address, and you can also add custom text, “property of…” for example, to your watermark. As far as the viewer experience is concerned, that is pretty much it. You’ve got your content that you’ve been given access to, you’ve got your bespoke watermark, and now all you need to do is watch your content and hopefully write a good review.

On the administration side of Screeners.com, you’ll see that we have very robust analytics, so you can see how your content is actually performing. We can get very granular information about which users are actually viewing your content and how much of it they are actually watching.

So you can know for certain whether your reviewers are watching your content and how much of it they actually are watching. If we navigate to the Screeners section of administration, you’ll see this is where we can actually manage the content that we are sharing with the world.

So you’ll see we’ve got a handful of titles here. When I click into these titles, I can manage the episodes or other iterations of this content within administration. You’ll see from here we have the option to make this content live once it is ready to go live and be shared out with your reviewers. We have the option to send these assets through unique links that will be sent directly to your reviewer’s inbox as well. In the edit title section of Screeners.com, you’ll see that we have a number of different settings that we can manage for our individual titles with the option to send notifications.

When new titles go live, you can add an additional layer of security to these titles with MFA, and we can also manage the actual watermark templates that the end user will see. Here, you see that we have a template that pulls in the user’s first and last name. We have a number of different templates here that we can make available for specific titles with destructive watermarking, as well as a forensic option, for that extra layer of security to make sure your content will not leak or fall into the hands of people you don’t want it to.

We also have the option to set go-live dates for your titles, as well as dates for those titles to expire, so that you can make sure that people are not watching your content after a point that you don’t want them to. In the user’s section of the administration side of Screeners.com, you’ll be able to manage the audience that you will be sharing your content with. We can categorize this audience by user tags that will allow you to more easily curate distribution lists for your links.

You also have the option to manage these users more granularly and give them access to specific titles that you want them to see. If you choose to share your Screeners directly with your reviewers, with our link workflow, you have an area here of the administration panel where you can manage those links, set expiration dates, decide to expire them if you’d like to and then further add titles to those links as well.

If you like, on the administration side of Screeners.com, you can create multiple watermark templates with various facets, depending on what type of burned-in watermark you want your audience to see. Regardless of what type of watermark you decide to use, you can ensure that your content will be secure and screened safely with Screeners.com.

Thank you so much for taking the time to check out Screeners.com. Please don’t hesitate to visit our website to schedule a demo and learn more about how you can secure your pre-release content.

Miss our interview with Mark Turner, Project Director of Production Technology at MovieLabs? Watch it now to learn more about their 2023 Vision.

Play_Shift_Media_Turner_Movielabs_Interview_Blog_Image


Shift_Media_Edell-GrayMeta_Interview_Blog_Image

Michael recently sat down with Aaron Edell, President and CEO of GrayMeta, to help us make sense of all the new AI / ML technology entering our industry. They discuss the evolution of AI in the workplace, the limitations of this technology, and how you can use it to make your life easier.

Michael: Michael Kammes with Shift Media here at NAB 2023, and today we’re joined by Aaron Edell from GrayMeta. I’ve been looking forward to this interview, this entire NAB, because everyone’s been asking about AI and machine learning, and when I have questions about AI and machine learning, this is who I go to. So, Aaron, thanks for being here today. Let’s start out very simply. Tell me just what the heck GrayMeta is. 

Aaron: Wow. That is actually a loaded question. So, when I started in tech in 2008, my first job was at a company called SAMMA Systems, a little startup based out of New York. It made a robot that moved videotapes into VTRs, and we digitized them. So, okay, set that aside. Keep that in your memory. Years later, we founded GrayMeta, based on the idea that we wanted to extract metadata from files and make it available separately from the files. Now that I’m back at GrayMeta, we have three core products. We have the SAMMA system back, and it’s so much better than it used to be. We used to have eight massive servers and all this equipment. Now it’s, you know, one server, a little bit of equipment, and we can plow through hundreds of thousands, if not millions, of video cassette tapes at customers’ facilities.

And we work with partners. If they’d rather not do it with OPEX, they can buy the equipment from us. And it’s an Emmy award-winning technology. So there are a lot of really, really wonderful proprietary things that archivists love about migrating – you’re not just digitizing. GrayMeta also has, I think, the world’s most accurate reference player and QC software tool, which runs both in the cloud and on-prem, which is pretty cool. The cloud thing is magic, as far as I’m concerned. I don’t know how you play back MXF-wrapped OP1a or, you know, encrypted DCP files off the cloud. Somehow we figured it out. And then we have Curio, our metadata creation platform that uses machine learning to – as the original vision went – take all these files and just create more metadata. So we really are across the media supply chain. And if you were to diagram it out, you would find GrayMeta’s products at different points.

Michael: That’s gotta mean that you have a bunch of announcements along the entire supply chain for NAB. So let’s hear about those.

Aaron: Yes. Well, I think the most exciting is that when I first came back to GrayMeta, which was really not long ago, one of the things I pushed hard for was a new product – or a repositioning of our product. So we were happy to announce it at NAB, and not just announce the product, but that we signed a deal with the Public Media Group of Southern California to buy it. So we’re announcing Curio Anywhere, our machine-learning metadata management platform, which is now available on-prem and can run on an appliance or a local cluster as well as in the cloud. So there are hybrid applications, there are on-prem applications, but I think the most important thing is that all the machine learning can now run locally to where you’re processing the metadata, and that saves a lot of money and a lot of time.

You know, our product – and we’re gonna be expanding on this in the future – allows you to train these models further. We use the word “tune”: tune the models to be more accurate with your content, using your own content as training data. So we’re really excited to announce that at NAB. We’ve also added a whole lot of other features to Iris. We can support QC down to the line – you used to get QC data for the whole frame, but now we can actually look at individual lines in a video. Curio now also supports sidecar audio files with frame-accurate timecode, which was really important, obviously, for a lot of customers. So you can export a shot list or an EDL right out of Curio – say, all of the gun-violence locations, or all of the places where a certain celebrity or known face appears, in a timecode-accurate timeline – which you can then pull into your nonlinear editor.

Michael: We’ll talk about this more later, but I want anyone watching right now to understand just how important the ability to localize machine learning and AI is. It keeps your content secure, and you don’t have to pay the tax of using a cloud provider for their cognitive services. So we’ll talk more about that later, but you need to understand just how important that is. So the main product announcement is offering that. Can you explain some of the features that Curio has in terms of AI and ML?

Aaron: Yes. So the way I like to describe it is: you tell Curio where your storage locations are, and it walks through those locations. And for every file – really, I mean, it doesn’t have to be a video file, but video is the most obvious one – it will apply all of these different machine-learning models. So face recognition, logo detection, speech-to-text, OCR, natural language processing, and, you know, other models like technical cues, simple things like that. Technical cues is a really interesting one because of detecting color bars, right? Color bars come in all shapes and sizes, ironically, which they shouldn’t, because they’re color bars.

Michael: Well, NTSC, no, never the same color. 

Aaron: Exactly. Yes. But the kind of general concept of color bars is something that, for machine learning, it’s so easy for that to detect. But I think what’s really my favorite aspect is what we’re doing with faces right now. And this is, again, going to expand. Let’s say you process a million hours of content, like you’re a public television station in Los Angeles, and there are scientists and artists who you’ve interviewed in the past, maybe not part of a global celebrity recognition database that you get from the big cloud vendors or other vendors, but they’re important. And you want to be able to search by them. So Curio will process all of that content, and it’ll say, I found all these faces. Who is this? Right? You just type in the name, and it immediately trains the model. 

So you don’t have to reprocess all 1 million hours of content. It will just update right there on the spot, instantly. So that’s really powerful, I think, because a lot of folks assume that the machine learning model needs to tell you who it is. But it doesn’t. It just needs to tell a person that something needs tagging. It’s about helping people do their jobs better. We also have a lot of customers with some great use cases. I think reality television is one of the big ones.

Michael: Absolutely. 

Aaron: They have 24 cameras running 24 hours a day, every day for a week. And that’s thousands and thousands and thousands of hours of content. One use case I heard recently was we have a reality show where people try not to laugh, right? I guess things that are funny happen, and they’re not supposed to laugh. And so when they were trying to put together a trailer, they wanted to find all the moments that people were laughing amongst hundreds of thousands of hours of content. So we could solve that immediately. That’s very easy. Just here are all the points where people were smiling. So I’m really excited about some of the simpler things, some of the simpler use cases, which involve not just tagging everything perfectly a hundred percent the first time but helping people do their jobs better and saving them so much time. 

Because imagine you’re an editor, and you’re trying to find that moment where Brad Pitt is holding a gun or something like that amongst your entire archive, or really just any moment of an interview. Let’s say you’re a news organization; You’ve interviewed folks in the past, and maybe somebody passed away, and you need to pull together the footage you have quickly. Machine learning can help you find those moments. So customers use Curio, and they just search for a person. It pulls it up wherever it is, right? It could be stored anywhere in any place, as long as Curio has had a pass at it. It pulls those moments up. Here’s the bit in that moment, in that file, you can watch it and make sure it’s what you want, and then pull it down. It’s a simple use case, but it’s really powerful. 

Michael: Some of the other use cases I’ve talked at length about are things like Frankenbiting. Being able to take something that takes 30 seconds to say, getting it down to 10 seconds by using different words that that person has spoken through different places. That used to be a tedious procedure where you’d have to go back through transcripts, which you had to pay someone to do. Now you can type in those words into something like Curio, find those timestamps in a video, localize that section of video, and string together a Frankenbite without having to spend hours trying to find those words. 

Aaron: Yes. There’s a term for doing that manually, which is called a “watchdown.” I just learned this recently. It’s where editors – and I hope this is the right term; I read about it in an article from Netflix editors – trying to put together trailers just have to watch every hour of everything they own to find the moments they want. And, yeah, nobody should have to do that. You don’t need to do that.

Michael: There’s that great line – it’s kind of cliché at this point – where an editor’s putting something together, and the producer or director isn’t thrilled with the shot, and they say, “Didn’t we do a better take of that?” or “Didn’t we have a take of someone saying that?” And now you can say: no, because you’ve sorted everything, and here is everything that was actually shot, despite what you remember. So there are a lot of misconceptions, right? AI is really hyped right now. I almost wish we had used the word AI five or six years ago, when we were calling it cognitive services – which is not really a sexy term. But there are a lot of misconceptions about what AI and ML are. Can you maybe shed some light on those misconceptions – and the truth, obviously?

Aaron: Yeah, absolutely. So, first of all, the word artificial intelligence is quite old and can literally apply to anything. Your kids are artificially intelligent. Think about that. You’ve created your children, and they are intelligent, right? So it’s, I mean, just like any buzzword that gets thrown around a lot. There are a lot of different meanings. The one misconception that is my favorite is that AI is going to take over the world, or general artificial intelligence is, you know, ten years away, five years away, and they’re gonna kill us all, and that’s it—end of humans. I cannot tell you how far away we are from that. Think about how hard it is to, like, find an email sometimes, right? I mean, computers, you have to tell them what to do. 

There’s no connection between the complexity of a computer and the complexity of a human brain, right? There just isn’t. One of my favorite examples is the course I took that ended up being very important to my career, but I had no idea at the time, back in college a million years ago, which was called, ironically, What Computers Can’t Do. And my favorite example was, imagine you’ve trained a robot with all the knowledge in the world. You then tell the robot to go make coffee. It will never be able to do that because it doesn’t know how to ignore all the knowledge in the world. It’s at the same time thinking, oh, that pen over there is blue, and you know, the date that America was founded, and, um, all of these facts and just information that it has built into it. It doesn’t know how to just make coffee. It doesn’t know how to filter all that stuff out. 

Michael: No pun intended. 

Aaron: No pun intended. Yes. I think that’s something that’s unique to humans: our ability to actually ignore and to just say, yeah, I’m just gonna make coffee. And you barely even think about it, right? 

I think even the most advanced artificial network supercomputers are probably the equivalent of maybe 1% of a crow’s brain, right? So in terms of complexity, again, we’re not talking about the contextual things that humans learn. So that’s my favorite misconception – that artificial intelligence is going to destroy us all and be as smart or smarter than us. 

Now, the distinction is machine learning. Machine learning is a subset of artificial intelligence, and it’s usually implemented using neural networks. These are all different things, right? So we’re drilling down now. A neural network is specifically designed to mimic how a human thinks and how a brain works. And it is a bit mysterious. We had a machine learning model in the past that would learn what you liked, based on what you clicked on a website or something like that.

It would surface more things, and we built this very clever UI that would show it learning as it went along. So let’s say you have millions of people on your website, and they’re clicking things. We have no idea how it’s working. We don’t know. The neural network is drawing connections between nodes and just trying to get from “when the input is x, I want the output to be y.” And you’re just saying, figure out how to do that in the fastest, most efficient way in between. And that’s what humans do. When we learn new words as babies, the first thing we do is make a sound. And then we get feedback from people: “Nope, that’s not bath, that’s ball.” And your brain goes, okay, and tries again, and it’s a little bit better. It tries again. And that’s our neural network building. So in that sense, machine learning models can operate similarly, but they’re so much less complex. The most complex thing in the universe is the human brain. There’s just nothing like it. And I don’t think we’re anywhere close to that. So I don’t think anybody needs to worry about artificial general intelligence taking over, Skynet launching nuclear missiles and killing us all, even though it makes for good movies.
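
The guess-feedback-adjust loop Aaron describes is, loosely, how a neural network trains. Here is a minimal, purely illustrative Python sketch: a single weight learning the rule y = 2x from examples. None of this is GrayMeta’s code; the function names and learning rate are invented for the example.

```python
# A toy "neural network": one weight, learning to map input x to output y.
# Each round it guesses, measures the error (the "nope, that's not bath,
# that's ball" feedback), and nudges the weight to do a little better.

def train(pairs, lr=0.1, epochs=50):
    w = 0.0  # start knowing nothing
    for _ in range(epochs):
        for x, y in pairs:
            guess = w * x
            error = guess - y
            w -= lr * error * x  # adjust the weight toward less error
    return w

# Learn the rule y = 2x from examples alone.
w = train([(1, 2), (2, 4), (3, 6)])
print(round(w, 2))
```

A real network does the same thing across millions of weights at once, which is exactly why, as Aaron says, its internal connections are hard to inspect.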

Michael: I think you put a lot of people’s minds at ease, but there are a lot of creatives in our industry who are seeing things like Stable Diffusion, and reporters are seeing something like ChatGPT being used as a front end to create factual articles. Folks are worried about their jobs being eliminated. And I think one point to remember is that everyone’s job is constantly evolving, right? There’s always been change. But what would you say to the creatives in our industry who are concerned about AI taking their jobs?

Aaron: Your concerns are valid in the sense that, you know, I would never presume to go and tell somebody you should not be worried about anything. It’ll be fine. I don’t know. But throughout my whole career in AI, it’s been true that it should make your job easier. I mean, it’s supposed to be used to make your life easier. So I’ll give you some examples. When I took over as CEO of GrayMeta, one of the things I wanted to do was some marketing, right? And I didn’t have a whole lot of time, and I wanted to get some catchphrases for the website or write things in a succinct way. So I used ChatGPT, and I said, Hey, here’s all the things our products do. These are all the things I wanna talk about. Help me summarize it. Give me ten sentences, bam, bam, bam, bam, bam. They were great. 

Now, that could have been somebody’s job, I suppose, but that was also my job. I could have sat there all day and tried to come up with that myself, but I didn’t have time. So it made my job a lot easier as a marketing person. I used Midjourney to create interesting images to post on LinkedIn and those sorts of things. But there is no person at my company whose job it was to create interesting images. Our only alternative was to go and find some license-free images on the web and post them.

So there were no jobs being lost for us. It only made us, as a small company, more productive. Now, the other part of what we’ve always talked about with machine learning is scale. And, you know, Midjourney is very cool, but if you zoom in a little bit, eyeballs are [off center], fingers are weird. I’m sure that will improve over time. And we all need to be very careful about understanding what we see on the internet: when images are fake, when music is fake, and when content is artificially created or written by artificial intelligence. But I still think humans are needed, because I don’t think we’re ever going to get the creativity that humans have. It’s the same kind of example I was talking about earlier.

You can’t train a robot to make coffee even if you give it all the knowledge in the world, and I don’t think you can train an AI to be an artist in the same way that humans are. There’s just something about the human experience and the way we process information that can’t be replicated. But it can make an artist’s job easier, you know? Add some snow, or change the color of this image. Or, as somebody who needs to acquire art or creative images, it helps me at least give a creative person some examples. Like, can I have an image that sort of looks like these things? As a prompt.

I do think there’s evolution in the jobs. I think we should all try to think of it as a way that makes our jobs more efficient, saving us time. I like to think of it as just taking over the laborious parts of our jobs. I mean, think about logging. Editors logging: that was my first job as an intern at KGO Television, watching tapes and logging every second, right?

Michael: And to some extent, that’s still done. Unscripted still does that. 

Aaron: Yes. Yes, and they shouldn’t. I don’t think they have to. I think any logger, aside from maybe an intern, would appreciate editing a log that’s been created by AI instead of creating it. So imagine you have your log, and all your job is, is to make sure it’s right. That’s such a better use of human brains. We’re so good at seeing something and saying, yes, that’s correct, or that’s not correct. Or that’s a, or that’s b, instead of having to come up with the information ourselves from scratch. Creating it from scratch takes so much more time and ties up the laborious aspects of our brain that could be put to better use. So that’s how I see it. I’m sure there are examples of people losing their jobs because the team just wants to use Midjourney or something like that. And, yeah, I mean, that sucks. Nobody should have to lose their job over that. But I think it’s the same thing as when we stopped using horses, right? Once we had cars, you no longer needed horses, or people to take care of your horses.

Michael: Right. But now, mechanics.

Aaron: Now, mechanics. Exactly. So now we need people who can make good use of these machine learning models, and people who know how to train them and understand them. And it should all kind of float around and work out towards a future where our jobs are different.

Michael: You used the term laborious, and I think that’s what most people need to realize: the tasks that AI is going to do are the things that we don’t really want to do, right? At HPA, we saw that Avatar: The Way of Water had something like a thousand different deliverable packages, each with its own deliverables inside of that. That’s tens of thousands of hours of work that, at some point, we can automate, so we don’t need to do it and can move on and make another film. So for the next 12 months, let’s say until NAB 2024, since the AI landscape is changing so quickly: what tasks would you say AI should do, and what should humans do?

Aaron: Let’s take content moderation, for example. So you’re distributing your titles to a location where there can’t be nudity, or gun violence, or even where certain words can’t be spoken. I would tell AI to go and process all of that with content moderation that is trained to detect those things. But I wouldn’t just trust it. I would tell the human to review it. So now your job, instead of going through every single hour of every single file you’re sending over there, is just to review and check for false positives or false negatives. That should save you 80% of your time, assuming the models are 80% accurate. Right? So that’s an example of what I would say: humans review, machines process. That’s simple. I would say that’s the best human-in-the-loop aspect of any kind of machine-learning pipeline or ecosystem.
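
The human-in-the-loop split Aaron describes can be sketched as a simple triage function: the model scores each clip, confident results are handled automatically, and only the uncertain middle goes to a reviewer. The thresholds, score field, and clip records below are all illustrative assumptions, not any real moderation API:

```python
# Hypothetical human-in-the-loop moderation pass: an ML model has already
# scored each clip, and we route clips by confidence. Only the uncertain
# middle band lands in a human reviewer's queue.

def triage(clips, flag_threshold=0.8, review_threshold=0.5):
    auto_clear, needs_review, auto_flag = [], [], []
    for clip in clips:
        score = clip["violence_score"]  # assumed output of a moderation model
        if score >= flag_threshold:
            auto_flag.append(clip["id"])      # confident positive
        elif score >= review_threshold:
            needs_review.append(clip["id"])   # human checks these
        else:
            auto_clear.append(clip["id"])     # confident negative
    return auto_clear, needs_review, auto_flag

clips = [
    {"id": "c1", "violence_score": 0.05},
    {"id": "c2", "violence_score": 0.65},
    {"id": "c3", "violence_score": 0.92},
]
print(triage(clips))  # (['c1'], ['c2'], ['c3'])
```

With a reasonably accurate model, most clips land in the confident buckets and the reviewer’s queue shrinks to the fraction the model can’t call, which is where Aaron’s rough time-savings estimate comes from.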

Michael: I am immensely grateful that you’ve come here today. I wanna do another webinar, another video with you. We’ll find a way to make that happen. If you’re interested in AI and ML as it pertains to our industry – M&E, check out GrayMeta. He’s Aaron. I’m Michael Kammes with Shift Media here at NAB 2023. And thanks for watching.

Miss our interview with Mark Turner, Project Director of Production Technology at MovieLabs? Watch it now to learn more about their 2023 Vision.

Play_Shift_Media_Turner_Movielabs_Interview_Blog_Image

For tips on post-production, check out MediaSilo’s guide to Post Production Workflows.

MediaSilo allows for easy management of your media files, seamless collaboration for critical feedback and out of the box synchronization with your timeline for efficient changes. See how MediaSilo is powering modern post production workflows with a 14-day free trial.

Shift_Media_Mathur-Avid_Interview_Blog_Image

Michael sat down with Shailendra Mathur, VP of Technology and Architecture at Avid, to discuss how they study and implement new opportunities AI brings to the industry. From integrations in Azure analytics to the RAD Lab, Shailendra explains how Avid investigates and decides when and how to utilize the latest technologies.

Michael: Hi, Michael Kammes here from Shift Media, and today we’re sitting down with Shailendra Mathur, VP of Technology and Architecture at Avid. Thank you so much for being here today.

Shailendra: Thanks for having me, Michael.

Michael: I am thrilled. We talked to a lot of people this week about technology, and a lot of it’s about what the announcements are. There’s a lot of press on Avid’s announcements, but we’re gonna talk about AI because, obviously, that’s the hot thing right now, and we’re really excited to see what Avid is doing with AI. So let’s start, kind of at the top level. Avid’s done a lot of research into AI, and there’s been a lot of transparency and publishing of papers. Can you go over how AI is handled internally, the lab that Avid has, and how that documentation is getting out to the world?

Shailendra: Yeah, absolutely. So, Avid has had AI integrations, for example, with our media asset management system. We have integrations with the Azure analytics service. So that’s how we enrich metadata, and we can search using expression analysis and other facets. So we’ve been kind of utilizing some of the AI functionality on that side, but we also started something called the RAD Lab, which is the research and advanced development. It’s RAD!

Michael: I like that.

Shailendra: And, frankly, it was also a way of bringing in researchers, the young folks who are out there right now. Some of these are internship programs, but these are fail-fast, succeed-fast efforts: investigate and figure out what we want to do with some of the technologies, because there are so many ideas of how we should be doing AI for editorial, for asset management. There are so many. Which ones do we pick first? So, using the RAD Lab, we did quite a bit of research, and part of the mission was not just to keep it private to ourselves. As you said, we’ve been publishing, and it’s also because of the collaborations, right? We’ve published at the SMPTE conference, and we had HPA presentations last year and this year.

Those have also brought in other collaborators. And you know, when we’re picking some technologies to investigate, other people have been contributing and saying, “Hey, did you think of this?” So that’s been our mission. In terms of what we’ve done so far in that research, there are things like AI-based codecs. That’s something we started looking at, especially when we looked at storage efficiency. You know, HEVC, AV1, these are all proceeding anyway, but AI adds another aspect to codecs, so we started investigating that. That’s part of what’s published in the SMPTE journal as well. Some of the results we brought out are looking at things like semantic search technologies. Of course, ChatGPT is everywhere.

But it’s more the open AI models that actually help semantic indexing and semantic search. So that’s been another one. Related to that have been things like saliency maps and figuring out contextual information from images that can be actually used for different purposes. So that’s another paper that we published, which basically allows for better compression and color correction, extracting regions of interest. This is some of the work that we are doing and publishing, and you’ll probably see more coming out as a result of this work. So this is just research, but yes, there will be productization as well.

Michael: What would you say the ethos is for Avid in terms of how they view AI and AI’s role?

Shailendra: The ethos is that it’s all to help the creatives. Creatives are the life and blood of this industry. Whatever we do, we want to make sure that it’s an assistive technology versus something that’s replacing anybody. This is not about replacing. It’s all about assisting. It’s about recommending, right? Even when you look at ChatGPT, we think of these as recommendation engines, right? It’s recommending how to do things better, right? That’s really the ethos that we are following.

Michael: To get a little bit more specific on where AI fits. Now, I’m sure by NAB 2024, we’ll be sitting down, and the conversation will be skewed a little bit. But what tasks for creatives today would you say this is AI, and what tasks would still be in the creative realm?

Shailendra: Like I said, it’s a lot to do with recommendations, right? Just think of what we do with search today. Today, a lot of folks have to log metadata, right? Right up front. If you don’t have the metadata, you can’t search for content appropriately. So it’s a pretty established field that you can use ML-based models for metadata augmentation, right? That’s well understood. But then also, as a creative, you may be missing other related content. That’s where contextual search, or semantic search, comes in. It may not be exactly the person’s name; it could be another language, it could be some other information, or the person changed names. That semantic information is now giving you a richer set of information back to work with as a creative.
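
Mechanically, semantic search of the kind Shailendra describes works by embedding queries and assets as vectors and ranking by similarity rather than by exact keyword matches. Here is a toy Python sketch with tiny hand-made "embeddings"; real systems use learned embedding models, and every vector and title below is invented for illustration (this is not Avid’s implementation):

```python
# Toy semantic search: assets and queries live as vectors, and matches are
# ranked by cosine similarity instead of exact keyword hits.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings: dimensions loosely mean [sports, politics, weather].
index = {
    "post-game press conference": [0.9, 0.1, 0.0],
    "election night coverage":    [0.1, 0.9, 0.0],
    "storm damage b-roll":        [0.0, 0.1, 0.9],
}

def search(query_vec, index, top_k=1):
    ranked = sorted(index, key=lambda k: cosine(query_vec, index[k]), reverse=True)
    return ranked[:top_k]

# A query about "the coach's interview" embeds near the sports axis, so it
# finds the right asset even though it shares no keywords with the title.
print(search([0.8, 0.2, 0.1], index))  # ['post-game press conference']
```

Because the match is by vector proximity, a query phrased in different words, or even a different language, can still land on the right asset, which is what makes this richer than literal metadata search.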

Shailendra: And the same thing with a journalist. You might be writing a story, right? You’re writing a story, but something else is happening, and you want to make sure you can capture what’s happening out there. Or there could be some content to be used as B-roll, or content in your archive that you weren’t even aware of. As you’re writing, this is all assisting you in writing the news story. But it could also be scriptwriting. In fact, it’s interesting that at HPA this year, Rob Gonsalves, who is part of our team, gave a presentation where he literally started showing how you could generate some script and start putting some animatics together, all using this technology. This is not replacing the creative; he was acting as the creative, and this was just speeding up his work. Right? So I think that’s the way this is going to proceed.

Michael: That brings me to my next question because everyone in the industry is concerned about this – “What’s my future as a creative, as an editor, as somebody who does VFX or motion graphics? Do I have to worry about machine learning and AI taking my job?” And what would be your response to that?

Shailendra: No, I think this is one of the fears that everybody has. The way I think about this is that it’s AI, you know. You can say it’s taking over the world, but no. I mean, even our brains, we ourselves, I mean, I come from a research background in computer vision, and we’ve studied neurology. And as part of what we learned, we’ve barely mapped out 10% of our brain. How can we say that AI will replace our brains when we don’t ourselves know how our brains work? What it is doing is a lot of mimicking and basically has a lot of horsepower to do things. So will it get there? Maybe? I don’t know. But at this point, I’m a glass-half-full guy, you know. I’d rather focus on the positives of where it can assist us and where it can help us. I don’t think it’ll take over the jobs. It is going to be about assisting. There will be job changes. Sure. But those job changes will be very positive in my mind.

Michael: And well, that’s also been the job of a creative since the beginning of motion pictures, right? Your job has always evolved, whether it’s cutting celluloid or cutting video, or, you know, not using a bin button but instead logging stuff into a computer. It all has constantly evolved.

Shailendra: You’re just doing it faster now. Somebody still has the job of curating content, but now you’re being assisted in that. I don’t think it’s gonna take over jobs. It will change them, for sure.

Michael: We sat down with Mark Turner from MovieLabs, who obviously has, as you probably know, put out the 2030 Vision paper, and there are ten principles outlined in that. I’m curious, has there been any work in RAD regarding AI and how it plays into MovieLabs’ 2030 Vision?

Shailendra: So, what’s very interesting is that MovieLabs, the EBU and SMPTE actually just published the ontology primer, which we really believe in, because we believe that asset management, as it stands right now, will move toward much more of a knowledge management model going forward. And that primer literally lays this principle out as well. It’s one of the core principles moving forward. So we are very much aligned with that. And yes, that is going to be one of the areas we are very interested in, and we’re working together with MovieLabs and others to bring that out. What does that look like? This is all part of the RAD Lab projects too. There are graph databases coming up and implementations around that. So these are all areas we’ll continue focusing on together with MovieLabs. As for the rest of the MovieLabs 2030 Vision, we’re already showcasing products that are starting to show the way forward. Things like bringing the application to the media asset

Michael: Yeah. That media is sitting in the cloud.

Shailendra: Exactly. So there are three ways we are doing that. Literally, virtualized editing that’s actually happening; our customers are leveraging that today, in the cloud, on public cloud storage, working directly on it. We have a web browser view that allows you to edit and asset manage. So again, even though the web browser view is remote, you might be sitting remotely, but it is close to the media because you’re not moving the whole content over. That’s another way of thinking of it. And we just introduced NEXIS | EDGE. NEXIS | EDGE as a product is the same thing, but in that case it’s not a browser view. It’s a much richer editorial environment, the full editing system, where you’re just accessing the media remotely in a streaming mode. So these are all aligned with MovieLabs’ principles, the cloud principles. So [we] completely believe in where they’re going and will be right along for the journey.

Michael: Excellent. Shailendra, thank you so much for your time. You’re welcome. I’m Michael Kammes with Shift Media here at NAB 2023. And thanks for watching.

Shailendra: Thank you.
