For creative professionals, showreels are a resume, portfolio and calling card all rolled into one. Whether you work at an agency, production company or any of the dozens of other businesses where creativity is at a premium, showing your work in the best possible light can mean the difference between getting the gig and being forgotten.
Creative and marketing agencies of all types use reels for pitching agency services, demonstrating category experience, illustrating case studies and differentiating their creative thinking processes from those of competitors. Meanwhile, companies that represent production, post-production and supporting talent use their reels to show off the past work of their talented creatives, highlight specific skills or technical capabilities, and draw attention to the specific voices of their artisans.
While those two categories of businesses often have very different goals when showing reels, they have even more in common. All of them ultimately are accessing the power of sight, sound and motion to present the best of their creative output.
In this guide, we’ll highlight how creative agencies and the production world can each optimize their reels to get the attention they deserve and win more business. Then we’ll review the most critical considerations in setting up an in-house system for building great reels.
Pain-Free Pitching For Ad Agencies
The traditional agency pitch is a drawn-out, sprawling, cumbersome process that has now spread throughout the creative industry. From pitching on a client’s entire creative business to bidding on a specific project or campaign, creative agencies in advertising, marketing, experiential, PR and digital all dutifully invest time, creative energy and resources into frustratingly rigid dog-and-pony shows.
Most consultants’ pitch templates are older than the internet.
The average agency spent 22.2 days of staff time last year on each pitch it entered – the equivalent of one employee working a full month per pitch, eleven times a year.
“We’re meant to be in the business of creativity, but the focus has shifted…The average agency now spends around 2,000 hours a year working on pitches, time that’s often tacked on to the end of the working day.”
Lucy Taylor, MullenLowe Group UK
So how can you shift the odds in your favor when preparing for the dreaded pitch? Consider the basic criteria clients use to determine the fit of any agency:
You understand the client’s business, vision and immediate needs.
You have experience in their industry.
You have a recognizable roster of previous clients.
You have the right mix and level of capabilities.
You can personalize your solution to them.
The most important thing to remember is that the best reels reflect the specific client watching them and demonstrate what you can do for their exact needs. Every client is different, so the best reel you can use is one customized particularly for them. If you’re going to engage with multiple prospects, it makes sense to scale up your reel-building capabilities internally. Doing so will allow you to conduct business development proactively, respond more swiftly to requests, reduce the expense of customizing your reels and, most importantly, increase the “at bats” your agency gets by pitching as many clients as possible.
A Digital Foot In The Door — Reel Building For Prodcos And Post Vendors
When you first approach an agency or client, you often don’t even get the chance to talk to anyone. It’s only after you’ve sent a reel and they like it that you get to have an actual conversation about the talent you represent. Whether you’re an independent rep hoping to get your future star on an agency’s radar or an executive producer with a director or editor who’s ready to tear up the awards circuit, the first step is getting agency producers or creatives to take notice. And that means you need a showreel. But how do you make your showreels work harder and cut through the clutter amidst so much competition? Our guide will help you make the most of your work by making the most of your reels.
Use advertising tactics to cut through the clutter.
You want a combination of reach and frequency, so use a platform that allows you to track viewership metrics and follow up. Keeping in touch and maintaining an ongoing relationship ensures viewers stay engaged.
Choose work that demonstrates you’ll take their project to the next level. You want to show them how good their work could be if they hire you.
Specialize your reel for their project.
If you’re really strong in multiple genres or styles, there’s nothing wrong with creating separate reels for each of them, but it’s often a good idea to refrain from putting it all on one showreel (unless specifically requested).
Leave them wanting more.
Especially with new and developing talent, having a short reel is fine, and far preferable to a reel of mixed quality with some flawed pieces that aren’t up to standard. As a guideline, people will assume you’re as good as the worst work they’ve seen from you.
Look the part.
Include company graphics or animation at the head and tail, and make the hosting page or presentation look flawless. If possible, you may even want to include their logo or customize the reel with a mention of the project.
Building a great showreel can be a bit like building a great meal. You want enough courses to make everyone satisfied, but you don’t want to overwhelm with volume.
In the end, what you choose to put on the reel is about reassuring a prospective client that you have the knowledge and skills to make their project great, even if you haven’t done that exact thing before. That connection between your work and the client’s needs is what gives a reel the best chance of getting you in the door. So the more customized you can make the reel, the better your chances of winning the job.
What Matters When You’re Building Reels In-House
So you’ve decided to set up an in-house system for making your reels. There are a few basic parameters to keep in mind when you’re setting up your reel-building system:
Organization: Every additional step it takes to get your work samples onto your finished reel is another obstacle between you and your potential new work. You want to have anything that you might ever use on a reel in one place so that no one has to go hunting for an asset or version. And you want all the pieces well organized and in compatible formats, ensuring your presentations are consistent and reliable.
Simplicity, Speed, and Efficiency: Your system should be straightforward and non-technical enough that anyone on your staff can create a great reel in a pinch, even if you’re out of the office. And it should be efficient enough that it can be done on short notice. Your responsiveness alone—along with your ability to turn around a beautiful presentation quickly—will make a strong impression from the start and potentially get you the chance to compete for projects you might have missed out on otherwise.
Customization: You should be able to make customized reels for any client or project and adjust the look and presentation so that each client feels like the reel was made just for them. Find a solution with premade templates, customizable design themes and drag-and-drop presentation-building features to ensure consistency and deliver presentations quickly.
Security: Unlike showcase websites and traditional asset storage solutions, modern asset management and reel-building systems can also offer higher-tech security features, like watermarking and personal access codes. Some platforms offer more advanced multi-factor security and even integrate it with their analytics so you can keep track of who is seeing your work in real-time. You need to keep your materials safe and ensure that only the right eyes see them. This also shows prospective clients that you understand their security needs and how to keep your future work for them secure.
Analytics: You want to be able to harvest a robust set of data from the reels you send out. Being able to tell who has looked at a reel, which parts they watched, and for how long can be critical in pursuing new business and following up with the presentations you’ve already sent out. Finding this type of solution can help you make better, more profitable new business decisions.
Make Your Next Pitch A Fast One
The arduous process of building reels isn’t going away any time soon. But with tools that let you quickly present the beautiful work you’ve already created, you can get on the shortlist and possibly even short-circuit the process of winning more work.
Wiredrive by EditShare can help you share winning work with the world. Contact us to get started with a demo of the industry gold standard for building and presenting winning showreels.
Docs are real films
Since the Lumière brothers wowed audiences with footage of a train pulling into a station in 1896, documentary films have been a part of cinema. Some documentaries record life as it happens, and some films interweave re-enactments, graphics, or archival footage. But all documentaries promise to grant the viewer a glimpse of the truth. They seek to deliver insight into the truth of things instead of just the pantomime of reality that narrative fiction brings to the screen. For those brave filmmakers seeking to challenge, inspire, and illuminate the world through documentaries, we present this guide to making real films.
Some documentaries follow a “hands-off” approach. The goal of these filmmakers is to be a fly on the wall. They want to introduce as little disruption to the events they document as humanly possible. Others see limitations to this approach and cast actors to amplify the emotional impact of a story.
We want to get into the nuts and bolts of crafting a compelling doc. Mastery of lensing, the use of audio, story structure, editing, music clearances, and distribution lead to a total package that rises above the noise. Of course, there are also different levels of documentary. You can see docs shot on phones and docs shot on RED cameras. The critical thing is that you use all the tools at your disposal to draw the audience into the emotional arc of the story. So, we will cover the planning, shooting, and editing so that you can go into your next unscripted project fully prepared. It’s true that beginners can just grab a phone and get shooting. But we want to push things to the next level and see what it takes to craft a top-tier documentary.
Filmmaker Elaine McMillion Sheldon has worked in both modes. An Academy Award-nominated, Peabody-winning and two-time Emmy-winning documentarian, she premiered her latest feature-length documentary, “King Coal,” at the 2023 Sundance Film Festival. The film employs the latter method to weave an evocative story about the coal communities of her childhood.
In an interview with EditShare, the filmmaker describes casting the two young girls who help to tell the story visually. They were cast from the local community for their ability to “ignore the camera,” and their presence in the film provides a visual metaphor for the experiences of the locals. Sheldon also wrote and delivered the narration, revealing her own perspective as another layer in this ethereal documentary. These divergent techniques show how much a documentary film is the product of the director’s heart and soul. Documentaries embody a point of view and express a desire to bring the viewer into that perspective and see real life through the eyes of another.
Story priorities should guide camera choice
Some filmmakers get more excited about buying gear than making films. They make more videos about buying, comparing, unboxing, and reviewing equipment than actual films. This is partly because YouTube rewards this kind of video with ad revenue, and partly because nerding out on technology is easier than going out and shooting something that exposes your soul to an audience and invites criticism or rejection. Choosing a brand can be like choosing a sports team or a personal fashion style. It is too easy for your camera to become an extension of your ego and a mark of your belonging to a particular tribe. That said, there’s nothing like bringing home a new baby cine cam and getting it all rigged up to go make a movie. The right gear prevents headaches and amplifies your creativity. So I’m not one who says cameras are just tools; they are more than that. They become the means through which you see, create, and communicate. Like the sword of a samurai or a knight, cameras become part of our professional identity as image makers, and that’s why we agonize over the choice. Just don’t let your tools define your personal identity, and you’ll be okay.
How do you determine the best camera choice? Consider your storytelling priorities. If you are putting together a run-and-gun project, consider these questions.
Do you need to quickly boot up to grab shots on the fly?
Do you have a crew, or are you a one-person band?
Will you have a separate sound person, or will you record audio right into the camera?
Do you need a lightweight camera to go on a small gimbal?
Will you have the time to manage a matte box and filters in the field?
On the other hand, you might have more time to compose shots and the ability to take a bit more gear. If you have a small crew, you can take a more capable camera and give some attention to lighting. In that case, image quality may trump extreme portability.
The Canon C300 series has long been considered the king of documentary cameras. It works with Canon EF mount photo lenses. Those lenses are high quality and available just about anywhere in the world. You can run your XLR audio cables right into the camera as well. The C300 MK III also features built-in ND filters. That makes it easy to adjust your exposure while shooting outside. Canon is, of course, renowned for its warm, pleasing images, and the C300 MK III delivers on image quality. At $8,999, it is not inexpensive, but it provides excellent value and can be rented at reasonable rates.
Sony has been working hard to elevate the mirrorless camera form factor to provide the functionality of a cinema camera. The FX3 has a full-frame sensor, excellent auto-focus capabilities, and built-in image stabilization. Its tiny size makes it perfect for shooters who need to be agile and low profile. While it can send a RAW signal from the HDMI jack, you won’t get ProRes or RAW recording in the camera body itself. The Sony comes in at $3,899, making it less than half the Canon C300 MK III cost.
Panasonic combines a smaller Micro Four Thirds sensor with the professional ProRes recording format. This makes the GH6 attractive because its lenses are smaller and lighter, and since it records to a professional codec internally, you eliminate the need to record externally. At just $1,697, it is less than half the price of the Sony FX3. Panasonic’s weakness is that its autofocus performance doesn’t match the Sony or Canon. But it does offer an enhancement for Lumix lenses: you can configure them to pull focus in a repeatable, manual-style way instead of variable focus-by-wire. This is important if you are shooting a scene where a subject needs to hit a mark.
When image quality takes priority over speed of operation, you might turn your attention to cine cameras, like the new RED Komodo X. At $9,995, it is the most expensive of the run-and-gun options here. RED offers in-body REDCODE RAW, a 16-bit RAW recording format that provides maximum flexibility in post. As a documentary shooter, I love shooting in REDCODE RAW because it is the most forgiving format when you’ve over- or under-exposed a shot. You can’t always control the light in a given situation; that is where raw codecs shine. To my eye, the images that come out of RED cameras “look right.” REDCODE RAW is also flexible, allowing quality levels that deliver raw images at data rates lower than ProRes. But RED cameras are designed with a crew in mind. Yes, you can use them solo; RED offers a PL mount with electronic ND filters, for instance, but that solution does not come cheap. So remember that RED cameras work best when paired with a camera assistant.
I won’t lie. When in a pinch, the iPhone Pro Max will deliver a ProRes (512GB version $1,399) image that can work. The new cinematic mode even allows you to pull focus in post! The software will denoise footage and manage highlights for you. Its in-body image stabilization is just amazing. The biggest issue I have with it is the lenses. It can be challenging to rid your image of unwanted glare when shooting with a backlit subject, and the “smart” features of the camera can often work against you. Battery life, heat, internal capacity, and file offload speed are issues with this camera. But every year, Apple delivers better video performance. Most of all, if you are trying to shoot unobtrusively, the iPhone could be your go-to tool of choice.
The high end:
The cameras listed above favor a lightweight style of documentary shooting. If your project demands the highest quality of images, and you have the budget for lights and crew, the RED V-Raptor ($24,995) makes a lot of sense. When a project calls for high-resolution images, the V-Raptor delivers 8K at 120fps in REDCODE RAW. It is RED’s best camera for low light and combines superior dynamic range with a body built for efficiency when you work with a team. If you are working on the next “Chef’s Table” cinematic-style documentary, then the RED V-Raptor might be the right choice.
Even if a film doesn’t use a “high-end” camera, that doesn’t mean it can’t have a cinematic look. Lens choice, quality camera support, and lighting techniques can create beautiful images with low-cost equipment. The real challenge in creating compelling b-roll is not equipment but conceptualizing sequences that communicate ideas. Visual metaphors and vignettes go much further toward engaging a viewer than merely pretty visuals. So work to find ways to shoot sequences of shots that amplify the message of your interviewees.
Audio is more important than video
You’ve probably heard the old adage that viewers will forgive bad video quality but not bad audio quality. That is especially true for documentaries. Viewers understand that capturing life as it happens means trading polished visuals for behind-the-scenes access.
The best way to ensure top-quality audio for your documentary film is to hire a sound mixer when you shoot. They will be able to provide the microphones for the right situation, balance levels on the fly, and ensure that unwanted sounds aren’t affecting your recording.
Documentarians often don’t have the budget to hire a sound mixer, or their setup calls for something as minimal as a shotgun mic on a boom pole over a seated interviewee. Here are some recommendations for gear to help the solo shooter get the best audio for their film.
The Zoom F3 audio recorder features 32-bit float recording, which means you don’t have to worry about your recording “clipping” when an interviewee gets too loud. The F3 also offers timecode sync via an optional Bluetooth adapter, which pairs with the UltraSync Blue and UltraSync One timecode boxes. When recording an interview, you can run a cable from the line out into your camera. This setup gives you a clean scratch recording in the camera and a higher-quality 32-bit float recording in the F3 itself. (It also provides a little insurance if you forget to press record on your audio recorder – not that anyone has ever done such a thing!) The F3 is small yet can supply phantom power to XLR microphones that need it. If you need more than two channels, the Zoom F6 is a good choice.
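To see why 32-bit float matters, here is a minimal numerical sketch (illustrative only, not the F3’s actual signal path): a signal that peaks above full scale is hard-clipped when stored as 16-bit integers, while a float capture keeps the overs and can simply be pulled down 6 dB in post.

```python
import numpy as np

# A test tone peaking at twice full scale (+6 dBFS) -- i.e., recorded "too hot".
t = np.linspace(0, 1, 48000, endpoint=False)
signal = 2.0 * np.sin(2 * np.pi * 440 * t)

# 16-bit fixed-point capture: everything beyond full scale is flattened.
int16_samples = np.round(np.clip(signal, -1.0, 1.0) * 32767).astype(np.int16)
clipped = int16_samples / 32767.0  # permanently distorted

# 32-bit float capture: values above 1.0 are stored as-is...
float32_samples = signal.astype(np.float32)
# ...so lowering the level by 6 dB in post recovers an undistorted waveform.
recovered = float32_samples * 0.5

print("16-bit peak (stuck at full scale):", np.max(np.abs(clipped)))
print("float peak after 6 dB gain-down:", np.max(np.abs(recovered)))
```

The same logic is why a hot moment from an interviewee is a non-event on a 32-bit float recorder: the “clipping” only becomes permanent when the audio is quantized to a fixed-point format.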
Another popular method of syncing timecode is the Tentacle Sync system. Tentacle Sync is used by sound professionals worldwide to ensure that the audio recorder and camera stay perfectly in sync.
Microphones
There are so many choices for microphones. And more than any other area of audio/video production, “you get what you pay for” applies to mics. More expensive mics sound better. But for the typical documentary shooter, the keys to success are simplicity and reliability.
Just about everyone has a smartphone. You can use the Apogee ClipMic Digital 2 to plug into a phone and record right to it. It works with the same UltraSync timecode system as the Zoom F3 to keep all your recordings in sync. At $199, the ClipMic provides one of the least expensive ways to get quality audio for the documentary shooter.
Rode just released a 32-bit float, timecode-enabled lavalier system. This solution eliminates the need for an additional audio recorder like the Zoom F3. You also won’t need external timecode boxes like the UltraSync.
Sennheiser delivers a great-sounding and simple-to-use lavalier mic system. The build quality of these mics is a step up from the lower-cost solutions. They fall into a space in the market below the higher-end professional solutions but above the prosumer offerings.
Hiding lav mics
When a lav mic shows in the shot, it really reminds you that you are watching a produced piece. Ironically, it takes more work to hide the mic. Sean Woods has a great 3-part series on how to hide lav mics on different shirts and then how to easily EQ the mic to compensate for it being under clothing.
If your subject is stationary, like in a sit-down interview, or if you have a boom op on set, you’ll want to consider a shotgun microphone. Shotgun mics reject noise around the subject and focus on a narrow pickup pattern, which makes them great for environments where you can’t control the ambient audio. The Sennheiser MKE 600 can run on phantom power from your camera or audio recorder, or on a single AA battery when phantom power isn’t available.
As microphones increase in price, they become more specialized. As you consider the various environments you’ll record in, you begin to see why sound pros bring a selection of mics with them on location. One of the most highly regarded microphones for indoor recording is the Sennheiser MKH 50. It provides a beautifully warm natural recording that will elevate your audio to a professional level. It isn’t the best choice when you are outside or if the mic has to be far from the subject.
Rigging a boom pole
If you are conducting a sit-down interview, having a solid rig to hold your mic over your interviewee is essential. You’ll want to position the mic out-of-frame and pointed toward your subject’s chin. Here are the key pieces of equipment that will help you build a solid and lightweight rig.
The reverse stand is great for packing in a case when traveling. It is lighter than a standard baby stand and easier to work with than a C-stand. It has a “baby pin” on the tip, which makes it compatible with video-oriented rigging gear, unlike many light stands designed for still photography.
Place a grip head on the reverse stand to hold accessories firmly. Don’t skimp and buy a plastic one; this piece of kit prevents your boom pole from slipping and bonking your interviewee on the head.
A boom pole holder, or yoke, will enable you to position your boom pole at a slightly upward angle. This positioning will compensate for the flex in your boom pole when it extends. Make sure that you don’t skip this piece and try to put your boom pole through the grip head.
The most important tool for rigging your mic is the boom pole. You can use it handheld or as a part of a rig. This boom pole from K-tek has an internal XLR cable that will keep your rig tidy. It extends so you can place your stand away from your subject and keep it out of the shot.
Your shotgun mic is held in place by a shock mount. This mount eliminates handling vibrations from being transferred into the microphone. Make sure that the shock mount you choose fits your microphone. For instance, the Sennheiser MKH 50 P48 has its own shock mount.
The last thing you want is for your microphone rig to fall over. You can take a boa bag or other shot bag and wrap it around the base of your stand. This is a critical piece for the safety of both your gear and the on-camera talent.
Don’t forget a high-quality XLR cable. This cable will go from the back of the boom pole to the audio recorder. It will transfer power to the microphone and signal to the audio recorder. It can be convenient to have this cable be a different color than black to distinguish it from all the other cables in your kit.
Headphones
A good pair of closed-back headphones is essential to successfully recording audio on location. Don’t fall into the trap of trying to use an old pair of Apple headphones from an old iPhone. Sennheiser’s HD280 Pro headphones are relatively inexpensive and will serve you well both in the field and when you are editing. The key is to use a pair of headphones that deliver a relatively flat frequency response, unlike Beats or other headphones designed to accentuate specific frequencies for music listening.
Closed-back headphones are critical for two reasons: you want to hear the environment only through the headphones, and you need to avoid the audio bleed of “open-back” designs, which can leak into your recording. If the Sennheiser headphones are too bulky, you might like the Sony MDR-7506 headphones.
Listening to locations
Half the work is done if you can scout out a location ahead of filming. Listen for heating and cooling systems. Clap your hands in the room and listen for echoes. Ask about construction projects in the area. If there are ways to reduce the background noise in the location during filming, it will save you an enormous amount of time in post-production.
In addition to listening for sounds to remove, listen for sounds to capture. Can things like fans, vehicles, or natural sounds enhance the story when used with b-roll? If you have hired a sound mixer, bring them on the location scout and treat their ears like you would treat the eyes of a cinematographer. They can help enhance the story by identifying both sounds to remove and sounds to capture.
Archival documentaries
Some docs shoot interviews, and some use only archival footage to tell their story. In the Netflix documentary WHAM!, the filmmakers didn’t shoot new interviews. Instead, they used existing footage and archival interviews, combined with footage of a scrapbook one of the pop duo’s moms made during her son’s journey to stardom. The National Geographic documentary LA92 took the same approach. The filmmakers used news footage and archives to demonstrate the strife that shook that city and the nation. The effect of archival footage is powerful. As a viewer, you get a sense of the bigger narrative. The filmmaker’s point of view can be portrayed just as effectively through archival footage as through original video.
Clearances
The challenging aspect of archival documentary is known as “clearances.” This involves balancing the need to contact the original TV network or studio that created the media in the first place against the “fair use” doctrine. Fair use is a legal doctrine allowing media to be reused in specific ways. It is intentionally vague. A documentary filmmaker must carefully evaluate the guidelines for fair use and determine their willingness to take on the risk of using media under fair use. If the owner of that clip contests the use of that media as not being legitimate under fair use, you could face a legal fight.
The other option is to contact the media owner and pay them a licensing fee, which will be negotiated based on your project’s budget, the places the documentary will be shown, and the license duration (e.g., ten years or in perpetuity).
In these cases, it is a good idea to enlist the services of an entertainment attorney to help guide your decision-making process. Sometimes, a clip needs multiple layers of releases to clear it.
When it comes to music clearances, it is helpful to employ a music supervisor who can help you navigate the complex world of licensing songs. Using a famous hit will be expensive and time-consuming, so keep that in mind before attempting to license a television performance of a famous musician. You may need to negotiate with their estate if they have passed on.
Photo licensing can be more straightforward. Getty offers services for licensing its catalog of images. It can be easier to license a photo of a famous person than to license audio or video footage.
After Skid Row
Sometimes, a documentary comes together after months of careful planning; other times, an opportunity presents itself, and you have to move quickly. That was how Lindsey Hagen was able to assemble a small crew and capture the experience of “Gangsta Grannie,” Barbie Carter, in her short documentary After Skid Row. Lindsey is a Director, Executive Producer and Story Producer at Gnarly Bay. Together, the team at Gnarly Bay has garnered multiple awards and produces films for Fortune 500 brands. In 2022, they won the Vimeo Film Festival award for best branded video for Cannondale, which Hagen directed. The film went on to be Oscar-qualified, and the L.A. Times distributed it.
The crew was small, and their gear was light. That helped them to take a “fly-on-the-wall” approach to this film. Hagen notes that the story was really about their subject “reclaiming her identity.” As a result of the film’s impact, the crew was able to raise funds for Carter’s medical needs as well. It really stands out as a beautiful example of how an audience can enter into the suffering of someone, and yet the subject of the film can retain their agency so that it is their story, told on their terms.
Editing a documentary film
Walter Murch edited the brilliant documentary Particle Fever. Discussing the editing process, he puts the ratio of footage shot to footage used at 300:1. His account is a tour-de-force in the level of organization it takes to surface a compelling narrative from so much footage. The story is there, and the tension and drama are real, but the effort required to form that story into a movie is immense.
The key is to organize and log that footage. One of the best tools for doing that is transcripts. Here is a case where AI comes into play. Adobe Premiere, DaVinci Resolve, and Lumberjack Builder (initially designed for Final Cut Pro) use machine learning to generate transcripts. Those transcripts are key to identifying common themes and making those spots quick and easy to access.
Transcripts allow editors to do a “paper edit” or “radio edit” based on the interview content. This edit can help the editor build a backbone for the story with the drama necessary to maintain the audience’s interest. Because no matter how important your story is, it won’t matter if you lose the audience’s attention.
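As a sketch of the idea, a paper edit can start as a simple search over timestamped transcript segments: pull every bite that touches a theme, in timeline order, and you have the raw spine of a sequence. The segment format and `paper_edit` function below are hypothetical, not any particular NLE’s transcript export.

```python
# Hypothetical paper-edit helper: given a transcript as a list of timestamped
# segments, return every segment mentioning any of the given theme keywords.

def paper_edit(segments, keywords):
    """Select segments whose text mentions any keyword, sorted by start time."""
    hits = [
        seg for seg in segments
        if any(k.lower() in seg["text"].lower() for k in keywords)
    ]
    return sorted(hits, key=lambda seg: seg["start"])

# A toy transcript (times in seconds) standing in for a real interview export.
transcript = [
    {"start": 12.0, "end": 18.5, "text": "Coal built this whole town."},
    {"start": 45.2, "end": 51.0, "text": "We went fishing every summer."},
    {"start": 88.7, "end": 95.3, "text": "When the coal jobs left, people left too."},
]

for seg in paper_edit(transcript, ["coal"]):
    print(f'{seg["start"]:>6.1f}s  {seg["text"]}')
```

In practice an editor would do this inside Premiere, Resolve, or Lumberjack Builder rather than by hand, but the principle is the same: theme-tagged, timestamped text is what lets you rough out a story before ever scrubbing footage.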
Ode to Desolation
Lindsey Hagen also directed the short documentary Ode to Desolation, for which she and her cinematographer/editor, Chris Naum, voyaged into the North Cascades. The gear for this film was even lighter: they shot on a camera designed for stills, paired with some beautiful lenses, to capture the majestic landscapes seen from the fire lookouts of Washington state. The filmmakers captured Jim Henterly’s role as a “keeper of history” and “maintainer of the story” of one of the last fire lookouts in North America.
The shoot took four days. They slept at altitude, hiked up a mountain for hours, and traveled by boat. This goes to show that you truly can tell a majestic story with a kit that can fit into a couple of backpacks.
Storage and Workflow
It takes a team to execute a film. That’s true on set, and it’s true in post. The assets you create and share with your team need to live in a central location that is separate from your editing team’s MAM (media asset manager) and accessible remotely.
When creating a world-class documentary, you accumulate a huge number of assets. I’ve written about the challenges of organizing massive amounts of footage. This is especially true when you are trying to shuttle drives back and forth. I once heard a story of someone who shipped drives via FedEx; the truck got into an accident, and the drives were lost! It’s best if you can deploy multiple strategies for backing up footage. Shipping two drives separately is one approach that gives you redundancy. But as upload speeds have greatly improved, cloud-based collaboration workflows are revolutionizing the way teams work.
The team at DEFINITION 6 shoots everything from unscripted work to Sesame Street. MediaSilo is their tool of choice for collaboration. According to their Chief Engineer Luis Albritton, DEFINITION 6 uploaded over 10,000 assets, sent nearly 7,000 review links, and hosted almost 24,000 viewers of their content in MediaSilo during 2022 alone. With MediaSilo, producers, executives, and stakeholders can stay in the loop on the post-production process while maintaining the security that is critical to their clients.
Conclusion
Documentary filmmaking can be your ticket to see the world or see the world from a new perspective. It’s taken me to Oxford, Cambridge, the Jerusalem Museum, and the Vatican Library for Fragments of Truth. Some projects are small and tell the story of your family; others can be on the cosmic level. They can showcase the beauty and struggles of a community, like King Coal, or the beauty of nature. But one thing all good documentaries share is a point of view. They show the world from a unique perspective and seek to touch the hearts of an audience and challenge their perspectives on the world.
Reuben Evans (Member, Producers Guild of America) is an award-winning producer, writer and director. His company, Visuals 1st Films, LLC, is producing a documentary on the hymn Amazing Grace, starring John Rhys-Davies. He is the former Executive Producer at Faithlife Films & Faithlife TV. He’s produced and directed multiple feature-length documentaries including Fragments of Truth (2018) and The Unseen Realm (2020). Most recently Reuben produced “The Disappearance of Violet, Willoughby” (coming in 2023).
EditShare’s video workflow and storage solutions power the biggest names in entertainment and advertising, helping them securely manage, present, and collaborate on their highest-value projects. To learn more about how EditShare can help your video production team, contact us today.
The EditShare team designed this guide as a reference for the most commonly used codecs you will run into in your work in motion picture post production. It can’t serve as a universal encyclopedia of codecs; there are just too many to count, and new special-purpose formats arrive seemingly every month.
Motion picture post production has luckily settled on a few commonly used codecs that have a large footprint in the industry, and a good working knowledge of each will help you tremendously as you go about your work.
Apple ProRes
Apple ProRes is currently the most widely used codec in all of motion picture post production. If you work in a Mac shop, and Macs continue to dominate a lot of post, you’ll likely run into ProRes on a daily basis. In fact, one of the factors that keep Macs on top in motion picture post is the functionality and ubiquity of ProRes.
ProRes is used all the way from image capture in major platforms like the Arri Alexa, through editing in any of the four major NLE platforms, all the way through delivery, with streamers and major networks accepting ProRes files for delivery. If you are worried about the drawbacks of transcoding from one format to another, ProRes avoids those issues with an end-to-end pipeline that stays in one codec throughout.
You shouldn’t be afraid of transcoding for your edit workflow; you can always reconnect back to the camera originals at the end in your online color session. Whatever format you shoot, you’ll be happier with your edit if you take the time to transcode to ProRes, especially an edit-friendly flavor.
If you are going to be working with ProRes extensively, it’s well worth a read of the ProRes Whitepaper, the technical sheet that Apple keeps alive spelling out the ins and outs of ProRes as a format.
The basic thing to understand is that Apple ProRes isn’t just a single codec but a family of codecs built around the same technology, available in multiple implementations. You can think of these as “flavors” or “strengths” of ProRes. These flavors refer to both the method of encoding the image (422 or 4444) and the data rate, meaning how many megabits per second are allocated to creating the image. The higher the data rate, the higher quality the image reproduction will be, with fewer artifacts, but on the flip side, the larger the file will be.
The data rate scales with the image size, meaning that a given flavor of ProRes will be a much bigger file if you shoot a higher resolution and a smaller file if you shoot a smaller resolution. You can see the strengths in the following chart, combined with their data rate when working at 1080p 29.97.
ProRes File Sizes
4444 XQ 500 Mb/s
4444 330 Mb/s (often called 4×4 or quatro)
422 HQ 220 Mb/s
422 147 Mb/s (often called prime)
422 LT 102 Mb/s
422 Proxy 45 Mb/s
This starts all the way at the smallest file size with ProRes Proxy and goes up to the largest, ProRes 4444 XQ. You might shoot your film and capture it in XQ, then transcode it to LT for editing, then reconnect back to the XQs for color grading, then deliver to your network in 4444.
4444 and XQ both support Alpha Channels (that’s the fourth four in 4444), which allows for passing transparency information back and forth with VFX platforms like After Effects, Fusion and Nuke. Most VFX houses work on PC platforms and prefer to get files delivered as image sequences (discussed below), but there is increasing use of 4444 and XQ for some motion graphics and VFX workflows.
For many years, ProRes support on Windows was relatively weak, but the last few years have seen an explosion of both approved and work-around versions of that support. You can currently work natively with ProRes in applications like Avid, Premiere and Resolve on a Windows machine, which is very useful for professional workflows. Where things break down is at the consumer level. If you are delivering a file to a client, there still isn’t an easy way to get a non-tech-savvy Windows user who defaults to Windows Media Player to play back a ProRes file.
ProRes naming has sometimes been a little difficult, with “proxy” confusing some users since a lot of software can create “proxy” files, but in any format. You can use Premiere to make “proxies,” but they don’t have to be ProRes proxy; they could be in LT. 4444 is often difficult to say, so many say four by four or quatro, with quatro being more common on the west coast. Plain old “prores” without any modifiers can be confusing since you might say to someone, “can I have it in ProRes,” meaning plain prores, and they’ll ask, “what flavor,” and you say, “ProRes,” and comedy ensues. Thus most use the term “prime,” as in, “let’s use ProRes prime for that workflow,” to mean the middle-level codec.
While you might think “bigger is always better,” bigger files take up more storage, take longer to move around and are more taxing on the system to work with, so you often choose the flavor that works for your workflow. ProRes proxy is rarely used anymore since the image quality is visibly degraded, and storage is less expensive than it used to be. Most projects use LT for “offline” work like editing, then a bigger flavor for finishing & VFX.
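As a back-of-the-envelope sketch of that storage tradeoff, the 1080p 29.97 data rates from the chart above translate into per-hour storage like this (real file sizes also vary with audio tracks and container overhead, so treat these as estimates):

```python
# Rough storage planner for the ProRes flavors at 1080p 29.97.
# Rates are megabits per second (Mb/s); dividing by 8 gives megabytes per second.

PRORES_RATES_MBPS = {
    "4444 XQ": 500,
    "4444": 330,
    "422 HQ": 220,
    "422": 147,
    "422 LT": 102,
    "422 Proxy": 45,
}

def gb_per_hour(rate_mbps: float) -> float:
    """Approximate gigabytes of storage per hour of footage at a given data rate."""
    megabytes_per_second = rate_mbps / 8       # bits -> bytes
    return megabytes_per_second * 3600 / 1000  # one hour of seconds, MB -> GB

for flavor, rate in PRORES_RATES_MBPS.items():
    print(f"ProRes {flavor:10s} ~{gb_per_hour(rate):5.0f} GB/hour")
```

At these rates, an hour of 422 HQ runs about 99 GB while LT is closer to 46 GB, which is exactly why offline/online workflows that edit in LT and finish in a bigger flavor save so much storage.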
But if your camera doesn’t shoot enough data, it’s likely not worth going to a huge format like XQ since those files are large, and the extra data rate isn’t going to magically create quality that isn’t there in the source file. XQ is really for cameras that are capable of shooting high bit depths natively (like an Alexa, for instance). If your camera shot 10-bit 4:2:2 video, transcoding it to 4444 XQ doesn’t magically add extra quality. Most productions render out to plain old 4444 for their final master file.
While originally built primarily for the .mov video wrapper, Apple ProRes is officially supported in the .mxf wrapper, which is widely used in broadcast applications and has some features that can make it more useful, including better implementation of timecode.
Avid DNx
While Apple ProRes has become far more ubiquitous in post-production workflows, the Avid DNx codec family actually launched first and has a few features that make it more useful in certain key situations, which should keep it on your radar.
DNx, like ProRes, is actually a family of codecs available at a variety of data rates and encoding for a variety of workflows. You can shoot straight to it in cameras like the Alexa, the RED lineup and more, and you can edit it and deliver it to networks.
DNx is most comfortable in the .mxf (Material eXchange Format) wrapper, which is a robust format with a lot of professional features, though you can also write DNx into a .mov wrapper if, for some reason, your workflow requires that.
DNx is widely supported on both PC and Mac machines, meaning it can be a great codec to use if your facility has mixed platforms or you are collaborating with others working in a variety of different formats. This has been its greatest strength. However, it’s not particularly easy to install for the less technically savvy, so it again doesn’t make a great format for delivering cuts to clients since it requires installing a professional application for support.
DNx originally launched as DNxHD in a series of flavors that baked their data rate right into the name of the codec; you had DNx36 for editing and DNx175 for masters. DNx36 was a 36 Mb/s codec, designed to work well with 1080p 23.98 footage, and somewhat equivalent to ProRes proxy though ever so slightly smaller.
The problems came when formats started exploding. When the vast majority of work was 1080p, having the codec name and implementation built around a data rate made sense. While a 1080p 23.98 file might look fine at 36 Mb/s (not great, but fine), a 4k 60fps file would look terrible at that data rate. The larger resolution and framerate need more data to still look good.
Users, of course, could and should use a different flavor of DNx for 4k files than for 1080p files, but many users were accustomed to using 36 for their edit. Avid responded by rolling out a new generation of codecs, DNxHR, which is what you would commonly work with today. These shift their data rate depending on the resolution and framerate of the source footage, making them work more like ProRes and more like users expect them to work.
So, to compare with ProRes, the new DNxHR HQ format at 1080p 29.97 is 25.99 MB/s, while ProRes HQ is 220 Mb/s. That might seem like a big difference until you note that the Avid number is MB, while the Mac number is Mb. MB is megabyte, and Mb is megabit. Putting them both in Mb, the DNxHR HQ is around 207 Mb/s, roughly equivalent to ProRes HQ.
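The unit conversion is simple but easy to miss; here is the math described above as a tiny sketch:

```python
# Converting megabytes per second (MB/s) to megabits per second (Mb/s).
# 1 byte = 8 bits, so multiply by 8.

def mbytes_to_mbits(megabytes_per_sec: float) -> float:
    return megabytes_per_sec * 8

# DNxHR HQ at 1080p 29.97 is quoted as ~25.99 MB/s:
print(mbytes_to_mbits(25.99))  # ~207.9 Mb/s, the same ballpark as ProRes 422 HQ's 220 Mb/s
```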
DNxHR is a very common format in all houses running Avid Media Composer, and its cross-platform compatibility makes it useful when moving between PC and Mac.
H.264/H.265
These are consumer-facing codecs that post-production professionals need to be aware of and work with on a daily basis, but they have some huge drawbacks. It’s important to understand and master them to keep your workflow performing optimally. The main place you will want to actively use these codecs is in delivery, especially on web platforms. You aren’t going to send an H.265 file to Netflix or HBO, but if delivering to Instagram, YouTube, Vimeo or a work-in-progress review platform, you are going to be using these codecs all day long to get a file that is both small enough to upload quickly and still good enough looking to share with the world.
H.264 has been around longer, and H.265 is an update of the technology that offers similar image quality at about half the file size. You’ll sometimes see H.265 referred to as “HEVC,” an abbreviation for “High Efficiency Video Coding.”
H.264 is far more ubiquitous since it’s been around longer and is easier to license. H.265 has relatively high license pricing, so while you’ll find it supported natively in all the major editing platforms and all the major web delivery platforms (YouTube, Instagram, Vimeo, etc.), you’re still going to run into the occasional weird platform that doesn’t fully support H.265. If you are having trouble delivering to a strange client portal or obscure streaming software the client uses, the issue might be that the platform doesn’t support H.265, and you should try making an H.264 instead.
These codecs are built around Long GOP technology, in which a group of frames is compressed together to save space in the file. This is a wonderful technology for when you are viewing something linearly forward in time, making this a great codec for delivering video over the web. However, Long GOP can be very awkward in the editing room, since it requires your video software to recreate individual frames by looking at the group of frames. If you are scrubbing around, it can be laggy, and if you cut in the middle of a GOP group, the software has to recreate the missing picture information by holding those other frames in memory.
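To make that scrubbing cost concrete, here is a toy model (not any real decoder, just the principle): displaying one frame in a Long GOP file can require decoding everything since the last I-frame.

```python
# Toy model of Long GOP decode cost: to display a given frame, the decoder
# must start at the nearest preceding I-frame and decode forward from there.
# (Real codecs also have B-frames and open GOPs; this only shows the principle.)

def frames_to_decode(target_frame: int, gop_size: int = 30) -> int:
    """Number of frames decoded to display one frame, for a simple closed GOP."""
    last_i_frame = (target_frame // gop_size) * gop_size
    return target_frame - last_i_frame + 1

# Scrubbing to frame 59 with a 30-frame GOP decodes 30 frames;
# an All-I file (gop_size=1) always decodes exactly 1.
print(frames_to_decode(59, 30), frames_to_decode(59, 1))  # prints "30 1"
```

This is also why the “All-I” capture option discussed below makes footage so much friendlier in the edit bay: every frame stands alone.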
While some software platforms like to market that they can natively cut in H.264 or H.265, it is highly recommended you transcode footage into an editing codec like ProRes or DNxHR for an easier post-workflow experience. Running an overnight dailies render will make the rest of your post pipeline so much easier.
H.264/H.265 can also be used for capture, though that is generally something to avoid if you can, as the image quality drawbacks can be very frustrating. Even cameras like the iPhone now shoot straight to ProRes, so the arguments for capturing into H.265 are less pressing than they were a few years ago. If you have to shoot to H.265, choose the highest bitrate you can and choose “All-I” if it is an option, which will make every frame an “I” frame instead of compressing groups of frames together.
H.264/H.265 formats can support whatever data rate you want to encode; generally, your encoder will let you change the data rate of your compression when you make the file. It is highly encouraged that you test your specific encoder and project at a variety of data rates to find one that works for your projects and deliveries. For some reason, most encoders (like Resolve, Adobe Media Encoder, Compressor, etc.) default to relatively low data rates for their “high-quality” presets. If you aren’t happy with how your images look when compressing to these codecs, try testing at higher data rates to see when you start to like the image.
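One practical way to pick a starting data rate is to work backwards from a delivery size cap. The 2 GB cap and the audio allowance below are invented numbers purely for illustration:

```python
# Work backwards from an upload size cap to a video bitrate target.
# The audio allowance (0.32 Mb/s, roughly a 320 kb/s stereo track) is an assumption.

def target_video_bitrate_mbps(size_cap_gb: float, duration_s: float,
                              audio_mbps: float = 0.32) -> float:
    """Video bitrate (Mb/s) that fills the cap after reserving audio bandwidth."""
    total_mbits = size_cap_gb * 1000 * 8  # GB -> megabits
    return total_mbits / duration_s - audio_mbps

# A hypothetical 2 GB portal cap for a 10-minute (600 s) spot:
print(f"{target_video_bitrate_mbps(2.0, 600):.1f} Mb/s")  # prints "26.3 Mb/s"
```

Encode a short test clip at that rate, watch it critically, and step up if the artifacts bother you; the math only sets the ceiling, your eyes set the floor.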
DPX AND SIMILAR IMAGE SEQUENCES
These aren’t technically “codecs,” but you should be aware of image sequences as a tool in post production. DPX is the most common image sequence (and the one we’ll focus most on here), though EXR and Cineon are other common image sequences.
Image sequences are literally just a folder with a series of still images, numbered sequentially, saved into it. That’s pretty much the totality of it. Software dealing with image sequences (like Resolve and most VFX platforms like Nuke) will look at that folder full of still images and see it as a single image file that you can manipulate just like a video file.
Image sequences are incredibly popular in the VFX world for a few main reasons. First off, they are easier to move around. If you have a 40GB file and your file transfer crashes halfway through your upload to the web, you have to start over from the beginning. Not with an image sequence; you can just resume from the last frame that copied successfully.
Beyond that, if you have a render and 90% of the shot looks perfect, but you need to fix part of something that looked off at the end, with an image sequence, you only need to re-render those final frames. With a video file, you need to re-render the whole shot. With render times sometimes being exceptionally long in the VFX world, this is a huge time savings.
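Because a sequence is just numbered files, a dropped frame is easy to miss by eye. Here is a short sketch for sanity-checking a folder before a hand-off; the “.dpx” naming pattern in the default regex is an assumption, so adjust it to your house style:

```python
# Check a numbered image sequence (e.g. "shot010.0001.dpx") for missing frames.
import re
from pathlib import Path

def missing_frames(folder: str, pattern: str = r"\.(\d+)\.dpx$") -> list[int]:
    """Return frame numbers absent from an otherwise contiguous sequence."""
    frames = sorted(
        int(m.group(1))
        for p in Path(folder).iterdir()
        if (m := re.search(pattern, p.name))
    )
    if not frames:
        return []
    expected = set(range(frames[0], frames[-1] + 1))
    return sorted(expected - set(frames))
```

An empty list means the run is contiguous; anything else is a frame to chase down before shipping the sequence to VFX.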
VFX artists aren’t going to give up image sequences any time soon for those benefits. If you are being asked to interface with a VFX artist and they are asking for an image sequence, you can and should ask them for a spec sheet on what they are looking for. Then you can deliver it with a tool like Resolve, which has full support for multiple image sequence formats built-in natively.
RAW FORMATS
RAW capture formats aren’t technically “codecs” since RAW happens to the video signal before it gets wrapped into a codec, but it’s good to have a handle on the most common RAW formats and how they might present in your workflow. RAW isn’t “video” in that it can’t be played easily by a video player. RAW formats take the raw camera signal from the sensor and compress it into a file before it gets processed into video and before menu settings like ISO, white balance, etc. get applied. This means more processing is required in post (since your editing station has to do all the work the camera used to do), but RAW offers the benefit of more flexibility in post. If you want to change your mind about white balance or ISO, you can do it in the edit or color suite, which is helpful, especially if the settings were accidentally wrong on camera.
There are two major categories of RAW formats: open RAW formats and proprietary (or closed) RAW formats. Open RAW formats are designed for many different platforms to capture to or work with. Proprietary formats created by a camera company are often only supported by that one company, with varying support from post-production software. The major proprietary RAW formats are now natively supported in all the major software platforms, but if you run into a more obscure format, you’ll often need to download software support from the manufacturer’s website.
OPEN RAW FORMATS
Before discussing the two open RAW formats, one issue needs addressing: the RED RAW patent. RED introduced the RED ONE camera at NAB 2006, and it was a working model that captured compressed motion picture RAW footage into an internal recorder. They applied for and received a patent on that technology. Both Sony and Apple have challenged the patent in court, and even with their legal resources, both lost. The RED patent stands, and as far as we know (it’s not always public), the other internal RAW proprietary formats are paying some sort of license fee to RED.
This led to two different strategies for how to implement a RAW video format that was accessible to all users without paying for the RED license fee, since whatever that fee is, it doesn’t make sense within a mass-market-facing, consumer-focused video market.
ProRes RAW
The first “open” RAW to market was ProRes RAW, a format co-developed by Apple and Atomos, which makes external monitor/recorder platforms. That is their method for getting around the limitations of the RED patent; ProRes RAW is something you record to an external recorder.
Currently, ProRes RAW has native support in Final Cut Pro, Avid Media Composer and Adobe Premiere, but not in DaVinci Resolve. There are no announced plans to bring it to Resolve. If you are planning on doing your final color grade in Resolve, ProRes RAW isn’t going to be the format for you.
Interestingly, DJI has implemented ProRes RAW into some of their drones, since, technically, with a drone, the camera is actually dangling underneath the drone, and the recorder is up in the body of the drone, which is enough to make it an “external” recording. ProRes RAW was briefly available in the DJI Ronin 4D but then disappeared, and the suspicion is that they weren’t able to argue that it counted as “external” on that camera.
ProRes RAW is available in two data rates and offers substantial image quality benefits for shots captured with incorrect menu settings, such as the wrong white balance. For shots properly exposed with correct settings, the benefits are smaller, though they are there.
Blackmagic RAW
Blackmagic had an interesting challenge in building their RAW codec: they make external recorders and editing software, but they also make cameras, and they wanted their RAW to work inside a Blackmagic camera. However, they sell a lot of cameras, and outsiders suspect they wanted to avoid a RED license fee considering the sheer volume of units they ship. To get around it, they designed the Blackmagic RAW format, which is partially debayered. It’s not a full debayer, which means you still get some of the benefits of RAW (you can change ISO and white balance in post) while avoiding the patent limitations of recording full RAW.
Blackmagic RAW is an open format supported by all the major NLEs, available in Blackmagic cameras and recorders, and supported by several other camera manufacturers, including Fujifilm.
Blackmagic RAW is available in multiple bitrates but, interestingly, is also available in a variable bitrate format. This changes the bitrate based on the content of the shot so that a very static shot (an interview, for instance, where only the mouth of the speaker moves) can be a smaller file than a handheld shot out the moving window of a car in a busy street where there is a ton of movement. Variable bitrate shooting makes some users nervous, but some doc shooters have taken to the format for data rate savings in predictable environments.
PROPRIETARY OR CLOSED RAW FORMATS
We can’t cover every proprietary RAW format here as there are too many, but there are two we need to discuss a bit. If you run into another format, you should go to the camera manufacturer’s website for more info.
.r3d RED RAW
RED RAW, recorded in the .r3d wrapper, is the format that started the RAW video revolution. RED RAW takes the RAW camera data, applies a JPEG2000 compression to it, and wraps it up in a file that you can then process to your heart’s content in post production.
RED RAW is currently supported basically everywhere. It’s been around for roughly 15 years, and all the major software platforms have fully integrated its technology into their systems.
RED files are surprisingly small, considering the quality of their imagery; because of the nature of their compression, many users are surprised to discover that the files can get larger when transcoding to an edit codec like ProRes, depending on your editing resolution and codec choice.
.ARRIRAW
ARRIRAW is the other major file format to discuss, not just because ARRI is at the top of the industry but also because the files are just huge. For a long time, you needed to rent an additional external recorder from Codex to record ARRIRAW (to avoid the patent, most assume), though you can now record ARRIRAW internally to an ARRI camera. Either ARRI figured out some very tricky way to argue their internal recorder is actually external, or they are paying the license fee to RED.
The thing to know about ARRIRAW files is that they are big. If you are bidding your first ARRIRAW job after years of RED RAW, know that it’s going to require more hardware resources than you are used to. These are massive files. Transcode them immediately to an edit-friendly codec, then deal with them again only at the end for color grading on a powerful machine. Their saving grace is that ARRI cameras only shoot up to 4k; an 8k or 12k ARRIRAW file would be a monster.
ODDBALL FORMATS
While this guide can’t go into detail on every possible format and codec you might encounter, we want to offer some general advice when a shot lands in your lap that might not immediately make sense to you.
Your first tip is to use the “get info” command, either in the Finder, in QuickTime Player or in an app like “Screen,” to get a better sense of what is going on with the codec. A quick Google search will often turn up more info on the codec, and it is usually available for download and install on your system for playback. If “get info” isn’t helping, there is a great app called “MediaInfo” that might offer more information.
There are some limits to this (Apple ProRes still has issues with running on a Windows machine in certain players depending on the install), but for the most part, pretty much every codec you need is possible to download, and that will often lead to your software being able to decode the video.
If you run into a truly unplayable codec, there is a player you should know about called VLC. It’s a video playback software that is often a “swiss army knife” in post when you’ve been given a strange video format to deal with. Maybe you are working on a documentary with a lot of archival home-video footage in an obscure format that didn’t take off commercially. Or you are working on a film with footage coming in from primary sources from a variety of archives. Or you have a shot that has gremlins and just doesn’t want to play. VLC is often the tool that will finally get that video open, and then you can export from VLC into a more traditional codec and format that will let you play it in your editing platform of choice.
With so many different types of cameras on the market, it’s difficult to keep up with the specific features of each. The MediaSilo by EditShare team designed this guide to cover the major factors affecting post-production for each of the major camera platforms currently popular in production. Our goal with this guide was to help you know what questions to ask and what to expect when coming on to a production shooting on any of these systems.
ARRI ALEXA
ARRI has been a dominant camera maker for going on 100 years now, and the ARRI ALEXA platform is a widespread capture format across motion pictures, television, commercials and music videos.
The ALEXA platform can shoot either video files (in ProRes or DNx formats, in either the .mxf or .mov wrappers, depending on your workflow) or RAW. For a long time, RAW was less common on ALEXA jobs since it required renting an accessory recorder that significantly raised costs, but for several years now .ARRIRAW capture has been built into the system.
RAW isn’t the default on mid or low-budget projects, as the files are massive. Working with RAW requires a lot of post-processing to handle and more hard drive space for downloading and backup. The larger productions default to .ARRIRAW, but many smaller productions will shoot video files in Log.
The ARRI Log format is known as Log C, and there is a LUT available on the ARRI website to support it. Currently, there are two versions of the Log C LUT in common use: LogC3, which is the newest LUT for earlier generations of the ALEXA, and LogC4, which is used for the new color science that came with the ALEXA 35 and that is supported with the LF and Mini LF. ALEXA Log C color (both 3 and 4) is also fully implemented in the ACES workflow and Resolve Color Management. In fact, the ALEXA embeds a flag in the video files so that Resolve can auto-detect whether it was shot in Log C, and in which version, and automatically transform it to linear video when working in Resolve Color Management.
The ALEXA platform doesn’t typically support recording simultaneously to RAW and video proxies, though that isn’t recommended anyway when working with a single-card camera like the ALEXA lineup. Doing simultaneous RAW and proxy recording to a single card increases data management time and hard drive expense.
The ALEXA uses the industry-standard 5-pin LEMO timecode input. It also has a reputation for having one of the better internal timecode clocks, which seems to drift less than others, but there is still some drift, and timecode should be jammed at least twice a day.
While it seems like the ALEXA lineup has everything a post team might want, it does have some hiccups that post teams should be aware of, especially around audio. The most notable is that the full-sized ALEXA LF only has a 5-pin XLR input (instead of the normal 3-pin) and the Mini LF uses the obscure and pricey 6-pin LEMO connection. This special connector is for a single-cable connection to bring 4 tracks of audio into the camera the simplest way possible, which is a noble goal, but of course, it requires the sound team to have the cable. In addition, while many cameras allow audio to be input at either mic or line level, the ALEXA lineup traditionally only allowed for line level, which means if you wanted to record a “scratch” audio track with a microphone on the camera for absolute backup or to hear the operator’s comments, you needed a mic-to-line converter. The newest bodies, the Mini LF and the ALEXA 35, now have built-in microphones to serve this purpose.
ALEXA bodies put out the filename over SDI, so you can use something like a Teradek Cube to make proxies with a filename match for later relinking.
If you are encouraging the production team to run a scratch mix to the camera, they might well end up needing to purchase a special cable or down converter to do so.
RED
The RED camera platform is widespread both on RED-native productions and also as a special-purpose camera on other productions for its flexibility. It was one of the first platforms to launch with a smaller camera form factor (the DSMC2 bodies) and was capable of high frame rates without the expense and complication of a specialty camera like the Phantom. As such, it’s incredibly common to see the RED as a second camera on an ALEXA show since the ALEXA had more limited slow-motion capture options. The ALEXA would be A and B camera for most of the heavy lifting, and then the RED would fly out on a Steadicam or a gimbal for action sequences.
RED camera as a company is heavily responsible for pushing RAW image capture in motion pictures, starting with their launch at NAB 2006, and for a long time, was a true leader in the space. In the early years, there were some frustrations with their post integrations, with the company making statements like, “Kodak doesn’t tell you how to develop your film, we won’t tell you what to do with our images,” which was frustrating for many users since, in fact, Kodak did publish technical guides on development, and a camera maker having robust post software support was a good thing. RED originally pushed its own software, REDCine, for a lot of early post work.
However, RED has now fully integrated with other popular software platforms and could arguably be considered one of the most widely supported cameras in the post world. Native support for RED RAW .r3d file processing is built into every major NLE, and when RED updates its firmware with new image parameters, the updates flow into other software like Resolve with rapid speed.
One thing to remember with RED RAW is that it is live debayering the images, which can be processor intensive. If your editor will be working on a lower-power machine, it is still highly recommended that you render video dailies into a format like ProRes or DNx for editing. RED does offer the ability to change your debayer quality, which can make for easier processing on your system, but that comes with image quality tradeoffs.
While RED does have a Log format, it is very uncommonly used, as the default and most common format to capture when shooting RED is straight to RAW in .r3d files.
RED cameras offer the ability to shoot natively to both .r3d RAW files and .mov or .mxf video files at the same time. However, most users find that this fills cards up more quickly, increases download time and increases the complication and expense of on-set downloads. While there are occasional jobs where this might make sense (a tight turnaround job that requires both the flexibility of RAW and the immediacy of dailies), it’s very uncommon. Another solution to that same problem would be using a proxy box like the Teradek Cube to make immediate proxies. The filename passes over SDI with the RED cameras for later relinking to your RAW file.
RED cameras use the common 5-pin DIN timecode connector. They have a reputation for not keeping the most stable internal timecode, so many recommend re-jamming often or using an external timecode box to feed more stable timecode into the system.
RED camera bodies have on-body microphones for scratch tracks as well as industry-standard audio inputs for running in a microphone or a mix if necessary.
Panavision
Panavision is in an interesting position in that it has a major camera platform, the DXL (or Digital-XL), that is very popular, especially in television production.
However, as Panavision is best known for its lenses and was never particularly famous for its camera bodies, they actually built the DXL system on top of RED camera bodies. The DXL systems have their own color science baked in and use Panavision lens mounts and accessories, but the primary decisions you’ll be making with a DXL are very similar to the decisions you’ll be making with a RED camera platform.
Phantom
Phantom cameras are the current gold standard of high frame rate slow-motion capture. While traditional motion picture cameras from RED and Blackmagic are getting speeds up to 240fps, for action sequences or product work there is often a benefit in going higher, up to 480fps or even 960 and above. For those moments, most productions go with a Phantom camera.
In terms of production use, the Phantom has a bit of a reputation for being high maintenance. The cameras are made in lower volumes by a company that specializes in scientific imaging, and they aren't always ready for all the varied needs of motion picture production. That issue rolls on into post-production, where support for Phantom files is occasionally spotty. Most of the kinks have been worked out, but be sure to do extra testing and prep for Phantom-heavy jobs.
Phantom cameras shoot a file format called .cine: RAW files recording the data coming straight off the sensor. They are processor intensive even on powerful machines, and you should absolutely make editing proxies as quickly as you can after production, relinking back to the RAW files only for final color.
In order to shoot at higher frame rates, the Phantom drops to lower resolutions (as is common on other cameras). This can lead to some shots coming in from the Phantom at resolutions like 1280 x 720 if the production wanted to shoot at 2700 fps. That has led more than one post team to worry they were getting an editing proxy rather than the camera-original file. Be sure to check the camera reports to see what resolution was shot since, with high-speed work, it is often not the full format. Another tip: if it's a .cine file, it's the camera original, since you can't make a RAW proxy file with software.
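That camera-report check can be automated. A hypothetical sketch, assuming the report has been parsed into a simple dict (real camera reports vary by production): a sub-full-resolution clip at a high frame rate is expected from a Phantom, while one at normal speed may actually be a proxy delivered by mistake.

```python
def flag_suspect_clips(camera_report, full_res=(4096, 2160)):
    """Split reduced-resolution clips into expected high-speed work vs.
    clips that may be editing proxies posing as camera originals.

    camera_report: {clip_name: {"resolution": (w, h), "fps": float}}
    """
    expected_small, suspicious = [], []
    for clip, meta in camera_report.items():
        w, h = meta["resolution"]
        if w >= full_res[0] and h >= full_res[1]:
            continue  # full-resolution clip, nothing to flag
        if meta["fps"] > 120:
            expected_small.append(clip)  # normal for high-speed capture
        else:
            suspicious.append(clip)      # low res at normal speed: check it
    return expected_small, suspicious
```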
Sound and timecode aren’t really something most people worry about with high-speed work, so they aren’t a major factor when working with files from the Phantom.
Panasonic
Panasonic has long been a major player in digital video capture and, after a few years of fewer offerings, has come back in force with the Varicam and EVA-1 cameras.
Both camera lines are primarily used for Log-based video recording, though both support RAW to an external recorder (the Atomos lineup of ProRes RAW recorders, and the Codex V-RAW system for the Varicam Pure). If the production shot V-Log (the Varicam Log format, also available on the EVA-1), LUTs are available on the Panasonic site, and ACES and Resolve Color Management support it, though there are no flags in the files and you'll need to tell RCM which camera the files were shot with.
Panasonic excels at interface, hardware and I/O. The cameras use standard audio interfaces and the more affordable BNC timecode connection, with an internal timecode clock that is considered quite stable. One nice feature of the Varicam bodies is that they offer both a 5-pin XLR input for a stereo mix (as you might get from a dedicated sound mixer) and individual 3-pin XLR inputs for a documentary or scratch workflow where you want to run microphones directly into the camera.
Sony
Sony offers a wide gamut of cameras, from the top-of-the-line VENICE platform (now up to VENICE 2) to the FX9, FX6 and FX3 cine-style video cameras and the A7 lineup of stills cameras that continue to make an impact on motion picture production. Sony cameras are everywhere and will often show up as the B or C or "night-work" camera on a production just because the director owns one and wants to keep it in the mix.
At the top of the line is VENICE (you aren't supposed to use the definite article; like Concorde, it's not "the VENICE," just VENICE), Sony's cinema camera designed to compete with the likes of ALEXA, and successfully doing so. With a full-frame sensor, the ability to shoot RAW internally to Sony's proprietary X-OCN format and a robust body design, VENICE has grown popular on a lot of productions at the high end of the market.
The X-OCN RAW files are full 16-bit RAW files recording the data coming off a 16-bit sensor, so they offer a high level of flexibility for your post-production grading pipeline. The other option is to shoot video files in S-Log3/SGamut3.cine, which is a popular format that is also available on the FX and A7S lineups of cameras and makes intercutting between those cameras relatively easy.
With its amazing low light and autofocus performance, the Sony A7S lineup has been a major hit for motion picture image capture, but it has some drawbacks that make it less ideal for larger productions. There is a motion-picture-focused version of the camera, the FX3, that is very similar internally in terms of sensor and processor but adds more robust inputs, including timecode, though a special adapter cable needs to be purchased.
One thing to be aware of is that Sony cameras offer many more picture profiles than competitors, with more arguments for why you might use one or another. DPs will frequently have preferences like, "I use S-Log3 for everything, except for night scenes, then it's S-Log2." Each of these formats has a different LUT or transform for converting it back to proper viewing space, so you'll need to watch the camera reports closely to be sure you know what to apply to each shot.
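One way to keep that per-shot bookkeeping honest is a simple lookup from the recorded picture profile (as noted in the camera reports) to the LUT your color pipeline uses. The LUT filenames below are hypothetical placeholders; the point is to fail loudly on an unmapped profile rather than grade with the wrong transform.

```python
# Hypothetical LUT table; actual filenames come from your color pipeline,
# and the per-clip profile comes from the camera reports.
LUTS = {
    "S-Log3/SGamut3.cine": "slog3_sgamut3cine_to_rec709.cube",
    "S-Log2/SGamut": "slog2_sgamut_to_rec709.cube",
}

def lut_for_clip(picture_profile: str) -> str:
    """Return the LUT to apply for a clip's recorded picture profile."""
    try:
        return LUTS[picture_profile]
    except KeyError:
        # Better to stop than to silently apply the wrong transform
        raise ValueError(f"No LUT mapped for profile: {picture_profile!r}")
```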
Another issue to be aware of in post-production on Sony is the proprietary software, which requires signing up for a Sony account and is sometimes needed for working with the footage. X-OCN support is built into Resolve and other platforms natively, but if you are having an issue, downloading the dedicated Sony software can help troubleshoot problems. Even with that support for traditional workflows, some of the more unique features require special software. For instance, the A7S3 has a built-in accelerometer and can record that data in the video file. That is useful if you want to stabilize in post: the software can extract the motion data from the shot and use it to help make the shot appear smooth. Doing so, however, requires Sony-specific software.
In the FX and VENICE lineups, timecode inputs exist and audio I/O is quite robust. The A7S lineup lacks those integrations, making it more frustrating and often requiring more work in post for audio and multi-camera syncing.
The A7S lineup shoots to an AVC-Intra format that doesn't need to be relinked for final finishing; you can transcode those files directly to ProRes 4444 and treat the results as your new master files. If someone has chosen to shoot that format on the FX line, you can do the same.
Blackmagic
Blackmagic makes a lineup of cameras from the small "Pocket" models through the larger URSA bodies that shoot up to 12K resolution. One of the key benefits of Blackmagic is the similarity across the camera lineup: all feature real audio inputs (full-sized XLR on the bigger URSA, mini XLR on the Pocket), timecode, video output and more. All record straight to SSDs like the Samsung T7 lineup, and all shoot Blackmagic RAW, ProRes or DNx. It makes for a very simple lineup, with the main benefits of the larger camera bodies being more robust input and output features and higher resolutions.
Blackmagic RAW is not a full RAW format (it has been partially debayered in camera), but you can still adjust ISO and white balance in post beyond the camera's original settings. That design makes it a relatively lightweight format for post-processing, and some productions edit it natively, though it's still always the best bet to render ProRes or DNx video dailies where possible.
Unsurprisingly, Blackmagic Camera footage integrates exceptionally well with Blackmagic DaVinci Resolve if you are using that for your dailies creation. For an RCM workflow, it will auto-detect not just what camera shot the footage but also what settings were in the camera in terms of gamma and color space.
One interesting feature of the Blackmagic Pocket cameras is that they are the only major cameras in the small handheld DSLR form factor with a robust timecode input: the ⅛-inch audio input jack will auto-detect if you run timecode into it. Most competitors' cameras will accept timecode over that port but record it as LTC (linear timecode, stored as audio data) that needs to be extracted with special software. With the Blackmagic Pocket 6K, it just comes in as timecode.
Audio inputs are robust, and all cameras have an in-camera microphone for scratch audio.
Canon
Canon likely has the broadest array of camera options that a post team will have to deal with. There is the cinema lineup, from the C100 up to the C700FF, with steps in between, some of which have features out of order from what you would expect: the C200 shoots RAW, but the C300 doesn't (the 300 came out years before the 200).
In addition, there is the R5, their mirrorless full-frame cinema-focused camera, and the C70, their compact version of the cinema line. And you’ve got a lot of folks still doggedly hanging on to their older cameras like the 5D Mark IV, which came out in 2016 but still shows up on jobs from time to time.
Most Canon cameras shoot to H.264 or H.265, formats you should transcode to ProRes or DNx for editing. Many users transcode those files to a larger ProRes or DNx format like 4444 and then treat those new files as their new masters, never going back to the lower bandwidth camera original files.
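That transcode-and-promote workflow can be sketched as building an ffmpeg command per clip. This assumes an ffmpeg binary with the `prores_ks` encoder on your PATH; the function name is illustrative, and with `prores_ks`, profile 4 corresponds to ProRes 4444.

```python
def prores_master_cmd(src: str, dst: str, profile: int = 4) -> list:
    """Build an ffmpeg argv to transcode an H.264/H.265 camera file to
    ProRes while copying the camera audio untouched. Run the returned
    list with subprocess.run()."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "prores_ks",         # ffmpeg's ProRes encoder
        "-profile:v", str(profile),  # 4 = ProRes 4444
        "-c:a", "copy",              # keep camera audio as-is
        dst,
    ]
```

Since the ProRes file becomes the new master, it's worth verifying the transcode (duration, channel count) before archiving or discarding anything.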
Canon supports the Cinema RAW Light format for RAW recording, but it is not dominant in the way RAW recording is on the RED or VENICE platforms. Canon will also allow you to shoot RAW out to an external recorder from some cameras, which is increasingly popular.
Most users shoot in C-Log, with variations available, including C-Log 2 and C-Log 3, both of which remain popular. Be sure to check in with production to know which format they shot so you can apply the right LUT or Transform in your workflow.
The cinema lineup, generally noted by the "C" at the start of the camera name, will have the timecode and audio I/O to do things right and make things easier on you in post-production. Unfortunately, you won't get that level of integration from the other cameras, which were originally designed as still cameras and remain focused on that world. They might shoot wonderful video images, but it is difficult to run audio into them in a robust fashion, and they generally lack a real timecode workflow, though you can record LTC timecode to an audio track.
MediaSilo by EditShare can help your post production team get video projects approved faster. Contact us to learn more.
Post-production is one of the most critical and complex parts of making the creative vision come to life. But it is also one of the most administratively laden elements of production. Between dumping footage, labeling clips, managing the project file, controlling versions, and collecting feedback, the process is burdened with tasks that create friction. In this guide, the MediaSilo by EditShare team will show you how to move smoothly from raw footage all the way to final output, and even start preparing it for marketing and sales.
Remote, but connected
Over 94% of MediaSilo’s customers say they are doing their post-production work either fully remote or semi-remote. With production and post-production becoming more complex and intertwined, modern workflows have to be re-thought in order to keep up with and take advantage of new technologies. This has created the need for new tools that help manage your assets, share work-in-progress, get approvals, and even facilitate pitching and selling your projects. And those tools need to work whether on set, in the office, or at home.
Stage 1: From Camera to Editing Bay
Lights. Camera. Action. Import.
A critical step in the post process is one of the first ones — importing shoot footage. Dailies need to be reviewed quickly and across multiple roles and departments. Often this process involves input from not only the director, but also from the producer(s), the editor, and sometimes the representatives of the studio, network, or client.
Remote viewing of dailies has become increasingly more global in the post-pandemic world. Having everyone on the team able to access the dailies as soon as they are available, whether on-set or remote, means that input can happen while there’s still time to address any issues.
Complexity in these early stages comes from more than just reviewing dailies. It is about importing footage in a way that is easy to find later, like ten versions from now when someone asks, "Remember that scene we shot and there was a take when he stepped forward and not back? Can we see that?" Being able to put your hands on that clip quickly, even weeks after import, keeps the team efficient and makes you indispensable.
To stay organized, you need to import with purpose. Design an organizational architecture that accommodates multiple projects, clips, project files and assets, with multiple ways to search so you can find it all in seconds. Tagging, adding metadata and creating naming conventions are all possible on a remote collaboration platform, and generally aren't on less sophisticated cloud-based tools that may be cheap, free or outdated.
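To make the value of tag-at-import concrete, here is a minimal, hypothetical tag index. Real asset-management platforms do far more, but this shows why tagging footage on the way in makes that "step forward, not back" take findable weeks later.

```python
from collections import defaultdict

class ClipIndex:
    """Minimal tag index for imported footage (illustrative sketch)."""

    def __init__(self):
        self._by_tag = defaultdict(set)

    def add(self, clip: str, tags):
        """Register a clip under each of its tags (case-insensitive)."""
        for tag in tags:
            self._by_tag[tag.lower()].add(clip)

    def find(self, *tags):
        """Return the clips carrying every requested tag."""
        sets = [self._by_tag[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()
```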
“I think most people realized that as long as you can receive the footage in some way, you don’t need to be face-to-face to be able to create a finished video.”
— MediaSilo Customer
Along with the footage itself, there are often camera logs and script supervisor notes that correspond to the footage. Keeping all of these assets where anyone on the team can review them is invaluable and can save enormous amounts of time throughout the life of the project when key documents are constantly referenced and needed.
Using a platform like MediaSilo allows you to upload dailies directly into your account and instantly share files with your collaborators, while allowing commenters to leave feedback directly on the files, or send private links for more contained review sessions.
With a robust asset management and storage solution, relevant files can be organized in one place, allowing producers, project managers and anyone on the team to add, revise, delete, and update any documents as needed so that the entire team is always working from the most current version of the materials.
Stage 2: From Raw Footage to Rough Cut
Ready. Set. Rough.
The rough cut or offline edit is where your show, commercial, or film starts to come together as a story. Typically, this is a complicated process as the editor needs to review all the footage and determine how to best tell the story while maintaining the vision of the team. Carefully logged and tagged footage helps in this stage as it makes the work of the editor more efficient. With less time spent searching for clips and director’s notes, the editor has time to think creatively about the story arc.
In some cases, there may even be more than one editor working on a project, with various scenes divided up amongst several cutters and assistants. Having multiple editors can create issues staying on the same page while handling review and input from stakeholders. As with any collaborative project, clear communication and organization is key.
After the first pass is created, often using smaller low-quality files called proxies, the editor and director may collaborate to create a "director's cut." Being able to easily share project files with the director for review is essential so he or she has a broader view of the footage available.
Once the rough cut is complete, it is sent to other team members to review. At times, the team reviewing the cut can be large, and global. Using a tool that allows collaborators to make timestamped comments ensures smoother communication. All of the comments and input need to be collected and tracked, so that revisions can be made efficiently and in a way that takes all feedback into account.
“By creating proxies and sharing material internally, we were able to work remotely having great results. Then, we shared rough cuts to directors and clients for a more secure way to get feedback.”
— MediaSilo Customer
Finding a place for placeholder assets
The rough cut phase is also where temporary or placeholder graphics, sound elements, music, visual effects, and color correction are introduced for reference. These assets can be sourced from font libraries, stock footage collections, music libraries, and sound effects catalogs. Keeping track of these assets is critical to the success of the rough cut.
As all editors know, every cut has versions that use different assets, takes, or edits to achieve the ideal finished product. Versioning can be a challenge, as keeping track of subtle changes is tricky and often requires diligence and attention to detail.
A well-thought-out naming convention is a crucial, albeit complicated, aspect of any post-production workflow. While it's sometimes dismissed as superstition, many post-production veterans will tell you never to name a version "final": nothing guarantees another round of edits more reliably. But there is also a practical reason. Small errors get caught and changes must be made, making it very difficult to tell which "final" version is actually final.
“Collaboration platforms make it easy to access files between teammates especially in remote conditions because it makes sending files more efficient.”
— MediaSilo Customer
Instead, using a naming convention that incorporates dates, revision numbers, and sometimes even colors, can help manage a project that has had a large number of changes made to it.
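As a sketch of one such convention (the pattern here is hypothetical; adapt it to your own house style), a small helper can build names from the project, the date and a revision number, so cuts sort chronologically and nothing is ever called "final":

```python
from datetime import date

def versioned_name(project: str, rev: int, when=None) -> str:
    """Build a cut filename from project, date and revision number.
    Zero-padding keeps names sorting correctly past v09."""
    when = when or date.today()
    return f"{project}_{when:%Y%m%d}_v{rev:02d}.mov"
```

Because the date and revision live in the filename itself, a directory listing doubles as a change history even outside a versioning platform.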
Finally, when all of the elements are in place and agreed upon in the rough cut, the cut is locked. This “picture lock” stage means that there will be no more changes made to the overall structure of the edit, and that it’s time to move on to finishing.
With MediaSilo’s versioning features, you can keep track of all of a file’s versions easily in one place. Shared links automatically update with the newest file, and reviewers can easily toggle between old and new versions to see changes and resolve comments.
Stage 3: The Post Production Team
Who’s Who
While getting to picture lock in the offline edit or rough cut is a huge part of the post-production process, there is still a lot of work to be done before the project is completed. Many details still need to be finalized, and finishing touches need to be applied to various aspects of the piece to create a finished work. Imagine an art gallery, in which all of the pencil sketches on the walls will soon be replaced with real paintings.
To create the final masterpiece, many people and players fall in and out of the workflow. It takes a village to make a movie! When watching the credits roll at the end of a film or TV show, you’re reminded of all the human touchpoints within the post-production process. Keeping track of all of them and assigning user roles at this stage is critical to ensuring that the appropriate people have access to the correct cuts at the right time.
Editors / Finishing Team / Artists – The online editing process, or “conform”, is usually when the original, full-size footage files are accessed again, and inserted into the cut in place of the smaller proxies. Close collaboration between the offline editor and the finishing and online artists is critical. When working remotely, everyone will need access to the same elements in order to accurately transition from locked rough cut to final online conform.
VFX – The creation of visual effects is a detail-oriented process that may require working on each frame of the footage individually. Tiny details, such as shadow and lighting, can throw off the realism of the effect for even the most casual viewer. And often a particular visual effect, or piece of CGI work, will go through multiple rounds of revisions and approvals before being inserted into the online edit.
Graphic Designers – The online phase of post-production is when the final type treatments are discussed, fonts and logos are designed and chosen, and overall composition is determined for any graphic elements. Often an entirely separate company may be in charge of the titles and graphics. They will provide options for the graphics work to the entire team and input will be given by the team, just as in the other phases of the cut. Once reviewed and approved, usually remotely, the finished elements will be placed in the final piece.
“Clients and directors are adapting and trusting the post-production process more and more every day. They are using technology to their advantage and shortening reviewing time by getting the videos right in their phones and computers having the opportunity to watch them anytime, anywhere.”
— MediaSilo Customer
Colorists – Careful color grading makes a film more cohesive by better matching footage from different days and places, and under different lighting conditions. This helps the finished piece feel more like a uniform whole and sets the mood and tone for the entire film. It’s always a good idea to view the work on a variety of monitors of all sizes and qualities, to simulate the experience of different viewers using all kinds of personal devices. What might seem pleasantly dark and moody on one device, may simply be impossible to see on another.
Musicians and Composers – Music is one of the most powerful components of a film, tv show, or ad. When the editor first starts composing the rough cut, temporary music or a “scratch track” is often used to give a general sense of how the scene will feel with music in place. But once the picture is locked, final music needs to be locked down, too. If stock music or an existing song is to be used, the rights must be secured. If original music is to be created, the composer will begin scoring to the picture. This process can often include several rounds of compositions and revisions for each piece or scene, and tracking the versions is key, as is sharing files remotely.
Sound Editors – Sound design can play an important element in bringing a film, television show, or commercial to life. Sound design can be as simple as enhancing or adding footsteps to a scene in which a character is walking, or as complex as creating an entire auditory language for a cinematic world. Naturally, no two filmmakers will choose the same elements for a particular sound, so choosing sounds is quite a creative endeavor, and having the team review and agree on them is also an important part of the post-production workflow.
During the finishing process, an entire ecosystem of different users can focus their energy on specific elements of the ad, show, or movie — but they must all be seamlessly integrated into the process in order for the end product to be effective. Keeping track of versions, examining minute details, and providing access to those who need it (when they need it) is crucial in creating an elegant and seamless workflow.
MediaSilo’s user permissions allow you to easily control who has access to your content and exactly what they can do with it. Use one of our standard user roles, or create your own custom roles for each team member.
Stage 4: After the Final Cut
One for Me, One for You
Once the picture is locked and the masterpiece has been fully developed, it feels like the end of the line. But, wait, there is still more. Two important final steps are needed to complete the finished piece: delivery to a host of other teams, departments, and vendors who need to begin the sales and distribution process, and archiving in a media storage system.
Delivery
When post production ends, a whole new phase of the process begins. Before signing off on a project, an editor or post production supervisor must ensure that the project is delivered to any other team that might need it. This could include internal departments or outside vendors, depending on the size and scope of the production.
Marketing and distribution teams will need to generate promotional materials, that may include trailers and promos, photographic assets, printed posters and artwork, or press kits. While many of these will have been in the works throughout the post process, they are often subject to versioning and team approvals just like the film. Because the content is so closely intertwined with the film, the versions and input need to be centralized where team members can always access the latest materials.
Captioning and localization is often handled by a third-party vendor, who will need full access to audio and video files to complete their work. For an international release, the film or tv show may need to be captioned in a variety of languages, dubbed, or even reformatted entirely to fit the specifications of international distribution platforms.
Traffic managers or broadcast business managers might oversee the process as the piece gets sent to networks, studios, theaters, digital portals, or other final users. And each is likely to have his or her own preferred specifications. Some will request files in a particular file size, format, or compression algorithm that is best suited to their own systems. Security will have to be maintained for any projects that might be proprietary or attractive to eager fans or competitors.
The threat of content leaks and piracy is a huge concern for pre-release materials. MediaSilo has built-in SafeStream technology for both visual and forensic watermarking, keeping your content safe while you share to all of your vendors.
Archiving
In addition to outputs, all of the elements that went into making the piece, including the original camera footage, camera logs, XML files of metadata, sound elements, visual effects elements, graphics, and any versions of the masters will all need to be prepared for storage and archiving, so they can be found and accessed later if needed. A cataloging and naming convention is a good idea, since it’s often difficult to remember all the little things that were done months or even years later. A versatile and robust storage platform is critical to being able to find what you need for later revisions, and to create future files as needed.
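A common building block for that catalog is a checksum manifest: a record of every archived file and its hash, so the archive can be verified years later before anyone relies on it for a revision. A minimal sketch (the function name and layout are illustrative):

```python
import hashlib
from pathlib import Path

def build_manifest(root: str) -> dict:
    """Checksum every file under an archive root, keyed by relative path,
    so the catalog can later verify that assets are intact."""
    manifest = {}
    for f in sorted(Path(root).rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            manifest[str(f.relative_to(root))] = digest
    return manifest
```

Re-running the same function against a restored archive and diffing the two manifests is a quick integrity check.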
Non-Traditional Workflows
While we’ve talked about how things are typically done, each project has its own unique needs, and post-production workflows can be created based on the way a particular project comes together. Today’s increasing use of remote collaboration makes non-traditional workflows even more common, and adds new challenges to designing them. Flexibility is often critical in making things happen efficiently. In some cases, a phase of post-production may start before the typical preceding ones are complete. For example, time-consuming visual effects may have to be underway before the picture is locked. Or music may be composed before the shoot is even complete, so that it can be played on set and the characters can react specifically to it. The key to managing an unconventional workflow is organization. Without it, a huge amount of time and work can be wasted. So it becomes even more important to have a well-synchronized team and a solid post-production structure when using a non-traditional workflow.
Conclusion
Today’s post-production workflows bear almost no resemblance to how things were done 20-30 years ago, in the days of tape-to-tape editing and processing actual film. And post-production continues to change every day, as new technologies and needs arise. With an increasing percentage of production being done with all or part of the team operating remotely, up-to-date sharing solutions and digital collaboration are more important than ever.
“I think this will be the way to work from now on. I don’t think we will ever go back to the way it was.”
— MediaSilo Customer
We can already see significant changes coming, such as footage flowing directly from cameras into post tools, VR and AR formats, new finishing considerations for virtual production, and finishing platforms that seamlessly combine offline, online, and color grading capabilities for users of all levels.
What will never change is the need to be organized, team-oriented, and collaborative when undergoing a post-production project. New technologies will only bring new complexities and building a foundation for an elegant workflow will serve you now, and… to infinity and beyond.
Find out how MediaSilo by EditShare can help your post-production team get projects reviewed and approved faster. Contact us to get started.
A Tradition Stuck In The Past
Let’s face it: the traditional agency pitch is a drawn-out, sprawling, cumbersome process that has made its way into all aspects of creative business development.
From selling agency services to proposing high-profile campaigns; from bidding on big commercial production projects to placing new talent: in every creative endeavor, agencies, boutique firms and reps dutifully invest time, creative energy, and resources into frustratingly rigid dog-and-pony shows. And talk about rigid: according to pitch consultancy ID Comms, today’s agency pitch process has been in place since the early 1990s. In other words, “most consultants’ pitch templates are older than the internet.”
Even during the COVID-19 pandemic when accelerated changes swept through so many other industries, these archaic processes still remained in place, keeping agencies tied to stagnant and unproductive methods of developing new business – while the rest of the world raced ahead. Basically, we’re stuck with the traditional pitch.
The Creative Cost
Yes, the process is rigid and time consuming. Yes, the average agency surveyed in industry report The OUCH! Factor™ spent 22.2 days’ worth of staff time last year on each pitch they entered (equal to one employee working one full month per pitch – 11 times a year). And sure, the odds of winning the pitch after all that work are around half, according to the same research. It couldn’t get any worse, right? Wrong.
New studies have shown that the traditional pitch process actually undermines the core strength of an agency or commercial production company: your creativity.
“We’re meant to be in the business of creativity, but the focus has shifted,” says MullenLowe Group UK’s Lucy Taylor in a Campaign Live article from March 2022, Resetting the pitch process and bringing the soul back to adland.
Pitches can be very stressful and lead to burnout, posing a serious problem for clients, who require an ecosystem of dynamic agencies doing great creative work, which is “the lifeblood of our industry,” says Andrew Lowdon from ISBA, the trade body representing advertisers, in Marketing Week. After all, says Jemima Monies of adam&eveDDB in the Creative Salon, “New business should be a means of nurturing talent, rather than draining it.”
How can you shift the odds in your favor when it comes to preparing for the dreaded pitch?
The Agency Reel: The Win Before The Pitch
Consider the common, basic criteria clients use to determine the fit of any agency — essentially, the admission price for you to compete:
- You understand the client’s business, their vision, and their immediate need
- You have experience in their industry
- You’ve got a recognizable roster of previous clients
- You’ve got the right mix and level of capabilities
- You’ve shown you can personalize your solution to them
With one creative reel done right, you can prove to your prospects that you possess all these characteristics, before you invest valuable hours into a pitch.
A good reel will help pave the way into a prospect conversation, while leading with proof points they care about— giving them confidence to include you in the brief.
Better yet, a reel will allow you to learn earlier in the process (even before the competition begins) whether the client feels you’re a good fit. Then you can determine whether to continue to invest, or weed out the clients that aren’t a good match, and, instead, pivot to the next important project.
What’s important to note is this: the best reels reflect the specific client who’s watching them, and demonstrate what you can do for that client’s exact needs. That airtight resonance between your work and the client’s needs is what gives a reel the best chance of hitting every one of their initial criteria.
Sizzle On Demand?
Of course, every client is different. That means the best reel you can use is one that’s customized for each client. And creating custom sizzle reels professionally can get very expensive—up to $10,000 per minute of finished video, or more in many cases.
And that’s the conundrum: on one hand, a reel made just for the client you want to pitch will be far more effective, and ultimately win more of the right clients for your agency or firm. On the other, it’s risky, involved, and expensive to professionally create a bespoke agency reel each time, when you have no idea whether you’ll land the project.
If you’re going to engage with multiple prospects while trying to beat The OUCH! Factor™ odds, it makes sense to scale your reel-building capabilities internally. Doing this will allow you to conduct business development proactively and more swiftly when you want to get ahead of the pitching cycle; reduce the expense and time of customizing every reel; and most importantly, increase the “at bats” your reps can get for you with as many clients as you can handle.
Best of all, with the right tools, it’s possible to build reels quickly with the people you have in place (whether internal talent or outside reps), using the content you’ve already created. Just make sure the tools you choose have what it takes to hit all the right marks with clients.
Agency-Friendly Requirements For Reel-Building Tools
As a secure and highly customizable media management and reel-sharing solution for 1,500 creative companies around the globe, Wiredrive has served marketing teams, agencies, and commercial production businesses for more than a decade. Customers have called Wiredrive one of their favorite solutions for making custom reels – and the software includes a complete set of features designed specifically for this capability, called Library.
We’ve examined the usage of hundreds of Library customers and categorized their reel-building requirements into three major themes. Keep reading for the specific features you should look for when considering your own internal solution for custom reels.
First Things First: Streamline Steps
Every additional process it takes to get from your content to your finished reel is another obstacle between you and your potential clients.
Think about all the disparate components involved in delivering a video show reel today—from cataloging and finding all the pieces of content you want included, to getting them from where they are into the right format and location for production, to designing a template to showcase them – and look for a solution that eliminates steps all along the way.
Connection To Asset Catalog
Everything starts with the assets. Since the reel is your resume (tailored, of course, to your prospect), you’ll likely draw from the entire library of your creative work as the source of raw materials for the final portfolio. Why not use a presentation tool that connects directly to those assets?
Instead of having to start from scratch and think about “Wait a minute, was that on Vimeo? Do we still have that in storage?” choose a solution that doesn’t require people to take the media out of one system and move it, transfer it, or send it into another system.
When you have a library of all your final finished work, already uploaded, cataloged, tagged, and easy to find, you can easily use it as the back-end of your media source and then wrap a show reel around it.
Consolidation Of Multiple Tools
Most agencies and production teams use as many as a half dozen different tools to present and share their work – anything from downloading media from a company Google Drive, to creating Keynote presentations that link to Vimeo videos, to collecting everything in a video site like Wistia, to building custom websites to host pitch reels.
Not only does this involve a multi-step process to collect assets and deliver a polished reel, but it also means paying a half dozen monthly fees for different tools—along with the multiple storage costs across those tools.
Find a front-end solution that consolidates those discrete, disconnected tools into a single platform that handles every task in one place. Otherwise, you end up paying that many vendors and taking on the administrative overhead of a fragmented landscape.
Keep Things Simple
This is the next major theme: you want tools that are simple enough that anyone who needs to can create a reel when prospects need to see one. And “anyone” could even mean your sales reps who have a prospective client on the line who wants to see something right now.
Your responsiveness alone – along with your ability to turn around a beautiful presentation quickly – will make a strong impression from the start.
Easy for Non-Designers
If most of the people pitching your services are natural-born salespeople, not natural-born designers, one requirement to look for in augmenting your in-house reel capabilities is the ability to create impressive-looking output without being a professional designer.
Prebuilt templates, customizable design themes, and drag-and-drop presentation building are valuable features that any presentation software should have. The further ability to simply plug in the desired asset and have it “just work” saves many nail-biting hours otherwise spent struggling with incompatible file formats, complicated editing software, and painstaking adherence to creative guidelines.
Brand-Customizable
If there’s any caution around allowing reps and other non-creatives to build agency reels, it’s that you have clear standards for external-facing presentations, fixed guidelines for what they should look like, and a justified reluctance to let people simply make their own and go off-brand. After all, that’s a recipe for turning what should be a captivating portfolio into Myspace, fast. Regardless of whether your staff and reps are creatively gifted or visual newbies, ensure your solution can be deployed under a model of brand control: set up templates that fit your standards, and those templates can be applied automatically to the hundreds or thousands of presentations that get sent out for everything you do.
No Code Platform
It’s worth mentioning that bringing reel-building tools in-house doesn’t necessarily mean everyone needs additional tech skills (or that you need a new hire to manage the solution). Many small companies contract with a web development firm to create bespoke, client-branded web pages – but then the agency still needs to figure out how to get the firm the assets it needs, how to keep client assets secure, and how to specify the correct analytics. It is truly cost-, complexity-, and time-saving when the platform doesn’t require anyone to stay current on technology skills.
Embrace Analytics
Posting a sizzle reel on YouTube could show you the number of views, daily trends, and so on, but with the right tools you can gain enough knowledge over time to know in advance whether your pitch will have a chance.
Reporting And Insights
How powerful would it be if you could eliminate uncertainty around the business development process? To have insight about whether your reel was viewed (or wasn’t), the knowledge of how widely it was shared, and potentially, even the confidence to determine whether you’ll be selected?
Make sure the solution you choose has enough data reporting built-in that you can make better, faster, and more effective decisions about the presentations and reels you’ve sent out.
It’s one thing to check the basic box of adding a Google Analytics code to the web page where the video was shared. It’s another to be able to answer questions like: how much of the media did your recipient actually view? Was only a short clip viewed before the window closed, or did the recipient stay engaged through the entire reel? Was it shared with other people? Over what time span was it viewed?
All of these insights can signal interest, consideration, and the urgency of a decision – or they can indicate that your creative efforts are better spent on the next promising account. Find a solution that helps you make better, more profitable decisions.
Make Your Next Pitch A Fast One
The agency pitching process isn’t going away anytime soon. But with tools that let you quickly put your prospective clients’ vision front-and-center – using the beautiful work you’ve already created – you can get on the shortlist, and possibly even short-circuit the distance to a winning pitch.
Wiredrive Library puts your entire media vault at your fingertips. When all your assets are so readily available to your team, you can empower your stakeholders, sales and marketing teams to easily create and send video reels and multimedia presentations of finished work, to help present your work effectively – and win more pitches with less pain.
Want To Learn More?
Discover why agencies and commercial production companies have found that Wiredrive by EditShare makes a difference in their pitching workflows.
Check out this case study to find out how Australia-based video agency New Mac became more efficient at responding to business opportunities, and improved how they present themselves in those opportunities.
EditShare’s video workflow and storage solutions power the biggest names in entertainment and advertising, helping them securely manage, present, and collaborate on their highest-value projects. To learn more about how EditShare can help your video production team, contact us today.
With nearly three decades in video games marketing to his name, Stephen Hey is one of the most experienced freelancers in the business. His career includes Marketing Director for EA studio Chillingo, leading lifestyle PR for Ocean Software and Infogrames, founding a games creative agency, and now freelancing for the likes of Wargaming, Bossa Studios and Merge Games. Stephen started his own consultancy HeyStephenHey in 2017. Stephen helps developers, publishers, educational and government bodies, and other companies working in the games industry with their marketing strategy.
We gave Stephen some questions we’ve heard from our customers, and he provided his take on game trailers. He also set off to get the opinion of industry experts to find out exactly how far the role of game video assets has evolved, who’s involved, how to do it right, and what comes next.
HOW IMPORTANT IS A TRAILER THESE DAYS?
After nearly thirty years in video games, I can tell you it’s still true that the quality of the trailer can ‘make or break a game’. We only really started making trailers in the mid-nineties, for trade shows or sizzle reels, but now they are critical to any game campaign. Trailers changed and redefined games marketing.
What I think about these days is where we are now with trailer creation, especially given today’s insanely powerful graphics cards and game engines. With the ability for more developers to create at that ‘top-of-the-pyramid level’, there are still ways to generate a marketing breakout with the release of a couple of minutes of well-edited gameplay.
WHAT’S AN EXAMPLE OF A TRAILER WITH HIGH IMPACT?
When first-person footage of a breakneck, motion sickness-inducing motorcycle race from the PS5 game Ride 4 went viral at the end of September 2021, it wasn’t because it was remarkably different from real action-cam footage. The difference was that it wasn’t real; it was from a game.
The gameplay from ‘Ride 4’, shot from a motorcycle rider’s point of view, was incredibly realistic and enthralling. As the viewer bolted around a rain-varnished circuit lit by a gloomily overcast sky, you could feel every lean, the terror of each near-miss, the wind rushing past. This was ‘next-gen’ gaming doing what it was meant to do – deliver the photorealistic gaming that gamers have dreamt of for decades. The game was soon racking up astonished “Looks like GoPro footage” Tweets by the thousands.
HOW CAN A GAMES COMPANY USE A TRAILER TO REACH BEYOND ITS FAN BASE?
Releasing long-form gameplay like what happened with Ride 4 could be something to think about if you have an addressable market outside of the conventional games segments, in this case, motorsports fans. By releasing a trailer that focused on the accuracy of the simulation, developers may engage with that secondary audience of real-world fans and convince them to give the thing they love so much, IRL, a chance in the virtual world.
HOW NECESSARY IS IT TO PUSH THE VISUAL LIMITS OF PHOTOREALISM IN A TRAILER AND BROADER MARKETING CAMPAIGN?
Today’s tech can deliver photorealism, ‘like being in a movie’ — but is that what everyone wants? I’ve talked to many colleagues in the industry about this, including the founder of Atomhawk and co-founder of the new agency Big Thursday, Ron Ashtiani. Ron told me, “The world has shifted away from realism now. Ten years ago, it was enough to have ‘realistic’ looking graphics to wow the player, but today you need more. When the PlayStation 3 and 4 and the Xbox 360 came along, there was a substantial jump in ‘realistic’ looking graphics. However, these worlds were usually created using brown and grey colour palettes. But today, there is a shift towards realism combined with wild colour or stylistic choices. Cyberpunk 2077 is a great example of this with its highly contrasting colours and lighting in a realistically rendered world.”
While an array of technical issues hampered Cyberpunk 2077’s launch, the vivid yellow and neon blues of its marketing campaign, key art (the ‘pack front’ images used on digital stores) and out-of-home advertising did an outstanding job of conveying its look across all media. Using your aesthetic consistently across all your assets and metadata is especially important. When it looks as strong as this, it can aid discoverability on stores that are as densely populated as the PlayStation Store or Steam.
WHAT CAN MAKE A TRAILER BREAK THROUGH THE NOISE?
It takes a lot to surprise the games industry and its fans, but at 2021’s Gamescom (Europe’s largest games show, held annually in Cologne, Germany), an open-world adventure called DokeV from developer Pearl Abyss was on everyone’s lips. The game takes the established Pokemon genre but appears to radically shake it all up, with a look and feel that’s genuinely unique – all communicated by an eye-popping three-minute trailer. Ed Thorn from games site RockPaperShotgun said, “Unlike everything else, which made some sort of sense, this game took a bold choice and made none. It made no sense at all. All we got was a barrage on the senses, and I respect that rogue attitude. Instead of opting for a PowerPoint presentation like its peers, it just blared K-pop at everyone for three minutes and then moseyed off like it was nothing.”
The intoxicating trailer for DokeV felt familiar yet stunningly different; the city looked like other cities in games, the characters weren’t radically different, the actual gameplay wasn’t anything especially new. But it was just like taking a visual cold shower and immediately went viral, with many journalists calling it the game of the show even though no one got to play it. It showed the power of a unique aesthetic and how a successful style and theme can become a crucial asset for a game, a valuable part of the IP.
THAT’S GREAT FOR HIGH-PROFILE STUDIOS, BUT WHAT ABOUT MORE MODEST-SIZED BUSINESSES?
More often than not, when a game surprises, intrigues, or delights me with a new look, it comes from an Indie studio rather than one of the giant developers or publishers. With up to 300 games a week now being published on PC games platform Steam, games need to work hard to have a point of difference, and style and theme are often key to this. AAA teams may number hundreds of people, many specialising in ‘micro’ niches like vehicle physics; Indie teams are much smaller groups, used to being more flexible and turning things around on short deadlines.
I asked Bossa Studios’ Studio Art Director, Ben Jane, for his take on this; “You can take more risks in the Indie world because the production times can be shorter. You pay a premium with AAA because of the attention to detail and the quality of execution, and this takes so much more time, it’s harder to take risks. As an Indie, you can be more forgiving and reactive because your budgets are hundreds of thousands of dollars instead of millions.
However, AAA studios are pushing the boundaries. ‘Ratchet & Clank: Rift Apart’ on PlayStation 5 is just jaw-droppingly gorgeous and delivers a unique style,” said Ben.
IF THE GAME’S AESTHETICS ARE BECOMING AS SOPHISTICATED AND COMPELLING AS BIG-BUDGET FILMS, WHAT’S THE PURPOSE OF A TRAILER?
For marketing, trailers are still essential. A great new ‘breakthrough’ trailer sits at the top of the marketing funnel in creating awareness for your game.
I asked Sam Roberts, creative producer at game trailer house DoubleJump, for his opinion; “The cinematic game trailer is not dead; it is still absolutely the best way to sell a game. Two minutes of punchy editing, with clear, precise use of music and sound effects, will not be going away anytime soon,” he said.
But while a powerful ‘impact’ trailer is one of the most vital assets for a campaign, it is not the only video asset. Modern game campaigns will be made up of tens or even hundreds of pieces of video. Look at the official video channel for the incredible Forza Horizon 5, which lists about 30 video assets on YouTube alone, ranging from deep dives into the recording of SFX to episodes of a Forza Horizon 5 magazine show.
And this isn’t just the big AAA titles; a roster of assets can be powerful for any game. Curating a community and building a tidal wave of support, even for the most ‘indie’ of titles, is vital in a market where 200+ games launch on Steam every week. So for campaigns to succeed, they need to have multiple videos, each with different objectives. An impact trailer will be about getting eyeballs, but then the engaged parts of that broad audience will want to know more about how a game plays. Interviews with developers and ‘making of’ mini-documentaries will bring your community closer to the developers and breed loyalty to the game. Unique gameplay mechanics can be demonstrated in shorter, focused ‘mechanics’ trailers, and you may want to spotlight the ‘craft’ behind your game with profiles of some of the team who created it. Again, these needn’t be the preserve of AAA — take this example for Creature in the Well.
HOW WILL EMERGING TECHNOLOGIES CHANGE THE WAY GAME TRAILERS ARE CREATED?
The new generation of consoles is now with us in the form of PlayStation 5 and Xbox Series X/S, and they are powerful machines. Their graphics capabilities are unique, including the much-heralded ray tracing, by which scenes are rendered by simulating the actual rays of light in a game. This technique makes for much more realistic-looking games; it has been used in tv and movie production for years, and the new consoles can render scenes using ray tracing on the fly.
Lighting Directors have long been a part of tv and film-making, specialising in creating light and mood for each scene. Now we have similar capabilities in games, and we will see similar roles evolve here. It would not surprise me if game trailer creators moved from being ‘editors’ to fully-fledged cinematographers. In this video promo for Call of Duty: Vanguard, actual war photographers were sent ‘into the game’ to capture stills, and the results were stunning. Send in a movie director instead, and you are going to get some earth-shattering footage.
To go with the new hardware, there are new tools.
Unreal 5 is the latest version of Epic’s mighty game engine and comes fully loaded with graphics capabilities that promise to take things to another level. We’ll start seeing Unreal 5 crafted games soon with ‘Redfall’, ‘Senua’s Saga: Hellblade 2’ and ‘STALKER 2: Heart of Chernobyl’ as well as many more on the way.
Epic has built Unreal 5 with cross-industry appeal, and Unreal has already been used in close to 200 movies and tv shows to date, including ‘The Mandalorian’ and ‘Westworld’. I think this will breed more cross-fertilisation between the game and movie industries, with each learning from the other. We’ll see this reflected in games and the video assets we use to promote them – and maybe a blurring of the lines, resulting in productions that create both games and TV shows from AAA IP. If you invest in building a virtual world in an engine that can be used for both games and tv, why not make both?
WHAT CAN STUDIOS AND PUBLISHERS DO TO MAXIMIZE THE IMPACT OF THEIR VIDEO ASSETS IN A GAME TRAILER?
Marketing starts on day one – it should be embedded in the game’s design, and this applies as much to a one-person Indie production as to a 200-person AAA franchise. That should include the development of a style guide: a colour palette, branding, even the beginnings of the key art and UI design elements. The games that do this well are recognisable from a single screenshot – for example, ‘Cyberpunk 2077’, ‘Untitled Goose Game’ and ‘Hades’. The graphic language must be consistent across all assets, and this can only succeed where there is full collaboration between development and marketing.
If videographers are an entwined part of this, you give your game the best chance to have maximum impact. So work closely with them and bring them in earlier than you might think necessary to start thinking about how they could create that first ‘impact’ trailer or teaser piece.
Developers can go further still and add modes in games that allow professional game videographers to go into games like a cameraman would go into a warzone, as in the COD film mentioned earlier. Dedicated game video houses like DoubleJump and Big Thursday can explore a game from the raw build. They know how Unreal and Unity work and, if given the option, can go into the game to choose camera positions, light scenes, create tracking shots and capture incredible footage that is still ‘in engine’. This footage can then be used for video and static assets, and allows the very best rendering of the game to be captured without taking valuable time from the development team.
Finally, think about how the trailers will be consumed. When briefing in trailers and consumer videos, or choosing footage, think of the devices people will be watching them on. Very few people will be watching on the top-level equipment that edit houses and studios have. Often, these videos are consumed on a phone screen (usually while the viewer is also doing other things on other screens). So don’t assume that the stunning visual detail and carefully crafted audio will be experienced by everyone. Aim for the best scenario, of course, but imagine the worst!
ANY LAST WORDS OF ADVICE FROM YOUR 30 YEARS OF MARKETING GAMES?
Sure! Marketers and producers of game trailers should experiment, have fun, play with the tropes and challenge preconceptions. Take inspiration from movie trailers, watch as many game trailers as you can, and take note of the ones with lots of views even when the game is relatively small or indie. Think about trailers and other assets from day one – even when concepting games – because moving images sell games today more than ever, and the trailer, that narrow slice of the game, needs to cut through more than ever before.
Remote collaboration during game development and publishing is chaotic. MediaSilo by EditShare was designed to help production professionals collaborate on video assets, and get work reviewed and approved faster. Get in touch with us today for a demo.
Networks and studios want reviewers to view and write about new shows and movies. Writers want an easy experience to access pre-release content. Both parties want to keep content secure. But despite these closely aligned goals, the relationship between content producers and reviewers can sometimes get contentious. The reason? Reviewers hate screening sites.
We reached out to 200 press writers, bloggers and reviewers and asked for their opinions about what they love and what they hate about screener sites. Through in-depth interviews and surveys, we learned about the current issues surrounding the screener ecosystem. Incredibly, only 9% of reviewers are somewhat satisfied with the current state of affairs and none are “very satisfied”. Clearly, there is nowhere to go but up in serving a key audience disillusioned with tools critical to performing their jobs.
Frozen Out
So what do reviewers dislike so much about digital screening platforms? Their number one complaint: Lack of reliability. In fact, 50% of reviewers said they have missed a deadline or failed to write a review at all due to technical issues with a screener.
“Often times, because there is so much television these days, it is a last minute thing when I am getting to a show,” explains Rob Owen, TV Critic of the Pittsburgh Post-Gazette. “And then to get to it, and not being able to watch because it stops every 10 seconds is very frustrating.”
“It’s assumed that screeners are going to be poor quality.”
Colleen Kelsey, Assoc. Editor, Interview Magazine
There are multiple reasons why a video might not work, with many on the user’s end including unsupported browsers, bad wi-fi connections and internet outages. But since most online screening solutions don’t offer dedicated support, reviewers are left to reach out to their PR contacts, who in turn must get help from their IT or operations department. If the issue happens after hours to a reporter on deadline—well, you can kiss that coveted coverage goodbye.
Log-in Chaos
The second most common complaint reviewers have about screeners is the need to juggle multiple logins and ways of accessing content. Half of reviewers have access to more than 20 screener sites, all of which require different URLs and usernames and utilize different password rotation and complexity requirements. The end result: frustrated reviewers and an alarming number of potential loopholes in security.
“Just managing the variety of ways you have to get screeners is now a huge part of the job for everybody…sometimes it’s like ‘how much trouble is this worth?’”
Ellen Gray, TV Critic, The Philadelphia Inquirer
While distribution is increasingly going digital, press reviewers still get about 25% of their screeners via DVD. About half of those discs are never destroyed or thrown away. Even more worrisome, only 8% of reviewers use unique passwords stored in a password manager.
To manage the proliferation of screener destinations and credentials, the rest leave passwords scribbled on Post-It notes, keep the same password across all sites, store them in Google Sheets shared with colleagues, and use other less than desirable password management strategies. The end result? Many screening sites are shockingly susceptible to attack by enterprising hackers willing to cross-reference a reporter’s publicly posted email address with the latest password dump.
Relegated to a small screen
Critics are just like the rest of us – they prefer to watch shows and movies on a big screen, perhaps with a bowl of popcorn or a beer in hand. Even though reviewers are paid to watch new content, they tend to do it after office hours; peak viewing time for screeners is between 8 and 9 PM in any given time zone.
“There are certainly some shows that you want to see on a bigger screen…viewing on a laptop is really not the ideal situation.”
Rick Ellis, Managing Editor, AllYourScreens
This viewing pattern has a few ramifications. First, reviewers are often accessing content when there is no longer any technical support available at the network or studio, should they run into a problem. Second, despite the fact that most people in the industry now have Apple TVs, Rokus, Fire TV Sticks, or other devices, most screeners are still only offered online and must be viewed on a PC, laptop or tablet. The lack of a compatible TV app was listed as the third most common pain point for reviewers.
“My main problem with any screener site is that it’s very difficult, if not impossible, to find ways to cast it on to regular television,” says Randee Dawn, entertainment writer for TODAY.com and NBCNews.com. “I’m not a huge fan of watching on my computer screen because I spend hours in front of my computer anyway. It’s kind of a turnoff for me in terms of trying to invest time to watch screeners.”
Is There A Better Way?
Reviewers want to give network content an honest, timely review and, as industry professionals, they’re just as concerned as creators about keeping pre-release content safe until premiere date. They’re even willing to jump through some additional hoops if the overall experience is easy and seamless.
“If the industry would adopt a centralized solution with everything in one place, I would happily accommodate much tighter security.”
Alyssa Rosenberg, Culture Writer, The Washington Post
Dawn proposes a potential solution: “What would be nice is having some sort of central site where you have just one log-in, and all networks have just agreed to use it.”
While the idea of a destination screening site seems radical, it’s reality for many reviewers today. Think about the problems a centrally managed site solves. Networks and studios are all doing duplicate work to achieve good playback. By banding together on one platform, content creators could ensure better quality of service and enterprise-level security, offered with 24/7 high-touch support. All while freeing PR and marketing professionals to develop relationships with reviewers and promote content, rather than troubleshooting technical issues.
The future of screeners is here. Screeners.com addresses all of the issues cited by critics, making it easy to view all of the content they’ve been invited to preview in one frustration-free destination. We’ve also taken the requests of PR and marketing teams (and the IT teams that support them) to heart, providing turnkey, branded screening rooms protected with industry-leading security.
World Class User Experience
Screeners.com provides critics and reviewers with a simple interface with a video player that just works. No buffering, no broken connections. And if for some reason your reviewers do have an issue, we provide industry leading customer support so that your team doesn’t have to field angry phone calls.
Simple and Secure
Leave your password log-in and security issues in the past. Screeners.com uses secure Magic Link technology so that your reviewers can just worry about watching your content: no passwords, no frustration. When combined with SafeStream visible and forensic watermarking, your PR team can be a hero to critics and the content security team.
Let Viewers Watch Where They Want To
The critics have spoken: give us the ability to watch on the big AND small screen. Screeners.com lets viewers watch your pre-release content on a native Apple TV app, cast to other connected devices, or watch on a PC or laptop. By giving reviewers a simple experience across platforms, you eliminate the barriers to getting the coverage your content deserves – and give yourself a better shot during awards season.
Happy Press Reviewers = Good Reviews
Screeners.com keeps you in control of your content and brand, while keeping some of your most important viewers happy and engaged. It’s time to implement a system that’s simple, secure, and frictionless for both you and your reviewers – one that keeps them coming back. The more barriers you can remove between your content and your reviewers, the better off everyone will be.
“I’m genuinely thrilled when something new pops up in Screeners.com rather than other screening sites.”
Jacqueline Cutler, Freelance Journalist
Learn more about sharing pre-release content with reviewers, critics, and other stakeholders with Screeners.com.
EditShare’s video workflow and storage solutions power the biggest names in entertainment and advertising, helping them securely manage, present, and collaborate on their highest-value projects. To learn more about how EditShare can help your video production team, contact us today.