Case Study: Is Perfect the Enemy of Good Enough? 
Digital Video Preservation in the Age of Declining Budgets

by Doug Boyd

Digital video is a large, complex, and expensive format to curate at scale, and there are still no codified professional standards for its preservation. This case study explores the efforts of the Louie B. Nunn Center for Oral History in the University of Kentucky Libraries to develop preservation-oriented procedures and workflows for responsibly curating the born-digital video oral histories being accessioned into our collection. Specifically, I will address how we are choosing to accommodate the discrepancy between the growing quantity of digital video being recorded for oral history and accessioned into the archive and the declining budgets available for properly preserving these complex and extremely large data files.

The Louie B. Nunn Center for Oral History at the University of Kentucky Libraries has a collection of over 8,000 interviews, 95% of which were collected on audio cassette. The video holdings are primarily associated with oral history-based documentaries produced in the 1980s and archived on 3/4” U-matic tape, along with scattered interviews on Beta SP and VHS. Recently, however, the collection has been accessioning Mini-DV, HDV, and DVCAM tape, as well as data files arriving on external hard drives encoded in a variety of formats. I try to work with groups conducting video-based projects as early in the process as possible so that we can agree on formats and begin preparing to accession such a large amount of data. However, I am often not consulted until a project is ready to be turned over to the archive.

For the digitization of our relatively few analog video holdings, we chose to digitize the interviews without compression. I have taught dozens of digitization and preservation workshops over the years, mostly focused on digital audio. With digital audio it was easy to draw a clear line: uncompressed audio is the “best practice” for preservation, while lossy compression, such as that used to create MP3 files, is fine for access versions destined for the web; the preservation version must be the highest quality possible. When working with born-digital video, it is nearly impossible to avoid compression, as most formats captured in the camera already apply some degree of it (see Kara Van Malssen’s OHDA essay Video Preservation). With digital video, the goal is to minimize transcoding, which introduces additional compression and can degrade the overall visual quality of the video file.

Because the Nunn Center had archived a relatively small amount of video, digitizing our analog videos as uncompressed files wrapped in .mov or .avi containers was merely an inconvenience. File sizes for this type of data can be quite large (roughly 75 gigabytes per hour for standard definition), but because of the limited quantity of analog video in our collection, it was easy to treat these files as anomalies in the digital preservation system, make room on the servers, and move on. I felt secure in and satisfied with our preservation evaluation of our analog video holdings, as the majority of them have now been professionally digitized.
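
To see where a figure like 75 gigabytes per hour comes from, the arithmetic below sketches the data rate of uncompressed standard definition video. It is a minimal sketch in Python, assuming 8-bit 4:2:2 NTSC (720 × 486 at 29.97 frames per second); actual sizes vary with bit depth, frame size, and audio.

    # Back-of-the-envelope data rate for uncompressed standard definition video
    # (assumes 8-bit 4:2:2 NTSC; 4:2:2 chroma subsampling averages 2 bytes per pixel)
    width, height = 720, 486
    fps = 29.97
    bytes_per_pixel = 2
    bytes_per_second = width * height * bytes_per_pixel * fps     # roughly 21 MB per second
    gigabytes_per_hour = bytes_per_second * 3600 / 1e9
    print(round(gigabytes_per_hour, 1))                           # roughly 75 GB per hour, before audio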

At the time of this writing, the Nunn Center has 10 terabytes of server space. This amount of storage works very well for an audio-only collection. Interviews that were born digital or digitized at 16-bit/44.1 kHz (averaging two hours) typically yield about 1.5 gigabytes of data, and interviews captured at our current 24-bit/96 kHz settings yield about 2 gigabytes per hour. A terabyte contains 1,000 gigabytes, so the Nunn Center could comfortably accession and digitize 300-500 digital audio interviews and stay well within our server constraints. However, this comfort was clearly disrupted by the more aggressive introduction of digital video capture.
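
The audio arithmetic behind those estimates is simple; the sketch below assumes two-channel uncompressed PCM, and the channel count is my assumption (mono masters would halve these figures).

    # Uncompressed PCM data rates for audio masters (two channels assumed)
    def gb_per_hour(sample_rate, bit_depth, channels=2):
        bytes_per_second = sample_rate * (bit_depth / 8) * channels
        return bytes_per_second * 3600 / 1e9

    print(round(gb_per_hour(44100, 16) * 2, 2))   # ~1.27 GB for a two-hour 16-bit/44.1 kHz interview
    print(round(gb_per_hour(96000, 24), 2))       # ~2.07 GB per hour at 24-bit/96 kHz
    print(round(gb_per_hour(96000, 24) * 2 * 500 / 1000, 1))   # 500 two-hour interviews: ~2.1 TB of the 10 TB allocation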

In the past five years, the Nunn Center has experienced a dramatic increase in born-digital video accessions. Simply put, more projects and interviewers are using high definition video recording equipment to conduct their interviews. I am a staunch preservationist. I want to ensure that the Nunn Center is vigilant with its preservation system so that the oral history interviews being conducted and archived today remain accessible for decades to come. Additionally, I want the version that remains accessible to be the best quality possible.

At the point of capture, all of the digital video formats being accessioned into the archive are born compressed, and most of that compression is lossy. Even extremely high quality, professionally captured high definition video utilizes lossy compression. Despite this fact, some high definition capture formats that we have recently accessioned exceed 100 gigabytes per hour. As we began to witness a clear acceleration of born-digital, high definition video coming into the Nunn Center collection, the massive file sizes made it abruptly clear that we needed to develop and articulate a coherent and executable set of policies and workflows to preserve these interviews in the best way possible. Of course, my first tendency was to search for standards and best practices.

Digital Video Preservation: Where Are the Standards?

Standards for digital audio are clear and simple, and professional consensus has been clearly articulated for many years. In contrast, standards for preserving digital video have yet to emerge. Digital video carries on the tradition of analog video, which has always consisted of a dizzying continuum of proprietary technologies that, more often than not, failed to emerge from the commercial marketplace as ubiquitous formats, posing multiple and expensive preservation challenges. This is not to say that a great amount of work is not being done on the creation of preservation strategies; there are many talented individuals who are highly motivated to make this happen. Digital video preservation has great commercial potential, and usually when there is money to be made, difficult problems get solved.

So, with digital video being so ubiquitous in our society, why are there no standards in place for effective digital video preservation? The primary difficulties hindering consensus on preservation standards for digital video are these:

  • Digital video data files are complex information packages, containing multiple streams of information stored in multiple, often proprietary, wrappers.
  • Digital video data files are extremely large compared to digital audio. The Nunn Center has born-digital oral history interviews that contain 200 gigabytes of data for a single, high definition, two-hour interview. These massive file sizes require greater network bandwidth, greater processing power in the computers that will handle the digital video files, and much greater amounts of storage (see the sketch following this list). Uncompressed standard definition video can range from 75-112 gigabytes per hour, and uncompressed high definition video can run as high as 600 gigabytes per hour.
  • Most digital video technologies are proprietary; they are rapidly changing due to commercial competition, thus posing a great risk of obsolescence. There are hundreds of digital video codecs and formats active at the moment.
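
To make the bandwidth and storage pressure concrete, the sketch below estimates the uncompressed data rate for 10-bit 4:2:2 high definition video and the time needed to move a 200-gigabyte interview across a network. The frame size, bit depth, and link speeds are illustrative assumptions, not measurements from our systems.

    # Illustrative arithmetic: uncompressed HD data rate and network transfer times
    width, height, fps = 1920, 1080, 29.97
    bytes_per_pixel = 2.5        # 10-bit 4:2:2 averages about 20 bits (2.5 bytes) per pixel
    gb_per_hour = width * height * bytes_per_pixel * fps * 3600 / 1e9
    print(round(gb_per_hour))    # ~559 GB per hour, approaching the 600 GB/hour figure above

    interview_bytes = 200e9      # a 200 GB high definition two-hour interview
    for label, bits_per_second in [("Gigabit Ethernet", 1e9), ("100 Mbps link", 100e6)]:
        hours = interview_bytes * 8 / bits_per_second / 3600
        print(label, round(hours, 1), "hours")   # ~0.4 hours versus ~4.4 hours, before protocol overhead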

Digital video poses a major challenge to the basic, traditional rules of digital curation and preservation: avoid compression, maintain interoperability, make lots of copies and store them in lots of places, and utilize widely adopted, open access formats (see Doug Boyd’s OHDA essay The Digital Mortgage: Digital Preservation of Oral History). The bottom line: it is incredibly complex and expensive to curate and preserve digital video, even on a small scale.

File sizes for uncompressed high definition video are prohibitive, if not impossible, to manage. Unlike with digital audio, it has been very difficult to avoid compression and the need for transcoding when preserving born-digital video. The ideal scenario would be a workflow and process that could minimize the loss that comes from compression while achieving a reasonable data footprint. The most promising initiative in the creation of digital video standards for long-term preservation is being led by the Audio-Visual Working Group of the Federal Agencies Digitization Guidelines Initiative (FADGI). This working group has spent several years developing coherent standards supporting Lossless JPEG2000 as a standard preservation video codec contained in an MXF wrapper. Lossless JPEG2000 is an efficient, open access codec that retains the full quality of the video frame. The MXF (OP1a) wrapper holds the video in a nonproprietary, open access container that already has adoption in the professional broadcast arena and is built to carry significant metadata alongside the audio and video streams.
Although the immediate focus of the FADGI effort was on creating a standard for the digitization of analog, standard definition video, support is growing rapidly for applying this standard to the preservation of born-digital, high definition video as well. The FADGI MXF Application Specification for Archive and Preservation is currently in draft form (http://www.digitizationguidelines.gov/audio-visual/). Theoretically, this combination represents the ideal from a preservation standards perspective: lossless compression lowers file sizes to relatively more manageable levels while utilizing widely adopted, open access formats. The Lossless JPEG2000/MXF (OP1a) combination is gaining traction as major organizations and institutions, such as the Library of Congress, Library and Archives Canada, France’s National Audiovisual Institute, and 20th Century Fox, have adopted it as a long-term preservation standard. Creation and playback of the Lossless JPEG2000/MXF video package is available through the purchase of third-party plugins and in high-end, extremely costly, but truly remarkable preservation systems such as Front Porch Digital’s SAMMA Solo.

However, simple playback of these files is not possible without major customization of a very expensive computer system optimized for video production. At the moment, you cannot view a Lossless JPEG2000 encoded video wrapped in an MXF container by simply double-clicking on the file. Currently, the most common systems (including professional systems) do not support this combination, which creates a major barrier to establishing it as the standard. This presents a potential problem in the development of a standard: normally, standards rely on wide adoption, but in this case the standard, when completed, will be put forth prior to implementation by the major video vendors. Still, there is too much money at stake for the industry to continue much longer without a realistic preservation standard.

My first impulse was to adopt the emerging Lossless JPEG2000/MXF standard as it gained wider adoption and endorsement, even though current access poses a major inconvenience. This logic was based on the thinking that if the Library of Congress and 20th Century Fox have begun adopting Lossless JPEG2000/MXF for digital video preservation, then we clearly have an emerging national standard for the preservation of digital video.

Proprietary video formats pose an obsolescence risk. In a typical workflow that I would design, I would recommend the creation of a more open access preservation version of a proprietary video interview while also continuing to curate the original for as long as possible. The majority of the Nunn Center’s born-digital video content is being captured in high definition. From my strict preservation orientation, I felt strongly, both theoretically and philosophically, that I had gathered enough supporting information to comfortably adopt Lossless JPEG2000/MXF (OP1a) for the Nunn Center’s born-digital workflow; in practice, however, I regrettably found implementation to be untenable at this time.

Digital Video Preservation, Limited Storage, and Declining Budgets

Having invested in professional-grade video technologies for the Nunn Center, we had the latest versions of both Final Cut Pro and Adobe Premiere (both industry standards) installed on very powerful Mac Pro computers optimized for video production. I want to reflect back on the process of preserving the interviews from our oral history project From Combat to Kentucky: Interviews with Student Veterans. The interviews were originally captured using a proprietary camera format and transferred to the archive on an external hard disk drive as Apple ProRes 422 encoded .mov files. For a frame of reference on where this example fits in the Nunn Center’s preservation workflow, see my OHDA essay Born Digital Accession Workflow: The Louie B. Nunn Center for Oral History, University of Kentucky Libraries. For this example, I want to examine the second interview we conducted for the From Combat to Kentucky oral history project, an interview with Nathan Noble.

Original Interview Data File: Technical Metadata

(“Original” here is defined as the data file that was delivered to the archive.)

I have left off the technical details of the audio stream in this particular example.

General

Complete name: 2010OH005_WW359_Noble.mov
Format profile: QuickTime
File size: 157 GiB
Duration: 2h 0mn
Overall bit rate: 186 Mbps

Video

Format: ProRes
Codec ID: apch
Codec ID/Hint: HQ
Bit rate mode: Variable
Bit rate: 185 Mbps
Original width: 1440 pixels
Original height: 1080 pixels
Display aspect ratio: 16:9
Frame rate: 29.970 fps
Color space: YUV
Chroma subsampling: 4:2:2
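
Technical metadata of this kind can be gathered programmatically at accession time. The sketch below assumes the freely available ffprobe tool (distributed with FFmpeg) is installed; it pulls container and stream details into JSON so they can be stored alongside the interview. MediaInfo offers a comparable command-line report.

    # Minimal sketch: gather technical metadata for an accessioned video file with ffprobe
    import json
    import subprocess

    def probe(path):
        # -show_format and -show_streams cover container and codec details
        result = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_format", "-show_streams", path],
            capture_output=True, text=True, check=True)
        return json.loads(result.stdout)

    info = probe("2010OH005_WW359_Noble.mov")
    video = next(s for s in info["streams"] if s["codec_type"] == "video")
    print(video["codec_name"], video["width"], video["height"], video.get("avg_frame_rate"))
    print(int(info["format"]["size"]) / 1e9, "GB")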

By any measurement, Apple ProRes 422 would be considered a proprietary codec at high risk for rapid obsolescence.  Ideally, I would propose the following workflow and strategy:

  • Master: Maintain Original
  • Preservation Version: Lossless JPEG2000/MXF (OP1a)
  • Mezzanine Version: H.264 encoded MPEG-4 at 30 Mbps
  • Access Version: H.264 encoded MPEG-4 at low bitrate

The Master and Preservation versions would be moved straight to the preservation repository while the Mezzanine and Access versions would be more readily available on servers for reference and access.

The Nunn Center has been allocated an amount of server storage that is perfectly well suited to the future growth of a digital audio-based oral history archive. I projected that our collecting efforts and accessions would yield an increasing percentage of high definition video in a variety of codecs, and I produced a realistic growth projection for storage needs based on this assumption. Unfortunately, this projection was created and proposed in the same year as the announcement of major university budget cuts. Needless to say, these projections were perceived by administrators as unrealistic.

The problem with a preservation workflow that involved both maintaining the Master version (the original data file as it was accessioned) and creating a more open access Preservation version using Lossless JPEG2000/MXF was data storage. The total data needing storage for this single interview would now exceed 300 gigabytes; the Nunn Center could store 75 audio-recorded interviews (at 24-bit/96 kHz) in an equivalent amount of space. It was made clear to me that we could not sustain a storage growth rate that would accommodate digital video under this strategy. Time to envision an alternative workflow!

After consulting closely with preservation specialist Kara Van Malssen (see her OHDA essay Video Preservation), I began to incorporate a “mezzanine” level version into our preservation workflow. Our policy declares that the mezzanine version be derived from the original or “Master” version (unless the original arrived in multiple parts, in which case it is derived from a “Submaster” version we create that maintains the original settings and stitches the parts together in sequence). The mezzanine version can use compression, but it is intended to be of high enough quality that it can serve as a source for creating future access versions if necessary. Because of our limited storage space, if the total file size of a born-digital master is under 60 gigabytes, no mezzanine version is created. If the total file size of a born-digital master or submaster interview is over 60 gigabytes, we create a video mezzanine. Additionally, if the original video interview's file type is deemed an immediate obsolescence risk, we create a submaster that retains the Master settings as well as a mezzanine version.
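
The decision logic is simple enough to capture in a few lines. The sketch below is an illustrative restatement of the policy just described, not the Nunn Center's actual tooling; the threshold and flags mirror the rules above.

    # Illustrative sketch of the mezzanine decision policy described above
    MEZZANINE_THRESHOLD_GB = 60

    def plan_derivatives(master_size_gb, multi_part=False, obsolescence_risk=False):
        plan = ["master"]                      # the original is always retained
        if multi_part or obsolescence_risk:
            plan.append("submaster")           # original settings, parts stitched in sequence
        if master_size_gb > MEZZANINE_THRESHOLD_GB or obsolescence_risk:
            plan.append("mezzanine")           # high-bitrate H.264 source for future derivatives
        plan.append("access")                  # low-bitrate H.264 for reference use
        return plan

    print(plan_derivatives(157))                          # ['master', 'mezzanine', 'access']
    print(plan_derivatives(40, obsolescence_risk=True))   # ['master', 'submaster', 'mezzanine', 'access']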

After much testing and comparison, I settled on a high-bitrate MPEG-4 encoded with the H.264 codec. The primary reasons are that H.264 has wide adoption (although it is not technically an open format) and is an incredibly efficient codec: it produces significantly smaller file sizes with minimal visually perceptible loss of information. The true test would not be the evaluation of the Mezzanine version itself, but the evaluation of Access versions derived from it. The primary goal of the Mezzanine version is to be of high enough quality to serve as the source for future derivatives while minimizing the pixelation and artifacts that often emerge when high amounts of compression are applied. Despite the added compression, the quality of the derivative files was impressive. In fact, I was startled by the results.

As a result of this testing, the Nunn Center is currently creating Mezzanine versions of born-digital video utilizing the following workflow:

  • Master: Maintain Original
  • Mezzanine Version: H.264 encoded MPEG-4 at 30 Mbps
  • Access Version: H.264 encoded MPEG-4 at low bitrate

The Master version is moved straight to the preservation repository, while the Mezzanine and Access versions are more readily available on servers for reference and access. The Mezzanine versions are created according to the following guidelines:

Retain the original height, width, aspect ratio, and frame rate, and utilize the following settings:

Wrapper/Container: mp4
Codec type: H.264 (High profile)
Width: 100% of source
Height: 100% of source
Average data rate: 30 Mbps
Multi-pass: On; frame reorder: Off
Estimated size: 13.65 GB per hour of source
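
As a concrete illustration, the sketch below shows how a mezzanine file with roughly these settings could be produced with the free FFmpeg tool. This is not the software the Nunn Center actually uses, and the audio settings and two-pass invocation are assumptions that would need to be checked against local practice.

    # Hedged sketch: a two-pass H.264 mezzanine encode at roughly 30 Mbps using FFmpeg
    import subprocess

    SOURCE = "2010OH005_WW359_Noble.mov"                  # ProRes 422 master delivered to the archive
    MEZZANINE = "2010OH005_WW359_Noble_mezzanine.mp4"

    common = ["-c:v", "libx264", "-profile:v", "high", "-b:v", "30M",
              "-bf", "0"]                                 # -bf 0 disables B-frames (no frame reordering)

    # First pass analyzes the video; second pass writes the mezzanine file.
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *common,
                    "-pass", "1", "-an", "-f", "null", "/dev/null"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE, *common,
                    "-pass", "2", "-c:a", "aac", "-b:a", "256k", MEZZANINE], check=True)

At 30 Mbps the video stream alone works out to roughly 13.5 GB per hour (30,000,000 bits per second × 3,600 seconds ÷ 8), which squares with the estimated size listed above once audio is added.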

The Nunn Center archived the original Master version (157 gigabytes) in our OAIS preservation repository (see Doug Boyd’s OHDA essay The Digital Mortgage: Digital Preservation of Oral History) and housed the Mezzanine version (27 gigabytes) in the preservation repository as well as on our server (along with the access versions). With the adoption of this mezzanine approach, utilizing H.264-based compression to achieve smaller file sizes and balancing risky proprietary codecs with widely adopted ones, I feel that I have constructed a system for preserving digital video that is good enough. I think most archival curators would agree that video preservation is vital to preserving contemporary society and that the demands video places on the archival repository will increase dramatically over time. I will continue to lobby for more storage space, and as storage continues to get cheaper, I will reexamine FADGI’s Lossless JPEG2000/MXF standard. I do believe that it is the ideal approach, but in the face of declining budgets I had to make a difficult, but what I feel was a responsible, choice. For now.

Citation for Article

APA

Boyd, D. A. (2012). Case study: Is perfect the enemy of good enough? Digital video preservation in the age of declining budgets. In D. Boyd, S. Cohen, B. Rakerd, & D. Rehberger (Eds.), Oral history in the digital age. Institute of Museum and Library Services. Retrieved from https://ohda.matrix.msu.edu/2012/06/is-perfect-the-enemy-of-good-enough/.

Chicago

Boyd, Douglas A. “Case Study: Is Perfect the Enemy of Good Enough? Digital Video Preservation in the Age of Declining Budgets,” in Oral History in the Digital Age, edited by Doug Boyd, Steve Cohen, Brad Rakerd, and Dean Rehberger. Washington, D.C.: Institute of Museum and Library Services, 2012, https://ohda.matrix.msu.edu/2012/06/is-perfect-the-enemy-of-good-enough/

 

This is a production of the Oral History in the Digital Age Project (https://ohda.matrix.msu.edu) sponsored by the Institute of Museum and Library Services (IMLS).  Please consult https://ohda.matrix.msu.edu/about/rights/ for information on rights, licensing, and citation.

