Digital Domain Productions Inc. built a private network-attached storage (NAS) cloud using NetApp Inc. NAS filers and Avere Systems Inc. acceleration appliances that brought the company significant savings and improved the efficiency of its artists' rendering process, according to one of its systems engineers.
Digital Domain was founded by director James Cameron and two partners in 1993, and has created digital visuals for Titanic, The Curious Case of Benjamin Button and the Transformers trilogy, among other movies.
Digital Domain has approximately 250 digital artists in its Vancouver, B.C., and Venice, Calif., offices and another 30 located in San Francisco. Much of their work involves rendering -- digitally generating an image from files containing movie scenes -- for 3-D movies.
Each site has NetApp FAS6070 or FAS6080 file storage, plus Avere FXT 2550 NAS accelerator appliances to help with the IOPS-intensive work. Because multiple computers read the same data, render nodes at different locations can pull files from the Avere cache across the wide-area network (WAN) instead of hitting the filers directly.
Mike Thompson, Digital Domain’s senior systems engineer, said the movie files must be rendered at about 24 frames per second. With 3-D movies, the frames must be rendered twice -- once for each eye.
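The "rendered twice" point compounds quickly. A back-of-the-envelope calculation makes it concrete; the 2-hour runtime here is an assumed figure for illustration, not from the article:

```python
# Frame count for a stereoscopic (3-D) feature at theatrical frame rate.
FPS = 24            # standard theatrical frame rate, per Thompson
RUNTIME_MIN = 120   # assumed 2-hour runtime (illustrative, not from the article)
EYES = 2            # 3-D: every frame is rendered once per eye

frames = FPS * RUNTIME_MIN * 60 * EYES
print(frames)  # 345600 frames to render
```

Each of those frames is itself built from many scene files read off the filers, which is why the NFS I/O load on the storage is so heavy.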
“Artists submit their jobs to render farms, sending a lot of NFS I/O to the filers,” he said. “The end result is a picture that gets spit back to the filers, and all those pictures get assembled together for shots for the movie.”
He said he began using Avere with the NAS cloud to improve that IOPS performance.
“We initially started using Avere to take a lot of heat off the filers,” Thompson said. “The FXT nodes acted like a shield or screen in front of the filers. When we drop these boxes in front of the render farms, the load drops on the filers. That increases their longevity.”
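The "shield" effect Thompson describes is essentially read-through caching. A minimal sketch (not Avere's actual implementation; the file path and data are made up) shows why a cache in front of the filers absorbs the render farm's repeated reads:

```python
# Minimal read-through cache sketch: the cache sits in front of a slow
# backing store (the filer) the way the FXT nodes sit in front of NetApp.
class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # dict standing in for the filer
        self.cache = {}
        self.backend_reads = 0         # how many reads reached the filer

    def read(self, path):
        if path not in self.cache:     # miss: fetch from the filer once
            self.backend_reads += 1
            self.cache[path] = self.backing[path]
        return self.cache[path]        # hit: served from the cache

filer = {"/scenes/shot42.exr": b"frame-data"}   # hypothetical file
cache = ReadThroughCache(filer)
for _ in range(1000):                  # 1,000 render nodes read the same file
    cache.read("/scenes/shot42.exr")
print(cache.backend_reads)  # 1 -> the filer saw one read instead of 1,000
```

Every hit served from the cache is a read the filer never sees, which is the load reduction and longevity benefit Thompson describes.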
Thompson said Digital Domain was outgrowing its small Los Angeles data center, so in mid-2010 it moved the render farms off-site to a colocation facility. The production house contracted with Switch Communications' SuperNAP mega data center in Las Vegas. That data center holds Digital Domain storage and render nodes, and is connected over the WAN to the other offices in a private cloud setup. Using the colocation facility allowed Digital Domain to close its Los Angeles data center without having to build a new one, which Thompson said probably would have cost more than $1 million.
“You can get data center space for a lot cheaper elsewhere instead of building it yourself,” Thompson said. “We run lines out to remote data centers for floor space and cooling for a lot less money. We put a render farm and an Avere system [in Las Vegas] as well. Now we can get away with a lot less network infrastructure. We used to have to run dark fiber between our sites, but now use a gigabit line that’s much cheaper. The result was probably millions of dollars in savings.”
Thompson said Avere’s caching cuts down on the data that goes across the WAN, eliminating the need for more bandwidth or separate WAN optimization devices.
Each Avere FXT 2550 contains 72 GB of DRAM, 2 GB of NVRAM and 3.6 TB of SAS drives. The systems automatically tier data, keeping the most active data in DRAM to minimize latency. Thompson estimates that using Avere reduces read latency from approximately 25 milliseconds to 0.1 millisecond. That makes it possible to move data across the WAN without WAN optimization or dark fiber, even though several of Digital Domain's sites hold hundreds of terabytes of data on NetApp filers.
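Thompson's latency figures are worth running through. Using integer microseconds to keep the arithmetic exact, and an assumed workload of 10,000 sequential reads per render job (an illustrative number, not from the article):

```python
# Cumulative wait time implied by Thompson's latency estimates.
READS = 10_000       # assumed sequential reads per job (illustrative)
WAN_US = 25_000      # ~25 ms per read going over the WAN to the filer
CACHE_US = 100       # ~0.1 ms per read served from the Avere cache

print(READS * WAN_US // 1_000_000)    # 250 -> seconds spent waiting on the filer
print(READS * CACHE_US // 1_000_000)  # 1   -> seconds waiting on the cache
```

A 250x reduction in per-read latency is what lets the render nodes tolerate a cheap gigabit WAN link instead of dark fiber or dedicated WAN optimization gear.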
“Avere is caching a lot of data and absorbing the load that would go across the network and hit the filers,” he said. “It’s taking the I/O load off and reducing network traffic that we’d have to move across the LAN [local-area network].”