IT optimization the Lucasfilm way

How the company that created “Star Wars” and “Indiana Jones” wrings the most out of its 10G Ethernet network and 4,000-plus servers.

Lucasfilm is the creative force behind a host of special-effects-laden motion pictures, including the "Star Wars," "Indiana Jones" and "Pirates of the Caribbean" series. The firm has six divisions in addition to the parent company: Industrial Light & Magic, the special effects group; LucasArts Entertainment, the gaming division; Lucasfilm Animation; Skywalker Sound; Lucas Licensing; and Lucas Online. The company operates from three locations in the San Francisco area, plus the Lucasfilm Animation facility in Singapore.

A staff of 57 IT professionals provides network and IT services for the company, which numbers about 1,200 employees. As you might expect, the demands on that IT group are significant, given the computing horsepower it takes to enable the likes of Johnny Depp to ward off sea creatures with creepy, octopus-like heads.

Kevin Clark, director of IT operations for Lucasfilm, and Peter Hricak, senior manager for network and telecommunications, explain how, even with a server farm of more than 4,000 machines and a WAN with 10Gbps links, there are still ways to optimize.

Can you describe your network setup?

Peter Hricak: For our campus networks we have three network cores, each based on a pair of 10G Ethernet chassis-based routers with a total of 128 10G ports. Desktops are typically linked at 1G to edge switches, which connect to the building distribution cores over two 10G interconnects. The building distribution cores then aggregate to the network core with four 10G interconnects each. Storage is directly connected at 10G; we try to get as fast a path to the storage as we can.
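For readers who want a feel for what those aggregation ratios imply, here is a back-of-the-envelope sketch. The per-tier port counts are illustrative assumptions, not Lucasfilm's actual configuration:

```python
# Back-of-the-envelope oversubscription math for the topology described
# above. Port counts per tier are illustrative assumptions, not actual
# Lucasfilm figures.

def oversubscription(downstream_gbps: float, upstream_gbps: float) -> float:
    """Ratio of offered downstream bandwidth to uplink capacity."""
    return downstream_gbps / upstream_gbps

# Edge switch: assume 48 desktop ports at 1G, uplinked over 2 x 10G.
edge = oversubscription(48 * 1, 2 * 10)

# Building distribution: assume 8 edge switches at 2 x 10G each,
# aggregated toward the network core over 4 x 10G.
distribution = oversubscription(8 * 2 * 10, 4 * 10)

print(f"edge oversubscription:         {edge:.1f}:1")          # 2.4:1
print(f"distribution oversubscription: {distribution:.1f}:1")  # 4.0:1
```

Moderate oversubscription at the desktop tiers is routine; putting storage directly on 10G, as Hricak describes, keeps the most bandwidth-hungry path free of those ratios.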

On the WAN, we have two OC-3s connecting our campuses in the Bay Area and another to Singapore. We also have 10G dark fiber between two of our Bay Area campuses, as well as a 10G dark fiber line to a telco hotel in downtown San Francisco.

What kinds of traffic are going back and forth, especially over the wide area?

PH: The essence of the traffic is the work in progress that's being transferred and worked on by artists on a day-to-day basis. These are generally large image files and movies. We do frame-accurate Motion JPEG on our transmissions, so they're not very compressed. They are rendered at night by a render farm for ILM, then reviewed the next day, and more changes are made and the cycle starts again.
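To get a sense of why lightly compressed, frame-accurate Motion JPEG is so demanding on the WAN, here is a rough sizing sketch. The resolution, bit depth, frame rate, and compression ratio are assumptions for illustration, not Lucasfilm's actual settings:

```python
# Rough sizing of a frame-accurate Motion JPEG review stream. Resolution,
# bit depth, frame rate, and compression ratio are illustrative assumptions,
# not Lucasfilm's actual settings.

width, height = 2048, 1556   # assume a 2K full-aperture film frame
bytes_per_pixel = 3          # assume 8-bit RGB
fps = 24                     # standard film frame rate
jpeg_ratio = 10              # assume mild ~10:1 JPEG compression

raw_frame = width * height * bytes_per_pixel      # bytes per frame
mjpeg_frame = raw_frame / jpeg_ratio
stream_mbps = mjpeg_frame * fps * 8 / 1e6         # megabits per second

print(f"raw frame:   {raw_frame / 1e6:.1f} MB")    # ~9.6 MB
print(f"MJPEG frame: {mjpeg_frame / 1e6:.2f} MB")  # ~0.96 MB
print(f"stream rate: {stream_mbps:.0f} Mbps")      # ~184 Mbps
```

Under those assumptions a single review stream already outruns a 155Mbps OC-3, which helps explain the 10G dark-fiber links between the Bay Area campuses.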

What does the render farm consist of?

Kevin Clark: We've got approximately 4,300 processors available within the data center. We use a distributed rendering model, so we've got a core of varying generations of systems within our data center, primarily dual-core, dual [AMD] Opteron blades with up to 16GB of memory on board. We also use available workstations that are out on the floor [such as after artists log off for the night]. Those are typically single-core or dual-core, dual-Opteron HP workstations. So the render farm in total comprises about 5,500 processors.
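Harvesting idle workstation cycles this way is a classic cycle-scavenging pattern. The sketch below shows the general idea with a hypothetical host and job model; it is not ILM's actual dispatch system:

```python
# Minimal sketch of a cycle-scavenging render scheduler: dedicated blades
# always accept work, floor workstations join the pool only when idle.
# The Host/job model here is hypothetical, not ILM's actual scheduler.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cores: int
    dedicated: bool    # True for data-center blades
    idle: bool = True  # floor workstations flip this during the workday

def eligible(host: Host) -> bool:
    return host.dedicated or host.idle

def dispatch(frames: list[int], hosts: list[Host]) -> dict[str, list[int]]:
    """Round-robin frames across every core of every eligible host."""
    slots = [h.name for h in hosts if eligible(h) for _ in range(h.cores)]
    assignment: dict[str, list[int]] = {name: [] for name in slots}
    for i, frame in enumerate(frames):
        assignment[slots[i % len(slots)]].append(frame)
    return assignment

hosts = [
    Host("blade-01", cores=4, dedicated=True),
    Host("ws-artist-07", cores=2, dedicated=False, idle=False),  # in use
    Host("ws-artist-12", cores=2, dedicated=False, idle=True),   # scavenged
]
print(dispatch(list(range(10)), hosts))  # ws-artist-07 gets no frames
```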

How does the rendering process work?

PH: We take models and textures and, through mathematical equations (sometimes in off-the-shelf software, sometimes in our own), we render the final images. On the more difficult effects like water, what goes in is textures and some general physics equations, and what comes out is a two-minute sequence of a boat being [swamped] by a wave.
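At its simplest, that "mathematical equations" step is a shading model evaluated per pixel. The snippet below shows the textbook Lambertian diffuse equation, intensity = albedo x max(0, N.L); this is generic graphics math, not ILM's proprietary pipeline:

```python
# Minimal flavor of "textures in, images out": Lambertian diffuse shading.
# Generic textbook graphics math, not ILM's proprietary renderers.
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def shade(albedo, normal, light_dir):
    """Diffuse intensity for one pixel: texture color scaled by N.L."""
    n, l = normalize(normal), normalize(light_dir)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * ndotl for c in albedo)

# One pixel: reddish texture sample on a surface tilted toward the light.
pixel = shade(albedo=(0.9, 0.3, 0.2), normal=(0, 1, 1), light_dir=(0, 0, 1))
print(tuple(round(c, 3) for c in pixel))  # (0.636, 0.212, 0.141)
```

Production water effects layer fluid-dynamics simulation on top of shading like this, which is one reason a two-minute wave sequence can occupy the farm for a night.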
