Commentary by Brian J. Pohl.

If ever there was a deterrent to the creative process, it would be dealing with mindless repetition and predictable patterns. Creativity by its very nature has a tricky little habit of jumping past all the tedium and minutiae in order to express itself. This habit also holds true for film directors who make creative decisions without completely understanding the technical ramifications of their choices.

As an industry previs professional, I’ve spent most of my career on the cutting edge of technology. The goal of any good previs supervisor is to translate the director’s vision into something tangible before he or she heads to the stage or goes to shoot on location. In essence, the entire previs process is the early human forerunner of AI for the director and his creative team.

Since time is money, the previs process is intended to prototype the director’s conceptual ideas as quickly as possible. This usually means deploying a multi-person previs team armed with a wide range of tools designed to speed up shot and sequence creation through iterative passes with the director. Currently, the director relies upon advancing technology to improve the speed of trying something new: faster computers, powerful GPUs, mocap, game engines and, if necessary, more artists. The previs team can usually keep pace with the director’s ideas and shooting schedule provided they’re on top of their game and nothing catastrophic happens. However, new demands are being placed on the stage floor with the introduction of in-camera visual effects (ICVFX), as pioneered on The Mandalorian. This particular shooting methodology requires what is normally constructed during post-production to be pushed forward into the production and pre-production phases of the schedule. That places a heavy demand on the previs team and the virtual art department (VAD) to supply near-final or fully camera-ready assets on the day of the shoot, all without breaking workflow and pipeline continuity. On very complex VFX-heavy projects, simply throwing more people at the problem becomes counterproductive and cost-prohibitive. Enter the promise of AI.

From the simplest perspective, animation and VFX production is laden with repetitive tasks and the need for creative variability. Each silo of the animation process, from story to rendering, would benefit from a technology that anticipates recognizable patterns in order to supply the artist with more creative options, especially while under the direct, interactive supervision of the director.

To start things off in the story and art departments, AI tools like ChatGPT can now ingest entire scripts for interactive analysis with the writing team. Once a script moves past its initial draft, tools like Krea can interpret an artist’s rough storyboard sketches and concept drawings and update them with more realistic visuals of their ideas. If the AI model is trained on a company’s own artistic talent base and its intellectual property, a preservation of style can be maintained without “borrowing” from artistic sources outside the firm. This should help address some, but not all, of the legal issues being raised by AI. Conceptual teams can then layer in text-to-video and image-to-video tools such as Pika Labs, Runway Gen-2 or Google’s recently released Lumiere, giving artists the ability to rapidly construct pitchvis content for studio executives deciding whether to give a project the green light.

Right now, this is where AI can shine brightest. Rapid prototyping of a concept is critical for getting things underway. Animation projects can spend months trying to lock down a set of storyboards, whereas live-action film production only has weeks. Any attempt to improve the speed of this process should come as a welcome advantage, provided the copyright issues can be fully resolved.
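As a rough illustration of the script-analysis step described above, here is a minimal sketch that sends a draft script to a large language model and asks for a scene-by-scene breakdown a previs team could react to. It assumes the OpenAI Python SDK, an API key in the environment, and a hypothetical draft_script.txt; the prompt and model choice are placeholders, not a production pipeline.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_scene_breakdown(script_text: str) -> str:
    """Ask the model for a scene-by-scene breakdown a previs team could react to."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a previs coordinator. Break the script into scenes, "
                        "listing location, time of day, characters and key action."},
            {"role": "user", "content": script_text},
        ],
    )
    return response.choices[0].message.content


# Hypothetical input file for illustration only.
with open("draft_script.txt") as f:
    print(summarize_scene_breakdown(f.read()))
```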

Moving on to modeling, we could see procedural construction tools leveraging AI’s ability to source thousands of make and model options from various manufacturers to give the artist a base geometry to work from. Anyone working in animation will tell you that the modeling phase of a project is the most time-consuming. This is particularly true for designs that have no existing frame of reference and must be created fully from scratch. The ability to vary geometry and materials at the mere suggestion of what the user wants to see is appealing. How many times have you heard the director or production designer say, “show me something I’ve never seen before” or “make it more metallic, aged, and add some patina”? With AI, this becomes a real possibility.

Rigging is up next, and if any silo could use the help of AI, it’s this one. Rigging demands precision, probably more than any other department, and failing to achieve it can produce less-than-optimal animation results, particularly when it comes to creature animation. AI tools can analyze a potential character model and provide a rig based on thousands of human or creature forms. Imagine current auto-rigging tools on steroids, capable of suggesting the right set of deformations to improve facial animation or making the underlying muscle structure and its IK rig more animal-like if needed.

When it comes to layout, artists are tasked with working alongside the modeling team on staging and character-to-camera blocking for every potential sequence in a movie. With AI, a layout artist could take the existing models from the film and place them into potential arrangements based on a verbal description. From there, AI’s analysis of the script could suggest camera placement based on dialog while also making technical determinations about camera position, sensor size and lens selection.
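To make the lens-and-sensor side of that concrete, the sketch below shows the kind of arithmetic such a system would be automating: the horizontal field of view implied by a sensor width and focal length, and the camera distance needed to frame a subject of a given width. The sensor size, lens choices and subject width here are illustrative numbers, not values from any particular show.

```python
import math


def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view for a given sensor width and lens focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))


def camera_distance_m(subject_width_m: float, sensor_width_mm: float,
                      focal_length_mm: float) -> float:
    """Distance needed to fit a subject of a given width inside the frame."""
    fov = math.radians(horizontal_fov_deg(sensor_width_mm, focal_length_mm))
    return subject_width_m / (2 * math.tan(fov / 2))


# Example: framing a 5 m wide set piece on a full-frame (36 mm) sensor,
# comparing a wide 24 mm lens against a 50 mm lens.
for lens in (24, 50):
    print(f"{lens} mm lens: FOV {horizontal_fov_deg(36, lens):.1f} deg, "
          f"camera at {camera_distance_m(5, 36, lens):.2f} m")
```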

Moving on to animation, we arrive at what is likely the most controversial application of AI, since animation is directly tied to an actor’s performance. AI could act as an assistant, cleaning up mocap data or analyzing specific keyframes placed by an animator to construct in-betweens with all of the associated secondary animation and physics necessary for more realistic movement and behaviour. The industry has already started experimenting with this type of technology through the application Cascadeur.
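For readers unfamiliar with in-betweening, the sketch below shows the classical version of the problem an AI assistant would be taking over: given sparse keyframes on an animation channel, fill in the frames between them. A simple ease-in/ease-out interpolation stands in here for the learned motion model; the keyframe values are hypothetical.

```python
def ease_in_out(t: float) -> float:
    """Smoothstep easing so motion accelerates and decelerates naturally."""
    return t * t * (3 - 2 * t)


def inbetween(keyframes: list[tuple[int, float]], frame: int) -> float:
    """Interpolate a channel value at an arbitrary frame from sparse keyframes."""
    keyframes = sorted(keyframes)
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * ease_in_out(t)
    # Clamp to the nearest key outside the keyed range.
    return keyframes[0][1] if frame < keyframes[0][0] else keyframes[-1][1]


# Two hypothetical keys on a rotation channel; generate some of the in-betweens.
keys = [(1, 0.0), (24, 90.0)]
print([round(inbetween(keys, f), 1) for f in range(1, 25, 4)])
```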

Finally, there’s lighting and rendering. This is where the industry has seen the most use of generative AI to alter the artistic style of video content. By leveraging an AI model to analyze rendered footage, modifications to the visual style can be performed without the need to go back into the 3D pipeline. When combining this with live-action content, we can see an eventual merging of virtual and live-action footage, to the point where they coalesce into something profoundly new.

Now, as rosy as all of this sounds, there is a huge caveat to this progress. Many in the industry are asking, “why hire artists if AI can do it all?” We’re seeing record layoffs, studio after studio, and industry studies are showing some disturbing trends. The reality is that AI is destined to impact a wide range of professions, and it’s not going to be pretty. But as with all technological advancement, the democratization of a technology always results in the displacement of one set of workers in exchange for another. The question we must ask ourselves is: how do we reinvent ourselves with this technology in order to remain viable? The 2D animators of mere decades ago went through a similar transitional “purge” when 3D animation took the throne. Many made the switch, but others decided to bow out and move to a different profession. But that seemed different. At least it was human artists taking other artists’ jobs. AI just seems so faceless, corporate and impersonal. It can be very depressing.

So what’s an artist or studio to do? The first thing to realize is that, despite the speed of these advancements, the deployment of these tools into the actual working industry can be relatively slow, whatever the surveys say. Whether that’s due to cinematic purists refusing to use the technology or to the political, legal and union ramifications this tech will have on the job market, now is the time to begin learning what this is all about, before it completely overtakes current workflows. Get an AI training program in place, familiarize yourself and your company’s artistic teams with all of the new tools, and stop denying progress or thinking this won’t affect you somehow. It will.

Next, strengthen your human connections and become more visible and proactive. Join industry guilds and organizations, look into the possibility of unionizing if it makes sense for your company, or form alliances with your peers to help one another find work, train each other and offer necessary motivational encouragement. It’s easy to envision AI doing everything from scriptwriting to final pixel, but there is still the question of whether or not the market will embrace AI’s final product. AI could become the “processed food” equivalent of entertainment. Edible, but not necessarily palatable or nourishing. Be a gourmet chef of pixels instead and allow the human connection to drive your work.

Finally, plan for the future and diversify. The film and animation industry will unquestionably evolve with AI, whether it’s regulated or not. Specialists, take a hint from your generalist coworkers and learn to do more things. Generalists, don’t spread your skills so thin that nobody wants to hire you. And lastly, supervisors and executives: there’s too much money to be lost to simply ignore AI, so consider reorganizing your company structure to accommodate it before it overwhelms you, but don’t forget your workforce in the process. If done successfully, AI will become the tool it’s intended to be, something controlled by human artists looking to leverage its unquestionable benefits.

Author: Brian J. Pohl

Brian Pohl has a career spanning more than 20 years in roles ranging from motion graphics artist to layout technical director and senior previs supervisor. Most recently, he led the Unreal Fellowship at Epic Games as Technical Program Manager.
