*Exclusive* Breakdown With Chris Boyle: ‘The Monster in the Mirror is Absolutely Us.’

What motivated the decision to explore artificial intelligence as the central theme of this anthology series?

Private Island has always embraced learning through exploration. A couple of years ago, with the advent of the first text-to-image technologies, we decided to devote more of our time to exploring machine learning. Much of this resulted in social posts and various bits and pieces that showcased a tool or aesthetic—which was fun but not entirely satisfying. So, we decided to create a short anthology of films, all linked by machine learning and AI. The plan was to start with something almost entirely AI-generated, which resulted in ‘Infinite Diversity in Infinite Combinations’, and conclude with a film that used AI under our specific guidance within our workflow. In much of our work, the method reflects the message, so it made sense to make films about AI using AI.

How does the perspective of AI as a “tool” rather than a “creator” influence your approach to this film?

At Private Island, we firmly believe that AI is a tool; AI has never wanted to make anything. We know what we want the output to be and use AI to facilitate it. I believe that to make something truly yours, that’s the right approach; otherwise, you’re just making an advert for someone else’s tech.

In ‘Meme, Myself & AI’ you reflect on the human condition. In what ways has AI changed or deepened this narrative? What is the intention behind the phrase “confronting the monster in the mirror” in relation to AI’s role in society?

Fundamentally, generative AI is a reflection of us. It’s trained on our data and built to replicate us. Almost all the issues we might perceive with AI are really just a mirror held up to ourselves. If we train these engines with faulty or biased data—be it a trashy subreddit or scraping social media—we shouldn’t be surprised when uncomfortable views are reflected back to us. We demand of AI more than we ask of ourselves. The monster in the mirror is absolutely us.

How do you balance advanced technology with classic storytelling elements to retain a human and ethical tone in your productions? Have you faced moral dilemmas during this series, particularly regarding the use of AI in visual and sound content creation?

Technology is just a means to an end. As filmmakers, we use stories to understand our own feelings and ethical dilemmas. Undoubtedly, AI feels like it’s advancing at light speed, and it can feel like playing with matches simply because it will change how we work, aaaaaaaand that is scary because it means change. It’s a thorny topic, especially when this change has been fueled by work that people may not be happy to share. However, I also know every evolution in art—be it Photoshop, digital cameras, or 3D modelling—was perceived as destructive before ultimately allowing creativity to flourish. I don’t want to sugarcoat it; I think it will bring systemic change on a large scale, with ramifications far beyond the creative industries.

I can’t see the future, and although the camera had to be invented for filmmaking and animation to happen, that was little consolation to the oil painters.

Have your views on AI shifted from the start of this anthology to this final film?

Personally, I think it’s changed from a novelty to a major part of our process. On a wider scale, it’s becoming something that will have major ramifications for our society as a whole. I think we as creators and artists are still shook from thinking our jobs would be the last to change, when in fact they are the first – but artists adapt – it’s what we’ve always done. I don’t know if the same can be said of other industries, and that needs to be a wider conversation sooner rather than later.

What role do ethics and morality play in your decisions to incorporate AI into your work? Where do you draw the line?

Private Island is a small company that carefully selects its projects and how they’re made. It might sound trite, but we generally operate on an “if it feels wrong, we don’t do it” principle. Commercially, this mainly relates to content and clients, ensuring we can protect our artists in front of and behind the camera. We’ve often drawn the line and gently walked away from work that we felt crossed it. Right now, with a lot of AI-based work, it’s a complicated call, but we know that experimenting with these processes on self-funded projects helps inform how we approach commercial work—which is super important to me and Helen Power, who runs PI with me.

How do you hope audiences interpret the use of AI in ‘Meme, Myself & AI’? Are you aiming to provoke ethical and philosophical questions?

I think using AI is a good way to promote discussions about it. I’m somewhat fascinated that, outside of a jump scare in horror movies, some of this synthetic footage creates a sort of uncanny, unnerving feeling as we instinctively know there’s something off about it. It sort of hacks the viewer, and that’s wild. ‘Meme…’ is a relatively personal film that perhaps articulates some of my thoughts on the topic, so I’m wary of putting too much import on a film that is quite ridiculous, but… so am I, and so is the world we live in.

In what sense do you think AI could become a “monster” for creators, or even for society at large?

I think any tool in the wrong hands can become a weapon. I’m not so much worried about AI itself; rather, and I guess this is a takeaway from the film, I’m concerned that a very small number of people in charge of radical technology could be capable of something monstrous.

Do you believe this series encourages a dialogue on the potential risks and benefits of AI in creativity and entertainment? What is your opinion on AI’s impact on the art and film industries? Do you feel it might alter the authenticity of a work? What responsibility do you think creators have when integrating AI into visual and narrative projects?

Authenticity is a complicated term here. If AI stops a wide and diverse range of voices from speaking and being heard, then we will have failed to use it properly – but that’s on us, not AI. In terms of responsibility, I think we should always prioritize the idea first, then execution. It’s a nuanced conversation, but an example of something I recently avoided was a job where all the casting had been done with generative AI, and the client was extremely keen to match it. The problem is that, unless it’s actively compensated for, generative AI often retains racial bias—so the client is not operating in the real world—and that, albeit accidentally, feels to me like an abdication of responsibility we wouldn’t be comfortable with. 

In what ways did you seek to make the visual aesthetic of ‘Meme, Myself & AI’ reflect the deeper implications of technology in our lives?

We aimed for an ever-changing visual gestalt to reflect the onscreen duplicity of the AI voice. By combining live-action footage with synthetic visuals, synthetic audio, and traditional VFX, we created a mixed-media aesthetic that mirrors the complexities and contradictions of technology in our lives.

Have you encountered negative reactions to using AI in your work? How do you interpret and respond to them?

I’m relatively clear-eyed about how my work is perceived. I read all the comments – I probably shouldn’t, but that’s me. When the press called ‘Synthetic Summer’ a “hallucinogenic nightmare from hell,” I took it as a positive—that was how I intended it! I think the vast majority of negative AI commentary is either from a place of fear or in reaction to uninspiring work – I think I can relate to both viewpoints!

How do you hope this series will influence public perception of artificial intelligence?

I want people to realise this isn’t an issue rendered in black and white. Good work can and should be made with this technology, and working with these tools will be a legitimate way of creating sooner or later.

I don’t rotoscope by hand now that there are automated options and I don’t think that inherently diminishes the value of the end product. I work in mixed media and see this as a logical evolution. It’s not a tech showcase; it’s hopefully just a watchable film—and that, in my opinion, is the most important litmus test: is it any good?
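
(For readers outside VFX, a concrete illustration of the kind of automated option meant here: open-source background-removal models can now pull a usable matte from a frame in a few lines. This is a minimal sketch assuming the open-source rembg library and a hypothetical frame file; it shows the category of tool, not the studio’s actual pipeline.)

```python
# A minimal sketch of automated rotoscoping/matting, assuming the
# open-source rembg library; "frame_0001.png" is a hypothetical frame.
from rembg import remove
from PIL import Image

frame = Image.open("frame_0001.png")
matte = remove(frame)  # RGBA image with the background made transparent
matte.save("frame_0001_matte.png")
```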

What role do you think artists and filmmakers will play in shaping the conversation around AI and its ethical impacts on society?

It’s crucial that artists and filmmakers lead that conversation, and not just as advertisements for technology, as that warps the output. We need to be very aware of this, or we’ll revert to some kind of medieval court-sponsored artist circus.

It’s a hard balance to figure out, of course, as almost all creative people have some sort of commercial output—as, for want of a better term, “pure art” rarely pays the bills—so there’s probably some compromise to be made. Honestly, one of the things we’re most proud of with ‘Meme…’ is simply the fact that it exists, which is a testament to the team at PI and the cast and crew who were prepared to make something simply for the sake of creating something different.

How was the conceptualization process for ‘Meme, Myself & AI’ as a mixed-media production? What inspired the integration of machine learning into your workflow?

With this project, we were keen to stay as true as possible to PI’s usual pipeline, just integrating new tools. Our interest in machine learning naturally led us to incorporate it into our workflow to explore new creative possibilities.

What role did 2D and 3D software play, and how did they contribute to the final aesthetic?

For the past few years, PI has used Blender and Cinema 4D for 3D work, with 2D and compositing done in After Effects. ‘Meme…’ is very mixed media, so there’s a real back-and-forth between these tools and our Comfy setup. The aesthetic is my personal preference for an ever-changing gestalt, reflecting the onscreen duplicity of the AI voice. PI started by making quirky internet spots for Nike almost a decade ago, and we feel much of contemporary culture is often visually compressed into glitches, emojis, and flashy transitions. We were eager to showcase new and interesting looks in the film that we believe are forward-looking.

What led you to use an internal ComfyUI setup, and how did it integrate with SD 1.5 and Flux?

There are many ways to work with open-source Stable Diffusion pipelines right now. It’s a bit of a wild west, and we love that—it’s always changing and evolving. Having the latest thing isn’t as important as having a tool you can rely on and get results from. That said, advancements can provide better results, which we found when moving from SD 1.5 to Flux.
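
(For readers who don’t work in these pipelines, here is a rough sense of what “moving from SD 1.5 to Flux” looks like in practice. This is a minimal sketch against the open-source diffusers library rather than Private Island’s actual Comfy graphs; the model IDs are the standard public checkpoints and the prompt is illustrative. The point is that the newer model occupies the same conceptual slot, so much of the upgrade is a checkpoint swap.)

```python
# A hedged sketch of swapping an SD 1.5 checkpoint for Flux using the
# diffusers library; model IDs are public checkpoints, prompt illustrative.
import torch
from diffusers import StableDiffusionPipeline, FluxPipeline

prompt = "glitchy mixed-media portrait, VHS artefacts, studio lighting"

# The SD 1.5-era setup: fast, well understood, huge checkpoint ecosystem.
sd15 = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image_v1 = sd15(prompt, num_inference_steps=30).images[0]

# The Flux-era setup: same slot in the pipeline, higher fidelity output.
flux = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
image_v2 = flux(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
```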

Can you share how Runway and Dream Machine were used in the production? What roles did these programs play in generating both stills and animation?

While much of the work was done on our own setup, some off-the-shelf options have the research and development behind them to push them ahead of what we—and the open-source community—can achieve. So, where appropriate, we integrated them into our pipeline. Runway has been a repository of excellent machine learning tools. Luma, which we’ve used for a long time in our photogrammetry work, has recently excelled in video synthesis with Dream Machine, making it useful for some of our transitions.

What was the importance of initial moodboarding and pre-visualization in Midjourney? How did they help set the lighting schemes?

We pre-visualized this film like any project at PI. Generative AI allows us to create style frames and moodboards for shooting. We love Midjourney because our team can experiment to create looks that can then be refined in Comfy. These visuals help us dial in the lighting and aesthetics during live-action shooting.

Why did you decide to train the Comfy LoRAs on the cast, and how did this affect the film’s visual appearance?

Having a LoRA of a specific person allows greater fidelity when replicating them within the Comfy pipeline. While this workflow is already evolving and becoming simplified, it enabled the creation of generative AI doubles and VFX takeovers within the shots.
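
(To make the mechanics concrete: a LoRA is a small adapter trained on reference images of one person, loaded on top of the base model at generation time. Below is a minimal sketch using diffusers as a stand-in for the studio’s Comfy setup; the adapter file and trigger token are hypothetical.)

```python
# A hedged sketch of loading a per-performer LoRA; the adapter file and
# the trigger token "castmember01" are hypothetical stand-ins.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The LoRA biases the base model toward one cast member's likeness,
# which is what makes consistent AI doubles possible shot to shot.
pipe.load_lora_weights("./loras", weight_name="cast_member_01.safetensors")

image = pipe(
    "photo of castmember01 person on a CRT-lit set, cinematic",
    num_inference_steps=30,
    cross_attention_kwargs={"scale": 0.9},  # dial LoRA influence up or down
).images[0]
image.save("ai_double_test.png")
```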

What challenges did you face in tasks like lip-sync, matte painting, and VFX for live-action scenes?

These tasks were complex. Often, we’re working with tools that are still in development, so initial results can be suboptimal and inconsistent. It’s both amazing and frustrating. Generative AI often feels like a slot machine, requiring a lot of patience and fine-tuning.

How did continuous technological advancements impact your work? Were there moments when this necessitated a significant change of tools?

Absolutely. I believe that nothing is ever truly finished, only abandoned. It’s tricky when, after months of brute-forcing a solution, a better one emerges overnight. For instance, if we were to do the lip-sync now, it could—and should—be better. But I guess most people won’t notice!

How did the team benefit from the ability to share Comfy configurations across the studio? What effect did this have on collaboration?

We’re at an interesting point where many generative AI tools are becoming usable in a production environment. The existence of a Comfy.exe is proof of that. Before that, the reason we switched from Automatic1111 to Comfy was the ease of sharing setups. In 2D and 3D work, sharing configurations is standard, but with generative AI, it’s been less straightforward until now.
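
(The portability described here comes from the fact that a Comfy workflow serializes to a plain JSON node graph, so sharing a setup is just sending a file. A minimal sketch of queueing a shared graph, assuming a default local ComfyUI instance on port 8188 and a hypothetical exported workflow file.)

```python
# A minimal sketch of sharing a Comfy setup: the workflow is plain JSON,
# queued here against a default local ComfyUI instance (port 8188).
# "meme_takeover_workflow.json" is a hypothetical exported graph.
import json
import urllib.request

with open("meme_takeover_workflow.json") as f:
    workflow = json.load(f)  # the node graph a colleague sent over

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # ComfyUI returns an ID for the queued job
```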

How did you balance the use of AI tools and manual intervention to achieve a consistent aesthetic?

At PI, we’re small enough that everyone gets involved in the hands-on work, myself included. We all know that most projects involve meticulous tweaking. ‘Meme…’ required a significant amount of detailed work to maintain a consistent look, given its mixed-media aesthetic and the use of relatively untested software. As a studio, we generally deliver shorter pieces—30 seconds to two minutes—so producing a six-and-a-half-minute film was a significant undertaking for us.

Final question: Are we at the beginning of a new narrative and aesthetic?

I think we’re always at the beginning of a new aesthetic, but the narrative essentially stays the same. I’m super excited about what the addition of generative AI will unlock in our work. It’s an exciting time at Private Island because we’ve spent a couple of years exploring this technology, and now we’re finally executing in a way that feels fun and controllable. We’re eager to find commercial partners who want to explore with us, but ultimately, we’re most interested in doing what we’ve always done: making unusual and unconventional work, whatever the medium.

 

Social Media:

Instagram: @privateislandtv
LinkedIn: privateisland
Web: www.privateisland.tv