Evolving at speed
In October 2022, just as we were all starting to test out generative AI and LLMs, and getting the first notions of a true shift in the world looming just on the horizon, I wrote a blog post about AI-generated art with the subtitle: “AI art is disruptive as hell. How can we use it in our creative processes? And should we?”
At the time, Midjourney and Stable Diffusion were producing images that were impressive but clearly synthetic. The hands, the hallucinations… it all invited ridicule as much as awe and wonder. Three and a half years later, the question is no longer whether AI can produce compelling creative material. It absolutely can. The question is what that means for those of us who make a living, and find meaning, perhaps even a sense of self-worth, in creative work.
The acceleration has been staggering, even for those of us who’ve lived through the advent of the Internet, the dot-com boom, the early days of mobile and streaming, and so on. In early 2023, I wrote in this blog about AI tools that could help with mood boards, pitch decks and many other things (mirroring what I was experimenting with at the time), noting that only creativity and time set any limits for anyone with internet access and basic app knowledge. By 2024, I was running workshops and trials in which journalism students could not reliably distinguish AI-generated images from authentic photographs – and these were people being trained to work in and with media. The wider audience was invited to identify the AI images themselves, and although they knew some of the images were AI-generated, they got on average only thirteen of eighteen right. So professionals struggle, and the general audience even more. And that was two years ago – today’s generative AI is light years ahead of those ancient models.
This is what I mean by the precipice. We are at a point where the democratisation of content creation – be it video, music, imagery, text or something else – is flooding the market with material that is increasingly indistinguishable from human-created work. The competitive advantage that traditional media held over the individual creators of yesterday, an advantage attained through production quality and craft, is eroding as millions of people gain access to tools that were, until recently, only truly accessible to larger companies and organisations with big coffers.
What AI actually does well for creatives (at least right now)
If we try to look beyond hype as well as fear, here are some of the things that generative AI reliably delivers for creative professionals in early 2026:
We can rapidly generate imagery – concept art, mood boards, and pitch visuals – that quickly and effectively communicates creative intent to audiences and other stakeholders. This is not necessarily about replacing concept artists; it is about enabling pre-visualisation at a stage where no budget for concept artists exists yet. In a talk I gave this week about AI, for DCampus and their documentary filmmakers, someone mentioned using AI to create cartoon-ish images for illustration purposes. I suggested working with a graphic artist who makes use of AI’s possibilities, and I would suggest the same for concept art and the like.
We can quickly get feedback on narratives and stories, dissecting story structures, character arcs, and plot variations. AI can generate dozens of narrative variations that a writer can evaluate and select from, compressing a process that can otherwise take weeks of solo brainstorming. However, a talk by Danish filmmaker Jeppe Lange, which I had the pleasure of listening to at the Tromsø International Film Festival in January, put this into perspective for me. An LLM is still simply looking for which word most likely comes next in a sentence. Bolting human-style randomness and out-of-the-box thinking onto an LLM just introduces chaos and leads to totally incoherent content.
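To make Lange’s point concrete, here is a toy sketch of that “which word most likely comes next” mechanic. The probability table is entirely invented for illustration; a real LLM learns such distributions over an enormous vocabulary, conditioned on the whole preceding context, not just one word.

```python
import random

# Made-up next-word probabilities (a stand-in for a trained model).
NEXT_WORD_PROBS = {
    "the": {"camera": 0.5, "story": 0.3, "audience": 0.2},
    "story": {"unfolds": 0.6, "ends": 0.4},
}

def most_likely_next(word):
    """Greedy decoding: always pick the single most probable next word."""
    probs = NEXT_WORD_PROBS.get(word)
    return max(probs, key=probs.get) if probs else None

def sample_next(word, rng=None):
    """Sampled decoding: pick a next word in proportion to its probability.
    This is where the 'randomness' lives - push it too far and the output
    stops cohering, which is exactly the chaos Lange warned about."""
    probs = NEXT_WORD_PROBS.get(word)
    if not probs:
        return None
    rng = rng or random.Random(0)
    words, weights = zip(*probs.items())
    return rng.choices(words, weights=weights, k=1)[0]
```

Greedy decoding is predictable but bland; sampling adds variety at the cost of control. Neither step involves any understanding of why one continuation is more meaningful than another.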
That said, one of the things AI absolutely can help with – if provided with the right background material and prompted in the right way – is monitoring large narrative systems for consistency. I believe this will become particularly valuable in transmedia projects where storylines span multiple platforms and timelines.
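As a toy illustration of what such consistency monitoring could look like (every name and fact below is invented), imagine a “story bible” of canonical facts, checked against facts extracted from each platform’s content. In practice an LLM would do the extraction step; here the per-platform facts are hand-written.

```python
# Hypothetical story bible: canonical facts about the story world.
CANON = {
    "protagonist_eye_colour": "green",
    "setting_city": "Tromsø",
    "timeline_year": 1939,
}

def find_contradictions(platform_facts):
    """Return (fact, canonical value, platform value) for every mismatch."""
    return [
        (key, CANON[key], value)
        for key, value in platform_facts.items()
        if key in CANON and CANON[key] != value
    ]

# A (fictional) podcast episode drifts from canon on one fact:
podcast_facts = {"setting_city": "Tromsø", "protagonist_eye_colour": "blue"}
```

Running `find_contradictions(podcast_facts)` would flag the eye-colour mismatch for a human editor to resolve. The AI’s role here is bookkeeping at scale; deciding which facts are canon, and what to do about a contradiction, remains editorial work.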
We should also list archive restoration, as it is nowadays very much an AI-based activity. Image and film upscaling, colourising, and enhancing archival footage and photography… I know I, and many other documentary filmmakers, have already benefited enormously from this. Of course, possible hallucinations can still lead to, for instance, me sitting for hours trying to find out whether the ribbon of the discontinued Hungarian medal an officer is wearing in an image from the late 1930s is red – as the AI suggested – or some other colour (spoiler alert: it was actually green!).
AI is also a godsend when it comes to transcribing and understanding natural language. We’re talking everything from transcribing filmed material in Resolve or Premiere to real-time dubbing and subtitling that makes content accessible across language barriers. This could ultimately open up global distribution for work that previously had only regional reach.
There are also a number of companies offering things like sentiment analysis and engagement pattern recognition that help creators understand how audiences respond (or could respond) to different narrative elements across platforms. There are companies that will, from your script or trailer, confidently predict box office success and audience uptake, and so on.
What AI actually does not do (at least not yet, and maybe never)
As much as it is great to know what AI can do, it is equally important to be clear about what it does not deliver. AI is really good at coming up with variations of an idea or a story, but it is still in the dark when it comes to relating those stories to an audience. It cannot independently determine which narrative constraints and boundaries actually protect consistency in the eyes of the audience, allowing them to feel confident about investing in a story world over time. One of Henry Jenkins’ core principles for transmedia was Continuity, i.e. the coherence that runs across a well-structured transmedia property. AI can generate content at scale, but it is much worse at maintaining the internal logic that ensures all content belongs to the same story world.
AI does not ultimately understand why a story matters or who it is for (even though one can, of course, at least attempt to prompt thoroughly for this). It does not make informed and flawless editorial judgments about what to include and what to leave out. It does not bring the lived experience that allows us as filmmakers to give authenticity to documentary work, nor the cultural knowledge that grounds, for instance, indigenous storytelling. It does not truly listen, and as Jeff Gomez reminded us, genuine listening is the rarest and most valuable thing in the world, the thing that sits at the core of any fandom.
AI is an amplifier, not an author. It can scale and extend human creative decisions, but it cannot yet originate them. Every application of AI in storytelling should begin with a human creative vision and end with human editorial judgment. The technology belongs in between, accelerating the work in the middle.
Authenticity comes at a premium
Having worked with generative AI for several years already, and on a variety of projects, I believe that as AI-generated content becomes increasingly accessible, authentic human creative work will become more valuable, not less. We are entering an age where we will have more content than we can ever take in. But what will be scarce is content and stories with a verified human creative intention. The filmmaker who can demonstrate a genuine creative vision, executed with integrity and grounded in real human experience, will ultimately command a premium in a market flooded with synthetic alternatives.
I do not believe this is wishful thinking. It is in many ways the same dynamic that made live performance more valuable in the age of recorded music, or handcrafted goods more desirable, and able to fetch more of a premium, in the age of mass production. When machine-made becomes the default, human-made becomes the distinction. The strategic question will be how to play both worlds to perfection, i.e. how to position yourself on the right side of that distinction while still leveraging AI’s genuine capabilities.
Finally, the authenticity premium is only half the story. The deeper challenge is ultimately truthfulness. This is not to be seen as truthfulness in the narrow sense of being factually correct, but in the sense that transmedial scholars have begun to explore: does the content we create, and the worlds we build across platforms, contribute to a truthful understanding of reality? I know I have this doubt every time I generate an image or a line of text that could be used in a production – am I again helping to distort reality? When a transmedia documentary about climate change uses AI to generate visualisations, translate testimony, and personalise delivery, it is not simply producing content. It is constructing a world, and that world shapes how its audience perceives the real one. This has always been the case with filmmaking overall and documentary filmmaking in particular. But now, governing that world-construction process should not be seen as a technical afterthought. It is the main creative and ethical challenge of AI-assisted transmedia.