While I was mapping out the final chapters in this series of thoughts on AI and its impact on transmedia storytelling, Jeff Gomez published an article on The Mythic Field on LinkedIn. Having read that, I had to start over with my thinking and my texts, so it’s taken a while longer to get this chapter published. Sorry! But I blame Jeff 🙂
See, Jeff is nothing if not inspiring and illuminating in his thinking and writing (and actions!), and reading his article I saw so many touch points with what I’ve recently been expressing, and will continue to flesh out in the text below. I’ll do my best to point out the parallels as we move forward.
In my last post I tried to draw up a framework for integrating AI in transmedia – or any media – productions, and Jeff’s post spoke directly to the core of that framework. My starting point was a Story Core where AI has no business interacting or interfering: a fully human layer, built on human experiences, wants and needs. To cite Jeff on his thoughts about The Mythic Field:
The Mythic Field is not a story bible, though it may be expressed through one. It is not canon, though canon emerges from it. And it is not a set of constraints imposed on creativity. Rather, it is the origin level of a storyworld, the underlying structure of meaning that reflects the intrinsic vision, experience, and symbolic language of its creators. It begins to take shape long before characters act or events unfold, coalescing out of the artistic, emotional, and subjective impulses that compelled the world into existence in the first place.
More eloquently put than I managed, naturally. Following the thinking in my previous post, AI can conceivably work extremely well within this Mythic Field. But AI can’t create a Mythic Field to begin with.
The two conversations on AI and creative work
Perhaps you’ve noticed this too. When it comes to how people talk about where AI and creative work meet, there are essentially two different conversations happening. The first is a breathless conversation about tools, where practitioners and other interested people share which AI solutions generate the best images, which LLM provides the best answers and the most intricate reasoning, which service can generate the most believable videos or visual effects, and which can reliably transcribe an interview or put together a rough cut. You can find this conversation just about anywhere: on LinkedIn (yes, my LinkedIn as well), on social media, in blogs, at conferences, and in articles everywhere.
The second conversation is more subdued but no less intense. It’s about the ethical aspects of AI. Should we actually be using AI? What about trust, and what about the people relying on us to tell the truth? What about all the jobs that are threatened by AI? Where did the training data come from? Am I profiting from AI companies stealing what other people have created before? What about the plethora of disturbing deepfakes crowding the Internet? Should there be rules and regulations, and if so, what kind?
It’s seldom that the same people strike up both of these conversations, and even more seldom that it happens in the same venues on the same days.
The problem is that these two conversations aren’t really separate. They’re different sides of the same coin. You arguably cannot have one without the other. If you decide to use an AI tool in a production, be it transmedia or some other kind, you are at the same time shouldering the responsibility of governing the process. You decide what should be created and what shouldn’t be. You decide who benefits from the process, and thus you also decide who is (potentially) harmed by it. When deciding which capabilities you want to avail yourself of, you automatically shoulder the responsibilities those choices bring.
If you read my previous post, you saw the framework I was suggesting, where I placed the Story Core as the foundation, governed by humans well before any AI system is engaged. This is basically why. Whatever ethical questions you need to answer and commit to need to be a core part of the initial creative vision. I gave a talk on AI for European documentary filmmakers a couple of weeks ago, and we talked about the trust audiences place in you, and your commitment to truthfulness as a documentary filmmaker. That commitment is really not a limitation but a defining characteristic. In much the same way, if you’re a storyteller or producer using AI, how you govern these tools will be part of what defines your work.
In my next post I’ll draw up… well, not commandments, but rather points of commitment: what we as creators, in my mind, should keep at the forefront when working with AI. (Need to read Jeff’s post a couple of times more first, though 🙂)
Thanks for reading!