“It’s impacting everybody, even if not everybody realises what’s happening”
Davy Chadwick of PopScreen Games shares his insights into the future of generative AI in the creative workplace.
Welcome to the AI Gamechangers newsletter. This week, we meet Davy Chadwick from PopScreen Games and chat about how his studio is integrating AI across its development pipeline. He shares candid thoughts on AI's transformative impact on creative jobs and the necessity for workers to adapt.
Feel free to forward this to your colleagues, and be sure to check back next week for more. Each edition of AI|G is a conversation with a leader working at the intersection of games and AI, and you’ve got Q&As with ArenaX Labs, X&Immersion, and Voicemod winging your way soon.
Stroll to the end for news about Elon Musk, autonomous NPCs in Minecraft, Niantic’s Large Geospatial Model and more, including useful links and a free video presentation to watch.
Davy Chadwick, PopScreen Games
Meet the CEO and co-founder of PopScreen Games, a Paris-based mobile studio founded by industry veterans from EA, Ubisoft, and Gameloft. Its current game is the stylised RPG Wacky Battles, with more in the pipeline that will showcase the team’s use of AI for development.
Top takeaways from this interview:
AI is fundamentally transforming and eliminating creative jobs. Davy Chadwick compares this shift's magnitude to the advent of the internet or smartphones, and sees it as inevitable given the billions being invested by major tech companies.
Mastery of AI tools is non-negotiable for survival. Creative professionals must engage with AI tools or risk becoming obsolete. He recommends starting with tools like Flux and ComfyUI, investing significant time in learning them.
Small game studios have an advantage in AI adoption. They can be more agile in implementing AI tools and retraining staff to work across multiple disciplines.
AI Gamechangers: How did AI first end up on your radar?
Davy Chadwick: We've been looking into this for years. But like for 99% of the people on this planet, there was a “before” and then an “after” ChatGPT! AI has existed for years, but it was the rise of generative AI and the accessible interface provided by ChatGPT that got us into it, and soon after, we started to look into Stability AI, too. That's when everything opened up for us as users, as it did for professionals everywhere.
Let's talk about use cases. Can you outline some examples of how PopScreen Games uses AI in its workflow?
It’s very operational. The first thing we make is a table of what we have to produce. I'm not talking about code; I'm talking about content. So we have an exhaustive list of exactly what we need to produce for our games: backgrounds, units, UI, weapons, shields, and so on.
For each of the items we need to build, we see if a tool exists, how we can use it, and whether we can integrate it into a production pipeline. And we ask the question: is this pipeline easier than manual production, now or later?
Take backgrounds, which are probably the simplest elements to produce. We need backgrounds for our battle scene, and they have some very simple constraints: an empty part at the bottom where the battle will be happening and something fancier at the top. So we built this on a diffusion model and started generating, prompting with all kinds of parameters. We train, we fine-tune, and in the end, we have this background factory.
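(Chadwick doesn’t share code, but as a rough illustration, a minimal version of such a “background factory” could be sketched with the open-source Hugging Face diffusers library. The checkpoint, prompt wording and image sizes below are assumptions for the example, not PopScreen’s actual pipeline.)

```python
# Illustrative sketch only (not PopScreen's code): a "background factory"
# built on a pretrained diffusion model via the Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder; a fine-tuned checkpoint would be used in practice
    torch_dtype=torch.float16,
).to("cuda")

def generate_background(setting: str, seed: int = 0):
    """Generate one battle background that keeps the lower area clear for combat."""
    prompt = (
        f"{setting}, stylised game background, "
        "empty flat ground across the bottom of the frame, "
        "ornate detailed scenery towards the top"
    )
    generator = torch.Generator("cuda").manual_seed(seed)
    return pipe(prompt, width=768, height=512, generator=generator).images[0]

# Batch-produce backgrounds from a content list, as described above.
for i, setting in enumerate(["frozen mountain pass", "desert ruins", "haunted forest"]):
    generate_background(setting, seed=i).save(f"background_{i:02d}.png")
```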
We do this for the backgrounds, for the units, for the UI (which is more complicated). I would say this is level one; everybody should be doing this because it's not that complicated.
But now we try for level two, which is a bit more tricky! It's to connect the LLM, the text part, with the diffusion model. The ideal approach would be to have your narrative designed by your LLM, and the narrative output of the LLM becomes the prompt for your diffusion model. And it's not only a way to be quicker; it keeps the environment and settings in your game very consistent, because the prompt comes from the narrative.
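(Again, purely as an illustration: the “level two” idea of feeding narrative output straight into the image prompt could be sketched roughly as below. The OpenAI Python client, model name and “PROMPT:” convention are assumptions for the example, not the studio’s actual stack.)

```python
# Illustrative sketch of "level two": the LLM writes the scene narrative and
# its output becomes the diffusion prompt, keeping art consistent with story.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def narrative_to_image_prompt(scene_brief: str) -> str:
    """Ask the LLM for scene narrative plus a matching visual prompt; return the prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are the narrative designer for a stylised fantasy RPG. "
                    "Write two sentences of scene narrative, then on a final line "
                    "starting with 'PROMPT:' give a comma-separated visual prompt "
                    "that matches the narrative exactly."
                ),
            },
            {"role": "user", "content": scene_brief},
        ],
    )
    text = response.choices[0].message.content
    # The trailing PROMPT: line is what gets passed to the diffusion model.
    return text.split("PROMPT:")[-1].strip()

# The returned string could feed the background generator sketched earlier.
print(narrative_to_image_prompt("The heroes reach the gates of the sunken city."))
```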
Really, it’s about defining what you have to do and creating the pipeline. The biggest challenge is the training and people. We are a small team, so it's fine for us, and a small studio in this industry will be very agile.
People in the industry are worried about their jobs. What's your take on that?
We are a small team with very geeky people; we can all do everything. My designer is much better than me at design; my artist is much better than me at art. But the reality is we all need to do multiple jobs now. There's no other option.
A designer will work with an art guy to define an art style, and then we build the generative AI pipeline and fine-tune the models. The designer will prompt and create content under the supervision of the art guy. It's really multi-modal. We all need to be able to use the tools.
But it's not completely settled yet because the tools keep changing, and we understand more and more how to use them, but it's easier for a small team. A big team with 300 people – I have no idea how you would need to restructure. What will the ending be, and how do you get there? There’ll be a need for training. Thousands of jobs will surely be lost, and while new ones will emerge, they will require different skill sets. We are entering a difficult time, and it's not linked to the crisis we have today. It’s really different.
I think it's as big as the internet, or maybe as big as the smartphone. There are two reasons:
One, it’s impacting everybody, especially those working with digital tools, knowledge-based jobs, or dematerialised processes, even if not everybody realises what's happening.
And two, you'll have to adapt or be put aside. Obviously, jobs taking care of people or working in a factory will be impacted much less. But the traditional role of a writer for gaming (I’m not talking here about novelists or journalists), for instance, is evolving and may disappear in its current form. I’m not saying it's merely changing: the way it’s done today is disappearing, but another one will arrive. It's really massive.
You have such big players behind this, with billions in backing every year, from Google to Meta and Amazon. All this is obviously changing the world because so much money is being poured into it. It's really massive. It's even a bit scary.
People are still figuring out legal questions around the use of AI. Assets that are generated by AI – what’s your understanding of their copyright status right now?
The model within an LLM or diffusion model is trained with some data. If your data set is not properly licensed and curated, if you don’t have the rights, your model will have problems. And this is exactly the problem with Stability AI. There's a lawsuit in the UK. Getty Images is suing Stability AI because they've been training the model with the Getty Images database, and they didn't have the rights.
That was the situation, but it's already changing. If you look at Llama 3.2, the Meta model, they've been training it with content for which they have approval: you can opt out, but if you opt in, they can use all the content to build and train their models for commercial use.
“We need to make sure to help people train and evolve. You cannot abandon the workers. It comes back to support, training and money, ideally backed by public institutions.”
Davy Chadwick
So the “old” world, which was just two years ago, is indeed problematic, also from a moral perspective. In the new world, these issues are being addressed more effectively. Look at Firefly from Adobe, their embedded generative AI solution: the rights belong to Adobe, so it's cleared for commercial use. And the last example is Flux, a new diffusion model – it’s incredible. They offer commercial use, and according to them, there's no IP infringement.
This problem has loomed large until now, but I have no doubt it will be fixed. Long story short, in one year, we will not be talking about the legal side any more. There's too much money; so many billions are being poured into this. If you have a legal problem, you block the whole industry, and we know the solution: own your data set.
In your experience, what are some of the misconceptions or fears about AI?
It’s about your job and your comfort zone; that's the psychological side of things. If your job is threatened, obviously, you don't want to use [AI], and that's understandable; it’s human. I think this is the biggest problem.
I'm from France, and we benefit from significant state funding for education and professional training, with substantial subsidies allocated to universities and vocational programs. We need to make sure, in France as in all other countries, to help people train and evolve. You cannot abandon the workers.
This happened with the coal mines in Britain – it was very tough in the 1980s under Margaret Thatcher, when the mines were closed. It was a deeply challenging time for many people. In France, the closures were less dramatic because workers were supported by long-term measures: when the mines were closed, people were taken care of for 10 years! The salaries were paid, and they had grants until retirement. It was very expensive, but there was no other option. It basically comes back to support, training and money, ideally backed by public institutions.
Throughout history, humans have sought to automate processes, whether it’s the printing press or the factory production line. We should be accustomed to adapting to such changes now...
It’s productivity in the end. If nothing else, it’s a real matter of “How can I be more productive?” while still keeping my creativity (this is the main difference between creative industries like gaming and others).
I spent 10 years of my career managing outsourcing operations in so-called low-cost countries with very talented people. When I was at EA or Vivendi, I used to build teams in India, China, Russia and Romania because we needed more manpower at a lower cost. But it was not only a matter of lowering the cost; it was very hard to find people. Obviously, it was bad for the planet! I was travelling all the time. And bad for my family. And very complicated for corporate, because we needed a lot of overhead and middle managers to handle it all. I don't want to say this sort of problem is “solved” now – it was not a problem as such – but it needs to be re-evaluated with gen AI entering the equation. And like all these new tools, the entry barrier is much lower, which is much better for creativity.
There’s a mindset in games that we need to work harder and work overtime. Actually, you need to adapt to whatever tool you'll be using. If you don't know how to use your tool, you'll be put aside, and that's a problem. You need to be very careful with artists, writers, and musicians. At our company, one guy decided to leave because he didn’t want to use AI. I said, “It's sad because you're an incredible artist; you could do fantastic things”. If you don’t master the tools in 10 years, you’ll likely be left behind professionally.
Let’s talk about those changing skillsets in the industry. Do writers need to learn new skills? What are those skills?
I call it geek skills! I’m a geek. It's true that artists or writers are more creative people; when you are creative, it’s a completely different mindset. AI is a different way to create, and that’s a stretch for many people.
“I think it's as big as the internet, or maybe as big as the smartphone. It’s impacting everybody, even if not everybody realises what's happening.”
Davy Chadwick
There were steps from writing on paper to writing on the computer, and then using autocorrect… It's a big step to gen AI, but it's a continuation of having tools to help you. I think you can write a book and then have it reread by an LLM to make sure there are no typos and nothing illogical from beginning to end. It can review the whole book in 10 seconds. This is a big help, especially if you write something like a crime novel, where you need to make sure there are no inconsistencies across the whole book. So you could have an LLM check all this. It's still your creation behind it; it’s not writing the book instead of you! But it’s another tool, and I believe it’s a big game-changer.
What advice would you give to a small company, like an indie developer, about where to begin with AI?
Flux and ComfyUI, which are the trendy tools in AI right now. It's still very complicated at the beginning. ComfyUI is super powerful, but if you don't use it every day, you may lose track.
Be proactive! You can start by using tools like Midjourney or DALL-E to produce some assets, and force yourself to do so. You need to spend a lot of time on tutorials. You need to spend a lot of time playing with all the plugins and extensions to understand what they do and how powerful they are. After a few weeks, the tool will really help you in day-to-day work. You will understand what you can do and what you cannot do.
Today, there are two things you cannot do well: 3D and animation. There are some tools, but the main problem with 3D is that the data sets are too small, so it's very hard to train a model. But it will come, for sure (the same goes for animation; maybe that's worth an article of its own). You can see more and more tools coming on the market, and you will need to master them.
One last thing: avoid the commercial versions online. There are dozens of tools online, but they're very expensive. You need to work at a lower, more technical level. There are plenty of open-source, free tools which you can use and implement in your production pipeline.
Further down the rabbit hole
Your handy digest of what’s hot in the world of games and AI recently:
In response to a post by Dogecoin’s Billy Markus, Elon Musk announced on X that his company “is going to start an AI game studio to make games great again” because too many studios “are owned by massive corporations”.
Altera’s “Project Sid” AI experiment shows autonomous agents creating complex social structures in Minecraft. Robert Yang’s team deployed some 1,000 LLM-powered NPCs who developed specialised roles, formed hierarchies, traded resources, and even spread cultural memes, without explicit programming for these behaviours.
Supersonic from Unity has launched its Top Creatives Library tool. The hub features an AI-powered Game Idea Generator.
Niantic is leveraging Pokémon GO scans to build a Large Geospatial Model (LGM) as part of its Visual Positioning System (VPS). Think GPT, but for 3D space. The system understands geographic relationships and can make intelligent guesses about unseen angles of buildings. Dystopian potential aside, it has fascinating implications for AR game design.
AI programming tool Cursor has released an update that steps closer to coding automation with AI agents that can respond to error messages and make autonomous decisions to resolve issues.
GDC has published a free ebook called What Game Developers Want From Generative AI. Taken from its State of the Game Industry survey, the simple answer seems to be that devs want AI to handle the boring bits while preserving human creativity and jobs. While almost half said AI tools were being used at their studios, 84% said they have ethical concerns about it.
Here’s a free video of Kseniia Maiboroda of Elevatix, who chatted with AI Gamechangers in the summer, delivering a speech in Helsinki about using machine learning for personalised monetisation in games.