"We will come up with a new genre, or games that were not possible before"
Piero Molino and Ennio De Nucci from Studio Atelico discuss on-device AI and emergent behaviours in games.
Hello, and welcome to the latest edition of AI Gamechangers, the weekly newsletter where we quiz leaders in the games sector about AI.
Next week’s might be a little delayed because we’re off to San Francisco! Yes, it’s conference week, and we’re already lining up conversations with AI insiders attending GDC, PG Connects, and the fringe events springing up around the city.
But today, we’re chatting with Piero Molino and Ennio De Nucci, co-founders of new developers Studio Atelico. They discuss their backgrounds, their on-device AI ambitions, and their vision for creating entirely new gaming experiences through emergent behaviours.
And as usual, do scroll to the end for links to the latest AI and games news stories from around the web, including LLMs playing Super Mario Bros, ongoing voice acting disputes, and funding for Welevel, Triple Tap and Intangible.
Piero Molino and Ennio De Nucci, Studio Atelico

Meet Piero Molino and Ennio De Nucci, two of Studio Atelico’s four co-founders. Piero brings 15 years of AI expertise from companies like IBM Watson, Yahoo, and Uber, while Ennio comes from the game design world, having served as lead game designer at Creative Assembly.
Childhood friends reconnected by technology, they've developed the Atelico AI Engine, an on-device AI system which reduces costs for developers while also providing flexibility thanks to its modular design. With that, they’ve revealed GARP (Generative Agents Real-time Playground), a tech demo showcasing emergent behaviours.
We discuss their technology's unique on-device advantages and the future of AI in game development as a whole.
Top takeaways from this conversation:
Studio Atelico's core innovation is running high-quality AI locally on devices rather than in the cloud, eliminating ongoing costs while maintaining quality through optimisation.
The team believes the real potential of AI in games lies in emergent behaviours that extend far beyond chatbots (potentially creating entirely new gaming genres).
It’s about embracing unpredictability. The team is interested in games that deliberately leverage the non-deterministic nature of AI systems, where managing unpredictable outcomes becomes a core part of gameplay.
AI Gamechangers: It sounds like you’ve known each other for a long time. Please give us some insight into how you came to establish this studio.
Piero Molino: My background is in AI. I’ve been working in it for 15 years, for large companies, such as IBM Watson, Yahoo, and then Uber, but also for very small startups, some of which I started, some of which I joined.
I always had a huge passion for games. And honestly, I was doing AI because I wanted to eventually use it for games! Seeing generative AI in particular, and the possibilities of applying it within the game experience, is what kickstarted things for me.
I’ve known Ennio for 25 years. We played together when we were kids, and we studied together to make games. When I went to work with AI, he kept working on games. Now the timing was perfect for our paths to cross again and to combine my knowledge of AI and his knowledge of games into a single thing – that’s the studio we’re building now.
Ennio De Nucci: Most recently, I was lead game designer at Creative Assembly, working on Total War. I was there for five years, and then I briefly moved to Fuse Games, a new studio founded in Guildford by former Criterion Games people looking for smaller, more exciting, fast-paced development.

But at the same time, Piero was telling me about the tech! I went to GDC, met with Piero and the other founders. He showed me what he had built, and it totally clicked with me.
I had been keeping an eye on what was happening with generative AI, but it was mostly in the cloud and mostly chatting with NPCs, a bit like the experience of a chatbot. I knew developers were doing other things as well, but I hadn’t seen anything yet. What Piero showed me was mind-blowing! And the fact that it was running on-device cleared a lot of doubts that I had about the business model and the cost implications of this technology running in real time during gameplay. I was completely impressed.
I also really clicked with the other two founders Piero was already partnering with. So I said, “These are the stars aligning. I’m just going to join them.” And so I left Fuse and joined Piero, JP [Chen] and Paul [Szerlip] in this project, which is an amazing, interesting, different path for me. I was leading design teams and working in triple-A. The idea of being part of a Silicon Valley startup was unique for me; it was a once-in-a-lifetime opportunity. I decided that this could be a real way of making a dent in the games industry, not just as a game designer but as a business, and maybe creating something very special.
Let’s talk about what you’re building. You’ve got GARP [Generative Agents Real-time Playground], you’ve got on-device AI. Please tell us more about what makes it all special.
Piero Molino: Basically, there are three pillars. First of all, what it enables: interaction with characters and, more generally, text-based inputs and outputs within a game. That covers agents and interaction with characters through dialogue, but also the characters’ behaviour, their planning of what they’re going to do, their decision-making, the memories they collect about what they’ve been doing, how dynamic they become, and the emergent behaviour that comes out of it.
“Many developers are looking at AI currently and thinking, ‘How can we save time?’ I don’t think that’s the future of AI. Instead, developers are going to start using generative AI to enhance their experiences”
Ennio De Nucci
[The second pillar is that] from a technical perspective, what makes that possible is running on-device. In order to do that, we have our own proprietary mechanisms that bridge the quality gap of large language models. The models that currently run on-device are not good enough, so we bring them up to the quality of the larger ones that run in the cloud, while staying within the resource constraints of local devices. That also makes it possible for developers to have zero cost while the game is being played, which is one of the biggest problems with cloud-based solutions.
The third aspect is that the engine is constructed in a very modular way, meaning every developer can decide which components to use and how to assemble them, depending on the specifics of the game they’re building. Some games might use full agentic behaviour, similar to our GARP demo, while other developers may just sprinkle it on top of pre-existing game mechanics without needing all the complex agentic behaviour – just a little bit of dialogue is enough. But there could also be entirely new games that we have not yet imagined, built on the components we’re going to be providing through the engine.
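To make the modular idea concrete, here is a minimal sketch of how such a pick-your-components, on-device setup might look in practice. All names here (TextModel, run_agent, the component functions) are our own illustrative assumptions, not Studio Atelico’s actual interfaces.

```python
# A minimal sketch, assuming hypothetical names rather than Atelico's real API:
# a game enables only the pieces it needs, and everything runs against a local
# model with no per-call cost.
from typing import Callable, Protocol


class TextModel(Protocol):
    """Stand-in for an on-device language model: no cloud calls, no per-request cost."""
    def complete(self, prompt: str) -> str: ...


class AgentState:
    """Per-character state; here just a list of collected memories."""
    def __init__(self) -> None:
        self.memories: list[str] = []


# A component takes the prompt built so far plus the agent's state and returns a new prompt.
Component = Callable[[str, AgentState], str]


def memory_component(prompt: str, state: AgentState) -> str:
    """Prepend recent memories so behaviour stays consistent over time."""
    return f"Recent memories: {state.memories[-5:]}\n{prompt}"


def planning_component(prompt: str, state: AgentState) -> str:
    """Ask the model to outline a plan before acting, for longer-horizon behaviour."""
    return f"{prompt}\nFirst outline a short plan, then state your next action."


def run_agent(model: TextModel, state: AgentState, situation: str,
              components: list[Component]) -> str:
    """A game wanting only light dialogue passes few components;
    a full agentic sim (like the GARP demo) stacks memory and planning."""
    prompt = situation
    for component in components:
        prompt = component(prompt, state)
    return model.complete(prompt)
```

The point of the sketch is the composition: a developer who wants “just a little bit of dialogue” calls `run_agent` with an empty component list, while an agentic simulation stacks memory and planning on top of the same local model.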
With on-device AI, the advantages are clear in terms of cost (and the fact that you’re not dependent on making a call to somebody else’s service). But is there any loss of accuracy or performance from doing it all locally?
Piero Molino: That’s a good question. And that’s also the kind of thing that the other two founders, Paul and JP, and I have been doing for our entire careers – making models work really, really well for specific tasks.

We break down the tasks that characters perform in games, and we make the models work well on each of them, adapting to the specific game. That’s what bridges the quality gap you would see if you just did it naively. That’s our bread and butter: part of our value proposition is exactly bridging that quality gap and making it run locally while keeping quality really high.
Let’s talk about GARP and the tech demo you’ve shared. It’s got emergent behaviours and interactions between characters. What are the technical and design challenges in creating believable characters that operate in that way?
Ennio De Nucci: Creating the simulation has been really fun. These characters come alive on their own, and you really need very little information – just a little bit of the background of the character. What is their purpose? What are their drives? And they just come alive.
I’ve noticed that the higher the quality of the initial writing you put into the character piece, the better the outcome of their interactions. This is not just about having these characters and putting “whatever” as their background or story. It’s about crafting it. I really enjoyed that process of making up who these characters are, putting in some real writing techniques and creating interesting characters. The difference in the behaviours and the emergent behaviours is noticeable.
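As a rough illustration of the kind of hand-authored profile Ennio describes – background, purpose, drives – here is a hypothetical sketch of how it might be represented and folded into a prompt. The structure and field names are our assumptions, not GARP’s actual data format.

```python
# Illustrative only: one possible shape for a hand-written character profile.
from dataclasses import dataclass


@dataclass
class CharacterProfile:
    name: str
    background: str       # the hand-crafted writing Ennio describes
    purpose: str          # what the character is trying to achieve
    drives: list[str]     # motivations behind moment-to-moment choices

    def to_system_prompt(self) -> str:
        return (
            f"You are {self.name}. {self.background}\n"
            f"Your purpose: {self.purpose}\n"
            f"Your drives: {', '.join(self.drives)}"
        )


# A thin profile ("whatever" as a background) tends to produce flat behaviour;
# a crafted one gives the model something to stay consistent with.
innkeeper = CharacterProfile(
    name="Marta",
    background="A retired sea captain who runs the harbour inn and distrusts the new magistrate.",
    purpose="Keep the inn afloat without attracting the magistrate's attention.",
    drives=["protect her regulars", "settle old debts", "avoid the law"],
)
```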
We don’t only want to create the tech for developers to use; we want to create games ourselves. GARP is an example that uses the features that we developed with the engine, but I wouldn’t call it a game. The challenge as we move forward with Studio Atelico would be to actually create a fun game. GARP is like a fish tank. You look at the characters, and they do nice things. You can interact with them. You can suggest that they do things. You can inject memories so they act in a different way. But it’s not a game per se. I think we will still have to see what kind of games we’re going to develop, and we plan to do that.
“Imagine a puzzle game where the goal is to figure out how to convince characters to do something for you. The emergent behaviour is part of the fun”
Piero Molino
One of the things we want to enable with our tech is a collaborative environment and community of developers using it, trying to figure out what those games we haven’t yet designed are going to be. We really believe that we will come up with some completely new genre, or some games that were really not possible before. This is where generative AI is incredibly exciting for me.
Let’s discuss new game genres and fresh things you can do with AI. Originally, the AI discourse seemed to be about how we could streamline our existing processes; then, it was about building NPCs that you could talk with more naturally. But surely more is possible?
Piero Molino: The way we see it, there are basically three buckets. They’re not the only possibilities – there could be many more – but they’re the ones we are thinking about. At a very high level, the three buckets are:
One, as you mentioned, is building the things that people are already building right now, but faster. That, though, is the artistry and artisanal craft of building games – it’s why we love games – and we absolutely don’t want to take that away! We want to give new possibilities to developers, so we’re not interested in that aspect.
The second potential aspect is filling in the blanks. Imagine a game like Skyrim, where the vast majority of the world is actually empty. There are dense parts with characters hand-designed by narrative designers, but a lot of it is empty. What if that empty space were filled with something substantially better than maybe a single line of dialogue? It’s about expanding what’s possible, alongside what is already there, designed by developers.

The third bucket is the things that were simply not possible before. Among those, we are exploring many, many different directions. One that seems particularly compelling is where you are not directly playing as [a character], but instead have a higher managerial view and can direct other characters to do things for you. The fun of the game is interacting with them as they come back to you with [reports]; for example, they’ve failed a mission, and you have to adjust and convince them. For me, a really big inspiration in this direction is games from the ‘90s, god games like those from Bullfrog, such as Dungeon Keeper or Populous. I really love this style of game. This kind of technology can breathe new life into that genre, for sure!
Ennio De Nucci: I think we’re going to see some amazing new games developed in this direction. The buzzword is obviously chat, and it’s the first thing you think of because of ChatGPT. But we believe that just chatting with NPCs is not exactly the future or a breakthrough! It could be a great feature to have sometimes, but basing entire games on the feature of speaking with NPCs is a limitation. There’s a lot more to this technology than that.
We all know that reading is already a problem in games. There’s so much text, and players don’t want to read it all. And on top of reading, we’re adding writing?! Our idea is to move away from that. I’m sure other developers will find great ways of making chat and interaction the core of their experiences. We want to really focus on emergent behaviours and let these characters think – not necessarily communicate with the players or interact at a dialogue level, but basically advance traditional game AI, the AI we already have in games like HTN [Hierarchical Task Network planning] and behaviour trees. I think we can create better technology to make characters feel alive in games.
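One way to read “advancing traditional game AI” is a conventional behaviour tree in which a single node delegates its choice to a local language model, so most behaviour stays deterministic and only that one decision becomes emergent. The sketch below is our own hedged interpretation under that assumption; every class and field name is hypothetical.

```python
# A hedged sketch, not Atelico's actual system: a standard behaviour tree
# where one selector asks a local model which branch fits the character now.
from typing import Callable, Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class Node:
    def tick(self, agent: dict) -> bool:
        raise NotImplementedError


class Action(Node):
    """Ordinary deterministic leaf, exactly like today's game AI."""
    def __init__(self, fn: Callable[[dict], bool]) -> None:
        self.fn = fn

    def tick(self, agent: dict) -> bool:
        return self.fn(agent)


class Sequence(Node):
    """Runs children in order; fails as soon as one of them fails."""
    def __init__(self, children: list[Node]) -> None:
        self.children = children

    def tick(self, agent: dict) -> bool:
        return all(child.tick(agent) for child in self.children)


class LLMSelector(Node):
    """Asks the model which labelled branch fits the character right now."""
    def __init__(self, model: TextModel, branches: dict[str, Node]) -> None:
        self.model = model
        self.branches = branches

    def tick(self, agent: dict) -> bool:
        options = ", ".join(self.branches)
        choice = self.model.complete(
            f"Character: {agent['profile']}\n"
            f"Situation: {agent['situation']}\n"
            f"Choose exactly one of: {options}"
        ).strip().lower()
        # Fall back to the first branch if the model answers something unexpected.
        branch = self.branches.get(choice, next(iter(self.branches.values())))
        return branch.tick(agent)
```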
Piero Molino: That blends really well with game mechanics. It’s not a replacement, but it’s something that interlocks with existing game mechanics and existing game design principles. We’re not reinventing the wheel. We want to make it so that they blend really well together.
In that environment, where emergent behaviour is possible, how do you ensure that people have a good game? Can you combine the possibilities of AI and the craft of game-making, while applying guardrails so AI doesn’t do weird things with your intellectual property?
Piero Molino: With respect to the emergent behaviour aspect, it’s a matter of game design, really. Maybe most of the games that use this technology will be more sandbox-y, and players may find their own fun in the game.
It’s like physics: the moment you have physics in a game and a bunch of objects, you start to interact with them, and at a certain moment, you discover that their interaction gives you a new plan to solve a puzzle in a way you didn’t think was even possible. Now imagine a puzzle game where the goal is to figure out how to convince characters to do something for you. The emergent behaviour is part of the fun. It’s not a problem to solve, it’s actually what makes the game fun.
“The buzzword is obviously chat, and it’s the first thing you think of because of ChatGPT. But we believe that just chatting with NPCs is not exactly the future or a breakthrough! We want to really focus on emergent behaviours”
Ennio De Nucci
With regard to the second part of the question, we are going to integrate guardrailing mechanisms directly within our technology. We are also going to make it possible for developers to have “internal classifiers”, so when a certain piece of text is produced by the AI models, they will be able to know its characteristics. Is it insulting? Is it insensitive? Is it problematic in any other way? If it is, they will be able to do something about it: either not generating it at all or generating a new version that is nudged away from being like that. These mechanisms will give developers the level of control that they need.
Sometimes, the opposite is the problem: when they try to use services like OpenAI or Anthropic to generate dialogue for evil characters, they don’t get anything out! As a consequence, we want to give developers the levers to make their own decisions.
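For readers who want a concrete picture, here is a rough sketch of a classify-and-regenerate loop of the kind Piero describes, with per-category thresholds acting as the developer’s “levers”. The classifier labels, thresholds and function names are our assumptions, not Atelico’s actual API.

```python
# Illustrative sketch of a classify-and-regenerate guardrail loop.
# The developer, not the model provider, decides what is acceptable per game.
from typing import Callable

# A classifier returns a score per category, e.g. {"insulting": 0.1, "insensitive": 0.7}.
Classifier = Callable[[str], dict[str, float]]
Generator = Callable[[str], str]


def generate_with_guardrails(generate: Generator, classify: Classifier,
                             prompt: str, thresholds: dict[str, float],
                             max_retries: int = 3) -> str | None:
    """Generate dialogue, reject anything over the developer-set thresholds,
    and retry with a nudged prompt. Returns None if nothing acceptable appears."""
    for _ in range(max_retries):
        text = generate(prompt)
        scores = classify(text)
        violations = [cat for cat, score in scores.items()
                      if score > thresholds.get(cat, 1.0)]
        if not violations:
            return text
        # Nudge the next attempt away from the flagged categories.
        prompt += f"\nRewrite this so it is not {', '.join(violations)}."
    return None  # the game can fall back to authored dialogue here


# An "evil character" in a mature game might get looser thresholds
# than a children's title would allow.
villain_thresholds = {"insulting": 0.9, "insensitive": 0.8}
```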
Ennio De Nucci: There’s something very interesting in the unexpected results that can come from these things. Obviously, you need to have some sort of control and guardrailing, and that’s the mechanism Piero mentioned. But we are also thinking about concepts that are based purely on the chaos and unpredictability emerging from these things! As a game designer, I’m very interested in leaning into this non-determinism in games, where the outcomes can really go wild and the player has to figure out how to manage the unpredictable outcome.
If the game is designed around that, and the fun is actually [in the] unpredictable outcomes and staying on top of the chaos emerging from the actions in the game, I think there’s something really nice there.
Scripting these kinds of things is probably not even possible. The very act of predicting all the possible outcomes, from the designer’s point of view, means there are certain rules, so there is always determinism in these things. LLMs can break that rule, and we can design some games that might be super interesting. They’ll also be very challenging, on an intellectual level, for the player.
When will developers be able to start working with your modular AI engine?
Piero Molino: We are planning on a year and a half to two years to get to the point of general availability.
But before then, we’re going to make smaller, gradual, incremental releases, adding more and more of the components. We already have a first batch of developers who will be trying our first version of it, and once we validate it with them and know that it works super [well] for their cases, then we’re going to release a newer version incrementally. We plan to have a release more or less every three to six months.
Do you think that gamers need to be educated about AI? How do you think the gaming public feels about AI games right now?
Ennio De Nucci: I’ve chatted with a lot of gamers. The general feeling is the usual scepticism; people are mostly cold about AI. There is obviously a small niche of early adopters, as with any tech, who are very enthusiastic about it and cannot wait to play with whatever comes out.
I don’t think we need to educate players. I think we need to release games that are actually super fun to play! We don’t need to advertise a game as being powered by generative AI, necessarily. I think players are always waiting for the next super fun experience, and that’s what we should be aiming to give them, regardless of the technology that’s in there.

I truly believe that this technology enables some gameplay that is unique, completely new, and super fun. The moment those games are in the hands of the players, they will be the judges, and I think opinion will move towards positivity. At the moment, it’s a buzzword, and I can honestly say there are a lot of actors in the space who are not doing AI any favours. So it’s about the games. The games will come, they will be fun, and regardless of the technology underneath, players will appreciate them.
Looking ahead, if we conduct this interview again in five years’ time, how do you think the relationship between AI and games will have changed by then? What role do you hope that your studio will play in that future?
Piero Molino: When I started getting interested in applying generative AI to games and saw the Generative Agents paper, which is a great inspiration for GARP, that’s exactly the thought I had. GARP is a real-time, local version of that Generative Agents paper, which came out in 2023. When I saw it, I said, “This is what games are going to look like five years from now.”
The problem was that the technology to do that wasn’t there. It was too expensive, and it wasn’t running in real time (it was running as a simulation on a supercomputer, really). I knew games were going to look like that, but there needed to be a tool to make it possible for game developers to realise that vision. That’s why I started working on this, and our role is to try to build exactly the tool that solves that chicken-and-egg problem, right? Even with the games being developed today with AI, the reason we have not seen more of them is exactly that the tooling around it is not mature enough yet. So a tool like ours is what will enable that.
Right now, I think there are some early signs of people liking some of these games. Suck Up! is a really good example. It’s relatively small and maybe a little bit gimmicky, but the novelty and the fun it enables are something players haven’t experienced before, which is probably why it resonated with so many players. I can imagine an experience like that but scaled up beyond just a single interaction (in this case, it’s a character at the door that you have to convince to let you in). Imagine an entire set of mechanics based on those kinds of interactions: not a small gimmick but a full-scale game built like that. That’s how I imagine games will be and how players will interact with them as a consequence.
Ennio De Nucci: I think more developers, bigger developers, are going to start using this technology in the right way. Many developers are looking at AI currently and thinking, “How can we save time? How can we generate more assets or code by using AI?” I don’t think that’s the future of AI. Studios that still handcraft their content are the ones that are going to survive because players are looking for handcrafted games, not computer-generated games.
“We have our own proprietary mechanisms that bridge the quality gap of large language models. The models that currently run on-device are not good enough, so we bring them up to the quality of the larger ones that run in the cloud, while staying within the resource constraints of local devices”
Piero Molino
Instead, these developers are going to start using generative AI to enhance their experiences, to fill those blanks that, for many reasons, were never fully curated like the rest of the game. We’ll see games like Civilization having an incredibly deep diplomacy system all of a sudden. We’ll see triple-A games expanding their content using this technology, making better experiences. We’ll see players become more comfortable interacting with the technology, going into these games without the prejudices they might have now and fully enjoying them.
As the industry as a whole understands this technology better, it is going to be improved over time, and not just by us. Our role in this is to enable as many developers as possible to do more with the technology. This is one of the things that I’m really proud of. Our mission is not just about making games or making a piece of tech. We really want to enable others to create this future.
Further down the rabbit hole
What’s been happening in AI and games? Your essential news round-up:
If you enjoyed watching Claude play Pokémon last week, this week you can watch it (and other LLMs) attempt to play Super Mario Bros.
Generative AI remains a sticking point in the SAG-AFTRA strike, as video game performers picketed WB Games earlier this month. Meanwhile, a post on social media by Pascale Chemin, the French voice actor for Wraith in Apex Legends, suggests that all 32 voice actors have dropped out of the EA game over AI updates to their contracts.
Games platform Sparq has announced pre-production on its mobile title Crown U, a collegiate sports trivia game. The company claims its proprietary AI approach extends beyond game design to also power its user acquisition strategy.
Munich-based indie studio Welevel has secured $5.7 million in funding (led by Bitkraft Ventures) to develop its proprietary AI tools for procedural game generation. Its tech aims to streamline world-building, NPC behaviour, and quest generation, with a sandbox survival game on the way to demonstrate these elements.
Infinite Realms is an AI engine that can turn the text of books into living worlds you can explore. That’s the premise of the new tool that spun out of studio Unleashed Games. The team plans to license top fantasy novels and quickly generate games from them.
Mumbai-based Triple Tap Games has raised $1.2 million for AI-enhanced mobile puzzle games.
Intangible, led by former Unity VP of product design Charles Migos, has secured $4 million in seed funding to launch its AI-powered 3D creative tool. The web-based platform enables users to generate 3D worlds for games or films, with a library of thousands of assets and real-time collaboration features.