"We're interested in AI to make experiences that were not possible before"
Andrew Friday and Dan Fessler from Incite discuss emergent narratives and their new cosy life sim game, AuraVale.
Welcome to the latest edition of your AI Gamechangers newsletter, where we share exclusive interviews with industry figures. Today, we’re speaking with Andrew Friday and Dan Fessler of Incite. These industry veterans are working on a new life sim game which uses LLMs to enable natural language interactions and emergent narratives.
Scroll to the end for a report about investment in AI-adjacent games companies, a blog about bots preventing buggy game launches, details of an agreement to make ethical AI voice replicas, and the trailer for games platform Bitmagic (which bagged first place in the Generative AI category at Game Changers 2025 last week).
Be sure to share this with any of your colleagues interested in the role AI can play in gameplay, and check back next week when we’ll meet the COO of Bitmagic, a game development platform powered by AI.
Andrew Friday and Dan Fessler, Incite
It’s a two-for-one week in AI|G as we connected with two co-founders of Incite, CEO Andrew Friday and Chief Creative Officer Dan Fessler. We discuss their upcoming cosy life sim game, AuraVale. What’s striking about this conversation is how thoughtful they are about what AI will genuinely bring to the experience that wasn’t possible before (and also how important it is to respect human creativity for things like art and design). We’ll be diving into UGC as well.
Top takeaways in this interview:
Their AI approach is focused on enabling new types of gameplay rather than replacing existing development workflows. They explicitly avoid using AI for content generation, emphasising the importance of human craftsmanship.
An Aura is an NPC that you train and instruct. Auras speak, feel and live in your virtual world, and in future they will interact with other players’ Auras.
User-generated content is a big part of their thinking. They support modding, drawing on lessons from platforms like Roblox.
Incite takes a model-agnostic approach to its AI infrastructure, supporting different LLMs for different tasks based on latency and reasoning requirements.
AI Gamechangers: Please give us a little insight into your personal backgrounds.
Andrew Friday: I was born and raised in Columbus, Ohio. But I had wanderlust and I wanted to travel and have an adventurous life, so I moved to Beijing and spent five years in China learning Mandarin. I built games on the side – one of those games got me an interview at a Chinese gaming company called Happy Elements. I never left games, and I’ve been in gaming for over a decade now! I went to Glu Mobile, where I worked with the Beijing studio and then moved over to the Kim Kardashian game team.
From there I went to Zynga, where I worked on the emergent platform team. Our team built games like Words With Friends. That game had something like 100 million installs in 2017, and only two engineers built it. I saw the power of instant gaming then. That’s where I met Incite’s co-founders, Dan and Nico [Nicolas Coderre]. I spent a few years trying crazy stuff in the instant gaming space. Later, I was talking to Dan and Nico about emergent narrative and where we see the future of gaming going, and that led us here.
Dan Fessler: I’m more on the creative side. I got into gaming during high school, doing game jams online – I ended up doing a bunch of art for them, primarily as a pixel artist in the early days, just as a hobby at first. But there was one industry that still really needed pixel art, which was the mobile phone game industry at the time, in the pre-iPhone era! We had these feature phone handsets, and good pixel artists were hard to come by, so I got hired initially at Gameloft in New York.
“We would spend our lunches talking about all the cool stuff you can do with emergent narrative. This was before language models, and we shelved that idea for a while because [AI] was the missing piece.”
Dan Fessler
After that, Glu Mobile. The iPhone came out, and everything changed. I was living through the history of game development in fast forward. I ended up at a startup called A Bit Lucky. They ended up getting acquired by Zynga. We stayed at Zynga for about four years, and that was when we had a lot of conversations about emergent narratives.
Nico, our CTO, and I worked together at A Bit Lucky, so we have a long history. We would spend our lunches talking about all the cool stuff you can do with emergent narrative. This was before language models, and we shelved that idea for a while because [AI] was the missing piece. I eventually left Zynga to be the first employee at Manticore.
Friday: He left there to join Incite when we started the A16Z Speedrun. I said, “Dan, we’re in! They want to invest 500k, and you’ve got to quit your job now.” So that was the thing that pushed Dan out of Manticore!
Our third co-founder, Nico, is our CTO, and he was one of the first 15 people at Roblox post-IPO. He was messing around with language models in 2018 or 2019, building an education-focused role-playing game using LLMs. He actually built multiple products using early natural language processing models, probably around GPT-2, fine-tuning his own models. I said, “Nico already knows how all this stuff works. Dan has a vision for what we want to build. This is an exciting project!”
You had conversations back then about emergent narrative and emergent gameplay. And then, along come large language models, and AI becomes a hot topic! So, what are you able to build now?
Fessler: The thing that drives us, and that’s always been exciting to me, is emergence in general. Some of the biggest games in history have shared this common foundation, whether it’s social emergence in MMOs and the dynamics that happen there, or maybe it’s physics, like in a game like Rocket League, where no two games are the same, and you can replay it infinitely.
But no one has really cracked the nut of narrative emergence because it’s difficult. The best that we could do in the recent past was branching narratives. They can explode in terms of cost and complexity, and it’s just never enough. And there are some really deep, simulation-based games, like Dwarf Fortress. The Sims lightly plays with it, too. But the problem with a lot of these is they’re so deep that they’re inaccessible to most people. Especially things like Dwarf Fortress; people like hearing about it, but they don’t play it!
That’s what’s so exciting to us about language models: they allow us to do narrative emergence in a way that is digestible and easily accessible to everyone. Now, we can deliver the narrative that unfolds in the simulation, not in the form of bars and graphs and stats you have to interpret, but in the form of compelling character dialogue that lets you stay immersed in the world.
We think there’s a huge opportunity for AI, specifically in the life simulation genre. The life simulation genre is about simulating intelligent beings! Always has been, and that’s what we’re making now.
AuraVale has elements of The Sims but also of social games like Animal Crossing. It’s a game where you take care of “Auras”, as we call them. You embody these Auras. We call them that because they’re not quite an NPC and not quite an avatar that you just control. They are intelligent characters – more intelligent than the characters other games could offer before. You can define these characters in a deep sense. You can use your words to describe the backstory and describe how they speak and act, all in natural language. And then they actually behave that way in the game.
Our ultimate vision is to have this connected so every character you meet in AuraVale is the Aura of another player. All of the social interaction you have is the Aura of another person, and every interaction is deep and meaningful because it is all crafted by real players.
Please talk us through the core gameplay mechanics. If a person were to fire up AuraVale right now, what would their experience be?
Fessler: Let’s talk about the biggest difference between what you would do in our game versus something like The Sims.
The Sims was modelled on the psychological model of Maslow’s “Hierarchy of Needs”. It’s what they aspired to: Sims have primitive needs, like food and shelter, and then personal fulfilment at the upper end. The Sims tries to cover all those different levels of the pyramid.
But, in practice, the vast majority of the time in The Sims, you’re dealing with the bottom-most layers of the pyramid. You’re making sure your Sim takes a shower, gets to work on time, eats, and doesn’t pee on the floor! You spend very little time in the upper layers, even the ones like social dynamics. It’s the whole premise of why people like The Sims, but you don’t even get to hear the conversations they’re having.
With what we’re building now, you can play and operate primarily on the upper levels of the pyramid. Your input isn’t just clicking on objects and pressing an action. Instead, you can describe what you want your character to do in natural language. It’s about higher-order things: goals that must be interpreted abstractly before a character can understand how to act on them.
I’ll give you an example. You could write, “Spread gossip about Jessica in the town.” You can’t do that sort of verb in The Sims, but you could do that in our game. The character will know what to do because it’s driven behaviourally by language models. The primary difference in input is that we have this ability to write character motivations in natural language and we’re excited about the possibilities that open up when you can directly write motivations on the higher orders of Maslow’s pyramid.
How have you applied guardrails to this? Human nature being what it is, in two minutes your players will be trying to push it to its limits to see what happens.
Friday: First of all, any language model that we use has some guardrails built into it. Whether it’s Llama 3 or OpenAI, they all put some professional-grade guardrails in the model itself.
Secondly, a lot of the need for guardrails comes when you do more multiplayer. We allow players to opt into how much multiplayer they want to do. They can set their house to private, and no one will come in and bother their Aura. Or they can say, “I’m open to it.” Then anyone can visit them. We let people opt into what they want to do.
Also, this is intelligent. When somebody does something bad, the character will know they did something bad, and they’ll say, “Hey, I don’t like what you’re saying! I’m leaving.” You can even write in your backstory how much BS your character is willing to put up with!
Fessler: Are you familiar with AI Dungeon? One of our critiques of past implementations of emergent narrative through language models is that it’s not grounded in anything meaningful. What I mean by that is that in AI Dungeon, you could just say, “And suddenly, I take out my Sword of Destiny and swipe everyone down in a single blow!” Language models are “yes, and...” machines, like improv partners. So AI Dungeon will say, “Oh yeah, totally. Let’s keep going in that direction!” On the one hand, you get interesting stories driven by the player. But on the other hand, they lack meaning because there’s no consequence for anything. They don’t have a simulated ground truth.
“We believe in the craft of game-making and the artistry of it. There’s a lot of discussion about what it means for artists to live in a post-AI world. We’re not removing the craftsmanship. We’re interested in AI to make new kinds of experiences that were not possible before”
Dan Fessler
It was important for us, when we built this, that it is always grounded in some sort of reality. Take our architecture, for instance. We have a base simulation that’s driven outside of the language model, and we have an input-output mechanism whenever a player is trying to do something. It gets interpreted through a language model (we call it the labeller). It asks, “What is the intent of this player?” And then it performs those actions in the simulation layer. Then it comes back out through a language model to say, “This stuff happened in this simulation. How do I convey that to the player through the form of dialogue or whatnot?” The consequence of this architecture is that you end up not needing as many guardrails; it all acts as a guardrail – you are in the sandbox of what the simulation allows you to do.
Friday: Language models are like water. They fit within whatever bounds you give them. And for us, the simulation layer is the groundwork we laid. Our first year was building all that up so that we could build the game on top of it.
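The loop Fessler describes – a labeller model interprets the player’s intent, a deterministic simulation resolves it, and a second model narrates the outcome – can be sketched roughly like this. Every name here is illustrative; this is not Incite’s actual code, and the two LLM calls are replaced with trivial stand-ins:

```python
# Hedged sketch of an intent -> simulation -> narration loop.
# All class and function names are hypothetical stand-ins.

from dataclasses import dataclass, field


@dataclass
class Simulation:
    """Deterministic ground truth: the sandbox the LLM cannot override."""
    allowed_actions: set = field(
        default_factory=lambda: {"greet", "gossip", "leave"}
    )

    def apply(self, action: str) -> dict:
        # The simulation, not the language model, decides what is possible.
        if action not in self.allowed_actions:
            return {"ok": False, "event": f"action '{action}' is not possible"}
        return {"ok": True, "event": f"character performed '{action}'"}


def label_intent(player_text: str) -> str:
    """Stand-in for the 'labeller' LLM call: map free text to a known verb."""
    for verb in ("gossip", "greet", "leave"):
        if verb in player_text.lower():
            return verb
    return "unknown"


def narrate(result: dict) -> str:
    """Stand-in for the narrator LLM call: turn a sim event into prose."""
    if not result["ok"]:
        return "The character shrugs – that simply can't happen here."
    return f"In the town square: {result['event']}."


sim = Simulation()
intent = label_intent("Spread gossip about Jessica in the town")
print(narrate(sim.apply(intent)))
```

The point of the shape, as Fessler notes, is that the simulation acts as the guardrail: a player can ask for anything, but only actions the sandbox recognises ever change the world state.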
Which foundation models are you working with? You mentioned a couple there, but do you have a chosen LLM for the project?
Friday: The infrastructure we’ve built can support any number of models. We probably support between eight and ten models right now. At any given time, we’ll use whatever is the fastest and the cheapest. At the moment, we’re using [Meta’s] Llama 3 and Groq (Groq with a Q!), which has super fast inference.
One cool thing we’ve realised is that as the models get smarter, we don’t have to change many things in our prompting any more. If the model is smarter, it just automatically understands the previous prompts (even better than the older models). So we can be pretty flexible.
Our CTO is dying to train and fine-tune our own models. He thinks we can get massive savings and better performance. But right now, we’re just focused on the actual game development. When we need to scale and make things faster, cheaper, better, then we have a pretty clear plan to start training and fine-tuning our own models based on the player input we get.
Fessler: We’re not trying to find “the one model”. There are different parts of the game that require different things. Some parts of the game require very low latency – simple reasoning tasks like labelling the sentiment of a sentence. We’ll choose a model that does that. Other elements of the game require very high-level reasoning, for example, that one about spreading gossip in the town about Jessica. We might lean on a much more robust model for those sorts of tasks. We’re experimenting all the time about what the best fit is.
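The per-task model selection Fessler describes could be expressed as a simple routing table: each task declares a latency budget and whether it needs deep reasoning, and the router picks the fastest model that qualifies. The model names, tasks, and latency figures below are placeholders, not Incite’s actual configuration:

```python
# Hypothetical model-routing table: pick a model per task based on
# latency and reasoning requirements. All values are illustrative.

TASKS = {
    # task name:       (needs deep reasoning, latency budget in ms)
    "label_sentiment": (False, 150),
    "plan_gossip":     (True, 2000),
}

MODELS = [
    # (name, supports deep reasoning, typical latency in ms)
    ("fast-small-model", False, 80),
    ("large-reasoning-model", True, 1200),
]


def pick_model(task: str) -> str:
    """Choose the fastest model that meets the task's requirements."""
    needs_reasoning, budget = TASKS[task]
    candidates = [
        (latency, name)
        for name, can_reason, latency in MODELS
        if latency <= budget and (can_reason or not needs_reasoning)
    ]
    if not candidates:
        raise ValueError(f"no model satisfies task '{task}'")
    return min(candidates)[1]


print(pick_model("label_sentiment"))  # -> fast-small-model
print(pick_model("plan_gossip"))      # -> large-reasoning-model
```

A sentiment-labelling call routes to the cheap, low-latency model, while a high-level planning task falls through to the heavier one – the kind of trade-off Fessler describes experimenting with.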
Let’s talk about the look and feel of the game. Part of that is the unique, cosy art style. Although you’re using AI for the language-driven gameplay, you’re not using any AI-generated art. Can you tell us why that distinction is important to you?
Fessler: There are a few different directions to go in here. We’re competing against The Sims, and that is huge; it’s a monster of a game with tons of content that took hundreds of people to produce. We are a small, scrappy startup! We need to figure out how we adequately compete against The Sims in terms of content.
That impacts our decisions in a variety of ways. We think AI can really help us on the simulation side. But on the art side: no AI art. We needed an art style that allowed us to move quickly and that was also visually appealing. We’re calling it our 2.5-D art style. Every asset in the game is hand drawn, using a 2D illustration process, and then it’s projection-mapped onto the geometry to give it an extra layer of depth and dimension that you typically don’t see in these types of games. It’s very appealing to us because of its efficiency, and the style has also been positively received. I put out a tutorial about how to produce art in our 2.5-D style, and it blew up! It’s one of those things that we validated early on.
There are two reasons why we’ve done it like that and not used AI. The boring reason is it’s just not there yet! AI art is not ready for production use in these contexts. Maybe you could use it at the concept stage or for inspiration, maybe for a marketing pitch deck. But once you get into the specific needs of what a game engine requires, you’re fighting an uphill battle.
The more interesting answer is that we still believe in the craft of game-making and the artistry of it. There’s a lot of discussion going on, especially in art circles, about what it means for artists to live in a post-AI world. There are a lot of feelings about the fact that these models were trained on their art without their permission. It’s a very deep topic, and I totally understand it – I feel a lot of those same things myself.
There’s nuance to the topic, but at Incite, our company’s mission is to “incite” the best stories in games. So we’re not looking at AI as a means to make our jobs faster or more efficient or anything like that, and we’re not removing the craftsmanship.
We’re interested in AI to make new kinds of experiences that were not possible before. How do we drive this emergent narrative concept in a way that we haven’t seen in games before? That’s pretty much exclusively where we lean on AI at all. In the past, people have handwritten dialogue, yes, but could you use that to make an emergent narrative? No, you just can’t. It requires an infinite number of possible things that a character can say. And that is where it makes sense for AI to exist.
Friday: One other reason is the UGC [user-generated content] side of it. Once players are able to make their own stuff, why do you need generative AI to make things? We don’t need AI to make a blue chair; once anyone can make a chair, there’ll be plenty of blue chairs to pick from.
“There’s a long history of UGC not being properly supported. Why don’t you build that into the core game and make it easy to create and play experiences made by other people?”
Andrew Friday
Generative AI art would generate a bunch of slop that is low quality. We prefer to have high-quality stuff! When you played Second Life, everything that you would want existed in there because someone made it. Why do you need AI to remake the same table 1,000 different ways? Humans will make different, good ones: if there isn’t something they want, they’ll make it themselves.
UGC is a very hot topic at the moment. You’ve built AuraVale to support modding and empower the players to create. How do you see that impacting the longevity of the game and players’ engagement over time?
Friday: All three of us have pretty deep expertise in UGC. Nico was at Roblox early on and worked on systems there. Dan was the first person to join Manticore. We know the power of UGC and the work that goes behind it. We’ve been thinking about it for a long time.
Fessler: It’s something we have a shared history with and that we understand. We are passionate about what’s possible when you make a game that’s compelling but also allow it to grow through the community.
We’re passionate about AuraVale being much, much more than just what we make. Roblox is the largest platform in the world for games because of their respect for UGC, and that was largely what we were chasing at Manticore as well. There’s a long history of games that missed that opportunity to capture the momentum that came from modding. Blizzard missed the mark. They could have owned the entire MOBA ecosystem if they had only built a system that was able to have a symbiotic relationship with the modders and the UGC creators.
I mentioned Maslow’s Hierarchy of Needs earlier. One of the things that’s interesting about that pyramid is that at the bottom of the pyramid, everybody has the same shared needs. Everybody needs food and shelter. Everybody needs water. You can build systems easily to capture something that everybody can relate to. But as you work up the pyramid, not only do you need more intelligent possibilities with characters that can interpret things in a much higher order than before, but also everybody’s version of what they aspire to be starts to diverge. You might want to be a rock star. I might want to be a chef. That becomes a content problem! How do we represent every player’s aspirations at the top of this pyramid? The answer is you can’t. The only way you can do this is if you open this up to players to create those kinds of experiences with you.
Even before we announced AuraVale, we’ve been experimenting with agentic games built on top of our game engine. We’ve released a few of them. One was, “Can you survive a night with a vampire?” You enter a room with a girl that you thought was really into you, but then she reveals that she’s actually a vampire, and you have to convince her to let you live and leave. It’s just representative of the kinds of things that you could do today on top of our game engine. I suspect there are going to be all sorts of experiments or scenarios that people build that go in different directions.
Friday: One last observation on UGC is that even with a game like The Sims, there’s a long history of UGC not being properly supported. Go to any forum today, and people will be [talking about] downloading a bunch of mods, but you have to be a hacker, basically! You download a bunch of shady mods onto your computer. Why don’t you just build that into the core game and make it really easy to create and play experiences made by other people?
A word you haven’t used today is “metaverse”. Is that a dirty word now? We’re talking about Roblox and shared universes and opportunities for users to influence things. Are you creating a metaverse sandpit to play in?
Fessler: One thing I’ve learned over the past few years is that lots of us agree there are appealing concepts to the metaverse. Even people who are very staunchly against the metaverse, if you break it down into its components, find it appealing. It already exists in a lot of different contexts! You’ve got World of Warcraft. If you broke down the definition of “metaverse” into its components, you would say that World of Warcraft fits, right? So, it’s not that the metaverse as a concept is wrong. It’s just that you need a compelling host experience for people to care. You can’t just build a place and expect people to go there. You need to make an interesting place, and then people will collaborate with you.
“Right now, we’re focused on the actual game development. When we need to scale and make things faster, cheaper, better, then we have a plan to start training and fine-tuning our own models based on the player input we get”
Andrew Friday
That’s what we’re hoping to do with AuraVale. We’re basing it on life simulation as a genre, but we know that it can be much bigger. If the foundation of the experience is not compelling, if people don’t want to play from day one, then creators are never going to care enough to build on top of it. So that’s our first order of business – make a really compelling world. If you want to call it a metaverse later, if it starts to feel like that, then sure. But we are focusing on the value proposition. We released our trailer for AuraVale, and we did not mention the word AI at all – not because we’re hiding it (anybody who watches the trailer will see that it is AI) but because when you focus on the value proposition, people can’t really argue with it.
What’s next for AuraVale, and what’s your timeline?
Friday: We’re working towards the first town. We want to have a demo later this year where you have a bunch of houses. You’ll have your Aura in the house. They have needs. You walk around; you take care of them. You can leave the house, you can walk around the town, talk to any other character. They’re going to have problems which you can help solve. There’ll be points of interest you can visit, a bar where you can meet people. That’s the focus for this year: to get that core of the game working. And then shooting for Early Access in 2025.
Further down the rabbit hole
Your digest of what’s been going on in the world of AI and games since last week.
According to investment firm Konvoy’s latest Gaming Industry Report, 22% of the games sector’s VC funding in Q3 went into companies related to or referencing AI. (While noteworthy, Konvoy observes that less VC capital is going into AI-related companies now compared to blockchain in 2021.)
Christoffer Holmgård, CEO of modl.ai, has blogged about how AI-powered testing bots can prevent expensive and reputation-damaging buggy launches. (Holmgård was our AI|G interviewee back in early September.)
SAG-AFTRA and Ethovox partner to build ethical AI voice replicas. Meanwhile, SAG-AFTRA extends interactive media agreement negotiations after three days of talks.
The growing PG Connects London 2025 conference is accepting speaker submissions in all categories, including its UGC Update, Future Formats and Practical AI tracks.
Bitmagic took first place in the Generative AI category at Game Changers 2025, hosted by Lightspeed Venture Partners, GamesBeat, and NASDAQ. You can read an interview with Bitmagic in AI|G next week. In the meantime, here’s their trailer: