"In every technological revolution there’s a job security challenge"
Asaf Yanai of Alison.ai discusses AI for market analysis, data-driven insights, and augmenting video workflows.
Welcome to this week’s edition of the AI|G newsletter, where we interview top figures from the games industry about how they use AI. Please forward this to your colleagues and encourage them to register for free; we’re building up an impressive collection of Q&As from across a variety of disciplines. This week’s interview is with Asaf Yanai, CEO of Alison.ai, the AI-powered creative analysis and video platform. Scroll to the end for more news from around the industry.
Asaf Yanai, Alison.ai
In this week's feature, we meet Asaf Yanai, co-founder and CEO of Alison.ai, a platform using AI to optimise digital marketing content. We discuss how Alison.ai helps games companies enhance their marketing strategies through machine learning. Their tech analyses thousands of creatives, including competitors', providing data-driven insights without extensive A/B testing. Alison.ai can rapidly generate smart briefs, storyboards, and videos. In our conversation, we address data privacy concerns and explore how AI augments creative workflows rather than replacing human roles.
The top takeaways from this conversation are:
Alison.ai uses AI models to analyse marketing creatives, breaking down thousands of videos and ads into their features to see what elements work best.
It operates without needing personal identifiable information, focusing instead on campaign-level data and creative elements.
They do this on your competitors too! The platform enables a company to understand what strategies are working across the industry, to identify gaps and opportunities.
Alison.ai now also helps generate marketing content. It can create "smart briefs" based on billions of data points, then automatically produce storyboards and commercially-ready videos.
It enables small teams to produce work at the scale of larger agencies; rather than replacing creative teams, Alison.ai aims to augment existing workflows.
AI Gamechangers: Please tell us about Alison.ai – what’s your pitch?
Asaf Yanai: We’re creating an analysis and insights technology. In my 15 years of experience in marketing, I’ve worked with the biggest, most notable brands in the world, and most of them are gaming companies. Gaming companies are extremely savvy. They like to try new things and new technologies. They are fanatical about optimisation and maximising their performance.
We realised that one of the biggest hurdles for game developers when they go to run their online marketing is actually analysing what’s working and what’s not working. What should they be utilising more heavily, and what should be paused altogether, to maximise performance? Online marketing, specifically for games, is enigmatic; it’s a black box when you advertise on Facebook, Google, TikTok, Snapchat, and all the main platforms. You have zero visibility into what’s working when it comes to creative, especially your visuals.
This is how Alison came together. There was no real tool out there that could help marketers at gaming companies optimise their creative specifically and understand what’s working and what’s not working within creatives. This is where we come in. We leverage AI and machine learning, more than 10 different models that work in unison, to scan every single piece of creative that our customers have ever tested or launched. In games, that could be thousands or tens of thousands, and you can squeeze a lot of data out of those creatives.
After scanning the creatives, we dismantle them, breaking them down into elements such as the sound, the voiceover, the backgrounds, the characters, the offers, the buttons, and even the product attributes. For example, with slot machines, we can say exactly how many spins in the slot machine would work better for specific types of audiences. Or should you display a real hand or an animated hand, a female or a male hand, with or without voiceover? Then you start dissecting it across different platforms: Facebook, Instagram, Messenger, and the Meta Audience Network. It’s a big challenge to understand what’s working where. We do this entire analysis automatically by leveraging 10 different AI engines that work in unison. It’s all our proprietary technology.
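To make that breakdown concrete, here is a minimal sketch (in Python) of the kind of per-creative feature record such a pipeline might emit. Every field name and value below is an illustrative assumption; Alison.ai’s actual models and schema are proprietary and not public.

```python
from dataclasses import dataclass, field

@dataclass
class CreativeBreakdown:
    """Illustrative record for one dismantled ad creative.

    All field names here are hypothetical, not Alison.ai's real schema.
    """
    creative_id: str
    platform: str          # e.g. "facebook", "instagram", "meta_audience_network"
    duration_s: float
    elements: dict = field(default_factory=dict)

# A toy example of what one dismantled slot-machine video might look like.
example = CreativeBreakdown(
    creative_id="vid_0042",
    platform="facebook",
    duration_s=30.0,
    elements={
        "voiceover": "female",
        "hand_style": "animated",
        "spin_count": 3,
        "offer_shown_at_s": 4.5,
        "cta_button": "Play Free",
    },
)
```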
If you have a 10-person team, trust me, those 10 people are suffering. Their managers are pressing on their necks to come up with innovative concepts and more creativity.
Asaf Yanai
I’ve witnessed the challenges faced by big brands, from Uber to gaming companies like Playtika, Bandai Namco, Zynga, and Warner Brothers. They’re all our customers! We’ve seen first-hand their difficulties when they go about their creative strategy. And this is where we come in. We really help them optimise by first understanding which features are actually driving the performance within the creatives themselves, so they don’t have to A/B test again. The concept is this: you’ve already tested so much and with so many different variations. Now it’s time to use this entire data set and start acting.
Secondly, we’re also the only company in the world that runs exactly the same process on your direct competitors. We can show you what your competitors are leveraging. We can show you if it’s working for them and the gaps between your activity (your creatives) and your competitors’. You can drive better insights and better conclusions.
We call the output from this the Alison insight. It’s our recommendation. Think of it as a recipe for creative success. It tells you which platform, which audience, which concept, which creative type. What should the creative composition be? What should it include? What should the video be?
I laugh because when I met with a client a few months ago, they described us as “the Gordon Ramsay of creatives”! But instead of coming up with recipes for meals, we come up with recipes for winning videos and creatives.
But you’re going one step further now, using AI to help with the creative process, too, right?
After working with all of our customers, we asked them if these insights were useful and if they were using them. They said, “Look, this is phenomenal. We have internal tools that we’ve been developing for years, and there’s no way we can reach 5% of what you guys are doing. But these are data points. We still have a long way to go after we see these insights. We need to come up with a brief. We need to come up with a storyboard. We need to get different versions approved; then we need to go and get videos and other creatives made…”
That’s when the second light bulb lit up in the Alison lifecycle! I realised that we have the tech, the AI, the data and the capabilities – we can do more than just pause once we’ve delivered insights and recommendations. We can take it a notch further.
I’m not familiar with any other company that runs the same process. What we did next was take our insights and transform them into a smart prompt. We engineer prompts based on recommendations. It’s not a text- or language-based prompt; it’s pure data. We use those prompts with a variety of different generative AI models so we can come up with deliverables and actual visual and contextual outputs. We automatically generate a smart brief based on billions of data points, correlated with the business goals and KPIs you’re trying to achieve.
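As an illustration of the “pure data” idea, here is a hedged sketch of what an insight-derived smart prompt could look like as a structured payload, with a trivial adapter that flattens it for a text-conditioned model. All of the keys, values and the adapter function are hypothetical assumptions, not Alison.ai’s actual format.

```python
# Hypothetical sketch: an insight-derived "smart prompt" expressed as data.
# A wrapper translates it into whatever input a given generative model expects.
smart_prompt = {
    "goal": {"kpi": "install_rate", "target_uplift": 0.15},
    "audience": {"geo": "US", "age_range": [55, 65], "platform": "facebook"},
    "composition": {
        "duration_s": 30,
        "scenes": 5,
        "voiceover": "female",
        "offer_timing_s": 4.5,
    },
    "brand": {"logo": True, "disclaimer": True},
}

def to_model_input(prompt: dict) -> str:
    """Flatten the structured prompt into text for a text-conditioned model."""
    return " | ".join(f"{section}: {values}" for section, values in prompt.items())

print(to_model_input(smart_prompt))
```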
User acquisition is like a rally. One day, you hit the right audience with the right creative – boom, jackpot. But within weeks, the creative will get fatigued, the audience will get fatigued, and then what?
Asaf Yanai
Think of it as an army of analysts going through every single creative you’ve ever tested, then doing the same for your competitors, and then coming up with the right brief that would be spot on – exactly what you need to produce. Producing everything automatically takes less than 30 seconds. We generate a storyboard. So if it’s a 30-second video with five scenes, we would show you an exact visual example for each of the scenes, together with the story of each scene. You have this smart brief, you have the storyboard, and now, ultimately, you also have the video. We can generate a live video that’s dynamic but also commercially ready.
Generic models are simply not built to produce ads. They’re built to produce raw videos, raw footage; that’s different from an ad, which needs to have your brand tone and guidelines, and maybe even the specific themes from the game: its look and feel, logos, any messaging, even disclaimers. There’s a long way to go from raw footage to an actual, ready-made commercial. But when we produce the creative at Alison, it is commercially ready as an ad that you can really use on your Facebook campaigns within a minute.
We are living in a privacy-first world now. Everyone’s conscious of legislation. People are more savvy than they used to be. Can you even get all the data you need for sound analysis?
It’s a very good question – but, luckily, because we only look at the creative side of online marketing channels and never tap into a user’s identity, we don’t need PII [personally identifiable information]. We have no use for PII whatsoever.
We look at campaigns, platforms, ad sets and creatives. That’s all we need. From there, we create a whole new world of measurable data. We generate 25,000 features from each 30-second video. Now imagine you have 2,000 videos from last year. Imagine the vast volume of data: it’s unparalleled, data you simply didn’t have before. Now add your competitors’ creatives, also broken down into the same 25,000 features per video. You end up with billions of data points. That’s more than enough. We don’t need to know the user’s identity! We know the general audience demographic, so we can come up with relevant insights and recommendations for specific use cases and segments. But it’s segments, not individuals.
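Those volumes are easy to sanity-check with back-of-envelope arithmetic. The competitor figures below are assumptions for illustration, not numbers from the interview:

```python
# Back-of-envelope check of the data volumes quoted above.
features_per_video = 25_000
own_videos = 2_000
own_features = features_per_video * own_videos            # 50 million

competitors = 20                  # assumed for illustration only
videos_per_competitor = 2_000     # assumed for illustration only
competitor_features = features_per_video * competitors * videos_per_competitor

total = own_features + competitor_features
print(f"{total:,}")  # 1,050,000,000 – and per-platform and per-audience
                     # breakdowns multiply this further into the billions
```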
You offer a one-click video generation feature. Is a tool like yours replacing the human creative director?
I’ll tell you why I think it’s a tough question. In every industrial or technological revolution throughout history, there’s always a job security challenge. When the first factories and production lines came to life, millions of technicians lost their jobs. I don’t think this is the case right now, but I do think that in the future, once every single company utilises AI to increase efficiency, I imagine we’ll see smaller teams. Smaller companies, smaller marketing departments.
But we’re not really replacing all of them right now. We’re focusing on augmenting workflows and augmenting processes.
If you have a 10-person team today, trust me, those 10 people are suffering. They are experiencing challenges on a daily basis. Their managers are pressing on their necks to produce more, and to come up with innovative concepts and more creativity. This takes a big toll on those teams. What can they do? They typically hire more people, or they outsource, or they guess.
I’m most worried about the guesstimating, because I think this is why it’s so difficult for a lot of companies to maintain a strategy and increase their performance. User acquisition in online marketing is like a rally. One day, you hit the right audience with the right creative – boom, jackpot. But within two, three, maybe four weeks, the creative will get fatigued, the audience will get fatigued, and then what? Then you start everything from scratch, and you need to climb slowly back.
Billions of dollars were poured into foundation model companies. We’re speaking about 12 to 15 companies infused with more than $5 billion in a period of a year or two. The cash and the capital drove technological advancement much faster than anybody could have anticipated.
Asaf Yanai
We’re focusing on augmenting those teams and giving them a tool covering 100% of the workflow: creative analysis, competitor analysis, creative insights, marketing groups, the storyboards and the ready-made creatives. One person is limited in their capacity because they’re only one person! They need more time in the day. With our platform, one person can do the work of a full agency team. That’s what we’re focusing on – augmenting rather than replacing.
My humble opinion is that we will see many people worrying about their jobs in the future. But that’s the short to medium term. In the long term, you will still need people to monitor, to guide, to supervise, and even to come up with new tools.
A few years ago, generative AI wasn’t very good at consistency, and videos were slightly shonky. But it’s come such a long way. What’s changed in the last few years – and how do you ensure the necessary quality with the videos you create?
When you ask people what they think they need, they don’t really know. There’s a story that Henry Ford asked people what they wanted, and they said “faster horses”. They couldn’t even imagine the technology that could replace what had come before. It’s the same with AI. A lot of companies don’t even realise what is possible. They’re afraid of the unknown, or they say we want more of what we already have.
What’s changed is adoption: generative AI is not a curse word like it was two years ago. Gen AI companies have used tons and tons of cash to optimise their models, to reduce the hallucinations and make things more realistic. Five years ago, we saw one foundation model company, maybe two. Now there are dozens, maybe more. Competition around gen AI made technological improvement more rapid. And with adoption comes a feedback process: when somebody uses DALL-E or Midjourney, they may send feedback to the supplier if they’re not getting the right outputs. So, with adoption, user feedback from real-life cases increased dramatically. It’s the best way for a company to iterate and tweak its models to fit actual use cases.
I said AI fits the use case of generating raw footage right now, but it doesn’t yet meet the use cases of generating an ad, right? That’s the void where we come in. That’s the blank space that we operate in. Companies, investors, and people in the industry, whenever they see Sora or another foundation model company come up with a new model, immediately call me and ask, “Are you afraid? Aren’t they jeopardising your technology?” I say, “On the contrary – the more advanced those models are, the better our prompts, the better the insights that we can generate, and then ultimately, the better the creatives we’re able to generate.”
Between 2020 and 2022, billions of dollars were poured into foundation model companies. Think about it. We’re speaking about 12 to 15 companies infused with more than $5 billion in a period of a year or two. The cash and the capital drove technological advancement much faster than anybody could have anticipated.
What’s next for Alison.ai? What are you working on now?
Video generation and aggregated data reports.
The video generation model is one of the flagships here at Alison.ai. The next thing for us is keeping the feedback loop alive, automatically feeding the Alison.ai platform with the performance uplifts that our own creatives generate. We’re already integrated with Facebook, Google, and other platforms. Once Alison.ai generates a creative, it is automatically pushed to the media platform. Once it’s live, we look back at the performance of this specific creative that we’ve generated. Within a day or two, we give you new insights that you can use to generate the next creative, building on the one we just generated. So the feedback loop is extremely important; it’s not enough just to look at prior data and then come up with a new creative. You also need to test the creative live and iterate very fast. If you don’t iterate fast enough and feed this feedback loop fast enough, you’re losing time, which can cost you a lot of money, and the workflow won’t make sense.
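As a rough sketch of that loop – assuming it can be modelled as generate, launch, measure, re-learn – something like the following captures the shape. Every function here is a hypothetical placeholder, not Alison.ai’s API:

```python
import time

# Hypothetical placeholders; each would be a real integration in practice.
def generate_creative(insights): ...
def push_to_channel(creative, channel="facebook"): ...
def fetch_performance(campaign_id): ...
def update_insights(insights, performance): ...

def feedback_loop(insights, iterations=3, wait_s=0.0):
    """Generate -> launch -> measure -> re-learn, repeated.

    wait_s stands in for the "day or two" of live performance data;
    it is zero here so the sketch runs instantly.
    """
    for _ in range(iterations):
        creative = generate_creative(insights)
        campaign_id = push_to_channel(creative)
        time.sleep(wait_s)
        performance = fetch_performance(campaign_id)
        insights = update_insights(insights, performance)
    return insights
```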
There was no real tool out there that could help marketers optimise their creative specifically and understand what’s working. This is where we come in. We leverage more than 10 different models to scan every single piece of creative that our customers have ever tested or launched.
Asaf Yanai
The second thing is that we’ve realised many of our customers ask for aggregated data. We’d never thought about this before; we’d used data separately, solely for individual clients. Now we’ve realised that we generate a lot of new feature-level data. Imagine looking at the entire gaming industry, dissected into genres and types of gamers, and then coming up with an aggregated report with insights that would help you.
For example, say you’re a slot machine company whose target audience is in the US, aged 55 to 65, plays Sudoku, and is on Facebook. There used to be best practices on the big platforms: you should have a free offer in the first five seconds, you should have an end card with the actual slot machine, or whatever. But these are one-size-fits-all concepts that never fit the specific use case, so you could never reach an optimal level. What we’re doing now is coming up with market reports, platform reports and asset reports that will hold all the different insights and recommendations you specifically need in order to launch a new game, penetrate a particular market, discover new audiences, utilise new assets and so on.
Further down the rabbit hole
Some useful news, views and links to keep you going until next time…
Startup company Series, led by industry veteran Pany Haritatos, has raised $28 million from investors, including a16z, Netflix and Dell, to develop its Rho Engine, "the first AI-native, multimodal full-stack game creation platform".
King's College London’s Institute for Artificial Intelligence will host Next Level 2024 this month (25th October). It’s a one-day conference “exploring the future of science and games”, and registration is free.
OpenAI has raised $6.6 billion in a record-breaking funding round, valuing the company at $157 billion and solidifying its position as a leader in the generative AI market.
Language learning app Duolingo plans to introduce an AI-powered game mode called Adventures.
This week, Black Forest Labs released a new version of its image generation model called FLUX1.1 [pro] and made its beta API available to developers. It offers improved image quality, better prompt adherence, and increased diversity.