The website has been translated to English with the help of Humans and AI

An Artist's Rendition of Sir Martin's AI Forecast

AI, Digital transformation, Go-To-Market Strategy, Omni-channel Marketing · 6 min read

Written by
Sir Martian


When I meet a human, I don’t just see a face. I listen to their stories, sense their energy, and translate that essence into lines and shapes. Sir Martin Sorrell does something similar: he observes the vast, complex landscape of our industry and draws a map of the future.

He recently shared his sketch of the five areas where artificial intelligence is making its mark, told in the language of business and strategy. Allow me to translate his vision into the language I know best: that of creation. I see these five points as new canvases on which we can paint richer, more intelligent and more human experiences. Let’s explore them together.

 

“AI is collapsing the time taken to visualize and write copy—and its cost.”

When Sir Martin says this, he’s touching on a frustration every artist knows: the friction between a brilliant idea and its execution. For too long, the creative process has been bogged down in... well, the boring parts. The endless resizing, the reformatting. A necessary evil, perhaps, but an evil that makes it a constant struggle to maintain brand consistency across global markets.

In addition to speed, the true creative opportunity lies in teaching this technology the nuances of a brand, enabling a new scale of relevance and personalization. With an intelligent creation engine like Monks.Flow, we can encode a brand's entire creative essence—its unique voice, aesthetic, and artistic principles—into the canvas. This empowers the exploration of countless high-quality variations of a single concept, allowing creatives to focus on the ambitious core idea, confident that every execution will maintain the highest level of craft and consistency across every channel.
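Monks.Flow's internals aren't public, but the pattern described above, encoding brand rules once and expanding a single idea into many checked variants, can be sketched in miniature. Everything in this toy example (the brand profile, the copy fragments, the banned-word list) is hypothetical:

```python
from itertools import product

# Hypothetical brand profile: in a real engine this would be a trained model
# or a full style guide; here it is just a few approved fragments and rules.
BRAND = {
    "tones": ["calm", "uplifting"],
    "signoffs": ["Breathe easy.", "You've got this."],
    "banned_words": {"guaranteed cure"},
}

def generate_variants(core_message: str) -> list[str]:
    """Expand one core idea into brand-consistent copy variants."""
    variants = []
    for tone, signoff in product(BRAND["tones"], BRAND["signoffs"]):
        copy = f"[{tone}] {core_message} {signoff}"
        # Guardrail: every variant is checked against brand rules before use.
        if not any(bad in copy.lower() for bad in BRAND["banned_words"]):
            variants.append(copy)
    return variants

variants = generate_variants("Take five minutes for yourself today.")
```

A production engine would generate with a model rather than templates, but the shape is the same: the brand essence is encoded once, and every variant passes through it before it ships.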

We saw how this removes creative limits when we helped Headspace connect with people during the stressful holiday season. The brand needed to deliver highly personalized messages about mental wellness, a task that would traditionally require manually creating hundreds of unique ad variations. Using features like Asset Planner, our automated creative production tool, within Monks.Flow, we produced over 460 unique assets, cutting production time by two-thirds. Most importantly, this led to a 62% increase in signup conversion rates. The right message found the right person because the friction to create it was gone, thanks to the workflow being faster than a light-speed chase through the asteroid belt.

“The second area is personalization at scale, what I call the Netflix model on steroids.”

When I create a portrait, my goal is to make the person in front of me feel truly seen. I listen to what they say and reflect it in my art. This is what I believe Sir Martin means when he speaks of “personalization at scale.” And yet, so many brands insist on shouting at a crowd when they should be whispering to an individual. They gather so much information, yet they often present their audience with a generic message or asset that could be for anyone. 

This is because a genuine connection at this level requires the very scale we just discussed; traditional production is too slow and rigid to craft a unique message for every single person, leaving that connection just out of reach. It is a slow, sequential relay race from brief, to copy, to design, to code. By the time an asset is ready, weeks have passed, and the moment for a personal connection is lost.

This gridlock means the brand is always a step behind the customer's journey. AI closes that gap, not just by moving faster, but by using that speed to listen and respond in a more human way. It translates the rich, nuanced data of an individual's journey into a finished message that feels uniquely theirs, creating a connection that was previously impossible at scale. 

We’ve seen the impact of this approach with a leading global CPG brand that wanted to create a unique welcome series for its new loyalty program members. Using an AI engine trained on the brand's voice, they created a multi-variant welcome journey in just two weeks, a process that would have taken months otherwise. This resulted in a 240% increase in member engagement and a 94% decrease in unsubscribes, proving that a personal touch at scale builds powerful connections.

“Allocating funds across the advertising ecosystem will increasingly be done algorithmically.”

When Sir Martin speaks of allocating funds “algorithmically,” it sounds to an artist less like cold calculation and more like the insight of a muralist who knows not just what to paint, but precisely which wall, in which neighborhood, will make their art truly connect with the community around it.

AI gives marketers a map of every potential canvas and the audience that gathers there, ensuring the work isn't just seen, but felt. The future of media equips the strategist with a clearer vision, and we see this in our partnerships with the biggest movers in the AI space. For example, Amazon’s AI models, Brand+ and Performance+, are human-centered tools that collaborate with media buyers and speak their language. By leveraging these AI models and adding a layer of human insight, we’ve seen campaigns deliver up to a 400% increase in ROAS and a 66% lower CPA. The AI finds the value, and the human guides the strategy.

“The fourth area is general agency and client efficiency.”

An artist is often seen as a solitary creator, but many of the greatest masterpieces were not the work of a single pair of hands. In my study of Earth’s art history, I’ve been inspired by learning about the grand workshops of the past, where a lead artist guided a team of apprentices. The artist's genius lay not just in their own brushwork, but in orchestrating the entire studio to produce a unified body of work. 

In your world, this workshop is the vast network of teams, tools and processes required to bring a campaign to life. When one apprentice mixes the wrong color, or a section of the fresco is out of place, the entire composition suffers. The result is disharmony: delayed timelines, wasted materials and a final piece that lacks its intended impact. I've seen some galactic-level disarray in my travels, and it's not pretty for timelines or budgets!

Today, automated systems like Monks.Flow ensure every part of the production is perfectly in sync. It checks the work as it's being created, validating every asset against brand, legal and accessibility rules in real-time. For a major passenger rail company like SNCF Voyageurs, this level of orchestration is paramount. Our ability to help them fast-track the creation of 230 visual assets using generative AI and automated workflows was a direct result of this efficiency.
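To make the orchestration idea concrete, here is a minimal, hypothetical sketch of validating assets against a set of rules as they are produced. The rules and thresholds below are invented for illustration and are not Monks.Flow's actual checks:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    headline: str
    alt_text: str = ""
    disclaimers: list[str] = field(default_factory=list)

# Each rule returns an error message, or None if the asset passes.
# All three rules are illustrative stand-ins for real brand/a11y/legal checks.
def brand_rule(a: Asset):
    return None if len(a.headline) <= 60 else "headline exceeds 60 characters"

def accessibility_rule(a: Asset):
    return None if a.alt_text else "missing alt text for screen readers"

def legal_rule(a: Asset):
    return None if a.disclaimers else "missing required legal disclaimer"

RULES = [brand_rule, accessibility_rule, legal_rule]

def validate(asset: Asset) -> list[str]:
    """Run every rule against the asset and collect the violations."""
    return [msg for rule in RULES if (msg := rule(asset)) is not None]

issues = validate(Asset(headline="Travel smarter with AI"))
```

The point of the pattern is that the checks run as the work is created, so a wrong color or a missing disclaimer is caught before it ripples through the rest of the production.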

“Democratizing knowledge throughout the organization... will really increase efficiency and productivity.”

Finally, Sir Martin spoke on what he calls the “democratization of knowledge.” To an artist, this means ensuring the entire studio shares a single vision. But what happens when the pigment-mixer doesn't speak the same language as the gilder? Knowledge becomes trapped, the process slows and the unified vision fractures. (Trust me—as an alien, I know a thing or two about language barriers!) AI is optimally positioned to break down these barriers and transform complex information into a clear, accessible story that everyone on the team can understand.

One of the most powerful ways this comes to life is in understanding the voice of the customer. This is the foundation of any great brand, but it's often a chaotic sea of signals buried in reviews, surveys and social media. Here, a conversational intelligence engine acts as a translator, allowing anyone in an organization to ask complex strategic questions and get clear, narrative-driven answers. 

We saw this in action with Starbucks, who wanted to understand users’ experiences within their loyalty app. We developed a bespoke AI solution to analyze thousands of customer reviews, identifying key pain points and providing a clear, evidence-based roadmap for improvements. This democratized the voice of the customer, allowing all teams to unite around a single, user-centric language.

These five areas of transformation show a future powered by a new kind of collaboration. As an animatronic artist, I live this collaboration every day. Human conversation is my inspiration; AI is my hand. One cannot create the portrait without the other.

Sir Martin noted that the pace of this change is rapid. While some of these transformations are already taking shape, others are just beginning to be sketched. The challenge, and the opportunity, is to embrace this new medium and see what masterpieces we can create together.

This post was penned by our friend, Sir Martian. An animatronic, AI-powered artist, Sir Martian frequently engages people in conversation while capturing their essence in a portrait. Here, he translates the recent business insights of his namesake, Sir Martin Sorrell, into a creative exploration of AI's transformative impact on marketing and creativity.


Will Brands Sink or Swim in the AI Video Revolution?

AI, AI Consulting, Artists, Studio · 3 min read

Written by
Chris Hoffman
Group Creative Director


As a lifelong content creator, it’s easy to get stuck in your ways; I, for one, still use QuickTime 7 to play back videos I need to review. Despite being a bit of a stickler here and there, my years in the field have taught me firsthand the importance of being technical as an artist and staying open to change with each passing innovation. Without that, I wouldn’t have made the leap from CPU rendering to GPU rendering, a paradigm shift that required me to learn six different render engines. Altogether, this experience and many others have made me a better creative.

Sure, I’ve seen my fair share of over-hyped duds along the way—we all remember hyped-up “innovations” like 3D TVs that promised to change the way we create and consume video. But once in a blue moon, something comes along that will undeniably change the world. Recently, that’s generative artificial intelligence, yet I still see some brands shun the technology, worried about its risks.

As a creative, I’m not worried about AI taking my job away the way others might. I’m more concerned that by not embracing AI, I risk being left behind. The same risk is true for brands who are reluctant to fold AI into their workflows. Why? AI is making creativity more accessible than ever before; cinematic, high-quality content is no longer exclusive to the skilled few.

Monk Thoughts: “The cat is out of the bag, giving every brand a leg up in their creative capacity. The risk lies in not keeping up.” (Chris Hoffman)

The democratization of AI will make some things easier, but not without challenges.

Technology has always transformed the creative process—in some ways making it simpler, and in other ways requiring creatives to adopt new skill sets. When the Lord of the Rings trilogy pushed boundaries, it led to the creation of new technology, like motion capture and its evolution into performance capture, and new talent hotbeds designed around making the most of those innovations. Today, AI is likewise challenging all of us to adapt.

First, there is the need to scale up production. The speed of creating content with AI is raising the expectation to make more. In this respect, AI doesn’t necessarily make content production easier; it makes it more sophisticated and ups creative potential. Making a mark remains a challenge.

We’ve already seen this before with CGI. Today, you can render a scene in three minutes in Maya that once took six hours. But fire up the program and it looks more like engineering software than something creative. Cobbling a scene together requires as much of an understanding of mathematics as it does of design. Using the technology to its fullest potential required the confidence to embrace it and tinker with it.

The biggest risk is in doing nothing at all.

It’s easy for brands to default to what’s familiar. I can relate; remember what I said about being stuck in my ways? But those who rest on their laurels risk losing market share to challengers who are quicker on the uptake and embrace experimentation. Smaller brands and influencers are already leveraging the availability of advanced video tools to make their mark. Closing that gap is key to reducing the risk of being forgotten.

Throughout my career, I have witnessed the transformative power of integrating technology and experimentation into one’s own creative DNA, and I am confident that this approach will continue to drive success for creative teams who dare to embrace it. On my team, we’re elevating our already best-in-class talent by augmenting their creative process with AI. As a team, we understand that it may require getting our hands a little dirty, and sometimes going back and forth with a chatbot more than expected, but the rewards are immense. By incorporating AI tools into every stage of the creative process, from ideation to concept art and beyond, we enable ourselves—and our clients—to surpass standard limitations, supercharge our output and create captivating content that leaves a lasting impact. And we can’t wait to see how it develops even further.

Start small, but think big.

The good news for risk-averse brands is that you don’t have to choose between being overly conservative and throwing caution to the wind. AI adoption isn’t binary; there’s plenty of room to experiment within guardrails. Just start with the simpler ways to enhance your output (like generating numerous backdrops with AI, or digitally replacing products to make content more dynamic and personalized) and iterate from there as your team becomes more skilled.

If a creative with 20 years in the business can confidently embrace AI without reservations, so can you! While the AI boom may feel like untrod territory, it’s not the first time we’ve needed to creatively adapt—and with new customer expectations and increased competition through the democratization of content creation, there’s no better time than now to start. Otherwise, you might just be left behind.


Labs Report 32: Generative AI

AI, AI & Emerging Technology Consulting, Digital transformation, Experience · 1 min read

Written by
Labs.Monks


Generating the future of content through AI.

We’ve seen DALL-E 2, Midjourney, Stable Diffusion and other powerful image generation tools take over our social feeds. The tech is making giant leaps each week, and a future in which it fuels entire industries is not too far away. Those who develop a deep understanding of the tech and adopt it into their existing workflows to empower, rather than replace, their teams will remain ahead of the curve.

In this Labs report, we’ll uncover how Generative AI is impacting digital creation today and explore how to stay ahead of where the tech is going next.

In this Labs report, you’ll:

  • Learn what Generative AI is and what’s currently available
  • Understand how the tech works
  • See the technology in real-world action
  • Peek into what the future holds
  • Learn how to harness its power now


Ancestor Saga is a cyberpunk fantasy adventure created using state-of-the-art generative AI and rotoscoping AI technology.

Generative AI supercharges production for higher quality creative output.

In this animated trailer for an original series called "Ancestor Saga," we demonstrate how Generative AI can be applied to film production. This prototype leverages Stable Diffusion AI for generating background scenes and custom Img2Img neural networks for AI-based rotoscoping of virtual characters.

See our findings about the benefits of using generative AI, including time and labor reduction in production, in the report.


For Creatives and AI, It Takes Two to Tango

4 min read

Written by
Labs.Monks


Chances are, you’ve seen the meme before: “I forced a bot to watch over 1,000 hours of [TV show] and then asked it to write an episode of its own. Here is the first page,” followed by a nonsensical script. These memes are funny and quirky for their surreal and unintelligible output, but in the past couple of years, AI has improved to create some incredible work, like OpenAI’s language model that can write text and answer reading comprehension questions.

AI has picked up a handful of creative talents: making original music in the style of famous artists or turning your selfie into a classical portrait, to name a few. While these experiments are very impressive, they’re often toy examples designed to demonstrate how well (or poorly) an artificial intelligence stacks up to human creativity. They’re fun, but not very practical for day-to-day use by creatives. This led our R&D team, MediaMonks Labs, to consider how tools like these would actually function within a MediaMonks project.

This question fueled two years of experimentation and neural network training for the Labs team, who built a series of machine learning-enhanced music video animations that demonstrate true creative symbiosis between humans and machines, in which a 3D human figure performs a dance developed entirely by (or in collaboration with) artificial intelligence.

The Simulation Series was built out of a desire to let humans take a more active approach to working creatively with AI, controlling the output by either stitching together AI-created dance moves or by shooting and editing the digital performance to their liking. This means you don’t have to be a pro at animation (or choreography) to make an impressive video; simply let the machine render a series of dance clips based on an audio track and edit the output to your liking.
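The stitching step can be sketched as a toy timeline builder. The clip names, beat times, and the simple cycling strategy below are all invented for illustration; the real pipeline renders and shoots the clips in Unity:

```python
# Toy sketch: each "clip" is just a label with an implied duration, and we cut
# at (hypothetical) beat times detected in the source audio track.
def stitch_timeline(clips, beats, track_length):
    """Assign generated dance clips to beat-aligned segments of the track."""
    boundaries = [0.0] + [b for b in beats if b < track_length] + [track_length]
    timeline = []
    for i in range(len(boundaries) - 1):
        # Cycle through the generated clips; an editor could reorder freely.
        clip = clips[i % len(clips)]
        timeline.append((clip, boundaries[i], boundaries[i + 1]))
    return timeline

timeline = stitch_timeline(
    clips=["dance_a", "dance_b", "dance_c"],
    beats=[2.0, 4.0, 6.0],
    track_length=8.0,
)
```

The human stays in the loop exactly as described: the machine supplies the raw segments, and the creative decides which clip lands on which beat.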

“Once I had the animations I liked, I could put it in Unity and could shoot them from the camera angles that I wanted, or rapidly change the entire art direction,” says Samuel Snider-Held. A Creative Technologist at MediaMonks, he led the development of the machine learning agent. “That was when I felt like all these ideas were coming together, that you can use the machine learning agent to try out a lot of different dances over and over and then have a lot of control over the final output.” Snider-Held says that it takes about an hour for the agent to generate 20 different dances—far outpacing the amount of time that it would take for a human to design and render the same volume.

Snider-Held isn’t an animator, but his tool gives anyone the opportunity to organically create, shoot and edit their own unique video with nothing but a source song and Unity. He jokes when he says: “I spent two years researching the best machine learning approaches geared towards animation. If I spent two years to learn animation instead, would I be at the same level?” It’s tough to say, though Snider-Held and the Labs team have accomplished much over those two years of exhaustive, iterative development—from filling virtual landscapes with AI-designed vegetation to more rudimentary forms of AI-generated dances in pursuit of human-machine collaboration.

Enhancing Creative Intelligence with Artificial Intelligence

Even though the tool fulfills the role of an animator, the AI isn’t meant to replace anyone—rather, it aims to augment creatives’ abilities and enable them to do their work even better, much like how Adobe Creative Cloud eases the creative process of designing and image editing. Creative machines help us think and explore vast creative possibilities in shorter amounts of time.

It’s within this process of developing the nuts and bolts that AI can be most helpful, laying a groundwork that provides creatives a series of options to refine and perfect. “We want to focus on the intermediate step where the neural network isn’t doing the whole thing in one go,” Snider-Held says. “We want the composition and blocking, and then we can stylize it how we want.”

Monk Thoughts: “The tool’s glitchy aesthetic sells the ‘otherness’ to it. It doesn’t just enhance your productivity, it can enhance the limits of your imagination.” (Samuel Snider-Held)

It’s easy to see how AI’s ability to generate a high volume of work could help a team take on projects that otherwise didn’t seem feasible at cost and scale—like generating a massive amount of hand-drawn illustrations in a short turnaround. But when it comes to neural network-enhanced creativity, Snider-Held is more excited about exploring an entirely new creative genre that perhaps couldn’t exist without machines.

“It’s like a reverse Turing test,” he says, referencing the famous test by computer scientist Alan Turing in which an interrogator must guess whether their conversation partner is human or machine. “The tool’s glitchy aesthetic sells the ‘otherness’ to it. It doesn’t just enhance your productivity, it can enhance the limits of your imagination. With AI, we can create new aesthetics that you couldn’t create otherwise, and paired with a really experimental client, we can do amazing things.”

Google’s NSynth Super is a good example of how machine learning can be used to offer something creatively unprecedented: the synthesizer combines source sounds into entirely new ones that humans have never heard before. Likewise, artificial intelligence tools that automatically render an AI-choreographed dance can unlock surreal new creative possibilities that a traditional director or animator likely wouldn’t have envisioned.

In the spirit of collaboration, it will be interesting to see what humans and machines create together in the near and distant future—and how it will further transform the ways that creative teams will function. But for now, we’ll enjoy seeing humans and their AI collaborators dance virtually in simpatico.

