How AI Is Changing Everything You Know About Marketing

Written by
Monks

Artificial Intelligence is disrupting every aspect of business across content, data and digital media, and technology. The delivery of hyper-personalized experiences, real-time insights via predictive marketing intelligence, and the emergence of owned machine learning models are just a handful of ways that AI has turned business-as-usual into an unfamiliar landscape that continues to evolve in the blink of an eye.

Indeed, the efficiencies and opportunities that AI enables can radically uplevel brand experience and output, though unlocking their true potential depends on knowing how to equip teams to use the technology effectively. Those who can fully leverage the power of AI and infuse it within every aspect of their business will dominate the market. But for those lagging behind, this is a Kodak moment: there will be no loyalty for businesses that are slow to deliver AI-powered experiences that make consumers’ lives easier.

Throughout this guide, we’ll showcase AI’s potential to transform marketing today and tomorrow, as well as the actions you can take right now to reap those rewards and lead in the new era.


You’re one download away from…

  • Preparing for your journey to AI transformation now
  • Establishing a strong data foundation to serve AI innovation
  • Finally unlocking true personalization across the customer journey
  • Future-proofing your business culture and teams for the new era


Download Now

For Media.Monks, AI Isn’t a Pivot—it’s Our Reason for Being

Written by
Wesley ter Haar
Co-CEO, Content


There’s seemingly no limit to what artificial intelligence can do—and if you can find one, it will probably be overcome soon. A year after GitHub launched its Copilot tool, roughly 40% of the code in files where the feature is enabled is written with AI. Google’s Performance Max campaigns apply machine learning to automatically create and deliver customized ads optimized for customers across channels. Everywhere you look, artificial intelligence is disrupting our industry, extending across the realms of creative content, data and digital media, and tech services alike.

It’s true that we’re at the peak of the hype cycle with AI, particularly generative AI. But this disruption is a significant one: a make-or-break moment for many in our industry.

We’re not just bullish on AI; as a change agent in our industry, we believe it makes our model inevitable. Automation has played a major role in helping us scale up our business and outmaneuver our more traditional peers since day one. Meanwhile, today’s rapid evolution of AI is reshaping the brand-agency relationship in real time, along with how brands themselves can go to market. Winning in the new era requires a willingness to embrace all that AI has to offer.

Prior to the hype, generative AI sprang onto the scene about four years ago, and our innovation team quickly began to experiment with it by training our own models. One turned lines and doodles into foliage; another neural animation tool created original dance choreography based on simple input like stick figures. More recently, we made an entire short film with the help of AI at every step of the production process. These experimental prototypes anticipated a future in which AI would open a new world of creative possibilities—that future is now.

But AI will have other, far-reaching potential across our industry as brands set out on new paths to growth beyond just content creation; its rapid development signals the start of truly personalized digital experiences. Cookies are crumbling, and brands have taken that as a cue to recognize the people behind the numbers and provide a better value exchange for their data. Yet the tools that marketers have historically relied on have always failed to turn the ideal of 1:1 marketing across the customer journey into a reality. AI finally unlocks that ability—across content, data and digital media, and tech services—to build and deliver hyper-personalized, highly empathetic customer journeys at scale, and fast.

Some are wary of these tools, focused on the balance between augmenting human ability versus replacing it. That’s a noble and important conversation to have. But automation has always been crucial to the growth of our team, and we are confident AI will continue to lead to more and greater work for our people. As a tech-agnostic team of makers, one that has already enthusiastically adopted these tools across every part of our business, we are the best positioned to take advantage of the abundance AI enables.


A bird's-eye perspective of Gather.town, a 2D metaverse. It served as the location for our recent hackathon, conducted in partnership with automation solutions platform Workato.

There are incredible opportunities on the horizon: making more with less, making marketing intelligence more intelligent, or even driving new value to intellectual property through the training of owned AI systems. Each possibility will challenge the role of a creative, data and digital media, or tech services partner to evolve—lest they be surpassed by any number of new startups and cottage industries that have cropped up around AI. We don’t shy away from the challenge; we’re poised and ready to lead in the disruption, because that’s what we’ve always been built to do.


AI Solutions

Stop Automating & Start Orchestrating Real-Time Relevance

We move your business beyond manual workloads to AI-native strategy, orchestration, and execution at scale for maximum relevance and ROI.

AI transformation for marketing workflows and hyper-personalization.

AI is a platform shift on the scale of the internet, forcing every business to rethink how they go to market. Seizing this opportunity means fundamentally transforming your marketing operations, going far beyond automating old processes to deliver unprecedented relevance and value.

Navigating this shift is a monumental challenge. We act as your change agent, helping you harness this shift while de-risking the transformation. Our approach turns your rigid marketing supply chain into a fluid, real-time engine for growth, applying our unique experience from the front lines of the AI revolution so your business doesn't just survive the shift, but thrives in it.

Solutions

Explore our solutions driving the new era of AI-first marketing.

    • Monks.Flow


      Our AI-powered professional managed service is built for enterprise complexity, deploying specialized agents across your complete marketing ecosystem. These agents handle tasks from strategy and creative to adaptation and performance, transforming your team from operators into orchestrators. Monks.Flow is highly extensible, integrating with any existing tool or workflow to deliver autonomous execution and measurable results without vendor lock-in.

      Learn more
    • AI Campaigns


      Our AI Campaigns solution transforms the rigid content supply chain into a fluid, intelligent ecosystem. We move your brand beyond manual workflows to a system that delivers higher quality, more culturally relevant, and performance-optimized creative at unprecedented speed. By embedding intelligence at every stage, from insights to delivery, we help you get exponentially more from your creative investment while cutting production costs.

    • Search Experience Solutions


      The way brands are discovered is undergoing a massive transformation as search becomes an AI-powered conversational experience. To thrive, your brand must be understood, trusted and recommended by the AI models that now guide users. We combine traditional SEO with Answer Engine Optimization (AEO) to make your brand an authoritative source, ensuring you capture high-intent traffic and dominate the new era of discovery.

      Learn more
    • AI Consulting


      True transformation requires more than technology; it requires a strategic partner who can align people, processes and culture for an AI-native future. Our global consulting practice drives purposeful, AI-powered change for the Fortune 100. We provide expert guidance and actionable roadmaps for everything from AI readiness to building and scaling human-centric technology solutions that move your business forward.


Connect

Reinvent your marketing pipeline with AI and Monks.

Our commitment to responsible innovation.

Our commitment to responsible innovation is publicly detailed in our Ethical Marketing Policy, while our internal operations are guided by the Global AI Policy for all Monks. AI's transformative power demands an equally powerful commitment to responsible governance. Our approach to AI governance is a living framework, designed to ensure our ethical standards evolve and lead the way as the technology advances. This framework is built on a foundation of trust, safety, and human oversight.

  • Human-in-the-Loop Governance. At every stage, our human creative and strategic teams act as the essential control layer, operating within a rigorous governance model anchored by the cross-functional AI Core team (Legal, Data Privacy, and Information Security). We maintain a clear review process with gates for legal clearance and brand safety, ensuring that AI serves as a tool to augment—not replace—the informed, ethical judgment of our experts.
  • Ethical Data & Sourcing. We recognize the risks of models trained on unvetted internet data. We strongly favor tools that use proprietary, transparently sourced and legally permissible datasets. Furthermore, we apply a rigorous Vendor Security Assessment (VSA) process to demand contractual assurances (including NDAs and DPAs) from our technology partners, protecting our clients from copyright and data privacy risks.
  • Active Bias Mitigation. We proactively address the biases that AI models can inherit from their training data, aligning with our broader Diversity, Equity & Inclusion (DEI) commitments. Our teams undergo mandatory trainings and actively work to identify and correct stereotypical or inequitable representations in AI-generated content. This commitment is reflected in various initiatives, ensuring the work we produce authentically reflects the diverse audiences our clients serve.

Learn how we’re helping brands lead in the AI era.


Events

Stay Ahead. Own the AI Curve in 25 Minutes.

Your monthly, hyper-focused dose of AI innovation, inspiration, and essential information. We deliver the strategic insights you need to keep your organization ahead of the industry curve—no time wasted.

Learn more

Want to talk artificial intelligence? Get in touch.


More on AI

The Labs.Monks Count Down to Most Anticipated Trends of 2023

Written by
Labs.Monks


Firmly settled into the new year, we’re already looking ahead at tech trends that lie on the horizon. And who better to predict what they might look like than the Labs.Monks, our innovation team? As an assessment of their trend forecast from one year ago (spoiler alert: they got more than a few right) and a glimpse into the near future of digital creation and consumption, the Labs.Monks have come together again to share their top trends for the new year. Let’s count them down!

10. Digital humans get more realistic.

Digital humans may have earned a spot on our list of trends last year, but we haven’t grown tired of traversing the uncanny valley to play with the technology. In fact, the recent explosion of conversational AI will likely inject new life into digital humans and transform the realms of customer service, entertainment and more. Whether used to hand-craft original characters or refine scanned-in digital twins, digital human creation tools are becoming increasingly sophisticated at delivering lifelike avatars.

“We’ll see more competition between Unreal’s MetaHuman Creator and Unity’s Ziva,” says Geert Eichhorn, Innovation Director. In fact, Media.Monks has used Unreal’s tool to create a digital double of our APAC Chief Executive Officer, Michel de Rijk. Because why not?


9. Motion capture becomes more accessible.

Last year, we released a Labs Report dedicated to motion capture and how its increasing accessibility influenced content production for both professional film teams and everyday consumers. New technologies available at consumer price points are helping to bring motion capture into even more people’s hands. Meta’s Quest Pro headset, which was released late last year, features impressive facial tracking that will be key to expressing the nuances of human emotion in VR. Move.ai, currently in beta, enables 1:1 motion tracking with a group of mobile devices—no bodysuits, no markers, no extra hardware needed. Using computer vision, the platform allows anyone to make motion capture video in any environment.

8. Mixed reality and mirror worlds mature.

With smaller and more comfortable AR headsets shown off already at CES, we can expect augmented and mixed reality to become more immersive, accessible and practical over the course of 2023 (check out more of what we saw at CES here). The VIVE Flow, for example, includes diopters so that users can replicate their prescription lenses in the device, amounting to a more comfortable experience overall. 

But it’s not just about hardware. “One of the major advancements is not in the headsets, but in the software,” says Eichhorn, noting that VPS (visual positioning system) technology has the power to pinpoint a user’s exact position and vantage point in the real world. “They do this positioning by comparing your camera view to a virtual, 3D version of the world, like Street View.” We covered mirror worlds in last year’s trend list, but the development of VPS is now bringing this vision closer to everyday consumers.

While VPS currently works only outdoors, we’ve already seen the power of the technology with Gorillaz performances in Times Square and Piccadilly Circus in December 2022.

Monk Thoughts: “This innovation ultimately unlocks the public space for bespoke digital experiences, where brands can move out of billboards and storefronts and move into the space in between.” (Geert Eichhorn)

7. More enterprises embrace the hybrid model.

For many businesses the return to the office hasn’t been a smooth transition; while some roles require close collaboration within a shared space, others benefit from more flexible setups that support childcare, offer privacy for focus work or provide greater accessibility. Given the benefits of flexible work setups and the development of technologies that build presence in virtual environments, Luis Guajardo Díaz, Creative Technologist, believes more enterprises will embrace the hybrid work model.

Media.Monks’ live broadcast team, for example, built a sophisticated network of cloud-based virtual machines hosted on AWS to enable people distributed around the world to produce live broadcasts and events. Born out of necessity during the pandemic, the workflow goes beyond bringing teams together—it’s designed to overcome some of the challenges traditional broadcast teams face on the ground, like outages or hardware malfunctions. It stands to show how hybrid models can help enhance the ways we work today.

6. Virtual production continues to impress.

Virtual production powered by real-time game engines has become popular in recent years: the beautiful environments of The Mandalorian or the grungy urban landscape of The Matrix showed what was possible by integrating game engines in the production process, while pandemic lockdowns made the technology a necessity for teams who couldn’t shoot on location.

Now, further advancements in game engines and graphics processing offer a look inside the future of virtual production. Sander van der Vegte, VP Emerging Tech and R&D, points to Unreal’s Nanite, which allows for the optimization of raw 3D content in real time.

Monk Thoughts: “From concept to testing, the chronological steps of developing such projects will follow a different and more iterative approach, which opens up creative possibilities that were impossible before.” (Sander van der Vegte)

Localization of content is one example. “In 2023 we’re going to see this versatility in the localization of shoots, where one virtual production shoot can have different settings for different regions, all adapted post-shoot,” says Eichhorn.

5. TV streaming and broadcasts become more interactive.

With virtual production becoming even more powerful, TV and broadcasting will also evolve to become more interactive and immersive. “Translating live, filmed people into real-time models allows for many new creative possibilities,” says van der Vegte. “Imagine unlocking the power to be the cameraman for anything you are watching on TV.”

It might sound like science fiction, but Sander’s vision isn’t far off. At this year’s CES, Sony demoed a platform that uses Hawk-Eye data to generate simulated sports replays. Users can freely control the virtual camera to view the action from any angle—and while not live, the demo illustrates the power of more immersive broadcasts. The technology could be a game changer for sports and televised events, letting audiences feel like they’re part of the action.


4. Metaverse moves become more strategic.

“2021 was a peak hype year for the metaverse and Web3. 2022 was the year of major disillusionment,” says Javier Sancho, Project Manager. “There are plenty of reasons to believe that this was just an overinflated hype, but it’s a recurring pattern in tech history.” Indeed, a “trough of disillusionment” inevitably follows a peak in the hype cycle.

This year will challenge brands to think of where they fit within the metaverse—and how they can leverage the immersive technology to drive bottom-line value. Angelica Ortiz, Senior Creative Technologist, says the key to unlocking value in metaverse spaces is to think beyond one-time activations and instead fuel long-term customer journeys.

Monk Thoughts: “NFTs and crypto have had challenges in the past year from a consumer and legal perspective. Now that the shine is starting to fade, that paves a new road for brands to go beyond PR and think critically about when and how to best evolve and create more connected experiences.” (Angelica Ortiz)

A great example of how brands are using Web3 in impactful ways is the transformation of customer loyalty programs, with unique membership perks and gamified experiences. These programs reinforce how the Web3 ethos is evolving brand-customer relationships by turning consumers into active participants and collaborators.

3. Large language models keep the conversation flowing.

With so much interest in bots like ChatGPT, the Labs.Monks expect large language models (LLMs) will continue to impress as the year goes on. “Large Language Models (LLMs) are artificial intelligence tools that can read, summarize and translate texts, and generate sentences similar to how humans talk and write,” says Eichhorn. These models can hold humanlike conversations, answering complex questions and even writing programs. But these skills open a can of worms, especially in education, where students can outsource their homework to a bot.
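
As a toy illustration of the text-generation ability Eichhorn describes, here is a minimal, hypothetical sketch using the open-source Hugging Face transformers library with the small GPT-2 model (chosen only because it runs anywhere; the LLMs discussed in this article are far larger and more capable):

```python
# Minimal sketch: generating text with a small open-source language model.
# GPT-2 is used purely for illustration; production LLMs are far larger.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can read, summarize and translate text, and they can"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with plausible, human-sounding text.
print(result[0]["generated_text"])
```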

LLMs like GPT are only going to become more powerful, with GPT-4 soon to launch. But despite their impressive ability to understand and mimic human speech, inaccuracies in their responses still need to be worked out. “The results are not entirely trustworthy, so there’s plenty of challenges ahead,” says Eichhorn. “We expect many discussions over AI sentience this year, as the Turing Test is a measurement we’re going to leave behind.” In fact, Google’s LaMDA already triggered debates about sentience last year—so expect more to come.

2. Generative AI paints the future of AI-assisted creativity.

If 2021 was the year of the metaverse, the breakout star of 2022 is generative AI in all its forms: creating copy, music, voiceovers and especially artwork. “Generative AI wasn’t on our list in 2022, although looking back it should have been,” says Eichhorn. “The writing was on the wall, and internally we’ve been working on machine learning and generating assets for years.” 

But while the technology has been embraced by some creatives and technologists, there’s also been some worry and pushback. “These new technologies are so disruptive that we see not only copywriters and illustrators feeling threatened, but also major tech companies needing to catch up to avoid becoming obsolete.”

In response to these concerns, Ortiz anticipates a friendly middle ground where AI will be used to augment—not erase—human creativity. “With the increasing pushback from artists, the industry will find strategic ways to optimize processes, not cut jobs, to improve workflows and let artists do more of what they love and less of what they don’t,” she says. Prior to the generative AI boom, Adobe integrated machine learning and artificial intelligence across its software with Adobe Sensei. More recently, the company announced plans to sell AI-generated images on its stock photography platform.


Ancestor Saga is a cyberpunk fantasy adventure created using state-of-the-art generative AI and rotoscoping AI technology.

Monk Thoughts: “We’re suddenly seeing a very tangible understanding of the power of AI. 2023 will be the Cambrian explosion of AI, and this is going to be accompanied with serious ethical concerns that were previously only theorized about in academia and science fiction.” (Javier Sancho Rodriguez)

1. The definition of “artist” or “creator” changes forever.

Perhaps the most significant trend we anticipate this year isn’t a tech trend; rather, it’s the effect that technology like generative AI and LLMs will have on artists, knowledge workers and society. 

With an abundance of AI-generated content, traditional works of art—illustrations, photographs and more—may lose some of their value. “But on the flip side, these tools let everyone become an artist, including those who were never able to create this kind of work before,” says Eichhorn. This can mean those who lack the training, sure, but it also means those with disabilities who have found particular creative fields to be inaccessible.

When everyone can be an artist, what does being an artist even mean? The new definition will lie in the skills that generative AI forces us to adopt. Working with generative AI doesn’t necessarily eliminate creative decision-making; rather, it changes what the creative process entails. New creative skills, like understanding how to prompt a generative AI for specific results, may reshape the role of the artist into something more akin to a director. 

Eichhorn compares these questions to the rise of digital cameras and Photoshop, both of which changed photography forever while making it more accessible. “The whole process will take many more years to settle in society, but we’ll likely see many discussions this year on what ‘craft’ really entails,” says Eichhorn.

That’s all, but we can expect a few surprises to emerge as the year goes on. Look out for more updates from the Labs.Monks, who regularly release reports, prototypes and podcast episodes that touch on the latest in digital tech, including some of the topics discussed above. Here’s to another year of innovation!


Meet Your Digital Double: How Metahumans Enhance Personalization

Written by
Monks


Picture this: you’re a well-known figure in your field, perhaps even a celebrity, who follows a similar routine every day. You shoot commercials for different markets, reply to every single message in your DMs with a personalized note, host a virtual event where you meet and greet thousands of fans and even teach an on-demand class where you and your students engage in meaningful conversations. It’s all happening at the same time and all over the world, because it’s not your physical self who’s doing it, but your digital double.

Since its launch in 2021, Epic Games’ MetaHuman Creator, a cloud-based app for developing digital humans, has extended its range of possibilities by adding new features—such as Mesh to MetaHuman. Using Unreal Engine, this plugin offers a new way to create a metahuman from a 3D character mesh, allowing developers to import scans of real people. In other words, it makes it easier to create a virtual double of yourself (or anyone else) almost immediately.

Inspired by this significant update and following our tradition of enhancing production workflows using Unreal Engine, our team of dedicated experts decided to build their own prototype. Needless to say, they learned a few things along the way—from the practical possibilities of metahumans to the technicalities of applying motion capture to them. As explained by the experts themselves, here’s what you need to know about creating and unlocking the full potential of virtual humans.


Be everywhere at once—at least virtually.

If you ever fantasized about cloning yourself to keep up with all your commitments or complete your pending tasks, metahumans may be just what you were looking for. Virtually, at least. As digital representatives of existing individuals, metahumans offer endless possibilities in terms of content creation, customer service, film and entertainment at large. Sure, they won’t be able to do your dishes—at least not yet—but if you happen to be a public figure or work with them, it’s a game changer.

By lending likeness rights to their digital doubles, any influencer, celebrity, politician or sports superstar will be able to make simultaneous (digital) appearances and take on more commercial gigs without having to be on set. As John Paite, Chief Creative Officer of Media.Monks India, explains, “Celebrities could use their metahuman for social media posts or smaller advertising tasks that they usually wouldn’t have the availability for.” Similarly, brands collaborating with influencers and celebrities will no longer need to work around their busy schedules.

The truth is, virtual influencers are already a thing—albeit in the shape of fictional characters rather than digital doubles of existing humans. They form communities, partner with brands and are able to engage directly and simultaneously with millions of fans. Furthermore, they are not stuck in one place at a time nor do they operate under timezone constraints. In that regard, celebrities’ digital doubles combine the benefits of virtual humans with the appeal of a real person.

A new frontier of personalization and localization.

Because working with virtual humans can be more time-efficient than working with real humans, they offer valuable opportunities in terms of personalization and localization. Similarly to how we’ve been using Unreal Engine to deliver relevant creative at speed and scale, MetaHuman Creator takes localization to a new level. As Senior Designer Rika Guite says, “If a commercial features someone who is a celebrity in a specific region, for example, this technology makes it easy for the brand to replace them with someone who is better known in a different market, without having to return to set.” 

But not everything is about celebrities. Metahumans are poised to transform the educational landscape, too, as well as many others. “If you combine metahumans with AI, it becomes a powerhouse,” says Paite. “Soon enough, metahumans will be teaching personalized courses, and students will be able to access those at a lower price. We haven’t reached that level yet, but we’ll get there.”

For impeccable realism, the human touch is key.

To test how far metahumans are ready to go, our team scanned our APAC Chief Executive Officer, Michel de Rijk, using photogrammetry with Epic Games’ Reality Capture. This technique works with multiple photographs from different angles, lighting conditions and vantage points to truly capture the depth of each subject and build the base for a realistic metahuman model. Then, we imported the geometry into MetaHuman Creator, which our 3D designers refined using the platform’s editing tools.

“Because Mesh to Metahuman allows you to scan and import your real face, it’s much easier to create digital doubles of real people,” says our Unreal Engine Generalist Nida Arshia. That said, the input of an expert is still necessary to attain top-quality models. “Certain parts of the face, such as the mouth, can be more challenging. Some face structures are harder than others, too. If you want the metahuman to look truly realistic, it’s important to spend some time refining it.” 

Once we got our prototype as close to perfection as possible, we used FaceWare’s facial motion capture technology to unlock real-time facial animations. While FaceWare’s breadth of customization options made it our tool of choice for this particular model, different options are available depending on the budget, timeline and part of the body you want to animate. Unreal’s LiveLink, for example, offers a free version that lets you use your phone and is easy to implement in both real-time and pre-recorded applications, but focuses on facial animations only. Mocap suits with external cameras allow for full-body motion capture, but with mid-fidelity, and recording a real human in a dedicated mocap studio unlocks highly realistic animations for both face and body.

At the same time, the environment we intend the metahuman to inhabit is worth considering, as the clothes, hair, body type and facial structure will all need to fit accordingly. Naturally, different software may adapt better to one style or another. 

While this technology is still incipient and requires some level of expertise, brands can begin to explore different ways to leverage metahumans and save time, money and resources in their content creation, customer service and entertainment efforts. Similarly, creators can start sharpening their skills and co-create alongside brands to expand the realm of possibilities. As Arshia says, “We must continue to push forward in our pursuit of realism by focusing on expanding the variety of skin tones, skin textures and features available so that we can build a future where everyone can be accurately represented.”


Scrap the Manual: Generative AI

Written by
Labs.Monks


Generative AI has taken the creative industry by storm, flooding our social feeds with beautiful creations powered by the technology. But is it here to stay? And what should creators keep in mind?

In this episode of Scrap the Manual, host Angelica Ortiz is joined by fellow Creative Technologist Samuel Snider-Held, who specializes in machine learning and Generative AI. Together, Sam and Angelica answer questions from our audience—breaking down the buzzword into tangible considerations and takeaways—and explain why embracing Generative AI could be a good thing for creators and brands.

Read the discussion below or listen to the episode on your preferred podcast platform.


Angelica: Hey everyone. Welcome to Scrap the Manual, a podcast where we prompt "aha" moments through discussions of technology, creativity, experimentation and how all those work together to address cultural and business challenges. My name's Angelica, and I'm joined today by a very special guest host, Sam Snider-Held.

Sam: Hey, great to be here. My name's Sam. We're both Senior Creative Techs with Media.Monks. I work out of New York City, specifically on machine learning and Generative AI, while Angelica's working from the Netherlands office with the Labs.Monks team.

Angelica: For this episode, we're going to be switching things up a bit and introducing a new segment where we bring in a specialist and go over some common misconceptions about a certain tech.

And, oh boy, are we starting off with a big one: Generative AI. You know, the one that's inspired the long scrolls of Midjourney, Stable Diffusion and DALL-E images and the tech that people just can't seem to get enough of over the past few months. We just recently covered this topic on our Labs Report, so if you haven't already checked that out, definitely go do that. It's not needed to listen to this episode, of course, but it'll definitely help in covering the high-level overview of things. And we also did a prototype that goes more in depth on how we at Media.Monks are looking into this technology and how it fits within our workflows.

For the list of misconceptions we’re busting or confirming today, we gathered this list from across the globe–ranging from art directors to technical directors–to get a variety of what people are thinking about on this topic. So let's go ahead and start with the basics: What in the world is Generative AI?

Sam: Yeah, so from a high level sense, you can think about generative models as AI algorithms that can generate new content based off of the patterns inherent in its training data set. So that might be a bit complex. So another way to explain it is since the dawn of the deep learning revolution back in 2012, computers have been getting increasingly better at understanding what's in an image, the contents of an image. So for instance, you can show a picture of a cat to a computer now and it will be like, "oh yeah, that's a cat." But if you show it, perhaps, a picture of a dog, it'll say, "No, that's not a cat. That's a dog."

So you can think of this as discriminative machine learning. It is discriminating whether or not that is a picture of a dog or a cat. It's discriminating what group of things this picture belongs to. Now with Generative AI, it's trying to do something a little bit different: It's trying to understand what “catness” is. What are the defining features of what makes up a cat image in a picture?

And once you can do that, once you have a function that can describe “catness”, well, then you can just sample from that function and turn it into all sorts of new cats. Cats that the algorithm's actually never seen before, but it just has this idea of “catness” creativity that you can use to create new images.
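
To make that distinction concrete, here is a minimal, hypothetical PyTorch sketch: a discriminative model maps an image to class scores, while a generative model maps random noise to a brand-new image. The tiny untrained networks below are placeholders for illustration only, not real cat models:

```python
import torch
import torch.nn as nn

# Discriminative model: given an image, decide which class it belongs to.
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 2),  # two classes: cat vs. dog
)

# Generative model: given random noise, produce a new image.
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)

image = torch.randn(1, 3, 64, 64)             # stand-in for a real photo
print(classifier(image).softmax(dim=-1))      # "is this a cat or a dog?"

noise = torch.randn(1, 100)                   # sample a point in latent space
new_image = generator(noise).view(1, 3, 64, 64)
print(new_image.shape)                        # a brand-new (here, untrained) image
```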

Angelica: I've heard AI generally described as a child, where you pretty much have to teach it everything. It's starting from a blank slate, but over the course of the years, it is no longer a blank slate. It's been learning from all the different types of training sets that we've been giving it. From various researchers, various teams over the course of time, so it's not blank anymore, but it's interesting to think about what we as humans take for granted and being like, "Oh that's definitely a cat." Or what's a cat versus a lion? Or a cat versus a tiger? Those are the things that we know of, but we have to actually teach AI these things.

Sam: Yeah. They're getting to a point where they're moving past that. They all started with this idea of being these expert systems. These things that could only generate pictures of cats...could only generate pictures of dogs.

But now we're in this new sort of generative pre-training paradigm, where you have these models that are trained by these massive corporations and they have the money to create these things, but then they often open source them to someone else, and those models are actually very generalized. They can very quickly turn their knowledge into something else.

So if it was trained on generating this one thing, you do what we call “fine tuning”, where you train it on another data set to very quickly learn how to generate specifically Bengal cats or tigers or stuff like that. But that is moving more and more towards what we want from artificial intelligence algorithms.

We want them to be generalized. We don't want to have to train a new model for every different task. So we are moving in that direction. And of course they learn from the internet. So anything that's on the internet is probably going to be in those models.

Angelica: Yeah. Speaking of fine tuning, that reminds me of when we were doing some R&D for a project and we were looking into how to fine tune Stable Diffusion for a product model. They wanted to be able to generate these distinctive backgrounds, but have the product always be consistent first and foremost. And that's tricky, right? When thinking about Generative AI and it wanting to do its own thing because either it doesn't know better or you weren't necessarily very specific on the prompts to be able to get the product consistent. But now, because of this fine tuning, I feel like it's actually making it more viable of a product because then we don't feel like it's this uncontrollable platform. It's something that we could actually leverage for an application that is more consistent than it may have been otherwise. 
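
For readers curious what "fine-tuning" a model like Stable Diffusion actually involves, here is a heavily simplified, hypothetical sketch of a single training step using the open-source diffusers library. The model ID, placeholder image tensor and prompt are assumptions for illustration, and real DreamBooth-style fine-tuning adds data loading, prior-preservation images and many more steps:

```python
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline, DDPMScheduler

# Load a pretrained pipeline and pull out the pieces needed for training.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)  # only the UNet is updated here

# Placeholder batch: in reality this would be real product photos plus a caption.
pixel_values = torch.randn(1, 3, 512, 512)
prompt = "a photo of sks product on a marble countertop"  # "sks" = made-up rare token
input_ids = tokenizer(prompt, padding="max_length",
                      max_length=tokenizer.model_max_length,
                      return_tensors="pt").input_ids

# One fine-tuning step: add noise to the image latents, teach the UNet to predict it.
latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215
noise = torch.randn_like(latents)
timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,))
noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)
text_embeddings = text_encoder(input_ids)[0]

noise_pred = unet(noisy_latents, timesteps, text_embeddings).sample
loss = F.mse_loss(noise_pred, noise)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```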

So the next question we got is: with all of the focus on Midjourney prompts being posted on LinkedIn and Twitter, is Generative AI simply just a pretty face? Is it only for generating cool images?

Sam: I would definitely say no. It's not just images. It's audio. It's text. Any type of data set you put into it, it should be able to create that generative model on that dataset. It's just the amount of innovation in the space is staggering.

Angelica: What I think is really interesting about this field is not only just how quickly it's advanced in such a short period of time, but also the implementation has been so wide and varied.

Sam: Mm-hmm.

Angelica: So we talked about generating images, generating text and audio and video, but I had seen that Stable Diffusion is being used for generating different types of VR spaces, for example. Or it's Stable Diffusion powered processes, or not even just Stable Diffusion... just different types of Generative AI models to create 3D models and being able to create all these other things that are outside of images. There's just so much advancement within a short period of time.

Sam: Yeah, a lot of this stuff you can think about like LEGO blocks. You know, a lot of these models that we're talking about are past this generative pre-training paradigm shift where you're using these amazingly powerful models trained by big companies and you're pairing them together to do different sorts of things. One of the big ones that's powering this, which came from OpenAI, was CLIP. This is the model that allows you to really map text and images into the same vector space. So that if you put in an image and a text, it will understand that those are the same things from a very mathematical standpoint. These were some of the first things that people were like, "Oh my gosh, it can really generate text and it looks like a human wrote it and it's coherent and it circles back in on itself. It knows what it wrote five paragraphs back." And so, people started to think, "What if we could do this with images?" And then maybe instead of having the text and the images mapped to the same space, it's text to song, or text to 3D models?
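
As a concrete illustration of the text-image mapping Sam describes, here is a minimal, hypothetical sketch that scores captions against an image with the openly released CLIP model via the Hugging Face transformers library (the image URL and captions are placeholders):

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any image will do; this placeholder URL is commonly used in CLIP examples.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

captions = ["a photo of a cat", "a photo of a dog", "a city skyline at night"]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
# Higher score = the caption and the image land closer together in CLIP's shared space.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(captions, probs[0].tolist())))
```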

And that's how all of this started. You have people going down the evolutionary tree of AI and then all of a sudden, somebody comes out with something new and people abandon that tree and move on to another branch. And this is what's so interesting about it: Whatever it is you do, there's some cool way to incorporate Generative AI into your workflow.

Angelica: Yeah, that reminds me of another question that we got that's a little bit further down the list, but I think it relates really well with what you just mentioned. Is Generative AI gonna take our jobs? I remember there was a conversation a few years ago, and it still happens today as well, where they were saying the creative industry is safe from AI. Because it's something that humans take creativity from a variety of different sources, and we all have different ways of how we get our creative ideas. And there's a problem solving thing that's just inherently human. But with seeing all of these really cool prompts being generated, it's creating different things that even go beyond what we would've thought of. What are your thoughts on that?

Sam: Um, so this is a difficult question. It's really hard to predict the future of this stuff. Will it? I don't know.

I like to think about this in terms of “singularity-lite technology.” So what I mean by singularity-lite technology is a technology that can zero out entire industries. The ones we're thinking about right now are stock photography and stock video. You know, it's hard to tell those companies that they're not facing an existential risk when anybody can download an algorithm that can basically generate the same quality of images without a subscription.

And so if you are working for one of those companies, you might be out of a job because that company's gonna go bankrupt. Now, is that going to happen? I don't know. Instead, try to understand how you incorporate it into your workflow. I think Shutterstock is incorporating this technology into their pipeline, too.

I think within the creative industry, we should really stop thinking that there's something that a human can do that an AI can't do. I think that's just not gonna be a relevant idea in the near future.

Angelica: Yeah. My perspective on it would be: it's not necessarily going to take our jobs, but it's going to evolve how we approach our jobs. We could think of the classic example of film editors, who had physical reels to cut. And then when Premiere and After Effects came out, that process became digitized.

Sam: Yeah.

Angelica: And then further and further and further, right? So there's still video editors, it's just how they approach their job is a little bit different.

And same thing here. Where there'll still be art directors, but it'll be different on how they approach the work. Maybe it'll be a lot more efficient because they don't necessarily have to scour the internet for inspiration. Generative AI could be a part of that inspiration finding. It'll be a part of the generating of mockups and it won't be all human made. And we don't necessarily have to mourn the loss of it not being a hundred percent human made. It'll be something where it will allow art directors, creatives, creators of all different types to be able to even supercharge what they currently can do.

Sam: Yeah, that's definitely true. There's always going to be a product that comes out from NVIDIA or Adobe that allows you to use this technology in a very user friendly way.

Last month, a lot of blog posts brought up a good point: if you are an indie games company and you need some illustrations for your work, normally you would hire somebody to do that. But this is an alternative and it's cheaper and it's faster. And you can generate a lot of content in the course of an hour, way more than a hired illustrator could do.

It's probably not as good. But for people at that budget, at that level, they might take the dip in quality for the accessibility, the ease of use. There's places where it might change how people are doing business, what type of business they're doing.

Another thing is that sometimes we get projects where we don't have enough time or there isn't enough money. If we did do them, they would basically take our entire illustration team off the bench to work on this one project. And normally if a company came to us and we passed on it, they would go to another one. But perhaps now that we are investing more and more in this technology, we say, "Hey, listen, we can't put real people on it, but we have this team of AI engineers, and we can build this for you.” With our prototype, that's what we were really trying to understand: how much of this can we use right now and how much benefit is that going to give us? And the benefit was to allow this small team to start doing things that large teams could do, for a fraction of the cost.

I think that's just going to be the nature of this type of acceleration. More and more people are going to be using it to get ahead. And because of that, other companies will do the same. Then it becomes sort of an AI creativity arms race, if you will. But I think that companies that have the ability to hire people that can go to their artists and say, "Hey, what things are you having problems with? What things do you not want to do? What things take too much time?" And then they can look at all the research that's coming out and say, "Hey, you know what? I think we can use this brand new model to make us make better art faster, better, cheaper." It protects them from any sort of tool that comes out in the future that might make it harder for them to get business. At the very least, just understanding how these things work and not from a black box perspective, but having an understanding of how they work.

Angelica: It seems like a safe bet, at least for the short term, is just to understand how the technology works. Like listening to this podcast is actually a great start.

Sam: Yeah.

Angelica: Right?

Sam: If you are an artist and you're curious, you can play around with it by yourself. Google Colab is a great resource, and Stable Diffusion is designed to run on a cheap GPU. Or you can start to use services like Midjourney to get a better handle on what's happening with it and how fast it's moving.
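
For anyone who wants to try that, here is a minimal, hypothetical sketch using the open-source diffusers library; the model ID and prompt are illustrative assumptions, not a specific workflow from the episode:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint; half precision keeps it
# small enough for a free Colab GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "an underwater scene with mermaids and gold neon coral, digital art"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("underwater_scene.png")
```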

Angelica: Yeah, exactly. Another question that came through is: if I create something with Generative AI through Prompt Engineering, is that work really mine?

Sam: So this is starting to get into a little bit more of a philosophical question. Is it mine in the sense that I own it? Well, if the model says so, then yes. Stable Diffusion, I believe, comes with an MIT license. So that is like the most permissive license. If you generate an image with that, then it is technically yours, provided somebody doesn't come along and say, "The people who made Stable Diffusion didn't have the rights to offer you that license."

But until that happens, then yes, it is yours from an ownership point of view. Are you the creator? Are you the creative person generating that? That's a bit of a different question. That becomes a little bit murkier. How different is that between a creative director and illustrator going back and forth saying:

"I want this."

"No, I don't want that."

"No, you need to fix this."

"Oh, I liked what you did there."

"That's really great. I didn't think about that."

Who's the owner in that solution? Ideally, it's the company that hires both of them. This is something that's gonna have to play out in the legal courts if they get there. I know a lot of people already have opinions on who is going to win all the legal challenges, and that is just starting to happen right now.

Angelica: Yeah, from what I've seen in a lot of discussion, it's a co-creation platform of sorts, where you have to know what to say in order to get it to be the right outcome. So if you say, “I want an underwater scene that has mermaids floating and gold neon coral,” it'll generate certain types of visuals based off of that, but it may not be the visuals you want.

Then that's where it gets nitpicky into styles and references. That's where the artists come into play, where it's a Dali or Picasso version of an underwater scene. We've even seen prompts that use Unreal...

Sam: Mm-hmm

Angelica: ...as a way to describe artistic styles. Generative AI could create things from a basic prompt. But there's a back and forth, kinda like you were describing with a director and illustrator, in order to know exactly what outcomes to have and using the right words and key terms and fine tuning to get the desired outcome.

Sam: Definitely, and I think this is a question very specific to this generation of models. They are designed to work with text to image. There's a lot of reasons for why they are this way. A lot of this research is built on the backs of transformers, which were initially language generation models. If you talk to any sort of artist, the idea that you're creating art by typing is very counterintuitive to what they spent years learning and training to do. You know, artists create images by drawing or painting or by manipulating creative software through a much more gestural interface. And I think that as technology evolves–and as we start building more and more of these technologies with the artist in mind–we're gonna see more of these image-based interfaces.

And Stable Diffusion has that, you can draw sort of an MS paint type image and then say, "Alright, now I want this to be an image of a landscape, but in the style of a specific artist." So then it's not just writing text and waiting for the output to come in, I'm drawing into it too. So we're both working more collaboratively. But I think also in the future, you might find algorithms that are way more in tune with specific artists. Like the person who's making it, how they like to make art. I think this problem's gonna be less of a question in the future. At one point, all of these things will be in your Photoshop or your creative software, and at that point, we don't even think about it as AI anymore. It's just a tool that's in Photoshop that we use. They already have neural filters in Photoshop–the Content Aware fill. No one really thinks about these questions when they're already using them. It's just this area we are right now where it's posing a lot of questions.
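
The draw-then-prompt workflow Sam describes maps onto Stable Diffusion's image-to-image mode. A minimal, hypothetical sketch with the diffusers library (the doodle file and prompt are assumptions for illustration) might look like this:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A rough "MS Paint"-style doodle of the composition you want.
doodle = Image.open("rough_landscape_doodle.png").convert("RGB").resize((512, 512))

prompt = "a sweeping mountain landscape at sunset, in the style of a watercolor painting"
result = pipe(prompt=prompt, image=doodle, strength=0.75, guidance_scale=7.5)

# strength controls how far the model may drift from the doodle (1.0 = ignore it).
result.images[0].save("landscape_from_doodle.png")
```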

Angelica: Yeah. The most interesting executions of technology have been when it fades into the background. Or to your point, we don't necessarily say, "Oh, that's AI", or "Yep, that's AR". That's a classic one too. We just know it from the utility it provides us. And like Google Translate, for example, that could be associated with AR if you use the camera and it actually overlays the text in front. But the majority of people aren't thinking, oh, this is Google Translate using AR. We don't think about it like that. We're just like, "Oh, okay, cool. This is helping me out here."

Sam: Yeah, just think about all the students that are applying to art school this year and they're going into their undergrad art degree and by next year it's gonna be easier to use all this technology. And I think their understanding of it is gonna be very different than our understanding of people who never had this technology when we were in undergrad. You know, it's changing very quickly. It's changing how people work very rapidly too.

Angelica: Right. Another question came relating to copyright usage, which you touched on a little bit, and that's something that's an evolving conversation already in the courts, or even out of court–or if you're looking in the terms and conditions of Midjourney and DALL-E and Stable Diffusion.

Sam: When you download the model from Hugging Face, you have to agree to certain Terms and Conditions. I think it's basically a legal stop gap for them.

Angelica: Yep.

Sam: If I use these, am I going to get sued? You want to talk to a copyright lawyer or attorney, but I don't think they know the answer just yet either. What I will say is that many of the companies that create these algorithms–your OpenAIs, your Googles, your NVIDIAs–a lot of these companies also have large lobbying teams and they're going to try to push the law in a way that doesn't get them sued. Now, you might see that in the near future because these companies can throw so much money at the legal issue that, by virtue of protecting themselves, they protect all the people who use their software. The way I like to talk about it is, and maybe I'm dating myself, but if you think back to the early 2000s with Napster and file sharing, it didn't work out so well for the artists. And that technology has completely changed their industry and how they make money. Artists do not make money off of selling records anymore because anyone can get them for free. They make money now primarily through merchandise and touring. Perhaps something like that is going to happen.

Angelica: Yeah. When you brought up Napster, that reminded me of a sidetrack story where I got Napster and it was legitimate at that time, but every time I was like, "Oh yeah, I have this song on Napster." They were like, "Mmmm?" They're giving me a side eye because of where Napster came from and the illegal downloading. It's like, "No, it's legit. I swear I just got a gift card." 

Sam: [laughter] Well, yeah, many of us now listen to all of our music on Spotify. That evolved in a way where they are paying artists in a specific way that sometimes is very predatory and something like that could happen to artists in these models. It doesn't look like history provides good examples where the artists win or come out on top. So again, something to think about if you are one of these artists. How do I prepare for this? How do I deal with it? At the end of the day, people are still gonna want your top fantasy illustrator to work on their project, but maybe people that aren't as famous, maybe those people are going to suffer a bit more.

Angelica: Right. There's also been a discussion on: can artists be exempted from being a part of prompts? For example, there was a really long Twitter thread, we'll link it in the show notes, but it was pretty much discussing how a lot of art was being generated using one artist's name in the prompt, and it looked very similar to what she would create. Should she get a commission because it used her name and her style to generate that? Those are the questions there. Or if artists are able to get exempted, does that also limit the type of creative output Generative AI is able to create? Because now it's not an open forum anymore where you can use any artist. And now we're gonna see a lot of Picasso uses because that one hasn't been exempted. Or more indie artists aren't represented because they don't want to be.

Sam: I don't think the exemptions these companies are creating are really going to work. One of my favorite things about artificial intelligence is that it's one of the most advanced technologies that's ever existed, and it's also one of the most open. Exemptions will work on their platforms, because they can control those, but the underlying technology is extremely open: all these companies are publishing some of their most stellar code and trained models. There's DreamBooth now, where you can basically take Stable Diffusion and fine-tune it on a specific artist using a hundred images or fewer.

Even if a company does create these exemptions, so that you can't create images on Midjourney or DALL-E 2 in the style of Yoshitaka Amano or something like that, it wouldn't be so hard for somebody to download the freely available trained models, fine-tune them on Yoshitaka Amano images, and then create art like that. The barrier to entry just isn't high enough for exemptions to be a real solution.
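To give a sense of how low that barrier is, here's a minimal Python sketch of generating images from a Stable Diffusion checkpoint that has already been fine-tuned DreamBooth-style on a small set of reference images. The local folder name, the placeholder token "sks" and the prompt are illustrative assumptions for this example, not a published model or the specific workflow discussed above.

```python
from diffusers import StableDiffusionPipeline
import torch

# Load a locally fine-tuned Stable Diffusion checkpoint.
# "./dreambooth-finetuned" is a hypothetical output folder from a
# DreamBooth-style fine-tune on a few dozen reference images.
pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-finetuned", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# "sks" stands in for the rare placeholder token bound to the reference
# images during fine-tuning; prompting with it reproduces the learned style.
image = pipe("a castle on a cliff, illustration in the style of sks").images[0]
image.save("styled_output.png")
```

The fine-tuning step itself is similarly accessible on consumer hardware, which is exactly the point Sam is making about the barrier to entry.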

Angelica: Yeah, the mainstream platforms could honor an exemption, but if someone were to train their own model, they could still do it.

Sam: It's starting to become kind of a wild west, and I can understand why certain artists are angry and nervous. It's just...it's something that's happening, and if you wanna stop it, how do you stop it? It has to come from a very concerted legal effort: a bunch of people getting together and saying, "We need to do this now, and this is how we want it to work." But can they do that faster than corporations can lobby to say, "No, we can do this"? You know, it's very hard for small groups of artists to take on the corporations that basically run all of our technologies.

It's an interesting thing. I don't know what the answer is. We should probably talk to a lawyer about it.

Angelica: Yeah. There are other technologies with a similar conundrum as well. It's hard with emerging tech to control these things, especially when it's so open and anyone's able to contribute, in either a good or a bad way.

Sam: Yeah, a hundred percent.

Angelica: That actually leads to our last question. It's not really a question, more of a statement: they mentioned that generative AI seems like it's growing so fast that it will get out of control soon. From my perspective, it's already starting to, just because of the rapid iteration that's happening within this short period of time.

Sam: Even for us: we'll be spending time engineering these tools and building projects that use them, and halfway through there are all these new technologies that might be better to use. Yeah, it does give a little bit of anxiety, like, "Am I using the right one? What would it take to switch technologies right now?" Do you wait for the technology to advance, to become cheaper?

Think about a company like Midjourney spending all this investment money on creating its platform, on the theory that only it can build this and it's very hard for other companies to recreate the business. But then six months later, Stable Diffusion comes out. It's open source, anyone can download it. And then two months later somebody open sources a full-on scalable web platform. It evolves that fast, so how do you make business decisions about it? It's changing month to month at this point, whereas before it was changing every year or so. It does seem like it's starting to become that singularity-lite kind of technology again. Who's to say it's going to continue like that? It's just so hard to predict the future with this stuff. It's more: what can I do right now, and is it going to save me money or time? If not, don't do it. If yes, then do it.

Angelica: Yeah. The technologies that generate the most excitement are the ones that mobilize different types of people, which then makes the technology advance a lot faster. It just feels like toward the beginning of the summer we were hearing, "Oh, DALL-E 2, yay! Awesome," and then it seemed to go exponentially fast from there based on a lot of the momentum. There was probably a lot of work behind the scenes that made it feel exponential. Would you say it was a lot of interest bringing a lot of people to the same topic at one point? Or do you feel like it was always going to come to this?

Sam: Yeah, I think so. Whenever you start to see technology that's really starting to deliver on its promise, a lot of people become interested in it. The big thing about Stable Diffusion was that it used a different type of model to compress the images it trains on, which allowed it to train faster and to be trained and run on a single GPU. That's how a lot of this goes: there's generally one big company that says, "We figured out how to do this," and then all these other companies, groups and researchers say, "Alright, now we know how to do this. How do we make it cheaper, faster, less data-hungry and more powerful?" And any time something like that comes out, people start spending a lot of time and money on it.
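To make the compression point concrete, here's a minimal sketch using the open-source diffusers library and the publicly available Stable Diffusion v1.5 weights: the model's autoencoder squeezes a 512 x 512 image into a far smaller latent grid, which is the space the diffusion model actually trains and samples in. The dummy image and the printed shapes are purely illustrative.

```python
import torch
from diffusers import AutoencoderKL

# Load the VAE that Stable Diffusion uses to compress images into latents.
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
)

# A dummy 512 x 512 RGB image, scaled to [-1, 1] as the VAE expects.
image = torch.rand(1, 3, 512, 512) * 2 - 1

with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()

print(image.shape)    # torch.Size([1, 3, 512, 512])
print(latents.shape)  # torch.Size([1, 4, 64, 64]) -> roughly 48x fewer values
```

Diffusing in that compressed space, rather than over raw pixels, is what brings training and generation within reach of a single GPU.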

DALL-E was the thing that, I like to say, really demonstrated creative arithmetic. When you ask it to draw a Pikachu sitting on a goat, not only does it know what Pikachu and a goat look like, but it understands that, for us to believe Pikachu is sitting on the goat, it has to be placed in a very specific way: Pikachu's legs are on either side of it.

The idea that a machine can do that, something so similar to the way humans think, got a lot of people extremely excited. And at the time it was, I think, 256 pixels by 256. But now we are doing 2048 by 24... whatever size you want. And that's only two years later. So yeah, a lot of excitement, obviously.

I think it is one of those technologies that really gets people excited because it is starting to deliver on the promise of AI. Just like self-driving cars–AI doing protein folding–you're starting to see more and more examples of what it could be and how exciting and how beneficial it can be.

Angelica: Awesome! Well, we've covered quite a bit, lots of great info here. Thanks again, Sam, for coming on the show.

Sam: Yeah, thanks for having me.

Angelica: Thanks everyone for listening to the Scrap The Manual podcast!

If you like what you hear, please subscribe and share! You can find us on Spotify, Apple Podcasts and wherever you get your podcasts. If you want to suggest topics, segment ideas, or general feedback, feel free to email us at scrapthemanual@mediamonks.com. If you want to partner with Labs.Monks, feel free to reach out to us at that same email address. Until next time!

Sam: Bye!

In this episode of Scrap the Manual, host Angelica is joined by Creative Technologist Sam to discuss generative AI image models, what they mean for artists, and the copyright questions they raise. artificial intelligence technology emerging technology

A Frame-by-Frame Look at How Generative AI Supercharges Creativity


AI AI, AI & Emerging Technology Consulting, Digital transformation, Experience 6 min read
Profile picture for user Labs.Monks

Written by
Labs.Monks

A landscape animated mountainside with mist and fog

By now, you’ve seen it all over social media: uncanny images painted by artificial intelligence. Fun to play with thanks to its accessibility, generative AI has exploded in popularity online. But it’s also raised questions about the nature of human creativity: what is the value of artistry and craft if anyone can generate images in a few seconds?

The impressive output of generative AI has led some to voice concerns about whether their livelihoods are in jeopardy. Creativity, after all, has long been considered a strictly human skill.

But creatives aren’t about to lose their jobs to robot overlords who can spin strings of text into pixelated gold. To the contrary, these tools—which rely on human input and some level of artistic aptitude to really shine—are unlocking creative potential and helping people bring their concepts to life in new ways. This outlook prompted the Labs.Monks, our research and development team, to explore how generative AI can uplevel the work of our teams and our clients.

“We’ve been playing with this technology for a while, and after it began to trend, we’ve been getting more and more questions about it,” says Geert Eichhorn, Innovation Director and Head of Labs. For instance: a lot can be said about the future of content creation aided by AI, but how could today’s tools integrate into a present-day production pipeline? 

Looking for an answer, the Labs.Monks collaborated with animators and illustrators on our team to develop a prototype production workflow that blends traditional animation methods with cutting-edge AI technology. The result is an animated film trailer made using a fraction of the time and resources that a typical, frame-by-frame animation of its length would require.


Learn to live with the algorithm.

Ancestor Saga is a 2D-animated side project focused on a central question: what if people in the Viking Age realized they were living in a simulation? After learning that their purpose in life is to entertain the gods, will they accept their new reality, or put an end to the world by bringing about Ragnarök?

The theme might feel familiar to anyone trying to make sense of the increasingly algorithmic world we’ve suddenly found ourselves in. “We wanted to tell a story that could integrate with the tech we’re using: virtual worlds and virtual people,” says Samuel Snider-Held, Creative Technologist. Associate Creative Director Joan Llabata takes this thought further, citing some of the challenges faced when humans and AI don’t quite connect. “There’s some space where we need to find the best way to communicate with the machine effectively,” he says.

When using generative AI, a bespoke approach is best.

That challenge of getting humans and AI to play nice demonstrates the need for a team like the Labs.Monks to experiment with the tools that are available. While off-the-shelf tools are great for empowering individual creators, integrating them into team pipelines requires a more custom solution.

AI is designed to do specific tasks very, very well. Projects that involve multiple capabilities and phases call for a workflow that can integrate a variety of generative AI to fulfill different goals throughout. With an animation project, this means plugging into creative concepting, storyboarding, sound and of course animating the visuals.

In our case, says Snider-Held, “We wanted to explore how AI could allow us to do the work we really want to do, even if the time or the budget isn’t there.” He found that while our animation team loves classic, frame-by-frame animation, the method is often overlooked because it is slower to produce and less cost-efficient than other ways of animating. 

Now the team had a clear goal: orchestrate an AI-based workflow that could output a frame-by-frame animation in record time, without compromising quality. They took inspiration from rotoscoping, a method used by animators like Ralph Bakshi, in which an artist traces images over existing footage. This task of translating an existing recording from one style to another was ideal for image-to-image generative AI. In addition, the team used AI technology to develop background designs and read out the animation's voiceover.

Generative AI isn’t a radical departure from tradition.

The team began by recording a 3D character model in a virtual setting, capturing a variety of poses for an illustrator to trace over. These visuals were then used to train the AI model on how to draw the character in different movements. “If you draw about five frames, you have enough to teach a neural network how to paint the others,” says Snider-Held, noting that it’s important to select frames that are different from one another so the AI can pick up on various forms, shapes and poses.

In addition to rotoscoping virtual production, the team also experimented with live-action stock footage. Being able to use two different types of visual source material baked extra flexibility into the process; teams could mix and match the different methods according to their specific needs or abilities. Fantastical creatures might be captured more easily in virtual production, while a team lacking in their ability to animate lifelike movements may prefer using live-action film as a base. “You get better acting from footage versus a 3D model, but the visual output is ultimately the same,” says Snider-Held.
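The team's rotoscoping step relied on custom image-to-image networks trained on the illustrator's own frames, but the general idea can be sketched with an off-the-shelf img2img pipeline: take one captured frame and push it toward a target drawing style. The model name, file names, prompt and settings below are illustrative assumptions, not the production setup.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Off-the-shelf stand-in for the custom img2img rotoscoping step:
# restyle one captured frame toward a hand-drawn look.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# "frame_0042.png" is a placeholder for one frame of the source footage
# (virtual production capture or live-action stock).
source = Image.open("frame_0042.png").convert("RGB").resize((512, 512))

styled = pipe(
    prompt="hand-drawn 2D animation frame, flat colors, clean line art",
    image=source,
    strength=0.6,        # how far to move away from the source frame
    guidance_scale=7.5,  # how strongly to follow the text prompt
).images[0]
styled.save("frame_0042_styled.png")
```

Looping the same call over every frame of a shot is what turns hours of tracing into minutes of compute; the team's own networks, trained on the illustrator's drawn frames, keep the output closer to a single consistent style.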

Much like how that process emulated classic rotoscoping by hand, other ways of integrating AI followed a traditional animation process, albeit with some additional steps here and there. For example, the storyboarding phase is important for visualizing which types of shots or animations are needed for a specific sequence. In addition to pondering that, the team also planned which kinds of AI would be best for generating this or that shot.

Using Stable Diffusion—a kind of generative AI that translates a text prompt into an image—allowed the team to create a large volume of backgrounds that they could swap in and out to test how they looked. “You can explore a lot in this phase,” says Snider-Held.

As for developing backgrounds in particular, “It’s like describing the shot you want to a director of photography,” says Llabata. He was able to test hundreds of different environments, camera angles, artistic styles, lighting and more, all with relative ease.
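The background exploration Llabata describes maps to a single text-to-image call repeated with different prompts and seeds. As a rough sketch, again assuming the diffusers library and the public Stable Diffusion v1.5 weights, with a prompt and settings that are illustrative rather than the team's actual ones:

```python
import torch
from diffusers import StableDiffusionPipeline

# Generate a batch of background candidates from one shot description.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "misty mountainside with a lone wooden house, painterly wide shot"

# Several variations per prompt make it cheap to test which backgrounds work.
images = pipe(prompt, num_images_per_prompt=4, num_inference_steps=30).images
for i, img in enumerate(images):
    img.save(f"background_candidate_{i}.png")
```

Swapping the prompt is how you swap the environment, camera angle, artistic style or lighting; each run produces a new grid of options to react to.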

a grid of landscapes of a house amid mountains and fields

Unlock efficiencies and long-term gains.

The findings above hit on perhaps the biggest gain that a generative AI-powered workflow can provide: greater flexibility throughout the life of a creative project. Being able to generate 60 frames in one minute—rather than one frame in 60 minutes—makes it incredibly easy to pivot or change things up in the blink of an eye.

Monk Thoughts: “It’s a producer’s dream to be able to create so many assets so flexibly. It redefines linearity in the pipeline because you can always go back and change things.” –Joan Llabata

It doesn’t require a sophisticated hardware setup either, further making content creation accessible to teams of all sizes. “You don’t need a giant server or cloud computing,” says Eichhorn. “A reasonably good gaming PC can churn out assets like backdrops quickly.” Still, more complex uses of AI like rotoscoping may require more power.

The flexibility unlocked by integrating generative AI into a team’s pipeline continues to pay dividends beyond the life of a single project. “If you have a project whose scope is really big, the effort and money you spent in that R&D is compounded in value over time,” says Snider-Held, noting that whether a brand wants to make 10 animations or 30, the steps to lay down an AI-powered foundation will be roughly the same.

Experiment to find an approach that suits your needs.

Tools like Stable Diffusion aren’t meant to replace those in the creative field. “An AI will not achieve anything by itself,” says Llabata. Instead, these products will give teams the ability to chase more ambitious projects with fewer constraints on time and budget. Consider how closely the creation of the Ancestor Saga trailer follows a traditional animation process, just with more efficiencies baked in.

Such flexibility afforded by generative AI can go well beyond traditional animation.

Monk Thoughts: “The merging of data and creativity is something we’re always exploring at Media.Monks, and this technology is going to supercharge that. Imagine using data that we already use for media campaigns to generate hyper-personalized images.” –Geert Eichhorn

Whatever your use case for generative AI, understand that while building tools from scratch can be challenging, the result is extremely powerful. “Our approach is that if an off-the-shelf tool is mature enough, use it. If not, create it yourself,” says Snider-Held. In addition to ensuring a tool is calibrated for their specific needs, teams who go the bespoke route will also be better positioned to future-proof as the technology continues to evolve at a rapid pace.

So, think you’re ready to explore what generative AI means for your field? Learn more about the ins and outs of the technology in the latest Labs Report exploring the rapid evolution of digital creation.

Labs.Monks collaborated with animators and illustrators to develop a prototype production workflow that blends traditional animation with cutting-edge AI. artificial intelligence animation prototyping creative technology Experience AI & Emerging Technology Consulting AI Digital transformation

Labs Report 32: Generative AI


AI AI, AI & Emerging Technology Consulting, Digital transformation, Experience 1 min read
Profile picture for user Labs.Monks

Written by
Labs.Monks

A digital view of inside a castle with pink walls and stairs

Generating the future of content through AI.

We’ve seen DALL-E 2, Midjourney, Stable Diffusion and other powerful image generation tools take over our social feeds. The tech is making giant leaps each week, and a future in which it fuels entire industries is not too far away. Those who deeply understand the tech and can adopt it into their existing workflows to empower–rather than replace–their teams will remain ahead of the curve.

In this Labs report, we'll uncover how Generative AI is impacting digital creation today and explore how to stay ahead of where the tech is going next.

In this Labs report, you’ll:

  • Learn what Generative AI is and what’s currently available
  • Understand how the tech works
  • See the technology in real-world action
  • Peek into what the future holds
  • Learn how to harness its power now


Ancestor Saga is a cyberpunk fantasy adventure created using state-of-the-art generative AI and AI-based rotoscoping technology.

Generative AI supercharges production for higher quality creative output.

In this animated trailer for an original series called "Ancestor Saga," we demonstrate how Generative AI can be applied to film production. This prototype leverages Stable Diffusion AI for generating background scenes and custom Img2Img neural networks for AI-based rotoscoping of virtual characters.

See our findings about the benefits of using generative AI, including time and labor reduction in production, in the report.

In this Labs report, we’ll discover how Generative AI is going to impact digital creation, and provide a breakdown to help you get ahead. AI artificial intelligence emerging tech creative AI Experience AI & Emerging Technology Consulting AI Digital transformation

Scrap the Manual: Tech Across APAC


19 min read
Profile picture for user Labs.Monks

Written by
Labs.Monks

Scrap the Manual - Asia Pacific

APAC is not only one of the most populous and diverse regions in the world, it is also leading the way in unique technologies and innovation. In this episode, host Angelica Ortiz is joined by a fellow Media.Monks Creative Technologist, Leah Zhao, from our Singapore office. Together, Angelica and Leah give a TLDR overview of our newest Labs Report, Tech Across APAC—providing insight into the region’s emerging AI, AR, automation, and metaverse technologies–along with a sneak peek into the prototype leveraging an upcoming tech from the region.

You can read the discussion below, or listen to the episode on your preferred podcast platform.


Angelica: Hey everyone! Welcome to Scrap The Manual, a podcast where we prompt “aha” moments through discussions of technology, creativity, experimentation, and how all those work together to address cultural and business challenges. My name is Angelica and we have a very special guest host. Yay!

Leah: Hi! It's great to be here, my name is Leah. We are both Creative Technologists with Media.Monks. I specifically work out of Media.Monks’ Singapore office.

Angelica: Today we're going to give a quick TLDR of one of our Labs Reports and deep dive into something we didn't get to cover in depth in the report, such as expanding on the prototype we created, or a topic with some interesting rabbit holes that didn't fit neatly onto a slide, you know, that kind of thing.

Leah: So for this episode, we are going to be covering technology and innovation culture in the Asia-Pacific region. If you haven't had a chance to read our APAC Labs Report, here's a quick TLDR.

The most influential technologies from the region are AI and automation, AR and computer vision, and the metaverse. China and Japan are leading the growth in AI and machine learning, together with Singapore and South Korea. If you come to this region, you might be surprised how readily people embrace this advanced technology. People accept it because it is just so convenient, thanks to the Super Apps we have.

Angelica: To clarify for people who may not be familiar, what are Super Apps? 

Leah: Yeah. So Super Apps are mobile applications that provide multiple services. You may have heard of some of them, such as WeChat in China, Kakao from South Korea, Line from Japan (also widely used in Taiwan and Thailand), and Grab from Singapore, which is used across Southeast Asia. On Super Apps, you can use multiple services, from online chatting, shopping and food delivery to ride hailing and digital payments. We literally live our social and cultural lives on Super Apps.

Angelica: Is it sort of like if Uber had one app, but not necessarily branded? It's more of just: I'm going to go to WeChat, and it'll call a ride, rent a scooter, or order in. You download one app versus having to download five different ones.

Leah: Yeah, definitely. But actually for WeChat, it's more complicated, I would say, because there is a whole ecosystem on WeChat built around mini programs. Just think of them as microsites on WeChat…

Angelica: Mm-hmm. 

Leah: where brands can sell their products and offer services like food delivery. For other Super Apps like Line and Grab, it's exactly like you said. One example is Burberry, which launched its social retail store in collaboration with Tencent, integrating its offline store with mini programs on WeChat. It enables special features in the store, such as earning social currency by engaging with the brand and even raising your own animal-based avatar. This is pretty cool, as it links up our digital and physical experiences.

Angelica: Yeah. What I really liked about this example was how technology was seamlessly integrated throughout. It wasn't like, “Hey scan this one QR code.” It went a little bit further to say, “Okay, if you interact with this mini program, then you'll have access and unlock particular outfits or particular items for the digital avatar. You'll be able to actually unlock cafe items in the real store.” So it seemed like it was all a part of one ecosystem. It didn't feel tacked on. It was truly embedded within the holistic retail experience. I know with a lot of branded activations within the US specifically, there's always that question of, should it be accessible through a mobile website or is it something that we can use a downloaded app for? And most clients tend to go with the mobile website. 

Leah: Yeah. 

Angelica: Because there's this hesitancy to download yet another application just to do one more thing, and then there's worrying about the wifi strength on site when asking people to download these apps. But it'd be interesting for brands to create these mini programs within a larger Super App, so consumers don't have to do anything other than access that mini program instead of downloading something. Then there's a lot more flexibility in what brands can do: they're not limited to what's available on a mobile website, and they have the strength of what's possible with an app.

Leah: Yeah, agreed. So another observation from our report is that the metaverse is on the rise in the APAC region. It might outpace the plans laid down in the West. Some platforms that draw our attention are Zepeto from South Korea and TME Land in China.

Angelica: Yeah, and what's cool about those platforms is we see this emphasis on virtual idols, avatars and influencers. From the research that we did, we noticed that there are certain countries that are a bit more traditional culturally… 

Leah: mm-hmm

Angelica: and are stricter about how people can present their real selves, so there's a desire to escape the cultural bounds of what people can and cannot be, what's deemed right or wrong or not accepted. People are going towards anonymity…

Leah: Yeah. 

Angelica: for being able to express themselves. Sort of like the Finstagram accounts that happen in the US or expressing themselves through these virtual influencers, because then their virtual selves can be much more free to express themselves than their real versions could be.

Leah: And also Asia has a rich fandom culture. So it's not a surprise that we see the emphasis on virtual idols and virtual influencers because it enables the fans to interact with the superstars anytime, anywhere.

Angelica: Yeah. And from a branding perspective, virtual influencers and avatars can also be much easier to control. Think of all the controversies that happen because someone did something, either way back in their past or recently; that makes brands nervous about endorsing real people, because people are flawed. With virtual influencers, you can control everything. You have teams of people controlling exactly what they look like, what their personality is, what they do, and that flexibility and customizability go far beyond what's possible with a real person who has real feelings.

So there's some limitations on what the brand can do, where it's a lot more flexible with virtual influencers. 

Okay, we've covered quite a lot there. There's a lot of really interesting examples that we see within the APAC region that definitely could be applied within Western countries as well. With this said, we're gonna go ahead and move on to what we did for the Labs Report prototype and expand a little bit more on our process.

Let's start with: what even was the prototype? For the prototype we leveraged Zepeto. Zepeto is a metaverse-like experience world platform…insert all buzzwords here…that lets users interact much like they would in a Roblox world they go to and experience, but with additional social features on top.

So what we would think of as an Instagram feed or something like that, it has that embedded within the Zepeto platform. So instead of going to Instagram to talk about your Roblox experience, those two experiences are integrated within one platform. What we also wanted to achieve with this prototype is leverage a technology that originated from the APAC region, and specifically Zepeto. Zepeto is available globally for the most part, with a few exceptions, but it originated within South Korea. We really wanted to use Zepeto because it's available globally for most audiences and it takes the current fragmented way of how the metaverse worlds are created and integrates them with virtual influencers and social media.

With these gamified, interactive experiences, the social aspects are really what make this particular platform shine. And we are also doing this because the metaverse, even a year or so later, is still an incredibly popular topic. People are still having a lot of discourse about what the metaverse is and what it can be, discussing how brands have already taken their first steps into the metaverse and how they're going to continue to grow.

And this is part of what we do a lot at Media.Monks. We get a lot of client requests for similar types of experiences, whether that be Roblox, Decentraland, Horizon Worlds, Fortnite…and Zepeto is just a great platform that no one's really talking about much within the Western dialogue, but it's incredibly powerful and it reaches so many people. We saw that it was an amazing platform that takes the promise of what the metaverse can and will be to the next level.

Leah: Yeah. I also like Zepeto because it not only has Asian-style avatars, it also enables you to customize your avatar's head, body, hair, outfits, and even the poses and dance steps you can have. With Zepeto you can purchase a lot of outfits and decorations with Zepeto money, a currency that you earn through in-app purchases or by being more active on the platform.

Angelica: Yeah. There are two different types of currencies in Zepeto. One is called Zems…i.e., gems…and the other is coins. For creator-made items, you can set a price in Zems. Anything created by users can only be sold for Zems, which are very difficult to earn for free in the app; that's where the free-to-play model tends to come in. With a euro you can get 14 Zems, and then you can buy more digital clothing. Coins, which you start the experience with, can be used to purchase Zepeto-created items. That's the difference between the two.

Leah: But my favorite part about Zepeto is the social aspect as you mentioned earlier. For me, it's like TikTok in the metaverse because it has the Feed feature.

You know, there are three pages of the feed: For you, following, and popular. Under the feed you can see live streaming by the virtual influencers and you can have your own live stream as well.

Angelica: The live stream uses some motion capture as well, because it's either pre-made models and moves, or people can have their face recognized in real time...

Leah: Yeah. 

Angelica: to then translate to that virtual avatar. 

Leah: Yeah. Zepeto has the Zepeto camera. With this camera, you can create content with your own avatar and an AR filter, which copies your facial expressions quite accurately and even brings your avatar into real life. So you can place your own avatar on the table in your room.

Angelica: One part that I also thought was really cool…you mentioned the poses earlier. Think about it: if we see a celebrity on the street, we're gonna take a photo with them, right? We can't just let that celebrity pass by without being like, “oh yeah, I totally saw JLo in Miami,” you know? The “take a photo or it didn't happen” type of thing, haha. There's a version of that on Zepeto. Fans can take a photo of their virtual avatars with your virtual avatar. So it takes the virtual autograph, of sorts, to a different level. You can live vicariously through your avatar by having them take a photo with your favorite celebrity or your favorite influencer. So I really love that aspect of being able to build that audience virtually as well. 

Something also that's really cool about Zepeto is within those world experiences, the social aspects are still very much ingrained in there. It's not just, “Okay, you have this separate social feed, you have the separate virtual influencer side, and then you have the world.” They're all integrated.

An example of this: the other day we were testing out the Zepeto world and we were all in the same experience together. Someone would take a selfie…and that's right, there is a selfie stick in this experience, and it looks exactly like what you would imagine, just the virtual version of it. When someone takes a photo or a video, it automatically tags the people that were in it.

So it's generating all of this social momentum really, really quickly. As soon as you take that photo, you can either download it directly to your device or immediately upload it. What was great for me personally…figuring out just the right caption is something that takes me way too long, finding the right words and the right hashtags. But you don't even need to worry about captions when taking photos within these worlds. As soon as you say, “I wanna upload it,” it automatically captions, tags people, and adds other related hashtags so other people can discover that experience from you.

So it's very seamless and easy. 

Leah: Yeah. That's amazing. 

Angelica: It's just like the next level of how it makes sharing super, super, super easy, so that's something I really like there too. 

Speaking of the worlds: now, within this next part of the prototyping process, it was up to us to determine the worldscape and interactions. And as a part of the concept, we wanted to create a world that plays into what real life influencers would be looking for when trying to fill their feed. And that is: creating content. Specifically: selfies. And so we created four different experiences that would have the ultimate selfie moment.

One, which is this party balloon atmosphere. Sort of think about these like really big balloons that you can kind of poke with the avatar as you move around, or even like jump on some of the balloons to get a higher view from it as well.

The second was like a summer pool party. You could actually swim in the pool. It would change the animation of the avatar when you're in the water part. And, you know, the classic, giant rubber ducky in the pool and all those things. So definitely brought you in the moment.

The third was an ethereal Japanese garden, so very much when wanting to get away and have a chill moment, that was definitely the vibe we were going for there.

And then lastly, we had the miniaturized city. So what you would think is the opposite of meditation is the hustle and bustle of the big city. And we created that experience as well. There is also a reference to the Netherlands. So you'll just have to keep an eye out for what that is and let us know if you find it.

Leah: Is there a hidden fifth environment?

Angelica: There it is. Yeah. You know, what was interesting is when we were testing out the environment and we were all together. 

Leah: Yeah. 

Angelica: We created our own room. 

Leah: Yeah. 

Angelica: And then we thought it was just gonna be the eight of us that were testing it out and then other people, random people showed up. 

Leah: Wow.

Angelica: I was just like, “where did you guys come from?” There were two people that actually used the chat within the room, and they beelined directly to where that fifth environment was. 

Leah: Yeah. 

Angelica: So it was just really interesting that, one, people were specifically coming to the world to experience it together.

Leah: Mm. 

Angelica: And then two, we saw a lot of random people. There would be dead spots where it’s just like, “okay it's just one of us in the room.” We're just testing it. But as soon as all of us got in there together and started taking photos, there were so many people that showed up. It's just like “What? This is insane!”

Leah: Was it the recommendation system on Zepeto?

Angelica: Yeah. That's what we're thinking. Because the room that was created…we thought it was not, I guess it wasn't a private room. It was probably a public room. 

Leah: Yeah. 

Angelica: But it was interesting that as soon as we started playing around and posting content, then people were like, “Okay, I'll join this room.”

Leah: Yeah. Maybe because of tagging as well.

Angelica: Yeah, exactly. And that goes to our earlier point of how really powerful that platform is and how posting would give that direct result of someone posting something and other people wanting to be a part of it. There was one person that liked my post that had like 65,000 followers.

Leah: Whoa. 

Angelica: And I'm like, who are you? What is this? 

Leah: That's definitely a virtual idol. 

Angelica: Yeah, exactly. They only had like six posts though, which was a little weird, but they had so many followers. It was nuts. 

Leah: Actually today I just randomly went into a swimming pool party on Zepeto. I went into the world, people were playing with water guns together.

Angelica: Mm-hmm 

Leah: So I had just arrived. Landed. Then someone just shot me with a water gun and I was hit. I must have lost my block. 

Angelica: Oh no! Haha, that sounds fun though. 

Leah: Yeah, that was fun. 

Angelica:  Was it like a big room? Like how many people were in that environment at once? 

Leah: When I was there, it was around 80 people in the world.

Angelica: Oh, wow.

Leah: Yeah, it's quite a lot actually. 

Angelica: There's definitely something to be said about how there's superfans of Zepeto. Like that's kind of part of the daily aspect of it. Being able to meet people through the social aspects and then hang out with them through these worlds.

But all this to say: this entire worldscape and all the interactions we included within the prototype were built within what they call their BuildIt platform.

Leah: It's quite user-friendly. It's very easy to create a world yourself even with zero experience of any 3D modeling software. 

Angelica: Yeah. BuildIt is like a 3D version of a website builder: you have the drag-and-drop type of thing, but instead of a 2D scrolling website experience, you're dragging and dropping a lot of different assets into a 3D space. We could also create experiences like this through Unity. The only caveat with Unity is that the experience we'd create there would only be available on mobile devices, and we didn't wanna restrict the type of people that would be able to experience this. So we decided to do it in BuildIt, because the resulting worlds can be accessed on both desktop and mobile. 

Leah: Other than the world space, you can also create clothes for your avatar to make it look more unique, with its own personality. In our case, we created a more neutral-looking avatar with blue skin: very cool, slightly edgy but approachable. And the process of creating clothes was very friendly: you just download the template and then add the textures in Photoshop. We chose a t-shirt, a jacket, a bomber, and a windbreaker, and then touched them up with some Eastern elements such as a dragon and a soft pink color, which matches our Shanghai office. Everyone can create their own unique clothes with simple editing of the textures. 

Angelica: Yeah. We really wanted to play with clothing specifically, because that's part of the digital ecosystem of being an influencer. You may have branded experiences that you take part in, or brands that sponsor you. Influencers will wear custom clothing, either designs of their own or pieces representing another brand. All those things we wanted to integrate within this. 

So the influencers are visiting this world. They could say, “Hey, I'm in this Media.Monks experience” or “insert brand here” experience. And I'm also wearing their custom clothing. It's sort of a shout out to the clothing as well as the world, so it sits at the heart of this larger ecosystem. The world isn't separate from the clothes, which aren't separate from social. All of those elements play together, and this leads to creating social content.

Once we had the world and the merchandise solidified, we continued to build on this virtual influencer style by creating content of our own. We analyzed popular Zepeto influencers and even made a list of the types of content they create: visiting someone else's world, doing an AR feature with their real-life self, doing posed photos with other avatars. All of those were part of the social content we created for this. 

Now that the prototype is ready to go, it's time to think about what the prototype did not yet achieve but what we would really like to see in the future. One thing we recommend: when you want to create fully custom branded worlds, those should definitely be made within Unity to have the most flexibility. At the time of recording, though, worlds exported that way are only available on mobile devices. So, you know, that's something to keep in mind there. 

Leah: For clothing creation, there are some limitations. For example, the maximum texture resolution we can upload is 512 x 512, which means we can't add detailed patterns or logos to our clothes. And we can't create physics for our clothing materials. That is another thing I think the platform can improve. 

Angelica: Yeah. It's not able to show the fuzziness of a sweater, or, if we're creating a dress or a shirt that needs to be flowy, it won't show that the garment is fuzzy or flowy. Only the pattern is shown; the texture of how the clothing might feel based on seeing it isn't reflected there. So it's a give and take, where it's very easy to create clothing items 

Leah: Yeah. 

Angelica: …but it doesn't go so far as to have a realistic look. 

Leah: Yeah, but I think this is something that's not just Zepeto; other metaverse platforms can improve on this too, because I don't see many platforms that have physics for the clothing itself. It would be great if clothing physics could be implemented in the world space as well as in the AR camera. It would add extra immersion and fidelity to the whole experience. 

Angelica: Yeah. It would also help with making those small micro interactions really fun. Let's say there's a skydiving experience that's in Zepeto and someone is jumping off of the plane and is doing their skydive.

Leah: Yeah.

Angelica: It'd be cool. If the physics of the clothing would react to, like this virtual wind that is happening, or something like that. Or if it's a really puffy sweater, it kind of like blows up because all of the air is kind of getting stuck in it. Those are just the fun things that make people get even more immersed within the environment too. 

Moving forward with creating branded experiences, having a closer relationship with Zepeto’s support and development teams will be really helpful for a lot of the things the BuildIt platform restricts. By collaborating with Zepeto and using the Zepeto plugin for Unity, we can unlock a lot of interactions that make the experience a lot deeper. 

The other thing to mention here is that it'd be really great to see Zepeto integrate with other social media platforms, not just its own. We've talked a lot about how Zepeto is a really powerful platform because it combines social with the virtual experience. It would just be great if, say, there's an experience happening in Zepeto, we take a photo or video and want to post it, and in one swoop it could be posted to Instagram, to Twitter, to Facebook and all of those things, instead of being stuck within the Zepeto ecosystem.

Right now, all the cool stuff we're describing stays within this platform and isn't necessarily shared outside of it unless you manually repost it. That's how it works with Zepeto today, but it'd be really great if all those rich features we get with Zepeto could be extended to other platforms.

And I mean, there's already platform fatigue from having to keep up with five or more social media platforms. Auto-captioning for Instagram, or having an experience in Zepeto and moving it straight to what I wanna post on Twitter, would just make the process so much easier. 

Leah: The full integration of that might take some time…

Angelica: Mm-hmm 

Leah: since there are more things to consider such as data privacy. 

Angelica: Yep. 

Leah: But we might say it's coming faster in APAC. If one day the metaverse platform is integrated into the Super Apps. Just imagine by then it would be truly one ecosystem. 

Angelica: Exactly. It'd be a really powerful way to have everything in one place. Meta has tried connecting what you do virtually to other social media platforms, specifically within its own ecosystem of Facebook, but it's had mixed success. There's just not as much “I'm posting what I'm doing in VR to Facebook” traction as there is with going into Zepeto, having this experience, posting it, and people randomly showing up because of the social features. You can see that immediate interaction. It'd be really great to see this integration extend beyond Zepeto's own social into other social media experiences to really expand its reach. Also particularly because of the virtual influencer aspect of things: just imagine the facial mocap that you do within Zepeto livestreaming to Instagram, Facebook, and multiple platforms at once. That would really increase the visibility and social clout of that virtual influencer. 

So we're getting towards the end. Let's go ahead and think about what are some concrete takeaways that the audience can implement and use within their daily lives, as they're considering Zepeto. And then also just in general, the APAC trends that we're seeing here.

Something that I think of is: gaming and social media don't have to be separate anymore. Like when playing online experiences, traditionally, it'll be either playing Warhammer on Steam and having the voice app within there, or opening up Roblox and a Discord channel. But those are two separate platforms: one to connect and one to play. With Zepeto, it's really inspiring to think about how those interactions can be in one. And not just voice, but the social aspect and everything that comes with that. It's really the next level of getting closer to what we talk about the metaverse can be. And Zepeto is really inspiring in that way. 

Leah: Yeah. To your point about the social aspect: Zepeto is actually what we need right now. We can't expect everyone to dive directly into the virtual without connecting it to their social life in the real world. And Zepeto has the potential to bridge the gap between our social life in the physical world and the digital one. 

Angelica: Yeah, Zepeto is a sleeping giant of sorts, with huge potential for a global audience. It is accessible in countries outside of the APAC region, like we mentioned, but there's just not as much buzz around it as the platform deserves. Other platforms have tried to achieve the integration Zepeto has across those three categories of virtual influencers, social media and experiences, but they just haven't succeeded to the degree Zepeto has.

So Decentraland, the Sandbox, Roblox, Fortnite, Horizon Worlds…all those platforms have tried to get this integration, but it just has not been as successful. Something else to keep in mind, and why Zepeto is just a really great platform, is that there have already been brand activations on Zepeto.

There have been concerts with virtual representations of BTS and even Selena Gomez. Like the Fortnite concert we all applauded a few years ago, Zepeto has already been in those realms. There's a Samsung activation, a Honda activation, and a Gucci one as well.

And those are definitely getting a lot of traction and movement with the people who take part in those experiences. And because it's integrated within its own social media ecosystem, with purchasable items and virtual influencers, there's just so much potential for the kind of impact and interaction brands can have with consumers as they move into these spaces.

Leah: Yeah. The last thing we learned from this region: currently the West and the East still feel very distinct, technologically and culturally, with some crossover happening, but not as much as we would like to see. Things like virtual influencers, technology in retail, Super Apps, and the increased use of digital payments have been used to deepen connections with consumers and enhance ease of use. It would be amazing to see that more widely integrated within the West.

Angelica: Yeah, exactly. There's already a lot of cultural and technological crossover into Eastern countries; US culture and colloquialisms always make their way around the globe. It would be really great to see the really impactful technological and cultural innovations happening in the East make their way more holistically toward the West as well. Not just here or there, but in the way Google has been embraced within APAC, it'd be great to have some of those APAC platforms integrated in the West. There's a lot each can learn from the other and build on. It's not necessarily about distinguishing the West from the East, because we talked about that quite a bit, but about how we can improve experiences for consumers globally. There are a lot of ways technology can empower people to have those deeper connections, and brands can be a part of that story.

Leah: Yeah. 

Angelica: So that's a wrap! Thanks everybody for listening to the Scrap The Manual Podcast. Be sure to check out our blog post for more information, references, and also a link to our prototype. Remember to check out the Netherlands references and also the hidden fifth world within that prototype. If you like what you hear, please subscribe and share! You can find us on Spotify, Apple Podcasts and wherever you get your podcasts.

Leah: If you want to suggest topics, segment ideas, or general feedback, feel free to email us at scrapthemanual@mediamonks.com. If you want to partner with Media.Monks Labs, feel free to reach out to us at that same email address. 

Angelica: Until next time!

Leah: Bye.

Our Labs.Monks provide insight into APAC’s emerging AI, AR, automation, and metaverse technologies–along with a sneak peek into the prototype leveraging an upcoming tech from the region. artificial intelligence AR augmented reality technology emerging technology
