Systems Modernization • Modernizing Mindset, Methods and Operations

  • Client

    First American

  • Solutions

    Technology Services, Technology Consulting, Digital Product Delivery, Full Stack Teams, Technology Training & Coaching

Results

  • 14 different IT systems rationalized to one new platform
  • 200+ pain points eliminated
  • 1.2M working hours saved in offices
  • 90% fewer employees lost to workload attrition
  • 45% of workload offloaded from offices to new tooling
First American software running on a laptop

A long-term partner devoted to speed, efficiency and scale.

Our partnership with First American, a global provider of title insurance and settlement services, is punctuated by numerous projects focused on digital transformation and overhauling legacy systems, all designed to ensure First American’s employees connect with each other and their customers more efficiently. These initiatives have not only helped the business save millions but have also unified teams and core systems while adapting to global challenges.

A mosaic of First American application screens

Overhauling legacy systems to boost efficiencies.

A key focus of our partnership with First American is modernizing systems. For instance, the brand had an urgent need to overhaul outdated systems at the core of their business: managing sensitive documents and processing billions of dollars in transactions. We built a platform to automate deal flow, reduce manual data input, and securely manage user access rights—consolidating five disparate systems into one scalable platform that improved employee training times, customer retention, and order volume. This led to increased revenue and bolstered First American’s reputation as a technology and customer experience leader.

Speeding up collaboration and connection across teams.

Often, these digital transformation efforts help unify key roles throughout the business. When First American needed to unify digital file management across departments, replacing obsolete technology and reducing costs from multi-system inefficiencies in the process, we built a shared library of components, ensuring rapid product delivery while implementing a technology strategy that allows for IT innovation. This new system set First American up to save millions annually and helped align senior executives across the organization.

First American software running on a laptop

Improving the employee and customer experience in one fell swoop.

Helping First American teams work and collaborate ultimately helps the business connect with its customers. When First American wanted to upgrade their marketplace experience, we developed and prototyped products to reimagine how employees and customers worked together, which formed the basis for a production system we built over the subsequent 13 months. These efforts provided the corporate IT group with more effective product development capability.

First American email from user interface to conversion

Unlocking a global workforce.

First American realized the cost benefits of leveraging an international workforce but was struggling to adapt to global challenges. We designed a next-generation platform that addressed employee turnover, training, domain expertise, and technological efficiency, empowering First American to capture those cost benefits through increased output from its international teams. This engagement, along with the others above, showcases how our strong partnership with First American has driven numerous efficiencies and growth for the brand.

  • A person using the First American platform on a laptop
  • Mobile app of the First American platform
  • A person using the First American platform on an iPad

In partnership with

  • First American
Client Words

“We have tried [reinventing the marketplace experience] before and failed. We could not have done this without you.”
First American Logo

C-Level Executive Stakeholder

First American


Can’t get enough? Here is some related work for you!

Vision Pro Is a Mixed Reality Milestone—Here’s What It Means for Brands


Experience, Extended reality, Immersive Brand Storytelling, Metaverse, VR & Live Video Production · 4 min read

Written by
Monks

A person wearing an AR headset

If the reveal of Apple’s Vision Pro has made one thing clear, it’s that we’re at an inflection point where hardware innovation meets consumer behavior. Though it isn’t the first mixed reality headset on the market (Magic Leap and Meta’s Oculus preceded it), it arrives at a moment when the industry is poised to redefine how we interact with digital content.

“This is perhaps one of the most hotly anticipated product launches in recent years,” says our VP, Interactive Projects Simon Joseph. “It not only gives credit to the field of augmented and mixed reality, but also to its staying power and the potential for the future to come. For the era of spatial computing and AR, this is only just beginning, and we are so excited to see where it goes from here.”   

Anyone who has ever dabbled in augmented reality (AR) knows that it’s a powerful tool for capturing people’s attention and standing out in a crowded market by seamlessly blending digital content with the physical world through visual overlays, engaging audio and motion control. Parallel to the metaverse’s rise in the cultural consciousness, these immersive features are proving advantageous to brands who aim to shine in an abundance of content, stuffed social feeds and crowded app ecosystems. On top of that, the technology promises to evoke truly memorable and emotional responses in consumers. 

Innovations across the board are helping AR advance at speed.

Compared to consumers, brands have been slower to recognize AR’s practical use. Data from Snap and Ipsos shows that 90% of brands think AR is primarily for fun, while only 57% of consumers think of it that way, instead seeing potential in activities such as shopping. As a trio of technological forces—not just hardware, but also software and heightened connectivity—converge to enable a new breed of AR experiences, we believe brands will realize AR’s potential across the customer journey. 

New AR headsets are gaining interest and intrigue—there will be over 1.7 billion active AR devices worldwide in 2024, and 18 million AR/VR headsets will ship this year—but software like visual positioning systems will also greatly enhance multiplayer digital experiences on mobile devices. Moreover, 5G Advanced is set to improve speed, coverage, mobility and power efficiency, which means lower latency and fewer cache limitations as people stream high-quality experiences in real time.

The fact that AR experiences will become more easily accessible for consumers is great news for brands, because AR’s value extends from the top to the bottom of the sales funnel. Research from WARC found that “AR ads capture the attention of broad audiences who are early in their purchase journey, with a +7% increase in aided ad recall among this group of consumers. And AR can help brands nudge consumers who are in the consideration phase by making the brand seem more up-to-date and differentiated.” 

Dive in head first to get ahead.

Time has shown that early adopters can reap first-mover rewards, and the present moment offers brands a chance to get ahead: with the launch of new hardware comes a new app marketplace, and early explorers of AR are primed to benefit from being quick to take the plunge. That said, effectively introducing AR into your customer experience journey requires careful consideration—questions around the medium, culture fit, and collaborating with vendors are bound to come up—so here are some chief concerns marketers should consider in setting themselves up for success.

A table showing 3D moxy hotel perks
A phone showing an augmented avatar

For starters, find out whether immersive AR experiences will excite your audiences. To understand how AR might make sense for your brand, follow the “jobs to be done” framework, an important tool for assessing any innovation. Consider customer needs and the motivations that drive them, as well as the circumstances in which they achieve them. 

Furthermore, make sure you take advantage of the medium. Whether you’re aiming to drive powerful immersion through interactive content or overlay real-world contexts with useful information, the medium determines the benefits. That’s why it’s important to carefully plan how certain benefits from AR can help your brand achieve its goals. 

Finally, explore other tools that aid AR development. Thanks to software kits and frameworks, creating AR experiences has never been easier, and with new urgency to develop immersive 3D content, various AI-powered tools have emerged to streamline content creation. Nvidia’s Instant NeRF lets teams quickly create digital doubles of photographed objects, Stability for Blender brings Stable Diffusion into 3D software, and Unity AI pairs the Unity game engine with large language models to build entire scenes from a written prompt.

It’s time to break the mold and trust the potential of AR. 

AR is an undeniably powerful tool for brands to connect with their audiences. Through immersive and interactive experiences, this technology is transforming the traditional customer journey, offering a blend of entertainment and utility that captures people’s attention and drives engagement. Several brands are already shaping the future of consumer engagement. By exploring the vast possibilities of AR, addressing key considerations, and leveraging innovative technologies, your brand can unlock the full potential of the technology, too, cementing your position as a leader in this rapidly evolving landscape.


Blue Sky Thinking with Salesforce Data Cloud


Consumer Insights & Activation, Data, Data Privacy & Governance, Data Strategy & Advisory, Data maturity, Death of the cookie · 1 min read

Written by
Monks

gray background with colorful lines

Unlock deep customer insights with a CDP

While the nomenclature of Data Cloud might sound soft and fluffy, a CDP is anything but. CDPs can deliver value across an organization, from marketing operations to IT, data science to paid media, but it’s important to take a few key considerations into account before making the leap.

In this report, you will learn how to handle key considerations like data governance, efficiency management, virtualization principles, consent management, unification, and activation to build a holistic view of what is happening in your business.

gray background with text that reads "Blue Sky Thinking with Salesforce Data Cloud"

You’re one download away from…

  • Understanding governance and privacy standards that come with CDP adoption
  • Seeing how CDPs bridge the gap between the CMO and CIO
  • Assessing your readiness to implement a CDP


How AI Is Changing Everything You Know About Marketing


AI, AI & Emerging Technology Consulting, AI Consulting, Digital transformation, New paths to growth, Technology Consulting, Technology Services · 1 min read

Written by
Monks

How AI Is Changing Everything You Know About Marketing

Artificial intelligence is disrupting every aspect of business across content, data, digital media, and technology. The delivery of hyper-personalized experiences, real-time insights via predictive marketing intelligence, and the emergence of owned machine learning models are just a handful of ways that AI has turned business as usual into an unfamiliar landscape that continues to evolve in the blink of an eye.

Indeed, the efficiencies and opportunities that AI enables can radically uplevel brand experience and output, though unlocking their true potential relies on understanding how to equip teams to use the technology effectively. Those who can fully leverage the power of AI and infuse it within every aspect of their business will dominate the market. But for those lagging behind, this is a Kodak moment: there will be no loyalty for businesses that are slow to deliver AI-powered experiences that make consumers’ lives easier.

Throughout this guide, we’ll showcase AI’s potential to transform marketing today and tomorrow, as well as the actions you can take right now to reap those rewards and lead in the new era.

A person with glitching faces in a collage-style design

You’re one download away from…

  • Preparing for your journey to AI transformation now
  • Establishing a strong data foundation to serve AI innovation
  • Finally unlocking true personalization across the customer journey
  • Future-proofing your business culture and teams for the new era


Scrap the Manual: Generative AI

Scrap the Manual: Generative AI

19 min read

Written by
Labs.Monks

a blue backdrop with the copy "Generative AI (Artificial Intelligence)"

Generative AI has taken the creative industry by storm, flooding our social feeds with beautiful creations powered by the technology. But is it here to stay? And what should creators keep in mind?

In this episode of Scrap the Manual, host Angelica Ortiz is joined by fellow Creative Technologist Samuel Snider-Held, who specializes in machine learning and Generative AI. Together, Sam and Angelica answer questions from our audience—breaking down the buzzword into tangible considerations and takeaways—and why embracing Generative AI could be a good thing for creators and brands.

Read the discussion below or listen to the episode on your preferred podcast platform.


Angelica: Hey everyone. Welcome to Scrap the Manual, a podcast where we prompt "aha" moments through discussions of technology, creativity, experimentation and how all those work together to address cultural and business challenges. My name's Angelica, and I'm joined today by a very special guest host, Sam Snider-Held.

Sam: Hey, great to be here. My name's Sam. We're both Senior Creative Techs with Media.Monks. I work out of New York City, specifically on machine learning and Generative AI, while Angelica's working from the Netherlands office with the Labs.Monks team.

Angelica: For this episode, we're going to be switching things up a bit and introducing a new segment where we bring a specialist and go over some common misconceptions on a certain tech.

And, oh boy, are we starting off with a big one: Generative AI. You know, the one that's inspired the long scrolls of Midjourney, Stable Diffusion and DALL-E images and the tech that people just can't seem to get enough of these past few months. We just recently covered this topic in our Labs Report, so if you haven't already checked that out, definitely go do that. It's not needed to listen to this episode, of course, but it'll definitely help in covering the high-level overview of things. And we also did a prototype that goes more in depth on how we at Media.Monks are looking into this technology and how we implement it within our workflows.

We gathered today's list of misconceptions to bust or confirm from people across the globe, ranging from art directors to technical directors, to get a variety of perspectives on this topic. So let's go ahead and start with the basics: What in the world is Generative AI?

Sam: Yeah, so from a high level sense, you can think about generative models as AI algorithms that can generate new content based off of the patterns inherent in its training data set. So that might be a bit complex. So another way to explain it is since the dawn of the deep learning revolution back in 2012, computers have been getting increasingly better at understanding what's in an image, the contents of an image. So for instance, you can show a picture of a cat to a computer now and it will be like, "oh yeah, that's a cat." But if you show it, perhaps, a picture of a dog, it'll say, "No, that's not a cat. That's a dog."

So you can think of this as discriminative machine learning. It is discriminating whether or not that is a picture of a dog or a cat. It's discriminating what group of things this picture belongs to. Now with Generative AI, it's trying to do something a little bit different: It's trying to understand what “catness” is. What are the defining features of what makes up a cat image in a picture?

And once you can do that, once you have a function that can describe “catness”, well, then you can just sample from that function and turn it into all sorts of new cats. Cats that the algorithm's actually never seen before, but it just has this idea of “catness” creativity that you can use to create new images.
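Sam's distinction can be sketched with a toy example: a discriminative model decides which class a point belongs to, while a generative model estimates the distribution of the data itself and then samples new points from it. Here is a minimal, purely illustrative numpy sketch; the two-feature "cat" data is invented for the example and stands in for what a real model would learn from images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented "cat images," each reduced to two features
# (say, ear pointiness and whisker length).
cats = rng.normal(loc=[0.8, 0.6], scale=0.05, size=(500, 2))

# "Learning catness" here just means estimating the data's distribution.
mean = cats.mean(axis=0)
cov = np.cov(cats, rowvar=False)

# Sampling from that learned distribution yields new "cats"
# the model has never seen before.
new_cats = rng.multivariate_normal(mean, cov, size=3)
print(new_cats.shape)  # (3, 2)
```

Real generative models replace the Gaussian with a deep network, but the principle is the same: capture the distribution, then sample from it.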

Angelica: I've heard AI generally described as a child, where you pretty much have to teach it everything. It's starting from a blank slate, but over the course of the years, it is no longer a blank slate. It's been learning from all the different types of training sets that we've been giving it. From various researchers, various teams over the course of time, so it's not blank anymore, but it's interesting to think about what we as humans take for granted and being like, "Oh that's definitely a cat." Or what's a cat versus a lion? Or a cat versus a tiger? Those are the things that we know of, but we have to actually teach AI these things.

Sam: Yeah. They're getting to a point where they're moving past that. They all started with this idea of being these expert systems. These things that could only generate pictures of cats...could only generate pictures of dogs.

But now we're in this new sort of generative pre-training paradigm, where you have these models that are trained by these massive corporations and they have the money to create these things, but then they often open source them to someone else, and those models are actually very generalized. They can very quickly turn their knowledge into something else.

So if it was trained on generating this one thing, you do what we call “fine tuning”, where you train it on another data set to very quickly learn how to generate specifically Bengal cats or tigers or stuff like that. But that is moving more and more towards what we want from artificial intelligence algorithms.

We want them to be generalized. We don't want to have to train a new model for every different task. So we are moving in that direction. And of course they learn from the internet. So anything that's on the internet is probably going to be in those models.

Angelica: Yeah. Speaking of fine tuning, that reminds me of when we were doing some R&D for a project and we were looking into how to fine tune Stable Diffusion for a product model. They wanted to be able to generate these distinctive backgrounds, but have the product always be consistent first and foremost. And that's tricky, right? When thinking about Generative AI and it wanting to do its own thing because either it doesn't know better or you weren't necessarily very specific on the prompts to be able to get the product consistent. But now, because of this fine tuning, I feel like it's actually making it more viable of a product because then we don't feel like it's this uncontrollable platform. It's something that we could actually leverage for an application that is more consistent than it may have been otherwise. 
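The fine-tuning idea Angelica describes can be illustrated in miniature. This is not the Stable Diffusion training pipeline; it is a toy numpy sketch in which a "pretrained backbone" (here, just a fixed random projection) stays frozen while only a small head is trained on a new, narrow task:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained backbone: a frozen feature projection.
W_frozen = rng.normal(size=(10, 8))

def backbone(x):
    # Frozen during fine-tuning: no gradient updates touch W_frozen.
    return x @ W_frozen

# Small labeled dataset for the new task.
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(float)

# Fine-tuning: train only a small logistic head on the frozen features.
feats = backbone(X)
w, b, lr = np.zeros(8), 0.0, 0.1
for _ in range(1000):
    p = 1 / (1 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    grad = p - y                            # logistic-loss gradient
    w -= lr * feats.T @ grad / len(X)
    b -= lr * grad.mean()

acc = ((1 / (1 + np.exp(-(feats @ w + b))) > 0.5) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the small head is updated, adapting to a new concept needs far less data and compute than training the whole model, which is exactly why fine-tuning makes generative models controllable enough for product work.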

So the next question we got is: with all of the focus on Midjourney prompts being posted on LinkedIn and Twitter, is Generative AI simply just a pretty face? Is it only for generating cool images?

Sam: I would definitely say no. It's not just images. It's audio. It's text. Any type of data set you put into it, it should be able to create that generative model on that dataset. The amount of innovation in the space is just staggering.

Angelica: What I think is really interesting about this field is not only just how quickly it's advanced in such a short period of time, but also the implementation has been so wide and varied.

Sam: Mm-hmm.

Angelica: So we talked about generating images, generating text and audio and video, but I had seen that Stable Diffusion is being used for generating different types of VR spaces, for example. Or it's Stable Diffusion powered processes, or not even just Stable Diffusion... just different types of Generative AI models to create 3D models and being able to create all these other things that are outside of images. There's just so much advancement within a short period of time.

Sam: Yeah, a lot of this stuff you can think about like LEGO blocks. You know, a lot of these models that we're talking about are past this generative pre-training paradigm shift, where you're using these amazingly powerful models trained by big companies and pairing them together to do different sorts of things. One of the big ones powering this, which came from OpenAI, was CLIP. This is the model that allows you to map text and images into the same vector space, so that if you put in an image and a text, it will understand that those are the same thing from a very mathematical standpoint. These were some of the first things where people were like, "Oh my gosh, it can really generate text and it looks like a human wrote it and it's coherent and it circles back in on itself. It knows what it wrote five paragraphs back." And so, people started to think, "What if we could do this with images?" And then maybe instead of having text and images mapped to the same space, it's text to song, or text to 3D models?
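The "same vector space" idea can be shown with plain cosine similarity. The embeddings below are invented numbers, not real CLIP outputs; in an actual CLIP model, separate text and image encoders are trained so that matching pairs land close together in the shared space:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: near 1.0 means the vectors point the same way.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented embeddings standing in for encoder outputs in a shared space.
text_cat  = np.array([0.9, 0.1, 0.2])    # text encoder("a photo of a cat")
image_cat = np.array([0.85, 0.15, 0.25]) # image encoder(cat photo)
image_dog = np.array([0.1, 0.9, 0.3])    # image encoder(dog photo)

# The caption sits far closer to the matching image than the mismatched one.
print(cosine(text_cat, image_cat) > cosine(text_cat, image_dog))  # True
```

Ranking images (or songs, or 3D models) against a text prompt by this kind of similarity is what lets the "LEGO blocks" Sam mentions be snapped together across modalities.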

And that's how all of this started. You have people going down the evolutionary tree of AI and then all of a sudden, somebody comes out with something new and people abandon that tree and move on to another branch. And this is what's so interesting about it: Whatever it is you do, there's some cool way to incorporate Generative AI into your workflow.

Angelica: Yeah, that reminds me of another question that we got that's a little bit further down the list, but I think it relates really well with what you just mentioned. Is Generative AI gonna take our jobs? I remember there was a conversation a few years ago, and it still happens today as well, where they were saying the creative industry is safe from AI. Because it's something that humans take creativity from a variety of different sources, and we all have different ways of how we get our creative ideas. And there's a problem solving thing that's just inherently human. But with seeing all of these really cool prompts being generated, it's creating different things that even go beyond what we would've thought of. What are your thoughts on that?

Sam: Um, so this is a difficult question. It's really hard to predict the future of this stuff. Will it? I don't know.

I like to think about this in terms of “singularity-lite technology.” What I mean by singularity-lite technology is a technology that can zero out entire industries. The one we're thinking about right now is stock photography and stock video. You know, it's hard to tell those companies that they're not facing an existential risk when anybody can download an algorithm that can basically generate the same quality of images without a subscription.

And so if you are working for one of those companies, you might be out of a job because that company's gonna go bankrupt. Now, is that going to happen? I don't know. Instead, try to understand how you incorporate it into your workflow. I think Shutterstock is incorporating this technology into their pipeline, too.

I think within the creative industry, we should really stop thinking that there's something that a human can do that an AI can't do. I think that's just not gonna be a relevant idea in the near future.

Angelica: Yeah. My perspective on it would be: it's not necessarily going to take our jobs, but it's going to evolve how we approach our jobs. We could think of a classic example of film editors, where they had physical reels to cut. And then when Premiere and After Effects came out, that process became digitized.

Sam: Yeah.

Angelica: And then further and further and further, right? So there's still video editors, it's just how they approach their job is a little bit different.

And same thing here. Where there'll still be art directors, but it'll be different on how they approach the work. Maybe it'll be a lot more efficient because they don't necessarily have to scour the internet for inspiration. Generative AI could be a part of that inspiration finding. It'll be a part of the generating of mockups and it won't be all human made. And we don't necessarily have to mourn the loss of it not being a hundred percent human made. It'll be something where it will allow art directors, creatives, creators of all different types to be able to even supercharge what they currently can do.

Sam: Yeah, that's definitely true. There's always going to be a product that comes out from NVIDIA or Adobe that allows you to use this technology in a very user friendly way.

Last month, a lot of blog posts brought up a good point: if you are an indie games company and you need some illustrations for your work, normally you would hire somebody to do that. But this is an alternative and it's cheaper and it's faster. And you can generate a lot of content in the course of an hour, way more than a hired illustrator could do.

It's probably not as good. But for people at that budget, at that level, they might take the dip in quality for the accessibility, the ease of use. There's places where it might change how people are doing business, what type of business they're doing.

Another thing is that sometimes we get projects that for us, we don't have enough time. It's not enough money. If we did do it, they would basically take our entire illustration team off the bench to work on this one project. And normally if a company came to us and we passed on it, they would go to another one. But perhaps now that we are investing more and more on this technology, we say, "Hey, listen, we can't put real people on it, but we have this team of AI engineers, and we can build this for you.” For our prototype, that's what we were really trying to understand is how much of this can we use right now and how much benefit is that going to give us? And the benefit was to allow this small team to start doing things that large teams could do for a fraction of the cost.

I think that's just going to be the nature of this type of acceleration. More and more people are going to be using it to get ahead. And because of that, other companies will do the same. Then it becomes sort of an AI creativity arms race, if you will. But I think that companies that have the ability to hire people that can go to their artists and say, "Hey, what things are you having problems with? What things do you not want to do? What things take too much time?" And then they can look at all the research that's coming out and say, "Hey, you know what? I think we can use this brand new model to make us make better art faster, better, cheaper." It protects them from any sort of tool that comes out in the future that might make it harder for them to get business. At the very least, just understanding how these things work and not from a black box perspective, but having an understanding of how they work.

Angelica: It seems like a safe bet, at least for the short term, is just to understand how the technology works. Like listening to this podcast is actually a great start.

Sam: Yeah.

Angelica: Right?

Sam: If you are an artist and you're curious, you can play around with it by yourself. Google Colab is a great resource. And Stable Diffusion is designed to run on cheap GPUs. Or you can start to use services like Midjourney to get a better handle on what's happening with it and how fast it's moving.

Angelica: Yeah, exactly. Another question that came through is: if I create something with Generative AI through Prompt Engineering, is that work really mine?

Sam: So this is starting to get into a little bit more of a philosophical question. Is it mine in the sense that I own it? Well, if the model says so, then yes. Stable Diffusion, I believe, comes with an MIT license. So that is like the most permissive license. If you generate an image with that, then it is technically yours, provided somebody doesn't come along and say, "The people who made Stable Diffusion didn't have the rights to offer you that license."

But until that happens, then yes, it is yours from an ownership point of view. Are you the creator? Are you the creative person generating that? That's a bit of a different question. That becomes a little bit murkier. How different is that between a creative director and illustrator going back and forth saying:

"I want this."

"No, I don't want that."

"No, you need to fix this."

"Oh, I liked what you did there."

"That's really great. I didn't think about that."

Who's the owner in that solution? Ideally, it's the company that hires both of them. This is something that's gonna have to play out in the legal courts if they get there. I know a lot of people already have opinions on who is going to win all the legal challenges, and that is just starting to happen right now.

Angelica: Yeah, from what I've seen in a lot of discussion, it's a co-creation platform of sorts, where you have to know what to say in order to get it to be the right outcome. So if you say, “I want an underwater scene that has mermaids floating and gold neon coral,” it'll generate certain types of visuals based off of that, but it may not be the visuals you want.

Then that's where it gets nitpicky into styles and references. That's where the artists come into play, where it's a Dalí or Picasso version of an underwater scene. We've even seen prompts that use Unreal...

Sam: Mm-hmm

Angelica: ...as a way to describe artistic styles. Generative AI can create things from a basic prompt, but there's a back and forth, kinda like you were describing with a director and illustrator, using the right words and key terms and fine-tuning to get the desired outcome.
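The prompt-refinement loop described here, starting from a subject and layering in artist references and key terms, can be sketched as plain string assembly. This is purely illustrative and not part of any model's API; the function name and parameters are made up for the example:

```python
# Hypothetical helper: assemble a text-to-image prompt from a subject,
# style references, and quality modifiers, the way the back-and-forth
# above gets encoded into key terms.

def build_prompt(subject, styles=(), modifiers=()):
    """Join a subject with style references and extra modifiers."""
    parts = [subject]
    parts += [f"in the style of {s}" for s in styles]
    parts += list(modifiers)
    return ", ".join(parts)

prompt = build_prompt(
    "an underwater scene with mermaids floating and gold neon coral",
    styles=["Picasso"],
    modifiers=["Unreal Engine", "highly detailed"],
)
print(prompt)
```

In practice, each generated image feeds back into the next revision of the prompt, which is what makes the process feel like directing an illustrator.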

Sam: Definitely, and I think this is a very specific question for this generation of models. They are designed to work text-to-image, and there are a lot of reasons why. A lot of this research is built on the backs of transformers, which were initially language generation models. If you talk to any sort of artist, the idea that you're creating art by typing is very counterintuitive to what they spent years learning and training to do. You know, artists create images by drawing or painting or manipulating creative software, and that's a way more gestural interface. And I think that as the technology evolves, and as we start building more and more of these technologies with the artist in mind, we're gonna see more of these image-based interfaces.

And Stable Diffusion has that: you can draw a sort of MS Paint-type image and then say, "Alright, now I want this to be an image of a landscape, but in the style of a specific artist." So then it's not just writing text and waiting for the output to come in; I'm drawing into it too, so we're both working more collaboratively. But I think in the future, you might also find algorithms that are way more in tune with specific artists, with how the person making the art likes to work. I think this is gonna be less of a question in the future. At some point, all of these things will be in your Photoshop or your creative software, and at that point we won't even think about it as AI anymore. It's just a tool in Photoshop that we use. Photoshop already has neural filters and Content-Aware Fill, and no one really thinks about these questions when using them. It's just this moment we're in right now that's posing a lot of questions.

Angelica: Yeah. The most interesting executions of technology have been when it fades into the background. Or to your point, we don't necessarily say, "Oh, that's AI", or "Yep, that's AR". That's a classic one too. We just know it from the utility it provides us. And like Google Translate, for example, that could be associated with AR if you use the camera and it actually overlays the text in front. But the majority of people aren't thinking, oh, this is Google Translate using AR. We don't think about it like that. We're just like, "Oh, okay, cool. This is helping me out here."

Sam: Yeah, just think about all the students that are applying to art school this year and they're going into their undergrad art degree and by next year it's gonna be easier to use all this technology. And I think their understanding of it is gonna be very different than our understanding of people who never had this technology when we were in undergrad. You know, it's changing very quickly. It's changing how people work very rapidly too.

Angelica: Right. Another question came relating to copyright usage, which you touched on a little bit, and that's something that's an evolving conversation already in the courts, or even out of court–or if you're looking in the terms and conditions of Midjourney and DALL-E and Stable Diffusion.

Sam: When you download the model from Hugging Face, you have to agree to certain terms and conditions. I think it's basically a legal stopgap for them.

Angelica: Yep.

Sam: If I use these, am I going to get sued? You'd want to talk to a copyright lawyer or attorney, but I don't think they know the answer just yet either. What I will say is that many of the companies that create these algorithms–your OpenAIs, your Googles, your NVIDIAs–also have large lobbying teams, and they're going to try to push the law in a way that doesn't get them sued. You might see that in the near future: these companies can throw so much money at the legal issue that, in protecting themselves, they protect all the people who use their software. The way I like to talk about it is, and maybe I'm dating myself, if you think all the way back to the early 2000s with Napster and file sharing, it didn't work out so well for the artists. That technology completely changed their industry and how they make money. Artists do not make money off of selling records anymore, because anyone can get them for free; they make money now primarily through merchandise and touring. Perhaps something like that is going to happen here.

Angelica: Yeah. When you brought up Napster, that reminded me of a sidetrack story where I got Napster and it was legitimate at that time, but every time I was like, "Oh yeah, I have this song on Napster." They were like, "Mmmm?" They're giving me a side eye because of where Napster came from and the illegal downloading. It's like, "No, it's legit. I swear I just got a gift card." 

Sam: [laughter] Well, yeah, many of us now listen to all of our music on Spotify. That evolved into a model where artists are paid in a way that is sometimes very predatory, and something like that could happen to artists with these models. It doesn't look like history provides good examples where the artists win or come out on top. So again, something to think about if you are one of these artists: how do I prepare for this? How do I deal with it? At the end of the day, people are still gonna want a top fantasy illustrator to work on their project, but people who aren't as famous, maybe those people are going to suffer a bit more.

Angelica: Right. There's also been a discussion on: can artists be exempted from being a part of prompts? For example, there was a really long Twitter thread, we'll link it in the show notes, but it was pretty much one artist discussing how a lot of art was being generated using her name in the prompt, and it looked very similar to what she would create. Should she get a commission because it used her name and her style to generate that? Those are the questions there. Or if artists are able to get exempted, does that also limit the kind of creative output Generative AI is able to produce? Because now it's not an open forum anymore where you can use any artist. Now we're gonna see a lot of Picasso uses because that one hasn't been exempted, or more indie artists aren't represented because they don't want to be.

Sam: I don't think the exemptions these companies are creating are really going to work. One of my favorite things about artificial intelligence is that it's one of the most advanced technologies that's ever existed, and it's also one of the most open. So exemptions will work on their platforms, because they can control those, but it's an extremely open technology. All these companies are putting out some of their most stellar code and trained models. There's DreamBooth now, where you can basically take Stable Diffusion and fine-tune it on a specific artist using a hundred images or fewer.

Even if a company does create these exemptions, so you can't create images on Midjourney or DALL-E 2 in the style of Yoshitaka Amano or something like that, it wouldn't be so hard for somebody to just download the freely available pretrained models, fine-tune them on Yoshitaka Amano images, and then create art like that. The barrier to entry isn't high enough for this to be a solution.

Angelica: Yeah, the mainstream platforms could enforce the exemptions, but if someone were to train their own model, they could still do it.

Sam: It's starting to become kind of a wild west, and I can understand why certain artists are angry and nervous. It's just...it's something that's happening, and if you wanna stop it, how do we stop it? It has to come from a very concerted legal effort: a bunch of people getting together saying, "We need to do this now, and this is how we want it to work." But can they do that faster than corporations can lobby to say, "No, we can do this"? You know, it's very hard for small groups of artists to combat corporations that basically run all of our technologies.

It's an interesting thing. I don't know what the answer is. We should probably talk to a lawyer about it.

Angelica: Yeah. There's other technologies that have a similar conundrum as well. It's hard with emerging tech to control these things, especially when it is so open and anyone's able to contribute in either a good or a bad way.

Sam: Yeah, a hundred percent.

Angelica: That actually leads to our last question. It's not really a question, more of a statement. They mentioned that Generative AI seems like it's growing so fast and that it will get outta control soon. From my perspective, it's already starting to because of just the rapid iteration that's happening within this short period of time.

Sam: Even for us, we spend time engineering with these tools and creating projects that use them, and we'll be halfway through when all these new technologies come out that might be better to use. Yeah, it does give a little bit of anxiety: "Am I using the right one? What's it going to take to change technologies right now?" Do you wait for the technology to advance, to become cheaper?

If you think about a company like Midjourney, they spent all this investment money on creating a platform because, theoretically, only they can make it and it's very hard for other companies to recreate the business. But then six months later, Stable Diffusion comes out. It's open source; anyone can download it. And then two months later somebody open-sources a full-on scalable web platform. It's just that sort of thing where it evolves so fast. How do you make business decisions about it? It's changing month to month at this point, whereas before it was changing every year or so. Now it's too fast. It does seem like it is starting to become that singularity-lite type of technology again. Who's to say it's going to continue like that? It's just so hard to predict the future with this stuff. It's more: what can I do right now, and is it going to save me money or time? If not, don't do it. If yes, then do it.

Angelica: Yeah. The types of technologies that get the most excitement are the ones that mobilize different types of people, which then makes the technology advance a lot faster. It just feels like towards the beginning of the summer, we were hearing, "Oh, DALL-E 2, yay! Awesome." And then it seemed to go exponentially fast from there based on a lot of the momentum. There was probably a lot of stuff behind the scenes that made it feel exponential. Would you say it was because a lot of interest brought a lot of people to the same topic at one point? Or do you feel like it might have always been coming to this point?

Sam: Yeah, I think so. Whenever you start to see technology that is really starting to deliver on its promise, I think, again, a lot of people become interested in it. The big thing about Stable Diffusion was that it was able to use a different type of model to compress the actual training size of the images, which then allowed it to train faster and then be able to be trained and executed on a single GPU. That type of thing is how a lot of this stuff goes. There's generally one big company that creates the, "We figured out how to do this." And then all these other companies and groups and researchers say, "Alright, now we know how to do this. How do we do it cheaper, faster, with less data, and more powerful?" And any time there's something that comes out like that, people start spending a lot of time and money on it.
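A rough back-of-the-envelope shows why that compression matters. Assuming the commonly cited Stable Diffusion v1 dimensions (512×512 RGB images encoded into 64×64×4 latents; these numbers are from the public model description, not from this conversation), each denoising step operates on far fewer values than it would on raw pixels:

```python
# Back-of-the-envelope: why latent diffusion is cheaper.
# Stable Diffusion v1 denoises a 64x64x4 latent tensor
# instead of a 512x512x3 RGB image.
image_values = 512 * 512 * 3   # values per raw RGB image
latent_values = 64 * 64 * 4    # values per latent tensor

print(image_values // latent_values)  # → 48, i.e. ~48x fewer values per step
```

That reduction is a big part of what made training faster and let the model run on a single consumer GPU.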

DALL-E was the thing that, I like to say, really demonstrated creative arithmetic. When you say, "I want you to draw me a Pikachu sitting on a goat," not only does it know what Pikachu and a goat look like, but it understands that in order for us to believe it's sitting on the goat, it has to be positioned in a very specific way: Pikachu's legs are on either side of it.

The idea that a machine can do that, something so similar to the way humans think, got a lot of people extremely excited. And at the time, I think it was 256 by 256 pixels. But now we are doing 2048 by 24... whatever size you want. And that's only two years later. So yeah, a lot of excitement, obviously.

I think it is one of those technologies that really gets people excited because it is starting to deliver on the promise of AI. Just like self-driving cars–AI doing protein folding–you're starting to see more and more examples of what it could be and how exciting and how beneficial it can be.

Angelica: Awesome! Well, we've covered quite a bit, lots of great info here. Thanks again, Sam, for coming on the show.

Sam: Yeah, thanks for having me.

Angelica: Thanks everyone for listening to the Scrap The Manual podcast!

If you like what you hear, please subscribe and share! You can find us on Spotify, Apple Podcasts and wherever you get your podcasts. If you want to suggest topics, segment ideas, or general feedback, feel free to email us at scrapthemanual@mediamonks.com. If you want to partner with Labs.Monks, feel free to reach out to us at that same email address. Until next time!

Sam: Bye!


Scrap the Manual: Tech Across APAC



Written by
Labs.Monks

Scrap the Manual - Asia Pacific

APAC is not only one of the most populous and diverse regions in the world, it is also leading the way in unique technologies and innovation. In this episode, host Angelica Ortiz is joined by fellow Media.Monks Creative Technologist Leah Zhao from our Singapore office. Together, Angelica and Leah give a TLDR overview of our newest Labs Report, Tech Across APAC, providing insight into the region's emerging AI, AR, automation, and metaverse technologies, along with a sneak peek at a prototype leveraging an up-and-coming tech from the region.

You can read the discussion below, or listen to the episode on your preferred podcast platform.


Angelica: Hey everyone! Welcome to Scrap The Manual, a podcast where we prompt “aha” moments through discussions of technology, creativity, experimentation, and how all those work together to address cultural and business challenges. My name is Angelica and we have a very special guest host. Yay!

Leah: Hi! It's great to be here, my name is Leah. We are both Creative Technologists with Media.Monks. I specifically work out of Media.Monks’ Singapore office.

Angelica: Today we're going to be giving a quick TLDR of one of our Labs Reports and deep diving into something we didn't get to cover in depth, such as expanding on the prototype we created, or a topic with some interesting rabbit holes that didn't fit neatly onto a slide. You know, that kind of thing.

Leah: So for this episode, we are going to be covering technology and innovation culture in the Asia-Pacific region. If you haven't had a chance to read our APAC Lab Report, here's a quick TLDR.

The most influential technologies from the region are AI and automation, AR and computer vision, and the metaverse. China and Japan are leading the growth in AI and machine learning, together with Singapore and South Korea. If you come to this region, you might be surprised how much people are embracing this advanced technology. People accept it because it is just so convenient, thanks to the Super Apps we have.

Angelica: To clarify for people who may not be familiar, what are Super Apps? 

Leah: Yeah. So Super Apps are mobile applications that provide multiple services. You may have heard of some of them, such as WeChat in China, Kakao from South Korea, the Line app from Japan (also widely used in Taiwan and Thailand), and Grab from Singapore, which is used across Southeast Asia. On a Super App, you can use multiple services, from online chatting, shopping, and food delivery to ride hailing and digital payments. We literally live our social and cultural lives on the Super Apps.

Angelica: Is it sort of like if Uber had everything in one app, but not necessarily branded? It's more just: I'm going to go to WeChat, and it'll call a ride, rent a scooter, or order in. You download one app versus having to download five different ones.

Leah: Yeah, definitely. But actually for WeChat, it's more complicated, I would say, because there is a whole ecosystem on WeChat. WeChat uses mini programs. Just think of a mini program as a microsite on WeChat…

Angelica: Mm-hmm. 

Leah: where brands can sell their products and offer food delivery services. And for other Super Apps, like the Line app and Grab, it's exactly like you said. One example is that Burberry launched its social retail store in collaboration with Tencent, which integrates its offline store with mini programs on WeChat. It enables special features in the store, such as earning social currencies by engaging with the brand and even raising your own animal-based avatars. This is pretty cool, as it links up our digital and physical experiences.

Angelica: Yeah. What I really liked about this example was how technology was seamlessly integrated throughout. It wasn't like, “Hey scan this one QR code.” It went a little bit further to say, “Okay, if you interact with this mini program, then you'll have access and unlock particular outfits or particular items for the digital avatar. You'll be able to actually unlock cafe items in the real store.” So it seemed like it was all a part of one ecosystem. It didn't feel tacked on. It was truly embedded within the holistic retail experience. I know with a lot of branded activations within the US specifically, there's always that question of, should it be accessible through a mobile website or is it something that we can use a downloaded app for? And most clients tend to go with the mobile website. 

Leah: Yeah. 

Angelica: Because there's this hesitancy to download yet another application just to do one more thing, and then there's worrying about the wifi strength on site when asking people to download these apps. But it'd be interesting for brands to create these mini programs within a larger Super App, so consumers don't have to do anything other than access the mini program rather than download something. Then there's a lot more flexibility in what brands can do: they're not limited to what's available on a mobile website, and they have the strength of what's possible with an app.

Leah: Yeah, agreed. So another observation from our report is that the metaverse is on the rise in the APAC region. It might even outpace the plans laid down in the West. Some platforms that drew our attention are Zepeto from South Korea and TME Land in China.

Angelica: Yeah, and what's cool about those platforms is the emphasis on virtual idols, avatars, and influencers. From the research we did, we noticed that certain countries are a bit more traditional culturally…

Leah: mm-hmm

Angelica: and stricter about how people can present their real selves, so the virtual world offers an escape from the cultural bounds of what people can and cannot be, whether it's deemed right or wrong or simply not accepted. People are going towards anonymity…

Leah: Yeah. 

Angelica: to be able to express themselves. Sort of like the Finstagram accounts in the US, or expressing themselves through these virtual influencers, because their virtual selves can be much freer than their real selves could be.

Leah: And also Asia has a rich fandom culture. So it's not a surprise that we see the emphasis on virtual idols and virtual influencers because it enables the fans to interact with the superstars anytime, anywhere.

Angelica: Yeah. And from a branding aspect as well, virtual influencers and avatars can be much easier to control. Think of all the controversies that happen because someone did something, either way back in their past or recently, that make brands nervous about endorsing real people, because people are flawed. With virtual influencers, you can control everything. You have teams of people controlling exactly what they look like, what their personality is, what they do. That flexibility and customizability is far greater than it would be with a real person who has real feelings.

So there are limitations on what a brand can do with a real person, where it's a lot more flexible with virtual influencers.

Okay, we've covered quite a lot there. There's a lot of really interesting examples that we see within the APAC region that definitely could be applied within Western countries as well. With this said, we're gonna go ahead and move on to what we did for the Labs Report prototype and expand a little bit more on our process.

Let's start with: what even was the prototype? For the prototype, we leveraged Zepeto. Zepeto is a metaverse-like experience-world platform…insert all the buzzwords here…where users can interact like you would in a Roblox world you go and experience, but with additional social features on top.

So what we would think of as an Instagram feed is embedded within the Zepeto platform. Instead of going to Instagram to talk about your Roblox experience, those two experiences are integrated in one platform. What we also wanted to achieve with this prototype was to leverage a technology that originated from the APAC region, specifically Zepeto. Zepeto is available globally for the most part, with a few exceptions, but it originated in South Korea. We really wanted to use Zepeto because it's available globally for most audiences, and it takes the currently fragmented way metaverse worlds are created and integrates them with virtual influencers and social media.

With these gamified, interactable experiences, the social aspects are really what make this particular platform shine. And we're also doing this because the metaverse, even a year or so later, is still an incredibly popular topic. People are still having a lot of discourse about what the metaverse is and what it can be, discussing how brands have already taken their first steps into the metaverse and how they're going to continue to grow.

And this is part of what we do a lot at Media.Monks. We get a lot of client requests for similar types of experiences, whether that be Roblox, Decentraland, Horizon Worlds, or Fortnite…and Zepeto is just a great platform that no one's really talking about much in the Western dialogue, but it's incredibly powerful and it reaches so many people. We saw it as an amazing platform that takes the promise of what the metaverse can and will be to the next level.

Leah: Yeah. I also like Zepeto because it not only has Asian-style avatars but also lets you customize your avatar's head, body, hair, outfits, and even the poses and dance steps it can do. With Zepeto you can purchase a lot of outfits and decorations with Zepeto's currency, which you earn through in-app purchases or by being more active on the platform.

Angelica: Yeah. There are two different types of currencies in Zepeto. One is called Zems, i.e., gems, and the other is coins. For creator-made items, you can set a price in Zems. Anything created by users can only be sold for Zems, which are very difficult to get for free within the app; that's where the free-to-play model tends to come in. With a Euro you can get 14 Zems, so you can buy more digital clothing. Then there are coins, which you start the experience with and can use to purchase Zepeto-created items. So that's how they differentiate the two.

Leah: But my favorite part about Zepeto is the social aspect as you mentioned earlier. For me, it's like TikTok in the metaverse because it has the Feed feature.

You know, there are three pages of the feed: For you, following, and popular. Under the feed you can see live streaming by the virtual influencers and you can have your own live stream as well.

Angelica: For the live stream, that's using some motion capture as well, because it's either pre-made models and moves, or people can actually have their face recognized in real time...

Leah: Yeah. 

Angelica: to then translate to that virtual avatar. 

Leah: Yeah. Zepeto has the Zepeto camera. With this camera, you can create content with your own avatar and the AR filter, which copies your facial expression quite accurately and even brings your avatar into real life. So you can place your own avatar on the table in your room.

Angelica: One part I also thought was really cool…you mentioned the poses earlier. Think about it: if we see a celebrity on the street, we're gonna take a photo with them, right? We can't just let that celebrity pass by without being like, "Oh yeah, I totally saw JLo in Miami," you know? The "take a photo or it didn't happen" type of thing, haha. There's a version of that on Zepeto: fans' virtual avatars can take a photo with your virtual avatar. So it takes the virtual autograph, of sorts, to a different level. You can live vicariously through your avatar by having it take a photo with your favorite celebrity or your favorite influencer. So I really love that aspect of being able to build that audience virtually as well.

Something also that's really cool about Zepeto is within those world experiences, the social aspects are still very much ingrained in there. It's not just, “Okay, you have this separate social feed, you have the separate virtual influencer side, and then you have the world.” They're all integrated.

An example of this: the other day we were testing out the Zepeto world, and we were all in the same experience together. Someone would take a selfie (and that's right, there is a selfie stick in this experience, and it looks exactly like what you would imagine, just the virtual version of it). And when someone takes a photo or a video, it automatically tags the people who were in it.

So it's generating all of this social momentum really, really quickly. As soon as you take that photo, you can either download it directly to your device or immediately upload it. What was great for me personally…figuring out just the right caption is something that takes me way too long, finding the right words and the right hashtags. But you don't even need to worry about captions when taking photos within these worlds. As soon as you say, "I wanna upload it," it automatically captions it, tags people, and suggests related hashtags so other people can find that experience from you.

So it's very seamless and easy. 

Leah: Yeah. That's amazing. 

Angelica: It's just like the next level of how it makes sharing super, super, super easy, so that's something I really like there too. 

Speaking of the worlds: now, within this next part of the prototyping process, it was up to us to determine the worldscape and interactions. And as a part of the concept, we wanted to create a world that plays into what real life influencers would be looking for when trying to fill their feed. And that is: creating content. Specifically: selfies. And so we created four different experiences that would have the ultimate selfie moment.

One, which is this party balloon atmosphere. Sort of think about these like really big balloons that you can kind of poke with the avatar as you move around, or even like jump on some of the balloons to get a higher view from it as well.

The second was like a summer pool party. You could actually swim in the pool. It would change the animation of the avatar when you're in the water part. And, you know, the classic, giant rubber ducky in the pool and all those things. So definitely brought you in the moment.

The third was an ethereal Japanese garden, so very much when wanting to get away and have a chill moment, that was definitely the vibe we were going for there.

And then lastly, we had the miniaturized city. So what you would think is the opposite of meditation is the hustle and bustle of the big city. And we created that experience as well. There is also a reference to the Netherlands. So you'll just have to keep an eye out for what that is and let us know if you find it.

Leah: Is there a hidden fifth environment?

Angelica: There it is. Yeah. You know, what was interesting is when we were testing out the environment and we were all together. 

Leah: Yeah. 

Angelica: We created our own room. 

Leah: Yeah. 

Angelica: And then we thought it was just gonna be the eight of us that were testing it out and then other people, random people showed up. 

Leah: Wow.

Angelica: I was just like, "Where did you guys come from?" There were two people who actually used the chat within the room, and they beelined directly to where that fifth environment was.

Leah: Yeah. 

Angelica: So it was just really interesting that people, one, were specifically coming to the world to experience it together.

Leah: Mm. 

Angelica: And then two, we saw a lot of random people. There would be dead spots where it’s just like, “okay it's just one of us in the room.” We're just testing it. But as soon as all of us got in there together and started taking photos, there were so many people that showed up. It's just like “What? This is insane!”

Leah: Was it the recommendation system on Zepeto?

Angelica: Yeah. That's what we're thinking. Because the room that was created…we thought it was private, but I guess it wasn't. It was probably a public room.

Leah: Yeah. 

Angelica: But it was interesting that as soon as we started playing around and posting content, then people were like, “Okay, I'll join this room.”

Leah: Yeah. Maybe because of tagging as well.

Angelica: Yeah, exactly. And that goes to our earlier point of how powerful that platform is: someone posts something, and other people want to be a part of it. There was one person who liked my post who had like 65,000 followers.

Leah: Whoa. 

Angelica: And I'm like, who are you? What is this? 

Leah: That's definitely a virtual idol. 

Angelica: Yeah, exactly. They only had like six posts though, which was a little weird, but they had so many followers. It was nuts. 

Leah: Actually today I just randomly went into a swimming pool party on Zepeto. I went into the world, people were playing with water guns together.

Angelica: Mm-hmm 

Leah: So I had just arrived. Landed. Then someone just shot me with a water gun and I was hit. I must have lost my health.

Angelica: Oh no! Haha, that sounds fun though. 

Leah: Yeah, that was fun. 

Angelica:  Was it like a big room? Like how many people were in that environment at once? 

Leah: When I was there, it was around 80 people in the world.

Angelica: Oh, wow.

Leah: Yeah, it's quite a lot actually. 

Angelica: There's definitely something to be said about how there's superfans of Zepeto. Like that's kind of part of the daily aspect of it. Being able to meet people through the social aspects and then hang out with them through these worlds.

But all this to say: this entire worldscape and all the interactions we included in the prototype were built within what they call their BuildIt platform.

Leah: It's quite user-friendly. It's very easy to create a world yourself even with zero experience of any 3D modeling software. 

Angelica: Yeah. BuildIt is like a 3D version of a website builder. You have the drag-and-drop type of thing, where instead of a 2D scrolling website experience, you now have that drag-and-drop functionality with a lot of different assets in a 3D space. We could also create experiences like this through Unity. The only caveat with Unity is that the experience we would create there would only be available on mobile devices. And we didn't wanna restrict the type of people that would be able to experience this. So we decided to do it in BuildIt, because the resulting worlds can be accessed on both desktop and mobile.

Leah: Other than the world space, you can also create clothes for your avatar to make it look more unique, with its own personality. So in our case, we created a more neutral-looking avatar with blue skin. Very cool: slightly edgy but approachable. And the process of creating clothes was very friendly. You just download the template and then add the textures in Photoshop. We chose a t-shirt, a jacket, a bomber and a windbreaker, and then we touched them up with some Eastern elements such as a dragon and a soft pink color, which matches our Shanghai office. Everyone can create their own unique clothes with simple editing of the textures.

Angelica: Yeah. We really wanted to play with clothing specifically because that's a part of this digital ecosystem of being an influencer. You may have branded experiences that you take part in, or brands sponsor you. Influencers will wear custom clothing that they either design themselves or that represents another brand. All those things we wanted to integrate within this.

So the influencers visiting this world could say, "Hey, I'm in this Media.Monks experience" or "insert brand here" experience, "and I'm also wearing their custom clothing." It's sort of a shout-out to the clothing as well as the world. So it's at the heart of this larger ecosystem: the world isn't separate from the clothes, which aren't separate from social. All of those elements are playing together, and this leads to creating social content.

Once we had the world and the merchandise solidified, we continued to build off this virtual influencer style by creating content of our own. What we did is we analyzed popular Zepeto influencers. We even made a list of the types of content they create: going to someone else's world, doing an AR feature with their real-life self, doing posed photos with other avatars. All of those were a part of the social content that we created as a part of this.

Now that the prototype is ready to go, it's time to think about what the prototype did not yet achieve but what we would really like to see in the future. So one thing that we recommend: branded, fully custom worlds should definitely be made within Unity to have the most flexibility. At the time of recording, though, worlds exported from Unity are only available on mobile devices. So, you know, that's something to keep in mind there.

Leah: For clothing creation, there are some limitations. For example, the maximum texture resolution we can upload is 512 x 512, which means we can't add detailed patterns or logos to our clothes. And we can't add physics to our clothing materials. That is another thing that I think the platform can improve.
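That 512 x 512 cap is easy to account for when preparing artwork. Here's a minimal sketch in Python; the 512-pixel cap is the only platform detail taken from our experience, and everything else is purely illustrative:

```python
# Zepeto clothing textures are capped at 512 x 512 pixels, so artwork
# prepared at a higher resolution has to be scaled down before upload.
MAX_SIDE = 512  # maximum texture resolution per side

def fit_within_limit(width: int, height: int, limit: int = MAX_SIDE) -> tuple[int, int]:
    """Proportionally shrink (width, height) so neither side exceeds limit."""
    if max(width, height) <= limit:
        return width, height  # already small enough; leave untouched
    scale = limit / max(width, height)
    return max(1, round(width * scale)), max(1, round(height * scale))
```

In practice you'd feed the result to an image editor or library's resize step before adding the texture to the downloaded clothing template: a 2048 x 1024 design, for instance, shrinks to 512 x 256 while keeping its aspect ratio.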

Angelica: Yeah. It's not able to show the fuzziness of a sweater, or if we're creating a dress or a shirt that needs to be flowy, it won't show that that shirt or that dress is fuzzy or flowy. It'll just be the pattern that's shown; the texture of how a piece of clothing might feel based on seeing it is not reflected there. So it's a give and take, where it's very easy to create clothing items

Leah: Yeah. 

Angelica: …but it doesn't go so far as to have a realistic look. 

Leah: Yeah, but I think this is something that's not just Zepeto; other metaverse platforms can improve on that too, because I don't see many platforms that have physics for the clothing itself. It would be great if the physics of the clothing could be implemented in the world space as well as in the AR camera. It would add extra immersion and fidelity to the whole experience.

Angelica: Yeah. It would also help with making those small micro interactions really fun. Let's say there's a skydiving experience that's in Zepeto and someone is jumping off of the plane and is doing their skydive.

Leah: Yeah.

Angelica: It'd be cool if the physics of the clothing reacted to, like, this virtual wind that is happening, or something like that. Or if it's a really puffy sweater, it kind of blows up because all of the air is getting stuck in it. Those are just the fun things that make people get even more immersed within the environment too.

Moving forward in creating branded experiences, having a closer relationship with Zepeto's support and development teams will be really helpful for a lot of the things the BuildIt platform restricts. And when collaborating with Zepeto and using the Zepeto plugin for Unity, we can unlock a lot of interactions that make the experience a lot deeper.

The other thing to mention here is that it'd be really great to see Zepeto integrate with other social media platforms versus just the Zepeto-specific one. We've talked a lot about how Zepeto is a really powerful platform because it combines social with the virtual experience as well. And it would just be great if, let's say there's an experience that happens in Zepeto and we're taking a photo or video we wanna post, that could all in one swoop be posted to Instagram, Twitter, Facebook and so on, instead of being stuck within the Zepeto ecosystem.

So all the cool stuff that we're making gets left within this platform, and it's not necessarily shared outside of it unless you do the repost thing. That's kind of how it works with Zepeto, but it'd be really great if all those rich features that we get with Zepeto could be extended to other platforms.

And I mean, there's already the platform fatigue of having to keep up with five or more social media platforms. So auto-captioning for Instagram would be great, or having an experience in Zepeto and then moving that over to what I wanna post on Twitter. That would just make the process so much easier.
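To picture what that one-swoop posting could look like, here's a purely hypothetical Python sketch. None of these platform clients exist; the placeholder functions stand in for whatever official publishing APIs a real integration would call:

```python
from typing import Callable

# Hypothetical publisher registry: each entry "posts" one capture to a platform.
# A real integration would wrap each platform's official publishing API instead.
publishers: dict[str, Callable[[str, str], str]] = {
    "instagram": lambda media, caption: f"instagram:{caption}",
    "twitter": lambda media, caption: f"twitter:{caption}",
    "facebook": lambda media, caption: f"facebook:{caption}",
}

def crosspost(media: str, caption: str) -> dict[str, str]:
    """Fan a single Zepeto photo or video out to every registered platform."""
    return {name: post(media, caption) for name, post in publishers.items()}
```

Calling `crosspost("pool-party.mp4", "Water gun fight!")` would then return one confirmation per platform, which is the one-tap experience described above rather than reposting to each app by hand.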

Leah: The full integration of that might take some time…

Angelica: Mm-hmm 

Leah: since there are more things to consider such as data privacy. 

Angelica: Yep. 

Leah: But we might see it coming faster in APAC, if one day the metaverse platforms are integrated into the Super Apps. Just imagine: by then it would be truly one ecosystem.

Angelica: Exactly. It'd be a really powerful way to have things all within one place. Meta has tried this "connecting what you do virtually to other social media platforms" specifically within its own ecosystem of Facebook, but it's had mixed success. There's just not as much of, "Okay, I'm posting what I'm doing in VR to Facebook." There's not as much of that traction happening as with going into Zepeto, having this experience, posting it, and people randomly showing up because of the social stuff. You could see that immediate interaction. It'd be really great to see this integration extend beyond Zepeto's own social into other social media experiences, to really expand its reach. Also particularly because of the virtual influencer aspect of things. Just imagine having this facial mocap that you do within Zepeto, and that livestream could go to Instagram, Facebook and multiple platforms at once. That would really increase the visibility of that virtual influencer and their social clout.

So we're getting towards the end. Let's go ahead and think about some concrete takeaways that the audience can implement and use within their daily lives as they're considering Zepeto, and then also, just in general, the APAC trends that we're seeing here.

Something that I think of is: gaming and social media don't have to be separate anymore. When playing online experiences, traditionally it'll be either playing Warhammer on Steam and having the voice app within there, or opening up Roblox and a Discord channel. But those are two separate platforms: one to connect and one to play. With Zepeto, it's really inspiring to think about how those interactions can be in one place. And not just voice, but the social aspect and everything that comes with that. It's really the next level of getting closer to what we imagine the metaverse can be. And Zepeto is really inspiring in that way.

Leah: Yeah. To your point about the social aspect: Zepeto is actually what we need right now. We can't expect everyone to dive directly into the virtual without connecting it with their social life in the real world. And Zepeto has the potential to bridge the gap between our social life in the physical world and the digital one.

Angelica: Yeah, Zepeto is a sleeping giant of sorts, with huge potential for a global audience. It is accessible in countries outside of the APAC region, like we mentioned, but there's just not as much buzz around it as the platform deserves. There are platforms that have tried to have the integration that Zepeto has across those three categories of virtual influencers, social media and experiences, but those other platforms just haven't succeeded at it the way Zepeto has.

So Decentraland, The Sandbox, Roblox, Fortnite, Horizon Worlds…all those platforms have tried to get this integration, but it just has not been as successful. Something also to keep in mind, and why Zepeto is just a really great platform, is that there have already been brand activations on Zepeto.

There have been concerts, with virtual representations of BTS or even Selena Gomez going into those concerts. Like what we applauded a few years ago with the Fortnite concert, Zepeto has already been within those realms. There's a Samsung activation, there's a Honda activation, and a Gucci one as well.

And those are definitely getting a lot of traction and movement with the people who are actually part of those experiences. And because it's integrated within its own social media ecosystem, with purchasable items and virtual influencers, there's just so much potential, when brands get into these spaces, for the type of impact and interaction they can have with consumers.

Leah: Yeah. The last thing we learned from this region: currently the West and the East still feel very distinct, technologically and also culturally, with some crossover happening, but not as much as we would like to see. Things like virtual influencers, technology in retail, Super Apps and increased use of digital payments have been used to deepen connections with consumers and enhance ease of use. It would be amazing to see that more widely integrated within the West.

Angelica: Yeah, exactly. There's a lot of cultural and technological crossover to Eastern countries; you know, US culture and colloquialisms always make their way around the globe. And it would be really great to see the really impactful technological and cultural innovations happening within the East make their way more holistically towards the West. Not just here or there, but the way Google has been embraced within APAC: it'd be great to see some of those APAC platforms integrated in the West like that. There's a lot that each can learn from the other and build on. It's not necessarily about distinguishing the West from the East, because we talked about that quite a bit, but about how we can globally improve experiences for consumers. And there's a lot of ways technology can empower people to have those deeper connections, and brands can also be a part of that story.

Leah: Yeah. 

Angelica: So that's a wrap! Thanks everybody for listening to the Scrap The Manual Podcast. Be sure to check out our blog post for more information, references, and also a link to our prototype. Remember to check out the Netherlands references and also the hidden fifth world within that prototype. If you like what you hear, please subscribe and share! You can find us on Spotify, Apple Podcasts and wherever you get your podcasts.

Leah: If you want to suggest topics, segment ideas, or general feedback, feel free to email us at scrapthemanual@mediamonks.com. If you want to partner with Media.Monks Labs, feel free to reach out to us at that same email address. 

Angelica: Until next time!

Leah: Bye.

Our Labs.Monks provide insight into APAC's emerging AI, AR, automation and metaverse technologies, along with a sneak peek into the prototype leveraging an upcoming tech from the region.


ComplexLand • An Immersive Virtualization of an Iconic Cultural Festival

  • Client

    Complex Networks

  • Solutions

    ExperienceInnovation SprintsRetail Concept InnovationImpactful Brand Activations

Case Study

Reimagining an icon.

ComplexCon is an institution among youth culture and style icons: a cultural mecca that brings the Complex Networks community and the hottest brands together to celebrate convergence culture. Realizing that trendsetters are increasingly just as interested in their digital identities as their physical ones, we leveraged this insight into new consumer behaviors to design ComplexLand: a free, immersive 3D digital platform featuring exclusive drops, ecommerce features, performances from top-selling artists and unique brand partnerships—the likes of Gucci, Versace and more.

Balancing accessibility with exclusive experiences.

While many virtual events try (and sometimes fail) to capture the energy of a crowded room, ComplexLand stands out as a single-player experience focused on global accessibility, community and lots of shoppable merch, a key feature enabled by Shopify's robust system. By introducing exclusive brand partnerships that make it fun for visitors to shop, the platform has become Complex Networks' second-largest source of revenue, and the ultimate example of how an authentic, entertaining experience can drive sales.

What’s more, it’s far from complex when it comes to usability. The experience is powered by WebGL, meaning attendees can reach the fully realized virtual theme park on both mobile and desktop devices—no app or download required. Part sci-fi treasure hunt and part virtual bazaar, players are free to roam the map and discover musical performances, food deliveries, celebrity panel discussions and screenings—then brag about it with others in a persistent chat room.

Our Craft

A virtual experience that makes shopping easy.

  • An avatar visits the Complex store
  • A bunch of shoppable merch is look through at the Complex store
  • Avatar in Complexland landscape with colorful mountain
  • Avatar in Complexland having a chat with another avatar

A future-proofed partnership.

Striking a meaningful connection of game mechanics and street culture while evoking the festival atmosphere, ComplexLand provides a digital space where people can shape their virtual identities and participate in compelling branded experiences. And just like culture itself, the annual event is in constant evolution. A year after the initial launch of ComplexLand in 2020, its second edition brought even more opportunities for attendees to engage with others in a multiplayer experience, like sharing drops, having one-to-one conversations and even interacting with branded non-playable characters. In its third iteration, we opened the possibility to make NFTs, which creators can use to build their communities and express their creative identity. 

Since the start of our partnership years ago, ComplexLand has grown into a profitable media and retail platform that combines commerce and entertainment. It’s the first of its kind to condense more than 70 brands into one shared virtual experience—allowing the institution to establish new partnerships with the hottest brands driving culture today. All thanks to our joint commitment to leverage the newest Web3 technologies and create a place where people can express themselves and connect with others.

An avatar in ComplexLand
A virtual pair of shoes
Press: "From the first virtual event in the metaverse in December 2020 came a franchise that the publisher now sees as a permanent addition to its events business. And it's a potentially lucrative addition at that."
Read on Digiday

Results

  • $700,000+ in sales during the 5 days of ComplexLand 1.0
  • ComplexLand 2.0’s gamified virtual shopping increased sponsorship revenue by 60%
  • Since ComplexLand’s launch, it’s brought 200+ brands to the annual event
  • 2x FWAs

  • 1x The Drum Experience Awards

Want to talk innovation? Get in touch.


Future-Proof Your Brand With The Transformation of Digital Report


2 min read

Written by
Monks

The transformation of digital accompanied by colorful shapes

Virtualization Is the New Era

Autonomous selfie drones, virtual influencers, hybrid subcultures and shoppable channels where people converse and convert… Life in digital has launched an explosion of novel consumer behaviors and new expectations for highly tailored, socially conscious experiences. The potential of emerging technologies, met with consumer-driven ingenuity, has given way to the transformation of digital and the dawn of a new era: virtualization, the new frontier for business growth.

Virtualization_Report_Cover

You're one download away from:

  • Understanding how virtualization is redefining experience, community, ownership and identity.
  • Learning experience design principles to enable more collaborative, personalized and user-driven experiences.
  • Navigating new ethical standards and a transformed privacy paradigm that will vet virtual-first brands.

This experience is best viewed on Desktop

Download Now

Enter the New Frontier for Business Growth

Virtualization is the transformation of digital: a set of new audience behaviors, cultural norms and technology paradigms resulting from 30 years of digital transformation, hyper-accelerated over the past five years. It follows previous eras of globalization and digital transformation. While digital transformation now focuses on the pipes and plumbing to connect digital touch points, virtualization concerns the experience layer on top of a brand’s digital investments. In the overlap of new consumer engagement and the possibilities of tech transformation, new business models are emerging to better support digital-native audiences and accelerate growth.

Monk Thoughts New technology and new expectations together become a new frontier for growth. And growth means new revenue, new audiences, and a new way of working.
black and white photo of Wesley ter Haar

Virtualization Is Launching New, Lasting Legacies

The traditional customer acquisition model is oversaturated and highly competitive, yet virtualization is a white space that gives early movers room to outcompete. At the same time, it's essential that we don't repeat the mistakes of the past as new ethical standards take hold. Through innovative ways of creating community, meaning and value, virtualization is a new canvas for engaging in culture, upon which brands can mint their legacies, fuel longevity and define a new era.

Brands investing in digital experiences at ComplexLand to connect with their audiences.

Begin Your Virtualization Journey Now

We’re already partnering with the world’s most valued brands to drive incredible growth through virtualization, and we can help you do the same. Connect with us to explore the emerging consumer behaviors of the virtualized era, the influences that shape them, and how you can learn from them to better connect with your audiences.

Virtualization, a new era in digital shaped by emerging consumer behaviors, is the new frontier for growth as brands mint new legacies.
