As the Auto Industry Evolves, All Roads Lead to Content


4 min read

Written by
Labs.Monks

User interacting with the Results page of an Alexa skill designed to select cars best fit for particular lifestyle needs.

After a slow 2020, car sales are kicking into high gear as consumers get moving again. S&P forecasts that global sales will expand by 8%-10% this year, with the European market driving further growth in electric vehicles. As auto manufacturers accelerate into a brighter future, the Labs.Monks—our R&D and innovation group—are exploring the evolution of the auto industry and where it’s headed next in a new report.

The report tackles key concerns for automakers: the rise of D2C and foreign challenger brands, an urgent need for customer insights and the quickly evolving definition of what it means to be an auto brand today. At the center of each concern stands an opportunity to invest and experiment with content channels that engage consumers across the brand experience—whether in the pre-purchase consideration phase or while driving the car itself.

A Shift to Content and User Experience

Gone are the days when a car’s value is staked on horsepower and mechanics alone; while those certainly remain important, consumers are increasingly focused on software updates, wireless connectivity and digital user interfaces. At the same time, a future in which autonomous vehicles become the norm is prompting brands to rethink the elements that make up an ideal user experience. For example: when a car drives itself, what’s left for passengers to engage with? “Entertainment becomes more important,” says Jamie Webber, Business Director.

Monk Thoughts: “Brands are wondering: do you have to partner with a streaming service? Do you become an entertainment brand as much as an automotive brand?” (Jamie Webber)

We’re still years away from fully autonomous cars. But “it’s a multiyear timeline, and brands want to be ready by the time cars can perform,” says Geert Eichhorn, Innovation Director. He notes how companies like Google are laying the groundwork now with platforms like Android Auto, a version of its mobile operating system designed for use in the car.

Just as the iPhone revolutionized our concept of what a mobile phone can do, digital dashboards and new user interfaces have the potential to redefine how we engage with automobiles right now—like a speedometer dial that turns red when you’re speeding. Patrick Staud—Chief Creative Technologist at STAUD STUDIOS, which joined our team this year—even envisions deep customization opportunities through content packs. “One area we like to think about is the personalization of sound design for electric engines, buttons and different functions—much like mobile ringtones,” Staud says. “Customization could go so far as downloading dials and themes into your car’s interior, which could become a huge new channel for revenue.”

Building Direct, Digital Relationships

Content channels like those mentioned above can solve a critical challenge that automakers have universally wrestled with over the years: capturing consumer data. Dealerships commonly own the relationship with consumers—they walk them through the consideration phase, understand their preferences and ultimately close the sale. Brands are now aiming to develop stronger customer relationships of their own, whether through D2C offerings or digital experiences.

Such experiences can profoundly transform brand-consumer relationships by supporting new customer behaviors and instilling confidence in the buying journey. “In the luxury automotive sector, we’ve seen a growing use of digital tools, especially by women and people of color who prefer digital tools because they find dealerships talk down to them or don’t take them as seriously,” says Daniel Goodwin, a Senior Strategist who works with auto clients. So while in-person activities like test drives remain important for many, there’s a growing demand for virtualizing the dealership experience.

An Alexa assistant asks a user whether the vehicle they may buy will be used for off-roading.

An Alexa skill prototyped by the Labs.Monks lets users easily find the right car to suit their lifestyle.

Beyond providing a more comfortable experience, direct, digital relationships can also enable greater customization, Goodwin notes. “Customization is good for both brands and consumers,” says Goodwin. “COVID-19 has changed car buying behavior, and consumers are now more willing to wait for a car to be delivered that meets their exact needs rather than pick one up from a lot on the same day.” While made-to-order cars are a staple for luxury automakers, brands like Ford are moving toward the model to support the change in buying behavior. “This also helps brands that have been suffering from chip shortages, want a more direct relationship with consumers and no longer want their cars sitting unused in car lots,” says Goodwin.

In exploring how digital platforms can help consumers find the right car for them, the Labs.Monks prototyped an Alexa-based assistant that learns users’ specific needs through a simple question-and-answer format. The assistant may ask you things like whether you need a car for your commute, or what the size of your family is. Responses are measured against a database of 2,000 cars from 42 different brands, organized using machine learning and computer vision. The assistant stands in contrast to complex search engines and nuts-and-bolts configurators.
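To make the matching step concrete, here is a minimal, hypothetical Python sketch of how lifestyle answers could be scored against a car database. The field names, scoring rules and sample cars are illustrative assumptions only; the actual Labs.Monks prototype runs as an Alexa skill against a far larger dataset organized with machine learning and computer vision.

```python
# Hypothetical sketch: score cars against a user's lifestyle answers.
# Fields, rules and sample data are invented for illustration, not the prototype's.
from dataclasses import dataclass

@dataclass
class Car:
    brand: str
    model: str
    seats: int
    off_road: bool
    electric: bool

def score(car: Car, answers: dict) -> int:
    """Count how many of the user's stated needs this car satisfies."""
    points = 0
    if answers.get("family_size", 1) <= car.seats:
        points += 1
    if answers.get("off_roading") and car.off_road:
        points += 1
    if answers.get("commute") == "city" and car.electric:
        points += 1
    return points

def recommend(cars: list[Car], answers: dict, top_n: int = 3) -> list[Car]:
    """Return the best-scoring cars for this user."""
    return sorted(cars, key=lambda car: score(car, answers), reverse=True)[:top_n]

if __name__ == "__main__":
    cars = [
        Car("Brand A", "Trail", seats=5, off_road=True, electric=False),
        Car("Brand B", "City", seats=4, off_road=False, electric=True),
    ]
    print(recommend(cars, {"family_size": 4, "commute": "city"}, top_n=1))
```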

Monk Thoughts: “It’s not a sterile kind of experience. It’s much more about the personal lifestyle that suits you.” (Geert Eichhorn)

Cultivating Online Community

As automakers reconsider the shifting definition of what it means to own a car—an identity that’s perhaps less “driver” and more “user”—there’s a growing focus on supporting owners by building community. The desire for brand community certainly isn’t new; Jeep owners have built a culture of serendipitously greeting one another on the road for decades with the infamous “Jeep wave.” More recently, Tesla has organized local chapters of its Tesla Owners Club in which owners share knowledge or build advocacy for the brand.

As brands consider how to get owners talking to one another, they might take inspiration from community-minded platforms already on the market. “Look at the Waze app,” says Webber. “I use it for its GPS function, but there are a lot of attempts it makes to prompt interactivity between drivers, whether it’s reporting police activity, road closures, traffic and more.” Brands can similarly adopt a community-oriented role using the driver data they pick up, whether through digital experiences or even on the road—in fact, Eichhorn adds that vehicle-to-vehicle communication is already being explored to increase driver safety.

Customizable digital dashboards, in-cabin entertainment, online communities—auto brands may begin to look a lot more like content brands in the future. Not only does an increased focus on content lay the foundation for the fully autonomous passenger experience; it can also help brands hold onto consumer interest in the months (or sometimes years) that customers wait for their custom configuration to be made—a growing concern given the supply chain issues and longer wait times imposed by the pandemic. But perhaps more importantly, digital content and experiences will help brands better understand consumers and their needs, with data and insights steering their business in the right direction for years to come.


From Runway to Gameplay, Fashion Goes Virtual


4 min read

Written by
Labs.Monks


Retail customers stand outside in a queue to ensure a safer shopping experience. Dressing rooms are closed. Fashion Week has gone virtual. And today’s fashion design students can’t meet in a studio to cut and sew materials. But the fashion industry isn’t in peril—it’s just taking on a new look.

This month, our research and development team MediaMonks Labs is collaborating with FLUX, our fashion and luxury team, to offer a special Labs report focused on the future of fashion. Bursting at the seams with digital innovation—from production to customer experiences—the report spares no effort to serve looks and inspired insight on the virtualization of fashion in its many forms.

Virtualization is In-Season

For a long time, fashion-forward didn’t necessarily translate to being tech-forward. But in recent years, there’s been a growing desire to shake things up and break free from the cycle of seasonal releases and endless fashion weeks around the world. Suddenly, events that had long been exclusive became available to everyone through social feeds, completely changing the way brands engage with their audiences—like video game-inspired activations.

“It’s not just about the tech changing, but also how consumer behaviors are evolving,” says Ben Lunt, Head of Experience Design, Fashion & Luxury. “Brands knew they’d have to adapt, but the time never felt right until the past year.” Thanks to the pandemic, customer-facing digital experiences are increasingly in vogue—just recently, MediaMonks partnered with Verizon Media and IMG to bring Rebecca Minkoff’s new Spring 2021 collection to fashion lovers everywhere through 3D renders.

Monk Thoughts: “It’s not just about the tech changing, but also how consumer behaviors are evolving.” (Ben Lunt)

The technology lets people inspect looks up-close from any angle, either on a desktop device or directly superimposed in their surroundings using augmented reality. Previously, Rebecca Minkoff noted that customers are 30% more likely to buy when given the chance to engage with 3D product models online.

3D Production Connects People and Experiences

While the virtualization of the consumer experience has received a lot of attention from the fashion industry, it’s also aiding efforts in design and production. In response to sustainability concerns, today’s fashion students are learning to work in CLO3D, software that lets designers design, develop and sample garments in real time—and that has also proven useful in the pandemic. The tool does more than let users design and collaborate from a safe distance—it streamlines the entire process.

In the traditional process, draping and patternmaking for each change in design can be time-consuming and wasteful. Virtualized production lets designers visualize these variations at speed, opening them up to more experimentation throughout the process. But it’s not about speeding up an already fast industry. “It’s about pinpointing parts of the process that can be streamlined in order to slow down others,” says Brandi LaCertosa, a Creative at MediaMonks. “We can create more space and time for thoughtful design and production.”

And these same assets can pull double duty by powering the kinds of touchpoints discussed above—or even inspire entirely new experiences. On the Labs team, MediaMonks Innovation Director Geert Eichhorn says: “If you switch to this digital pipeline you can make new products, like exporting designs onto video game avatars or letting users try on outfits with a digital twin.” While the digital twin idea is still some time away, it inspires some of the exciting D2C ecommerce solutions that forward-thinking brands might try out.

New Feedback Loops Transform the Industry

Accelerated production and design can transform the value chain—a linear process that moves from designing and planning to sourcing and supply, and finally the consumer experience—into more of a Venn diagram where different steps overlap. Consider if the 3D garments worn by players in a video game were the same ones used in a virtualized look book for retail buyers—but originally made during the design and production of the physical garments.

Monk Thoughts: “It’s about pinpointing parts of the process that can be streamlined in order to slow down others.” (Brandi LaCertosa)

These assets can also be used for market testing. RTFKT presents 3D designs to its audience, inviting them to vote on those that make it to physical production. Fashion brand Finesse uses AI and social listening to source data-driven designs. Using CLO3D, the brand can act on trends quickly through accelerated production. For brands that serve as tastemakers, this same data can act as a trend forecast report in the design process, helping curate which pieces of the collection to take from runway to production. “Many designers crave an understanding of the people who will wear their clothes,” says LaCertosa. “Take Virgil Abloh for example, who is extremely active on Clubhouse for exactly this reason.”

Emulating the Analogue Aesthetic

Despite the advantages of virtualization, could it all be a fad—is it the emperor’s new clothes, bound to fall out of fashion once the pandemic subsides? Lunt notes that there’s always been a tension between fashion labels—luxury ones in particular—and new technologies, largely because those brands have honed a more analogue aesthetic that can feel at odds with virtualization at first blush.

“They operate at a deep, impressionistic level,” he says. “If you look at a campaign image from Bottega Veneta, there’s a lot going on there—it touches you at a deep level, but a lot of those soft signals are analogue. Digital currently has its own aesthetic codes which can often be antithetical to luxury.” But it’s not a zero-sum game. By way of example, Lunt mentions Pixar’s painstaking efforts to emulate an analogue aesthetic in its CG films—like the split diopter lens, a unique tool that puts two objects in focus with no continuous depth of field to provoke a specific emotion in the viewer. 

Virtualized fashion also runs the risk of falling into “uncanny valley” territory, in which the slightest imperfection in an otherwise faithful reproduction can induce revulsion. “From the way the trim falls when moved to folds in the fabric, the smallest thing that looks off can trigger that response,” says Eichhorn.

But these challenges shouldn’t turn brands away from virtualization. Instead, they should prompt brands to think more carefully about where technology has the most potential to fuel creative innovation or build stronger relationships with consumers. “Luxury codes are already evolving to accommodate, appropriate, and ultimately push digital aesthetics forward,” says Lunt. “And you can still take that stylized photoshoot to capture the human element, then use 3D so consumers can see how it actually looks on them,” adds LaCertosa. “It’s about using this technology to support your brand and its aesthetic, not replace it.”


When Speed is Key, MediaMonks Labs Enables Swift, Proactive AI Prototyping


4 min read

Written by
Labs.Monks


As the COVID-19 pandemic spreads throughout the world and people retreat into their homes to practice social distancing, the need for ingenuity and digital transformation has become more apparent than ever. Always looking for ways to jump-start innovation, the MediaMonks Labs team has experimented with ways to speed up the development of machine learning-based solutions from prototype to end product, cutting out unnecessary hours of coding to iterate at speed.

“Mental fortitude and being used to curveballs are skills and ways of working that come to the foreground now,” says Geert Eichhorn, Innovation Director at MediaMonks. “We see those eager to adapt come out on top.” Proactively aiming to solve the challenges faced by brands and their everyday audiences, the team recently experimented with a faster way to build and iterate artificial intelligence-driven products and services.

Fun Experiments Can Lead to Proactive Value

The idea behind one such experiment, the Canteen Counter, may seem silly on the surface: determine when the office canteen is less busy, helping the team find the optimal time to go and grab a seat. But the technology behind it provides some learnings for those who aim to solve challenges quickly with off-the-shelf tools.

Here’s how it works. The Canteen Counter’s camera is pointed at the salad bar, capturing the walkway from the entrance to the dishwashers—the most crowded spot in the canteen. A machine learning model detects people in the frame and keeps a running count to determine when it’s busy and when it isn’t—much like how business listings on Google Maps predict peak versus off-peak hours.
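As a rough illustration of that counting step, here is a minimal sketch of a person counter. It assumes a COCO-trained SSD detection model compiled as a .tflite file for the Edge TPU, the tflite_runtime package and a Linux host; the model file name, camera index and output tensor order are assumptions to verify against your own model, not the team’s actual setup.

```python
# Minimal sketch in the spirit of the Canteen Counter: count people per frame.
# Assumes a quantized, Edge TPU-compiled SSD model with COCO labels.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL_PATH = "ssd_mobilenet_edgetpu.tflite"  # hypothetical file name
PERSON_CLASS_ID = 0                          # "person" in common COCO label maps

interpreter = tflite.Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]["shape"]

def count_people(frame_bgr: np.ndarray, threshold: float = 0.5) -> int:
    """Run one detection pass and count 'person' boxes above the threshold."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (width, height))
    interpreter.set_tensor(input_details[0]["index"], resized[np.newaxis, ...])
    interpreter.invoke()
    # Typical SSD output order: boxes, classes, scores, count (verify per model).
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    return sum(1 for c, s in zip(classes, scores)
               if s > threshold and int(c) == PERSON_CLASS_ID)

camera = cv2.VideoCapture(0)  # the camera watching the walkway
ok, frame = camera.read()
if ok:
    print(f"People in frame: {count_people(frame)}")
camera.release()
```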


Of course, now that the team is working from home, there’s little need to keep an eye on the canteen. But one could imagine a similar tool to determine in real time which spaces are safe for social distancing, measured from afar. Is the local park empty enough for some fresh air and exercise? Is the grocery store packed? Ask the AI before you leave!

“I would like to make something that is helpful to people being affected by COVID-19 next,” says Luis Guajardo, Creative Technologist at MediaMonks. “I think that would be an interesting spinoff of this project.” The sentiment shows how such experiments, when executed at speed, can provide necessary solutions to new problems soon after they arise.

Off-the-Shelf Tools Help Teams Plug In, Play and Apply New Learnings

Our Canteen Counter is powered by Google’s Coral, a board that runs optimized TensorFlow models using an Edge TPU chip. To get the jargon out of the way: it essentially lets you run machine learning offline—a process that typically connects to a cloud, which is why you need a data connection to interact with most digital assistants. The TPU (tensor processing unit) chip is built to run neural network inference directly on the hardware.

This not only allows for faster processing, but also increased privacy, because data isn’t shared with anyone. Developers can simply take an existing, off-the-shelf machine learning model and quickly optimize it for the hardware and the goals of a project. While the steps behind this process are simpler than training a model of your own, there’s still some expertise required in discovering which model best suits your needs—a point made clear with another tool built by Labs that compares computer vision models and the differences between them.

Monk Thoughts: “What is a canteen counter today could become a camera that tells you something about your posture tomorrow. Anything goes, and it changes by the day.” (Geert Eichhorn)

What the team really likes about Coral is how flexible it is thanks to the TPU chip, which comes in several different boards and modules to easily plug and play. “That means you could use the Coral Board to build initial product prototypes, test models and peripherals, then move into production using only the TPU modules based on your own product specs and electronics and create a robust hardware AI solution,” says Guajardo.

Quicken the Pace of Development to Stay Ahead of Challenges

For the Labs team, tools like Coral have quickened the pace of experimentation and developing new solutions. “The off-the-shelf ML models combined with the Coral board and some creativity can let you build practical solutions in a matter of days,” says Eichhorn. “If it’s not a viable solution you’ll find out as soon as possible, which prevents you from wasting any valuable time and resources.” Eichhorn compares this process to X (formerly Google X), where ideas are broken down as fast as possible to stress test viability.

“At Labs, we jump on new technologies and apply them in new creative ways to solve problems we didn’t know we had, so any project or platform that has as much flexibility as the Canteen Counter is very much up Labs’ alley,” says Eichhorn. “What is a canteen counter today could become a camera that tells you something about your posture tomorrow. Anything goes, and it changes by the day.” He notes that more is being worked on behind the scenes as the team ponders the trend toward livestreaming, the need for showing solidarity, play and interaction while working from home.

It’s worth reflecting on how dramatically the world has changed since we settled on the idea to keep an eye on our workplace canteen through a fun, machine learning experiment. But Eichhorn cautions that in a rush for much-needed solutions, “innovation” can often begin to feel like a buzzword. “What we do differently is that we can actually build, be practical, execute, and make it work.”


MM Labs Uncovers the Biases of Image Recognition


4 min read

Written by
Labs.Monks


Do you see what I see? The game “I spy” is an excellent exercise in perception, where players take turns guessing an object that someone in the group has noticed. And much like how one player’s focus might be on an object totally unnoticed by another, artificial intelligences can also notice entirely different things in a single photo. Hoping to see through the eyes of AI, MediaMonks Labs developed a tool that pits leading image recognition services against one another to compare what they each see in the same image—try it here.

Image recognition is when an AI is trained to identify or draw conclusions about what an image depicts. Some image recognition software tries to identify everything in a photo, the way a phone automatically organizes photos without the user having to tag them manually. Others are more specialized, like facial recognition software trained to recognize not just a face, but perhaps even the person’s identity.

This sort of technology gives your brand eyes, enabling it to react contextually to the environment around the user. Whether it be identifying possible health issues before a doctor’s visit or identifying different plant species, image recognition is a powerful tool that further blurs the boundary between user and machine. “In the market, convenience is important,” says Geert Eichhorn, Innovation Director at MediaMonks. “If it’s easier, people are willing to pick it up and try. This has the potential to be that simple, because you only need to point your phone and press a button.”

Monk Thoughts: “With image recognition, your product on the store shelf or in the world can become triggers for compelling experiences.” (Geert Eichhorn)

You could even transform any branded object into a scavenger hunt. “What Pokemon Go did for GPS locations, this can do for any object,” says Eichhorn. “Your product on the store shelf or in the world can become triggers for compelling experiences.”

Uncovering the Bias in AI

For a technology that’s so simple to use, it’s easy to forget the mechanics of how image recognition actually works. Unfortunately, this leads to an unequal experience among users that can have very powerful implications: most facial recognition algorithms still struggle to recognize the faces of Black people compared to white people, for example.

Why does this happen? Image recognition models can only identify what they’re trained to see. How should an AI know the difference between dog breeds if the breeds were never identified to it? Just as humans draw conclusions based on their experiences, image recognition models will each interpret the same image in different ways based on their data set. The concern around this kind of bias is twofold.

First, there’s the aforementioned concern that it can provide an unequal experience for users, particularly when it comes to facial recognition. Developers must ensure they power their experience with a model capable of recognizing a diverse audience.

A side-by-side comparison of what Google Cloud Vision and Amazon Rekognition detect in the same event photo.

As we see in the image above, Google is looking for contextual things in the event photo, while Amazon is very sure that there is a person there.

Second, brands and developers must carefully consider which model best supports their use case; an app that provides a dish’s calorie count by snapping a photo won’t be very useful if it can’t differentiate between different types of food. “If we have an idea or our client wants to detect something, we have to look at which technology to use—is one service better at detecting this, or do we make our own?” says Eichhorn.

Seeing Where AI Doesn’t See Eye-to-Eye

Machine learning technology functions within a black box, and it’s anyone’s guess which model is best at detecting what’s in an image. As technologists, our MediaMonks Labs team isn’t content to make assumptions, so they built a tool that offers a glimpse at what several of the major image recognition services see when they view the same image, side-by-side. “The goal for this is discovering bias in image recognition services and to understand them better,” says Eichhorn. “It also shows the potential of what you could achieve, given the amount of data you can extract from an image.”

Here’s how it works. The tool lists out the objects and actions detected by Google Cloud Vision, Amazon Rekognition and Baidu AI, along with each AI’s confidence in what it sees. By toying around with the tool, users may observe differences in what each model responds to—or doesn’t. For example, Google Cloud Vision might focus more on contextual details, like what’s happening in a photo, where Amazon Rekognition is focused more on people and things.
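For a sense of the plumbing behind such a comparison, here is a hedged sketch that requests labels for the same image from two of the services, using the standard google-cloud-vision and boto3 client libraries. It leaves out Baidu AI and assumes credentials for both services are already configured; the file name in the example is arbitrary.

```python
# Sketch of the core comparison: send one image to two recognition services
# and list each service's labels with its confidence score.
import boto3
from google.cloud import vision

def google_labels(image_bytes: bytes) -> list[tuple[str, float]]:
    """Labels and scores (0-1) from Google Cloud Vision."""
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    return [(label.description, label.score) for label in response.label_annotations]

def amazon_labels(image_bytes: bytes) -> list[tuple[str, float]]:
    """Labels and scores (normalized to 0-1) from Amazon Rekognition."""
    client = boto3.client("rekognition")
    response = client.detect_labels(Image={"Bytes": image_bytes}, MaxLabels=20)
    return [(label["Name"], label["Confidence"] / 100) for label in response["Labels"]]

if __name__ == "__main__":
    with open("event_photo.jpg", "rb") as f:  # any test image
        data = f.read()
    for service, labels in [("Google Cloud Vision", google_labels(data)),
                            ("Amazon Rekognition", amazon_labels(data))]:
        print(service)
        for name, score in labels:
            print(f"  {name}: {score:.2f}")
```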

Monk Thoughts: “With this tool, we want to pull back the curtain to show people how this technology works.” (Geert Eichhorn)

This also showcases some of the variety of things that can be recognized by the software, and each can have exciting creative implications: the color content of a user’s surroundings, for example, might function as a mood trigger. We collaborated with DDB and airline Lufthansa to build a Cloud Vision-powered web app that recommends a travel destination based on the user’s photographed surroundings. For example, a photo of a burger might return a recommendation to try healthier food at one of Bangkok’s floating markets.
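The app’s actual logic isn’t public, but a toy sketch of the idea might map detected labels to destination suggestions. The themes, destinations and matching rule below are invented for illustration; the labels would come from an image recognition service like the ones compared above.

```python
# Hypothetical mapping from image labels to a travel suggestion, illustrating
# the kind of logic a Cloud Vision-powered recommendation app could use.
THEME_DESTINATIONS = {
    "food": "Try healthier street food at one of Bangkok's floating markets.",
    "beach": "Trade the crowds for a quieter stretch of coastline.",
    "skyscraper": "See a different skyline from Tokyo's observation decks.",
}

def recommend_destination(labels: list[tuple[str, float]]) -> str:
    """Pick the suggestion whose theme matches the highest-confidence label."""
    for name, _score in sorted(labels, key=lambda item: item[1], reverse=True):
        for theme, suggestion in THEME_DESTINATIONS.items():
            if theme in name.lower():
                return suggestion
    return "No themed match found; fall back to a default destination."

# Example: labels as returned for a photo of a burger.
print(recommend_destination([("Burger", 0.93), ("Fast food", 0.88)]))
```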

The Lufthansa project is interesting to think about in the context of this tool, because expanding it to the Chinese market required switching the image recognition from Cloud Vision to something else, as Google products aren’t available in the country. This gave the team the opportunity to look into other services like Baidu and AliYun, prompting them to test each for accuracy and response time. It showcases in very real terms why and how a brand would make use of such a comparison tool.

“Not everyone can be like Google or Apple, who can train their systems based on the volume of photos users upload to their services every day,” says Eichhorn. “With this tool, we want to pull back the curtain to show people how this technology works.” With a better understanding of how machine learning is trained, brands can better envision the innovative new experiences they aim to bring to life with image recognition.

