A Philips blender with green juice in it
A Philips vacuum being used on a living room floor
A family having fun in a living room with a Philips fan on display

Amazon Creative Optimization • Unlocking the Power of Performance

  • Client

    Philips Domestic Appliances [Versuni]

  • Solutions

    Artificial Intelligence • Commerce • Media • Media Analytics • Data • Performance Media

An iMac showing different amazon ads around a web browser

Blended skills unlock campaign performance data.

Longtime Amazon advertiser Philips Domestic Appliances [Versuni] needed to integrate creative optimization with media performance on a large scale. This kind of operation requires deep expertise in machine learning, artificial intelligence, creative production and optimization—a tall order, but no sweat for us. So Philips DA enlisted our help, resulting in the development of an automated tool that identifies key creative elements that drive performance at scale, backed by campaign performance data.

Amazon tech logos

Building on the best in machine learning and artificial intelligence.

Amazon’s robust computer vision and machine learning-powered solutions, combined with our proficiency in constructing automated workflows, meant we had everything we needed to turn Philips DA’s opportunity into reality. We began by demystifying Amazon Media Cockpit data, giving the media team access to enriched user data to assess media performance. Next, we leveraged our deep understanding of AWS APIs to extract labels, metadata and documentation. This set the foundation for a streamlined creative categorization and labelling process while also opening the possibility of integrating metadata as filters in the brand’s digital asset management (DAM) tool.
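The categorization step described above can be pictured as a simple mapping from raw computer-vision labels to creative attributes. Here is a minimal sketch in Python; the label names, attribute categories and 0.80 confidence cutoff are illustrative assumptions, not Philips DA's actual taxonomy or pipeline:

```python
# Hypothetical sketch: turning raw vision labels into creative attributes
# suitable for filtering in a DAM tool. The taxonomy below is invented.

CATEGORY_MAP = {
    "Blender": "product",
    "Juice": "product_in_use",
    "Person": "people",
    "Living Room": "setting",
    "Text": "overlay_copy",
}

def to_attributes(labels, min_confidence=0.80):
    """Keep labels above the confidence threshold and group them into
    creative-attribute categories; unknown labels fall into 'other'."""
    attrs = {}
    for label in labels:
        if label["confidence"] < min_confidence:
            continue
        category = CATEGORY_MAP.get(label["name"], "other")
        attrs.setdefault(category, []).append(label["name"])
    return attrs

detected = [
    {"name": "Blender", "confidence": 0.97},
    {"name": "Juice", "confidence": 0.91},
    {"name": "Dog", "confidence": 0.55},  # below threshold, dropped
]
print(to_attributes(detected))
```

Structured attributes like these are what make it possible to join creative elements against media performance data, and to surface them as filters in a DAM tool.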

In partnership with

  • Philips Domestic Appliances [Versuni]
Client Words

“In starting this project, optimizing creative at scale for our Amazon ads was a major challenge. Working in partnership with Monks, we leveraged machine learning and AI, which played a pivotal role in creating a scalable, data-driven solution that can adapt and grow with our ever-changing needs.”
Wiktoria Malicka

Global Amazon Media Lead, Versuni

A custom, automated tool enables creative optimization at scale.

After laying the groundwork, we custom-built a machine learning-powered tool fully tailored and automated for Philips DA’s specific business case. Upon completion of the tool, Philips DA gained a newfound capability to assess the creative aspects of their Amazon advertisements in conjunction with media performance data in a scalable, well-structured method. This breakthrough enabled the identification of numerous opportunities for optimizing ad creatives, while eliminating the need for speculative optimization and manual data engineering work. As a result, the company gained enhanced technical, analytical, operational and financial efficiencies.

Results

  • First structured analysis of ad creative elements within the organization
  • 700+ ad creative attributes extracted automatically
  • 200+ hours saved, with automation 600x more efficient than manual tagging
  • De-siloed alignment across data, media and content teams

Want to talk media? Get in touch.

Hey 👋

Please fill out the following quick questions so our team can get in touch with you.

Can’t get enough? Here is some related work for you!

A woman with her hair swaying

AI Solutions

Stop Automating & Start Orchestrating Real-Time Relevance

We move your business beyond manual workloads to AI-native strategy, orchestration, and execution at scale for maximum relevance and ROI.

AI transformation for marketing workflows and hyper-personalization.

AI is a platform shift on the scale of the internet, forcing every business to rethink how they go to market. Seizing this opportunity means fundamentally transforming your marketing operations, going far beyond automating old processes to deliver unprecedented relevance and value.

Navigating this shift is a monumental challenge. We act as your change agent, helping you harness this shift while de-risking the transformation. Our approach turns your rigid marketing supply chain into a fluid, real-time engine for growth, applying our unique experience from the front lines of the AI revolution so your business doesn't just survive the shift, but thrives in it.

Solutions

Explore our solutions driving the new era of AI-first marketing.

    • Monks.Flow

      Monks.Flow logo in black with a flowing circle evolving in colors

      Our AI-powered professional managed service is built for enterprise complexity, deploying specialized agents across your complete marketing ecosystem. These agents handle tasks from strategy and creative to adaptation and performance, transforming your team from operators into orchestrators. Monks.Flow is highly extensible, integrating with any existing tool or workflow to deliver autonomous execution and measurable results without vendor lock-in.

      Learn more
    • AI Campaigns

      Google Pixel 9 Pro phone

      Our AI Campaigns solution transforms the rigid content supply chain into a fluid, intelligent ecosystem. We move your brand beyond manual workflows to a system that delivers higher quality, more culturally relevant, and performance-optimized creative at unprecedented speed. By embedding intelligence at every stage, from insights to delivery, we help you get exponentially more from your creative investment while cutting production costs.

    • Search Experience Solutions

      Close-up of a person using a dark smartphone for mobile search, illustrating the importance of a mobile-first SEO strategy and user experience.

      The way brands are discovered is undergoing a massive transformation as search becomes an AI-powered conversational experience. To thrive, your brand must be understood, trusted and recommended by the AI models that now guide users. We combine traditional SEO with Answer Engine Optimization (AEO) to make your brand an authoritative source, ensuring you capture high-intent traffic and dominate the new era of discovery.

      Learn more
    • AI Consulting

      Employees around a table consulting with each other

      True transformation requires more than technology; it requires a strategic partner who can align people, processes and culture for an AI-native future. Our global consulting practice drives purposeful, AI-powered change for the Fortune 100. We provide expert guidance and actionable roadmaps for everything from AI readiness to building and scaling human-centric technology solutions that move your business forward.

Brands we work with:

Check out more work

Connect

Reinvent your marketing pipeline with AI and Monks.

Our commitment to responsible innovation.

Our commitment to responsible innovation is publicly detailed in our Ethical Marketing Policy, while our internal operations are guided by the Global AI Policy for all Monks. AI's transformative power demands an equally powerful commitment to responsible governance. Our approach to AI governance is a living framework, designed to ensure our ethical standards evolve and lead the way as the technology advances. This framework is built on a foundation of trust, safety, and human oversight.

  • Human-in-the-Loop Governance. At every stage, our human creative and strategic teams act as the essential control layer, operating within a rigorous governance model anchored by the cross-functional AI Core team (Legal, Data Privacy, and Information Security). We maintain a clear review process with gates for legal clearance and brand safety, ensuring that AI serves as a tool to augment—not replace—the informed, ethical judgment of our experts.
  • Ethical Data & Sourcing. We recognize the risks of models trained on unvetted internet data. We strongly favor tools that use proprietary, transparently sourced and legally permissible datasets. Furthermore, we apply a rigorous Vendor Security Assessment (VSA) process to demand contractual assurances (including NDAs and DPAs) from our technology partners, protecting our clients from copyright and data privacy risks.
  • Active Bias Mitigation. We proactively address the biases that AI models can inherit from their training data, aligning with our broader Diversity, Equity & Inclusion (DEI) commitments. Our teams undergo mandatory trainings and actively work to identify and correct stereotypical or inequitable representations in AI-generated content. This commitment is reflected in various initiatives, ensuring the work we produce authentically reflects the diverse audiences our clients serve.

Learn how we’re helping brands lead in the AI era.

25 Minutes of AI logo with a colorful gradient background

Events

Stay Ahead. Own the AI Curve in 25 Minutes.

Your monthly, hyper-focused dose of AI innovation, inspiration, and essential information. We deliver the strategic insights you need to keep your organization ahead of the industry curve—no time wasted.

Learn more

Want to talk artificial intelligence? Get in touch.

Hey 👋

Please fill out the following quick questions so our team can get in touch with you.

More on AI

For Creatives and AI, It Takes Two to Tango

4 min read

Written by
Labs.Monks

Chances are, you’ve seen the meme before: “I forced a bot to watch over 1,000 hours of [TV show] and then asked it to write an episode of its own. Here is the first page,” followed by a nonsensical script. These memes are funny and quirky for their surreal and unintelligible output, but in the past couple of years, AI has improved to create some incredible work, like OpenAI’s language model that can write text and answer reading comprehension questions.

AI has picked up a handful of creative talents: making original music in the style of famous artists or turning your selfie into a classical portrait, to name a few. While these experiments are very impressive, they’re often toy examples designed to demonstrate how well (or poorly) an artificial intelligence stacks up to human creativity. They’re fun, but not very practical for day-to-day use by creatives. This led our R&D team, MediaMonks Labs, to consider how tools like these would actually function within a MediaMonks project.

This question fueled two years of experimentation and neural network training for the Labs team. The result is a series of machine learning-enhanced music video animations demonstrating true creative symbiosis between humans and machines, in which a 3D human figure performs a dance developed entirely by (or in collaboration with) artificial intelligence.

The Simulation Series was built out of a desire to let humans take a more active approach to working creatively with AI, controlling the output by either stitching together AI-created dance moves or by shooting and editing the digital performance to their liking. This means you don’t have to be a pro at animation (or choreography) to make an impressive video; simply let the machine render a series of dance clips based on an audio track and edit the output to your liking.

“Once I had the animations I liked, I could put it in Unity and could shoot them from the camera angles that I wanted, or rapidly change the entire art direction,” says Samuel Snider-Held. A Creative Technologist at MediaMonks, he led the development of the machine learning agent. “That was when I felt like all these ideas were coming together, that you can use the machine learning agent to try out a lot of different dances over and over and then have a lot of control over the final output.” Snider-Held says that it takes about an hour for the agent to generate 20 different dances—far outpacing the amount of time that it would take for a human to design and render the same volume.

Snider-Held isn’t an animator, but his tool gives anyone the opportunity to organically create, shoot and edit their own unique video with nothing but a source song and Unity. He jokes when he says: “I spent two years researching the best machine learning approaches geared towards animation. If I spent two years to learn animation instead, would I be at the same level?” It’s tough to say, though Snider-Held and the Labs team have accomplished much over those two years of exhaustive, iterative development—from filling virtual landscapes with AI-designed vegetation to more rudimentary forms of AI-generated dances in pursuit of human-machine collaboration.

Enhancing Creative Intelligence with Artificial Intelligence

Even though the tool fulfills the role of an animator, the AI isn’t meant to replace anyone—rather, it aims to augment creatives’ abilities and enable them to do their work even better, much like how Adobe Creative Cloud eases the creative process of designing and image editing. Creative machines help us think and explore vast creative possibilities in shorter amounts of time.

It’s within this process of developing the nuts and bolts that AI can be most helpful, laying a groundwork that provides creatives a series of options to refine and perfect. “We want to focus on the intermediate step where the neural network isn’t doing the whole thing in one go,” Snider-Held says. “We want the composition and blocking, and then we can stylize it how we want.”

Monk Thoughts The tool’s glitchy aesthetic sells the ‘otherness’ to it. It doesn’t just enhance your productivity, it can enhance the limits of your imagination.
Samuel Snider-Held headshot

It’s easy to see how AI’s ability to generate a high volume of work could help a team take on projects that otherwise didn’t seem feasible at cost and scale—like generating a massive amount of hand-drawn illustrations in a short turnaround. But when it comes to neural network-enhanced creativity, Snider-Held is more excited about exploring an entirely new creative genre that perhaps couldn’t exist without machines.

“It’s like a reverse Turing test,” he says, referencing the famous test by computer scientist Alan Turing in which an interrogator must guess whether their conversation partner is human or machine. “The tool’s glitchy aesthetic sells the ‘otherness’ to it. It doesn’t just enhance your productivity, it can enhance the limits of your imagination. With AI, we can create new aesthetics that you couldn’t create otherwise, and paired with a really experimental client, we can do amazing things.”

Google’s Nsynth Super is a good example of how machine learning can be used to offer something creatively unprecedented: the synthesizer combines source sounds together into entirely new ones that humans have never heard before. Likewise, artificial intelligence tools like automatically rendering an AI-choreographed dance can unlock surreal, new creative possibilities that a traditional director or animator likely wouldn’t have envisioned.

In the spirit of collaboration, it will be interesting to see what humans and machines create together in the near and distant future—and how it will further transform the ways that creative teams will function. But for now, we’ll enjoy seeing humans and their AI collaborators dance virtually in simpatico.


Looking Back at 2019 and the Dawn of a New Era

4 min read

Written by
Monks

The decade is drawing quickly to a close, and it’s been a wild ride. From new technologies to new members of our family (we welcomed BizTech, IMA, Firewood Marketing and WhiteBalance this year), 2019 presented us with a lot of thrilling changes—and some exciting opportunities as we enter a new era. Looking back, we polled managing directors from our offices around the world for their favorite trends and technologies that have emerged in the past year—and what they’re looking forward to next.

Extended Reality Gets Real

Interest in mixed and extended reality (the combination of real and virtual objects or environments, like augmented or virtual reality, enabled by mobile or wearable devices) has been growing. At the same time, mixed reality has made strides in maturity over the past year, like Google’s efforts in making virtual objects feel truly anchored to the environment with occlusion, in which virtual objects are responsive to their surrounding environment—for example, disappearing behind real-world objects.

For Martin Verdult, Managing Director at MediaMonks London, extended reality is among the innovations he’s become most excited about going into 2020, and not just for the entertainment potential: “Virtual and augmented reality will become increasingly prevalent for training and simulation, as well as offering new ways to interact with customers.” For example, our Spacebuzz virtual reality experience gives children a unique look at the earth and environment they may typically take for granted, using the power of immersive tech to leave an indelible mark.

Monk Thoughts Value comes from connecting an IP to a brand through a deeply engaging hyper reality experience.

As the technology that powers extended reality matures, so will its potential use cases. But when a technology is still evolving significantly in short time, it can be difficult for brands to translate their ideas or goals into clear, value-added extended reality experiences. “We have introduced creative sprints for our core clients to get these ideas in a free flow,” says Verdult.

Among Verdult’s favorite examples of augmented reality projects MediaMonks has worked on this year is Unilever’s Little Brush Big Brush, which uses whimsical, virtual animal masks to teach children proper brushing habits and turn a chore into playtime. Similarly, extended reality can bring products to life in an engaging way—or if used in a customer’s research phase, it can help customers interact with a product with minimal (or no) dedicated retail shelf space.

A still from the Little Brush Big Brush case video

Part of the Little Brush Big Brush’s charm is that it extends beyond simply AR, connecting to a web cartoon series and a Facebook Messenger chatbot to reward kids with stickers at key milestones. “Value comes from connecting an IP to a brand through a deeply engaging hyper reality experience,” says Olivier Koelemij, Managing Director at MediaMonks LA. “One that only a well-executed integrated production can offer, combining digital and physical in new and extraordinary ways.”

AI/Machine Learning Grows Up

One can’t reflect on past innovations and look to the future without mentioning artificial intelligence and machine learning. From programmatic delivery to enabling entirely new creative experiences—like matured extended reality powered by computer vision—to connecting cohesive experiences across the Internet of Things, artificial intelligence “will change our interaction with technology in ways we can’t imagine yet,” says Sander van der Vegte, Head of MediaMonks Labs, our research and development team that continually experiments with innovation.

The most creatively inspiring uses of AI are the ones that will help us understand the world and our fellow humans. In collaboration with Charité, for example, we programmed a 3D printer to exhibit common symptoms of Parkinson’s disease and its effect on motor skills. The result is a series of surreal art-objects that make real patients’ experiences tangible for the general population.

A group shot of the 3D-printed art objects

Social Content and Activations Build Impact

Ask Sicco Wegerif (Managing Director at MediaMonks Amsterdam) what struck him this year, and he’ll tell you it’s the elevation of social content in purchasing—for example, how Instagram made influencer posts shoppable early this year. Wegerif notes that about a quarter of consumers have made a purchase on social media, signaling new opportunities for brands to build connections with consumers.

“Looking at this from an integrated and smart production perspective, we can help brands create so many assets and storylines that tap into this trend, especially when combining this with data so we can be super personal and relevant.” When social media is prioritized early in the creative and planning process, it can enable more meaningful experiences.

For example, our “People are the Places” activation for Aeromexico used Facebook content to transform the way users discover destinations around the world. Instead of researching and booking a city, users get to learn about people around the world—then purchase a ticket to where they call home. The social content enriches the experience and builds emotion into the experience. “It’s in essence a very simple thought that can change the whole CX,” says Wegerif.

Social Activations and Digital Experiences Weave Together

Speaking of social media, it can become a powerful tool to build relevance and connection with experiential. Jason Prohaska, Managing Director at MediaMonks NY, says: “Experience and social work hand-in-hand as part of the digital plan for many brands, and are no longer below the priority line.” With live experiential—which elevates the role of the online audience to interact, take part in and build buzz around experiences—brands can achieve greater strategic impact in how they build connection with their consumers.

But doing so successfully requires a confluence of data, influencers, experiential storytelling and production. The future of this looks good to Prohaska. “We expect 2020 to deliver several use case scenarios at scale for brand identity that may set benchmarks for personalization, automation, customer journey optimization, efficacy, performance and engagement.”

Koelemij looks forward to stronger investment in digital and consumer understanding as brands begin to integrate experiences even further going into 2020. “With most good work, success and performance can now be better attributed to digital as we get more advanced in understanding what success looks like,” he says, “especially in how we can measure it across blended activations.”

And that’s exactly how we’d like to spend 2020: helping brands achieve their goals with data-backed, insights-driven creative across the customer decision journey. Through added capabilities thanks to companies like WhiteBalance, Firewood, BizTech and IMA joining the S4Capital family in 2019, we achieve this by greatly prioritizing and enhancing key elements of the marketing mix for daring brands—and as we reflect on the past year, we can’t wait to see what’s next.


MM Labs Uncovers the Biases of Image Recognition

4 min read

Written by
Labs.Monks

Do you see what I see? The game “I spy” is an excellent exercise in perception, where players take turns guessing an object that someone in the group has noticed. And much like how one player’s focus might be on an object totally unnoticed by another, artificial intelligences can also notice entirely different things in a single photo. Hoping to see through the eyes of AI, MediaMonks Labs developed a tool that pits leading image recognition services against one another to compare what they each see in the same image—try it here.

Image recognition is when an AI is trained to identify, or draw conclusions about, what an image depicts. Some image recognition software tries to identify everything in a photo, the way a phone automatically organizes photos without the user having to tag them manually. Other software is more specialized, like facial recognition trained to recognize not just a face, but perhaps even the person’s identity.

This sort of technology gives your brand eyes, enabling it to react contextually to the environment around the user. Whether it’s flagging possible health issues before a doctor’s visit or identifying different plant species, image recognition is a powerful tool that further blurs the boundary between user and machine. “In the market, convenience is important,” says Geert Eichhorn, Innovation Director at MediaMonks. “If it’s easier, people are willing to pick it up and try it. This has the potential to be that simple, because you only need to point your phone and press a button.”

Monk Thoughts With image recognition, your product on the store shelf or in the world can become triggers for compelling experiences.
Portrait of Geert Eichhorn

You could even transform any branded object into a scavenger hunt. “What Pokemon Go did for GPS locations, this can do for any object,” says Eichhorn. “Your product on the store shelf or in the world can become triggers for compelling experiences.”

Uncovering the Bias in AI

For a technology that’s so simple to use, it’s easy to forget the mechanics of image recognition and how it works. Unfortunately, this leads to an unequal experience among users that can have very powerful implications: most facial recognition algorithms still struggle to recognize the faces of black people compared to white ones, for example.

Why does this happen? Image recognition models can only identify what they’ve been trained to see. How should an AI know the difference between dog breeds if they were never identified to it? Just like humans draw conclusions based on their experiences, image recognition models will each interpret the same image differently based on their data sets. The concern around this kind of bias is two-fold.

First, there’s the aforementioned concern that it can provide an unequal experience for users, particularly when it comes to facial recognition. Developers must ensure they power their experience with a model capable of recognizing a diverse audience.

A side-by-side comparison of what Google Cloud Vision and Amazon Rekognition detect in the same event photo

As we see in the image above, Google is looking for contextual things in the event photo, while Amazon is very sure that there is a person there.

Second, brands and developers must carefully consider which model best supports their use case; an app that provides a dish’s calorie count by snapping a photo won’t be very useful if it can’t differentiate between different types of food. “If we have an idea or our client wants to detect something, we have to look at which technology to use—is one service better at detecting this, or do we make our own?” says Eichhorn.

Seeing Where AI Doesn’t See Eye-to-Eye

Machine learning technology functions within a black box, and it’s anyone’s guess which model is best at detecting what’s in an image. As technologists, our MediaMonks Labs team isn’t content to make assumptions, so they built a tool that offers a glimpse at what several of the major image recognition services see when they view the same image, side-by-side. “The goal for this is discovering bias in image recognition services and to understand them better,” says Eichhorn. “It also shows the potential of what you could achieve, given the amount of data you can extract from an image.”

Here’s how it works. The tool lists out the objects and actions detected by Google Cloud Vision, Amazon Rekognition and Baidu AI, along with each AI’s confidence in what it sees. By toying around with the tool, users may observe differences in what each model responds to—or doesn’t. For example, Google Cloud Vision might focus more on contextual details, like what’s happening in a photo, where Amazon Rekognition is focused more on people and things.
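The side-by-side output described above can be sketched as a small comparison across each service's label set. This is an illustrative reconstruction, not the Labs tool's actual code; the service names match the article, but the labels and confidence scores are invented:

```python
# Hypothetical sketch of the comparison the tool performs: for one image,
# contrast the labels each image recognition service returns.

def compare(results):
    """results: {service: {label: confidence}}.
    Returns (consensus, extras): labels every service agrees on, plus each
    service's labels beyond that shared consensus."""
    label_sets = {svc: set(labels) for svc, labels in results.items()}
    consensus = set.intersection(*label_sets.values())
    extras = {svc: labels - consensus for svc, labels in label_sets.items()}
    return consensus, extras

results = {
    "Google Cloud Vision": {"event": 0.92, "crowd": 0.88, "person": 0.85},
    "Amazon Rekognition": {"person": 0.99, "human": 0.99},
    "Baidu AI": {"person": 0.90, "stage": 0.76},
}
consensus, extras = compare(results)
print(consensus)  # every service spots the person; the context labels differ
```

Comparing the consensus against each service's extras is what makes differences in training data visible: here the invented Google result leans contextual ("event", "crowd") while the invented Amazon result fixates on the person.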

Monk Thoughts With this tool, we want to pull back the curtain to show people how this technology works.
Portrait of Geert Eichhorn

This also showcases the variety of things the software can recognize, each with exciting creative implications: the color content of a user’s surroundings, for example, might function as a mood trigger. We collaborated with DDB and airline Lufthansa to build a Cloud Vision-powered web app that recommends a travel destination based on the user’s photographed surroundings: a photo of a burger might return a recommendation to try healthier food at one of Bangkok’s floating markets.

The Lufthansa project is interesting to think about in the context of this tool, because expanding it to the Chinese market required switching the image recognition from Cloud Vision to something else, as Google products aren’t available in the country. This gave the team the opportunity to look into other services like Baidu and AliYun, prompting them to test each for accuracy and response time. It showcases in very real terms why and how a brand would make use of such a comparison tool.

“Not everyone can be like Google or Apple, who can train their systems based on the volume of photos users upload to their services every day,” says Eichhorn. “With this tool, we want to pull back the curtain to show people how this technology works.” With a better understanding of how machine learning is trained, brands can better envision the innovative new experiences they aim to bring to life with image recognition.


How a Creative Technologist Taught an AI to Take His Job

3 min read

Written by
Kate Richling
CMO

MediaMonks Creative Technologist Sam Snider-Held recently contributed (heavily) to the following article, which was featured on The Drum.

Feared by many and distrusted by others, machine learning may yet turn out to be a creative’s best friend, by performing the low level tasks that take up valuable time.

Ever since McCann Japan debuted robotic creative director AI-CD β in 2016, creatives have been left wondering how long they’ve got until steel-clad copywriters sweep through the boardrooms and studios of adland. Rather than being made redundant by the rise of the algorithms though, many will find themselves training their AI colleagues to do their jobs.

At the New York offices of digital production agency MediaMonks, creative technologist Sam Snider-Held is running towards, rather than away from, such a vision of the future. His most recent project saw him train a neural network to design virtual landscapes. The algorithm ‘watched’ him work on a VR landscape, and then used his example to inform future design choices.

Snider-Held suggests these experiments could culminate in a ‘surgeon’s assistant’, an entity capable of predicting a creative’s choices and presenting them with the tools they need.

Monk Thoughts I started using machine learning as a way to get what I wanted faster. I then started to think, what if I had a machine that knew what I am going to need or what I am going to do at any given point in time?

He is fairly confident that machine learning will be used in the near future for a range of low-level creative work — augmenting, rather than usurping, designers. “I think a machine doing very basic visual tasks is something we’ll see quite soon.”

His experiments come at a time when some of the biggest creative companies on the planet are pursuing creative applications of artificial intelligence (AI). Magenta, an initiative from the Google Brain team, is aimed at using machine learning in music.

Applications include Nsynth, a synthesizer that can take existing sounds and merge them to create entirely new sounds that can be used by musicians. There’s also Onsets And Frames, an application that uses neural networks to predict patterns in sound, allowing it to automatically transcribe piano recordings.

Adobe’s work in this sphere includes its AI platform Sensei and automated image editor DeepFill. “I think you will see that sort of stuff in Adobe products in the next couple of years,” says Snider-Held.

Vijay Gupta, a retail strategy director at Adobe, says the company’s AI research aims to “enhance, not replace” creative work. “By automating the routine elements of this process, creatives can gain more time to work on original concepts,” he says.

Gupta points to Launch It, an app previewed at this year’s Adobe Summit that automatically tags web content.

Monk Thoughts This is just one way AI is helping to solve problems. Far from replacing and standardizing creative output, AI will remove the barriers that stand in the way of creativity.

However, Snider-Held says new technology is always a double-edged sword. “It’s something we should be looking into on a very critical level.”

However, he says the ability to train machines could protect creatives from redundancy. He asks: “Are the machines going to get rid of the need for me, or can we use it to make me more productive?”

Gupta says the true aim of Adobe Sensei is ‘IA’ — intelligence amplification. He says: “Human intelligence and creativity will always be number one, but it can be amplified massively by AI.”

Snider-Held, who learned to create algorithms by watching YouTube videos, says the skills wielded by the next generation of creatives will soon enable them to use machine learning for day-to-day tasks. He concludes: “I think that as teenagers grow up using these tools to make content, it won’t be a ‘black box’ to them in the way it is to us. And maybe, along the way, they’ll find something totally crazy that will transform the way we do work.”

This article originally appeared on The Drum on July 23, 2018.


A Realistic Take on Voicebots

4 min read

Written by
Jason Prohaska
Managing Director


The voice technology that powers devices like Amazon’s Alexa and Google Home is the next frontier for emerging tech companies.

Facebook’s recent announcement of ParlAI only intensified the industry’s ambition to reach the ultimate goal of having meaningful conversations with computers by voice.

But let’s hold our horses; we’re not there yet.

At MediaMonks, we’re receiving increasing requests from brands eager to explore this emerging utility, and at the same time, we’re working with engineering and product teams to understand exactly what the technology can do. As it stands, we still have leaps to make, but one thing is sure: Voice-activated tech is set to get smarter, and fast.

Recently, I was speaking with a top executive at a leading global consumer products company. While watching TV, he saw an interesting ad for a product similar to his own, which prompted him to test Alexa. He asked which brand in the product category was best, and Alexa promptly responded with a list of competitors. One brand then offered to send him a sample, and another listed its best prices. This goes to show that while we may not yet be having meaningful conversations, voice-activated tech is on the rise, and so are the opportunities for brands to embrace it.

The Good, the Bad and the Promising

A recent study shows the U.S. market for voice-activated assistants has grown nearly 130 percent since 2016.

Today, Amazon Echo (Alexa) and Google Home — which differ from Apple’s Siri and Google Now in that they’re independent, stationary devices — dominate the market. Their main function is to provide a “smarter home” by calling up music, reminding you of your agenda, and even answering trivia questions.

One of the biggest benefits of voice-activated tech is that it saves time. Speaking is more natural than writing, and because you don’t have to take out your phone, it’s faster. It’s also more accessible for those who, for one reason or another, aren’t able to use keyboards or screens.

Monk Thoughts Soon unnecessary typing and tapping on a keyboard will be a memory of the distant past.

Perhaps. But this feature is still prone to error. When many people are speaking close to a device at once, it tends to have difficulty actually hearing the activation phrase. In the end, if you have to repeat your request again and again, it can be more time-consuming than just walking over to flip a switch.

There’s also the issue of privacy to consider. Burger King’s recent TV ad using “OK, Google” is a prime example of this. The ad used the wake word “OK, Google” to prompt devices to describe its burgers, but within hours of release — and hilarious edits to the Whopper Wikipedia page — the commercial was pulled. The widespread coverage of this ad highlighted the fact that voice technology is still new for many, and the idea of anyone, or anything, listening in on people is unnerving.

These issues are mere glitches, however. The biggest challenge is that although we’ve created processes that allow computers to get better at translation, voice recognition, and speech synthesis, most computers still don’t understand the meaning of language.

Monk Thoughts No AI system is good enough to understand conversational speech just yet. [It] relies on both listening to what you say and predicting what you will say next. Structured speech is still much easier to understand than unstructured conversation.

And research confirms that the average person still struggles to find value in adopting this emerging tech trend in their daily life.

Brands Should Prepare for Tomorrow, Starting Today

The list of current limitations is long. Despite these drawbacks, advances in machine learning mean that computers are getting better at recognizing what people are saying. We’re not there yet, but Zuckerberg’s ambition of AI that understands conversational speech may not be far off.

In 2011, the global voice recognition market was valued at nearly 47 billion. Six years later, that figure has more than doubled to 113 billion. Along with Facebook’s newly announced investment, there’s a rush to accelerate the transition from speech recognition to natural language processing at scale. Once this is achieved, Zuckerberg’s wish for computers to have more sophisticated conversations will become possible.

Brands can start preparing for this new frontier today. As my earlier example of Alexa demonstrates, soon more and more consumers will be turning to these products to compare options and make purchases. Brands need to anticipate this change now by integrating these devices in their ecommerce and marketing strategies. In much the same way online shopping transformed the brick and mortar retail experience, voice activation technology will take this to the next level.

Each day, the promise of meaningful conversation and results-oriented solutions provided by humans interfacing with computers is evolving. Let’s all continue to explore and contribute to these technologies as they become smarter and more meaningful…one word at a time.

This article originally appeared on VentureBeat on June 26, 2017.


Innovation Labs, the Future of Reality & Global Expansion | In Conversation with SoDA

5 min read

Written by
Kate Richling
CMO


Here, our friends at SoDA (or The Digital Society) sit down with MediaMonks Founder and COO Wesley ter Haar, who is also on the SoDA Board of Directors.

In this year’s Global Digital Outlook Study with Forrester, SoDA expanded their inquiry in the area of innovation labs to uncover some interesting findings.

SoDA: Not only are agencies continuing to launch internal labs and incubators but, more importantly, they are making direct investments into these initiatives. Do you think agencies are finally getting serious about innovation and realizing they need to invest in R&D rather than just hope to do cool work as part of regular client initiatives? How does MediaMonks approach innovation and do you invest in it outside of directly funded client projects?

Wesley ter Haar: I’ve never been a great fan of the Lab moniker. It’s a weird way to silo innovation as a play-thing instead of making it a core part of the day-to-digital-day we all live in. 

Monk Thoughts Choosing between markets is like anointing a favorite child...
black and white photo of Wesley ter Haar

…The US is our largest market and the most ambitious, APAC leads the way in user behavior, LATAM has some of the most creative visual talent I’ve seen and I’m always amazed by the creative ideas that bubble up from smaller European markets. It goes to show that constraints are never a reason to deliver mediocre work.

This article was originally published in The SoDA Report – with key findings from Forrester.

Monk Thoughts The key question anyone should ask themselves in our business is the existential one, 'Am I still relevant X months from now?'
black and white photo of Wesley ter Haar

Wesley ter Haar: That X used to be 24 to 36 months, and has probably been whittled down to 9 months with the constant change that besets consumer behavior and client adoption. At MediaMonks we hire or acquire against an internal innovation roadmap based on where we see the confluence of people, products and platforms heading. For us, that has meant the acquisitions of a VR-first production company and a connected commerce company, the launch of a digital first content company and a hiring spree to bolster our AR capabilities. So, yes, innovation is critical to the health of our business, but I don’t believe a ‘Lab’ is the way to make it central to who we are and what we do.

SoDA: MediaMonks is hired by client-side marketers (and agencies) to deliver cutting edge work. This year we found that Chatbots/Conversational Interfaces, AI/Machine Learning and Programmatic Advertising topped the rankings for anticipated impact and planned investment in emerging technology. Agency leaders and marketers were generally aligned on this front with one major exception… Virtual and Augmented Reality. Marketers are planning to make significant investments in VR/AR while agency leaders are lukewarm on the short-term marketing impact. Why do you think there’s such a big gap between marketers and agencies on this front?

Wesley ter Haar: I think this gap mostly represents the excitement for the future state of “The R’s” (augmented reality, virtual reality and mixed reality) relative to the technical maturity of current platforms and production processes.

Monk Thoughts In fact, VR/AR currently have an 'r-problem' of their own and Reach, Results and ROI will be narrow until there is full native OS support on mobile devices, some level of convergence on distribution platforms, standard industry specifications and clear metrics.
black and white photo of Wesley ter Haar

From the SoDA perspective, this shows a mature agency landscape with many agency leaders trying to think from a client perspective, focus on value/budgets and look at reasonable metrics. As agency leaders, we have to make sure we educate clients on the now while planning for the future so we don’t miss the boat when the scale and spread of these technologies start impacting brands, business and bottom lines.

SoDA: For many years, SoDA has tracked what marketers value most in their agency relationships and, on the flip side, what agencies think their clients value most in their partnership with them. “Expertise in Emerging Tech/Trends” and “Process/Project Management” are consistently rated in the Top 5 by both agency leaders and client-side marketers. How does MediaMonks balance the importance of project management rigor with the desire for clients to explore (and quickly deliver on) the latest technologies? Is there a healthy tension between these two factors?

Wesley ter Haar: There will always be tension between doing difficult things for the first time and delivering difficult things for the first time, on time. Our role is to explain risk, mitigate against it as best we can, and make the “fall down and get back up” process of research, innovation and iteration one that is transparent to clients.

Monk Thoughts A company like ours is built on saying 'YES' because we believe we can solve the ample caveats that emerging tech trends bring to the table...
black and white photo of Wesley ter Haar

…But, the level of comfort on the client side is going to rely on the quality of the process and the project management rigor around it. In the same way that ad agencies are not artists, digital agencies shouldn’t hide behind labs and a “Crazy Scientist” vibe when it comes to new technology and trends. It’s all about the practical application for clients and their (potential) customers.

SoDA: This year we asked agency leaders to identify strategic factors they saw as most critical to their ongoing growth and evolution. Not surprisingly, “Attracting and retaining top talent” and “Developing new services / capabilities” topped the list. Interestingly, very few looked at “Expanding to new markets/geographies” as an important part of their strategic plans. MediaMonks appears to be quite the opposite with offices now in Amsterdam, London, Stockholm, New York, LA, Dubai, Sao Paulo, Buenos Aires and Singapore. How do you approach geographic expansion and why has it been so central to your growth strategy? What challenges have you wrestled with in managing the business across such a broad geographic footprint? What do you see as the most exciting new markets?

Wesley ter Haar: To start with the reasons, it gives us the opportunity to recruit and retain talent at a much larger scale, and in turn helps us cater to the ambition many of our Monks harbor when it comes to working in other countries and cultures. 

Monk Thoughts For clients, it means we can offer global scale and local relevance. So much of the work we do needs to be created and trans-created across regions and there is a clear efficiency in cost, quality and project control when we run that via our footprint.
black and white photo of Wesley ter Haar

We run all offices as a single P&L which sounds like an admin choice, but is a critical cultural component. We are one company operating across 9 countries and 10 offices, with teams and talent working across time zones. Budgeting, resourcing and planning needs to be seamless to make that work, and that’s been the operational focus from Day 1.

 


Rise of the Machines — the Next Big Revolution in Tech

4 min read

Written by
Michiel Brinkers
Technical Director


Machine learning’s potential to reveal hidden information in data is amazing and it’s already a part of our daily lives.

However, as MediaMonks’ Technical Director Michiel Brinkers reflects in this post, its possibilities are still largely unexplored. Here he gives his view on what machine learning can mean for creative digital production going forward and the challenges that need to be overcome to get there.

They’re coming for us, or are they?

For those unfamiliar with machine learning (ML), it’s a type of algorithm which learns from patterns in data and then, based on what it learned, can recognise and predict similar patterns in new data. It’s an application of artificial intelligence.
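That learn-then-predict loop can be illustrated with a toy sketch (a hypothetical nearest-neighbour classifier written from scratch, not any specific production system): the "model" simply stores labelled examples, then classifies new data by similarity to the patterns it has seen.

```python
# Minimal 1-nearest-neighbour classifier: "learn" by storing labelled
# examples, then predict the label of new data by pattern similarity.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(training_data, new_point):
    # Return the label of the closest known example.
    _, label = min(training_data, key=lambda pair: distance(pair[0], new_point))
    return label

# Labelled patterns the model has "seen": (features, label)
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((8.0, 8.0), "dog"),
    ((7.5, 8.2), "dog"),
]

print(predict(training_data, (1.1, 1.0)))  # near the "cat" cluster
print(predict(training_data, (7.9, 7.8)))  # near the "dog" cluster
```

Real systems replace the stored examples with learned parameters and far richer features, but the shape of the process is the same.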

The technology is evolving at a phenomenal pace and every few weeks a new API, paper or prototype is released, continually raising the bar. Currently, the bulk of the research in ML is being developed by the big tech companies such as Google, IBM, Amazon and Microsoft. All have been researching ML for years and recently Facebook has also been making enormous strides.

A lot of work is also being done by universities, independent research groups and individual developers. And through the collaboration of Partnership on AI, as well as open source projects, emerging tech companies are working alongside all varieties of engineers and creative technologists to advance the field.

So, what’s stopping us?

The first challenge we face is that models, which contain all the information needed to classify data, are difficult to create. It takes thousands of classified input data points to create a model, and unless a client already has the right data available, it’s hard to come by. The good news, however, is that creating a completely new model isn’t always necessary. Using an existing system, such as the Google Vision API, may in fact be preferable from a time, cost and features perspective. So smart choices in the creative approach can usually cover up any shortcomings.

The second challenge is fixing bugs. ML can be a bit of a black box, with even scientists often only able to speculate at its inner workings. This means that fixing bugs is a major challenge and vastly unlike regular programming. It’s not a matter of simply changing a few lines of code, but requires engaging in a whole new cycle of trial, error and discovery to develop the right model.

The last challenge is that we need to learn how to design with ML in mind. While the success rates for output can be impressive, it’s not perfect. For example, human lipreading has a tested success rate of around 20%, while ML has a 50% success rate. So, if we want to build a campaign around this, we need to come up with a solution for the remaining 50%.

With machine learning we need to take the user on a journey and help them appreciate what’s going on. It shouldn’t be a binary experience if it’s to be impactful, and there needs to be room for error built into the concept. So, to solve this we have to be creative with the tools we have, and come up with smart fallback solutions which don’t break user engagement.
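One way to sketch such a fallback (function and label names here are made up for illustration; a real system would use the confidence score its model actually reports) is to gate the prediction behind a threshold and degrade gracefully rather than show a wrong result:

```python
# Sketch: only trust the model's answer above a confidence threshold;
# otherwise fall back to a generic experience instead of breaking
# user engagement with a wrong one.

CONFIDENCE_THRESHOLD = 0.8

def choose_experience(prediction, confidence):
    """Return the personalised experience when the model is confident,
    otherwise a safe default that keeps the journey intact."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"personalised:{prediction}"
    return "fallback:generic-experience"

print(choose_experience("sneaker-ad", 0.93))  # confident: personalised
print(choose_experience("sneaker-ad", 0.41))  # unsure: graceful fallback
```

Tuning that threshold is itself a design decision: too high and the personalised path rarely fires, too low and users see the model's mistakes.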

Looking to the future

For marketers, ML can be integrated with a multitude of touchpoints, including Social, DOOH screens, platforms and campaign sites, and can play a powerful role that influences the consumer decision-making journey.

Some possibilities include using object recognition so a brand can recognise types of products from competitors and show the user an equivalent product in its range. Or, using voice recognition to develop conversational UI or voice controlled websites.
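The first idea reduces to a mapping step once recognition has run. A hedged sketch (the labels, catalogue and product names below are invented for illustration; in practice the labels would come from an object-recognition service such as a vision API):

```python
# Sketch: map labels detected in a user's photo to equivalent products
# in the brand's own range. Catalogue and labels are illustrative only.

OWN_RANGE = {
    "blender": "our-blender-pro",
    "vacuum cleaner": "our-vacuum-max",
}

def equivalent_products(detected_labels):
    # Keep only the labels we can match to our own catalogue.
    return [OWN_RANGE[label] for label in detected_labels if label in OWN_RANGE]

# Labels as an object-recognition service might return them:
labels = ["blender", "kitchen counter", "fruit"]
print(equivalent_products(labels))
```

The hard part is the recognition itself; once a competitor's product resolves to a category label, surfacing the brand's equivalent is ordinary lookup logic.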

For digital agencies, opportunities lie in creatively implementing available applied machine learning solutions, such as the Google Vision API, TensorFlow, IBM Watson, Microsoft Cognitive Services and API.ai. We’re seeing tremendous potential for clients and users, and we’re particularly excited about what comes next.

Through trial, error and deployment, here at MediaMonks we’re working with lead researchers to explore ML’s potential and how we can build solutions at scale. We’re quickly moving from a position of “we think this is possible” to “we know this is possible”.

Can we build serendipitous experiences through the use of predictive algorithms? Can we recognize fashion trends based on Instagram and what people are wearing in the street? Can we create awareness by detecting cyberbullying? Can we recognize endangered species being sold at a market?

Client briefs that were once out of reach are suddenly becoming a reality and we are excited. Machine learning will enable us — and others — to craft the most captivating digital work seen in years. Just watch this space…

This article originally appeared on HuffPost on May 15, 2017.

