VR Scout Runs, Rappels and Revels in the Jack Ryan Experience

4 min read

Written by
Kate Richling
CMO

This is one of our favorites – and that says a lot when you consider the reach Amazon’s Jack Ryan Experience had in terms of buzz and coverage. Here, VR Scout covers our work with Amazon Studios at Comic-Con –

Rappelling from a helicopter and zip-lining in VR will have your heart pounding.

What better way to tease the worldwide release of the new Tom Clancy’s Jack Ryan series on Amazon Prime than to drop thousands of Comic-Con fans into the boots of Jack Ryan himself?

Taking over an entire 60,000-square-foot city block, Amazon created a massive event park that places you in the heart of the Middle East. It featured one of the most extravagant attempts at an end-to-end, warehouse-scale VR experience I’ve ever seen: rappelling, zip-lining, plank walking, and even a car chase.

Over the years, I’ve grown accustomed to VR taking center stage at Comic-Con, experiencing everything from an immersive Blade Runner ride to a Mr. Robot simulcast at Petco Park. But for the most part, the VR demos were usually as simple as putting on a headset and enjoying the ride.

Not for Jack Ryan. Jack Ryan loves action—and stomach drops.

Upon entering the training park and receiving your Analyst ID badge, the first thing you’ll notice is the Jack Ryan Training Field, an overbearing obstacle course with a life-size Bell Huey military helicopter propped a couple of floors off the ground.

Before entering the immersive “training field,” I got the privilege of watching UFC fighter Ronda Rousey breeze through her run. Geez, I have to follow in her footsteps?

Slowly making my way up multiple flights of stairs to gear up in the cabin of the helicopter, precariously perched atop a bombed-out building, made me realize how out of the ordinary this VR experience was about to be. I strapped on a rappelling harness, an HP Omen X VR backpack PC, a modified Oculus Rift VR headset, and hand and foot trackers. My heart began to beat faster.

Then virtual reality happened. Staff members dressed as soldiers guided me to the edge of the sliding cabin door. In VR, I could see my limbs in front of me as I stutter-stepped to the edge. It was clear that I was now flying high over a war-torn city. I took a seat and nervously watched my virtual legs dangle in the air off the side of the chopper. The next thing I knew, I was rappelling out, actually hoisted down from the safety of the cabin by a crane. My heart was racing.

Dropping down into a conflict zone under fire made what happened next all seem like a quick blur. I “crossed” a ridiculously long, unsteady plank of wood to enter another bombed-out building. I picked up a weapon and began engaging with enemies, ducking behind crates that were both physically and virtually there.

Walking further down a hallway, I came to a balcony. I grabbed onto a zip-line and literally just walked off the side of the building. I landed on what could only be safety mats and was quickly ushered into a vehicle where I had to drive myself to safety. What the hell just happened?

Keep in mind I was physically walking, grabbing, and flying with a VR headset on the entire time.

I’ve been to location-based entertainment VR centers before, but this was another level. It’s especially mind-boggling when you realize that you’re being tracked in VR the entire time with an OptiTrack motion capture system and a wireless VR headset system.

Did I mention that this was all outside under direct sunlight?

Created in collaboration with MediaMonks, a team of over 300 worked non-stop over a few months to get this ready for Comic-Con. Quite an accomplishment considering the public will get to experience it this week only. This really should be its own theme park.

The immersive Jack Ryan Training Field pushes you to uncover whether you have what it takes to become a field operative. And I can tell you, I barely had what it took. It’s a nerve-racking experience that had me questioning the reality in front of me and forced me to push aside any fear of heights I may have had.

The Jack Ryan Training Field was also streamed live on Twitch, where viewers could interact and throw challenges in the way of me or others running through the course.

On top of the VR Training Field, Amazon also erected a massive escape room that will throw you into your first field assignment. Created in collaboration with AKQA and Unit9, you can dig deep to thwart an extremist conspiracy, uncover a plot of double-crossing, and obtain classified intel. The Dark Ops escape experience is run as a live drama, with actors, voice technology and immersive set pieces.

If you’re heading to Comic-Con this week, you’ll find the Tom Clancy’s Jack Ryan experience right outside Comic-Con at the corner of MLK Promenade & 1st Street. The season premiere airs August 31st on Amazon Prime.

This article was originally published on VR Scout on July 19, 2018.

Innovation Labs, the Future of Reality & Global Expansion | In Conversation with SoDA

5 min read

Written by
Kate Richling
CMO

Here, our friends at SoDA (or The Digital Society) sit down with MediaMonks Founder and COO Wesley ter Haar, who is also on the SoDA Board of Directors.

In this year’s Global Digital Outlook Study with Forrester, SoDA expanded its inquiry into the area of innovation labs and uncovered some interesting findings.

SoDA: Not only are agencies continuing to launch internal labs and incubators but, more importantly, they are making direct investments into these initiatives. Do you think agencies are finally getting serious about innovation and realizing they need to invest in R&D rather than just hope to do cool work as part of regular client initiatives? How does MediaMonks approach innovation and do you invest in it outside of directly funded client projects?

Wesley ter Haar: I’ve never been a great fan of the Lab moniker. It’s a weird way to silo innovation as a plaything instead of making it a core part of the day-to-digital-day we all live in.

Monk Thoughts The key question anyone should ask themselves in our business is the existential one, 'Am I still relevant X months from now?'

Wesley ter Haar: That X used to be 24 to 36 months, and has probably been whittled down to nine months with the constant change that besets consumer behavior and client adoption. At MediaMonks we hire or acquire against an internal innovation roadmap based on where we see the confluence of people, products and platforms heading. For us, that has meant the acquisitions of a VR-first production company and a connected commerce company, the launch of a digital-first content company and a hiring spree to bolster our AR capabilities. So, yes, innovation is critical to the health of our business, but I don’t believe a ‘Lab’ is the way to make it central to who we are and what we do.

SoDA: MediaMonks is hired by client-side marketers (and agencies) to deliver cutting edge work. This year we found that Chatbots/Conversational Interfaces, AI/Machine Learning and Programmatic Advertising topped the rankings for anticipated impact and planned investment in emerging technology. Agency leaders and marketers were generally aligned on this front with one major exception… Virtual and Augmented Reality. Marketers are planning to make significant investments in VR/AR while agency leaders are lukewarm on the short-term marketing impact. Why do you think there’s such a big gap between marketers and agencies on this front?

Wesley ter Haar: I think this gap mostly represents the excitement for the future state of “The R’s” (augmented reality, virtual reality and mixed reality) relative to the technical maturity of current platforms and production processes.

Monk Thoughts In fact, VR/AR currently have an 'r-problem' of their own and Reach, Results and ROI will be narrow until there is full native OS support on mobile devices, some level of convergence on distribution platforms, standard industry specifications and clear metrics.

From the SoDA perspective, this shows a mature agency landscape with many agency leaders trying to think from a client perspective, focus on value/budgets and look at reasonable metrics. As agency leaders, we have to make sure we educate clients on the now while planning for the future so we don’t miss the boat when the scale and spread of these technologies starts impacting brands, business and bottom lines.

SoDA: For many years, SoDA has tracked what marketers value most in their agency relationships and, on the flip side, what agencies think their clients value most in their partnership with them. “Expertise in Emerging Tech/Trends” and “Process/Project Management” are consistently rated in the Top 5 by both agency leaders and client-side marketers. How does MediaMonks balance the importance of project management rigor with the desire for clients to explore (and quickly deliver on) the latest technologies? Is there a healthy tension between these two factors?

Wesley ter Haar: There will always be tension between doing difficult things for the first time and delivering difficult things for the first time, on time. Our role is to explain risk, mitigate against it as best we can, and make the “fall down and get back up” process of research, innovation and iteration one that is transparent to clients.

Monk Thoughts A company like ours is built on saying 'YES' because we believe we can solve the ample caveats that emerging tech trends bring to the table...

…But, the level of comfort on the client side is going to rely on the quality of the process and the project management rigor around it. In the same way that ad agencies are not artists, digital agencies shouldn’t hide behind labs and a “Crazy Scientist” vibe when it comes to new technology and trends. It’s all about the practical application for clients and their (potential) customers.

SoDA: This year we asked agency leaders to identify strategic factors they saw as most critical to their ongoing growth and evolution. Not surprisingly, “Attracting and retaining top talent” and “Developing new services / capabilities” topped the list. Interestingly, very few looked at “Expanding to new markets/geographies” as an important part of their strategic plans. MediaMonks appears to be quite the opposite with offices now in Amsterdam, London, Stockholm, New York, LA, Dubai, Sao Paulo, Buenos Aires and Singapore. How do you approach geographic expansion and why has it been so central to your growth strategy? What challenges have you wrestled with in managing the business across such a broad geographic footprint? What do you see as the most exciting new markets?

Wesley ter Haar: To start with the reasons, it gives us the opportunity to recruit and retain talent at a much larger scale, and in turn helps us cater to the ambition many of our Monks harbor when it comes to working in other countries and cultures. 

Monk Thoughts For clients, it means we can offer global scale and local relevance. So much of the work we do needs to be created and trans-created across regions and there is a clear efficiency in cost, quality and project control when we run that via our footprint.

We run all offices as a single P&L, which sounds like an admin choice but is a critical cultural component. We are one company operating across 9 countries and 10 offices, with teams and talent working across time zones. Budgeting, resourcing and planning need to be seamless to make that work, and that’s been the operational focus from Day 1.

 

Monk Thoughts Choosing between markets is like anointing a favorite child...

…The US is our largest market and the most ambitious, APAC leads the way in user behavior, LATAM has some of the most creative visual talent I’ve seen and I’m always amazed by the creative ideas that bubble up from smaller European markets. It goes to show that constraints are never a reason to deliver mediocre work.

This article was originally published in The SoDA Report – with key findings from Forrester.

Buying a Ticket for the VR and AR Hype Train? A Technologist Gets Real

4 min read

Written by
Samuel Snider-Held
Senior Director of Technology & AI

VR and AR are the future, or so they say. With headlines like “2016 Will Be the Year That Sets the Stage for Virtual Reality” and “How VR Is Starting To Become Our Reality In 2017” taking over the hyper-saturated blogosphere, it might seem like VR and AR are the only technologies worth investing in.

But, as a virtual and augmented reality creative technologist, I’m constantly telling clients and colleagues to question this sentiment.

I work on some of the world’s most forward-thinking VR and AR projects every day and wholeheartedly believe in the power of these technologies to alter, integrate, or create new experiences and memories. At the same time, it’s also my job to think critically about technology and what new tools are best suited to meet client objectives.

VR struggles with two things: sharing and distribution. Since digital advertising lives or dies by social, the question many brands face is: how do we share cutting-edge VR and AR experiences? The struggle is that these experiences are not inherently shareable. VR hijacks your perception of the world by creating an illusion for you and your eyes only. Unless the VR experience is broadened through another channel, such as a teaser video on YouTube, sharing it with your friends requires them to be as big a VR geek as you are.

So if you’re a digital strategist or brand manager stretching yourself to explore how you can engage with these tools, I encourage you to stop, take a breather, and first read this post. It might just be that the best tool for the job is another technology entirely.

The Next Big Thing — Social VR Integration

Currently, brands are mostly interested in creating their own VR experiences instead of exploring the opportunities surrounding the VR hype. A brand creating its own one-off VR or AR experience in hopes of opening a new channel for brand awareness is like creating an entire branded social network or community. This was very popular in the early days of social media advertising, but now you advertise within these social networks instead of trying to replace them. The same will happen with AR and VR.

For example, while Facebook is working on its experimental VR social platform Spaces, there already exist social platforms like Rec Room and AltspaceVR where users can virtually join others to talk, play games, and create things. If you’re hell-bent on creating VR and AR content for your brand, this is the trend to watch. The social iterations of VR will have an infrastructure designed for you to tell brand stories. This will be much cheaper and easier than creating your own application, and you can see the beginnings of this in Facebook’s AR Studio. And similar to the way much of our current work is focused on creating content for existing platforms, we’ll be delivering VR and AR in this way.

Experiences Unique to VR & AR

It’s undeniable that VR and AR can create unique experiences and express creative ideas that are not possible with any other technology. So if you’re dead set on creating a VR or AR experience, then make sure that you play to the medium’s strengths.

Take, for instance, this mixed reality case: Into the Wild at Singapore’s ArtScience Museum, the world’s largest mixed reality experience to date. Using AR markers to place tiny virtual objects or characters on your table has been around for ages, but this is different. Using Google’s Tango technology, a museum was augmented into a living rainforest: walls were transformed into trees and corridors into forest paths, and guests were given tablets with which they could walk around and interact with endangered animals, something they can’t do in real life. The magic of AR is not just bringing virtual animals to your dining room table, but to your entire environment.

Or perhaps my favorite example, Google Earth VR. Imagine having the whole planet at your fingertips, one moment deftly flying through the skyscrapers of midtown Manhattan, and the next sitting peacefully at the top of Mount Everest. Built from satellite imagery and 3D photogrammetry, the environments in Google Earth VR are majestic, and are some of the most presence-inducing in all of VR. There’s nothing more mind-blowing than virtually standing outside of your apartment looking up at your window, knowing that you’re actually inside, decked out in VR gear. Furthermore, the experience really gets you excited about VR’s future. If this is what it looks and feels like now, what will it look like 10 years from now?

The Future

But what about other VR experiences that could well and truly be useful in real life? In the automotive industry, you could let potential customers take otherwise impossible test drives, changing features and options within the same experience.

Or, think about how VR can provide an amazing tool for training new professionals in technical fields.

What if you could train to be a wind turbine technician by running through a variety of possible scenarios before you ever set foot in one? Or, imagine learning a language. How useful would it be to simulate the feeling of language immersion by placing someone learning French in a Parisian cafe, where they can only navigate the experience by correctly pronouncing various phrases?

Or museums! Imagine going to MoMA and, via an AR tablet, seeing Jackson Pollock feverishly throwing paint at one of his canvases!

The possibilities are endless, but that doesn’t mean that every possibility is right for your brand. So before you spend a whole lot of your (or your client’s) money, ask yourself this: why do you want to create a VR or AR experience? Do these technologies really provide your brand something better than other technologies? Or is your idea just a gimmick? If you’re looking for reach or engagement, then maybe wait a while before reaching for a VR headset. As the VR and AR markets mature, the channels for telling your brand stories will mature with them.

This article originally appeared on Shots on July 5, 2017.


A Technical Look into Mapping Virtual Worlds for Real Spaces

7 min read

Written by
René Bokhorst
Technical Director

In February 2017, together with World Wildlife Fund, ArtScience Museum and Google Zoo, MediaMonks launched the large-scale mixed reality experience “Into The Wild.”

The project was built to help people in Singapore experience the devastating effects of deforestation and learn more about some of the world’s most endangered species and their habitats.

The experience ran on the Lenovo Phab 2 Pro, the world’s first Tango-enabled smartphone, and guided visitors through personalised digital adventures, which started with augmented reality (AR) on the ground floor of the exhibition space before transitioning to full virtual reality (VR).

At the end, the experience shifts back to AR: users go up to the fourth floor, where they can plant a virtual tree.

The project transformed over 1,000 square meters of Singapore’s ArtScience Museum into a virtual, interactive rainforest, making it the largest AR experience in the world and only the second AR museum experience developed using Google Tango.

And it wasn’t easy. From a technical perspective, we faced the massive challenge of how to accurately and smoothly map a virtual rainforest onto a physical and dynamic museum space, making sure the walls aligned with trees, corridors with the forest’s paths, and that we worked our way around the museum’s existing exhibitions and staging.

The short version: once we had found the transformation between Tango’s ecef coordinates and our Unity world coordinates, completing the alignment was straightforward. Since Tango reports the device’s coordinates in ecef, we can calculate the corresponding Unity coordinates and update the Unity camera with every Tango update we receive.

What’s more, for every virtual tree planted, a real tree was planted in Rimbang Baling, one of the last pristine rainforests in Sumatra where the endangered Sumatran tiger lives. 5000 new trees were pledged in the project’s first month.

I hope by sharing this we can inspire the imagination of current and aspiring developers to build even more exciting AR/VR experiences that map to the real world. Go forth and conquer!

This article originally appeared on TechWorld on July 18, 2017.

So How Did We Do It?

To start with, if you’re augmenting the real world with virtual objects, it’s important that the device rendering your view (such as a smartphone, monitor, CAVE or head-mounted device) knows exactly where it is in the real world.

For this, a device needs to know its position and orientation in a three-dimensional space.

In the case of Tango, where the augmentation happens on a camera feed, the position and orientation of the rendering device need to be in real-world coordinates. Only if the position and orientation of a Tango device are reported accurately, and fast enough, is proper augmented reality possible.

The fact that Google Tango does this for you is very cool, because it allows developers to augment real-world locations within their own virtual world. This is different from Snapchat-style AR which, for example, attaches bunny ears to your head.

With real world bound augmentations, you can potentially create shared AR experiences that revolve around and involve landmarks.

In this case, it allowed us to transform the ArtScience Museum into a lush virtual rainforest. From the user’s perspective, exploring the rainforest becomes as natural as exploring the museum itself, because every corridor or obstacle in the virtual world matches a corridor or obstacle in the real one.


Google Tango Coordinates

We used Unity3D to create our virtual world. To begin, we assured our Unity developers that they wouldn’t have to worry about alignment and were free to design the virtual world using whichever position or orientation they liked, as long as it was true to scale.

Developers familiar with Geographic Information Systems (GIS) know there are a lot of coordinate systems out there, called “datums”. Historically, a lot of institutes developed their own, but since the introduction of GPS, the US-developed WGS84 has become the one most often used by commercial devices.

The great thing about this coordinate system is that it is Cartesian, works in meters, and uses the centre of the earth as its point of origin. This is important because, in a properly mapped environment, Google Tango can give you its exact position and orientation on the globe, and it gives you these in WGS84.

Google Tango calls these coordinates ecef (Earth-Centred, Earth-Fixed) coordinates, so we’ll call them ecef as well.
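Tango hands you ecef coordinates directly, so you never have to derive them yourself, but it helps intuition to see how a familiar latitude/longitude/height position maps onto ecef. The sketch below is purely illustrative (it is not part of the museum project’s code) and assumes only the standard published WGS84 constants:

```csharp
using System;

static class Wgs84
{
    const double A = 6378137.0;              // WGS84 semi-major axis, metres
    const double F = 1.0 / 298.257223563;    // WGS84 flattening
    const double E2 = F * (2 - F);           // first eccentricity squared

    // Geodetic latitude/longitude in degrees, height in metres above the ellipsoid.
    public static (double x, double y, double z) ToEcef(double latDeg, double lonDeg, double h)
    {
        double lat = latDeg * Math.PI / 180.0;
        double lon = lonDeg * Math.PI / 180.0;

        // Prime vertical radius of curvature at this latitude.
        double n = A / Math.Sqrt(1 - E2 * Math.Sin(lat) * Math.Sin(lat));

        double x = (n + h) * Math.Cos(lat) * Math.Cos(lon);
        double y = (n + h) * Math.Cos(lat) * Math.Sin(lon);
        double z = (n * (1 - E2) + h) * Math.Sin(lat);
        return (x, y, z);
    }
}
```

The resulting values are measured in metres from the centre of the earth, which is exactly why they come out in the millions, as the examples further down show.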

Determining the Correct Approach

The next step is to ensure our Unity world overlaps with the real world so we can achieve augmented reality. Two approaches to solve this come to mind.

  1. Transform (move+rotate) the Unity world to sit on top of the ecef coordinates of the museum.
  2. Transform (move+rotate) the ecef Tango device coordinates into Unity world coordinates.

The approaches are 80 percent the same, as in both cases you have to calculate the transformation from virtual (Unity) to real (ecef). The difference, however, lies in whether you transport the virtual world onto the real one (approach one), or whether you transport the real camera onto the virtual world (approach two).

To determine which approach is best, we had to see what these coordinates look like in a real use case. Here are some examples of how Unity coordinates look:

Object A: [10.000, 63.250, -11.990]

Object B: [-92.231, 33.253, -62.123]

By contrast, below are two examples of how ecef coordinates look:

Hilversum MediaMonks HQ, 2nd floor near the elevator: [3899095.5399920414, 353426.87901774078, 5018270.6428830456]

Singapore ArtScience Museum, in front of the cashier shop: [-1527424.0031555446, 6190898.8392925877, 142221.77658961274]

Obviously, the ecef coordinates are quite large numbers. In fact, it’s clear that single-precision floating points (or floats) are going to have a lot of trouble with these.

Without going too much into detail about floats, it’s important to note that performing arithmetic that mixes numbers around 10⁻⁶ with numbers around 10⁶ means that you significantly lose accuracy.

In addition, there’s no way around the fact that a lot of 3D programming is done in the range of roughly 10⁻³ to 10³ (think of transformation, model, view, or projection matrices).

To understand this further, I recommend watching this video, as it demonstrates the point perfectly. It shows a fighter jet taking off from around the origin [0, 0, 0] with a camera following it and, as its position (as well as the camera’s) gets larger and larger, the floating point calculations become less and less accurate.

Imagine, then, what the error would be if the coordinates of your camera looked like the ecef coordinates shown above. You would be combining fine-scaled rotation values with very large position values, and the error in the result would be enormous.
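To make the problem concrete, here is a tiny standalone C# illustration (not from the project) of what single precision does to coordinates at ecef magnitudes:

```csharp
using System;

class FloatPrecisionDemo
{
    static void Main()
    {
        // Single-precision floats near ecef magnitudes (millions of metres):
        float ecefX = 3899095.54f;          // roughly the x-coordinate measured at the Hilversum office
        float nudged = ecefX + 0.01f;       // try to move by one centimetre
        Console.WriteLine(nudged - ecefX);  // prints 0: a float's resolution at this magnitude is about 0.25 m

        // Double precision still resolves centimetres comfortably at this scale:
        double ecefXd = 3899095.5399920414;
        Console.WriteLine((ecefXd + 0.01) - ecefXd); // ~0.01
    }
}
```

A one-centimetre camera move simply vanishes in single precision at those magnitudes, which is exactly the kind of error you cannot hide in AR.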

AR isn't quite as fun if the augmentation isn't done accurately.


Add to this the fact that Unity is hard-coded to work with floats (rather than double-precision floating points, or doubles), and that we can’t afford any large errors in AR, and it’s clear that approach one is unfeasible: the camera needs to stay relatively close to the origin to avoid precision errors.

So, we proceed with approach two which is to transform the ecef Tango device coordinates into Unity world coordinates.

Find the Transformation

Transformations between coordinate systems in 3D graphics usually entail finding translation (positional), orientation and scaling values.

Each of these three concepts acts in 3D space, so each must be described for all three axes (x, y and z). This gives us nine values to find.

The number of unknowns hints at how many equations you need in order to solve for them, which is important when determining how many real-world coordinates we need to anchor our virtual world to.

Our initial idea was to create a transformation that would deal with all three concepts (translation, rotation, and scale). However, given the difficulties involved, and the fact that we were able to design our virtual world true to scale, we decided to drop scale and focus on translation and rotation only.

This meant that effectively we now only need to find six unknowns.
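Put as a formula (the notation here is mine, not the original article’s), for every anchor pair we are looking for a rotation and a translation such that

$$q_i \approx R\,p_i + t, \qquad R \in SO(3),\quad t \in \mathbb{R}^3,$$

where $p_i$ is an anchor measured in ecef space and $q_i$ its counterpart in Unity space. The translation contributes three unknowns and the rotation another three, which are the six unknowns mentioned above.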

Calculate the Transformation Matrix

At this point in solving this challenge, we’re down to finding a transformation matrix that only accounts for location and rotation. Luckily, this problem has been solved a million times already by Computer Science students.

If you simply search Google, you’ll find countless examples of how to transport a rigid body from one coordinate system to another. This is one example that will get you there 90 percent of the way.

Finding a transformation matrix revolves around minimising the sum of squares error between two sets of data points. The following method is tailored for this problem since it deals with rotation and translation separately.

Conceptually, we approach this by picking a point in the real world, and we say that that point corresponds to another point in the virtual world.

Basically, the worlds are anchored to each other on that point. However, as you can imagine, choosing a single point as an anchor still allows the worlds to pivot around the anchor, in which case they will be misaligned most of the time.

Therefore, to place the virtual world squarely on top of the real world, you need at least a few anchors: depending on the number of unknowns you’re trying to find, you need at least as many equations as unknowns.

Equations can be derived from known pairs of coordinates (in this case ecef and Unity coordinates), and a total of three pairs (or anchors) is enough to let us find a full 3D transformation matrix.

The idea is that you choose N points (at least three) in the real world and find their ecef coordinates. Then you go into the virtual world and place a point at each of their corresponding virtual locations (so N in total). For the museum project, we used 10 easy-to-find landmarks at the base of each pillar inside the museum.


Above, two ecef coordinates we measured in the real world and below, their virtual world counterparts.

For this, we used a third-party library called Math.NET, which allows us to do linear algebra with doubles. You only have to run the alignment code once, at the start of the program.

The result is that we now have 10 ecef coordinates and 10 Unity-space coordinates shaped in a circle: 10 pairs in total. The next step is to apply the method discussed above and find a transformation matrix that allows us to transform a point from ecef space to Unity space.
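The original post showed the alignment code as an image, so it isn’t reproduced here. Below is a minimal sketch of the idea in C# with Math.NET Numerics, assuming the matched anchors have been collected into two arrays (ecefAnchors[i] pairs with unityAnchors[i]); it uses the standard Kabsch/SVD-style least-squares fit, which solves for rotation and translation separately, and may differ in detail from the code used in the project:

```csharp
using MathNet.Numerics.LinearAlgebra;

static class EcefToUnityAlignment
{
    // Returns (R, t) such that unityPoint ≈ R * ecefPoint + t, in a least-squares sense.
    public static (Matrix<double> R, Vector<double> t) Solve(double[][] ecefAnchors, double[][] unityAnchors)
    {
        int n = ecefAnchors.Length;
        var M = Matrix<double>.Build;

        // 3xN matrices, one anchor per column.
        var P = M.DenseOfColumnArrays(ecefAnchors);   // real-world (ecef) anchors
        var Q = M.DenseOfColumnArrays(unityAnchors);  // virtual-world (Unity) anchors

        // Centroids of both point sets.
        var pMean = P.RowSums() / n;
        var qMean = Q.RowSums() / n;

        // Centre both sets on their centroids, so rotation can be solved independently of translation.
        var Pc = P.Clone();
        var Qc = Q.Clone();
        for (int i = 0; i < n; i++)
        {
            Pc.SetColumn(i, P.Column(i) - pMean);
            Qc.SetColumn(i, Q.Column(i) - qMean);
        }

        // The SVD of the cross-covariance matrix gives the best-fit rotation.
        var svd = (Pc * Qc.Transpose()).Svd();
        var R = svd.VT.Transpose() * svd.U.Transpose();

        // Guard against a reflection (determinant -1), which would mirror the world.
        if (R.Determinant() < 0)
        {
            var D = M.DenseIdentity(3);
            D[2, 2] = -1;
            R = svd.VT.Transpose() * D * svd.U.Transpose();
        }

        // With the rotation known, the translation follows from the centroids.
        var t = qMean - R * pMean;
        return (R, t);
    }
}
```

Run once at startup against the 10 measured pairs, this yields the rotation and translation that together make up the ecef-to-Unity transformation used below.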

We ran into a few problems while implementing this. For example, Unity uses a left-handed coordinate system, while ecef is right-handed. The article we referenced above also used row-major ordering, while the library uses column-major ordering.

This makes filling, transposing, and the multiplication order of matrices different. We eventually overcame all of these problems through careful reasoning, and by not trying to take too many steps at the same time.
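As an illustration of the handedness issue (the exact convention the project used isn’t shown here, so treat the choice of axis as an assumption): converting a point between a right-handed frame and Unity’s left-handed frame comes down to negating one axis, and whichever flip you pick has to be applied consistently to the anchor points before solving, so the solver never needs to produce a reflection:

```csharp
static class Handedness
{
    // One possible convention (an assumption, not necessarily the project's):
    // negate z when moving a point between a right-handed frame and Unity's left-handed frame.
    public static double[] FlipHandedness(double[] p) => new[] { p[0], p[1], -p[2] };
}
```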

Apply the Transformation

Following the previous step, we have a transformation matrix we could call ecefTunity (or unityTecef, depending on how you calculated the matrix). Transforming a point from ecef space to Unity space then becomes trivial, as sketched below.

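The original article showed this final step as a screenshot. Here is a rough sketch of what the per-update application can look like, assuming the R and t produced by the solver above; the component and callback names are hypothetical, not part of the Tango SDK:

```csharp
using MathNet.Numerics.LinearAlgebra;
using UnityEngine;

public class TangoCameraAligner : MonoBehaviour
{
    // R and t come from the one-off alignment solve at startup (see the sketch above).
    public Matrix<double> R;
    public Vector<double> t;

    // Hypothetical hook: call this from whatever callback delivers Tango pose updates,
    // passing the device position in ecef coordinates.
    public void OnTangoPose(double ecefX, double ecefY, double ecefZ)
    {
        var ecef = Vector<double>.Build.DenseOfArray(new[] { ecefX, ecefY, ecefZ });
        var unity = R * ecef + t;   // ecef -> Unity world space, computed in doubles

        // Only now drop to floats, where the numbers are small enough to be safe.
        transform.position = new Vector3((float)unity[0], (float)unity[1], (float)unity[2]);

        // The device's orientation would be carried across in the same way, rotated by R.
    }
}
```

From there, every Tango pose update keeps the virtual rainforest locked onto the museum’s real corridors and walls.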
