7 Hot Trends in AR

What’s hot in the world of AR right now? Happy Finish CTO Marco Marchesi shared his views with the publication CIO earlier this month. Here are the seven trends he believes are currently shaking up the world of augmented reality.

Part of my job is testing and implementing emerging technologies, and my ‘mission’ is to make magically possible what was previously considered just a crazy creative idea.

But in order to do so, cycles of design, build, fail and succeed are required, underpinned by a continuous R&D attitude. Here, I want to identify seven trends that I have personally experienced and seen rising in the AR market over the last 6-8 months (most could apply to VR or MR as well). I will also highlight the benefits and challenges we can expect from them.

Context Awareness: 

We are seeing more and more AR apps and tools that are context-aware. Deep learning is the most common technique for gaining insight into what the frame captured by the camera contains, so object detection, segmentation and image-to-image translation (my personal favourite: recognise reality and transform it) are the most used models. While AR keeps tabs on what is around us by tracking the visual features that appear on camera, deep learning techniques infer what is actually in the scene. From a simple description of the objects in front of us to a more sophisticated interaction between a virtual avatar and your laptop display, the opportunities are infinite.
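
To make this concrete, here is a minimal sketch of frame-level scene understanding using TensorFlow.js and its pretrained COCO-SSD object detector (the library, model and confidence threshold are my illustrative choices, not a specific production pipeline):

```ts
import '@tensorflow/tfjs-backend-webgl';
import * as cocoSsd from '@tensorflow-models/coco-ssd';

// Describe what the camera currently sees by running a pretrained
// object detector over the live video element.
async function describeScene(video: HTMLVideoElement): Promise<void> {
  const model = await cocoSsd.load();
  const detections = await model.detect(video);
  for (const d of detections) {
    // d.bbox is [x, y, width, height] in pixels; d.class is a text label
    if (d.score > 0.6) {
      console.log(`${d.class} at [${d.bbox.map(Math.round).join(', ')}]`);
    }
  }
}
```

A real context-aware app would run something like this every few frames and feed the labels into whatever interaction logic sits on top.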

Location Awareness:

When Niantic released Pokémon Go, it was clear that location-based AR apps had to be ‘a thing’ at some point. But placing gyms and items to collect was just the first step. In the meantime, other companies were working under the radar to map the real world and build a virtual one on top of it. Recently, a London-based startup called Scape released an SDK that lets developers place AR assets that stay persistently anchored to buildings in scanned areas across one hundred cities around the world. Snapchat introduced ‘Landmarkers’, so users can see famous monuments – from the Eiffel Tower to Buckingham Palace – augmented in the most creative ways.
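
As a simplified sketch of the underlying idea (real systems such as Scape’s refine coarse GPS with visual positioning, which I am not reproducing here), a location-aware app can decide which persistent anchors to load by checking the user’s distance to each recorded anchor:

```ts
// A geo-anchored AR asset: a lat/lon pair plus the asset to show there.
interface GeoAnchor { lat: number; lon: number; assetId: string; }

// Great-circle distance between two points, in metres (haversine formula).
function distanceMetres(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6371e3; // Earth radius in metres
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Load only the anchors within 50 m of the user's current position.
function anchorsInRange(user: { lat: number; lon: number }, anchors: GeoAnchor[]): GeoAnchor[] {
  return anchors.filter(
    (a) => distanceMetres(user.lat, user.lon, a.lat, a.lon) < 50
  );
}
```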

Remote Rendering:

The idea of relying on remote rendering capabilities to give superpowers to AR mobile content is not new, but over the years it has faced the constraints of poor network bandwidth, limited hardware and significant latency. With the advent of 5G and edge computing, the concept is becoming reality, and the first demos of real-time rendering performed on a server and visualised remotely on a mobile device have been published. Microsoft provided the first example, introducing remote rendering as one of its Azure ecosystem options – but I expect many more cloud rendering services to come very soon, along with cloud software and platforms (e.g. Stadia).
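
Conceptually, the device becomes a thin display: it streams its camera pose up to the render server and draws the frames that come back. A minimal sketch of that client side (the endpoint and message format here are hypothetical, not any vendor’s actual API):

```ts
// Thin-client sketch: send our pose upstream, draw the frames the
// server renders for us. Endpoint and wire format are hypothetical.
const ws = new WebSocket('wss://render.example.com/stream');
ws.binaryType = 'blob';

const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;

// Assume each binary message is one rendered frame, encoded as JPEG.
ws.onmessage = async (ev: MessageEvent) => {
  const frame = await createImageBitmap(ev.data as Blob);
  ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
};

// Tell the server where we are looking so it renders from our viewpoint.
function sendPose(position: number[], rotation: number[]): void {
  if (ws.readyState === WebSocket.OPEN) {
    ws.send(JSON.stringify({ position, rotation }));
  }
}
```

The hard part, of course, is the round trip: every millisecond of network latency shows up as lag between moving the phone and seeing the scene update, which is exactly why 5G and edge computing matter here.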

Faster networks:

Strictly related to the previous point, edge computing will make it possible to achieve challenging ideas where speed and reliability are non-negotiable. Imagine how difficult it would be to deliver remote rendering to 10,000 users in a stadium during a sports event or a concert. Besides this, 5G aims to be the natural partner of edge computing in delivering fast, rich and reliable experiences. Its adoption rate will be key in determining how successful server-based solutions become compared to existing locally managed technologies. For example, body tracking can currently be performed on a mobile phone by running a light AI model locally. But even as mobile computational power increases – companies like Apple and Huawei have introduced dedicated AI chips in their phones – deep learning architectures keep becoming deeper and more computationally expensive, requiring more battery, memory and processing resources. 5G will make it possible to run such architectures remotely in real time, without any noticeable latency.
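
For comparison, this is roughly what the ‘light AI model running locally’ looks like today, sketched with TensorFlow.js’s pose-detection package and the MoveNet model (my illustrative choice; the principle is the same for vendor-specific on-device models):

```ts
import '@tensorflow/tfjs-backend-webgl';
import * as poseDetection from '@tensorflow-models/pose-detection';

// On-device body tracking: one lightweight model, no network round trip.
async function trackBody(video: HTMLVideoElement): Promise<void> {
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet
  );
  const [pose] = await detector.estimatePoses(video);
  for (const kp of pose?.keypoints ?? []) {
    // Each keypoint is a named body landmark with pixel coordinates.
    if ((kp.score ?? 0) > 0.5) {
      console.log(kp.name, Math.round(kp.x), Math.round(kp.y));
    }
  }
}
```

The trade-off the paragraph above describes is visible right here: a bigger, more accurate model would simply not fit this local loop, which is where remote inference over 5G comes in.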

Virtual try-on:

Advances in deep learning have allowed us to achieve more accurate body-tracking results that run in real time on mobile devices. The retail industry, particularly sports and fashion, will take advantage of the opportunity for users to try on clothes and accessories on their mobile phones or on AR mirrors in store – making shopping a question of ‘try before you buy’, with customisation and visual effects dominating the experience. On top of that, with virtual try-on apps we can expect a reduction in returns (why would I order something if I don’t like how it fits virtually?) and consequently in the CO2 emissions caused by transportation.
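
Building on the body tracking above, a try-on overlay is largely a question of mapping a garment image onto the detected skeleton. A hypothetical sketch that positions, scales and tilts a garment between the two shoulder keypoints (the constant is an assumed property of the garment artwork):

```ts
interface Keypoint { x: number; y: number; }

// Shoulder span of the garment artwork in its own pixels (assumed value).
const GARMENT_SHOULDER_PX = 300;

// Compute where to draw the garment so it sits between the shoulders,
// scaled and rotated to match the tracked body.
function garmentTransform(leftShoulder: Keypoint, rightShoulder: Keypoint) {
  const dx = rightShoulder.x - leftShoulder.x;
  const dy = rightShoulder.y - leftShoulder.y;
  return {
    cx: (leftShoulder.x + rightShoulder.x) / 2, // garment centre, x
    cy: (leftShoulder.y + rightShoulder.y) / 2, // garment centre, y
    scale: Math.hypot(dx, dy) / GARMENT_SHOULDER_PX,
    angle: Math.atan2(dy, dx), // follow the shoulder tilt
  };
}
```

Production try-on systems go much further (cloth simulation, occlusion, lighting), but this is the geometric core that body tracking unlocks.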

Lens:

For a while, AR lenses and filter effects seemed just a playful alternative to the more sophisticated SLAM-based AR frameworks running natively on mobile. Furthermore, content creation suffered from the limitations the filter platforms impose on file size and number of vertices. With faster networks, we will see higher-quality assets, and new functionality will be introduced alongside face tracking, body tracking (of pets too!) and gesture recognition.

Web:

One of the questions I answer most often is ‘When will AR come to the web?’ For years we have seen implementations of marker-based solutions written in JavaScript and WebGL (particularly with three.js) that were compromised by browser fragmentation (it runs on Chrome/Firefox but not on Safari, or vice versa), as well as by bandwidth. The introduction of WebAssembly and AR-ready browsers has made it possible to publish marker-less experiences with a few, or even zero, lines of code (see AR Quick Look on Safari). Now the world is ready to be seen in AR, without downloading any apps.
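
To show how little code a marker-less web experience needs now, here is a minimal WebXR scene with three.js (a sketch of the browser route; on Safari the zero-code equivalent is AR Quick Look with a USDZ file):

```ts
import * as THREE from 'three';
import { ARButton } from 'three/examples/jsm/webxr/ARButton.js';

// A marker-less AR scene in a handful of lines: tap the AR button,
// and a small cube floats half a metre in front of the camera.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.01, 20);
const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(innerWidth, innerHeight);
renderer.xr.enabled = true;
document.body.append(renderer.domElement, ARButton.createButton(renderer));

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.1, 0.1, 0.1),
  new THREE.MeshNormalMaterial()
);
cube.position.set(0, 0, -0.5); // metres, in front of the viewer
scene.add(cube);

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```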
