Smart Augmented Reality - project yourself into the future

21 November 2018
Smart glasses are often called augmented reality glasses, which can be a little confusing. Let’s see if we can clarify things thanks to a system developed by AUSY.

Smart Glasses

Smart glasses feature a projection screen, together with touch and voice commands for managing interactions. The aim is to provide the wearer with useful information in a range of formats, e.g. text, audio and vibrating alerts. Like smartwatches, smart glasses are used to receive simple, at-a-glance information. Depending on the application, these glasses can provide the wearer with different types of information: the type of objects in the vicinity, weather reports, smartphone notifications, speed of travel, directions, etc. Visual communication is achieved by projecting information, in the form of floating windows, onto the glasses’ screen. When talking about smart glasses, it is important to distinguish between the physical screen and the ‘floating screen’, which is a software communication window. It is also important to understand that projecting information onto this screen does not, in itself, constitute augmented reality.

To sum up, smart glasses enable the wearer to receive information projected onto a screen system.

The most popular smart glasses on the market today are Google Glass and the Everysight Raptor.

 

Augmented Reality

Augmented Reality (AR) integrates a three-dimensional virtual object into the wearer’s actual environment. This object must evolve in real time, in accordance with the user’s movements and interactions. This is where AR becomes difficult: the user must be able to manipulate the augmented content naturally in the real world, and the main challenge lies in making that manipulation practical.

Therefore, unlike smart glasses, whose aim is to project content onto a screen, AR glasses allow the wearer to interact with augmented 3D objects in the real world.

The launch of an AR headset such as Microsoft’s Hololens is a real innovation in this domain. This augmented reality system offers the wearer a truly impressive experience, enabling high-quality interactions with virtual 3D content. Thanks to the headset’s sensors (camera, microphone, etc.), the system also builds a much richer visual perception of the wearer’s environment.
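
To give a concrete flavour of this, here is a minimal Unity C# sketch (the language used later in this article for Hololens development) of one basic interaction pattern: keeping a virtual 3D object anchored in front of the wearer and facing them. The class and field names are purely illustrative and are not taken from the Hololens SDK or from AUSY’s system.

```csharp
using UnityEngine;

// Illustrative sketch: keeps a hologram positioned in front of the user
// and oriented towards them, so it can be inspected from any angle.
// Attach to any GameObject in a Unity scene; all names are hypothetical.
public class FollowUserHologram : MonoBehaviour
{
    [SerializeField] private float distance = 1.5f;   // metres in front of the headset
    [SerializeField] private float smoothing = 4.0f;  // higher = snappier follow

    private void Update()
    {
        Camera headCamera = Camera.main;               // the headset's point of view
        if (headCamera == null) return;

        // Target position: a point straight ahead of the user's gaze.
        Vector3 target = headCamera.transform.position
                       + headCamera.transform.forward * distance;

        // Smoothly move towards the target so the hologram feels anchored.
        transform.position = Vector3.Lerp(transform.position, target,
                                          smoothing * Time.deltaTime);

        // Keep the object facing the user.
        transform.rotation = Quaternion.LookRotation(
            transform.position - headCamera.transform.position);
    }
}
```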

 

Smart Augmented Reality - bring it on!

Thanks to recent technological advances, AR is no longer in the development stage. Its current level of maturity means that augmented content is no longer the main objective but rather a tool for creating innovative products.

In the future, it will be necessary to offer users augmented interfaces adapted to a range of new tasks. This means putting in place smart services that analyse the environment and the user’s needs in real time, and then suggest relevant choices through virtual 3D interfaces. The idea is to solve real problems and create new product offers for the industries concerned.

This would involve developing bespoke systems comprising:

  1. Augmented interfaces
  2. Smart brains

Say whaaat!! A smart brain? A smart brain is an Artificial Neural Network (ANN), i.e. a system belonging to the Artificial Intelligence (AI) family.

 

An ANN is a system composed of a network of artificial cellular structures that replicate neurones, generally arranged in interconnected layers. It is a complex yet reliable mechanism for processing data. ANNs are considered a powerful alternative for getting around the limitations of classic computing; their power relies on mechanisms inspired by human neurones and on parallel data processing. These networks are used in a wide range of highly promising applications that benefit from scientific advances in this area, e.g. aircraft autopilot systems, car guidance systems, medical diagnosis, speech synthesis, computer vision and robotics. Given their current impact, we will make ever greater use of ANNs in the future.
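
To make the idea less abstract, here is a purely illustrative C# sketch of a single artificial neurone: each input is multiplied by a learned weight, the results are summed with a bias, and an activation function decides how strongly the neurone “fires”. It is a teaching sketch, not a production component.

```csharp
using System;

// Illustrative sketch of one artificial neurone: weighted sum + activation.
public static class Neurone
{
    public static double Activate(double[] inputs, double[] weights, double bias)
    {
        double sum = bias;
        for (int i = 0; i < inputs.Length; i++)
            sum += inputs[i] * weights[i];      // weighted contribution of each input

        return 1.0 / (1.0 + Math.Exp(-sum));    // sigmoid activation, output in (0, 1)
    }
}
```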

But what about Deep Learning? Deep Learning techniques help ANNs learn and come up with answers that are better adapted to their environment and function. This famous concept, inspired by how the human cortex works, is based on a hierarchical treatment of data: to facilitate processing, input data (signals, images, files, etc.) is progressively reduced in complexity until elementary features are obtained.
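
Continuing the sketch above (and re-using the hypothetical Neurone.Activate helper), the “deep” part simply consists of chaining several such layers, each of which turns the previous layer’s output into a smaller, more abstract representation:

```csharp
// Illustrative forward pass through a small stack of layers.
// Each layer maps its input vector to a smaller output vector,
// mimicking the hierarchical simplification described above.
public static class TinyDeepNetwork
{
    // layerWeights[l][n] holds the weights of neurone n in layer l;
    // layerBiases[l][n] holds its bias. Sizes are arbitrary here.
    public static double[] Forward(double[] input,
                                   double[][][] layerWeights,
                                   double[][] layerBiases)
    {
        double[] current = input;
        for (int l = 0; l < layerWeights.Length; l++)           // one pass per layer
        {
            double[] next = new double[layerWeights[l].Length];
            for (int n = 0; n < next.Length; n++)               // one neurone per output value
                next[n] = Neurone.Activate(current, layerWeights[l][n], layerBiases[l][n]);
            current = next;                                     // output feeds the next layer
        }
        return current;  // compact, "elementary" representation of the original data
    }
}
```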

Neural networks can be used to solve a range of problems. For example, they are very good at solving classification, image analysis and shape recognition problems. A range of different network architectures (CNN, RNN, GAN, VAE, etc.) are available and choosing the right one is critical.

 

What are we doing today?

In the medical field, AUSY is working on the hospital of the future, helping to develop a new generation of applications that embed an AR environment by combining artificial intelligence with tools such as Hololens.

The Hololens device offers a range of features and services to users:

• assistance with diagnoses and analyses, and rapid interpretation;
• optimal management of human and material resources;
• optimal interaction thanks to new smart interfaces;
• optimised content handling thanks to voice recognition, virtual writing, controls and gestures (a minimal voice-command sketch follows below).
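
As an illustration of the voice-recognition point above, here is a minimal Unity C# sketch using the Windows-only UnityEngine.Windows.Speech API. The keywords and the actions they trigger are hypothetical and merely stand in for a real application’s features.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;  // Windows-only speech API shipped with Unity

// Minimal sketch of a voice-command interface: a few hypothetical keywords
// are mapped to actions, then a KeywordRecognizer listens for them.
public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly Dictionary<string, System.Action> commands =
        new Dictionary<string, System.Action>();

    private void Start()
    {
        // Hypothetical commands; a real application would map these
        // to its own diagnosis or content-handling features.
        commands.Add("show scan", () => Debug.Log("Displaying scan..."));
        commands.Add("hide scan", () => Debug.Log("Hiding scan..."));

        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        if (commands.TryGetValue(args.text, out var action))
            action();  // run the action associated with the recognised phrase
    }

    private void OnDestroy()
    {
        if (recognizer != null) { recognizer.Stop(); recognizer.Dispose(); }
    }
}
```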

When developing these kinds of innovative systems, and in order to offer personalised services in specific user contexts, Deep Learning must be performed on data drawn from that same context. In other words, Deep Learning must always be based on the user’s real environment and be personalised. This improves the reliability and efficiency of the trained neural network, and thus produces relevant augmented renderings.

 

Technologies 

Microsoft technologies for Hololens development (including the HoloToolkit and the Universal Windows Platform [UWP]), combined with the powerful Unity3D engine, offer a wide range of advanced features leading to innovative services. In addition, the integration of the Vuforia AR platform into Unity3D provides a range of tools for creating advanced AR experiences.

The advantage of UWP lies mainly in the portability of applications across the different systems that integrate this API. This application platform can be used on any device running Windows 10. It also exploits the properties specific to the device in question, which means it adapts user interfaces to specific screen sizes and resolutions. It can be programmed in several languages: C#, C++, Visual Basic and JavaScript. In our AR context, Hololens and Unity3D developments are written in C#.
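
By way of illustration, a UWP application can query the device family it is running on and adapt its interface accordingly. The sketch below uses the Windows.System.Profile API; the branch contents are placeholders for illustration only.

```csharp
using Windows.System.Profile;

// Minimal UWP sketch: the same application checks which device family
// it is running on and adapts its interface accordingly.
public static class DeviceAdaptation
{
    public static string DescribeTarget()
    {
        string family = AnalyticsInfo.VersionInfo.DeviceFamily;

        switch (family)
        {
            case "Windows.Holographic":
                return "Hololens: use 3D, gaze and gesture interfaces";
            case "Windows.Desktop":
                return "Desktop: use a classic windowed interface";
            default:
                return $"Other device family ({family}): use a responsive layout";
        }
    }
}
```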

Microsoft’s Machine Learning (ML) and Artificial Intelligence (AI) tools are powerful accelerators for developing and creating smart applications. Adopting a proven neural network, such as AlexNet or SqueezeNet, and implementing new algorithms is made possible by specialised frameworks such as CNTK, ONNX, TensorFlow, etc.
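
As a rough sketch of what this looks like in practice, the snippet below shows how a pre-trained ONNX model might be loaded and evaluated from C# with the Windows Machine Learning API (available from Windows 10 version 1809). The file name and the input/output tensor names ("data", "classLabel") are assumptions that depend entirely on the exported model.

```csharp
using System.Threading.Tasks;
using Windows.AI.MachineLearning;   // Windows Machine Learning (Windows 10, 1809+)
using Windows.Media;
using Windows.Storage;

// Minimal sketch: load a pre-trained ONNX model and evaluate it on a video frame.
// "model.onnx", "data" and "classLabel" are placeholders; the real names depend
// on the model that is actually exported.
public static class OnnxEvaluator
{
    public static async Task<object> ClassifyAsync(VideoFrame frame)
    {
        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(
            new System.Uri("ms-appx:///Assets/model.onnx"));

        LearningModel model = await LearningModel.LoadFromStorageFileAsync(file);
        var session = new LearningModelSession(model);
        var binding = new LearningModelBinding(session);

        // Bind the camera frame to the model's input tensor.
        binding.Bind("data", ImageFeatureValue.CreateFromVideoFrame(frame));

        // Run inference and return the output tensor for post-processing.
        LearningModelEvaluationResult result = await session.EvaluateAsync(binding, "run");
        return result.Outputs["classLabel"];
    }
}
```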

 



 

In the past, AR tools were only available to certain privileged users. However, recent technological advances have made them more readily accessible. In the near future, we will see a massive increase in the number of useful applications.

AUSY is part of this revolution and is now developing Mixed Reality projects that combine AR and VR technologies to offer advanced smart user functionalities.


 

 


Hakim, who holds a PhD in IT, is a Computer Vision expert. He conducts research on Simultaneous Localisation and Mapping (SLAM 3D), a core AR topic. Today, he is honing his expertise by heading up our Mixed Reality R&D projects. He is particularly interested in innovation, both in how teams are organised and in how products are developed. Thanks to his experience, he has developed the skills required to implement innovative approaches to designing the products of the future.

 


 

 

Don’t hesitate to check out our web page on Smart Innovation.


Let’s discuss your projects together.
