Being responsible for Product Lifecycle Management (PLM) and 3D visualization topics at fme, I would like to share an update with you on our activities in the field of Augmented Reality (AR) and Virtual Reality (VR) – technologies that have recently been on everyone's lips alongside buzzwords such as digitalization, IoT and Industry 4.0.
As my colleague Rolf Krämer demonstrated in his blog post "Looking-back-on-three-years-of-development-projects-in-virtual-and-augmented-reality-at-fme", fme has been involved in projects implementing VR and AR techniques for several years. One of last year's projects aimed at giving clients the opportunity to let their customers individually create their dream car, visualize it in 3D as a photorealistic, interactive model, and interact with it through an online product configurator.
Picture 1: Interactive 3D Rendering – Individually configured car, rendered in the cloud and displayed in a web browser via streaming
Most car retailers utilize configurators in which the rendering hardware necessary for complex visualizations is only available locally; hence, photorealistic and interactive visualization via HoloLens & Co. is only possible on site. Exactly these fascinating product animations are not yet accessible in web configurators, because the customer at home lacks the high-end rendering infrastructure.
Previous visualization techniques and their limits
Until now, visualization techniques were based on images of the individually configurable components rendered in advance, which were then assembled into a static image and transmitted to the user. Only a few fixed configurations could be controlled by the user as quasi-interactive 3D animations. These animations likewise consisted of statically rendered single images, displayed in quick succession to match the desired rotation (e.g. a horizontal rotation of the product at 2 degrees per rendered image).
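The quasi-interactive rotation described above can be sketched as a simple lookup: with frames pre-rendered every 2 degrees, the requested viewing angle is snapped to the nearest stored image. The filename scheme is a hypothetical illustration, not the actual asset naming.

```python
# Illustrative sketch of pre-rendered "quasi-interactive" rotation:
# each 2-degree step of the horizontal rotation maps to one stored frame.

ANGLE_STEP = 2  # degrees between pre-rendered frames

def frame_for_angle(angle_deg: float) -> str:
    """Return the pre-rendered frame closest to the requested angle."""
    snapped = round(angle_deg / ANGLE_STEP) * ANGLE_STEP % 360
    return f"car_rot_{snapped:03d}.png"

print(frame_for_angle(45.7))   # snaps to the 46-degree frame
```

Dragging the mouse simply advances the angle and swaps in the matching image – no live rendering takes place.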
Due to the steadily increasing diversity of models and configuration options (currently up to several million variants), combined with the multiple corresponding views (front, rear, left-hand side, right-hand side, top, 3D), both the lead time required to analyze and assemble the images and the storage space needed keep growing – while the time available for their creation (driven by constant facelifts) keeps shrinking.
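A back-of-the-envelope calculation shows why pre-rendering does not scale. The figures below are illustrative assumptions, not actual project numbers:

```python
# Hypothetical numbers: pre-rendering every variant from every view
# multiplies quickly into hundreds of millions of images.
variants = 2_000_000          # assumed number of configurable variants
views = 6                     # front, rear, left, right, top, 3D
rotation_frames = 180         # 360 degrees at 2 degrees per frame (3D view)

images_static = variants * (views - 1)        # one image per static view
images_rotation = variants * rotation_frames  # frames for the 3D rotation
total = images_static + images_rotation
print(f"{total:,} images to pre-render")      # 370,000,000 images
```

Even at a modest file size per image, storing and regularly re-creating a library of this magnitude quickly becomes impractical.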
Discovering new opportunities by means of new technologies
These persistent disadvantages were overcome in a project under our leadership by moving the compute-intensive rendering steps into a cloud environment. As a result, the output is now provided to the client as an individually created video stream, in real time and open to interaction.
The cloud architecture makes it possible to respond individually to the clients' usage behaviour. In particular, the time-of-day-dependent system load is handled by automatically adding or removing rendering components in the cloud, which optimizes the cost structure. (Example: if peak usage is between 6 and 10 pm, a larger number of rendering components is needed only during this period.)
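The time-of-day scaling described above can be sketched as a simple policy function. The thresholds and instance counts below are illustrative assumptions, not the project's real configuration:

```python
# Minimal sketch of a time-of-day scaling policy: scale the rendering
# fleet up during the evening peak, keep a small baseline otherwise.

def desired_render_instances(hour: int) -> int:
    """Return the target number of rendering instances for a given hour."""
    if 18 <= hour < 22:   # assumed peak usage window (6-10 pm)
        return 20         # assumed peak capacity
    return 4              # assumed baseline outside the peak

# The cloud's autoscaler would be driven by this target, e.g. every hour.
print(desired_render_instances(19), desired_render_instances(3))
```

In practice such a schedule is usually combined with load-based metrics (queue length, GPU utilization) so that unexpected demand spikes are also covered.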
The configuration of the cloud architecture can be created or adapted through scripts. This approach builds on IaaS (Infrastructure as a Service) offerings and is commonly referred to as Infrastructure as Code.
This allows particularly short reaction times during productive operation. Standardized monitoring, security updates and backup functionality are available out of the box – both more reliable in operation and less expensive than a self-operated infrastructure.
The data throughput can be adapted to the varying bandwidth of the client's internet connection, enabling a smooth visualization of the 3D stream even under poor network conditions.
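One common way to realize such bandwidth adaptation is a quality ladder: the stream is encoded at several quality tiers, and the highest tier the measured bandwidth can sustain is selected. The ladder below is a hypothetical example, not the project's actual encoding settings:

```python
# Sketch of bandwidth-adaptive streaming: pick the highest stream quality
# that the measured connection bandwidth can sustain.

QUALITY_LADDER = [            # (label, required Mbit/s), best first
    ("1080p", 8.0),
    ("720p", 4.0),
    ("480p", 2.0),
    ("360p", 1.0),
]

def pick_quality(measured_mbits: float) -> str:
    """Return the best quality tier that fits the measured bandwidth."""
    for label, required in QUALITY_LADDER:
        if measured_mbits >= required:
            return label
    return QUALITY_LADDER[-1][0]  # fall back to the lowest tier

print(pick_quality(5.3))  # a 5.3 Mbit/s connection gets the 720p tier
```

Re-evaluating this choice periodically lets the stream degrade gracefully when the connection worsens and recover when it improves.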
Picture 2: Interactive 3D Rendering – Detailed view of a configured car (zoom in)
Furthermore, the user can interact with the stream: pre-configured cameras can be switched through and directly controlled by means of rotate and zoom commands with minimal latency.
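The interaction model can be sketched as follows: the frontend sends small camera commands, the cloud renderer applies them to the active camera state and streams the updated frames back. The command names, limits and state layout are illustrative assumptions:

```python
# Sketch of server-side camera control: the frontend sends lightweight
# rotate/zoom commands; the renderer updates the camera and re-renders.

from dataclasses import dataclass

@dataclass
class CameraState:
    yaw: float = 0.0    # horizontal rotation in degrees
    zoom: float = 1.0   # zoom factor

def apply_command(cam: CameraState, command: str, amount: float) -> CameraState:
    """Apply one user command to the camera; the renderer uses the new state."""
    if command == "rotate":
        cam.yaw = (cam.yaw + amount) % 360
    elif command == "zoom":
        # clamp to an assumed sensible zoom range
        cam.zoom = max(0.5, min(4.0, cam.zoom * amount))
    return cam

cam = CameraState()
apply_command(cam, "rotate", 30)   # rotate 30 degrees
apply_command(cam, "zoom", 2.0)    # zoom in by factor 2
```

Because only these tiny commands travel upstream while the rendered frames travel downstream as a video stream, the perceived latency stays low even though all rendering happens in the cloud.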
However, extending this solution with a virtual-reality component is currently not possible with the technique described: VR requires a considerably larger number of images per unit of time, and thus a larger internet bandwidth to the frontend.
There is further potential for future projects, though – for example, the interactive display of product features, animations triggered on mouse-over, and more.
With this project, we at fme were able to prove that combining modern technologies – cloud architecture, high-end visualization and streaming – allows us to support our clients in their goal of offering their customers a more intense and exciting experience when configuring and buying their future products.