3D Smart Assistive Navigation System

Overview:

Extreme weather (e.g., foggy and dusty conditions) and accidents or emergencies (e.g., fire) unavoidably create low-visibility environments in which people temporarily lose their optical sense and visual perception, and with them the ability to recognize barriers, avoid fallen columns, and find a way out within the shortest possible time, exposing them to potentially life-threatening hazards. For instance, firefighters risk their own lives to save others, yet when they step into a smoke-filled burning building they become disoriented by the dense smoke. The National Fire Protection Association (NFPA) reported in 2016 that an estimated 60,000 firefighters are injured and 70 firefighters die each year on average. Although various factors contribute to these injuries and deaths, disorientation in low-visibility environments is one of the main causes: not only are firefighters’ lives threatened, but rescue effectiveness is also largely impaired, since the ‘golden rescue time’ is lost during the ‘blind search’. According to a survey by WBRC among U.S. on-duty firefighters, in most cases they rely solely on ‘instinct’ to find a possible route to the people waiting to be rescued; unfortunately, it is common that they neither reach the best rescue chance in time nor survive themselves.

Therefore, in our current era of versatile portable environmental sensors and agents driven by artificial intelligence (AI) and computer vision (CV) (e.g., self-driving vehicles), there is a clear need to push past the bottleneck of human capability, with a paradigm shift toward modern assistive tools that collaborate seamlessly with humans, enabling people in low-visibility environments to regain the mobility lost to sensory deprivation and to halt the current downward ‘spiral’ of debility. To this end, the proposed wearable technology solution provides real-time navigation assistance in one’s immediate three-dimensional environment, restoring virtual environmental perception. More specifically, we aim to 1) design a wearable platform that integrates multi-sensor fusion techniques to effectively combine information from the newly embedded LiDAR, infrared, and ultrasound sensor systems (hardware) within the proposed smart service system; 2) develop algorithms that perform real-time scene understanding; and 3) support cyber-human interaction in low-visibility environments through an assistive service operating in ‘Default’ mode or ‘User-selective’ mode to meet different navigation requirements.

“SMART” FEATURES OF THE SOLUTION

Our 3D Smart Assistive Navigation System, named 3D Navigation System for people in low-visibility environments (3D Navi-LV), has a discreet, compact form factor, a wide field of view, an impressive range, an intuitive tactile human-machine interface (haptics), and a binaural bone-conduction auditory overlay for special-purpose auditory interaction (leaving normal audition intact and free of interference). The system is targeted at a low price point and is housed in a unique four-component, lightweight, hands-free external vest / internal belt / cellphone / headset assembly, providing a stable platform for robust mapping. In particular, the environmental map is re-displayed to the end user through the belt via the haptic interface in a spatiotopically preserved, intuitive, body-centered fashion. In other words, obstacles on the user’s immediate left are mapped, processed, and re-displayed through vibrating actuators, leveraging state-of-the-art soft robotics technology, on the left aspect of the belt (a belt centered at the midline over the belly button). Shorter hazards activate fewer actuators within a column of actuators on the belt, closer obstacles are communicated through a higher vibration frequency, and steadily increasing vibration gives a sense of tactile looming or ‘approach.’
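As a concrete illustration, the sketch below shows one way the body-centered haptic re-display could be computed from detected obstacles. The belt geometry (eight azimuthal columns, four actuators per column), the 4 m range cutoff, and the 40-250 Hz frequency band are illustrative assumptions rather than final design parameters.

```python
import numpy as np

# Illustrative belt geometry: 8 columns around the torso, 4 actuators per column.
N_COLUMNS = 8          # azimuthal sectors, column 0 = far left, column 7 = far right
N_ROWS = 4             # vertical actuators per column (bottom to top)
MAX_RANGE_M = 4.0      # obstacles farther than this are ignored
MIN_FREQ_HZ, MAX_FREQ_HZ = 40.0, 250.0  # assumed vibrotactile frequency band

def obstacle_to_actuation(azimuth_sector, obstacle_height_m, distance_m,
                          max_height_m=2.0):
    """Map one detected obstacle to (column, rows_on, frequency_hz).

    Shorter hazards drive fewer actuators in the column; closer obstacles
    are rendered at a higher vibration frequency (tactile 'looming')."""
    if distance_m > MAX_RANGE_M:
        return azimuth_sector, 0, 0.0
    # Height -> number of actuators lit in this column (at least one).
    rows_on = max(1, int(np.ceil(N_ROWS * min(obstacle_height_m, max_height_m)
                                 / max_height_m)))
    # Proximity -> vibration frequency, increasing as the obstacle approaches.
    proximity = 1.0 - distance_m / MAX_RANGE_M          # 0 (far) .. 1 (touching)
    freq = MIN_FREQ_HZ + proximity * (MAX_FREQ_HZ - MIN_FREQ_HZ)
    return azimuth_sector, rows_on, freq

# Example: a 0.5 m-tall obstacle 1.2 m away on the user's immediate left.
print(obstacle_to_actuation(azimuth_sector=0, obstacle_height_m=0.5, distance_m=1.2))
```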

OVERALL DESIGN of 3D Smart Assistive Navigation System

(Figure: overall pipeline of the 3D Smart Assistive Navigation System)

RESEARCH OBJECTIVES

(Figure: overview of the research objectives)

OBJECTIVE 1: Design the optimal 3D Smart Assistive Navigation System

Task 1 - Integrate additional sensor modalities.

Our first task will involve integration of the LiDAR and infrared ranging hardware into the current (ultrasound-only) wearable device to enable environmental awareness via all three sensor modalities. The new hardware will be integrated into the center of the vest, across the center-line zipper, and positioned low enough that the vest remains functional as a pull-over. Once the physical hardware is integrated into the vest, we will begin testing different processing algorithms and architectures on the new platform (Objectives 2 and 3). The outputs of the new sensor modalities will be combined via sensor fusion techniques to yield a single depth map of the external environment. This depth map will be thresholded and passed to the vibrotactile controller, which in turn conveys information to the user about potential environmental hazards, acting as a warning system through gentle touch, i.e., virtual whiskers giving a rough spatial impression of pertinent obstacles in one’s immediate environment.
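A minimal sketch of the fusion and thresholding step is given below. It assumes each sensor's measurements have already been resampled onto a common pixel grid with per-pixel confidence values; the confidence-weighted averaging and the 3 m hazard threshold are placeholder choices, not the final fusion algorithm.

```python
import numpy as np

def fuse_depth_maps(depth_maps, confidences, eps=1e-6):
    """Confidence-weighted fusion of per-sensor depth maps (H x W arrays, metres).

    depth_maps/confidences: lists of arrays already resampled onto a common
    grid; NaN marks pixels a sensor could not measure."""
    depth = np.stack(depth_maps)               # (S, H, W)
    conf = np.stack(confidences)               # (S, H, W)
    conf = np.where(np.isnan(depth), 0.0, conf)
    depth = np.nan_to_num(depth, nan=0.0)
    fused = (conf * depth).sum(0) / (conf.sum(0) + eps)
    fused[conf.sum(0) < eps] = np.nan          # no sensor saw this pixel
    return fused

def threshold_hazards(fused_depth, near_m=3.0):
    """Binary hazard mask passed on to the vibrotactile controller."""
    return np.nan_to_num(fused_depth, nan=np.inf) < near_m

# Toy example with three 4x4 'sensor' depth maps (LiDAR, infrared, ultrasound).
maps = [np.full((4, 4), d) for d in (2.0, 2.2, np.nan)]
confs = [np.full((4, 4), c) for c in (0.9, 0.6, 0.3)]
print(threshold_hazards(fuse_depth_maps(maps, confs)))
```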

Task 2 - Integrate auditory output channel.

The sensory modalities of the end user (e.g., tactile, auditory) are used to convey information via feedback elements. The current platform uses vibrotactile elements to indicate the locations and heights of environmental obstacles. For a human user, this sensory input is interpreted quickly but is imprecise relative to a spoken description of the environment. However, attention creates a bottleneck in the processing of sensory inputs, which is particularly true of a spoken auditory channel. Thus, whereas tactile information is imprecise but processed quickly and in parallel, human processing of spoken auditory information is slow but precise. It is therefore easy to precisely convey that there is an upcoming step, staircase, or drop-off in the user’s path with a word or two (e.g., ‘potential hazard!’ or ‘barrier alert!’), but processing-time constraints dictate that auditory description of further scene details (e.g., ‘desk to the left, person approaching at 3 o’clock, step approaching in 3 seconds’) would be impractical and dangerous. This danger comes not just from the time delays inherent in transmitting and processing auditory information, but from the fact that attending to such a spoken description would make the user less able to detect natural auditory cues from the environment that might signal danger (e.g., a fast-moving vehicle approaching from a sensor blind spot while crossing the street). Our second task is therefore to integrate auditory feedback in a way that complements the existing vibrotactile feedback without creating undue distraction. Once integrated, the effect of auditory feedback on a subject’s ability to successfully attend to vibrotactile feedback will be assessed, and the level of detail in auditory descriptions will be adjusted to minimize interference.
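The sketch below illustrates one possible gating policy for the spoken channel: only short, high-priority phrases are voiced, and a minimum silence interval is enforced so the user's natural hearing remains largely unoccupied. The priority labels, threshold, and 3-second gap are hypothetical values to be tuned in the user studies.

```python
import time

# Hypothetical alert priorities; only short, high-priority phrases are spoken,
# and a minimum gap between utterances keeps the auditory channel mostly free.
PRIORITY = {"hazard": 2, "barrier": 1, "scene_detail": 0}
MIN_GAP_S = 3.0          # assumed minimum silence between spoken alerts
SPOKEN_LEVEL = 1         # speak only alerts at this priority or above

class AuditoryGate:
    """Decide whether an event is spoken or left to the vibrotactile channel."""
    def __init__(self):
        self._last_spoken = -float("inf")

    def maybe_speak(self, event_type, phrase, now=None):
        now = time.monotonic() if now is None else now
        if PRIORITY.get(event_type, 0) < SPOKEN_LEVEL:
            return None                      # low priority: haptics only
        if now - self._last_spoken < MIN_GAP_S:
            return None                      # too soon: avoid attentional overload
        self._last_spoken = now
        return phrase                        # hand off to the bone-conduction headset

gate = AuditoryGate()
print(gate.maybe_speak("hazard", "Barrier alert!"))            # spoken
print(gate.maybe_speak("scene_detail", "Desk to the left."))   # suppressed
```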

(Figure: wearable platform)

OBJECTIVE 2: Design deep neural networks for 3D scene understanding

Task 1 - Real-time 3D dynamic scene reconstruction

The reconstruction pipeline, as shown below, contains three main components: 1) pre-process the data captured from the multiple sensors; 2) estimate the current six degree-of-freedom (6DoF) pose of the sensors and detect dynamic objects; 3) use the estimated pose to convert the depth samples into a unified coordinate space and fuse them into a global 3D scene model.
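As a simple illustration of step 3, the sketch below applies an estimated 6DoF pose (expressed as a 4x4 homogeneous transform) to one frame of depth samples and appends them to a global scene model, here represented as a plain point list rather than the fused volumetric model we will ultimately use.

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def fuse_frame(global_points, frame_points, T_world_sensor):
    """Map one frame of depth samples (N x 3, sensor coordinates) into the
    world frame using the estimated 6DoF pose and append them to the global
    scene model (here simply a growing point list)."""
    homog = np.hstack([frame_points, np.ones((len(frame_points), 1))])
    world = (T_world_sensor @ homog.T).T[:, :3]
    return np.vstack([global_points, world]) if len(global_points) else world

# Toy example: identity rotation, sensor mounted 1.5 m above the world origin.
T = pose_matrix(np.eye(3), np.array([0.0, 0.0, 1.5]))
scene = fuse_frame(np.empty((0, 3)), np.array([[1.0, 0.0, 0.0]]), T)
print(scene)   # -> [[1.  0.  1.5]]
```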

Task 2 - Design neural networks for scene parsing

Semantic segmentation, a pixel-level classification process that partitions an image into semantically meaningful regions with pre-defined classes, has attracted significant research interest with applications in various fields. In the proposed project, accurate semantic segmentation of a scene image is of great importance in identifying the potential hazards that cause safety issues for individuals in low-visibility environments. Recent advances in deep learning have led to a significant body of work on semantic segmentation, such as Fully Convolutional Network (FCN) and SegNet based architectures. CRF-based approaches have been developed to improve segmentation performance by connecting each pixel’s labels in previous frames to the next frame, incorporating prior context into the network.
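For illustration, the PyTorch sketch below shows a minimal encoder-decoder network in the spirit of FCN/SegNet that produces per-pixel class logits; the layer widths and the eight-class output are placeholder choices, not the architecture we will ultimately train.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder for per-pixel classification (illustrative only)."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))       # (B, num_classes, H, W) logits

# One forward pass on a dummy RGB image; argmax gives the per-pixel class map.
logits = TinySegNet()(torch.randn(1, 3, 128, 128))
print(logits.argmax(dim=1).shape)                  # torch.Size([1, 128, 128])
```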

Task 3 - Design 3D object detection networks

Accurate detection of objects in the 3D environment is of central importance in our proposed 3D Navi-LV navigation system. The primary sensor data acquired by our proposed navigation system are LiDAR point clouds, which provide reliable depth information about objects in the immediate surrounding 3D environment. The key development efforts include novel real-time 3D object detection frameworks for indoor and outdoor navigation, innovative deep-learning-based algorithms for 3D feature representation, and a 3D user-scene interaction interface.
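A common first step for such detection networks is to convert the raw point cloud into a regular representation. The sketch below voxelizes a LiDAR point cloud into a binary occupancy grid; the voxel size and spatial ranges are illustrative assumptions.

```python
import numpy as np

def voxelize(points, voxel_size=0.2,
             x_range=(-20, 20), y_range=(-20, 20), z_range=(-2, 2)):
    """Quantize a LiDAR point cloud (N x 3) into an occupancy voxel grid,
    a common first step for real-time 3D detection networks."""
    mins = np.array([x_range[0], y_range[0], z_range[0]])
    maxs = np.array([x_range[1], y_range[1], z_range[1]])
    keep = np.all((points >= mins) & (points < maxs), axis=1)   # crop to range
    idx = ((points[keep] - mins) / voxel_size).astype(int)      # voxel indices
    dims = np.ceil((maxs - mins) / voxel_size).astype(int)
    grid = np.zeros(dims, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1                   # mark occupancy
    return grid

# Toy point cloud: 1000 random points roughly at ground level.
cloud = np.random.uniform(-10, 10, size=(1000, 3)) * np.array([1, 1, 0.1])
print(voxelize(cloud).shape)    # -> (200, 200, 20)
```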

OBJECTIVE 3: Design cyber-human interaction studies via 3D Smart Assistive Navigation System

Task 1 - Cyber-Human Interaction with Default Mode

Obstacles and hazards prevent people from moving safely and freely in a low-visibility environment. Therefore, the Smart Assistive System will actively predict undesired effects. That is, given the “actions” and “objects”, the system predicts the outcome of the actions and triggers the corresponding alert to the user. This will allow individuals to navigate safely and unaided through complex environments, negotiating obstacles and hazards, and will greatly increase mobility and functional independence.
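A minimal, rule-based sketch of this default-mode logic is shown below: given the user's current action and a detected object, a lookup predicts the undesired outcome and selects an alert channel. The rule table, labels, and 2 m alert radius are purely illustrative; the deployed system will rely on learned scene understanding rather than a fixed table.

```python
# Illustrative (action, object) -> (predicted outcome, alert channel) rules.
RULES = {
    ("walking_forward", "fallen_column"): ("collision", "haptic+audio"),
    ("walking_forward", "stair_down"):    ("fall",      "audio"),
    ("reaching",        "hot_surface"):   ("burn",      "haptic"),
}

def default_mode_alert(action, detected_object, distance_m, alert_radius_m=2.0):
    """Return (predicted_outcome, channel), or None when no alert is needed."""
    outcome = RULES.get((action, detected_object))
    if outcome is None or distance_m > alert_radius_m:
        return None
    return outcome

print(default_mode_alert("walking_forward", "stair_down", distance_m=1.4))
# -> ('fall', 'audio')
```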

(Figure: default-mode interaction)

In this functional mode, which addresses safety and health issues, the platform automatically (by default) sends alerts to the user via the haptic and/or audio interface. The haptic and audio interfaces display ‘processed’ environmental information to the end user in real time via an intuitive, ergonomic, and personalized vibrotactile re-display along the torso (a belt-based system retrofitted into a lumbar back support) and audio feedback delivered through bone-conduction transducers in a paired headset. The sensory modalities used to convey information from the platform to the user have a profound effect on usability.

Task 2 - Cyber-Human Interaction with User-Selective Mode

The 3D Smart Assistive Navigation System enables individuals in low-visibility environments to virtually regain the perception of affordances in their surrounding environment. The ‘user-selective’ mode essentially enables the user to actively explore the environment through instructional operations predefined on the 3D Smart Assistive Navigation System. For example, we propose a ‘Point to Tell’ unit, allowing users to explore information about the region they are pointing to.
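The sketch below illustrates one way a ‘Point to Tell’ query could be resolved: detected objects whose centroids fall within a cone around the user's pointing ray are returned, nearest first. The 15-degree cone and the detection format are assumptions for illustration only.

```python
import numpy as np

def point_to_tell(pointing_origin, pointing_dir, detections, cone_deg=15.0):
    """Return labels of detected objects whose centroids lie within a cone
    around the user's pointing ray, sorted by distance.

    detections: list of (label, centroid_xyz) pairs from the 3D detection stage."""
    d = np.asarray(pointing_dir, float)
    d /= np.linalg.norm(d)
    hits = []
    for label, centroid in detections:
        v = np.asarray(centroid, float) - np.asarray(pointing_origin, float)
        cosang = v @ d / (np.linalg.norm(v) + 1e-9)
        if cosang > np.cos(np.radians(cone_deg)):
            hits.append((label, float(np.linalg.norm(v))))
    return sorted(hits, key=lambda h: h[1])      # nearest object first

# Toy query: user points straight ahead; the door lies in the cone, the desk does not.
dets = [("door", (3.0, 0.2, 0.0)), ("desk", (1.0, 2.0, 0.0))]
print(point_to_tell((0, 0, 0), (1, 0, 0), dets))  # -> [('door', ...)]
```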

(Figure: ‘Point to Tell’ interaction)