Human Machine Interfaces

NVISO's HPC SDK provides a robust real-time human behaviour AI API and NVISO Neuro Models™ that are interoperable with and optimised for neuromorphic computing, together with flexible sensor integration and placement, delivering faster development cycles and time-to-value for software developers and integrators. It enables solutions that can sense, comprehend, and act upon human behaviour, including emotion recognition, gaze detection, distraction detection, drowsiness detection, gesture recognition, 3D face tracking, face analysis, facial recognition, object detection, and human pose estimation. Designed for real-world environments using edge computing, it uniquely targets deep learning for embedded systems.

NVISO delivers real-time perception and observation of people and objects in contextual situations, combined with reasoning about the semantics of human behaviour based on trusted scientific research. The NVISO HPC SDK is supported through a long-term maintenance agreement for multi-party implementation of tools for AI systems development and can be used with large-scale neuromorphic computing systems. When paired with neuromorphic chips, the NVISO HPC SDK can be used to build gaze detection systems, distraction and drowsiness detection systems, facial emotion recognition software, and a range of other neuromorphic computing applications where understanding human behaviour in real-time is mission critical.

Accurate and Robust

CNNs scale to learn from billions of examples, resulting in an extraordinary capacity to learn highly complex behaviors and thousands of categories. NVISO can train powerful, highly accurate, and robust models for use in the toughest environments thanks to its proprietary datasets captured in real-world environments.

Easy to Integrate

Where AI is fragmented and difficult to navigate at the edge, NVISO AI Apps are simple to use, develop, and deploy, with easy software portability across a variety of hardware and architectures. This reduces the high barriers to entry into the edge AI space through cost-effective standardized AI Apps that are future proof and work optimally at the extreme edge.

Ethical and Trustworthy

AI systems need to be resilient and secure. They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable, and reproducible. Additionally, unfair bias must be avoided, as it could have multiple negative implications. NVISO adopts Trustworthy AI frameworks and state-of-the-art policies and practices to ensure its AI Apps are "fit-for-purpose".

NVISO AI App Catalogue
Detect and Analyse


Supporting the interaction of consumer devices with their owners in their daily lives. Detect presence and identity through facial recognition software. Anticipate and react to owner needs by combining with observations from gaze detection software. Attend and adjust appropriately to mood through observation by facial emotion recognition software.


With the rise of autonomous vehicles, smart video surveillance, facial detection, and various people-counting applications, fast and accurate object detection systems are rising in demand. With the advent of deep learning in object detection and recognition, systems can not only recognise and classify every object in an image but also localise each one.
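To make "classify and localise" concrete, here is a minimal, generic sketch (not NVISO's API; the detection dictionary layout is an assumption) of how a detector's output is typically represented: a label, a confidence score, and a bounding box, with overlap measured by intersection-over-union and duplicates removed by non-maximum suppression.

```python
# Minimal sketch of generic object-detection post-processing.
# The detection format here is an illustrative assumption, not NVISO's API.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def non_max_suppression(detections, iou_threshold=0.5):
    """Keep the highest-scoring box among heavily overlapping detections."""
    kept = []
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        if all(iou(det["box"], k["box"]) <= iou_threshold for k in kept):
            kept.append(det)
    return kept
```

This is the standard representation most detection pipelines share, whatever the underlying network.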


Facial emotion recognition software decodes facial behavior into seven primary emotional states, along with their respective intensity and probability of occurrence. Reading facial micro expressions allows emotion analytics to infer subtle changes in emotional expression over time and can be used to detect changes in mood and understand instantaneous reactions.
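As an illustration of the kind of output described above (the field names and scoring scheme here are assumptions for the sketch, not NVISO's actual API), per-emotion scores can be normalised into probabilities and the dominant emotional state extracted:

```python
# Hypothetical sketch: normalise per-emotion scores into probabilities
# and pick the dominant state. Names and layout are illustrative only.
import math

PRIMARY_EMOTIONS = ("anger", "disgust", "fear", "happiness",
                    "sadness", "surprise", "neutral")

def to_probabilities(raw_scores):
    """Softmax over raw per-emotion scores -> probabilities summing to 1."""
    exps = {e: math.exp(s) for e, s in raw_scores.items()}
    total = sum(exps.values())
    return {e: v / total for e, v in exps.items()}

def dominant_emotion(raw_scores):
    """Return (label, probability) of the most likely emotional state."""
    probs = to_probabilities(raw_scores)
    label = max(probs, key=probs.get)
    return label, probs[label]
```

Tracking the dominant label and its probability frame-by-frame is what allows subtle changes in emotional expression to be followed over time.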


Remote vital sign monitoring using sensors such as time-of-flight cameras, mm-wave radar, and RGB cameras allows heart rate, breathing rate, fatigue levels, and advanced emotional states (anxiety, stress, and pain) to be measured remotely, with no physical contact with the sensor required. This information can be gathered to assist in decision making.
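A common way remote heart-rate estimation works, sketched purely for illustration (this is a textbook frequency-domain approach, not NVISO's algorithm), is to find the dominant frequency of the remotely captured pulse signal within the physiologically plausible band:

```python
# Illustrative only (not NVISO's method): estimate heart rate by locating
# the dominant frequency of a pulse signal in the ~42-180 bpm band.
import math

def estimate_heart_rate_bpm(signal, sample_rate_hz):
    n = len(signal)
    mean = sum(signal) / n
    centred = [s - mean for s in signal]
    best_freq, best_power = 0.0, 0.0
    # Naive discrete Fourier transform, restricted to the candidate band.
    for k in range(1, n // 2):
        freq = k * sample_rate_hz / n
        if not 0.7 <= freq <= 3.0:      # 0.7-3.0 Hz = 42-180 bpm
            continue
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centred))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centred))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = freq, power
    return best_freq * 60.0
```

Production systems would use an FFT and heavier signal conditioning, but the principle is the same: the heart rate appears as a spectral peak in the remotely sensed signal.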


Gaze detection software using deep learning performs real-time eye movement tracking, providing gaze direction as well as 3D and 2D coordinates of the eyes (pupils). Gaze detection systems are calibration-free and provide the basis of more complex eye tracking systems which analyse human processing of visual information, measuring attention, interest, and arousal.
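The geometry behind turning a gaze direction and a 3D eye position into a point of regard can be sketched as a simple ray-plane intersection (a generic construction, with coordinate conventions assumed for illustration; not NVISO's API):

```python
# Geometry sketch: intersect the gaze ray with a screen plane at z = 0
# to obtain the 2D point of regard. Coordinate frame is an assumption.

def point_of_regard(eye_xyz, gaze_dir):
    """Project a 3D gaze ray onto the z = 0 screen plane."""
    ex, ey, ez = eye_xyz
    dx, dy, dz = gaze_dir
    if dz == 0:
        raise ValueError("gaze is parallel to the screen plane")
    t = -ez / dz                      # ray parameter where z reaches 0
    if t < 0:
        raise ValueError("gaze points away from the screen")
    return (ex + t * dx, ey + t * dy)
```

Given per-frame gaze direction and pupil coordinates like those described above, this kind of projection is what lets an application know where on a display the user is looking.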


Artificial intelligence and human emotion recognition software allows non-verbal human communication to be detected and analysed by a computer. By reading emotional “expressions” from the face, tone of voice, and body and hand gestures, more complex and advanced emotions can be detected. Custom emotion recognition development services and software solutions allow tailoring to specific use cases.


Driver attention detection systems are designed to warn drivers who are fatigued or at risk of becoming drowsy. Cars with drowsiness detection and occupant monitoring systems can monitor eye state, blink rates, head gestures, body movements, and signs of fatigue such as yawning to provide driver distraction and drowsiness detection.
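One widely used fatigue metric built from per-frame eye state is PERCLOS, the fraction of time the eyes are closed over a rolling window. The sketch below shows the idea only; the window length and alert threshold are illustrative assumptions, not NVISO's calibrated values:

```python
# PERCLOS sketch (thresholds are illustrative assumptions, not NVISO's):
# the fraction of frames in a rolling window where the eyes are closed.
# Sustained high values are a classic indicator of drowsiness.
from collections import deque

class PerclosMonitor:
    def __init__(self, window_frames=900, alert_threshold=0.3):
        self.window = deque(maxlen=window_frames)   # e.g. 30 s at 30 fps
        self.alert_threshold = alert_threshold

    def update(self, eye_closed: bool) -> bool:
        """Feed one frame's eye state; return True if an alert should fire."""
        self.window.append(1 if eye_closed else 0)
        return self.perclos() >= self.alert_threshold

    def perclos(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 0.0
```

A real DMS fuses several such signals (blink rate, head pose, yawning) before alerting, but each one reduces to this kind of windowed per-frame statistic.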


Human pose estimation provides multi-person 2D pose estimation for human body pose and shape estimation. Correspondingly, 3D pose estimation can be performed using reference 3D human body models, and by combining detection and tracking for human pose estimation in videos, advanced interactive human machine interfaces can be enabled.
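One simple way detection and tracking combine in video, sketched here as an assumption for illustration (not NVISO's method), is to smooth per-frame 2D keypoints with an exponential moving average so the resulting track is stable enough to drive an interactive interface:

```python
# Illustrative sketch: temporally smooth per-frame 2D keypoints with an
# exponential moving average. Not NVISO's tracker; shown for the concept.

def smooth_pose(prev_keypoints, new_keypoints, alpha=0.4):
    """Blend new detections into the running track; alpha weights the new frame."""
    if prev_keypoints is None:
        return list(new_keypoints)    # first frame: adopt detections as-is
    return [
        (alpha * nx + (1 - alpha) * px, alpha * ny + (1 - alpha) * py)
        for (px, py), (nx, ny) in zip(prev_keypoints, new_keypoints)
    ]
```

Lower alpha gives smoother but laggier tracks; interactive interfaces tune this trade-off per use case.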


Deep learning for hand gesture recognition on skeletal data provides a fast, robust, and accurate method to detect hand gestures from a variety of camera sensors. Hand gesture recognition software then classifies both static and dynamic hand poses for interaction with autonomous systems for control and search tasks as well as emotional interactions.
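To give a feel for classification on skeletal data, here is a toy nearest-template classifier over per-finger extension angles. The templates, angle values, and feature choice are all hypothetical illustrations; NVISO's actual models are deep networks, not template matchers:

```python
# Toy sketch only: classify a static hand pose by comparing its joint-angle
# feature vector to stored templates. All values here are hypothetical.
import math

TEMPLATES = {
    "open_palm": [170, 175, 178, 176, 160],   # all fingers extended
    "fist":      [30, 25, 28, 27, 40],        # all fingers curled
    "point":     [175, 30, 28, 27, 40],       # index extended only
}

def classify_static_pose(finger_angles):
    """Return the template name with the smallest Euclidean distance."""
    return min(TEMPLATES,
               key=lambda name: math.dist(finger_angles, TEMPLATES[name]))
```

Dynamic gestures extend the same idea to sequences of such feature vectors over time.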


NVISO Neuro Models™ are purpose-built for a new class of ultra-efficient AI processors designed for ultra-low-power deep learning on edge devices. Supporting a wide range of heterogeneous computing platforms, from CPUs, GPUs, VPUs, and NPUs to neuromorphic computing, they reduce the high barriers to entry into the embedded AI space through cost-effective standardized AI Apps which work optimally on edge devices for a range of common human behaviour use cases (low power, on-device, without requiring an internet connection). NVISO Neuro Models™ use low- and mixed-precision activation and weight data types (1 to 8-bit) combined with state-of-the-art unstructured sparsity to reduce memory bandwidth and power consumption. Using proprietary compact network architectures, they can be fully sequential, suitable for ultra-low-power mixed-signal inference engines, and fully interoperable with neuromorphic processors as well as existing digital accelerators.
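The mechanics of low-precision weights and unstructured sparsity can be sketched generically as symmetric 8-bit quantization plus magnitude-based pruning of individual weights. This is a textbook illustration of the two techniques, not NVISO's proprietary scheme:

```python
# Generic illustration (not NVISO's scheme): symmetric int8 quantization
# and unstructured (per-weight) magnitude pruning of a weight tensor.

def quantize_int8(weights):
    """Map floats into [-127, 127] integers with a shared scale factor."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

def prune_unstructured(weights, sparsity=0.5):
    """Zero the smallest-magnitude weights until `sparsity` of them are zero."""
    n_zero = int(len(weights) * sparsity)
    cutoff = sorted(abs(w) for w in weights)[n_zero - 1] if n_zero else -1.0
    return [0.0 if abs(w) <= cutoff else w for w in weights]
```

Storing 8-bit (or narrower) integers instead of 32-bit floats cuts memory traffic roughly fourfold, and zeroed weights can be skipped entirely by sparsity-aware hardware, which is where the power savings come from.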


NVISO Neuro Models™ use proprietary datasets and modern machine learning to learn from billions of examples, resulting in an extraordinary capacity to learn highly complex behaviours and thousands of categories. Thanks to high-quality datasets and low-cost access to powerful computing resources, NVISO can train powerful and highly accurate deep learning models.


NVISO Neuro Models™ store their knowledge in a single network, making them easy to deploy in any environment and able to adapt to the available hardware resources. There is no need to store any additional data when new data is analysed. This means that NVISO Human Behaviour AI can run on inexpensive devices with no internet connectivity, providing response times in milliseconds, not seconds.


NVISO Neuro Models™ are scalable across heterogeneous AI hardware processors, interoperable with and optimised for CPUs, GPUs, DSPs, NPUs, and the latest neuromorphic processors using in-memory computing, analog processing, and spiking neural networks. NVISO Neuro Models™ maximise hardware performance while providing seamless cross-platform support on any device.

Run Smarter
Optimised For Cost/Power/Performance

Microcontroller Unit (MCU)

AI functionality is implemented in low-cost MCUs via inference engines specifically targeting MCU embedded design requirements, configured for low-power operation with continuous monitoring to discover trigger events in sound, images, vibration, and more. In addition, the availability of AI-dedicated co-processors is allowing MCU suppliers to accelerate the deployment of machine learning functions.

Central Processing Unit (CPU)

Once a trigger event is detected, a high-performance subsystem such as an ARM Cortex A-Class CPU processor is engaged to examine and classify the event and determine the correct action. With its broad adoption, the ARM A-class processor powers some of the largest edge device categories in the world.
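The MCU-then-CPU flow described in the two sections above can be sketched as a two-stage cascade: a cheap always-on trigger stage screens every frame, and the expensive classifier runs only when the trigger fires. The component names and the energy-based trigger below are illustrative assumptions, not a specific product's pipeline:

```python
# Two-stage cascade sketch (names and trigger heuristic are illustrative):
# a cheap always-on stage screens frames; the costly classifier runs only
# on frames that fire the trigger, mirroring the MCU -> CPU hand-off.

def energy_trigger(frame, threshold=10.0):
    """Stand-in for an MCU-class detector: fire on high signal energy."""
    return sum(x * x for x in frame) / len(frame) > threshold

def run_pipeline(frames, classify, threshold=10.0):
    """Return (frame_index, label) pairs for frames that woke the classifier."""
    results = []
    for i, frame in enumerate(frames):
        if energy_trigger(frame, threshold):
            results.append((i, classify(frame)))   # CPU-class stage
    return results
```

The design point is power: the trigger stage must be cheap enough to run continuously, so the high-performance subsystem stays asleep for the vast majority of frames.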

Graphic Processing Unit (GPU)

In systems where high AI workloads must run in real-time and MCUs and CPUs do not have enough processing power, embedded low-power GPUs can be used. GPUs contain hundreds or thousands of highly parallel cores for high-speed graphics rendering. They deliver high-performance processing, and typically have a larger footprint and higher power consumption than CPUs.

New User Experiences 
Robust and Accurate

Mobility Experiences

Next generation mobility requires AI, from self-driving cars to new ways to engage customers. Build and deploy robust AI-powered interior monitoring systems. The NVISO Human Behaviour SDK includes building blocks and tools that accelerate in-cabin monitoring developments that require the increased perception and interaction features enabled by AI including gaze and eye state tracking, body tracking, and activity and gesture recognition.

Distraction and Drowsiness

Driver monitoring is expected to become a standard feature in new cars as a result of regulatory and rating-agency requirements such as the Euro NCAP 2025 Roadmap. DMS can detect distracted and drowsy drivers by accurately measuring eye and head position, attention, and fatigue. The DMS alerts the driver and integrated safety systems upon detection of a risk such as drowsiness or distraction before safety is compromised.

The Intelligent Cockpit

It is not only the driver who is the focus of attention. The camera can be positioned in such a way that all seats are in its field of vision. The system can detect the presence of any other occupant, and can thus deactivate the airbag if, for instance, a child safety seat is present. IMS enables in-vehicle systems to sense their occupants’ emotional states and gestures to provide personalized experiences in the transition to automated driving.

Designed for the Real-World
Integration Ready

Embedding Tools

Supporting platforms from prototyping (Intel) to production, spanning centralized computing (NVIDIA), multimedia computing (Qualcomm), and close-to-sensor computing (Arm A5x, A7x + NPU and neuromorphic accelerators).

Simulation Tools

Camera sensor type and location play a critical role in system performance. 3D simulation tools provide a fast and effective design and verification platform.

Machine Learning Tools

Data-driven software development requires automated data tools to enable full-cycle development of AI systems cost effectively.

Performance That Scales
Any Sensor, Any Placement

The interior of a vehicle is an unpredictable environment. Typical constraints range from environmental unpredictability while driving to drastic changes in ambient temperature. These factors drive the need for algorithms capable of handling tough environmental conditions, and the choice of camera placement is critical to the robust operation of AI systems. Another factor that adds to the system complexity is accommodating the cosmetic design of the vehicle.

Automotive designers constantly try to introduce new design concepts while also maximizing driver comfort features. These constraints often require the position and location of the camera to change. NVISO addresses these challenges by supporting flexible camera positioning anywhere between the A-pillar and the center stack, which is critical to large-scale adoption.

Case Study
Bonseyes AI Marketplace

NVISO is a founding member of the Bonseyes Community Association (BCA), a non-profit AI-focused association based in Switzerland. BCA was created and inspired by the H2020 Bonseyes AI Marketplace project to ensure that artificial intelligence benefits all of society through open, distributed, and decentralized platforms and technology. BCA’s mission is to ensure that artificial intelligence (in all forms, from narrow to general), which will ultimately create highly autonomous systems, benefits all of society – not just a select few.

It aims to directly build open technology platforms supporting distributed artificial intelligence marketplaces, empowering innovators, data scientists, and developers to build and trade value-creating, safe, and beneficial AI aligned with societal goals. Currently it is focused on Extreme Edge AI, supporting ground-breaking technologies such as ultra-low-power machine learning processors and brain-inspired neuromorphic chips. For more information see the Bonseyes AI Marketplace.

Within the scope of the Bonseyes project, NVISO showcased how future in-car experiences would be centered around the driver-vehicle relationship, where AI would enable the car to have continuous learning, with the end goal to develop an understanding with the occupants that is both meaningful and human in nature.

Easy To Integrate


Process captured data from a camera sensor in real-time on-device with our EVK. Quickly verify your use case using our 30-day Trial EVK License and understand whether existing NVISO AI Apps meet the performance requirements of your desired end application.


Fast-track your development with our x86 development platforms with APIs for software in-the-loop testing, evaluation, and creating demonstrators. Out-of-the-box software using our Developer SDK License allows you to get up and running in minutes not weeks.


Access the provided signals on-device or transmit them to other devices and then act on them to deliver innovative product features. Deploy AI-driven human machine interfaces by using our Production SDK License on production hardware.


Talk with an NVISO AI expert to learn more about the trial Evaluation Kit (EVK) for IoT devices.


Consumer Robots

Human–robot interaction plays a crucial role in the burgeoning market for intelligent personal-service and entertainment robots.

Automotive Interior Sensing

Next generation mobility requires AI, from self-driving cars to new ways to engage customers. Build and deploy robust AI-powered interior monitoring systems.

Gaming and Avatars

The gaming industry (computer, console or mobile) is about to make extensive use of the camera input to deliver entertainment value.