Following the recent NVISO Neuro SDK milestone release, which added two new high-performance AI Apps from its Human Behaviour AI App catalogue, Gaze and Action Unit Detection, NVISO is pleased to announce that the SDK can be seen running on the BrainChip Akida platform at the Socionext CES 2023 stand, located in the Vehicle Tech and Advanced Mobility Zone of the Las Vegas Convention Center, North Hall, Booth 10654.
Lausanne, Switzerland – 2nd January, 2023 – nViso SA (NVISO), the leading Human Behavioural Analytics AI company, is pleased that its Neuro SDK will be demonstrated running on the BrainChip Akida platform at the Socionext stand at CES 2023. Following the porting of additional AI Apps from its catalogue, NVISO has further extended the range of Human Behavioural AI Apps it supports on the BrainChip Akida event-based, fully digital neuromorphic processing platform. These additions include Action Units, Body Pose and Gesture Recognition, on top of the Head Pose, Facial Landmark, Gaze and Emotion AI Apps previously announced with the launch of the Evaluation Kit (EVK) version. This increased capability supports the wider deployment of NVISO Human Behavioural Analytics AI software solutions, which can further exploit the performance of BrainChip neuromorphic AI processing IP as it is integrated into the next generation of SoC devices. Target applications include robotics, automotive, telecommunications, infotainment and gaming.
“BrainChip’s event-based Akida platform is accelerating today’s traditional networks and simultaneously enabling future trends in AI software applications,” said Rob Telson, VP Ecosystems, BrainChip. “NVISO is a valued partner in BrainChip’s growing ecosystem, and their leadership in driving extremely efficient software solutions gives a taste of what compelling applications are possible at the edge on a minimal energy budget.”

A demonstration of the NVISO Neuro SDK can be seen running on the BrainChip Akida neuromorphic processing platform at the Socionext booth at CES 2023.
The SDK release supports the latest advancements in the analysis of complex emotions.
This latest release of the SDK enables solution developers to deploy a wide selection of NVISO’s real-time, deep learning-based AI Apps across a broader range of use case scenarios, including face detection, gaze, head pose recognition, facial analysis, emotion recognition, object detection, gesture recognition and body pose analysis, along with the recently announced state-of-the-art graph-based facial analysis applicable to the analysis of complex emotions.
Implementation of complex emotion analysis using state-of-the-art graph-based facial analysis. As one of the most important affective signals, facial affect analysis (FAA) is essential for developing human-computer interaction systems. Early methods focused on extracting appearance and geometry features associated with human affects while ignoring the latent semantic information among individual facial changes, leading to limited performance and generalization. Recent work establishes a graph-based representation to model these semantic relationships and develops frameworks to leverage them for various FAA tasks.
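To make the graph-based approach concrete, the following is a minimal, hypothetical sketch (not NVISO’s implementation) of the core idea: facial landmarks become graph nodes, spatial proximity defines edges, and a graph-convolution step propagates features between related facial regions.

```python
# Illustrative sketch of graph-based facial representation: landmarks are
# nodes, k-nearest-neighbour proximity defines edges, and one graph
# convolution mixes features between connected landmarks. All names and
# dimensions here are hypothetical, not NVISO's API.
import numpy as np

def build_landmark_graph(landmarks: np.ndarray, k: int = 3) -> np.ndarray:
    """Adjacency matrix linking each landmark to its k nearest neighbours."""
    n = len(landmarks)
    dists = np.linalg.norm(landmarks[:, None] - landmarks[None, :], axis=-1)
    adj = np.zeros((n, n))
    for i in range(n):
        neighbours = np.argsort(dists[i])[1:k + 1]  # skip self (distance 0)
        adj[i, neighbours] = adj[neighbours, i] = 1.0
    return adj

def graph_conv(features: np.ndarray, adj: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One symmetric-normalised graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    a_hat = adj + np.eye(len(adj))                     # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(1)))  # degree normalisation
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0)

# Toy example: 68 landmarks, with (x, y) coordinates as initial node features.
rng = np.random.default_rng(0)
landmarks = rng.uniform(size=(68, 2))
adj = build_landmark_graph(landmarks)
hidden = graph_conv(landmarks, adj, rng.normal(size=(2, 16)))
print(hidden.shape)  # (68, 16): per-landmark embeddings for downstream AU/emotion heads
```

In a full FAA pipeline, stacked layers of this kind would feed classification heads for Action Units or emotion categories, letting the model exploit relationships between facial regions rather than treating each feature in isolation.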
Benchmark performance data for the AI Apps running on the BrainChip neuromorphic AI processor

NVISO delivers solutions for a wide range of use cases in the areas of Smart Living, Smart Mobility and Smart Health. It does so through a range of AI Apps providing visual observation, perception and semantic reasoning capabilities, the results of which can be used to identify issues, support decision-making processes and enable autonomous “human-like” interactions. NVISO AI Apps are specifically designed for the resource-constrained, low-power, low-memory and low-cost hardware platforms deployed at the extreme edge, as demonstrated by ultra-compact models such as the Emotion Recognition AI App, which requires less than 100KB of memory. These AI Apps analyse core signals of human behaviour, such as body movements, facial expressions, emotions, identity, head pose, gaze, eye state, gestures and activities, and identify objects with which users interact. Furthermore, NVISO AI Apps can be easily configured to suit a camera system for optimal performance in terms of distance and camera angle, and thanks to NVISO’s large-scale proprietary human behaviour databases they are robust to the imaging conditions often found in real-world deployments. Unlike cloud-based solutions, NVISO’s solutions do not require information to be sent off-device for processing elsewhere, so user privacy and safety can be protected.
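To give a sense of scale for the sub-100KB figure, here is a minimal sketch assuming a generic compact Keras CNN, not NVISO’s actual architecture, showing how few parameters such a footprint implies once weights are quantised to 8 bits.

```python
# Hypothetical compact CNN for emotion recognition, illustrating how a
# model can fit a sub-100KB memory budget at 8 bits per weight. This is
# NOT NVISO's architecture, just a size sanity check.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),                        # small greyscale face crop
    tf.keras.layers.SeparableConv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.SeparableConv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.SeparableConv2D(64, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),           # 7 basic emotion classes
])

params = model.count_params()
print(f"{params:,} parameters ≈ {params / 1024:.1f} KB at 8 bits per weight")
```

A toy network like this has only a few thousand parameters, so even allowing for activations and runtime overhead it sits comfortably inside a 100KB budget.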

NVISO Neuro SDK is designed for a range of automotive driver monitoring and interior sensing use cases.
“This announcement of the delivery of a full Neuromorphic AI SDK with an increased range of AI Apps from our catalogue supports our objectives in providing AI solutions on the deep edge,” said Tim Llewellynn, CEO of NVISO. “Deployment of NVISO AI Apps together with embedded neuromorphic processing can provide significant performance improvements for target use cases, in terms of both processing speed and power consumption, delivering on the promise of wide-scale deployment of human-friendly technologies for applications ranging from consumer products through medical devices to autonomous automotive systems. The addition of the latest graph-based facial analysis capability opens up the use of up to 58 Action Units, both in emotion analytics and in other, customer-developed analyses of human behaviours and states.”

NVISO Neuro SDK now supports up to 58 Facial Action Units for the detection of complex emotions.
About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition and processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables edge learning local to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. The Akida Neural processor IP, which can be integrated into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like TensorFlow/Keras. By enabling effective edge compute that is universally deployable across real-world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.
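For developers, the Keras-to-Akida flow mentioned above typically looks like the sketch below. It assumes BrainChip’s MetaTF cnn2snn package; the function names and quantisation parameters reflect its public documentation at the time and should be verified against the current MetaTF release.

```python
# Sketch of the standard Keras-to-Akida workflow via BrainChip's MetaTF
# toolchain (cnn2snn). API names are assumptions based on its public docs
# at the time of writing, not guarantees.
from tensorflow.keras.models import load_model
from cnn2snn import quantize, convert

keras_model = load_model("emotion_model.h5")        # any compatible Keras CNN

# Quantise weights/activations to the low bit-widths Akida executes natively.
quantized = quantize(keras_model,
                     input_weight_quantization=8,
                     weight_quantization=4,
                     activ_quantization=4)

akida_model = convert(quantized)                    # map to the Akida event-based runtime
akida_model.summary()                               # inspect the resulting hardware mapping
```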
Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006