A deep-neural-network-based embedded machine vision solution that helps drones, toys, service robots, and other consumer electronics products interact with humans more efficiently and autonomously.
A powerful embedded machine vision system for understanding human behavior at every level.
Detect and analyze facial information including identity, gender and even emotion.
Control devices remotely and send commands with your hand gestures.
Long-distance body language detection and action recognition.
Autonomous human tracking, gesture-controlled selfies
Children's behavior understanding, interactive games
Human awareness, action recognition
User identity recognition, gesture commands
Remote gesture control, system automation
Interactive gesture recognition, games
The InspiRED Robot Brain makes robots smarter for education and entertainment. A deep-neural-network-based visual perception system and robot interaction capabilities are integrated into an extremely small form factor.
The InspiRED Rover Robot is an AI-enabled rover robot for education and entertainment. It is powered by the InspiRED Robot Brain.
The InspiRED Humanoid Robot is an AI-enabled child-size humanoid robot for education. It is powered by the InspiRED Robot Brain.
The InspiRED Mini Robot is an AI-enabled mini humanoid robot for education and entertainment. It is small enough to fit in your pocket. It is powered by the InspiRED Robot Brain.
Deep-learning-based machine vision system for embedded platforms (Linux, Android, and iOS) and standard cameras
Visual recognition powered by deep learning models
Advanced algorithms optimized for real-time applications
Works with standard cameras; no special sensors required
Processes everything locally on the embedded platform; no private data is transmitted to the cloud
We are a group of robotics and machine learning research scientists. The core team members have experience working closely with NSERC Canadian Field Robotics Network, Disney Research Pittsburgh, Japan National Institute of Informatics, Clearpath Robotics, SAP, Simon Fraser University Autonomy Lab and Vision and Media Lab.