There have been quite a few technical developments over these past few months, all aimed at improving our autonomous wheelchair. Here are some of the highlights:
Data Collection Box: Ready to Start Validation
The “fuel” at the heart of the Deep Learning algorithms we are developing for our autonomous wheelchair is data! To date, here at Blue Horizon AI, we have been using open data sets from self-driving car development to build the “Perception” module that drives the wheelchair. However, wheelchairs operate in less structured environments than cars, such as sidewalks, pathways, and indoor spaces. Moreover, a wheelchair drives more like a tank than a car. Thus we need additional and better data specific to the unique nature of actual wheelchair driving in order to better train the Perception/Auto function, with the end goal of improving the performance of our “auto-pilot” wheelchair function.
Toward this end, we have completed phase 2 of a Data Collection Box that mounts on the wheelchair and is powered by the wheelchair battery. The data collection function, built on low-cost hardware such as a Raspberry Pi 3, records raw colour video alongside the driver’s joystick inputs. A post-collection process then converts this raw data into a dataset suitable for further offline training of the artificial neural network that drives the wheelchair autonomously. Our data collection box is now undergoing intensive robustness testing to ensure that it can operate on a wheelchair for long periods of time and securely collect valuable, correct data under a variety of conditions.
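To give a flavour of what the post-collection step produces, here is a minimal Python sketch of a training-data index that pairs each saved colour frame with the joystick state recorded at the same moment. The names and fields (`Sample`, `log_samples`, the two joystick axes) are illustrative assumptions, not our actual implementation:

```python
import csv
from dataclasses import dataclass


@dataclass
class Sample:
    """One synchronized record: a saved frame plus the joystick state."""
    timestamp: float   # seconds since the start of the recording session
    frame_path: str    # where the colour frame was written on disk
    joy_x: float       # hypothetical lateral joystick deflection, -1.0 .. 1.0
    joy_y: float       # hypothetical forward/back deflection, -1.0 .. 1.0


def log_samples(samples, csv_path):
    """Write synchronized samples to a CSV index for offline training.

    The neural network can later be trained to predict (joy_x, joy_y)
    from the image at frame_path -- i.e. to imitate the human driver.
    """
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "frame_path", "joy_x", "joy_y"])
        for s in samples:
            writer.writerow([s.timestamp, s.frame_path, s.joy_x, s.joy_y])
```

A plain CSV index like this keeps the raw frames untouched on disk, so the same recording can be re-processed later as the training pipeline evolves.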
Nvidia Jetson TX2: AI Computing on the “Edge”
We have ported one of our prototype autonomous wheelchair hardware platforms to the Nvidia Jetson TX2 ( https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/ ). This platform gives us high-compute GPU capability in a portable, low-power, low-cost package, so we can run more advanced artificial neural network models with the goal of self-driving the wheelchair with better performance. It also adds compute capacity for developing and training artificial neural networks, alongside our in-house GPU compute platform. Preliminary testing has been completed indoors, and more extensive lab and outdoor testing is planned.
Middleware Software Improvements:
The middleware is the part of our system that “glues” the Perception/Auto intelligence function to the low-level hardware: it sends electrical signals to the wheelchair motor control system, and it manages the incoming video image stream from the camera along with other sensor information. The middleware is also responsible for managing the data collection function and for providing the user interface, which starts and stops both the data gathering and the autonomous driving functions and reports system status to the user.
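The “glue” role described above can be sketched as a tiny publish/subscribe bus that routes messages between modules. This is an illustrative Python sketch only, not our actual middleware; the topic names and the dummy motor command are made-up assumptions:

```python
from collections import defaultdict


class MessageBus:
    """Minimal publish/subscribe glue: modules exchange messages by topic
    name, so the perception code never touches the motor hardware directly."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to all callbacks subscribed to the topic."""
        for callback in self._subscribers[topic]:
            callback(message)


# Wiring sketch: camera frames flow to a (stubbed) perception step,
# whose output flows on to the motor interface as a drive command.
bus = MessageBus()
motor_commands = []

def perceive(frame):
    # Stand-in for the neural network: always command "drive straight".
    bus.publish("motor/command", {"left": 0.5, "right": 0.5})

bus.subscribe("camera/frame", perceive)
bus.subscribe("motor/command", motor_commands.append)
bus.publish("camera/frame", b"raw-frame-bytes")
```

Decoupling modules behind named topics is what makes it practical to swap the low-level hardware layer for a different platform without touching the perception code.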
A set of software improvements to our middleware has recently been completed to improve ease of use for data collection and testing of the autonomous functions, and to support portability to other hardware platforms, laying the groundwork for eventual open-sourcing of this autonomous wheelchair project.