JB Software was involved in developing the Ubi, the first voice-activated, cloud-based computer on the market. Built on several technologies that are in common use today, it is IoT at its finest. At its core, the Ubi is a device that connects a component board containing a microphone, speakers, and sensors to an Android TV stick. The first version was largely a prototype that was brought to market due to the amount of interest it generated.
However awesome the hardware is, the real magic happens on the server side. Phrases are broken down into actions and then executed. Natural Language Processing is the backbone of a ubiquitous user experience in which a single action can have dozens of trigger phrases. For example, you can turn on a light with “Lights on”, “Turn on hallway light”, or “I need some light”.
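The idea of many phrasings mapping to one action can be sketched as a simple intent matcher. This is only an illustration under assumed names (`INTENTS`, `match_intent`), not the actual Ubi server code:

```python
# Minimal sketch of mapping many phrasings to a single action.
# INTENTS and match_intent are hypothetical names, not the Ubi's real API.
import re
from typing import Optional

# Each intent lists regex patterns that should trigger it.
INTENTS = {
    "lights_on": [
        r"\blights? on\b",
        r"\bturn on .*light\b",
        r"\bneed some light\b",
    ],
}

def match_intent(phrase: str) -> Optional[str]:
    """Return the first intent whose pattern matches the phrase, else None."""
    text = phrase.lower()
    for intent, patterns in INTENTS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return None

print(match_intent("Turn on hallway light"))  # lights_on
print(match_intent("I need some light"))      # lights_on
```

A production system would use a real NLP pipeline rather than regexes, but the shape is the same: normalize the utterance, resolve it to an intent, then dispatch the action.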
The server is also responsible for gathering sensor data and storing it for later use, especially in reports. This was a real challenge given the volume of data we were dealing with. Fast retrieval was equally important, because actions could be triggered by sensor readings such as light and temperature.
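The trigger side of that pipeline can be sketched as a threshold check run against each incoming reading. The names and thresholds below are hypothetical, a minimal illustration of the pattern rather than the Ubi's implementation:

```python
# Sketch of sensor-triggered actions: fire a callback when a reading
# crosses a threshold. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trigger:
    sensor: str                  # e.g. "light" or "temperature"
    below: float                 # fire when the reading drops below this value
    action: Callable[[], None]   # what to execute when the trigger fires

def process_reading(sensor: str, value: float,
                    triggers: List[Trigger]) -> List[str]:
    """Check a new reading against registered triggers; return fired sensors."""
    fired = []
    for t in triggers:
        if t.sensor == sensor and value < t.below:
            t.action()
            fired.append(t.sensor)
    return fired

# Example: turn on the lights when ambient light drops below 20 lux.
triggers = [Trigger("light", 20.0, lambda: print("lights on"))]
process_reading("light", 12.5, triggers)  # prints "lights on"
```

In practice the hard part is the storage layer behind this check: readings arrive continuously, so they need to be indexed for both fast trigger evaluation and later reporting.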
One of the Ubi's best features is its integrations with several other IoT devices. SmartThings provides a reliable way to automate your home, the Logitech Harmony remote lets you voice-control your TV, the WeMo toggles a power outlet remotely, and you can even control your Sonos with your Ubi.
A small Toronto-based company called UCIC launched the Ubi in 2015. Due to the ongoing costs and the rise of Google Home and Amazon Alexa, the project has since been shut down. The Ubi could not compete with the millions that went into the R&D of those devices. At least we get to say we were first to market!