By Rania Imam
iPhones are growing more personal, holding our photos, messages, banking and health information, travel data, and confidential personal codes. Every year we grow more attached to our phones. Meanwhile, technology is growing smarter every day, and software developers have created extensive neural networks and algorithms to imitate aspects of human behavior on smartphones. Neural networks are collections of simple processors that mimic the way the human brain works, but these systems typically require an enormous amount of processing power to run. At the International Solid-State Circuits Conference in San Francisco last month, MIT researchers presented a new chip specifically designed to implement neural networks. The new chip will allow those networks to run locally on phones without an Internet connection, changing the game for neural networks in smartphones.
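To see why neural networks demand so much processing power, consider what a single layer actually computes. The following is a minimal sketch in plain Python, not the MIT chip's implementation; the network shape and all weight values are made-up placeholders for illustration.

```python
# A tiny two-layer neural-network forward pass, illustrating that
# inference is dominated by multiply-accumulate operations.

def dense(inputs, weights, biases):
    """One fully connected layer: each output neuron is a weighted sum
    of all inputs plus a bias, passed through a nonlinearity (ReLU)."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(max(0.0, total))  # ReLU activation
    return outputs

# Hypothetical 4-input, 3-hidden, 2-output network with placeholder weights.
hidden_w = [[0.2, -0.5, 0.1, 0.4],
            [0.7, 0.3, -0.2, 0.1],
            [-0.1, 0.6, 0.5, -0.3]]
hidden_b = [0.1, 0.0, -0.2]
out_w = [[0.3, -0.6, 0.8],
         [0.5, 0.2, -0.4]]
out_b = [0.0, 0.1]

x = [1.0, 0.5, -0.3, 0.8]          # one input sample
hidden = dense(x, hidden_w, hidden_b)
scores = dense(hidden, out_w, out_b)
print(scores)

# A layer with n outputs over m inputs costs n*m multiply-adds. Real
# image-recognition networks repeat this millions of times per frame,
# which is why running them on a phone calls for specialized hardware.
```

Even this toy network performs twenty multiply-adds per input; state-of-the-art image-recognition networks scale the same arithmetic to millions of operations per frame.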
The new chip, which the researchers named Eyeriss, could also help usher in the “Internet of things”: the idea that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding in maintenance and task coordination. With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

At the conference, the MIT researchers used Eyeriss to implement a neural network that performs an image-recognition task, the first time that a state-of-the-art neural network has been demonstrated on a custom chip. “This work is very important, showing how embedded processors for deep learning can provide power and performance optimizations that will bring these complex computations from the cloud to mobile devices,” says Mike Polley, a senior vice president at Samsung’s Mobile Processor Innovations Lab. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your cell phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.” The MIT researchers’ work was funded in part by DARPA (the Defense Advanced Research Projects Agency, the US Department of Defense agency responsible for developing emerging technologies for military use).
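The privacy argument above can be made concrete: a device that classifies its sensor data locally only ever transmits a label, never the raw readings. This sketch uses an assumed threshold rule as a stand-in for an on-device neural network, and the message format and function names are hypothetical, not any real device API.

```python
import json

def classify_locally(raw_reading):
    """Stand-in for an on-device neural network: a simple threshold rule
    that turns a raw sensor value into a conclusion."""
    return "anomaly" if raw_reading > 75.0 else "normal"

def build_report(raw_reading):
    """Build the message that would leave the device. Only the conclusion
    is included; the raw reading never appears in the payload."""
    label = classify_locally(raw_reading)
    return json.dumps({"status": label})

report = build_report(80.2)
print(report)
```

The design choice is the whole point: because inference happens before anything is serialized, the server learns that something is an “anomaly” without ever seeing the personal data behind that judgment.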
Recent developments in cognitive phone applications have advanced the ability to analyze, store, extract, and process information on the mobile platform, in both hardware configuration and intelligent software. Drawing on the capability to sense and infer human behavior and social context from their own platforms, connected sensors, and linked servers, these smartphones are designed to “think” much the way a human would. Within a few years, smartphones are expected to be smarter than we are, and human beings will grow ever more dependent on and attached to their phones. If that is the case in just a couple of years, how will the relationship evolve over a decade?