Ever since Microsoft announced HoloLens, the company has kept its presentations and information on the mixed-reality headset and its ecosystem low-key, treating it as more of a developer curiosity than a mass-market product. There’s good reason for the company to have done so; look at how easily Google Glass was hijacked by the hipster class with delusions of self-entitlement. But it’s also made it harder to track improvements to the technology that underlies HoloLens, including Microsoft’s Holographic Processing Unit, or HPU.
Harry Shum, executive VP of Microsoft’s Artificial Intelligence and Research Group, announced at the 2017 Conference on Computer Vision and Pattern Recognition that, instead of relying on FPGAs for cost- and power-efficient execution of AI workloads and deep neural networks (DNNs), Microsoft’s second-generation HPU 2.0 will incorporate a custom silicon AI coprocessor for image and speech recognition.
“The chip supports a wide variety of layer types, fully programmable by us,” Marc Pollefeys, Microsoft’s director of science for HoloLens, wrote in a company blog post. “Harry showed an early spin of the second version of the HPU running live code implementing hand segmentation.”
Pollefeys goes on to say:
The AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery. This is just one example of the new capabilities we are developing for HoloLens, and is the kind of thing you can do when you have the willingness and capacity to invest for the long term, as Microsoft has done throughout its history. And this is the kind of thinking you need if you’re going to develop mixed reality devices that are themselves intelligent. Mixed reality and artificial intelligence represent the future of computing, and we’re excited to be advancing this frontier.