Google Home project lead Mario Queiroz held the device in his palm, revealing a design that was shorter and wider than Amazon's cylindrical Echo, which is powered by Amazon's virtual assistant Alexa. Microsoft also has its own personal assistant, Cortana, but as yet no at-home device.
Google Home will use the new Google Assistant, which leverages Google search and the contextual-query capabilities the company has been developing through a decade of research into artificial intelligence. It will be able to play music, complete a range of tasks and answer the kinds of questions one would ask of Google search.
The SignAloud glove captures ASL gestures with sensors that measure everything from XYZ coordinates to the way individual fingers flex or bend. That sensor data is sent via Bluetooth to a nearby computer and fed into algorithms that categorize the gestures; the matches are translated into English and then spoken aloud through a speaker.
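The students' actual code and classifier haven't been published, but the pipeline described above can be sketched in a few lines. Everything below (the frame layout, the flex templates, the nearest-template matcher) is a hypothetical illustration, not SignAloud's implementation:

```python
# Illustrative sketch of a glove-to-speech pipeline:
# sensor frames -> gesture classification -> text -> speech.
import math

# Each frame: hand position plus flex readings for five fingers.
Frame = dict  # e.g. {"x": .., "y": .., "z": .., "flex": [f1..f5]}

# Toy flex-sensor templates; real templates would be learned from data.
GESTURE_TEMPLATES = {
    "HELLO": [0.1, 0.9, 0.9, 0.9, 0.9],   # open hand (invented values)
    "YES":   [0.9, 0.2, 0.2, 0.2, 0.2],   # closed fist (invented values)
}

def classify_frame(frame: Frame) -> str:
    """Nearest-template match on the finger-flex readings."""
    best, best_dist = "UNKNOWN", float("inf")
    for label, template in GESTURE_TEMPLATES.items():
        dist = math.dist(frame["flex"], template)
        if dist < best_dist:
            best, best_dist = label, dist
    return best

def frames_to_speech(frames):
    words = [classify_frame(f) for f in frames]
    sentence = " ".join(w.lower() for w in words if w != "UNKNOWN")
    print(f"speaking: {sentence}")  # stand-in for a text-to-speech call

frames_to_speech([
    {"x": 0, "y": 0, "z": 0, "flex": [0.12, 0.88, 0.91, 0.90, 0.87]},
    {"x": 0, "y": 0, "z": 0, "flex": [0.92, 0.18, 0.20, 0.22, 0.19]},
])
```

A production system would classify whole gesture sequences rather than single frames, but the data flow is the same: sensor frames in, spoken words out.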
“Until now, that's the direction technology has taken us -- living and gorging on screens with our heads down in our phones. But 2016 will be the year this changes. Why? Because we know something is wrong. The loss of humanness is very real. We also know that technology has the profound potential to enhance our experience of the world around us, rather than distract us from it.
I call this the Invisible Interface -- a movement wherein technology still provides us with information and gives us command of our surroundings, but through discreet signals rather than screens. It is not that different from the way we orient ourselves in nature: we look at the Sun to understand how much daylight is left in the day; we feel a breeze and turn towards it to scan the horizon for the sign of a storm.
This new approach to the transmission of information is much harder to build than pixels on a screen. And yet it is so much more rewarding for the designer, because the resulting user experience is natural, fluid and non-interruptive. Information and action are then woven into our lives so discreetly that, if it weren't for the magical experiences it creates, we would forget it is there.”
“Working with Corning, Apple created pliable iPhone cover glass. Swipe it, and the phone works the way it always has. But press it, and 96 sensors embedded in the backlight of the Retina display measure microscopic changes in the distance between themselves and the glass. Those measurements then get combined with signals from the touch sensor to make the motion of your finger sync with the image on screen.
Some of this technology was first revealed in the Apple Watch, which has a feature called Force Touch. But 3D Touch is to Force Touch as ocean swimming is to a foot bath. Screen size makes a difference, but the software on the iPhone 6S has a liquid ease. Apply a tiny bit of pressure anywhere you want to explore something—a restaurant link inside a text, an 11 a.m. meeting invite buried in an e-mail—and a peek at the restaurant’s Web page or a window into your calendar hovers expectantly in the middle of the screen while everything else blurs into temporary opacity. Press a little harder, and what you’ve been peeking at pops fully into frame. Release your finger, and you’re right back where you started. Presto chango, no home button required.”
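Apple has not published how those sensor readings become “peek” and “pop” events, but the idea of pooling many microscopic gap measurements into a single press depth is easy to illustrate. The thresholds, full-scale value and function names below are all invented for the sketch:

```python
# Toy illustration of the 3D Touch idea described above: many sensors
# measure tiny changes in the gap between backlight and cover glass,
# and those deltas are pooled into one press-depth estimate.

PEEK_THRESHOLD = 0.4   # hypothetical normalized press depth
POP_THRESHOLD = 0.75

def press_depth(gap_deltas_um):
    """Average microscopic gap change (micrometers) -> 0..1 depth."""
    avg = sum(gap_deltas_um) / len(gap_deltas_um)
    return min(avg / 50.0, 1.0)  # 50 um full scale is an assumption

def gesture_for(depth):
    if depth >= POP_THRESHOLD:
        return "pop"    # open the previewed content fully
    if depth >= PEEK_THRESHOLD:
        return "peek"   # show the preview while the rest blurs
    return "tap"

print(gesture_for(press_depth([12.0, 15.5, 14.2])))  # -> "tap"
print(gesture_for(press_depth([30.0, 33.0, 29.5])))  # -> "peek"
```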
He starts simply, asking for the time in Berlin and the population of Japan. Basic search-result stuff—followed by a twist: “What is the distance between them?” The app understands the context and fires back, “About 5,536 miles.”
Then Mohajer gets rolling, smiling as he rattles off a barrage of questions that keep escalating in complexity. He asks Hound to calculate the monthly mortgage payments on a million-dollar home, and the app immediately asks him for the interest rate and the term of the loan before dishing out its answer: $4,270.84.
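The arithmetic behind that answer is the standard fixed-rate amortization formula. The excerpt doesn't give the rate and term Mohajer supplied, so the 4 percent, 30-year inputs below are just examples:

```python
# Standard fixed-rate mortgage payment: P * r / (1 - (1 + r)^-n),
# where r is the monthly rate and n the number of monthly payments.

def monthly_payment(principal, annual_rate, years):
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

print(round(monthly_payment(1_000_000, 0.04, 30), 2))  # -> 4774.15
```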
“What is the population of the capital of the country in which the Space Needle is located?” he asks. Hound figures out that Mohajer is fishing for the population of Washington, DC, faster than I do and spits out the correct answer in its rapid-fire robotic voice.
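Hound's speech-to-meaning engine is proprietary, but the trick in that Space Needle question (resolving a nested query from the inside out) can be illustrated with plain lookups. The fact tables below are toy stand-ins:

```python
# Compositional lookup for the nested query, innermost clause first.
LANDMARK_COUNTRY = {"Space Needle": "United States"}
COUNTRY_CAPITAL = {"United States": "Washington, DC"}
CITY_POPULATION = {"Washington, DC": 672_228}  # approximate 2015 figure

def answer(landmark):
    country = LANDMARK_COUNTRY[landmark]  # "...in which the Space Needle is located"
    capital = COUNTRY_CAPITAL[country]    # "...the capital of the country..."
    return CITY_POPULATION[capital]       # "...the population of..."

print(answer("Space Needle"))  # -> 672228
```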
MyFord Touch was powered by a Microsoft operating system, but Ford is now using an OS from BlackBerry subsidiary QNX that already runs in-dash systems in Audis, BMWs, and Mercedes-Benzes, among others. In 2005, the system’s early days, a partnership with Microsoft was “completely obvious,” says Gary Jablonski, Ford’s manager of infotainment systems. “We wanted a big software company, lots of horsepower, connected to the consumer industry, connected to the phone industry.” The BlackBerry software, he says, will be more resistant to crashes of the PC variety. It turns out the kinds of bugs people will tolerate from their phones drive them crazy on the road.
Sync 3 aims to wipe the touchscreen clean with a far easier interface. “We really focused on trying to make a system that was the simplest to use for customers,” Jablonski says. That goal may sound obvious, but John Schneider, the project’s chief engineer, acknowledges that to justify the added cost, “We tried to pack a lot of features into MyFord Touch.”
On its surface, the idea behind Soli is similar to Leap Motion and other gesture-based controllers: a sensor tracks the movements of your hands, which provide the input to a device. During a demo at the session Friday, Soli's founder, Ivan Poupyrev, showed how the sensor could recognize gestures and let people control functions of a smartwatch without touching its display.
But unlike other motion controllers, which depend on cameras, Soli is equipped with radar, which helps it "track sub-millimeter motions at high speed and accuracy," ATAP says. Radar also keeps the hardware small enough to fit on a tiny chip that can be incorporated into wearables and other devices.
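ATAP hasn't detailed Soli's signal chain beyond that, but the end of the pipeline (mapping sub-millimeter radar motion features to virtual controls) can be sketched. The two features, the thresholds and the gesture names below are invented for illustration:

```python
# Sketch of the kind of classification a radar gesture sensor enables.
# Soli's real signal processing is far more sophisticated; here we
# pretend each radar frame reduces to two motion features.

def classify(range_delta_mm: float, velocity_mm_s: float) -> str:
    """Map sub-millimeter radar motion features to a virtual control."""
    if abs(range_delta_mm) < 0.5 and velocity_mm_s > 20:
        return "dial-turn"     # fingers rubbing in place: virtual dial
    if range_delta_mm < -2.0:
        return "button-press"  # hand moving toward the sensor
    return "idle"

# Two hypothetical frames from a smartwatch-sized sensor:
print(classify(0.3, 35.0))   # -> "dial-turn"
print(classify(-3.1, 5.0))   # -> "button-press"
```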
“We're a digital species now—nothing short of apocalypse will change that! The health of our digital society lies, therefore, in the broadest possible distribution of agency. Agency is circumscribed mainly by the UI—the machinery through which human intent is transduced into the machine. So designing and deploying radically more capable UIs is one of the most important things we can do today. At Oblong we built our belief about what this should look like into our mission statement: "to provision the world with new computing forms of durable value and genuine worth, forms profoundly capable, human, beautiful, and exhilarating."”
It is an ancient post now, but I once wrote The Best UI is no UI. One of the most interesting things to come out of Unit4’s analyst summit last week was its vision of “self-driving” ERP, with machine learning and artificial intelligence driving the user interface.
“Like a self-driving car, self-driving ERP takes care of tasks that are better served by technology, leaving people to focus on the exceptions that need human intervention.
Self-driving ERP doesn’t ask the user to constantly enter data. It doesn’t require huge amounts of training for users to understand how to achieve desired outcomes. Self-driving ERP becomes an intelligent support and planning system that utilizes information from all sorts of internal and external sources, including productivity tools (calendar, Outlook, document systems, social tools), to drive cases, projects, initiatives and tasks. It delivers actionable insight based on what it already knows. The system will make suggestions based on company behavior, personal behavior, the weather, traffic and all the other sources it pulls data from.”
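Unit4 didn't show implementation details, but the pattern the quote describes (pull signals from many sources, then surface suggested actions instead of asking for input) is simple to sketch. The sources and rules below are hypothetical, not Unit4's design:

```python
# Minimal "self-driving" pattern: aggregate signals, emit suggestions.
def calendar_signal():   return {"offsite_meeting": True}
def weather_signal():    return {"heavy_snow": True}
def project_signal():    return {"timesheet_missing": True}

def suggestions():
    signals = {**calendar_signal(), **weather_signal(), **project_signal()}
    out = []
    if signals.get("timesheet_missing"):
        out.append("Draft this week's timesheet from calendar entries")
    if signals.get("offsite_meeting") and signals.get("heavy_snow"):
        out.append("Pre-fill a travel-expense claim; flag weather delay risk")
    return out

for s in suggestions():
    print("Suggested action:", s)
```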
Three things I like about Unit4’s vision
a) They are leveraging Microsoft’s machine learning advances (it’s a broader arrangement in which MS Azure data centers will also provide the IaaS for Unit4’s public cloud)
b) They have already considered several vertical scenarios for the people/services industries they are focusing on. Since the Microsoft arrangement is not exclusive, how vendors like Unit4 differentiate with it will be key
c) Not something they mentioned last week, but listening to Thomas Staven and Ton Dobbe of Unit4 discuss electronic documents in the Nordic public sector, I was reminded that I had profiled a Swedish government customer of Agresso (now Unit4) in The New Polymath in 2010. The document exchange involved 85,000 suppliers and tens of millions of invoices. I was impressed by the digitization progress even back then. Think of the ability to train machines with that much data already digitized. It is also exciting to see Unit4 take that experience to other parts of the world.
Sounds like a mystery novel, but now you can write in air. From Computerworld:
“The gestural device goes on the index finger and can be used to write Japanese characters, Latin letters or numbers in midair. A linked smartphone or other Bluetooth mobile device with a Fujitsu app can instantly recognize numbers written with the ring with about 95 percent accuracy, according to developer Fujitsu Laboratories.”
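Fujitsu doesn't say how the recognition works beyond the accuracy figure, but one plausible approach (matching a sampled fingertip trajectory against stored character templates) looks like this; the templates and the distance metric are illustrative only:

```python
# Toy air-writing recognizer: compare a fingertip trajectory from the
# ring's motion sensors against stored digit templates.
import math

TEMPLATES = {
    "1": [(0.0, 1.0), (0.0, 0.5), (0.0, 0.0)],   # straight stroke down
    "7": [(0.0, 1.0), (1.0, 1.0), (0.3, 0.0)],   # across, then down-left
}

def distance(a, b):
    """Sum of pointwise distances between two resampled trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def recognize(trajectory):
    return min(TEMPLATES, key=lambda d: distance(trajectory, TEMPLATES[d]))

stroke = [(0.05, 1.0), (0.02, 0.48), (0.0, 0.01)]  # sampled in midair
print(recognize(stroke))  # -> "1"
```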