Face-controlled Chromebooks are coming soon

Chromebooks have AI too. Google’s recent comments on its AI advances landed between Microsoft’s big Copilot+ announcement and Apple’s forthcoming AI news. In addition to outlining a few new AI features that are now available on Google’s Chromebook Plus line of laptops, Google previewed a fascinating feature coming later that will let you control your entire Chromebook with just your face.
Using computer vision and your Chromebook’s built-in webcam, you’ll soon be able to control the machine by talking to it, moving your face, and making hand gestures. Google calls it Project Gameface, and it’s being built right into ChromeOS. The feature was originally announced via a blog post on May 10 and was aimed at creating a “hands-free, AI-powered gaming mouse,” but it’s now being expanded and is officially coming to Chromebooks.
In settings, you can map specific head movements and facial gestures captured by the camera to actions like left-clicking, recentering the cursor, scrolling, activating the keyboard, and more. The facial gestures you can use include opening your mouth, smiling, looking in different directions, and raising your eyebrows. You can even customize a “gesture size” to better accommodate the specific needs of individual users.
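For a rough sense of how this kind of gesture-to-action mapping can work under the hood, here’s a minimal Python sketch built on Google’s MediaPipe Face Landmarker, which the open-source Project Gameface code is based on. The blendshape name, the threshold standing in for “gesture size,” and the pyautogui click are illustrative assumptions; Google hasn’t published how the ChromeOS version is implemented.

```python
# Minimal sketch: map one facial gesture (an open mouth) to a left click
# using MediaPipe's Face Landmarker blendshape scores. The threshold and
# the pyautogui click are illustrative stand-ins, not ChromeOS's code.
import cv2
import mediapipe as mp
import pyautogui
from mediapipe.tasks import python as mp_python
from mediapipe.tasks.python import vision

# Requires the face_landmarker.task model file from MediaPipe's model page.
options = vision.FaceLandmarkerOptions(
    base_options=mp_python.BaseOptions(model_asset_path="face_landmarker.task"),
    output_face_blendshapes=True,  # per-gesture scores such as "jawOpen"
    num_faces=1,
)
detector = vision.FaceLandmarker.create_from_options(options)

JAW_OPEN_THRESHOLD = 0.5  # plays the role of an adjustable "gesture size"
mouth_was_open = False    # simple debounce so one gesture = one click

cap = cv2.VideoCapture(0)  # the laptop's built-in webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = detector.detect(mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb))
    if result.face_blendshapes:
        scores = {c.category_name: c.score for c in result.face_blendshapes[0]}
        mouth_open = scores.get("jawOpen", 0.0) > JAW_OPEN_THRESHOLD
        if mouth_open and not mouth_was_open:
            pyautogui.click()  # stand-in for the mapped "left click" action
        mouth_was_open = mouth_open
cap.release()
```

Even this toy version hints at why a “gesture size” setting matters: the threshold decides how pronounced an expression has to be before it registers as an action.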

Google also seems to be adding hand gestures into the mix, which means you’ll be able to accomplish many more tasks with Project Gameface. According to the update, you’ll be able to do things like send emails, use apps, and browse the web without touching your keyboard or screen. If you can do all that, we have to assume Google has added even more ways of controlling the device. It also doesn’t require downloading any software and should, in theory, work across apps, services, and websites.

Google admits that it’s still “early in this project,” so we don’t yet know when this feature will roll out.
Interestingly, Apple recently announced eye tracking for the iPad as a similar way to improve accessibility and allow for more hands-free ways of interacting with devices.
Beyond the updates to Project Gameface, Google announced that it’s also working on a few other upcoming AI features. With Gemini right on the device, you’ll soon be able to get live translation and transcription of videos and video calls. Gemini will also offer something called “Help me read and understand,” a way of getting summaries of, or asking questions about, particular articles or pages. Lastly, you’ll be able to log into your Chromebook and get prompted to pick up where you left off, with all of your open apps and websites grouped together just as you left them, in a single click.
These new features would, in theory, need to be run on a device’s neural processing unit (NPU), which very few new Chromebooks actually have.
