New tech from Google promises pictures and video that will memorialize your life with a passive, wearable camera that is programmed to take pictures “worth taking” – and you don't even need to help it.
Image of Google Clips via Google.
Photography is what you would call an active, engaged field – the best photographers live and breathe their work, obsessing over lighting, setup, composition, lenses, and so on. But for the vast majority of people who love to take photos, whether with some amount of skill or none at all, taking pictures often boils down to showing off food on Instagram or documenting life with friends and family.
While smartphones have made this process easier, particularly with regard to filters and post-production editing of photographs, smartphones have not radically altered the way in which most people take pictures.
Google hopes to change that with their newest addition to their product stable, Google Clips.
You may recall that Snap (the parent company of the amazingly successful ephemeral photography app Snapchat) released a wearable set of glasses called Spectacles, which have two cameras, one above each eye, capable of capturing video and photos and transmitting them through the Snapchat app to a user's followers.
And prior to that, you may recall the much maligned, though still cool to some of us, Google Glass – the augmented reality do-it-all headset that has now found life in professional commercial settings where it failed to catch on in the consumer realm.
Google hopes that Clips will overcome the hesitation some people had with Spectacles (some find the concept strange, to say the least, and many are uncomfortable with what is essentially a video camera over someone's eyes, where you never know what that person is doing with it). Google also hopes Clips will be more immediately accessible to consumers than Google Glass, which many promptly dismissed as living within the realm of cutting-edge technology fetishists.
What Does Google Clips Actually Do?
At a hardware event on October 4, 2017, Google unveiled Clips to an eager and surprised audience, describing the device as capable of autonomously capturing pictures with “happy faces, good lighting, interesting framing” – in other words, it actively searches for the best moments. The user need only wear it or clip it to something, allowing them, in theory, to passively capture the best moments of their life.
Of course, in an obvious nod to multifunctionality and to users who want a picture or video right away, the device has a button that allows for manual operation.
High-res stills can be captured from the video, giving users two different media to work with when they edit the footage later or upload it to social media.
In a little bit of an eerie factoid from Ars Technica, Google promised that Clips will “figure out which faces and pets you see regularly and know that you'll probably want to capture them more often.” It will do this through a process called machine learning, in which computers interpret data in search of patterns with which to optimize the device's functionality. Once largely the purview of research labs and advanced robotics, machine learning is seen as the future of technology by many, but is also regarded by some as potentially dangerous.
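The "faces you see regularly" idea can be illustrated with a toy sketch. This is not Google's actual algorithm (which Google has not published), just a minimal Python example of the underlying pattern: tally how often each subject appears and treat the frequent ones as "familiar." All names here are hypothetical.

```python
from collections import Counter

def frequent_subjects(face_ids, min_sightings=3):
    """Return subject labels seen at least `min_sightings` times,
    most frequent first - a toy stand-in for the pattern-finding
    Clips is said to perform on faces and pets it sees often."""
    counts = Counter(face_ids)
    return [face for face, n in counts.most_common() if n >= min_sightings]

# Hypothetical labels the camera might assign over a week of sightings
sightings = ["mom", "dog", "mom", "stranger", "dog", "mom", "dog"]
print(frequent_subjects(sightings))  # prints ['mom', 'dog']
```

A real system would of course work on camera frames rather than ready-made labels, but the principle – frequency of appearance as a proxy for importance – is the same.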
In what Google calls an acknowledgment of users' desire for privacy, Google Clips will work without a Wi-Fi connection.
Noted technologist, futurist, and entrepreneur Elon Musk of Tesla expressed the fears of many users when he called Google Clips “not so innocent,” according to Gadgets 360.
Currently the device will only support smartphones running Android OS 7.0 or above.
The smartphones Google mentioned included the Google Pixel, Samsung Galaxy S7, and Samsung Galaxy S8.
The device will retail for $249, with a release imminent according to Google.