Using your voice to control the music coming out of your speakers, lock your doors or place an order for paper towels might seem like something out of a science fiction movie, but these days it’s a reality thanks to smart speakers like Amazon’s Echo. As convenient as this technology can be, one of the primary questions it raises is whether consumer privacy is at risk. These devices are programmed to listen for certain keywords that wake them up to do their jobs. So far, we’ve seen this manifest as an expensive nuisance, in the case of a six-year-old girl who ordered $200 worth of merchandise for herself, and even as a potential aid in solving crimes, as in a 2015 murder case in Arkansas. Although Amazon’s smart speaker is arguably the most well-known, it’s far from the only one out there, with Google’s Home as a rival. As Apple prepares to toss its own device into the ring — with rumored facial recognition in addition to voice — we decided to take a look at how smart speakers work, whether they can be trusted to maintain your privacy and whether you should have one in your home.
What are smart speakers and how do they work?
Smart speakers are devices that use voice-activated artificial intelligence technology to answer commands. Designed to change the way people interact with their homes and media devices, they can be connected to third-party Internet of things devices like your garage door opener or thermostat, or to online accounts such as Amazon or Spotify, allowing you to control them with your voice. This type of technology isn’t exactly new — voice-recognition software (such as Apple’s Siri) has been available on mobile devices for a while. However, smart speakers like Amazon Echo and Google Home are designed as virtual home assistants and intended to be used in as many different ways as possible. In addition to devices offered by the three tech giants we’ve mentioned, some third-party devices have begun to emerge, such as this one, which allows users to combine technology from Amazon and Google in one device.
Apple, which has yet to put out a smart speaker of its own, is apparently working on one that would also incorporate facial recognition technology. According to those familiar with the ongoing development, the camera would be opt-in only, so users would have to activate it, and it could potentially identify who’s in the room and change things like music, lighting and temperature based on that person’s preferences. Naturally, Apple’s device will be Siri-based, which means it’s likely that users will be able to have it read incoming emails and send text messages or tweets.
How could they be putting your privacy at risk?
The key function of smart speakers is to listen for and follow your commands as soon as they’re spoken. To do this, each device contains a web-connected microphone that is constantly listening for you to say specific words or phrases. For example, when you speak the words “Alexa,” “Amazon” or “Echo” within range of an Amazon Echo’s microphone, it activates and records your voice. The recording is transferred to an external server for analysis so the smart speaker can fulfill your request or answer your question. These voice recordings are streamed and stored remotely, and you can review and delete them online. Google Home’s process is similar, down to being able to access a log of your prior requests online.
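To make the listen/wake/record cycle concrete, here is a minimal, purely illustrative Python sketch of that flow. It is not Amazon’s or Google’s actual implementation — real devices match wake words against raw audio on-device, not against text — and the wake-word list and function names are assumptions for the example. Everything heard before the wake word is discarded; only what follows it is buffered for upload.

```python
# Illustrative sketch of a wake-word cycle (hypothetical, text-based stand-in
# for on-device audio matching; not any vendor's real code).
WAKE_WORDS = {"alexa", "amazon", "echo"}  # example Echo wake words

def is_wake_word(word, wake_words=WAKE_WORDS):
    """Check a single heard word against the wake-word list."""
    return word.lower().strip(".,!?") in wake_words

def process_audio_stream(words):
    """Simulate the cycle described above: ignore everything until a wake
    word is heard, then capture what follows for remote analysis."""
    buffer = []
    recording = False
    for word in words:
        if not recording and is_wake_word(word):
            recording = True  # wake word heard: start capturing
            continue
        if recording:
            buffer.append(word)  # captured speech after the wake word
    # In a real device, this snippet is what gets streamed to the server.
    return " ".join(buffer)
```

The sketch also illustrates the privacy point made above: nothing before the wake word is retained, but everything after it is, which is exactly the material users can later review or delete in their account history.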
Because these devices are trained to wake up and record as soon as they hear one of their wake words, there may be times when snippets of conversation get stored without you realizing — or approving — it. Most people who own a smart speaker can probably remember at least one occasion when their device activated seemingly out of nowhere; they’ve even been known to activate in response to voices on people’s TVs. Since these devices do not differentiate between different people’s voices, literally anyone can activate and make requests of them (which is how that six-year-old in Texas was able to order a fancy dollhouse). Beyond the danger of winding up with unwanted items being delivered to your front door, there are questions about how private a home with a voice-activated microphone recording people’s voices truly is.
Additionally, as these devices advance and gain more sophisticated capabilities, there may come a point where more than simple questions or requests are recorded, transmitted and stored — if you’re dictating a text message to your smart speaker, will that be saved as well, and for how long? Questions like these are at the root of most Internet of things concerns that privacy experts and consumers alike are raising.
Should you use this technology?
That is a personal decision for each individual or household to make. It’s important to look critically at the technology you’re using and recognize it for what it is — and what it isn’t. Although the term “always on” is often used to describe smart speakers, as the Future of Privacy Forum points out in a 2016 publication, this is not entirely accurate. These devices do not record anything before their wake words are uttered, and they only record short bursts of speech, not lengthy conversations. That said, if you are uncomfortable with the thought of a smart speaker being able to record when you don’t actively have a command or question to give it, you can opt to either mute the microphone manually or log into your account and change the settings to suit your needs (such as requiring a passcode for purchases or having the device play an audible tone when it activates, so you know when it’s recording). Additionally, consider occasionally logging into your account and viewing or deleting your history. Amazon and Google smart speaker users can delete individual requests as well as their entire history.
The potential danger for these types of devices and the technology that powers them comes from hackers who find and exploit flaws in their code. We saw this last fall, when thousands of webcams and other Internet of things connected devices were hijacked and used to shut down an array of popular websites with a massive DDoS attack. Lawmakers as well as organizations like the Electronic Privacy Information Center have been pushing for more accountability from the companies making these products, especially in light of concerns that smart products of all kinds — from cars to children’s toys to televisions — could pose a risk of unlawful surveillance. Online privacy is a hot-button issue at the moment, with the recent spotlight on the rollback of the FCC’s power to regulate Internet service providers’ collection and use of consumer data, and this is yet another piece of that puzzle.
How can smart speaker users protect themselves?
As with any Internet-connected product you use, it’s important to know and understand the different settings available to you. Most smart speakers let owners deactivate the microphone, set parental controls to prevent kids or unknown adults from wreaking havoc, and more. Strengthening the passwords for your Amazon or Google account is also vital, as someone who gains access to your accounts could easily view your request history. Turn on two-step verification for your accounts, too, for an added layer of security. Additionally, if you’re especially concerned about privacy, you might reconsider where you place your smart speaker — perhaps the bedroom isn’t the best location.
It’s also important to take into consideration the settings and source of any third-party programs or devices you pair your smart speaker with. Not all Internet of things devices are created equal, as we have learned, so knowing what you’re connecting to and whether it’s secure is important. Chances are, these types of devices will become further integrated into our world, so understanding how they work and what you can do to keep them from violating your privacy is the best way to keep your peace of mind.