Alexa et al. are also normalizing "always-on microphone that sends audio to a remote business over the internet" technology. According to the bright-line rule from Kyllo v. United States[1], when a technology is "in general public use"[2], police no longer need a warrant when they use their own devices based on that technology to see the "details of a private home that would previously have been unknowable without physical intrusion"[3].
You cited the summary, which may be useful but isn’t legally binding. The actual opinion starts after “Justice Scalia delivered the opinion of the Court.”
Also, Kyllo isn’t relevant to your concerns. A microphone and radio that picks up audio and transmits it is technology that has been around for more than 60 years. Use of that technology has been addressed by the court in Katz and its predecessors, which collectively found that warrantless use of such tech by the police is often prohibited by the Fourth Amendment.
A more relevant question is whether the Fourth Amendment would allow police to access records stored by Amazon of conversations picked up by Alexa devices. Usually, such data is fair game (third-party doctrine), but there are some constitutional restrictions (see, e.g., Carpenter v. U.S., 2018). However, even if there were no restrictions, the question is probably moot, because other laws (the Wiretap Act, the Stored Communications Act) restrict police behavior. Basically everyone believes that these restrictions apply to Alexa recordings.
Recently there was a case in which a police department served Amazon with a subpoena for Alexa recordings from the scene of a suspected murder. I don’t remember the outcome in that case, but that too is irrelevant to concerns about warrantless collecting or searching of Alexa data, because there a warrant was or could have been obtained.
The ruling is about agents using thermal imaging technology to bust a marijuana plantation, performed from a car on a public street. The court stated that this does not constitute a search, since any random citizen could've done the same thing using commodity equipment.
Always on microphones that transmit via the internet (i.e. bugs) have been a commodity for... a few decades? But you still cannot legally bug your neighbor's apartment.
>Always on microphones that transmit via the internet (i.e. bugs) have been a commodity for... a few decades? But you still cannot legally bug your neighbor's apartment.
Until the last few years, it has not been normal for the typical person to have an Alexa-like device that is always listening for the "I'm about to make a request" signal, and which frequently gets turned on by accident.
It's not ridiculous that someone might conclude in the future, "if you're in someone's house with such a device, you should reasonably expect that some part of your conversation might get sent to the third party", which would mean a diminishing of the right to such privacy in 4th amendment jurisprudence.
And frankly, the whole concept of using voice to activate it is reckless. Significant false positives are unavoidable. It should be done with a non-audio signal that can't be faked (up to the limits of modern crypto), like an authenticated EM signal to turn on the listening.
What if you’re known for loudly announcing “Google, turn off your microphone”? Then are you protecting people’s expectation of privacy, and their Fourth Amendment rights? I’m tempted to be that person and have done it twice.
>Until the last few years, it has not been normal for the typical person to have an Alexa-like device that is always listening for the "I'm about to make a request" signal, and which frequently gets turned on by accident.
Google Now: 2012
Siri: 2010
Microphones in cell phones: longer than those two.
I’m referring to voice-activated assistants, and to the point when the typical (middle-class) household had one, which came long after their initial release (and arguably hasn’t arrived yet, depending on how you define “typical”).
Again, the question is how long it's been common for the typical person to have the voice activated one in their home. Stop changing the question.
In any case, 4 years vs 10 years doesn't matter. The point is, a court hasn't addressed it in the context of changing norms, at which point I claim there is a danger that it will re-evaluate what counts as private, just as OP was suggesting.
Hasn't Google Now (and now Assistant) always required a button/icon press before the mic turned on? And don't they only start understanding you after you say "OK Google"?
Same with Alexa devices. There is a separate hardware chip that activates the rest of the system only after you say the trigger word. That is also why you cannot choose an arbitrary word to activate the device: you can only choose between about four preset activation words in the settings.
> The court stated that this does not constitute a search
From the court's opinion (the above [1]):
>> ... obtaining by sense-enhancing technology any information regarding the home's interior that could not otherwise have been obtained without physical "intrusion into a constitutionally protected area," constitutes a search--at least where (as here) the technology in question is not in general public use.
>> Based on this criterion, the information obtained by the thermal imager in this case was the product of a search.
>> the imaging in this case was an unlawful search
You’re subtly altering the meaning of “in general public use”. As used in the court’s opinion, they’re drawing a distinction based on whether the average person could perform this activity against another person. If the average person carried around thermal imaging tech capable of performing the same actions as the police, it would not have constituted an unlawful search. Extrapolating that to say that if the average person chooses to use always-on microphones to record themselves inside their homes, the police can record people inside their homes, is not an equivalent argument. It is totally viable to say that because so many people have the ability to record conversations in public places, the police can record conversations in public places. But that’s not the same thing at all.
The standard is "could any random passerby do this with legal and common tools," not "is this technology used commonly." Regardless of how the definition of what is or is not considered common shifts, the fundamentals of the two situations remain different.
That would be illegal for other reasons, which would seem, at least to me, to violate the principle they are using to justify this. Certainly using a thermal camera in this manner is not illegal for a random bystander.
Are you sure? I mean, I think in most states, if you were parked outside a person's house with a thermal camera pointed at it, and the police drove up and asked what you were doing, and you said "I'm using my thermal camera to spy on what people are doing in that house, and they don't know I'm doing it LOL!", I think the police would probably take you in.
Now as to whether they could find anything to charge you with that would stick, that would depend on whether the state had any sort of peeping tom law, and whether the police went up to the house and found anyone naked or partially naked while the thermal camera was being used.
That said if there was a peeping tom law, and there was a naked person, the party with the camera that was charged could then argue that thermal cameras don't show enough to be considered under the law.
However it is my experience that police can be creative when they want to charge someone when that someone is doing something they don't like, so not sure that just a peeping tom law would be brought into play on such a circumstance.
I suppose after all this you might argue that police overreach does not make something actually illegal. But in general my viewpoint is that what counts as illegal is determined by power. In the case under discussion, the court wanted what the police did not to be illegal, because they felt it benefited the power of the system that it be so, and therefore shaped their arguments to their wants, not to any particular logic or moral sense. Thus when later another court wants the people on the street using the tech to be doing something illegal, power gets what it wants.
With its paparazzi industry, California probably has one of the more restrictive Peeping Tom Laws, and that law requires the peeper to be looking into a door or a window while on private property. I infer from a little extra googling that most/all of the states' laws are similar in this way.
As you imply, none of this prevents the police from arresting you and taking you downtown anyway. They can even say they thought it violated peeping tom laws (Heien v. North Carolina). Beyond that, "you can beat the charge but you can't beat the ride" is an old truism among abusive police.
Kyllo is almost 20 years old, so the likelihood of a city having a law contravening it is probably low (IANAL).
Sorry, I may be wrong but my understanding of peeping tom laws is that they apply where the user has a reasonable expectation of privacy.
I think you're wrong that they generally require the peeper to be looking while on private property, because if that were the case the peeper would be trespassing and could be charged with that, although it may certainly be the case in some places. The state-by-state basis makes it difficult to say for sure.
I guess it might be beneficial to have peeping tom laws limited to what one does on private property, with the understanding that rich people have big enough property that you have to get on it to meaningfully violate their privacy. Thus you can construct the law to protect the privacy of rich and poor alike, as long as they have 5+ acre estates. And it is true that it is nice to have multiple things you can charge someone with, so they can plea bargain some of them away. So the fact that they can be charged with trespassing still doesn't mean that they wouldn't want a law to charge them with peeping as well.
But as I understand it the reasonable expectation of privacy generally applies to stuff like your home, or a private dressing room (if talking about actors) and so forth.
Thus if someone is having a shower in the upper bathroom and you have climbed up a high tree off their property to look through an open window that they would not expect anyone to climb because wtf, then you are probably a peeping tom and violating a peeping tom law if it exists in that state.
> But you still cannot legally bug your neighbor's apartment.
You... cannot. But the police? Issue a warrant and they don't even have to risk, as in the decades before, having a spy physically break into the building or the target discovering the bug.
The secret services (CIA/NSA) probably even have their direct uplink to Amazon, or at least under-the-table cooperation established, to snoop on foreign people "for antiterror reasons" (aka helping out allied services, just like the BND and MI5 with snooping on global Internet traffic).
But if they set up their own microphones in their home that are accessible through a public IP address, you can play back the audio that they bugged themselves with.
The only things in my pocket are my wallet and keys. My phone[1] could probably be used for remote surveillance, but without a warrant, that's a crime.
Note: intent and expectations matter. A cellphone by itself (without any type of "voice assistant" app or feature) is only intended to record audio when it's being used to make a call. Yes, malicious actors or buggy software could enable the microphone, but that's not the expected behavior of the device. Kyllo v United States is all about what the public understands and expects about a technology.
You're trusting that the microphone is not listening, which is about on par with trusting any device that claims to only store stuff in a local buffer, except when commanded to talk to the cloud.
I think to your point, there is possibly more legal protection around wiretaps than voice assistants. However, how many apps have access to the microphone? Are these apps covered by the same protections as making a phone call using POTS?
Edit: I see you've linked that your phone is an old-school landline, which doesn't really help people understand the legalities around smartphones, which many feel are a requirement to daily life.
Can the police right now go to Amazon and demand to listen to anyone's Alexa in real time? Can the NSA go to any landline provider and demand a secret list of a person's entire call history (or do they already have this)?
The biggest profile case where Alexa audio recordings got turned over was during a murder trial after a judge's orders.
I am not entirely clear that the ramifications of Kyllo v. United States are as dire as OP is stating for Alexa-like devices. How exactly do the police freely gain access to an "in general public use" Alexa-like device that is secured through something like an Amazon account and a WPA password-protected WiFi network? Amazon would have to turn over credentials, or the police would have to hack your account or WiFi somehow...
They don't, Kyllo v United States isn't about gaining access to someone's devices. The police used their own thermal camera to search Kyllo's residence:
>> In order to determine whether an amount of heat was emanating from petitioner's home consistent with the use of such lamps, at 3:20 a.m. on January 16, 1992, Agent Elliott and Dan Haas used an Agema Thermovision 210 thermal imager to scan the triplex. [...] The scan of Kyllo's home took only a few minutes and was performed from the passenger seat of Agent Elliott's vehicle across the street from the front of the house and also from the street in back of the house.
Okay then what does this have to do with someone's private Alexa device? Are you suggesting the police will install their own Alexa device at someone's house?
> My phone[1] could probably be used for remote surveillance, but without a warrant, that's a crime.
That's a neat phone, but we already know pretty much all phone communications are intercepted and recorded. This is why I do all of my communications over IPoAC[1].
> The only things in my pocket are my wallet and keys. My phone[1] could probably be used for remote surveillance, but without a warrant, that's a crime.
Sure, a bit naive, but I digress. Even if it's illegal, it would be better if it were simply not possible in the first place. The problem is optionally having that functionality at all, rather than building solutions with privacy in mind from the start.
> without any type of "voice assistant" app
I don't think you can uninstall Siri or Google Assistant on either iPhone or Android, so that point is kind of null. Only with root or a custom ROM will you be able to do that.
you have to give explicit permissions for those things. google assistant doesn't work on my phone because i never gave it permission to enable that feature.
> you have to give explicit permissions for those things. google assistant doesn't work on my phone because i never gave it permission to enable that feature.
I wouldn't trust that whoever sits in power doesn't have a way of overriding that.
Cellphones with voice assistants are already in general public use, even if you don't have one. So if general public use is enough to justify warrantless surveillance, we're already there.
A cellphone doesn't necessarily have the voice assistant enabled, and works just fine without it. An Echo, Homepod or Google Home without the voice assistant is basically useless (I don't know whether you can even disable the VA).
Supposedly your smart phone and smart TV are not passively listening to your offline conversations. That's only done on a targeted basis to people who are being monitored. Your location history however is being passively tracked.
Not defending it, but your smart phone is not quite as bad as Alexa. At least for the moment.
iPhones support “Hey Siri” 24/7 unless you turn it off. (As someone else said, google phones support “OK Google” as well)
The only additional danger from an Alexa device is that you are trusting amazon as well, but the core functionality of an Alexa device to always listen for the wake word and communicate to a server if it thinks it heard the wake word locally is the same as what most smartphones have.
You'll note that "OK, Google" works when your phone is on flight mode. Google aren't constantly sending your microphone data to their servers, the recognition of that phrase happens on your device.
Mine was activated many times in the past when it shouldn't have been, e.g. while watching TV (no, nobody said "OK Google"), and my mom has a tendency to somehow activate my phone by just talking...
Once it is activated, it starts recording and sending audio data to google.
Every such device has a false positive rate on keyphrase activations, and of course they do: there is only limited processing available in those phones (which also try to conserve power), and people tend to mumble sometimes, so you have to be a little generous and balance false positives against false negatives, the latter of which people refer to as "shit don't work".
Devices in lawyers' home offices aren't any different. And if the device encounters a false positive and activates, somewhere on some server some recording of your confidential conversation may get stored. Or worse, you buy 100 copies of "The Art of the Deal" off amazon by accident.
Even worse if your client's name, or their wife's or mom's name, is Alexa... or Alexandra/Alexander, which also cause a lot of false positive activations, or so I heard from a friend of a friend with that name.
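The balance described above can be illustrated with a toy sketch (all scores and thresholds here are invented, not from any real detector): raising the decision threshold trades false positives for false negatives.

```python
# Toy illustration of the wake-word threshold tradeoff.
# The detector emits a confidence score per audio frame; we sweep the
# decision threshold. All numbers are synthetic, chosen only to show
# the shape of the tradeoff.

# Scores the detector produced for audio that really contained the wake word...
positives = [0.91, 0.84, 0.77, 0.62, 0.55, 0.49]
# ...and for background speech / TV noise that did not.
negatives = [0.12, 0.18, 0.31, 0.44, 0.52, 0.58]

def rates(threshold):
    """Return (false_negative_rate, false_positive_rate) at a threshold."""
    fn = sum(s < threshold for s in positives) / len(positives)
    fp = sum(s >= threshold for s in negatives) / len(negatives)
    return fn, fp

for t in (0.3, 0.5, 0.7):
    fn, fp = rates(t)
    print(f"threshold={t}: misses {fn:.0%} of real wake words, "
          f"triggers on {fp:.0%} of background noise")
```

With the low threshold nothing is missed but background noise triggers often ("recording occurs silently"); with the high threshold nothing false triggers but half the real requests are missed ("shit don't work").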
My little brother's name is Alexa (not short for Alexander, just Алекса/Alexa). There are no Echo devices in the household to say the least. I do like to joke "This is so sad. Alexa, play Despacito" :)
Yes, they can't constantly send microphone data even if they wanted due to power constraints. They use a dedicated DSP chip with the sole purpose of detecting the hot word.
Then stab the mic a few dozen times, feed the hotword to the DSP with a button, or better yet use another DSP to bridge your own custom interface to the "hotword sniffer".
Typing to an Alexa device should be an option, just like having an AI chat.
Maybe you do care but your significant other or roommate doesn't (and doesn't want to understand because... inconvenient).
This is possibly the biggest single problem with modern surveillance/privacy culture. The idea of requiring an individual's consent to something that is potentially a risk for them is laudable, but it's also largely a symbolic gesture when some of the largest and most powerful organisations in the world are duping all of their friends into providing much the same information without their consent or even necessarily their knowledge anyway. The whole business model is fundamentally flawed, and privacy laws are a long way from catching up.
Choose friends and partners wisely. If your roommate is doing something to endanger you let them know or change roommates. What if they were doing hard drugs? Would you live under the fear of cops busting down the door?
Steve Jobs said in an interview that Apple did pretty well on the privacy thing. He wanted to ask the user for permission to access private data; others wanted to take it all without asking.
That gives me enough information to guess there are people who would make apps that take private data without asking; whether the option to turn off the privacy feature blocks them all is the question.
Sure, and nobody who cares about privacy would own an always-on smart home device from Amazon or Google. But you don't always know if one of those devices is listening, such as when going to a friend's house.
Wonder if the whole voice recognition could be done on device? No voice sent over the net, firmware updates to improve recognition. Then just call the needed APIs to activate services etc.
What I don’t understand is that my Apple Watch definitely recognizes “hey Siri” and then does speech to text without internet access, but then (sometimes) cannot execute the command without internet access. That last part seems like the easiest.
That sounds like an out-of-context composite of concepts. It was in the context of 'search' vs 'in plain view' as distinctions, and general availability was meant in terms of what was acceptable for technology to count as 'in plain view'.
The thermals were 'in plain view' technically but given that the technology was highly expensive and of limited application it was clearly disingenuous if the general public couldn't access it.
Essentially the argument is that just because a signal is emitted doesn't mean that the police have an automatic right to observe or decode it beyond a common observer.
Essentially if they had patrol bots on the street with human level hearing it is still 'in plain view' levels. If they started pointing parabolic microphones or laser mikes at people's houses that would be beyond plain view and into an outright search.
If that suggested interpretation of the jurisprudence were applied, then audio bugs should have long been perfectly legal and not a search, as nearly everyone is already capable of hearing sounds and remembering them.
Those were about using devices from outside the home though. Physically going inside a home and planting a microphone. Installing something like this into a house by law enforcement would still be a step beyond that ruling.
My girlfriend celebrates passover which involves discussing "Elijah". The way she pronounces that name (she's a New Yorker) triggers her Alexa device, to much hilarity.
More soberly: I had no idea what Elijah stands for, so I asked the device, which recited part of the Wikipedia article. It then said, "while I have your attention, here are some notifications you missed" and began to recite the first of her messages from work, which are definitely not for public consumption. (She was able to throttle it, but still: what if I had asked and she had not been present?)
I have no doubt they are able to hear anything happening in the room they happen to be installed in. But I doubt they listen for anything but their keyword, and I seriously doubt they listen, record and send off to their overlords what they hear on a massive basis.
However, they clearly CAN, and while they almost certainly don't on any massive scale, if they, by order of their overlord or via a security hack, did listen, record and send conversations on a targeted basis, that would be very possible, and very harmful for whoever was targeted.
I can't imagine the NSA and other covert organizations not drooling at the chance to do just that. I'm also fairly certain those devices are not secure as by and large the tech industry has a complete failure rate at making anything computing related secure.
2006: The FBI admits for the first time to turning a dumb phone into a microphone to listen in on some New York mafia guys' conversations at a restaurant, in the early days, even before smartphones made it easier to do so. They publicly used the recordings to convict them (meaning it was probably an old tactic by then):
> The FBI appears to have begun using a novel form of electronic surveillance in criminal investigations: remotely activating a mobile phone's microphone and using it to eavesdrop on nearby conversations.
>However, they clearly CAN, and while they almost certainly don't on any massive scale, if they, by order of their overlord or via a security hack, did listen, record and send conversations on a targeted basis, that would be very possible, and very harmful for whoever was targeted.
Besides the fact that they have better antennas, this is true for pretty much any device with a microphone and a processor with audio processing capability, no?
My laptop has a microphone and video. Surely Microsoft could just start recording with the right Windows update?
My Avaya VOIP phone has speaker phone, could do the same.
> My laptop has a microphone and video. Surely Microsoft could just start recording with the right Windows update?
It's a commonly mentioned issue, and a somewhat regular occurrence with various malware or "security solutions".
That's why a number of laptops have either visual indicators or shutters (though mostly for the cameras, sadly hot mics are rarely a consideration), and a rare few have physical disconnection switches (again mostly for cameras).
Presumably power is disconnected by the T2 chip. Different from mechanical disconnect via a physical switch, but equivalently effective if you trust the T2 chip. (And if you don’t ... well, you can’t use the MacBook securely at all.)
You had better not trust the T2 chip, because it is vulnerable to the checkm8 exploit and the checkra1n folks have already demonstrated total compromise. The encryption functionality isn't affected if you have FileVault on, because your password is not stored anywhere on the device, but everything else, from basic SMC functions like mic/cam/fans/touchbar to secure boot to verifying the microcode and ME firmware before loading are totally useless now.
AFAIK, the T2 is always powered on even when the main CPU is off, so this could have ultra-long-term persistence.
Perhaps within the hazy, inextricable mists of jack, pulse and alsa, there's an answer. I remember feeling qualified for a job at NSA after getting midi and my mic to work (effectively) on my Arch (Mint, Ubuntu, Debian, etc) laptops years ago. It was a feat. I can see:
Hacker1: we've got root.
Hacker2: yeah, but how the fuck do we get audio to work?
Hacker1: modify grub to default to windows partition and reboot, or maybe install Virtual Box and run windows. Probably not possible otherwise.
Hacker2: No, waaay too slow, they're on an ATT gateway. Find a softer target.
I think the idea there is that it physically disconnects the mic, since the laptop instead tries to use the headphone mic (which in this case isn't connected). Not sure if it would actually work, it would be easy to 3d-print one to test.
If you connect a plug to the audio jack, sure, the laptop probably defaults to using that jack for input and output. But you can still choose to use the built-in mic, right? So it's not physically disconnecting anything, just triggering a software disconnect.
AFAIK you're correct. Though it's possible most spying software just uses the default mic and doesn't have the capabilities to switch the input device?
Not a Mac user, and the last time I played with sound stuff on *nix was almost 15 years ago...
But I feel like switching input (or even collecting multiple sound inputs) on a *nix OS is not much more convoluted than dealing with the sound stack in general. But that's probably also related to how the driver itself interfaces with the audio device...
> Surely Microsoft could just start recording with the right Windows update?
Yes, that's why automatic updates were seen with a lot of skepticism at first. And the reason automatic updates are winning is that everything is so full of holes that disabling them is even less secure than enabling them.
> Surely Microsoft could just start recording with the right Windows update?
Yes, they could. I get some scepticism when I say my businesses have declined to move to Windows 10, mostly because we don't trust the security and privacy aspects. However, we deal with personal data, and we also deal with various types of information that are protected by statute and/or contract. It would be very obviously against the spirit and quite possibly against the letter of several different laws for us to knowingly store and use that kind of data on a system that sends information we can't control up to the mothership and/or that could be updated without us agreeing to it. Indeed, there are multiple ongoing investigations into Windows 10 because of exactly those kinds of concerns across the EU right now.
It's a good thing that non-technical people who do take privacy seriously, such as lawyers, are starting to notice the glaring problem with these modern technologies and advise against using them. With a bit of luck, we'll then get statutory regulation requiring disclosure in advance of exactly what these devices are doing and the security and privacy implications including when they don't work properly or if they get compromised. Again, there had been much discussion in Europe recently, at least up until the world was more concerned with another kind of virus over the past few weeks, of requiring much stronger standards for security in IoT devices, and those sorts of laws really can't arrive soon enough. Mandating controls to physically isolate all cameras, microphones, transmitters and receivers at the hardware level might not be a bad idea, either.
It should be noted that the device is not intended to record conversations absent the trigger word, however there can be false positives:
> According to Amazon’s website, no audio is stored unless Echo detects the wake word or is activated by pressing a button. But sometimes Alexa appears to begin recording without any prompt at all, and the audio files start with a blaring television or unintelligible noise.
It doesn't even matter how sincere they are. False negatives hurt the perception of the product, because then people need to repeat the key word. False positives do not hurt the perception of the product, because the recording occurs silently. Their incentives are aligned against privacy, and so privacy will suffer.
False positives do hurt the perception, because the device will usually do something or utter a response, unless it fails to detect anything intelligible or writes the request off as not intended for it. (Having two devices at opposite ends of a large open-plan living area can sometimes get frustrating, but looking at the history, you see how often they correctly determine which device should respond.)
An occasional false positive is funny, but it gets old pretty quickly.
That's not true. Amazon can have a significantly more accurate classifier running on their backend that processes the transmitted audio. That classifier can be more accurate because it has both more computation available and access to the entire data stream, not just the initialization words. You would never know that your data was sent to Amazon and then ignored.
They can, and they do. You can see that in your history because you can listen to the audio of things it decided subsequently was not intended for it.
But unless you have multiple devices, those make up a relatively small proportion of requests, and hence most of the time false positives are something they have a strong incentive to stop, because they cause negative user experiences.
[If you have multiple devices it will happen "all the time" when a request is genuinely meant for a Alexa device but more than one device hears the request, but then it has no UI effect - one device will usually answer, and the other(s) will stand down; this is by no means perfect, but it works reasonably well]
Obviously, any device which is mic'd and connected is a risk. And devices like the Echo or whatever Google and Apple sell have a permanently hot mic, since they keep trying to match the keyword, so a "live" indicator would be of very limited use.
Unless the trigger-matching was performed by some sort of tamper-resistant secure module which would also trigger the indicator and not feed any data to the rest of the system, but then you'd have to trust that that is properly implemented as well.
I guess a suspicious mind could check that by looking at whether there is traffic between the device and the cloud servers. And the paranoid mind (who wasn't paranoid enough to not have such a device, oddly enough) could allow the device 'net access only when desiring to use its capabilities (though obviously there's no guarantee said device wouldn't have been buffering hours of conversation while offline).
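One rough way for that suspicious mind to run the check is per-device upload accounting from the router. A minimal sketch, with invented byte counts and an arbitrary burst threshold (a real device's keep-alive chatter and a real router's counters will differ):

```python
# Sketch: flag intervals where a supposedly idle smart speaker uploads
# noticeably more data than its usual keep-alive chatter.
# upload_bytes_per_minute would come from your router's per-device
# traffic accounting; these values are invented for illustration.

upload_bytes_per_minute = [900, 850, 920, 880, 45_000, 910, 60_000, 870]

def suspicious_intervals(samples, baseline_factor=10):
    """Return indices of samples exceeding baseline_factor times the median."""
    ordered = sorted(samples)
    median = ordered[len(ordered) // 2]
    return [i for i, b in enumerate(samples) if b > baseline_factor * median]

# Minutes 4 and 6 stand out against the ~900-byte baseline.
print(suspicious_intervals(upload_bytes_per_minute))
```

This only tells you that audio-sized uploads happened, not what was in them, and of course it says nothing about data buffered while the device was offline.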
They do accidentally trigger very often, though. If you have Alexa devices and check the history, it often records fragments of things that were not intended for it.
If I were to have a very sensitive conversation, I'd unplug them.
But day to day it really does not concern me (or I wouldn't have four of them around the house...)
Where is the keyword 'Alexa' recognized? On the device, or back at Amazon towers? If the latter, then every sound is transmitted back to grep for 'Alexa'. Of course we believe they don't store that other noise data.
Both. First on the device, then in the cloud. The latter determines if it's a "false positive" or not. AFAIK there is no way to remotely start recording up to the cloud.
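The two-stage design described here can be sketched as follows: a cheap on-device spotter with a permissive threshold, and a heavier cloud-side verifier that sees the whole clip and may reclassify the trigger as a false positive after the fact. All names, scores, and thresholds below are illustrative assumptions, not Amazon's actual implementation:

```python
# Hypothetical two-stage wake-word pipeline (stubbed models; the
# scores and thresholds are made up for illustration).

def on_device_spotter(audio_frame: bytes) -> float:
    """Cheap local model: returns a wake-word confidence score.
    Stub: pretend frames containing b'alexa' score high."""
    return 0.9 if b"alexa" in audio_frame else 0.1

def cloud_verifier(audio_clip: bytes) -> bool:
    """Larger backend model with access to the full clip.
    Stub: a stricter check that weeds out false positives."""
    return audio_clip.startswith(b"alexa")

def handle_frame(audio_frame: bytes, follow_up: bytes) -> str:
    # Stage 1: only if the local score clears a permissive
    # threshold does any audio leave the device.
    if on_device_spotter(audio_frame) < 0.5:
        return "discarded locally"
    # Stage 2: the cloud re-checks with the full clip; a rejected
    # trigger is what shows up in your history as a false positive.
    clip = audio_frame + follow_up
    return "accepted" if cloud_verifier(clip) else "false positive (logged)"

print(handle_frame(b"alexa what time is it", b""))  # accepted
print(handle_frame(b"quiet room", b""))             # discarded locally
print(handle_frame(b"she said alexa maybe", b""))   # false positive (logged)
```

The key point the sketch illustrates: once stage 1 fires, audio has already been transmitted, regardless of what stage 2 decides.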
The cloud can push changes to the wake word (changing it via the app is possible), and it can add new ones; the real question is whether or not Amazon's servers can instruct it not to turn the light on for certain wake words.
You can see this in the home-defense feature on the Echos, where the sounds of glass breaking and wood splintering can be added as wake words so it can notify you and/or the police in the event of an intruder.
I like that despite all the comments, nobody called me on "by and large the tech industry has a complete failure rate at making anything computing related secure. "
Which I figured someone would raise a fuss about. It makes me sad that we can say that and just accept it. It shouldn't be this way.
Now that they've publicly claimed several times that they don't do that, I think it would really erode trust in the product if they changed the policy.
> But I doubt they listen for anything but their keyword
> I can't imagine the NSA and other covert organizations not drooling at the chance to do just that.
Why doubt? Why imagine? You know that, for data on big corporations' servers, the NSA already gets a copy of everything - and everything is recorded forever without you really being able to delete it. Why would this be so different for voluntary self-espionage devices (Alexa, Dot, etc.)?
It's a reasonable assumption that a lot more recording happens than is claimed.
> You know that, for data on big corporation's servers, the NSA already gets a copy of everything
This is squarely in conspiracy theory realm. Nothing in the Snowden documents or other leaks hinted at this (except for the NSA snooping on unencrypted internal links, which is useless now that everyone encrypts everything).
> Why would this be so different for voluntary self-espionage devices (Alexa, Dot, etc.)?
Why would a big tech company willfully infringe on their customers' privacy? How would they even do it? They have every reason to prevent it.
The Snowden documents are old now while this has become the hottest gift.
"Why would a big tech company willfully infringe on their customers' privacy?" To comply with a national security order. These companies are not lawless entities fighting for good.
I don't think an NSL can even be used to request the actual content of the recordings (vs. "just" the metadata) - a judge would have to sign off on that.
Now, clearly there's a lot of questionable practice like secret courts, gag orders and ineffective oversight in general, but this is still far from "for data on big corporation's servers, the NSA already gets a copy of everything", and very far from "being forced to deploy a firmware update that turns a smart home device into a bug".
Companies act exclusively in their own interests, which I would hope include not engaging in covert surveillance of their own customers.
This will only stay this way if we keep demanding and expecting privacy, and push for stronger oversight. But defeatist claims like "the NSA gets all data anyway" are not useful.
> This is squarely in conspiracy theory realm. Nothing in the Snowden documents or other leaks hinted at this (except for the NSA snooping on unencrypted internal links, which is useless now that everyone encrypts everything).
This is absurd. Plenty of leaks clearly indicated that they succeeded at breaking different types of SSL.
It's a very reasonable concern. I used to have a Google Assistant in my home office. I've muted the mic and just use it as a Bluetooth speaker.
However, the much bigger concern should be your phone, which has a mic, is internet connected, and is almost always listening. You need to disable Google Assistant, Siri, etc. there too.
Oh, and that doesn't just go for your home. That's in the office, at a client, on the train, in an Uber... Everywhere you take your phone.
How did you mute the microphone? Through the software, which is controlled over the network, or with judicious application of a screwdriver? The latter would be the only way I would trust.
My understanding is that the "microphone mute" button on Alexa devices is a hardware switch that cuts the wire to the microphone, at engineering's insistence.
I'm rather short of praise for Amazon but if this is still the case, great.
6 years ago I used to work for a translation agency. We used to get some very confidential (and very interesting) texts from banks. We were not allowed to put a single word from client files into Google Translate, and this stuff is 10 times worse. Having a permanent mic from a large corp in your home is never a good idea...
I don’t entirely get the cognitive dissonance of worrying about a microphone+speaker sitting on your table when you probably have a microphone+speaker+camera+face-scanner etc... in your pocket.
Same story for most laptops.
Wrong threat model. Those devices only record maliciously if they're hacked. Smart speakers/home assistants inadvertently record during normal operation. See: all the news stories about owners discovering their recordings in their Google/Amazon account.
If you turn Siri off, it doesn't activate by accident, and the iPhone still works as intended. If you turn off Alexa, the home assistant doesn't activate by accident because it doesn't work at all.
At that point why bother getting it in the first place? The whole appeal of it is that you can just say "hey google" from anywhere in the house and it would answer you. If you have to walk over and turn it on, you might as well use your phone.
I got rid of smart speakers a few months ago after being enriched with everyone's perspective on this forum; tech is my hobby, not my career. I have zero regrets. It's a little inconvenient now to not be able to say "play [music]" or "play [show]" and to have to use a remote instead. But the peace of mind that I'm not being recorded more than pays for it.
Welcome to a world where the general consumer's choices, coupled with more and more private capitalization of essential services, dictate life for the rest of us.
In this case it's pretty easy: don't buy these garbage products to begin with. Phones, on the other hand, are a bit more difficult to find viable alternatives for in many cases.
Until we have a home assistant that can be operated entirely offline, without going to the cloud, there is always going to be this risk. I don't see any of the big players other than Apple actually designing a system like this.
The performance of such a system would be bad compared to an internet-connected one (frequent speech-model updates and more compute), and reviews would pillory the device for a trade-off that is nebulous to most: privacy (unseen) vs. performance (seen). Try using a slightly older (early-'00s) car with voice commands to get a sense.
Yes, this could happen, but ideally sending data to the cloud would be opt-in instead of opt-out (like, for example, the telemetry prompts for Windows/macOS when you first install).
I grew up in a communist dictatorship, where illegal wiretapping by the government is still an open wound in society's culture. Given that, I cannot understand who willingly buys this crap and puts it in their house.
A couple of days ago, I lost all respect for a coworker when we were in a meeting and somebody said something which woke up the Echo he had on his desk.
Do you carry a smart phone? Does it not have a microphone designed to have hands free conversations and pick up audio from a distance? Does it have the ability to constantly track your location? Does it contain most of your private, electronic communication?
If we're talking about illegal wiretapping, a smart phone is significantly more risky and problematic than a smart speaker.
>The firm worries about the devices being compromised - less so with name-brand products like Alexa, but more so with cheap knock-off devices, he added.
Speaking of Ring knock-offs:
In these days of self-imposed isolation, this hands-free, motion-activated, connected door knocker seems pretty useful for scaring away unwanted visitors without spreading germs:
> You can go and listen to them yourself, and delete them.
I've been in the business long enough to know that if something is digitized, it often never goes away. You don't know (nobody does for sure, actually) what the internal retention policy for this information or its metadata is. You can delete the recordings, but the only thing you know for sure is that you, and you alone, cannot access them anymore.
Only after the wake word has triggered the "start interpreting audio" code. These things are NOT constantly streaming everything they hear to the cloud (and this can be easily confirmed by monitoring traffic from the device's IP address on your network). That is a big distinction.
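The "monitor traffic from the device's IP" check mentioned here amounts to tallying upstream bytes per device over time: an always-streaming device shows large, roughly uniform traffic, while a wake-word device shows near-zero idle traffic with spikes around queries. A minimal sketch over pre-captured (time, source IP, bytes) records; in practice you would export these from tcpdump or your router, and all values below are made up:

```python
from collections import defaultdict

# Hypothetical packet records captured at the router:
# (seconds since start, source IP, upstream payload bytes).
records = [
    (0, "192.168.1.50", 200), (1, "192.168.1.50", 180),          # idle keep-alives
    (60, "192.168.1.50", 40_000), (61, "192.168.1.50", 42_000),  # a voice query
]

def upstream_bytes_per_minute(records):
    """Bucket upstream traffic per device per minute."""
    buckets = defaultdict(int)
    for t, src, nbytes in records:
        buckets[(src, t // 60)] += nbytes
    return dict(buckets)

usage = upstream_bytes_per_minute(records)
# Constant high-volume buckets would suggest continuous streaming;
# the pattern here (380 idle bytes, then an 82,000-byte spike) is
# what a wake-word-gated device looks like.
print(usage)
```

This of course can't rule out low-bitrate exfiltration hidden in keep-alive traffic; it only makes bulk audio streaming visible.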
I'm willing to believe that it's only after it's been triggered by the wake-word processing. The problem is that a lot of things I say that aren't the wake word still trigger a wake-up. If you've got one of these, turn on the accessibility option to make a noise when it wakes up... I'm not a heavy user, and more than half of the wake-ups aren't intentional.
>Amazon and Google say their devices are designed to record and store audio only after they detect a word to wake them up. The companies say such instances are rare, but recent testing by Northeastern University and Imperial College London found that the devices can activate inadvertently between 1.5 and 19 times a day.
How can a device activate 1.5 times a day? Does it have to activate right before midnight then deactivate the same amount of time after midnight? Technically that's still activating on a whole day. Or can some devices half-activate?
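The fractional figure is a rate averaged over the study period, not a count for any single day. With hypothetical whole-number daily counts:

```python
# "1.5 activations per day" is an average over many days, not a claim
# that a device half-activates. Hypothetical week of daily counts:
daily_activations = [1, 2, 0, 3, 1, 2, 1]  # whole numbers each day

average_per_day = sum(daily_activations) / len(daily_activations)
print(average_per_day)  # 10 / 7, roughly 1.43 activations per day
```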
They gave all the context required to share it without the connection "Lawyers warned Alexa is hearing confidential calls" requires. This could be telling of Bloomberg's expected/desired readership.
I want and expect it to listen to everything. Sure, it can be used to harm me, but that wouldn't be a good business decision; on the other hand, they can use the data to improve the product and service for me.
Maybe not you specifically, but it can be hard to say. Generally though, by making people unhappy with what they have and by convincing them that they need to always be spending money on something new to find happiness.
I've found a lot of improvement in my life satisfaction (and the size of my savings account) since getting rid of ads almost entirely. I've also found that I never feel the need to be in a hurry to buy something. Buying seems to be a slower process -- whenever I want the thing and feel good about it, not just because it's hot right now or I'll miss a deal. I know there will be another.
This may not apply to everyone but it was my experience. Clearly not a big fan of the ad industry haha.
You don't think it might be better to assume all this data will leak or be hacked? You most likely have decades of life ahead of you, and once a file is free on the net, it's there forever.
We seem to agree about the level of risk. It's no more my place to tell you how comfortable you should feel with that risk, than it would be for me to tell you what genre of music should be your favorite. That's just personal preference.
I think it's important to not overly freak out about these home assistants. Your phone company or video conferencing company is already listening to your confidential client calls. Your email provider is reading your email. Your internal applications run on Amazon's servers. Your upstairs neighbor has their ear to the floor. I think lawyers are used to thinking "as long as it's not in writing, we're okay", but with speech recognition getting better and better, your phone calls / VCs are going to show up in court someday.
You treat these things as third-party listening devices - bugs, because that is exactly what they are - and adjust your conversation accordingly unless/until they are unplugged.
Look at it from a lawyer's perspective: imagine your privileged conversation with your client did leak. Your client sues you. If there was a surveillance speaker in the room it doesn't matter if it leaked because of it. You just demonstrated extreme lack of care by discussing privileged information in front of a networked microphone.
Voice mails and instant messages have been produced in court cases for many years. Even some famous cases have hinged on that.
There's also a thing where if a communication is privileged, and you accidentally produce it, you may not be able to unscramble the egg just because in principle it was covered by lawyer-client privilege. So there is a real risk to recording things even if there are rules protecting you.
Your phone or video conference company is providing the platform that allows you to have the conversation in the first place. Same with your email provider, same with applications running on the cloud. Those are the infrastructure we pay for monthly to support business and communication.
If my neighbor was listening to my conversations AND running the third largest ad platform on the internet, it would alarm me.
I think in all cases the service provider has a pretty strong incentive not to violate your trust. You'll cancel your Zoom subscription if they start feeding your meetings to the party suing you. You'll stop buying from Amazon if Alexa starts feeding your meetings to the entity you're engaged in litigation with. In both cases, the incentive exists to keep your confidential meetings confidential.
But, incentives are not crypto. There is probably nothing nefarious going on, but if you use open-source software with strong cryptography, a service provider that wants to be nefarious is simply unable to do so. That is what lawyers should be aiming for; freaking out about Alexa is a feel-good stopgap at best.
> “Perhaps we’re being slightly paranoid but we need to have a lot of trust in these organizations and these devices,” Hancock said. “We’d rather not take those risks.”
Seems like a pretty reasonable measure and advice. The baseline assumption you make is that you know the who, what, where and when of Amazon sharing data.
Does Amazon actually have an incentive not to violate your trust? You could make the same argument about Facebook, and they have routinely violated users' trust. It seems the real lesson is that without legal protection for consumers, companies will continue operating in the gray area.
It doesn't. They're just not the only evil ones, and thus I find it's wrong to use such a title. Either call them all out or none. "Always-listening voice assistant" would have worked well in place of Alexa.
[1] https://caselaw.findlaw.com/us-supreme-court/533/27.html
[2] Used throughout the ruling[1], but especially section II of Justice Stevens' dissent.
[3] The ruling[1], 2nd paragraph