I am a private investigator. By definition, that means I investigate things that are… well, private. I snoop around in people’s business to find out what’s going on, and then I present the evidence back to the client.
I am also a privacy advocate.
You’d be forgiven for thinking this is a mismatch, because it is a fine line to walk. But it is possible, in my opinion at least. And I’m in good company. Jaya Baloo, all-round badass and CISO at KPN Telecom in the Netherlands, presented a scathing commentary at the FT Cyber Security Summit last year. She ended her speech with this quote, which I thought was excellent:
But there’s a line to be drawn, right? There has to be. I investigate things that people don’t want me to find out.
This isn’t an easy area to work in for a whole load of reasons, not least because investigations often get quite complicated.
Let’s take a look at an example case, based on one I worked a couple of years ago (some of the details have been changed, but you’ll get the general idea).
Case Begins: A potential client calls me. Someone is sending her abusive messages. They are racially charged and becoming more threatening in nature. She has already been to the police, and hasn’t had much luck. She wants me to find out who is behind these messages.
Background: Most of the messages are being sent via email. The email address is not one with which the client is familiar. There are also two Twitter accounts which are being used to tweet abuse at her account. The perpetrator has emailed her on both her work email and her personal one.
Methodology: The first thing I do is get as much detail as possible about everything that is happening. My client gives me the login details to both of her email addresses and her Twitter account (I neither require nor request this; she volunteers both).
I log in to her email accounts and read the emails the perpetrator has sent. She is not particularly tech-savvy and is nervous about forwarding them to me in case any information is somehow lost along the way. She has deleted a few – the ones which were sent very early on – but she remembers roughly what they said, and gives me an overview of the content of these.
I don’t log in to her Twitter account, because I don’t need to. She has received no direct messages on Twitter from the perpetrator, only mentions. The perpetrator did not lock their Twitter accounts, so I read the tweets in question. The profiles have evidently been set up solely to tweet abuse, and only at this one victim. A quick search for the perpetrator’s email address brings back nothing. While this doesn’t necessarily mean it’s not used for anything else, I decide to work on the assumption that it was set up solely for this purpose for now.
This is interesting. It is possible, of course, that the perpetrator has multiple Twitter accounts, each used to tweet abuse at a different person. However, this seems unlikely. That would be a case of “troll” type abuse, where someone is just as unpleasant as they can be online, to a wide range of targets. Generally, those people don’t bother with separate accounts for different people: they spew hate everywhere indiscriminately. The fact that both Twitter accounts are used to direct abuse solely at my client points to this being someone she knows offline.
I file this away in my mind and continue my investigation. Crucially, I do not report the accounts to Twitter at this time. I might need them later, or the perpetrator might tweet again and give something away. In the interests of the investigation, it is important for the accounts to remain active for the time being.
I then turn to the email accounts. I google my client’s name and discover that her work email address is easy to find. It’s listed on the company website beside a nice big photo of her, which is the same one she uses across all her social media accounts, including some that she might not want her employer finding. I make a note of this: I will tell her about it when I send over my final report, and will recommend that she change her profile picture on a few sites.
Her personal email address, however, is not so easy to find. My client does not use her home PC very often, and her personal email is one of those auto-generated ones that you get when you start an account with an ISP.
Google brings back no results for her email address. It brings back a few results for the first part of it, which is her name followed by a number. However, these are old profiles and might not even be hers: her name is not uncommon.
This points even more strongly towards it being someone she knows.
Using the details she gave me, I log into her personal account and check the ‘Sent’ messages. There is nothing there. In the inbox are a few messages from her ISP, her husband and some shopping websites. Her spam folder has nothing beyond the usual junk.
I log out and log into her other account. This one is much more active. I click on the first email and skim-read it. I then continue clicking through her emails. I’m not sure quite what I’m looking for, but I have a feeling this is the way forward. Lots of emails from colleagues about projects, various cat videos, a couple from her brother reminding her about social events or forwarding on family news.
About fifteen messages in, something catches my eye: a typo that seems familiar. I pause and read the message more fully, then I bring up the abusive Twitter accounts and scroll down them again.
I search for the person’s email address in my client’s inbox. I take the first five messages and check the headers. The IP address is not the one from which the abusive messages are being sent. But I have a hunch, and I continue.
I read through the emails from this person, and I compare them with the syntax of the Twitter account. I find several examples of typos or turns of phrase that are consistent across both. I note these down.
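Comparing writing quirks across sources can be roughed out in code. This is a minimal sketch, not how I actually worked the case: it finds words that appear in both text samples but aren’t everyday vocabulary, which is a crude proxy for shared typos or unusual turns of phrase. The sample texts, the misspellings and the common-word list are all invented for illustration.

```python
import re

def distinctive_overlap(text_a, text_b, common_words):
    """Return words appearing in both samples that are not
    everyday vocabulary -- a crude proxy for shared quirks
    such as recurring typos or unusual word choices."""
    tokenize = lambda t: set(re.findall(r"[a-z']+", t.lower()))
    return (tokenize(text_a) & tokenize(text_b)) - common_words

# Hypothetical samples sharing two misspellings.
emails = "That is rediculous. I definately think we should meet about it."
tweets = "You are rediculous and you definately know it."
COMMON = {"that", "is", "i", "we", "you", "should", "think",
          "meet", "about", "it", "and", "are", "know"}

print(sorted(distinctive_overlap(emails, tweets, COMMON)))
# -> ['definately', 'rediculous']
```

In practice you would use a much larger stop-word list (or word frequencies from a reference corpus), and treat any overlap as a lead to investigate, never as proof of identity on its own.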
I then go back to the abusive emails and look at all the headers, from every email sent. They change occasionally; it is likely that this person is masking their IP.
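Checking where an email came from means reading its `Received:` trace headers, which each mail server prepends as the message passes through it. Here is a minimal sketch using Python’s standard-library `email` module; the raw message, hostnames and IP addresses are made up for illustration, and real headers are messier (and the earliest hop can itself be a relay or proxy).

```python
import email
import re
from email import policy

# Hypothetical raw message with two Received headers, as servers
# would prepend them (topmost = closest to the recipient).
raw_message = """\
Received: from mx.example.net (mx.example.net [203.0.113.7])
\tby inbound.example.org; Mon, 1 Feb 2016 10:00:00 +0000
Received: from sender-pc (unknown [198.51.100.23])
\tby mx.example.net; Mon, 1 Feb 2016 09:59:58 +0000
From: someone@example.net
To: client@example.org
Subject: test

Body text.
"""

IP_RE = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def received_ips(raw):
    """Return IPs found in Received headers, topmost first.

    The last entry is usually the earliest hop -- often the
    sender's own machine, unless a relay or VPN masks it.
    """
    msg = email.message_from_string(raw, policy=policy.default)
    ips = []
    for header in msg.get_all("Received", []):
        m = IP_RE.search(header)
        if m:
            ips.append(m.group(1))
    return ips

print(received_ips(raw_message))
# -> ['203.0.113.7', '198.51.100.23']
```

Comparing that earliest-hop address across many messages is how a change in masking behaviour, or a single slip, stands out.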
Then I come to a particularly vitriolic message. It is one of the later ones; one of the emails that prompted my client to call me. It has a threatening undertone and is a scathing personal attack. It was obviously sent in anger.
I check the header. This one time, in a single email out of dozens, the perpetrator was so incensed that he – I now know it is a he – did not mask his IP address. He logged in to his email account while he was at home, and he was so angry that he forgot to take precautions.
The IP address in this single angry email matches the IP address of someone she knows, who has emailed her non-threatening messages from his personal account too.
Conclusion: I send all the evidence to the client. She is upset that someone she knows is doing this, but relieved to have found out who it is.
I shred my paperwork and give it to my incredibly cute infosec committee for further dismantling. I report the abusive accounts to Twitter and they are taken down. I tell my client to change her passwords so that I can no longer access her email accounts (not that I would, but security is important). I do not know what my client does with the information, but she has enough to take it to the police and hopefully to stop the perpetrator.
So what do we learn from this?
Obviously, not every case will be like the one above. However, there are a few pertinent points that apply to parts of the privacy debate.
People want you to be able to catch the bad guys
My client volunteered her email and social media logins. I would not have asked her for these, but she was desperate enough that she wanted me to have as much information as possible.
Now, not everyone will do this. But you might be surprised, in a desperate situation, how much information people are willing to hand over. If I knew a burglar had been in my house, I would allow the police to search it and fingerprint it. There is a general understanding that, if you want an investigation to be successful, you have to hand over a certain amount of information. Very few people have a problem with this.
The problem arises when investigators want access by default, or access where a crime might be committed at some point. I would let the police into my house to search for evidence about a burglary, but I would not give them a key to my front door and open access whenever they wanted, in case a burglary took place.
This is, in somewhat simplified terms, essentially what the whole Apple/FBI debate is about. It is particularly well summed up by this analogy:
And the one thing that isn’t mentioned there, of course, is that over the past three years or so we’ve been gradually discovering that law enforcement agencies have been metaphorically unlocking our houses wherever they can, and poking around in the name of national security.
On a related note, it is worth mentioning that the reason my client volunteered so much information was because she believed I was trustworthy. Lots of people no longer believe law enforcement agencies are intrinsically trustworthy, mainly because we’ve discovered they’ve been breaking the law and lying about it.
What you read isn’t the full story
Would it be useful to get into a terrorist’s phone and read all their messages to their terrorist friends? Probably.
Would your entire case rest on that? Probably not.
In a long, ranty and very enjoyable conversation with a couple of other forensic investigators a while ago, we debated this point.
Yes, the more evidence you have in a case, the better. Obviously.
Yes, sometimes criminals are good at covering their tracks and you need warrants, or to gain further access somehow.
But a case is built on a huge array of evidence. Especially if it’s something like counter terror or child protection, you can’t let it hinge on one thing. You need as much evidence as you can find, from as many sources as you can get. And in all investigations there will be something you can’t get access to. An account that’s been deleted (more on that in a minute). A house that’s burned down. A person who’s disappeared. A total lack of eyewitness testimony (which is fraught with difficulties at the best of times, but that’s another debate).
Pretty much any investigator will tell you that sometimes a lead just doesn’t go anywhere. Sometimes a lead that you know could find you more evidence about a case is inaccessible. It’s really frustrating. Trust me, I get that. But there are some things we just can’t do, for a whole variety of reasons, and in good investigative practice you’d then move on and find the next potential lead, or work through the evidence you already have to see if your case is solid enough to stand up beyond reasonable doubt.
Some things we can’t do because they’re impossible. Others we can’t do because they’re illegal.
I, as a private investigator, cannot break into someone’s house to look for evidence. I cannot impersonate a law enforcement official. I cannot hack into someone’s email account. I cannot grab my main suspect, drag them to a basement and pull out their fingernails until they tell me what I want to know. And I cannot enter their phone through a back door and gather all the information from it.
Law enforcement agents can do some of those things. But they’re not allowed to torture people, and they’re not allowed to force their way into phones either.
Hold up, scar. Did you just compare anti-encryption to torture?
Actually, yeah. I did.
Are they the same thing? No. Not at all, and I’m not claiming that they are.
However, the reason encryption is legal and the reason torture is illegal spring from the same concept: that humans have basic rights that should be upheld, even when it’s really damn inconvenient and you really don’t want to.
People have a right to privacy. They have a right to decide who accesses their data. We have provisions in place for overriding that right in certain instances, such as warrants to search a house. But the right to override a single individual’s privacy in order to search for evidence of a crime cannot put at risk the right to privacy of everyone else. That’s insane.
And then there’s just bad investigation
It makes sense to keep the public on their toes about terrorism. To make sure they don’t merrily go around picking up large suspicious-looking suitcases in train stations and lugging them to the nearest lost property office.
It makes sense to tell the public when we succeed at something, too. Sometimes a terrorist plot is foiled, and it’s good for the public to know that. They can’t always know exactly how it was done, because details like that can’t always be released, for a whole range of reasons.
But sometimes people get a little too enthusiastic about stuff like this. Take this claim from Reuters, for example:
People reading that would be forgiven for thinking it means that Twitter has suspended 125,000 people who sympathise with terrorist agendas.
But, as Gen Knoxx so eloquently points out, this is not the case.
If you shut down the account of a terrorist, that doesn’t shut them out of the internet forever. It doesn’t even shut them out of Twitter. All it means is that they can’t use that specific Twitter username anymore. So they create a new one – several new ones, probably – and they keep tweeting.
If we left some of these accounts up rather than closing them down, we’d be far more likely to be able to glean useful information from them. We could analyse a person’s tweets over time, note when they went quiet for a few days, compare that to known sympathisers we have on file. Over time, people become used to their own Twitter accounts, and they start to make mistakes. Constantly forcing them to shut down their accounts and open new ones keeps them on their toes and reminds them we’re watching, which is exactly the opposite of what you want if you’re trying to gather intelligence.
So what’s the answer?
Damned if I know.
I think law enforcement agencies have shot themselves in the foot by illegally entering our lives to an extent that few would consent to.
I think it’s important to build a strong enough case when you’re taking evidence to court to convict the guilty party.
I think it’s important to respect people’s basic human rights, and I think privacy is one of those rights.
I think it’s possible – sometimes challenging, but possible – to walk that line.
We just have to try very, very hard. Because if we start setting aside human rights because they’re inconvenient, then I worry for the future of humanity.