David Spreadborough, International Trainer, Amped Software

David, can you tell us a bit about your role and what it involves?

I’m the international trainer for Amped Software. First of all, Amped Software is a digital image and video company, and everything that we do has a forensic and scientific backing. It’s very easy to deal with an image or a video, but to deal with an image or a video forensically, with a scientific backing, requires a product that guarantees everything a user does is forensically sound.

My history is that I was a police officer for 24 years; the last 12 years were spent purely doing CCTV and image investigations, mainly from CCTV. I left in 2015, upon the closure of the Forensic Imaging Unit. Because I’d been aware of Amped Software, and of some of their products, I’d started assisting them with some ideas in order to help users. Then they offered me a job as their international trainer. I not only go around the world teaching other people to use the software, but I also do the research and development of ideas: getting ideas from users when I’m delivering training and working out how we’re going to put them into the software. I also do private analysis work, so if there are any challenges while I am conducting an investigation, we can solve those problems and then build the solution into the software as well.

So it’s quite an interactive process?

Yes, that’s right. If I can’t do something that I need to do, we figure out a way of getting it into the software.


Image authentication was one of the biggest challenges. It used to be that only very highly specialized people, with very high expertise in digital imagery, were able to do it. But now there’s more of a need for other people to be able to get the same results without that high level of knowledge. To do that we needed a piece of software that could meet the demand, and that’s what Amped Authenticate does.

You spoke earlier today at Forensics Europe Expo about authentication within a legal context, and some of the issues that are associated with that. Can you talk us through some of these challenges?

In the UK, we tend to believe what we see, especially from a prosecution point of view. Every single exhibit – every image – needs to be documented and referenced in a statement. And then that image will be presented.

When we get that image from the policing point of view, we do have a way of tracking that image and ensuring its chain of custody. For images taken by police officers, we have processes and regulations to ensure the integrity of the image is maintained. But what about images supplied by members of the public? What happened beforehand, before we got it? Is there any reason why a person would want to manipulate that image? Has anyone checked for signs of manipulation in that image? How can we identify whether there are signs of manipulation in that image?

For example, has it been submitted by the defendant’s mother? We laugh, but if the defendant’s mother says “Here is the image: my son couldn’t have done that, because here he is at a birthday party” – has anyone checked that image? Has anyone authenticated that image? If we can’t authenticate the image, why not? It’s usually because they don’t have the knowledge or the software to do it. Whereas now that capability is there, so the answer can be obtained. But at the moment, not enough people are even asking the questions.

What are some of the more common ways that images are manipulated?

It could be any way – there’s a million different ways. It’s usually adding something or taking something away from an image. It’s very easy in any free software to remove an object or to add an object or to change an object. To change the time or to change any meaning. It could be for any reason.

Where we’re sitting here in a restaurant now, we could change someone’s face, or we could take someone out and add someone else in, to say that they were there. We could change the time that this image was taken; we could change the place that this image was taken. As I said before, it’s very easy to actually change an image and change the context and the meaning of that image, but hiding the artefacts that are left behind is much harder, and you have to do a lot more work. And even if you have done that, even if you have hidden all the obvious traces of manipulation, it’s still very easy to say that this image has been modified in some way. You may be unable to say directly which parts, but you can say that it is not a camera original. That’s the big one. And if someone says it is, and they put that in their statement, and they’ve said that this is a true and accurate report, then you can prove otherwise.

In terms of making sure that this authentication happens, do you think it needs to come from the legal side of things, or do you think forensic practitioners need to be more aware, or both?

I think a bit of both. There’s a general belief at the moment that when we get an image or a video from CCTV, people believe straight away what they see, and they don’t ask the questions. And the biggest question is “Is there any reason why someone would want to change that image?”

So, as I said, from a prosecution point of view, if it’s come from a witness with no connection, there’s very little reason for them to change the image. The context and the story behind the actual image creation, the image generation, is very important. What was the story behind that image being taken? Because that story might not play out. If they say the image was taken by a witness who was just passing, OK, what are the chances that that’s been manipulated? Very little. If it aids in an alibi, it’s huge. If it aids in a prosecution, is there any reason why that person would want that particular person prosecuted? Have they got a grudge against that person?

I think one of the most frequently overlooked things in digital forensics is the human element: the reasons why people commit crimes and manipulate evidence. I think that’s often neglected.

I agree. When we’re dealing with image analysis, image enhancement and CCTV especially, we’re always told that we have to try and remove ourselves from the investigation. They use the term ‘blind to the context’ – we need to try to be completely unbiased and to minimize, as much as possible, any unconscious bias or cognitive bias. So, we have to make ourselves blind to the context.

I agree that, when we’re trying to answer a question like what is the license plate, or what is this, or what did this person do, or is this the same as this, or is this this person, yes, we need as little information as possible. We only need the information that we need in order to answer that question.

But when it comes to image authentication, we need to ask what the history of that image is, because then you’ve got a chain to follow. If Mrs. Smith said she took the image at this point, with her camera, which is this type of camera, you’ve got some questions then that you can ask. Does that image come from that camera? Can we have that camera? No, we can’t. Why not? Because you’ve lost it. Well, can we find any other images online that were taken with that exact model of camera? Because then they’re going to have the same compression type, so then we can start comparing those. If you’ve got none of that information, there’s nothing to work on. But yes, you have to ask yourself why they’ve created the image, and then you’ve got a target to work towards.
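As a rough illustration of the compression comparison David describes, the Python sketch below reads the JPEG quantization tables that Pillow exposes and checks whether a questioned exhibit shares them with a sample image from the same camera model. The file names are hypothetical, and this is only a sketch of the general idea, not Amped Authenticate’s method.

```python
# Illustrative only: compare JPEG quantization tables between a questioned
# exhibit and a sample image believed to come from the same camera model.
# File names are hypothetical placeholders.
from PIL import Image

def quant_tables(path):
    """Return the JPEG quantization tables Pillow exposes (a dict mapping
    table id to coefficients), or None if the file has none."""
    with Image.open(path) as img:
        return getattr(img, "quantization", None)

reference = quant_tables("same_model_sample.jpg")   # image found online
exhibit = quant_tables("questioned_exhibit.jpg")    # the supplied image

if reference is not None and exhibit is not None:
    print("Quantization tables match:", reference == exhibit)
```

Matching tables only show that the compression settings are consistent with that camera model; they do not prove the image came from one specific camera, which is where the noise-profile approach mentioned later in the interview comes in.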

What do you think the future is for image authentication?

In the UK, there is an inference that a digital video recorder, for example, is working correctly until it’s proven otherwise. And that’s what’s happening with images; there is an inference that it’s a true image.

There’s an ongoing investigation in Italy where a scientist at the University of Naples has been accused of manipulating images to fit his study findings. If we’re finding that images are being manipulated and distorted to meet a certain point of view in research and in universities, and we’re also finding it in the press and in politics, and we’re finding it in every other walk of human life, why is it not being questioned from a legal perspective? And I would say that it is, it’s just that no one’s ever identified it. And that’s what we’ve got to do, we’ve got to start asking the question. And it has to be right from the start.

And so, when we’re talking about where the future lies – the future is asking the questions. Is that an authentic image? Can I find any signs of manipulation in that image? Is its integrity correct? Has the image come from that camera? We can provide the facts and then leave it up to the court to decide.

I never say that it’s an authentic image. I say that I haven’t been able to find any signs of manipulation, and that it is a camera original, or that it’s not a camera original. That’s because not every form of image manipulation is malicious. For example, someone might have just increased the brightness and that might not change the story. So, we can explain that bit. Changing it, you know, putting other people in the scene; even cropping an image, to say someone wasn’t there – all of those things, we need to now start saying, well what’s changed in the image and how much can we rely on it?

Is there someone somewhere in the world who is sitting in prison because of a manipulated image? I don’t know. It’s interesting. I think no one’s ever checked really because, as we said right at the start of this, up until a couple of years ago it really was just the scientists and the professors in digital image processing, and the real experts in compression techniques, who were identifying these manipulated images. They were reading very technical papers on how to identify compression and reprocessing and signs of manipulation and artefacts. Whereas now we’ve got a piece of software that does it. It’s exciting times.

It’s only a matter of time before someone asks that question and someone finds such a case, and then people might start saying, well, why haven’t we been doing this before? I got my first digital camera in about 1996 or 1997. Photoshop has been around for over 25 years. So how long have we been able to manipulate images? A long time.

I think that’s the point for the future: the ability is there, and the ability is there with a piece of software and a few days’ training. And the more that you see and the more that you do, the better you become.

Last night, I was preparing for today, and I went out to the street and took a picture of the street, with the street sign, and then I took 25 images with my phone. I tweaked the image of the street and removed the street sign. I then ran the tweaked image through Amped Authenticate and found the signs of where the street sign had been removed, as clear as day. All the other artefacts were there as well – the signs that it had gone through Photoshop, and all these other bits – and it took me 15 minutes to find what had been changed in the image and also to link that image to my phone.

So it doesn’t even have to be a big drain on time and resources.

Not at all. Because we’ve got a lot of automation in the software now, you can run through 81 different configurations, and all the filters, in the time it takes to make yourself a cup of coffee. You just have to learn what you’re looking at. How can I interpret the data that’s in front of me? And that comes through the trainer. Software gives you the data; understanding the data is through training and experience. Same as every other field.

Which brings us back to your job!

Yes!

Is there anything else you’d like to add?

I think for your readership: digital images are coming in all the time. Especially in the computer crime units, they’re finding thousands and thousands of images. There’s an untapped intelligence tool there.

You have all these cameras that are seized. Has anyone taken the noise profiles of all those cameras? It’s so easy to do! It takes 25 images to get a noise profile of that camera, and then you can search all those seized images against that noise profile, to say, well, hang on a minute, we’ve now got a link between that image there and that camera there. Going back to your human element, that then gives you a human link between those two people: the camera owner and the person who had the image. That gives you a new lead to follow, and it doesn’t take long to do.
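For readers curious what building and searching a noise profile can look like, here is a deliberately simplified Python sketch of the idea: average the noise residuals of a set of reference shots from the seized camera, then correlate a questioned image’s residual against that profile. Real sensor-noise analysis uses far more careful denoising and statistics, and this is not Amped Authenticate’s implementation; the file names are hypothetical and all images are assumed to be the same size.

```python
# Simplified illustration of camera noise-profile matching (not Amped's method).
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def noise_residual(path):
    """Image minus a smoothed copy: a rough estimate of the sensor noise."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return img - gaussian_filter(img, sigma=2)

def camera_profile(paths):
    """Average the residuals of the reference shots (e.g. 25 of them)."""
    return np.mean([noise_residual(p) for p in paths], axis=0)

def similarity(profile, path):
    """Normalised correlation between the profile and a questioned image."""
    p = profile - profile.mean()
    r = noise_residual(path)
    r = r - r.mean()
    return float((p * r).sum() / (np.linalg.norm(p) * np.linalg.norm(r)))

# Hypothetical usage: a higher score suggests the questioned image came from
# the profiled camera, but any threshold would need proper validation.
# profile = camera_profile([f"seized_camera_{i:02d}.jpg" for i in range(25)])
# print(similarity(profile, "questioned.jpg"))
```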

And the API as well: with a link to whatever piece of software is used to extract these images, you can run the batch API on all of those images, come back in the morning, and you’ve got a list of which images are camera original and which aren’t, so you know which ones to look at first. And it’s done for you; you’ve just got to have the software and think about the image authentication side, and how it’s going to help your investigation.
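As a sketch of that overnight triage idea, with the analysis call left as a hypothetical placeholder rather than any real batch API, the workflow might look roughly like this:

```python
# Hypothetical triage loop: walk an export folder, run an authentication
# check on each image, and write a report sorted so that suspected
# non-originals are reviewed first. The check itself is a placeholder.
import csv
from pathlib import Path

def looks_camera_original(path):
    """Placeholder: hook up your authentication tool here and return
    True, False, or None if the check could not be run."""
    return None

def triage(export_folder, report="triage.csv"):
    rows = [(str(p), looks_camera_original(p))
            for p in sorted(Path(export_folder).rglob("*.jpg"))]
    priority = {False: 0, None: 1, True: 2}   # suspected non-originals first
    rows.sort(key=lambda row: priority[row[1]])
    with open(report, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["image", "camera_original"])
        writer.writerows(rows)
```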

So much of it is about thought process. It’s not even training people to do things, it’s training them to ask the questions in the first place, and then you can work out how to do it.

That’s exactly right. And I think it is going to take a long time for people to do it naturally, and to do it every single time. As a starting point, we need people to ask “Is there any reason why someone would want to manipulate that image? And why?” Sometimes you’ll find there’s no reason why someone would want to manipulate the image, but sometimes it’s a bit obvious. For example, if it’s going to help them either from a prosecution or defence point of view. And then you’ve got to start thinking, well OK, let’s check it.

Forensic Focus interviewed David Spreadborough at Forensics Europe Expo in London, in May 2017.

About David Spreadborough

David served as a UK Police Officer for 24 years, the final 12 of which were spent as a CCTV investigator. He was the first LEVA-certified Forensic Video Analyst in Europe and remains one of only four outside of North America.
Since joining Amped Software, David has played a key role in the development of the company’s technical training, as well as spreading his passion for jurisprudence reform through the latest technological innovations.

He is still a practicing forensic video analyst and has frequently been called as an expert witness to assist legal teams and law enforcement with ongoing criminal investigations.

About Amped Software

Amped Software develops the leading global solutions for all image and video processing needs relating to forensics, investigations, public security, and intelligence. With an emphasis on the transparency of the methodologies used, Amped solutions empower customers with the three main principles of the scientific method: accuracy, repeatability, and reproducibility.

Amped Authenticate is the leading software for forensic image authentication and tamper detection on digital images. It provides a suite of powerful tools to determine whether an image is an unaltered original or the result of manipulation with photo editing software. Amped Authenticate also provides camera ballistics tools to verify the camera used to shoot the image. For more information, visit: ampedsoftware.com/authenticate.
