
Locard's Exchange Principle

keydet89
(@keydet89)
Posts: 3568
Famed Member
Topic starter
 

This morning, I responded to another post and included the link below

Locard's Exchange Principle
http://www.profiling.org/journal/vol1_no1/jbp_ed_january2000_1-1.html

Having re-read this article in light of posts I've seen in several public forums, IMHO it makes a lot of very important points.

It's a good read…take a look. It doesn't take a great leap to transition the concepts to the digital realm. In most cases, I replaced "physical" with "digital", and "crime" with "event".

When investigating an event, evidence can be very transient. Evidence left behind as a result of Locard's Exchange Principle can be volatile and fade with time (i.e., network connections, process memory, NetBIOS name table entries, ARP cache entries, etc.), or simply be destroyed and lost under the purist approach to forensics (i.e., take the system down and image it).
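To put a rough shape on the "volatile and fades with time" point, here's a quick Python sketch of grabbing the most volatile items first and documenting when each was collected. The tool names and log format are just for illustration (this isn't how the FSP does it); in practice you'd run trusted copies of the binaries from your response CD, not the ones on the box.

# Sketch: collect volatile data in rough order of volatility, logging
# each command's output with a timestamp and an MD5 hash of the output.
import hashlib
import subprocess
import time

COMMANDS = [
    ("network_connections", ["netstat", "-an"]),
    ("arp_cache",           ["arp", "-a"]),
    ("netbios_names",       ["nbtstat", "-n"]),
]

def collect(out_path="volatile.log"):
    with open(out_path, "w") as log:
        for label, cmd in COMMANDS:
            ts = time.strftime("%Y-%m-%d %H:%M:%S")
            try:
                output = subprocess.run(cmd, capture_output=True,
                                        text=True, timeout=60).stdout
            except FileNotFoundError:
                output = "(tool not found)\n"
            digest = hashlib.md5(output.encode()).hexdigest()
            log.write(f"[{ts}] {label} ({' '.join(cmd)}) md5={digest}\n")
            log.write(output + "\n")

if __name__ == "__main__":
    collect()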

This needs to be considered alongside Heisenberg's Uncertainty Principle in the digital realm, as well. In "Forensic Discovery", Dan Farmer and Wietse Venema discuss uncertainty, in part, on page 6:
"Our general philosophy recommends greater understanding instead of higher levels of certainty, which could potentially make such methodology more suspect in a court of law. Paradoxically, however, the uncertainty - primarily in the data collection methods - can actually give a greater breadth of knowledge and more confidence in any conclusions that are drawn."

To me, this makes perfect sense. Consider live response…say a methodology is developed, like the Forensic Server Project (FSP) [1]. The investigator runs multiple tests to determine and document the changes made to a system by the methodology prior to using it. On Windows XP, for instance, you'll see not only changes to system memory, but files may be added to the Prefetch directory, entries may be written to the Event Log if the appropriate audit settings are enabled, etc. With this documented, the investigator then uses the tools to extract volatile data prior to imaging the system.
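Documenting that kind of footprint doesn't have to be elaborate. As a rough sketch (Python, with the XP default Prefetch path shown purely as an example), you can diff a before/after listing around a test run of the tool:

# Sketch: snapshot the Prefetch directory before and after a test run
# of a response tool, to document the tool's own footprint.
import os

PREFETCH = r"C:\Windows\Prefetch"

def snapshot(path=PREFETCH):
    return {name: os.path.getmtime(os.path.join(path, name))
            for name in os.listdir(path)}

before = snapshot()
input("Run the tool under test, then press Enter... ")
after = snapshot()

for name in sorted(set(after) - set(before)):
    print("added:   ", name)
for name in sorted(n for n in before if n in after and after[n] != before[n]):
    print("modified:", name)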

At this point, can the volatile data collected be considered evidence? Since the tools are burned to CD, a copy of the CD can be made should it be required for disclosure. The collected evidence can be released as well.
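One way to back up the "copy of the CD" point is a hash manifest of the tool set, so anyone can verify that a disclosed copy is identical to what was actually run. A minimal sketch (the drive letter is illustrative):

# Sketch: build a hash manifest of the tools on the response CD, so an
# identical copy can be produced and verified if disclosure is required.
import hashlib
import os

CD_ROOT = "D:\\"

with open("tool_manifest.txt", "w") as manifest:
    for root, _dirs, files in os.walk(CD_ROOT):
        for name in sorted(files):
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            manifest.write(f"{digest}  {path}\n")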

This is definitely worth considering. Taking the requirement for legal proceedings out of the equation, it should quickly become clear how valuable such a methodology would be for incident verification and identification.

Harlan

[1] http://www.windows-ir.com/fsp.html

 
Posted : 15/11/2005 7:04 pm
hogfly
(@hogfly)
Posts: 287
Reputable Member
 

> This morning, I responded to another post and included the link below

> When investigating an event, evidence can be very transient. Evidence left behind as a result of Locard's Exchange Principle can be volatile and fade with time (i.e., network connections, process memory, NetBIOS name table entries, ARP cache entries, etc.), or simply be destroyed and lost under the purist approach to forensics (i.e., take the system down and image it).

> This needs to be considered alongside Heisenberg's Uncertainty Principle in the digital realm, as well. In "Forensic Discovery", Dan Farmer and Wietse Venema discuss uncertainty, in part, on page 6:
> "Our general philosophy recommends greater understanding instead of higher levels of certainty, which could potentially make such methodology more suspect in a court of law. Paradoxically, however, the uncertainty - primarily in the data collection methods - can actually give a greater breadth of knowledge and more confidence in any conclusions that are drawn."

> At this point, can the volatile data collected be considered evidence? Since the tools are burned to CD, a copy of the CD can be made should it be required for disclosure. The collected evidence can be released as well.

The short answer is absolutely. It's evidence. Admissibility in a case is another question. The validity of the evidence must be proven with other types of evidence collected. In fact, other types of evidence can help bolster the theories developed by the analyst/criminologist (for instance, correlating the contents of physical memory, the ARP table, and network data collected from an intermediary).
The key is following the evidence. While it is true that we affect everything we touch, it is also true that in the digital world everything we do leaves traces. It's a matter of identifying those traces and proving their validity as worthwhile evidence.
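As a rough sketch of that correlation idea (file names are illustrative, and this is deliberately simplistic), cross-checking an indicator such as an IP address across three independently collected sources might look like:

# Sketch: cross-check an IP address across strings pulled from a memory
# dump, saved ARP cache output, and a host list from a network capture.
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def ips_in(path):
    with open(path, errors="ignore") as f:
        return set(IP_RE.findall(f.read()))

memory  = ips_in("memdump_strings.txt")
arp     = ips_in("arp_cache.txt")
network = ips_in("capture_hosts.txt")

# An address that shows up in all three sources is corroborated by
# independent types of evidence.
for ip in sorted(memory & arp & network):
    print("corroborated:", ip)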

> This is definitely worth considering. Taking the requirement for legal proceedings out of the equation, it should quickly become clear how valuable such a methodology would be for incident verification and identification.

It's not only worth considering, it's something that should be implemented by anyone who writes their own tools and conducts live forensic investigations. As a matter of course, all developers should document the expected results (i.e., the system modifications) of using their tools. A sketch of what checking against such documentation might look like is below.
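For example (Python, with entirely hypothetical tool and path names), a tool could ship with a documented list of its expected modifications, and a test run would flag anything observed outside that list:

# Sketch: verify a test run against the tool's documented footprint.
import fnmatch

EXPECTED = [
    r"C:\Windows\Prefetch\MYTOOL.EXE-*.pf",   # hypothetical tool name
    r"C:\temp\collected\*",                   # hypothetical output dir
]

# Paths added or changed during the test run (e.g., from a before/after
# snapshot like the one earlier in the thread).
observed = [
    r"C:\Windows\Prefetch\MYTOOL.EXE-1A2B3C4D.pf",
    r"C:\Windows\System32\unexpected.dll",
]

for path in observed:
    if not any(fnmatch.fnmatch(path, pat) for pat in EXPECTED):
        print("undocumented modification:", path)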

 
Posted : 16/11/2005 3:23 am
keydet89
(@keydet89)
Posts: 3568
Famed Member
Topic starter
 

Hogfly,

Excellent response…how many people out there are testing their tools? How many folks download a third-party tool to use as part of IR or forensic analysis procedures, and do so much as dump its import table (for PE files)? How many run the tools on test systems with monitoring tools to determine what they do?
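Dumping the import table is a low bar, too. A minimal sketch using the third-party pefile module (pip install pefile; the file name is illustrative):

# Sketch: dump the import table of a downloaded tool before trusting it.
import pefile

pe = pefile.PE("downloaded_tool.exe")
for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
    print(entry.dll.decode())
    for imp in entry.imports:
        name = imp.name.decode() if imp.name else f"ordinal {imp.ordinal}"
        print("    " + name)

Even that quick look tells you whether a "read-only" tool imports functions for writing files or opening network connections.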

Locard's applies not only to remote attacks on a system, but also to local interaction with the system at the console, whether by a user or an investigator.

Harlan

 
Posted : 26/11/2005 5:16 pm