Thoughts on testing tools


Re: Thoughts on testing tools

Posted: Fri May 19, 2017 6:47 am

I'll be honest, this discussion has opened my eyes and not gone the way I expected - or maybe I should have expected this!

Willbarton - I suppose in some cases, for lab certification, yes, but now I am thinking more just in general. Would it be enough to just examine the process and trust the tool?

Minime: I suppose untested tools are really only going to pose a risk of missing evidence rather than adding it, as providing the physical locations of carved data always allows for checking and manual validation. However (in an ideal world...), could we 100% validate the tool, say for carving, and remove any time-consuming manual validation processes? Just a thought - one which is likely not possible.
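
To make that concrete, something like the small sketch below is the kind of manual check those physical locations enable (illustrative only - the image name and offsets are made up, and it assumes the tool reports a byte offset for each carved JPEG):

Code:
# Spot-check a carving tool's report: does a JPEG header really sit at
# the byte offset the tool claims it carved the picture from?
JPEG_MAGIC = b"\xff\xd8\xff"

def verify_carved_jpeg(image_path, offset):
    """Return True if a JPEG signature exists at the reported offset."""
    with open(image_path, "rb") as disk_image:
        disk_image.seek(offset)
        return disk_image.read(len(JPEG_MAGIC)) == JPEG_MAGIC

# offsets = [0x1A400, 0x2BC800]   # values taken from the tool's report
# print(all(verify_carved_jpeg("evidence.dd", o) for o in offsets))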

I suppose, given that it is us who do the interpretation and the tool just provides data, a misinterpretation of data is likely going to be down to us in most instances, and missing data down to the tool.

tootypeg
Senior Member
 
 
  

Re: Thoughts on testing tools

Posted: Fri May 19, 2017 7:19 am

- tootypeg

Minime: I suppose untested tools are really only going to pose a risk of missing evidence rather than adding it, as providing the physical locations of carved data always allows for checking and manual validation. However (in an ideal world...), could we 100% validate the tool, say for carving, and remove any time-consuming manual validation processes? Just a thought - one which is likely not possible.


Validation will only tell you what a tool does and doesn't do. Have a read of the reports on the NIST CFTT site for mobile extraction and carving. These tools have anomalies, but you could report them to the vendor to try and get them fixed.
But ultimately a manual review is always going to be required. As an example, say you validate UFED and find that it never pulls emails off an iOS device which isn't jailbroken (so most of them). You know this, so you manually review emails on iOS devices if relevant to your examination.
You've still validated the tool/process, but you can't eliminate the other side.

minime2k9
Senior Member
 
 
  

Re: Thoughts on testing tools

Posted: Fri May 19, 2017 4:19 pm

- tootypeg
1. We cannot effectively test tools in DF


Nonsense. "We" can and should test tools. Some tests may be more difficult to perform than others, sure. But until we have the test requirements formulated, it is very difficult to say anything about 'effective'. There are such things as impossible test conditions: we should not allow those to influence our view of what can be done.

Even a 'compare the results from tool X with the competition' is a test, although it is usually highly informal, based on confused thinking, and so generally worthless, except in very special circumstances, in which case it can be perfectly valid.

By 'we' I don't mean each and every one of us. But 'we' as a community need to do so. It would be hell if each and every one of us had to check every purchased piece of electrical equipment for safety -- but we don't. Underwriters Laboratories and similar organizations exist for a reason.

2. We are reliant on vendors to produce accurate tools - however even they may not be able to fully test them.


We are lazy enough to let vendors get away with sub-par tools. Some years ago a Guidance customer asked about some particular time stamp of some file system or archive file format in their support forum. After some investigation Guidance replied that the timestamp was incorrect: they had mixed two time stamps up, and labelled one as the other. That's a rather dramatic failure of quality assurance.

Compare that situation with Medical IT, particularly with life-preserving equipment.


Surely we are the only branch of forensics where it is acceptable to use tools which have not been effectively calibrated.


Until we have a) an error profile of a particular tool or measuring device (possibly even in a particular situation), b) evidence that the error profile is unacceptably large, and c) the means for reducing it, there is no point in talking about calibration. The purpose of calibration is to reduce errors in results. So ... what errors do we know exist?

And also, this is relevant mainly in situations where the information isn't self-evident by other means. We do have some situations where we use tools as 'measurement devices' -- I regard timestamps as one such area -- or as 'Geiger counters' -- e-discovery comes to mind -- but we also have areas where error simply won't survive for long enough, given a particular method/process. In those, calibration may be meaningless.

But we may have use for it in areas we don't expect: even classification of IIOC is subject to error profiles, and thus possibly to calibration: does a particular analyst classify images 'correctly', or are there errors? Random errors? Systematic errors? And are the errors acceptable or not? If not, calibration is probably a question of training, rather than dialing some mental knob up or down.
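
(A minimal sketch of what I mean by an error profile, assuming we have a corpus with known ground truth to run the tool or analyst against - the names here are placeholders, not any real tool's API:)

Code:
# Derive a crude error profile by comparing reported artefacts against
# a ground-truth corpus: misses are one kind of error, inventions another.
def error_profile(reported, ground_truth):
    reported, ground_truth = set(reported), set(ground_truth)
    missed = ground_truth - reported      # random? systematic? acceptable?
    invented = reported - ground_truth    # false positives
    return {
        "miss_rate": len(missed) / max(len(ground_truth), 1),
        "false_positive_rate": len(invented) / max(len(reported), 1),
    }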

We have had situations where traditional forensic analysts claim a 0 error rate for, say, fingerprint identification analysis on the one hand, and we have the Brandon Mayfield mess on the other. "Error profile? No, no error rate whatsoever."

We're not different in digital forensics.

But we're lucky in one respect. "We" (probably) don't have the individual calibration problem that lab assistants do: where each measuring device has its own error profile, dependent, say, on local temperature, barometric pressure, or local water quality, so that every lab needs to calibrate its own equipment. Instead, many of our errors tend to be reproducible within (say) a particular software release. So we (probably) only need one test, and then we're done. (Unless we're looking for the error profile of the testing method used for that test.)
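
(Again only a sketch, with hypothetical names: one deterministic check per tool release against a reference image, instead of per-lab calibration:)

Code:
# If errors are reproducible within a release, a single deterministic
# regression check per release can stand in for per-lab calibration.
EXPECTED = {
    ("ExampleTool 7.1", "reference.dd"): "2017-01-01 00:00:00+00:00",
}

def regression_check(tool_version, image, parse_timestamp):
    observed = str(parse_timestamp(image))     # the tool under test
    expected = EXPECTED[(tool_version, image)]
    assert observed == expected, f"{tool_version} regressed on {image}"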

Last edited by athulin on Fri May 19, 2017 4:50 pm; edited 1 time in total

athulin
Senior Member
 
 
  

Re: Thoughts on testing tools

Posted: Fri May 19, 2017 4:42 pm

- tootypeg
I suppose, given that it is us who do the interpretation and the tool just provides data, a misinterpretation of data is likely going to be down to us in most instances, and missing data down to the tool.


You seem to be generalizing here, but you can't really do so.

One of the most basic 'datum' that computer FAs work with is, I believe, the time stamp. How often do you 'interpret' a time stamp? Rarely, if ever. You let whatever tool you're using do that interpretation for you. This appears to be an area where normal testing and validation would be desirable.
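
(To illustrate what that silent interpretation actually involves, a sketch decoding a raw Windows FILETIME - the 64-bit count of 100-nanosecond intervals since 1601-01-01 UTC. A handful of known raw values like this make ready-made test cases for any tool:)

Code:
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(raw_value):
    """Decode a FILETIME: 100 ns ticks since the 1601 epoch."""
    return FILETIME_EPOCH + timedelta(microseconds=raw_value // 10)

print(filetime_to_datetime(131277024000000000))  # 2017-01-01 00:00:00+00:00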

There are similar areas, though perhaps less obvious, where some internal binary datum is translated or interpreted, and where correct translation/interpretation is desirable to assure. One may be validation of hash algorithms: unless tool X implements MD5 or SHA-1 or whatever correctly, any reliance on hashes is uncertain. Yet, has such validation taken place? Show me the test protocol.
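
(The simplest possible test protocol here is checking the implementation against published test vectors - these two are the well-known RFC 1321 and FIPS 180 values for the string "abc":)

Code:
import hashlib

KNOWN_VECTORS = {
    "md5":  ("abc", "900150983cd24fb0d6963f7d28e17f72"),
    "sha1": ("abc", "a9993e364706816aba3e25717850c26c9cd0d89d"),
}

for algorithm, (message, expected) in KNOWN_VECTORS.items():
    digest = hashlib.new(algorithm, message.encode()).hexdigest()
    assert digest == expected, f"{algorithm} failed its published test vector"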

Another is the use of reserved fields. ISO 9660, for example, has file attributes that contain two reserved bits that are currently unused. If one of those bits suddenly is set in a particular ISO image ... is that of forensic interest? Difficult to say, but I would certainly want to be told about it so that I can evaluate it myself. So ... will tool X tell me? No? Hm ...
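
(Sketch of the check I'd want a tool to perform: in an ECMA-119 / ISO 9660 directory record, bits 5 and 6 of the file-flags byte are reserved and should be zero:)

Code:
RESERVED_FLAG_BITS = 0b0110_0000   # bits 5-6 of the ISO 9660 file flags

def reserved_bits_set(file_flags):
    """True if a directory record sets reserved file-flag bits."""
    return bool(file_flags & RESERVED_FLAG_BITS)

print(reserved_bits_set(0b0010_0010))  # directory bit plus reserved bit 5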

Or if a highly flexible archive format (say ZIP) suddenly shows a file that has attributes of type '7F' ... but the highest known format is '32'. Also something I'd like to be told about. Not likely to happen, but if it did happen, I wouldn't want to miss it.
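
(In the same spirit, a sketch - and an assumption on my part that a field like the compression method is the kind of 'attribute' meant - that walks an archive and flags any method ID outside the set listed in the PKWARE APPNOTE:)

Code:
import zipfile

# Method IDs from the PKWARE APPNOTE (a partial set: stored, deflate,
# deflate64, bzip2, lzma, zstd, xz, ppmd, AE-x marker).
KNOWN_METHODS = {0, 8, 9, 12, 14, 93, 95, 98, 99}

def flag_unknown_methods(path):
    with zipfile.ZipFile(path) as archive:
        for entry in archive.infolist():
            if entry.compress_type not in KNOWN_METHODS:
                print(f"{entry.filename}: unknown method {entry.compress_type:#x}")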

athulin
Senior Member
 
 
  

Re: Thoughts on testing tools

Posted: Fri May 19, 2017 4:57 pm

Interesting points Athulin. I think I am going to write some content up formally and post it for scrutiny, trying to summarise the issues etc.

I think I am back to square one in terms of what I thought about this area now.

Out of curiosity, do you think we can continue on our current trajectory in terms of how we utilise tools, or do you think there will come a time when we need a major overhaul?

tootypeg
Senior Member
 
 
  

Re: Thoughts on testing tools

Posted: Fri May 19, 2017 6:00 pm

- tootypeg
Out of curiosity, do you think we can continue on our current trajectory in terms of how we utilise tools, or do you think there will come a time when we need a major overhaul?


I'm from the IT world, and have worked with software development (in a field where validation occasionally was *extremely* important), user interface design, system management, software testing, security, ISO 9001, incident response etc.

I've seen a lot of poor work, on the part of a tool maker, as well as on the part of the tool user. Quality assurance seems to loom very high in my mental approach to my work -- possibly and probably higher than I can sustain.

When I started in computer forensics, I thought most of the things I got stuck on were just due to my unfamiliarity with the subject area. At present, I'm afraid I consider computer forensics as a field not clearly much better than junk science. Note, "as a field". There are exceptions, both good and bad. But I'm somewhat of a pessimist, too.

Some part of that problem is due to tools: poor tools, confusing tools, bad tools. Murphy lives, is almost certainly employed at one of the major forensic tool makers, and is busy ensuring that his law remains valid. In a field as sensitive as computer forensics, closely related to at least some kind of ideal of justice, we shouldn't allow him to get away with it.

Add to that my relatively late discovery of "Strengthening Forensic Science in the United States", Brandon Garrett's "Convicting the Innocent", and, just the other week, David Harris's "Failed Evidence", all of which give pretty hard kicks to such ideals. It's not "there but for the grace of god go I", but rather "there, in a couple of years, I too will be, along with everyone else in this particular field".

Tool testing is only one of several things that need to be done. We must stop trusting software nerds (of which I am one) to do the right thing. Heck, software engineering was invented in the late 1960s precisely because software nerds didn't get things right. Tool testing is the one thing that appeals most to me, mainly because of my background. (Proofreading also appeals to me ... probably for the same reasons.)

Yes, some kind of corrective measures are needed.

And while I think, in general, that certification could be a help, I also think that Harris's analysis of 'Why Law Enforcement Resists Science' applies in fairly large measure to computer forensics lab certification as well.  

athulin
Senior Member
 
 
  

Re: Thoughts on testing tools

Posted: Sat May 20, 2017 9:15 am

- athulin

One of the most basic 'datum' that computer FAs work with is, I believe, the time stamp. How often do you 'interpret' a time stamp? Rarely, if ever. You let whatever tool you're using do that interpretation for you. This appears to be an area where normal testing and validation would be desirable.

Only to provide a practical example: here is a recent discussion where it can be appreciated how what should be a simple, common-knowledge, non-debatable topic - tested and verified by n independent researchers and validated - namely timestamps on NTFS Windows systems, can easily turn out to be an open can of worms:
www.forensicfocus.com/...c/t=15034/

- minime2k9

Validation will only tell you what a tool does and doesn't do.


I would slightly rephrase that as:
Validation will only tell you what a tool can do and cannot do (implied "in the right hands" or "with the right procedure" and, more than anything else, "in a given, specific scenario").
The real issue is the zillion possible scenarios.

Athulin's mention of the zip format is a very good one. I remember many "quirks" with either "not fully correct" or "only slightly non-standard" zip files and various uncompressing tools. As an example, for a certain period of time 7-Zip reported zip archives downloaded via the Wayback Machine as "invalid" (each needed a single 00 byte appended to it). More here (uncompressing old floppy images nightmares), still as a practical example:
reboot.pro/topic/12255...al-floppy/
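
As a sketch of how one can look at such a quirk directly (illustrative only): the end-of-central-directory record is what tools hunt for at the tail of a ZIP, so simply reporting where - or whether - its signature appears tells you whether a rejection says more about the file or about the tool:

Code:
EOCD_SIGNATURE = b"PK\x05\x06"

def locate_eocd(path):
    """Return the offset of the end-of-central-directory record, or None."""
    with open(path, "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        f.seek(max(0, size - 65557))   # EOCD size + maximum comment length
        tail = f.read()
    hit = tail.rfind(EOCD_SIGNATURE)
    return None if hit == -1 else size - len(tail) + hit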


jaclaz
_________________
- In theory there is no difference between theory and practice, but in practice there is. - 

jaclaz
Senior Member
 
 
