Tool Vendors and Testing (an open study)

3 Posts
3 Users
0 Likes
688 Views
(@tootypeg)
Posts: 173
Estimable Member
Topic starter
 

Bit of a curious one, I assume vendors operate on here.

I wonder if there would be the possibility of engaging with any who might be willing to take part in a tool-testing study which tests tools against each other (comparable functions etc.) to evaluate performance? This is very vague at the moment, but let's assume:

Vendors A, B & C - tested against functions D, E & F (all of which perform the same task) - and the results jointly reported?

Would any be interested in collaborating on these activities? And if not, what barriers might there be?

 
Posted : 18/12/2019 8:01 pm
Passmark
(@passmark)
Posts: 376
Reputable Member
 

If it was a reputable agency running the tests, then I am sure some would be happy to take part.

Except for the really basic functions, I think an apples-to-apples comparison would be difficult.

Example of basic function
================
Image a hard drive to make an E01 disk image.
But even this could be problematic if the levels of compression were different, if there was a verification step or extra hashing performed by one of the tools, or if there were differences in the handling of disk errors.
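One way to make even that basic comparison tool-neutral is to hash the acquired data itself rather than the E01 containers, so compression settings and container metadata drop out of the comparison. A minimal sketch in Python (the file names are hypothetical, and it assumes each tool can export or convert its image to a raw dd file):

import hashlib

def hash_image(path, chunk_size=1024 * 1024):
    # Hash a raw (dd) image in fixed-size chunks so large images
    # never need to fit in memory.
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

# If both tools read the same source without errors, the digests of the
# decompressed payloads should match, whatever compression or extra
# hashing each tool applied inside its own E01 container.
print(hash_image("tool_a_export.dd"))   # hypothetical export from vendor A
print(hash_image("tool_b_export.dd"))   # hypothetical export from vendor B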

Example of high-level function
================
Index the files on the hard drive, search for files containing 50 different words and phrases, then export the results to CSV.
This is really hard to keep consistent. What file types are indexed? Is OCR performed? Are you testing in a low-RAM environment, or on hardware with a large number of CPU cores? Is string extraction done on binary files? Is the indexing recursive (a PDF in a Zip in a Zip in a PST)? What disk image format was used? What is the mix of file types? Is unallocated space on the drive indexed?
In short, there are dozens of variables and permutations.
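To see how many of those decisions are usually implicit, here is a deliberately naive keyword-search sketch in Python (the root path and keyword list are placeholders). Every shortcut it takes - no OCR, no archive recursion, no unallocated space, UTF-8 decoding with errors ignored - is a variable a real indexing tool resolves differently:

import csv
import os

KEYWORDS = ["invoice", "password"]  # placeholder stand-ins for the 50 words and phrases

def naive_keyword_search(root, keywords, out_csv):
    # Walk a directory of exported files, flag any file containing a keyword,
    # and write the hits to CSV.
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["file", "keyword"])
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        text = f.read().decode("utf-8", errors="ignore").lower()
                except OSError:
                    continue  # unreadable file: yet another behaviour tools differ on
                for kw in keywords:
                    if kw.lower() in text:
                        writer.writerow([path, kw])

naive_keyword_search("/mnt/exported_files", KEYWORDS, "hits.csv")  # hypothetical paths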

 
Posted : 18/12/2019 11:29 pm
(@trewmte)
Posts: 1877
Noble Member
 

Prof. Buchanan and team from Napier University published a paper on

Evaluating Digital Forensic Tools (DFTs)

6 Conclusion
This paper has outlined evaluation and validation methodologies, where some of these are too complex to be used by digital forensics investigators, such as Carrier's abstraction layers model [19], and others do not cover all aspects of the tools [32]. Of all of them, none has been implemented in a way that enables automation of the validation process. This means that testing may need to be performed manually. This is obviously an issue, as it takes away a significant amount of time from investigators.

Beckett’s [3] methodology can be used to define the requirements to validate digital forensics functions. This is a good methodology which covers all aspects in the definition of the validation process. However, the methodology does not cover the actual implementation of the validation process. Therefore, another methodology is needed. A good candidate is the methodology of Wilsdon [13] based on black-box testing.

https://www.napier.ac.uk/~/media/worktribe/output-178532/flandrinpdf.pdf
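On the point that none of the methodologies automates the validation process: this is not what the paper implements, only a minimal sketch of what such automation could look like, where each tool function is run against a fixed reference image and its output compared to a known-good result (tool names, command lines, and digests below are placeholders):

import hashlib
import subprocess

# Placeholder test cases: each maps a tool invocation against a reference
# image to the expected SHA-256 of the file it should produce.
CASES = [
    {
        "tool": "tool_a",
        "cmd": ["tool_a", "--carve", "reference.dd", "-o", "a_out.bin"],
        "output": "a_out.bin",
        "expected_sha256": "<known-good digest goes here>",
    },
]

def sha256_of(path):
    # Stream the file so large outputs never need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def run_cases(cases):
    for case in cases:
        subprocess.run(case["cmd"], check=True)  # run the tool under test
        verdict = "PASS" if sha256_of(case["output"]) == case["expected_sha256"] else "FAIL"
        print(case["tool"], verdict)

run_cases(CASES)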

 
Posted : 19/12/2019 9:08 am