
Imaging technique using a NAS

13 Posts
6 Users
0 Likes
3,532 Views
(@d1m4g3r)
Posts: 28
Eminent Member
Topic starter
 

Good day everyone. Just to give a quick prelude to my question: we currently image multiple desktops/laptops on client sites using a custom-built forensic Linux distro that lets us create EnCase-compatible images of the target machine. The way we do it now, we attach an external HD to each system and perform the imaging individually.

I would like to know how I can image using a NAS drive. Say I got a NAS of about 10 TB: would I be able to image about 8 individual systems to that same NAS simultaneously, and would I still achieve the same performance as when I was using an individual HD for each of those target machines?

Another question is about storage of those image files. After I bring the NAS back to the lab, would I store copies of the images somewhere on a storage server and create a working copy for processing? What backup options should I be considering?

Thanks.

 
Posted : 12/04/2018 5:38 pm
minime2k9
(@minime2k9)
Posts: 481
Honorable Member
 

Most NAS boxes run Linux software RAID (md), often with LVM layered on top.
If you attach the drives to a Linux machine with mdadm installed, you can reassemble the RAIDs from the attached disks and image the volumes using Guymager or similar tools.
This will change a very small amount of data on the disk, not user data but data nonetheless.
Write access is required to mount the drives, but you could try it with write blockers that 'cache' writes and see if it works.
Alternatively, image all the disks individually and try to rebuild the array back at your main office.
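For anyone following along, that workflow looks roughly like the sketch below. This is only illustrative, built around generic mdadm/ewfacquire (libewf) usage: the device names and output path are made-up examples, and you would want to verify the exact flags against your own tooling before relying on it. The code builds the command lines but does not execute them.

```python
# Sketch only: constructs (but does not run) the commands one might use to
# reassemble a NAS software RAID read-only and image the resulting volume.
# /dev/md0, /dev/sdb1, /dev/sdc1 and the output path are hypothetical.
import shlex

def assemble_cmd(md_device, member_disks):
    # --assemble --readonly activates the array without writing to it,
    # which should avoid triggering a resync
    return ["mdadm", "--assemble", "--readonly", md_device] + list(member_disks)

def acquire_cmd(source_device, image_base):
    # ewfacquire writes an EnCase-compatible (E01) image; -t sets the
    # target path (without the extension)
    return ["ewfacquire", "-t", image_base, source_device]

print(shlex.join(assemble_cmd("/dev/md0", ["/dev/sdb1", "/dev/sdc1"])))
print(shlex.join(acquire_cmd("/dev/md0", "/evidence/nas_volume")))
```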

 
Posted : 12/04/2018 6:36 pm
(@d1m4g3r)
Posts: 28
Eminent Member
Topic starter
 

Most NAS boxes run Linux software RAID (md), often with LVM layered on top.
If you attach the drives to a Linux machine with mdadm installed, you can reassemble the RAIDs from the attached disks and image the volumes using Guymager or similar tools.
This will change a very small amount of data on the disk, not user data but data nonetheless.
Write access is required to mount the drives, but you could try it with write blockers that 'cache' writes and see if it works.
Alternatively, image all the disks individually and try to rebuild the array back at your main office.

Thank you for your reply. However, I was actually asking how I can image using the NAS as the destination and not the source.

 
Posted : 12/04/2018 9:56 pm
UnallocatedClusters
(@unallocatedclusters)
Posts: 577
Honorable Member
 

I have a Synology DS 1817+ (https://www.synology.com/en-us/products/DS1817+) with eight 1 terabyte SSD drives.

I do not have the DS 1817+ RAIDed, so I have a total of 8 TB of storage capacity.

Currently I use the Synology to store forensic databases, which works very well.

Basically, I plug an external USB drive holding the forensic image file into one of my forensic workstations, and then create the forensic database (Forensic Explorer, OSForensics, Axiom) on the Synology. This setup allows my multiple forensic workstations to connect to the Synology and access the forensic databases stored there.

I do not see why you could not write forensic image files to a Synology's internal individual drives at the same time if you wanted to.

 
Posted : 12/04/2018 10:23 pm
(@thefuf)
Posts: 262
Reputable Member
 

This will change a very small amount of data on the disk, not user data but data nonetheless.

This is a dangerous assumption. It is possible that the activation of a software RAID volume will change gigabytes of user data.

 
Posted : 12/04/2018 10:51 pm
(@mscotgrove)
Posts: 938
Prominent Member
 

I think you will find that writing 8 streams of data to a NAS device will be very slow. Reading is potentially fast, but writing will involve a very large amount of head movement on the NAS drives.

One-to-one is, in my experience, likely to be the fastest solution.
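To put rough numbers on this: a single gigabit link tops out around 125 MB/s, so eight simultaneous streams share roughly 15 MB/s each, versus ~100 MB/s for a dedicated USB 3 spinning disk per machine. A quick back-of-envelope sketch (all figures are generic assumptions, not measurements, and NAS disk head contention would make the shared case even worse):

```python
# Back-of-envelope: why 8 simultaneous image streams to one NAS are slow.
GIGABIT_MBPS = 125        # ~1 Gb/s link = roughly 125 MB/s usable ceiling
STREAMS = 8
per_stream = GIGABIT_MBPS / STREAMS        # bandwidth each machine gets
print(round(per_stream, 1))                # 15.6 MB/s per machine

DIRECT_HDD_MBPS = 100     # typical dedicated external USB 3 spinning disk
TARGET_GB = 500           # hypothetical drive size to image
hours_nas = TARGET_GB * 1024 / per_stream / 3600
hours_direct = TARGET_GB * 1024 / DIRECT_HDD_MBPS / 3600
print(round(hours_nas, 1))                 # 9.1 hours over the shared link
print(round(hours_direct, 1))              # 1.4 hours one-to-one
```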

 
Posted : 13/04/2018 1:08 am
minime2k9
(@minime2k9)
Posts: 481
Honorable Member
 

This will change a very small amount of data on the disk, not user data but data nonetheless.

This is a dangerous assumption. It is possible that the activation of a software RAID volume will change gigabytes of user data.

In what scenarios are you talking about?

 
Posted : 13/04/2018 11:56 am
(@tacobreath)
Posts: 14
Active Member
 

I could be missing sovietpecker's first question, but it sounds to me like you might be able to solve the problem by creating a portable network on site. You would only need your NAS, a fast Ethernet switch and Ethernet cables. Each of your workstations would connect to the switch, and the switch would be connected to the NAS. I agree with mscotgrove - the imaging (writing) speed would be very slow, though.

For your second question, I would definitely recommend transferring those EnCase-compatible images to a storage server once you get back to your lab. This would allow you to reuse your NAS every time you need to image on site. The ideal situation is to have 2 copies of your image - 1 for processing and 1 as a backup copy - and keep each one on a separate device.

One option that might work for both on-site imaging and storage is a RAID hard drive enclosure like a Mediasonic PRORAID. It uses USB 3 ports rather than Ethernet connections. It holds 4 hard drives that can be configured in several different RAID arrays. You would just need a USB 3 hub to connect your imaging workstations to the unit. Because of its small size - about the size of a small NAS - it is portable enough to carry on site. But you can also connect a second one to your processing workstation at the lab. If your workstation is networked, all your devices on the network will see it as an external drive. Again, speed may be an issue, however.
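One practical note on the two-copy scheme above: before reusing the on-site media, it is worth verifying that the server copies match the acquired image byte for byte. A minimal sketch of that check using SHA-256 (the file paths you would pass in are up to you; streaming in chunks keeps memory use flat for multi-hundred-gigabyte images):

```python
# Minimal sketch: verify a working copy and a backup copy of an image file
# are identical by comparing SHA-256 hashes, reading in 1 MiB chunks so
# very large E01 files never need to fit in memory.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

def copies_match(path_a, path_b):
    # True only if both files hash to the same digest
    return sha256_of(path_a) == sha256_of(path_b)
```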

 
Posted : 13/04/2018 2:29 pm
(@thefuf)
Posts: 262
Reputable Member
 

This will change a very small amount of data on the disk, not user data but data nonetheless.

This is a dangerous assumption. It is possible that the activation of a software RAID volume will change gigabytes of user data.

In what scenarios are you talking about?

RAID resync.

 
Posted : 13/04/2018 3:14 pm
(@d1m4g3r)
Posts: 28
Eminent Member
Topic starter
 

I could be missing sovietpecker's first question, but it sounds to me like you might be able to solve the problem by creating a portable network on site. You would only need your NAS, a fast Ethernet switch and Ethernet cables. Each of your workstations would connect to the switch, and the switch would be connected to the NAS. I agree with mscotgrove - the imaging (writing) speed would be very slow, though.

For your second question, I would definitely recommend transferring those EnCase-compatible images to a storage server once you get back to your lab. This would allow you to reuse your NAS every time you need to image on site. The ideal situation is to have 2 copies of your image - 1 for processing and 1 as a backup copy - and keep each one on a separate device.

One option that might work for both on-site imaging and storage is a RAID hard drive enclosure like a Mediasonic PRORAID. It uses USB 3 ports rather than Ethernet connections. It holds 4 hard drives that can be configured in several different RAID arrays. You would just need a USB 3 hub to connect your imaging workstations to the unit. Because of its small size - about the size of a small NAS - it is portable enough to carry on site. But you can also connect a second one to your processing workstation at the lab. If your workstation is networked, all your devices on the network will see it as an external drive. Again, speed may be an issue, however.

Thank you Tacobreath. Great response. So it seems that speed is the major issue; that is noted. I am really interested in the storage of image files. Is it best to copy the image files from the external HD onto a backup server? Also, is it advisable to reuse those external HDs for imaging again? If you get a great number of cases with a large number of imaged systems, would you consider tape storage of the image files?

 
Posted : 16/04/2018 10:52 am