Bulldawg Posted November 20, 2013

I'm having some issues with backing up directly to a network share. By "issues" I mean it's taking too long. I'm working with my NAS vendor as well, but even backing up to another computer on the same network is much slower than I think it should be.

Hardware: Examiner PC with two 1 Gbit ports teamed with link aggregation. The case is stored on a single internal SATA drive, not the OS drive. The switch supports link aggregation and is properly configured (Juniper EX2200). My NAS is a Synology RS3413xs+ with a six-drive RAID 6 array.

1. Backing up to a local RAID 5 array takes about 8 minutes for this 17 GB case.
2. Backing up to the NAS takes about 11 hours for the same case.
3. Backing up to a shared drive on another computer takes about 40 minutes.

The network activity seems artificially limited to about 12 Mbit/s during the backup. I can copy the locally created backup from #1 above to the NAS in about 10 minutes.

Like I said, I'm working with the NAS vendor on why this is happening, but is Intella doing anything that could cause such dramatic slowdowns when backing up over a network share? I'm also experiencing exactly the same problem with EnCase 7.08.01 in this setup. I'll be asking EnCase for help with this too, but someone must have an answer to this.
Bulldawg Posted November 21, 2013

Here's something more interesting: when I created an iSCSI target on the NAS and backed up to that, I got the full expected speed, limited only by the speed of the hard drive the case is stored on. Why am I getting full speed connected to the same NAS on the same RAID array through an iSCSI target, but only a small fraction of that speed when connected through a shared folder?
dougee Posted November 21, 2013

Not sure if it's the issue you're suffering from, but it could be an issue with the SMB protocol between the devices and the way EnCase and Intella write the data out. I have seen very similar numbers when imaging across the network to an SMB share in Linux. When I changed to using Netcat or iSCSI the speed difference was incredible; the SMB traffic overhead (in my case) was enormous. Maybe sniff the network traffic during the backup and see if you can spot any overhead or other issues.
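If you want a quick way to check for per-write overhead before diving into packet captures, something like this rough Java sketch (the share path and sizes are placeholders, not anything taken from EnCase or Intella) writes the same amount of data to a share in small chunks and then in large chunks and compares the effective speed:

// Rough test sketch: write a test file to a share twice, once in small chunks and
// once in large chunks, to see how much of a slowdown is per-write protocol
// overhead rather than raw bandwidth. Path and sizes are made up; adjust to taste.
import java.io.FileOutputStream;
import java.io.IOException;

public class SmbWriteTest {
    // Placeholder UNC path; point this at your own share.
    private static final String TARGET = "\\\\nas\\evidence\\smb_write_test.bin";
    private static final long TOTAL_BYTES = 256L * 1024 * 1024; // 256 MB test file

    public static void main(String[] args) throws IOException {
        timedWrite(4 * 1024);        // 4 KB writes
        timedWrite(1024 * 1024);     // 1 MB writes for comparison
    }

    private static void timedWrite(int chunkSize) throws IOException {
        byte[] chunk = new byte[chunkSize];
        long start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream(TARGET)) {
            for (long written = 0; written < TOTAL_BYTES; written += chunkSize) {
                out.write(chunk);    // one OS-level write (and SMB request) per chunk
            }
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double mbitPerSec = TOTAL_BYTES * 8 / 1e6 / seconds;
        System.out.printf("chunk=%d bytes: %.1f s, %.0f Mbit/s%n", chunkSize, seconds, mbitPerSec);
    }
}

If the small-chunk run is dramatically slower than the large-chunk run, the bottleneck is almost certainly per-write protocol round trips rather than raw link bandwidth.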
Bulldawg Posted November 21, 2013

Time to dust off my packet analysis book. I've never been much of a Wireshark expert, but it should be interesting to watch.

A user on the EnCase forum suggested I try a product called ProcessActivityView from Nirsoft. It allows me to monitor all the files a process opens, closes, writes, reads, etc. I've learned a lot about EnCase and Intella just watching them work through ProcessActivityView. For instance, the reason EnCase takes so long to open a case is that it's reading exactly 16 KB of data from what looks like every file in the evidence cache. In this particular case, that's about 70,000 files. Even off an SSD, that takes a while. I've also learned that the EnCase backup process writes tens or even hundreds of thousands of files. SMB overhead may indeed be an issue there.

Intella is much nicer to the file system during backup. It's just copying the contents of the case to a backup location. Thankfully, most of the case is in itemcontent.dat, so it's only a few hundred files to copy.

I'm going to keep going on this until I figure it out and will report back. In the meantime, if anyone has any other ideas, I'm open.
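One more thing: if anyone wants to see this kind of file profile for their own case without ProcessActivityView, a rough sketch like the one below (the path is just a placeholder) walks a case folder and reports how many files there are and how much of the total size sits in the single largest file:

// Throwaway helper sketch: walk a case folder and report file count, total size,
// and the share held by the largest file, to see whether a backup is dominated by
// one big file or thousands of small ones. The default path is a placeholder.
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;
import java.util.concurrent.atomic.AtomicLong;

public class CaseProfile {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : "D:\\Cases\\ExampleCase");
        AtomicLong count = new AtomicLong();
        AtomicLong total = new AtomicLong();
        AtomicLong largest = new AtomicLong();

        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                count.incrementAndGet();
                total.addAndGet(attrs.size());
                largest.accumulateAndGet(attrs.size(), Math::max);
                return FileVisitResult.CONTINUE;
            }
        });

        System.out.printf("%d files, %.1f GB total, largest file is %.1f GB (%.0f%% of the case)%n",
                count.get(), total.get() / 1e9, largest.get() / 1e9,
                100.0 * largest.get() / Math.max(1, total.get()));
    }
}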
Bulldawg Posted November 22, 2013

Wireshark has definitely shown that SMB overhead is a problem. During a Windows Explorer file copy there is a lot of [TCP segment of a reassembled PDU], which I believe is the last TCP packet in a chunk of data. I see these packets about once every 0.000001-0.000002 seconds. During an Intella backup, I see these same packets, but they are about 0.0005 seconds apart. If I'm reading this right, the Windows Explorer file copy is transferring data about 250-500 times faster than the Intella backup. For some reason, this particular copy did seem even slower than normal.

In between the [TCP segment of a reassembled PDU] packets, I consistently see this pattern:

Protocol  Length  Info
SMB       258     Write AndX request, FID: 0x4b14, 4096 bytes at offset 2916352
TCP       60      microsoft-ds > 64249 [ACK] Seq=1411882 Ack=111323736 Win=65535 Len=0
SMB       105     Write AndX request, FID: 0x41b14, 4096 bytes

This pattern repeats with little variation between each and every data PDU on the slow backup. I do see some of these TCP packets and the less frequent "Write AndX" requests in the faster copy, but they are much fewer and don't appear between each data packet.

If anyone is better than I am at interpreting this or knows what the problem is, please let me know. My Googling isn't turning up much.
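For what it's worth, here is the back-of-the-envelope math on those numbers as a tiny snippet (just arithmetic on the values above, nothing more): if every 4 KB Write AndX has to wait for its own acknowledgement before the next one goes out, then the write size divided by the gap between writes puts a hard ceiling on throughput, no matter how fast the link is.

// Back-of-the-envelope check using the numbers from the capture above:
// throughput ceiling = writeSize / gap between synchronous writes.
public class SmbThroughputCap {
    public static void main(String[] args) {
        int writeSize = 4096;        // bytes per Write AndX request
        double gap = 0.0005;         // seconds between writes, from Wireshark
        double bytesPerSec = writeSize / gap;
        System.out.printf("~%.1f MB/s (~%.0f Mbit/s) ceiling with synchronous 4 KB writes%n",
                bytesPerSec / 1e6, bytesPerSec * 8 / 1e6);
    }
}

That works out to roughly 8 MB/s, which is nowhere near what a gigabit link, let alone two aggregated links, can do.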
arjohn Posted November 25, 2013

Hi Bulldawg,

What Intella version do you use for this? The most recent release (1.7.1) uses a different method for creating the backup. If you're using an older version at the moment, could you try 1.7.1 and see if that improves things?
Bulldawg Posted November 25, 2013

I have only tried it with 1.7.1.
dougee Posted November 26, 2013

That sounds exactly like the SMB issue I was having when imaging to SMB shares. The TCP overhead was huge. Hopefully Intella can look at the network issue with backups and make improvements.
arjohn Posted November 26, 2013

We have been able to reproduce the network issue, although not to the extent that you've mentioned. We have started an investigation and will try to improve the backup speed.
Bulldawg Posted November 26, 2013

John,

That's great. Would you mind sharing a few details about what's causing it? I'm trying to troubleshoot all the software and devices on my network that are showing slowness. Any insights into what Vound is doing would be appreciated.

- I'm seeing the same issue with EnCase 7.08.01, although EnCase's backup process creates tens of thousands of little files, so I suspect their problem is a little different from yours.

- I'm able to image a drive to the CIFS (SMB) share from a TD3. It starts out slow, about 30 MB/s, but picks up speed later to end with an average of 70-90 MB/s. Although that's a bit slower than it should be, it eliminates the copy step I'd otherwise have if I imaged to a bare drive, so I'm considering using my TD3 to image straight to my evidence share on the NAS in the future. I haven't tried imaging through a computer directly to a share yet.

As to your comment that you've not seen the problem to the same extent I'm seeing, I'll point out again that performance does vary widely, even when backing up the same case to new, empty folders on the share. After the NAS has been on for a few days, the slowness is less pronounced.
arjohn Posted November 29, 2013

Could you try our soon-to-be-announced 1.7.2 release? We've been able to sneak in some last-minute improvements that should improve backup performance. We have seen a noticeable improvement in our test setups, but we'd love to hear how this works out for you.
Bulldawg Posted December 2, 2013

I ran a backup with 1.7.1 this morning. That took 1:48 (an hour and forty-eight minutes) to an empty folder on the NAS. I ran a backup with 1.7.2 just now: same case, no modifications, just opened the case and closed it, backing up to a new, empty folder on the NAS. That backup took only 0:12. For the record, a backup to a local RAID 5 array took about 0:07.

It is interesting that it only reached peak performance when copying itemcontent.dat. The smaller files, even big ones like strings.dat (I assume the one in index\locations, since it's the biggest of the strings.dat files), topped out at about 500 Mbit/s, but when copying itemcontent.dat it reached nearly 1 Gbit/s (actually about 900 Mbit/s). I saw similar differences in speed with the local backup. itemcontent.dat is about 25 GB in this case.

Despite still being slower than the local backup, 1.7.2 backs up much faster than 1.7.1 when backing up to an SMB share. The difference in speed is probably mostly down to TCP and SMB overhead.
arjohn Posted December 2, 2013

Great, and thanks for the update. What did the trick was switching from synchronous I/O to asynchronous I/O and using larger I/O buffers.
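For those curious what that means in practice, the general pattern is roughly like the simplified sketch below. This is an illustration only, not the actual backup code, and the paths and buffer size are just placeholders: read the next chunk of the source file while the previous chunk is still being written to the destination, and use buffers measured in megabytes rather than kilobytes so each write carries a useful amount of data.

// Simplified illustration of the general idea (not the actual backup code):
// copy a file using large buffers and overlap reading the next chunk with
// writing the previous one, instead of many small strictly synchronous writes.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class OverlappedCopy {
    private static final int BUFFER_SIZE = 8 * 1024 * 1024; // 8 MB chunks instead of 4 KB

    public static void main(String[] args) throws IOException, InterruptedException, ExecutionException {
        // Example paths only; point these at a real case file and a share.
        copy("D:\\Cases\\ExampleCase\\itemcontent.dat", "\\\\nas\\backup\\itemcontent.dat");
    }

    static void copy(String src, String dst) throws IOException, InterruptedException, ExecutionException {
        try (FileChannel in = FileChannel.open(Paths.get(src), StandardOpenOption.READ);
             AsynchronousFileChannel out = AsynchronousFileChannel.open(Paths.get(dst),
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                     StandardOpenOption.TRUNCATE_EXISTING)) {

            ByteBuffer reading = ByteBuffer.allocateDirect(BUFFER_SIZE);
            ByteBuffer writing = ByteBuffer.allocateDirect(BUFFER_SIZE);
            long writeOffset = 0;                 // file offset of the chunk currently being written
            Future<Integer> pendingWrite = null;

            while (true) {
                reading.clear();
                int read = in.read(reading);      // read the next chunk from the source file...

                if (pendingWrite != null) {       // ...while the previous chunk is still being written
                    writeOffset += pendingWrite.get();
                    while (writing.hasRemaining()) {              // finish any partial write (rare)
                        writeOffset += out.write(writing, writeOffset).get();
                    }
                }
                if (read <= 0) {
                    break;                        // end of file; last chunk was flushed above
                }
                reading.flip();
                ByteBuffer tmp = writing; writing = reading; reading = tmp;   // swap buffers
                pendingWrite = out.write(writing, writeOffset);               // start the next write asynchronously
            }
        }
    }
}

With writes this large, the per-request overhead that showed up in the Wireshark capture gets amortized over megabytes instead of 4 KB at a time.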