Tuesday, June 24, 2008

Link to the NIST research on dd issues

I've written a couple of simple overviews of the issues surrounding dd and the apparent loss of sectors when bad blocks are encountered. I neglected in my previous posts to include a link to the research by NIST at http://dfrws.org/2007/proceedings/p13-lyle.pdf.

Having spoken to both Drew Fahey and Barry Grundy, the feeling is that there is no reason to overreact; virtually every tool we use has some flaw or another. However, further research is needed to be clear about the issue and how to work around it.

I'm off to present at the ACPO conference tomorrow and I'm sure the subject will come up; I'll post any interesting comments.

Wednesday, June 11, 2008

Norway

I'm teaching this week at the National Police University in Norway and have met some very interesting and talented investigators from various services. What strikes me is the almost total lack of organised defense experts: most cases involving computer evidence rely almost entirely on the prosecution expert, with no counter from an alternative position.

As I do both prosecution and defense work I can see the pros and cons from both sides. While I do not doubt the integrity of the officers here, I do believe that a sound defense requires experts giving testimony from both sides. With the best will in the world the reports should be the same, since we are all looking at the same data, but we all know that things get missed and that some issues and elements can be explained in more than one way.

It does seem that some officers are now beginning to leave the service and set up on their own, so I suppose we will begin to see that change. In the UK, of course, we have many defense experts, and although one has to wonder about the competence and even integrity of one or two, at least a defendant can be assured of a second set of eyes on the data. Don't get me started on the need for regulation of the industry; I can go on all day. That doesn't mean I know how to solve the problem, though!

I guess setting up in Norway could be a good thing for someone?

Linux dd issues part 2

I spoke in my last few posts about the issues with dd in both Windows and Linux. Having recommended in a previous post that you use dd_rescue with the -d flag to enable direct disk access, I have since found that when run from the Helix distro it appears to work but actually creates a 0 byte file. I can't get my head around why it would do this.

However, following more research it appears that GNU dd in Linux supports an iflag=direct argument. This seems to enable O_DIRECT disk access and avoid the apparent buffering issue. Testing this against a drive with no errors, it acquired the drive as expected and produced the right hash, so at least it doesn't mess things up.

Interestingly, I emailed Barry Grundy about it and found he had been following the same line of research and testing. Both of us are away from our labs for a week or so and will not be able to test against a drive with bad sectors until we return, but I will post the results then.

If you wish to try it, the syntax is simple:-

dd if=/dev/(drive) of=(where you save it) conv=noerror iflag=direct
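
As a worked example, and assuming purely for illustration that the suspect drive is /dev/sda and the collection drive is mounted at /media/sdb1 (substitute your own devices and paths), the acquisition and a quick hash comparison against an error-free drive might look like this:-

# acquire with unbuffered O_DIRECT reads
dd if=/dev/sda of=/media/sdb1/image.dd conv=noerror iflag=direct
# hash the source and the image - on an error-free drive the two values should match
md5sum /dev/sda /media/sdb1/image.dd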

If you get any interesting results please don't hesitate to contact me.


Tuesday, June 3, 2008

...and FAU-dd issues

Having just posted about DCFLDD, I should add that my good friend Jim pointed out that I had ignored the issues with FAU-dd from George Garner. Helix uses this dd version on the Windows side, specifically because it supports the \\.\PhysicalMemory device to grab RAM. It has been noted that even if the block size is set to 512 bytes, FAU-dd still copies data in 4096 byte chunks to increase speed. However, if it encounters a bad block it will skip the full 4096 bytes.

The latest version from George steps back from 4096 bytes to 512 bytes when a bad block is found, to minimize the data lost, but unfortunately support for \\.\PhysicalMemory was removed in that version. This is only an issue if bad blocks are encountered. If you are concerned about it, removing the noerror switch will make dd stop when an error is found, allowing you to switch to a different tool. (Do not remove the noerror switch when imaging RAM, though; it will stop almost immediately.)
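
For illustration only, a RAM grab with the Helix copy of FAU-dd from the command line might look something like the lines below; the output path E:\memdump.dd is just a placeholder for your own collection drive:-

rem keep the noerror switch so the memory grab does not stop at the first unreadable region
dd.exe if=\\.\PhysicalMemory of=E:\memdump.dd conv=noerror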

As another way around the bad block issue, FTK Imager is also installed on the Windows side and there are no reported problems of this type with that tool. However, running a GUI will leave a greater footprint on a live system.

DCFLDD problems

A number of concerns have been raised recently about certain Linux dd implementations such as DCFLDD. You can read about them at http://www.forensicfocus.com/index.php?name=Forums&file=viewtopic&t=2557 and http://tech.groups.yahoo.com/group/ForensicAnalysis/message/82

In simple terms the problems revolve around how dd treats a bad sector. With the noerror flag set, one would hope that dd would skip the bad sector, zero it and move on. However, it would seem that a number of sectors are being missed when a bad block is found. Research by Barry Grundy and others indicates that this is down to the way the Linux kernel buffers data coming from the device being imaged. The buffering is a good thing in that it speeds things up, but it also appears to allow good sectors to be skipped when a bad one is encountered.
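
A quick way to check whether an image you have already taken has been affected, assuming the blockdev utility is available on your boot CD and using example names again, is to compare the size of the source device with the size of the image file; if sectors have been silently dropped the image will come up short:-

# size of the source device in bytes
blockdev --getsize64 /dev/sda
# size of the image file in bytes - the two figures should be identical
ls -l /media/sdb1/image.dd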

The problem affects one of my favourite tools, Helix, which uses DCFLDD as the basis for the Adepto GUI on the Linux side. In the meantime, if you are using Helix you can make use of dd_rescue, making sure that the -d flag is set, which enables direct disk access to the device. If you were planning to image the disk sda to an attached drive mounted as sdb1, this would look something like:-

dd_rescue -d -v /dev/sda /media/sdb1/image.dd
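
Whichever tool you end up using, it is worth hashing the image straight away so you have a value recorded for your notes (the path is the one from the example above):-

# record this value in your contemporaneous notes
md5sum /media/sdb1/image.dd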

The release of Helix Pro later this year will deal with the issue.