<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Some stuff &#187; error</title>
	<atom:link href="http://blog.yhuang.org/?feed=rss2&#038;tag=error" rel="self" type="application/rss+xml" />
	<link>https://blog.yhuang.org</link>
	<description>here.</description>
	<lastBuildDate>Wed, 27 Aug 2025 08:50:58 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.1</generator>
		<item>
		<title>coding block length in action</title>
		<link>https://blog.yhuang.org/?p=250</link>
		<comments>https://blog.yhuang.org/?p=250#comments</comments>
		<pubDate>Sun, 14 Mar 2010 04:12:28 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[article talks]]></category>
		<category><![CDATA[block]]></category>
		<category><![CDATA[byte sectors]]></category>
		<category><![CDATA[correction]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[error rates]]></category>
		<category><![CDATA[interleaving]]></category>
		<category><![CDATA[protection]]></category>
		<category><![CDATA[reed solomon]]></category>
		<category><![CDATA[sector]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=250</guid>
		<description><![CDATA[This article talks about the switch to 4096-byte sectors from the current 512-byte sectors for PC hard disks. This section is interesting: 4096 byte sectors don&#8217;t solve the analog problem—signals are getting weaker, and noise is getting stronger, and only reduced densities or some breakthrough in recording technology are going to change that—but it helps [...]]]></description>
			<content:encoded><![CDATA[<p><a href="http://arstechnica.com/microsoft/news/2010/03/why-new-hard-disks-might-not-be-much-fun-for-xp-users.ars/">This article</a> talks about the switch to 4096-byte sectors from the current 512-byte sectors for PC hard disks. This section is interesting:</p>
<blockquote><p>4096 byte sectors don&#8217;t solve the analog problem—signals are getting weaker, and noise is getting stronger, and only reduced densities or some breakthrough in recording technology are going to change that—but it helps substantially with the error-correcting problem. Due to the way error correcting codes work, larger sectors require relatively less error correcting data to protect against the same size errors. A 4096 byte sector is equivalent to eight 512 byte sectors. With 40 bytes per sector for finding sector starts and 40 bytes for error correcting, protecting against 50 error bits, 4096 bytes requires (8 x 512 + 8 x 40 + 8 x 40) = 4736 bytes; 4096 of data, 640 of overhead. The total protection is against 400 error bits (50 bits per sector, eight sectors), though they have to be spread evenly among all the sectors.</p>
<p>With 4096 byte sectors, only one sector start is needed, and to achieve a good level of protection, only 100 bytes of error checking data are required, for a total of (1 x 4096 + 1 x 40 + 1 x 100) = 4236 bytes; 4096 of data, 140 of overhead. 100 bytes per sector can correct up to 1000 consecutive error bits; for the foreseeable future, this should be &#8220;good enough&#8221; to achieve the specified error rates. With an overhead of just 140 bytes per sector, about 96% of the disk&#8217;s capacity can be used.</p></blockquote>
<p>With longer block lengths, the error correction capability generally goes up for the same coding overhead; however, it seems rather more complicated than this. First of all, I don&#8217;t think each manufacturer uses the same code or coding structure. (They used to just use Reed-Solomon code, though later they tried concatenating it with LDPC code, and now I hear some are switching to pure LDPC with iterative decoding.) But even if we assume they use some non-exotic block code, and use interleaving for bursts, the math still seems very strange: 40 error correction <em>bytes</em> can only correct 50 consecutive <em>bits</em> currently? I think not.</p>
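<p>As a sanity check, here is a small sketch of the quoted arithmetic. The sector layout numbers come from the quote above; the Reed-Solomon rule of thumb (parity bytes / 2 correctable byte errors) is a standard textbook property, not something the article states, and it supports the skepticism about the 50-bit figure.</p>

```python
# Overhead arithmetic from the quoted passage. The layout() helper and its
# parameter names are illustrative, not from the article.

def layout(sectors, data=512, sync=40, ecc=40):
    """Return (total bytes on platter, overhead bytes) for a sector layout."""
    total = sectors * (data + sync + ecc)
    return total, sectors * (sync + ecc)

total8, overhead8 = layout(8)                                # eight 512-byte sectors
total1, overhead1 = layout(1, data=4096, sync=40, ecc=100)   # one 4096-byte sector

print(total8, overhead8)   # 4736 640, as in the quote
print(total1, overhead1)   # 4236 140, as in the quote
print(f"usable fraction: {4096 / total1:.1%}")               # about 96%, as quoted

# Reed-Solomon rule of thumb: t correctable *symbols* = parity / 2, so
# 40 parity bytes correct about 20 byte errors, i.e. up to ~160 consecutive
# bits with byte-aligned symbols -- well above the article's 50 bits.
print(40 // 2 * 8)   # 160
```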
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=250</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Audio through the PC Speaker</title>
		<link>https://blog.yhuang.org/?p=197</link>
		<comments>https://blog.yhuang.org/?p=197#comments</comments>
		<pubDate>Mon, 10 Aug 2009 07:32:08 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[beep beep]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[mechanical impedance]]></category>
		<category><![CDATA[modulation]]></category>
		<category><![CDATA[pc speaker driver]]></category>
		<category><![CDATA[pulse code modulation]]></category>
		<category><![CDATA[restriction]]></category>
		<category><![CDATA[speaker]]></category>
		<category><![CDATA[target signal]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=197</guid>
		<description><![CDATA[At work, sound comes from inside the computer and has a terrible quality, so I wondered if it used the PC speaker. Years ago when I didn&#8217;t have a sound card on a 486 computer, I used a PC speaker driver to play sound. To my surprise, this driver is still available. However, I didn&#8217;t [...]]]></description>
			<content:encoded><![CDATA[<p>At work, sound comes from inside the computer and has a terrible quality, so I wondered if it used the PC speaker. Years ago when I didn&#8217;t have a sound card on a 486 computer, I used a PC speaker driver to play sound.</p>
<p>To my surprise, this driver is <a href="http://support.microsoft.com/kb/138857">still available</a>.</p>
<p>However, I didn&#8217;t realize at the time how playing sound through the PC speaker required an interesting hack. You see, the PC speaker could only be put into two states, on and off. So it was easy to make single pitches (which we all did, didn&#8217;t we? beep beep). So technically the PC speaker could only generate square waves of various duty cycles. Here&#8217;s <a href="http://www.scribd.com/doc/16750435/c-Io-Hardware-Programming-for-Pc-Speakers">how the PC speaker is controlled</a>.</p>
<p>Then how was it able to play arbitrary wave files? Well, it turns out 8-bit sound is supposedly generated by pulse code modulation <a href="http://www.bsdg.org/swag/SOUND/0107.PAS.html">like this</a>, like a light dimmer. The idea is that to get &#8220;half on&#8221; you turn it on half the time. But it really shouldn&#8217;t work that way if you think about it, at least not so trivially, since what is being set is the amplitude, not the average power.<br />
<span id="more-197"></span><br />
Of course the PC speaker, even though it can only be given signals to drive it to two states, doesn&#8217;t just make square waves, since it has mechanical impedance. In fact it is a low-pass filter. So the true problem, which any decent PC speaker driver should solve, is: given a target signal to be reproduced and an impulse response of the PC speaker, find the input signal (restricted to two values, say, -1 and 1) that, when filtered by the PC speaker, gives the least error from the target signal. This probably is not given by the simple modulation scheme. On the other hand, is the two-value restriction costly? What are the minimum and maximum amounts of error that result?</p>
<p>There must be a paper on this. On the other hand, I&#8217;m not sure that any existing PC speaker driver implements anything like this. Here is the <a href="https://lliurex.net/projects/valencia/gaia/browser/ubuntu-mods/trunk/alsa-driver/alsa-kernel/drivers/pcsp/pcsp_input.c?rev=8599">code for one</a> of them, but it looks too short to be doing anything sophisticated.</p>
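<p>To make the stated problem concrete, here is a minimal sketch of the optimization it describes, under assumptions of my own: the speaker is modeled as a one-pole low-pass filter, and the binary input is chosen greedily one sample at a time. A real driver would face a harder global problem, and this is not what any existing driver is known to do.</p>

```python
# Greedy binary drive of a low-pass "speaker": at each step, pick the input
# level (+1 or -1) whose filtered result lands closest to the target sample.
# The one-pole filter model and alpha value are illustrative assumptions.
import math

def greedy_binary_drive(target, alpha=0.3):
    """One-pole low-pass model: y[n] = (1 - alpha) * y[n-1] + alpha * x[n]."""
    y, out, inputs = 0.0, [], []
    for t in target:
        # choose the binary level whose filtered output is closer to the target
        best = min((+1.0, -1.0), key=lambda x: abs((1 - alpha) * y + alpha * x - t))
        y = (1 - alpha) * y + alpha * best
        inputs.append(best)
        out.append(y)
    return inputs, out

target = [0.5 * math.sin(2 * math.pi * k / 32) for k in range(256)]
inputs, out = greedy_binary_drive(target)
err = sum((a - b) ** 2 for a, b in zip(out, target)) / len(target)
print(f"mean squared tracking error: {err:.4f}")
```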
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=197</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Is this true?</title>
		<link>https://blog.yhuang.org/?p=166</link>
		<comments>https://blog.yhuang.org/?p=166#comments</comments>
		<pubDate>Sat, 07 Mar 2009 21:41:39 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[binary symmetric channel]]></category>
		<category><![CDATA[classical statement]]></category>
		<category><![CDATA[codewords]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[frac]]></category>
		<category><![CDATA[input alphabet]]></category>
		<category><![CDATA[math]]></category>
		<category><![CDATA[noisy channel]]></category>
		<category><![CDATA[theorem]]></category>
		<category><![CDATA[wikipedia]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=166</guid>
	<description><![CDATA[So this thing on Wikipedia http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem could have left it at the classical statement of the theorem with bullet #1. Then it goes on to say: 2. If a probability of bit error p_b is acceptable, rates up to R(p_b) are achievable, where R(p_b) = C/(1-H_2(p_b)). 3. For any p_b, rates greater than R(p_b) are not achievable. I have never [...]]]></description>
			<content:encoded><![CDATA[<p>So this thing on Wikipedia</p>
<p><a href="http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem">http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem</a></p>
<p>could have left it at the classical statement of the theorem with bullet #1. Then it goes on to say:</p>
<p>2. If a probability of bit error \(p_b\) is acceptable, rates up to \(R(p_b)\) are achievable, where</p>
<p>\(R(p_b) = \frac{C}{1-H_2(p_b)}\).</p>
<p>3. For any \(p_b\), rates greater than \(R(p_b)\) are not achievable.<br />
<span id="more-166"></span><br />
I have never seen this before. At first glance, this seems questionable, as Fano&#8217;s converse gives \(P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R}\), which seems to converge to \(H_b(p_e) \ge p_e\) for \(p_e \in [0,0.5]\). So it must mean whatever is used to code this is not going to be a long block code.</p>
<p>One example where this is true is the binary symmetric channel, with uncoded transmission. But I&#8217;m not so sure what is the achievability scheme in general, although I have some ideas &#8212; it may involve quantizing the excess codewords to the nearest zero-error codewords. The converse I have no idea.</p>
<p>In terms of the statement, it is really unclear what is meant by &#8220;bit error&#8221;. In the classical statement, a message from a large alphabet is coded into some \(X^n \in \mathcal{X}^n\) where \(\mathcal{X}\) is the channel input alphabet. After decoding, \(X^n\) is either found correctly, or it is in error. There is no &#8220;bit&#8221; in here. Even if \(X\) is binary, is the bit error the received (uncooked) bit error? Or is it the decoded (cooked) bit error? Why should the decoded bit error matter, isn&#8217;t that a codebook artifact? Or is it the bit error in the original message, if the original message is to be represented by a bit-stream? But that is also entirely arbitrary.</p>
<p>Anyway I&#8217;d like a clarification from someone or a reference.</p>
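<p>For what it&#8217;s worth, the claimed formula is easy to evaluate, and the one case mentioned above (uncoded transmission over a binary symmetric channel) does line up with it; the sketch below is only a numerical check, not a proof of either direction.</p>

```python
# Numerical check of the Wikipedia claim discussed above: with tolerated
# bit-error probability p_b, rates up to R(p_b) = C / (1 - H2(p_b)) are
# said to be achievable. The sanity check is the post's own example: over a
# BSC(p) with capacity C = 1 - H2(p), uncoded transmission (rate 1, bit
# error p) sits exactly at R(p).
import math

def H2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def R(C, pb):
    return C / (1 - H2(pb))

p = 0.11                      # BSC crossover probability (arbitrary example)
C = 1 - H2(p)                 # BSC capacity
print(f"C = {C:.4f} bits/use")
print(f"R(p_b = {p}) = {R(C, p):.4f}")   # equals 1.0: uncoded transmission
```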
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=166</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>IT security policy &#8220;research&#8221;</title>
		<link>https://blog.yhuang.org/?p=103</link>
		<comments>https://blog.yhuang.org/?p=103#comments</comments>
		<pubDate>Sat, 23 Feb 2008 20:23:36 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[charge decay]]></category>
		<category><![CDATA[curious piece]]></category>
		<category><![CDATA[DRAM]]></category>
		<category><![CDATA[dram manufacturers]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[hardware]]></category>
		<category><![CDATA[liquid nitrogen]]></category>
		<category><![CDATA[magnitude difference]]></category>
		<category><![CDATA[RAM]]></category>
		<category><![CDATA[security]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=103</guid>
		<description><![CDATA[&#8220;Researchers find way to steal encrypted data,&#8221; screams this article in the New York Times. Oh do they? But come&#8230; on&#8230;, what is this ridiculous demonstration? Okay, okay, it&#8217;s the IT Policy School over there, let&#8217;s cut them some slack. What they&#8217;ve come up with is a way to read seated DRAM under OS lock [...]]]></description>
			<content:encoded><![CDATA[<p>&#8220;Researchers find way to steal encrypted data,&#8221; screams <a href="http://www.nytimes.com/2008/02/22/technology/22chip.html">this article</a> in the New York Times.</p>
<p>Oh <em>do</em> they? But <em>come&#8230; on&#8230;</em>, what is <a href="http://citp.princeton.edu/memory/">this ridiculous demonstration</a>? Okay, okay, it&#8217;s the IT Policy School over there, let&#8217;s cut them some slack. What they&#8217;ve come up with is a way to read seated DRAM under OS lock without specialized hardware, and if they said that, it would be fine.<br />
<span id="more-103"></span><br />
While I don&#8217;t care for their pseudo-slick presentation and shameless self-promotion (with a &#8220;blog&#8221;?), it is still a curious piece of work. Its unfortunate and regurgitated untechnicality leaves questions, though. DRAM is refreshed in tens of milliseconds, and since DRAM manufacturers are always trying to cut power consumption, I&#8217;m going to assume this rate is necessary to ensure reliable read-out. There is a three-order-of-magnitude difference between that and the reported seconds to minutes that DRAM can sit without power and still be read, during which time <em>exponential</em> charge decay takes place. Something else has to be going on, no? It just isn&#8217;t entirely clear that when the computer is turned off momentarily, on-board capacitors or even on-module capacitors aren&#8217;t discharging for long enough to residually power the refresh circuitry [*]. On the other hand, they claim they can remove the RAM completely and (with the help of liquid nitrogen) halt for an hour without power. I have some doubts as they dance around this issue.</p>
<p>As for real implications for security, there aren&#8217;t many, if only because this kind of breach isn&#8217;t fundamental. We already know that once indefinite hardware access to a running machine is first obtained (a practical requirement for this attack), there are always ways to compromise it. That&#8217;s how <a href="http://www.xenatera.com/bunnie/proj/anatak/xboxmod.html">the Xbox was cracked</a> &#8212; I&#8217;m talking about in-parallel probes on pins and traces, which can be just as well applied to the scenario here. Unless there are self-destructive mechanisms or other <em>fundamental</em> barriers to hardware access, we are just dealing with a matter of how high the effort threshold is. To fix it, encryption keys should not be stored in RAM in a detectable way, and any TPM modules that are currently being designed should have additional hardware security measures. That&#8217;s not hard to do, but in the meantime, let&#8217;s sit back and watch an uptick in the cracking of existing software and DRM protection schemes, as protected areas of RAM are opened up to easy hacking &#8212; a far more likely and practical fallout.</p>
<hr size=1>
<p>[*] I just read their full technical documentation, and they seem a little sloppy. They measure (and plot) total module read-out error rate, but then fit a curve to it that they justify with MOSFET charge decay characteristics. Is that right? Well, no: error rate should exhibit the typical digitizing waterfall effect of the comparator circuit.</p>
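<p>The footnote&#8217;s point can be illustrated with a toy model (all numbers made up): give each cell an exponential decay with a slightly different time constant, digitize against a fixed comparator threshold, and the aggregate error rate comes out as a waterfall-shaped curve in time rather than an exponential.</p>

```python
# Toy model of comparator digitization: even if each cell's charge decays
# exponentially, the *measured* bit error rate is the fraction of cells
# whose voltage has crossed the comparator threshold, which produces a
# waterfall-shaped curve, not an exponential one. All parameters here
# (decay constants, threshold, spread) are made up for illustration.
import math, random

random.seed(0)
taus = [random.gauss(10.0, 2.0) for _ in range(10_000)]  # per-cell decay constants (s)
threshold = 0.5                                           # comparator trip point

def error_rate(t):
    # a cell reads wrong once exp(-t/tau) (normalized voltage) falls below threshold
    return sum(math.exp(-t / tau) < threshold for tau in taus) / len(taus)

for t in (2, 5, 7, 9, 12):
    print(f"t = {t:2d}s  error rate = {error_rate(t):.3f}")
```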
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=103</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>the sixteenth amendment not properly ratified?</title>
		<link>https://blog.yhuang.org/?p=97</link>
		<comments>https://blog.yhuang.org/?p=97#comments</comments>
		<pubDate>Sat, 02 Feb 2008 06:12:32 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[amendment]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[farm]]></category>
		<category><![CDATA[procedural error]]></category>
		<category><![CDATA[sixteenth]]></category>
		<category><![CDATA[sixteenth amendment]]></category>
		<category><![CDATA[tax]]></category>
		<category><![CDATA[tax protestors]]></category>
		<category><![CDATA[tax rates]]></category>
		<category><![CDATA[three quarters]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=97</guid>
		<description><![CDATA[A lot of people seem to write about that, claiming procedural error. That&#8217;s not what I care about, actually. I am interested to know why people in the late 19th century clamored for an income tax. It seems strange. It looks like the farm lobby in the West at that time wanted a graduated tax [...]]]></description>
			<content:encoded><![CDATA[<p>A lot of people seem to write about that, claiming procedural error. That&#8217;s not what I care about, actually.</p>
<p>I am interested to know why people in the late 19th century clamored for an income tax. It seems strange. It looks like the farm lobby in the West at that time wanted a graduated tax to redistribute income, so I can understand some states being for an income tax, but three-quarters of the states? It seems difficult even to raise tax rates today, so where were the &#8220;tax protestors&#8221; back then?</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=97</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Today I became suspicious of everything (part 3)</title>
		<link>https://blog.yhuang.org/?p=38</link>
		<comments>https://blog.yhuang.org/?p=38#comments</comments>
		<pubDate>Mon, 25 Dec 2006 17:06:04 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[bad sectors]]></category>
		<category><![CDATA[ddrescue]]></category>
		<category><![CDATA[disk]]></category>
		<category><![CDATA[error]]></category>
		<category><![CDATA[everything]]></category>
		<category><![CDATA[hard disk recovery]]></category>
		<category><![CDATA[image rescue]]></category>
		<category><![CDATA[MiB]]></category>
		<category><![CDATA[perpendicular recording technology]]></category>
		<category><![CDATA[rescue image]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=38</guid>
		<description><![CDATA[This is part of the hard disk recovery documentation. Part 3. Today I became suspicious of (the ext2ifs driver, the mkfs command, the USB enclosure, and basically) everything On Christmas morning Santa Claus had not granted my wish: ddrescue was still running, but the image file had not been timestamped any more recently than when [...]]]></description>
			<content:encoded><![CDATA[<p>This is part of the hard disk recovery documentation.</p>
<p>Part 3.</p>
<p><font color="#770033"><br />
Today I became suspicious of (the ext2ifs driver, the mkfs command, the USB enclosure, and basically) everything</p>
<p>On Christmas morning Santa Claus had not granted my wish: ddrescue was still running, but the image file had not been timestamped any more recently than when I left it, and the damaged drive had spun down by itself.  dmesg revealed a syslog message &#8220;too many IO errors&#8221; or something like that, which had caused Linux to give up on reading from the damaged drive.  I was very frustrated because, well let&#8217;s see, I had expected the disk imaging to make good progress, but instead&#8230; I must suffer a reboot and the induced indefinite re-churning of the drive, with even more data loss!  What.</font><br />
<span id="more-38"></span><br />
<font color="#770033">Attempting to reboot, I got impatient and tried the Dell Diagnostic partition again.  To my dismay, the Dell Diagnostic partition had become unreadable, a sure sign that the disk&#8217;s failures were worsening, adding more urgency to the recovery mission.  With no viable second option, I booted Knoppix again and resigned myself to the disk churning.  Finally I was in again and started ddrescue back up at about 4000MB, to skip the errors at 3200MB.</p>
<p>  /media/sda2#  ddrescue -B -n -i4000M /dev/hda rescue.image rescue.log</p>
<table align="right" width="300" border="1" cellpadding="10" style="margin: 2 2 2 2; background: #FFFFFF; border-collapse: collapse; border-style: dashed; border-color: #365873;">
<tr>
<td>
At some point I had to get a replacement hard drive. 80GB seemed to be the right price point, and somehow I narrowed it down to just two &#8220;choices&#8221;: a Seagate Momentus 5400.3 (with Perpendicular Recording Technology) believe it or not, and another one by Fujitsu, so it wasn&#8217;t much of a choice.  The Seagate had already received reviews noting bad sectors (!) and I really did not want another one of those.  I also rang Dell to see if the parts would be interchangeable (mostly on the question of the CD drive).  Unfortunately, Dell would not give me a straight answer, but after insisting on getting my personal information, they told me &#8220;your warranty has expired,&#8221; &#8220;but you have unlimited lifetime phone support,&#8221; which turned out to be a person on the other end putting me on hold for minutes at a time every time I asked a question to go read a manual and coming back with some inane suggestion.  For example, I was told to repeat things I said I already tried, in particular to go into the Dell Diagnostic partition, which of course was already dead.  His eventual diagnosis was, &#8220;it has been determined that, sir, you have a hardware problem, not a software problem, and due to your warranty has already expired, you may get a new drive from anywhere, just make sure to get a 40GB IDE disk&#8221; (presumably to match the original).  So I wrote off the wasted hour on that call as sunk cost, and went ahead with the Fujitsu drive.
</td>
</tr>
</table>
<p>It continued to stick briefly at roughly 1GB intervals of data transferred, sometimes less.  Sometimes I would manually interrupt the process and re-start it at a later point.  ddrescue allows this with no problem because it uses its log intelligently to patch together various chunks of recovered data, such that validly transferred data was never reread and interruptions were not costly to the recovered image.  Wishing to avoid a repeat of the inextricable-3200MB-error from Sunday night, I interrupted ddrescue whenever it seemed to be stuck for long.  I noted the following sticking points:</p>
<p>  3277MiB<br />
  9798MiB<br />
  10415MiB<br />
  10917MiB<br />
  11906MiB<br />
  13885MiB<br />
  24499MiB<br />
  25890MiB<br />
  27394MiB<br />
  27851MiB<br />
  31280MiB<br />
  32799MiB<br />
  33748MiB<br />
  35065MiB<br />
  36954MiB</p>
<p>The data corruption pattern covers the entire disk &#8212; so far as there is any physical correspondence &#8212; but affects only a small percentage of the disk&#8217;s data.  In fact, the recovery-to-error ratio was at least 100:1.  I eventually transferred about 38GB of raw bits out of the 40GB disk.</p>
<p>By the way, the -n option demands &#8220;no error splitting,&#8221; meaning that a 64K-chunk of data with any read problems is marked as wholly erroneous (or &#8220;/&#8221; in the log file&#8230; good data is &#8220;+&#8221; in the log file).  After the runs with -n, I did a run with error splitting on a small portion of the disk, forcing ddrescue to analyze only the 64K-chunks with errors from the original run.  With this method, I recovered yet more data (&#8220;/&#8221; became mostly &#8220;+&#8221; and some &#8220;-&#8221;).</p>
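<p>For reference, here is a small sketch (not GNU ddrescue itself) that tallies a log of the kind described: each data line is taken to give a start position, a size, and one of the status characters mentioned above (&#8220;+&#8221; recovered, &#8220;/&#8221; unsplit error chunk, &#8220;-&#8221; confirmed bad). Any format details beyond what this post mentions are assumptions.</p>

```python
# Tally recovered vs. erroneous bytes from a ddrescue-style log. The sample
# LOG text and the exact line layout are made up for illustration; only the
# "+", "/", "-" status meanings come from the post above.
from collections import defaultdict

LOG = """\
0x00000000  0x00001400  +
0x00001400  0x00010000  /
0x00011400  0x00000200  -
0x00011600  0x09500000  +
"""

def tally(log_text):
    totals = defaultdict(int)
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[2] in ("+", "/", "-"):
            pos, size, status = parts
            totals[status] += int(size, 0)   # int(..., 0) accepts 0x-prefixed sizes
    return dict(totals)

t = tally(LOG)
for status in ("+", "/", "-"):
    print(f"{status}: {t.get(status, 0):>12,d} bytes")
```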
<p>TRYING THE RAW IMAGE UNDER WINDOWS</p>
<p>Before attacking more chunks with error-splitting, I decided to examine the current state of the disk image.  There would be no point in trying very hard on the bad parts if the useful files were already contained in the &#8220;+&#8221; regions.  Just from the log file, I was pleased to find the first 5K intact (this includes the first sector holding the MBR and partition table), then many errors in the front of the drive, which were in the Dell Diagnostic partition, followed by several hundred MB intact, which would hopefully include the boot sector of the NTFS partition and the front of its master file table.  A chunk of the end of the drive was also intact, which should contain the backup NTFS boot sector.  Things were looking much better.</p>
<p>So I next planned to mount the image under Windows, since many NTFS analysis tools run only on Windows.  To do this, I needed a filesystem driver to read ext2 under Windows.  There are a few, but the <a href="http://www.fs-driver.org/">&#8220;ext2 Installable File System for Windows&#8221;</a> or ext2ifs boasts a kernel-mode extension of the Windows file system that &#8220;<i>is indeed comparable to Windows NT&#8217;s native file system drivers</i>&#8221;.  Not really, as the implementation is missing some behavior one would expect from Windows, but it does behave in such a way that most of the usual higher-level filesystem manipulations can be done directly on the ext2 volume, including getting real drive letters and the ability to perform file manipulation directly (without requiring a specialized file manager).</p>
<p>First I wanted to make a copy of the image file, for two reasons: <u>to have a copy to work with without endangering the recovered data</u>, and to put the original back into ddrescue to churn away at erroneous chunks according to which chunks turn out to be important.  In retrospect I could have made this copy under Linux, but since I had ext2 already mounted in Windows, I just made the copy under Windows.  While it was doing this copying, guess what, the source drive (the external USB drive) became unreadable.  Uh&#8230;&#8230;</p>
<p>I took the drive with the 38GB of good data back into Knoppix Linux, and Linux could not mount it either.  Uh&#8230;&#8230;</p>
<p>  # dmesg | tail</p>
<p>&#8220;Corrupt group descriptor: bad block for inode bitmap&#8221; etc. etc.  Uh&#8230;&#8230;</p>
<p>What did the ext2ifs driver do? There is not a single reason it should have touched the group descriptor table.</p>
<p>  # e2fsck /dev/sda2</p>
<p>&#8220;e2fsck: Bad magic number in super-block while trying to open /dev/sda2<br />
The superblock could not be read or does not describe a correct ext2 filesystem.&#8221; etc. etc.  Uh&#8230;&#8230;</p>
<p>What did mkfs.ext2 do? There are backups of these file system parameters, why are they all bad now?</p>
<p>What did the USB enclosure do? It didn&#8217;t barf all over the disk, did it?</p>
<p>This completely goes against the original intention of having a <b>good</b> copy of the data! And at this point I need to fork a sub-project to work on recovering the ext2 volume, because I don&#8217;t think I can read all 38GB of the already-recovered data back out of the dead Seagate again.  Argh!!  I called it a night.</font></p>
<p>Lessons today:</p>
<ul>
<li>I shouldn&#8217;t fork the project, I should just bork it altogether. The gods clearly aren&#8217;t working with me.</li>
<li>Of course I should keep working at this. I just need to use proven technology when making a crucial file copy.</li>
<li>ddrescue is good about skipping over good areas of the disk already scanned, but isn&#8217;t intelligent about scanning the bad areas of the disk.</li>
<li>Dell support is truly useless.</li>
</ul>
<p>On to <a href="http://scripts.mit.edu/~zong/wpress/?p=44">Part 4</a>.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=38</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
