### coding block length in action

This article talks about the switch to 4096-byte sectors from the current 512-byte sectors for PC hard disks. This section is interesting:

> 4096-byte sectors don’t solve the analog problem—signals are getting weaker, and noise is getting stronger, and only reduced densities or some breakthrough in recording technology are going to change that—but it helps substantially with the error-correcting problem. Due to the way error-correcting codes work, larger sectors require relatively less error-correcting data to protect against the same size errors. A 4096-byte sector is equivalent to eight 512-byte sectors. With 40 bytes per sector for finding sector starts and 40 bytes for error correcting, protecting against 50 error bits, 4096 bytes requires (8 x 512 + 8 x 40 + 8 x 40) = 4736 bytes; 4096 of data, 640 of overhead. The total protection is against 400 error bits (50 bits per sector, eight sectors), though they have to be spread evenly among all the sectors.
>
> With 4096-byte sectors, only one sector start is needed, and to achieve a good level of protection, only 100 bytes of error-checking data are required, for a total of (1 x 4096 + 1 x 40 + 1 x 100) = 4236 bytes; 4096 of data, 140 of overhead. 100 bytes per sector can correct up to 1000 consecutive error bits; for the foreseeable future, this should be “good enough” to achieve the specified error rates. With an overhead of just 140 bytes per sector, about 96% of the disk’s capacity can be used.
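The arithmetic is easy to check (a quick sketch; the 40-byte sector-start and 40/100-byte ECC figures are taken straight from the quoted article):

```python
# On-disk bytes and format efficiency for one 4096-byte data block,
# using the per-sector figures from the quoted article.

def layout(data_bytes, sectors, sync_bytes, ecc_bytes):
    """Total on-disk bytes and data efficiency for `sectors` sectors."""
    total = sectors * (data_bytes + sync_bytes + ecc_bytes)
    data = sectors * data_bytes
    return total, data / total

old_total, old_eff = layout(512, 8, 40, 40)    # eight 512-byte sectors
new_total, new_eff = layout(4096, 1, 40, 100)  # one 4096-byte sector

print(old_total, f"{old_eff:.1%}")  # 4736, 86.5%
print(new_total, f"{new_eff:.1%}")  # 4236, 96.7%
```

So the legacy format actually wastes about 13.5% of the surface, and the article’s “about 96%” figure for the new format rounds down from 96.7%.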

With longer block lengths, the error-correction capability generally goes up for the same coding overhead; however, it seems rather more complicated than this. First of all, I don’t think every manufacturer uses the same code or coding structure. (They used to use just Reed–Solomon codes, though later some tried concatenating them with LDPC codes, and now I hear some are switching to pure LDPC with iterative decoding.) But even if we assume they use some non-exotic block code, with interleaving for bursts, the math still seems very strange: 40 error-correction bytes can only correct 50 consecutive bits currently? I think not.
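For what it’s worth, the skepticism checks out under the simplest assumption I can make: a plain Reed–Solomon code over GF(256), with no interleaving. There, 2t parity bytes correct any t byte (symbol) errors, and a burst of b consecutive bits touches at most 1 + ⌈(b − 1)/8⌉ bytes in the worst alignment, so the longest burst guaranteed correctable is 8t − 7 bits:

```python
# Guaranteed single-burst correction for a plain Reed-Solomon code over
# GF(256) -- an assumption; real drives use proprietary variants.
# 2t parity bytes correct any t symbol (byte) errors; a burst of b bits
# spans at most 1 + ceil((b - 1) / 8) bytes, so b <= 8t - 7 is safe.

def rs_burst_bits(parity_bytes):
    t = parity_bytes // 2   # correctable symbol errors
    return 8 * t - 7        # longest guaranteed-correctable bit burst

print(rs_burst_bits(40))    # 153 -- much more than the article's 50 bits
print(rs_burst_bits(100))   # 393 -- short of the article's 1000 bits
```

Even this naive calculation gives 153 bits for 40 parity bytes, so the article’s “50 bits” presumably reflects something other than raw burst capability—margin for multiple error events, perhaps, or a different code entirely.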

### Audio through the PC Speaker

At work, sound comes from inside the computer case and the quality is terrible, so I wondered if it was using the PC speaker. Years ago, when I didn’t have a sound card in my 486, I used a PC speaker driver to play sound.

To my surprise, this driver is still available.

However, I didn’t realize at the time that playing sound through the PC speaker required an interesting hack. You see, the PC speaker could only be put into two states, on and off, so it was easy to make single pitches (which we all did, didn’t we? Beep beep.) Technically, then, the PC speaker could only generate square waves of various duty cycles. Here’s how the PC speaker is controlled.

Then how was it able to play arbitrary wave files? Well, it turns out 8-bit sound is supposedly generated by pulse-width modulation, like a light dimmer: to get “half on,” you turn it on half the time. But it really shouldn’t work that way if you think about it, at least not so trivially, since what is being set is the amplitude, not the average power.
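A minimal sketch of the trick (my own illustration, not the driver’s actual code): run the 1-bit output much faster than the sample rate, and hold the speaker “on” for a fraction of each short period proportional to the 8-bit sample value. The speaker cone and your ear average the pulses out:

```python
# PWM sketch: approximate an 8-bit sample with a 1-bit output by
# holding the output high for (sample / 256) of each 256-tick period.

def pwm_period(sample):
    """One PWM period (256 ticks) for an 8-bit sample value (0-255)."""
    return [1 if tick < sample else 0 for tick in range(256)]

def average_level(sample):
    """Time-averaged output level over one period."""
    period = pwm_period(sample)
    return sum(period) / len(period)

print(average_level(128))  # 0.5 -- "half on" is on half the time
print(average_level(64))   # 0.25
```

Whether the averaged quantity is really the one your ear perceives as amplitude is exactly the question raised above; the sketch only shows that the duty cycle tracks the sample value.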

### Is this true?

So this thing on Wikipedia

http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem

could have left it at the classical statement of the theorem with bullet #1. Then it goes on to say:

2. If a probability of bit error $$p_b$$ is acceptable, rates up to $$R(p_b)$$ are achievable, where

$$R(p_b) = \frac{C}{1-H_2(p_b)}$$.

3. For any $$p_b$$, rates greater than $$R(p_b)$$ are not achievable.
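Plugging in numbers makes the claim concrete (a quick sketch; $$H_2$$ is the binary entropy function from the quoted statement):

```python
import math

def h2(p):
    """Binary entropy of p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def achievable_rate(capacity, p_b):
    """R(p_b) = C / (1 - H2(p_b)), per the quoted statement."""
    return capacity / (1 - h2(p_b))

# Tolerating a 10% bit-error rate would buy roughly 1.88x the capacity:
print(achievable_rate(1.0, 0.1))
```

Note that as $$p_b \to 1/2$$, $$H_2(p_b) \to 1$$ and the claimed achievable rate diverges—which is part of what makes the statement worth questioning.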

### IT security policy “research”

“Researchers find way to steal encrypted data,” screams this article in the New York Times.

Oh, do they? But come… on…, what is this ridiculous demonstration? Okay, okay, it’s the IT Policy School over there; let’s cut them some slack. What they’ve come up with is a way to read the contents of seated DRAM while the OS is locked, without specialized hardware, and if they had said that, it would be fine.

### the sixteenth amendment not properly ratified?

A lot of people seem to write about that, claiming procedural error. That’s not what I care about, actually.

I am interested to know why people in the late 19th century clamored for an income tax. It seems strange. It looks like the farm lobby in the West at that time wanted a graduated tax to redistribute income, so I can understand some states being for an income tax, but three-quarters of the states? It seems difficult even to raise tax rates today, so where were the “tax protestors” back then?

### Today I became suspicious of everything (part 3)

This is part of the hard disk recovery documentation.

Part 3.

Today I became suspicious of (the ext2ifs driver, the mkfs command, the USB enclosure, and basically) everything

On Christmas morning, Santa Claus had not granted my wish: ddrescue was still running, but the image file’s timestamp was no newer than when I had left it, and the damaged drive had spun down by itself. dmesg revealed a syslog message along the lines of “too many IO errors,” which had caused Linux to give up on reading from the damaged drive. I was very frustrated because, well, let’s see: I had expected the disk imaging to make good progress, but instead I must suffer a reboot and the indefinite re-churning of the drive it would induce, with even more data loss. What.