Is this true?

So this thing on Wikipedia

http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem

could have left it at the classical statement of the theorem in bullet #1, but then it goes on to say:

2. If a probability of bit error \(p_b\) is acceptable, rates up to \(R(p_b)\) are achievable, where

\(R(p_b) = \frac{C}{1-H_2(p_b)}\).

3. For any \(p_b\), rates greater than \(R(p_b)\) are not achievable.

I have never seen this before. At first glance, this seems questionable, as Fano’s converse gives \(P_e^{(n)} \ge 1 - \frac{1}{nR} - \frac{C}{R}\), which seems to converge to \(H_b(p_e) \ge p_e\) for \(p_e \in [0,0.5]\). So whatever scheme is used to code this must not be a long block code.
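To make the tension concrete, here is a quick numeric look at that bound (a sketch of mine; the capacity and rate values are made up purely for illustration):

```python
# Evaluate the Fano-style converse quoted above,
#   P_e^(n) >= 1 - 1/(nR) - C/R,   for a rate R above capacity C.

def fano_lower_bound(n: int, R: float, C: float) -> float:
    """Lower bound on the *block* error probability at blocklength n."""
    return max(0.0, 1.0 - 1.0 / (n * R) - C / R)

C = 0.5   # assumed channel capacity (illustrative)
R = 0.75  # a rate 50% above capacity (illustrative)
for n in (10, 100, 1000, 10000):
    print(f"n={n:>5}: P_e >= {fano_lower_bound(n, R, C):.4f}")
# The bound climbs toward 1 - C/R = 1/3: for any long block code at R > C,
# the block error probability is pinned away from zero.
```

Note this constrains the block error, while the Wikipedia statement is about bit error, which is part of what makes the two hard to compare.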

One example where this is true is the binary symmetric channel with uncoded transmission: the rate is 1, the bit error is the crossover probability \(p\), and since \(C = 1-H_2(p)\), indeed \(R(p) = \frac{1-H_2(p)}{1-H_2(p)} = 1\). But I’m not sure what the achievability scheme is in general, although I have some ideas: it may involve quantizing the excess codewords to the nearest zero-error codewords. As for the converse, I have no idea.
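For what it’s worth, here is a quick numeric check of that BSC example (again a sketch of mine; it just takes the Wikipedia formula at face value):

```python
from math import log2

def H2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def R_pb(C: float, pb: float) -> float:
    """The claimed maximal rate R(p_b) = C / (1 - H2(p_b))."""
    return C / (1.0 - H2(pb))

# BSC with crossover probability p has C = 1 - H2(p). Uncoded transmission
# has rate 1 and per-bit error exactly p, so R(p) should evaluate to 1 if
# the formula is consistent with that example.
for p in (0.01, 0.05, 0.11, 0.2):
    C = 1.0 - H2(p)
    print(f"p={p}: R(p) = {R_pb(C, p):.6f}")  # 1.000000 every time
```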

As for the statement itself, it is really unclear what is meant by “bit error”. In the classical statement, a message from a large alphabet is coded into some \(X^n \in \mathcal{X}^n\), where \(\mathcal{X}\) is the channel input alphabet. After decoding, \(X^n\) is either recovered correctly or it is in error; there is no “bit” anywhere in this. Even if \(X\) is binary, is the bit error the received (uncooked) bit error, or the decoded (cooked) bit error? Why should the decoded bit error matter? Isn’t that a codebook artifact? Or is it the bit error in the original message, if the original message is to be represented by a bit-stream? But that representation is also entirely arbitrary.
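To pin down the candidates, here is a minimal formalization (my notation, nothing official): write the message bit-stream as \(V^k\), the codeword as \(X^n\), the channel output as \(Y^n\), and the decoder’s outputs as \(\hat{X}^n\) and \(\hat{V}^k\). The three readings are then

\(p_b^{\mathrm{raw}} = \frac{1}{n}\sum_{i=1}^{n} \Pr[Y_i \ne X_i]\) (received, uncooked),

\(p_b^{\mathrm{cw}} = \frac{1}{n}\sum_{i=1}^{n} \Pr[\hat{X}_i \ne X_i]\) (decoded, cooked),

\(p_b^{\mathrm{msg}} = \frac{1}{k}\sum_{i=1}^{k} \Pr[\hat{V}_i \ne V_i]\) (original message bits).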

Anyway, I’d like a clarification from someone, or a pointer to a reference.

Comments

  1. me
    March 27th, 2009 | 20:25

Okay, there is actually a reference given to MacKay’s book, page 162, and it does state the theorem in this way. I’ll read it more carefully tonight.
