die throwing problem

Here’s a link to a subtle probability problem.

You throw a fair die repeatedly until you get a 6. What is the expected number of throws (including the throw giving the 6), conditioned on the event that all throws gave even numbers?

The “obvious” answer is incorrect.
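
The claim is easy to check numerically. Here is a minimal Monte Carlo sketch (my code, assuming a fair six-sided die): it estimates the conditional expectation by rejection sampling. The “obvious” answer would be 3, reasoning as if the die were simply restricted to \(\{2,4,6\}\).

    import random

    # Rejection sampling: simulate throws until a 6, then keep only the
    # runs in which every throw was even (2, 4, or 6).
    def trial():
        throws = 0
        while True:
            throws += 1
            face = random.randint(1, 6)
            if face == 6:
                return throws, True
            if face % 2 == 1:        # an odd face disqualifies this run
                return throws, False

    kept = []
    while len(kept) < 100_000:
        n, all_even = trial()
        if all_even:
            kept.append(n)

    print(sum(kept) / len(kept))     # compare against the "obvious" guess of 3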
(Read the article)

bounding overlaps

A Venn diagram gives a schematic view of joint counts on a set of \(n\) categories, e.g. \(c(S_1^n=s_1^n)\), where \(S_1^n\) denotes \((S_1,\dots,S_n)\) and each \(s_i\in\{0,1\}\). Each “patch” of the diagram corresponds to one of the \(2^n\) possible values of \(s_1^n\).

If we have the total count \(C\triangleq \sum_{s_1^n\in\{0,1\}^n} c(S_1^n=s_1^n)\), then we can turn the counts into probabilities by normalizing: \(p(S_1^n=s_1^n)=c(S_1^n=s_1^n)/C\).

Now suppose we are given only the singleton marginals \(p(S_i=1)\triangleq \sum_{s_1^n:\,s_i=1} p(S_1^n=s_1^n)\). Then we can bound the other probabilities by imposing the universal constraint that probabilities lie between 0 and 1.
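
In the simplest case, these universal constraints already give the classical Fréchet bounds on a joint “patch.” A minimal sketch of the idea (my illustration, not necessarily the article’s derivation):

    # Given only singleton marginals p_i = p(S_i = 1), the probability that
    # all of a subset of k categories co-occur is only pinned down to the
    # Frechet bounds: max(0, sum p_i - (k - 1)) <= p <= min p_i.
    def intersection_bounds(marginals):
        # (illustrative helper, not from the article)
        lower = max(0.0, sum(marginals) - (len(marginals) - 1))
        upper = min(marginals)
        return lower, upper

    print(intersection_bounds([0.9, 0.8]))   # (0.7, 0.8)
    print(intersection_bounds([0.5, 0.4]))   # (0.0, 0.4)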
(Read the article)

extrinsic bias in the prediction market

People have proposed using price signals from prediction markets to estimate the odds of certain events. On Intrade right now, you can buy contracts for the two outcomes of the 2012 US Presidential Election. Each contract expires at $10 if the event occurs or $0 if it doesn’t. For example, “Barack Obama wins” contracts are $6.33 a pop right now, while “Mitt Romney wins” contracts go for $3.65. On the page, these are taken directly as probabilities, because it is assumed that the gamble is zero-sum.

Specifically, if \(p\) and \(\bar{p}=1-p\) are respectively the probabilities of two complementary events, and \(a\) and \(b\) are respectively the prices of contracts on them, which can be bought and sold freely, then no-arbitrage imposes that \(-a-b+10 = 0\) and statistical no-arbitrage imposes \(-\bar{p}a +p(10-a) = 0\) and \(-pb +\bar{p}(10-b) = 0\). Solving indeed gives the prices \(a=10p\) and \(b=10\bar{p}\).
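
As a quick numeric check using the prices quoted above:

    # If contracts settle at $10, no-arbitrage demands a + b = 10, and
    # a/10, b/10 are then the implied probabilities.
    settle = 10.0
    a, b = 6.33, 3.65                # quoted Intrade prices from the post

    print(a + b)                     # 9.98, slightly under 10
    print(a / settle, b / settle)    # implied probabilities 0.633 and 0.365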

However, this isn’t the end of the story.
(Read the article)

compactness

I swear the concept of compactness was invented to remedy the shortcomings of closedness. Compact sets are closed (in Hausdorff spaces and therefore metric spaces), so compactness is stricter than closedness. It evidently patches some feebleness in the definition of closedness to make it more useful.

Closedness of a set in a metric space (“includes all limit points”), by the sound of it, really wants to be something akin to “has solid boundaries.” But it isn’t. The problem is that the existence of limit points depends on the embedding space. If the embedding space lacks those limit points, then a set in it can be technically closed even though it isn’t really “like” other closed sets. For example, the set \(\mathbb R\) in the space \((\mathbb R, d_{\text{Eucl.}})\) is closed, because the space has no point called \(\infty\).
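
To make the dependence on the embedding space concrete (my example, not the article’s): the interval \((0,1)\) is closed in the space \(((0,1), d_{\text{Eucl.}})\), vacuously so, because the would-be limit points 0 and 1 simply do not exist in that space. Yet it is not closed in \((\mathbb R, d_{\text{Eucl.}})\). Compactness, by contrast, is intrinsic: \((0,1)\) fails to be compact in either setting, since the open cover \(\{(1/n,1) : n\ge 2\}\) has no finite subcover.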
(Read the article)

tensors

This has been a confusing topic, with half a dozen Wikipedia pages on the subject, so I took some notes here.

Tensors are sums of “products” of vectors. There are different kinds of vector products. The one used to build tensors is, naturally, the tensor product. In the Cartesian product of vector spaces \(V\times W\), the set elements are tuples like \((v,w)\) where \(v\in V, w\in W\). A tensor product \(v\otimes w\) is obtained by tupling the component bases rather than the component elements. If \(V\) has basis \(\{e_i\}_{i\in\{1,…,M\}}\) and \(W\) has basis \(\{f_j\}_{j\in\{1,…,N\}}\), then take \(\{(e_i,f_j)\}_{i\in\{1,…,M\},j\in\{1,…,N\}}\) as the basis of the tensor product space \(V\otimes W\). Then define the tensor product \(v\otimes w\) as

(1) \(\sum_{i,j} v_i w_j (e_i,f_j) \in V\otimes W\),

if \(v=\sum_i v_i e_i\) and \(w=\sum_j w_j f_j\). The entire tensor product space \(V\otimes W\) is then the set of all finite sums of such tensor products:

(2) \(\{\sum_k v_k\otimes w_k | v_k\in V, w_k\in W\}\).

So tensors in a given basis can be represented as multidimensional arrays.
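
Concretely, here is a minimal numeric sketch of (1) (my illustration, using NumPy and real coordinates): in the basis \(\{(e_i,f_j)\}\), the coordinate array of \(v\otimes w\) is exactly the outer product of the coordinate vectors.

    import numpy as np

    # Coordinates of v in basis {e_i} (M = 3) and of w in basis {f_j} (N = 2).
    v = np.array([1.0, 2.0, 3.0])
    w = np.array([4.0, 5.0])

    # Per (1), the (i, j) coordinate of the tensor product is v_i * w_j.
    T = np.outer(v, w)    # an M x N array, i.e. shape (3, 2)
    print(T)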

\(V\otimes W\) is also a vector space, of dimension \(MN\) (cf. \(V\times W\), of dimension \(M+N\)). But additionally, it has internal multilinear structure because it is built from component vector spaces, namely:

\((v_1+v_2)\otimes w = v_1\otimes w + v_2\otimes w\)
\(v\otimes (w_1+w_2) = v\otimes w_1 + v\otimes w_2\)
\(\alpha (v\otimes w) = (\alpha v)\otimes w = v\otimes (\alpha w)\)
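
These identities are easy to verify numerically in the same coordinate (outer-product) representation, again a sketch under the same assumptions:

    import numpy as np

    rng = np.random.default_rng(0)
    v1, v2 = rng.normal(size=3), rng.normal(size=3)
    w = rng.normal(size=2)
    alpha = 2.5

    # (v1 + v2) (x) w == v1 (x) w + v2 (x) w
    assert np.allclose(np.outer(v1 + v2, w), np.outer(v1, w) + np.outer(v2, w))
    # alpha (v (x) w) == (alpha v) (x) w == v (x) (alpha w)
    assert np.allclose(alpha * np.outer(v1, w), np.outer(alpha * v1, w))
    assert np.allclose(alpha * np.outer(v1, w), np.outer(v1, alpha * w))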
(Read the article)

a card problem

Here is a problem quoted from fakalin. A full deck has 52 cards. Suppose 10 of them are face up and 42 are face down. You are in a dark room holding the deck, so you cannot tell which cards are face up. How do you rearrange the deck into two subdecks so that they have the same number of cards facing up?
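
One can verify the standard solution (split off any 10 cards and flip them all) with a quick simulation; the counting argument is in the comments:

    import random

    # If the 10-card handful contains k face-up cards, the other 42 cards
    # contain 10 - k face-up cards, and flipping the handful leaves it with
    # exactly 10 - k face-up cards as well.
    for _ in range(10_000):
        deck = [True] * 10 + [False] * 42    # True = face up
        random.shuffle(deck)
        handful, rest = deck[:10], deck[10:]
        flipped = [not card for card in handful]
        assert sum(flipped) == sum(rest)
    print("the two subdecks always match in face-up count")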
(Read the article)

road path problem

Suppose there is a straight road, infinitely long in both directions, located 1 unit away from your starting point in an unknown direction. Find the most efficient path guaranteed to reach the road, and the worst-case total length of this path.

The trivial but suboptimal way is to go 1 unit in some direction, then trace the circumference of a unit-radius circle around the starting point. The road will surely be found this way, but the path length is \(1+2\pi\), which can be improved upon.
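
For scale, here is the trivial bound next to the classically known worst-case optimum for this problem (quoting only the value; the strategy behind it is what the article develops):

    import math

    trivial = 1 + 2 * math.pi                     # walk 1 unit, trace the circle
    optimum = math.sqrt(3) + 7 * math.pi / 6 + 1  # known worst-case optimum

    print(f"trivial: {trivial:.4f}")   # ~7.2832
    print(f"optimum: {optimum:.4f}")   # ~6.3972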
(Read the article)

Windows 7 Math Input Panel!

[Screenshot: the Math Input Panel, from http://blogs.msdn.com/blogfiles/e7/WindowsLiveWriter/InkInputandTablet_E2A5/clip_image014_thumb.jpg]

Somehow the ability to turn handwritten math into MathML escaped my attention as a Windows 7 feature when I tried the first beta. Finally! I’ve been waiting for this since forever… I wonder what took so long.

Next up are music notation and graphics in general*. The ultimate goal of a handwriting recognizer is of course similar to that of OCR: to turn one rendering of a piece of art (for lack of a better word) into another, text included. Specifically, it should rectify all the rendering into a “typeset” form. It should intelligently recognize a host of objects with its own Visio-like templates: if I draw a resistor, it should pull out a nice schematic rendering of a resistor; if I draw a rectangle and select “rectify,” it should produce a rectangle with straight edges.
(Read the article)

An improvised dialogue on Wolfram|Alpha

[18:01] fakalin: hey
[18:01] fakalin: did you try wolfram alpha
[18:02] me: waht
[18:02] fakalin: what
[18:02] fakalin: jeez you’re out of touch
[18:02] fakalin: http://www.wolframalpha.com/
(Read the article)

Is this true?

So this thing on Wikipedia

http://en.wikipedia.org/wiki/Noisy-channel_coding_theorem

could have left it at the classical statement of the theorem with bullet #1. Then it goes on to say:

2. If a probability of bit error \(p_b\) is acceptable, rates up to \(R(p_b)\) are achievable, where

\(R(p_b) = \frac{C}{1-H_2(p_b)}\).

3. For any \(p_b\), rates greater than \(R(p_b)\) are not achievable.
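
Here \(H_2(\cdot)\) is the binary entropy function. A quick numeric sketch of what the formula in bullet #2 gives, with an illustrative capacity of \(C=1\) bit per channel use:

    import math

    def H2(p):
        # binary entropy in bits
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    C = 1.0   # illustrative capacity, bits per channel use
    for p_b in (0.01, 0.05, 0.11):
        print(p_b, C / (1 - H2(p_b)))   # the achievable rate grows with p_b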
(Read the article)
