<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Some stuff &#187; mod</title>
	<atom:link href="http://blog.yhuang.org/?feed=rss2&#038;tag=mod" rel="self" type="application/rss+xml" />
	<link>https://blog.yhuang.org</link>
	<description>here.</description>
	<lastBuildDate>Wed, 27 Aug 2025 08:50:58 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.1.1</generator>
		<item>
		<title>8 perfect shuffles</title>
		<link>https://blog.yhuang.org/?p=84</link>
		<comments>https://blog.yhuang.org/?p=84#comments</comments>
		<pubDate>Sat, 29 Dec 2007 09:50:56 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[52 cards]]></category>
		<category><![CDATA[equiv]]></category>
		<category><![CDATA[half]]></category>
		<category><![CDATA[high school math]]></category>
		<category><![CDATA[index]]></category>
		<category><![CDATA[index positions]]></category>
		<category><![CDATA[math 2]]></category>
		<category><![CDATA[math math]]></category>
		<category><![CDATA[mod]]></category>
		<category><![CDATA[number]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=84</guid>
		<description><![CDATA[It is said that 8 perfect shuffles of a standard deck of 52 cards results in an unshuffled deck. Why is that? Is it true only for 52 cards? What kind of shuffle is implied by this statement? Yesterday, we (high school math buddies) made pretty good sense of it. Let&#8217;s first index positions in [...]]]></description>
			<content:encoded><![CDATA[<p>It is said that 8 perfect shuffles of a standard deck of 52 cards results in an unshuffled deck. Why is that? Is it true only for 52 cards? What kind of shuffle is implied by this statement? Yesterday, we (high school math buddies) made pretty good sense of it.<br />
<span id="more-84"></span><br />
Let&#8217;s first index positions in a standard deck by 0,1,&#8230;,51.</p>
<p>When we say &#8220;perfect shuffle,&#8221; it means to cut the deck in half, then interleave the cards from the two halves. There are actually two choices here, depending on which card we put on top. We can send card 0 to 0, card 26 to 1, card 1 to 2, &#8230;, card 25 to 50, card 51 to 51 (we&#8217;ll call this Shuffle 1), or we can send card 26 to 0, card 0 to 1, card 27 to 2, &#8230;, card 51 to 50, card 25 to 51 (we&#8217;ll call this Shuffle 2). Shuffle 1 is what we deal with, because the statement doesn&#8217;t work for Shuffle 2. But note that Shuffle 2 is just Shuffle 1 followed by exchanges of pairs.</p>
<p>So, the effect of shuffling on the first half of the deck (\(n \in \{ 0, \dots ,25 \}\)) is to send card \(n\) to \(2n\). Notice that a perfect shuffle looks the same if we flip the deck around, so the effect on the second half of the deck (\(n \in \{26, \dots ,51 \}\)) is to send card \(n\) to \(R(2R(n))\), where \(R(n) = 51-n\) is the operation that gives the reversed index of card \(n\).</p>
<p>\(R(2R(n)) = 51 - 2(51-n) = 2n - 51\). Thus each shuffle sends card \(n\) to \(2n\), except that sometimes we subtract 51 from the answer. (That &#8220;sometimes&#8221; is when card \(n\) is in the second half of the deck.) The point is, no matter what, card \(n\) is again sent to something in the range 0,&#8230;,51, so in fact it is sent to \(2n\ (mod\ 51)\) (if we ignore card 51, which is basically 0 and is always sent to itself).</p>
<table border="1" cellpadding="10" style="margin: 2px; border-collapse: collapse; border-style: solid; border-color: #365873;">
<tr>
<td>
If we shuffle \(s\) times, we get that card \(n\) is sent to \(2^s n\ (mod\ 51)\).
</td>
</tr>
</table>
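<p>As a sanity check (this simulation is my own sketch, not part of the original argument), we can implement the perfect shuffle directly and verify both the \(2n\ (mod\ 51)\) position map and the 8-shuffle claim:</p>

```python
# "Shuffle 1" from the post: cut the deck in half, interleave, keep
# the top card on top. (Illustrative sketch, not the post's own code.)

def perfect_shuffle(deck):
    """Interleave the two halves of the deck, top card staying on top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    out = []
    for a, b in zip(top, bottom):
        out += [a, b]
    return out

deck = list(range(52))

# One shuffle sends card n to position 2n (mod 51); card 51 stays put.
once = perfect_shuffle(deck)
assert all(once[(2 * n) % 51] == n for n in range(51))
assert once[51] == 51

# Eight shuffles restore the original order.
d = deck
for _ in range(8):
    d = perfect_shuffle(d)
assert d == deck
```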
<p>Immediately, we see that 8 shuffles works because \(2^8 = 256 = 5 \cdot 51 + 1 \equiv 1\ (mod\ 51)\), so \(2^8 n \equiv n\ (mod\ 51)\), and </p>
<table border="1" cellpadding="10" style="margin: 2px; border-collapse: collapse; border-style: solid; border-color: #365873;">
<tr>
<td>
In general, \(s\) shuffles brings a deck of size \(d\) back to its original order if and only if \((2^s - 1) n \equiv 0\ (mod\ d-1)\) for all \(n\in \{0,\dots,d-1\}\), that is, if and only if \(d-1\) divides \(2^s - 1\).
</td>
</tr>
</table>
<p>The last statement is because if \((2^s -1) n \equiv 0\ (mod\ d-1)\) for \(n=1\), then it must be true for all \(n>1\). (The case \(n=0\) is trivial.)</p>
<p>There are more interesting observations. If \(s\) shuffles brings a deck to its original order, then surely \(ks\) shuffles does that, too. And indeed, if \(d-1\) divides \(2^s - 1\), then it also divides \(2^{ks} - 1\), which factors as \((2^s - 1)(2^{(k-1)s} + 2^{(k-2)s} + \cdots + 2^s + 1)\).</p>
<p>In another observation, we can explicitly write the card position after \(s\) shuffles as</p>
<p>\(2(\dots 2(2n - 51 b_1(n)) - 51 b_2(n) \dots) - 51 b_s(n)\ \ \ \ (*)\)</p>
<p>where \(b_i(n)\) is an indicator bit that is 1 if, entering the \(i\)th shuffle, the card is in the second half of the deck, and 0 otherwise.</p>
<p>Expanding \((*)\), we get \(2^s n - 51 (2^{s-1} b_1(n) + 2^{s-2} b_2(n) + \cdots + 2^1 b_{s-1}(n) + 2^0 b_s(n)) \)</p>
<p>Now, define \(b(n) \equiv 2^{s-1} b_1(n) + 2^{s-2} b_2(n) + \cdots + 2^1 b_{s-1}(n) + 2^0 b_s(n)\). Clearly, \(\overline{b_1(n) b_2(n) \dots b_s(n)}\) is the binary expansion of \(b(n)\).</p>
<p>So \(s\) shuffles brings the deck back to its original order if and only if \(2^s n - 51 b(n) = n\) for all \(n\), i.e. \(b(n) = \frac{2^s - 1}{51}n\). But we know that 51 divides \(2^s - 1\) in our example, so \(\frac{2^s - 1}{51}\) is an integer, and in general</p>
<table border="1" cellpadding="10" style="margin: 2px; border-collapse: collapse; border-style: solid; border-color: #365873;">
<tr>
<td>
If \(s\) shuffles brings a deck of size \(d\) to identity, then the integer \(b(1) = \frac{2^s - 1}{d - 1}\) determines the function \(b(n)\) for all \(n\) by \(b(n) = n\, b(1)\), and the binary expansion of \(b(n)\) records which half card \(n\) is in before each of the \(s\) shuffles.
</td>
</tr>
</table>
<p>In the case of \(s=8\) and \(d=52\), \(b(1) = 5\).</p>
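<p>This is easy to verify numerically (the code below is an illustrative sketch of my own, not from the post): compute \(b(n) = 5n\), read off its 8-bit binary expansion, and compare against which half card \(n\) actually occupies before each shuffle:</p>

```python
# Verifying the boxed statement for s = 8, d = 52. Here b(1) = (2^8 - 1)/51
# = 5, and the 8-bit binary expansion of b(n) = 5n should record which half
# card n occupies before each of the 8 shuffles. (My own check.)

s, d = 8, 52
b1 = (2 ** s - 1) // (d - 1)
assert b1 == 5

for n in range(1, d - 1):              # cards 0 and 51 are fixed points
    bits = format(b1 * n, f"0{s}b")    # b_1(n) b_2(n) ... b_s(n)
    pos = n
    for i in range(s):
        # b_i(n) = 1 iff the card sits in the second half before shuffle i.
        assert bits[i] == ("1" if pos >= d // 2 else "0")
        pos = (2 * pos) % (d - 1)      # effect of one perfect shuffle
```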
<p>Now we can also find the minimum number of shuffles to return various deck sizes to identity, by looking at the factors of \(2^s-1\):</p>
<table>
<tr>
<td>\(s\):</td>
<td>\(d\) for which \(s\) is minimal</td>
</tr>
<tr>
<td>1:</td>
<td>2</td>
</tr>
<tr>
<td>2:</td>
<td>4</td>
</tr>
<tr>
<td>3:</td>
<td>8</td>
</tr>
<tr>
<td>4:</td>
<td>6, 16</td>
</tr>
<tr>
<td>5:</td>
<td>32</td>
</tr>
<tr>
<td>6:</td>
<td>10, 22, 64</td>
</tr>
<tr>
<td>7:</td>
<td>128</td>
</tr>
<tr>
<td>8:</td>
<td>18, 52, 86, 256</td>
</tr>
</table>
<p>etc&#8230;</p>
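<p>The table can be reproduced by computing, for each even deck size \(d\), the smallest \(s\) with \(d-1 \mid 2^s - 1\), i.e. the multiplicative order of 2 mod \(d-1\) (a quick sketch of my own; \(d=2\) trivially takes 1 shuffle and is skipped):</p>

```python
# Reproducing the table: the minimal s for deck size d is the
# multiplicative order of 2 modulo d - 1. (Illustrative sketch.)

def min_shuffles(d):
    """Multiplicative order of 2 modulo d - 1 (d even, d >= 4)."""
    s, r = 1, 2 % (d - 1)
    while r != 1:
        r = (2 * r) % (d - 1)
        s += 1
    return s

table = {}
for d in range(4, 300, 2):             # even deck sizes, so d - 1 is odd
    table.setdefault(min_shuffles(d), []).append(d)

assert table[6] == [10, 22, 64]
assert table[8] == [18, 52, 86, 256]
```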
<p>Of course, every deck of size \(d\) can be returned to its original order in <em>some</em> number of shuffles: at most \(d!\), and in fact the minimal number divides \(d!\), because a perfect shuffle is a permutation of \(d\) elements, and the order of any permutation divides the size \(d!\) of the symmetric group. This knowledge is of no practical utility, since \(d!\) is huge, but it does imply that any odd positive integer \(d-1\) divides \(2^{d!} - 1\). It also highlights the fact that some deck sizes are very special: any \(d\) that is a power of 2 takes only \(\log_2 d\) shuffles, surely; but for 52, such a large deck size that isn&#8217;t a power of 2, the fact that it takes only 8 shuffles is quite amazing.</p>
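<p>The divisibility claim is cheap to spot-check with modular exponentiation, since computing \(2^{d!}\ (mod\ d-1)\) never requires forming \(2^{d!}\) itself (again, a quick sketch of my own):</p>

```python
# Spot-checking the claim: for even d, the odd number d - 1
# divides 2^(d!) - 1. Three-argument pow works modulo d - 1
# throughout, so the astronomically large 2^(d!) is never formed.

from math import factorial

for d in range(4, 40, 2):
    assert pow(2, factorial(d), d - 1) == 1
```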
<p>I do wonder if there are any &#8220;bad&#8221; deck sizes \(d\) that take the maximum possible \(d-2\) shuffles (attained when \(d-1\) is prime and 2 is a primitive root mod \(d-1\)) to return to the original order. A search of the <a href="http://www.garlic.com/~wedgingt/mersenne.html">factors of Mersenne numbers</a> may be in order.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=84</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>grid cells and 2D position coding</title>
		<link>https://blog.yhuang.org/?p=58</link>
		<comments>https://blog.yhuang.org/?p=58#comments</comments>
		<pubDate>Sun, 18 Feb 2007 09:02:40 +0000</pubDate>
		<dc:creator>admin</dc:creator>
				<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[cortex]]></category>
		<category><![CDATA[encode]]></category>
		<category><![CDATA[entorhinal cortex]]></category>
		<category><![CDATA[grid cells]]></category>
		<category><![CDATA[hexagonal lattice]]></category>
		<category><![CDATA[lattice]]></category>
		<category><![CDATA[mod]]></category>
		<category><![CDATA[position]]></category>
		<category><![CDATA[residue number system]]></category>
		<category><![CDATA[triangular lattice]]></category>

		<guid isPermaLink="false">http://scripts.mit.edu/~zong/wpress/?p=58</guid>
		<description><![CDATA[This is fascinating. Most things that sound fascinating at first really are pretty boring, but this is a notable exception in quite a few years. First there was a seminar (which I missed) with this kind of description How grid cell neurons encode rat position Recently it was discovered that neurons in an area of [...]]]></description>
			<content:encoded><![CDATA[<p>This is fascinating. Most things that sound fascinating at first turn out to be pretty boring, but this is the most notable exception I&#8217;ve come across in quite a few years.</p>
<p>First there was a seminar (which I missed) with this kind of description</p>
<blockquote><p>
How grid cell neurons encode rat position</p>
<p>Recently it was discovered that neurons in an area of the cortex (called dMEC) of rats, fire on every vertex of a regular triangular lattice that tiles 2-d space. These so-called &#8220;grid cells&#8221; efficiently encode rat position in a system that can be shown is analogous to a residue number system (RNS). By interpreting measured dMEC properties within an RNS framework, we can estimate the amount of position information stored within dMEC, and show how an RNS-like scheme is particularly well-suited to store and update positional information. </p></blockquote>
<p>That tipped me off to this really amazing paper:<br />
<a href="http://www.nature.com/nature/journal/v436/n7052/abs/nature03721.html">Microstructure of a spatial map in the entorhinal cortex</a> by Hafting, Fyhn, Molden, Moser, and Moser.<br />
<span id="more-58"></span><br />
It turns out, through experiments on rats, that in the &#8220;dorsocaudal medial entorhinal cortex&#8221; are a collection of neurons called &#8220;grid cells&#8221; that together ingeniously encode a coordinate chart for the rat to do internal path integration. The neurons respond when the rat is in certain positions in a standard environment &#8212; so what, you say. But no, if you plot the rat positions in the environment at which one particular neuron responds, the positions form a hexagonal lattice!</p>
<p><img src="wp-content/uploads/images/Cropped_nature03721-f3.2.jpg" alt="http://www.physics.ucsb.edu/~complex/ted/web/Cropped_nature03721-f3.2.jpg" align="right"/>What?</p>
<p>Neighboring cells have slightly shifted lattices for their positions of response, and there are other cells whose associated lattices are at a different angles or scales.</p>
<p>What good is this? Well, it implies a structurally coded representation of position. Suppose you had to encode a discretized 1D position, so a number between 1 and something, let&#8217;s say 15. You could use a sparse representation, which is to say, one neuron dedicated to respond to each of the 15 positions. You could use a binary code, which would take 4 bits; the way to implement it might be a neuron that responds to positions 1-7, another to positions 8-15, yet another to positions 1-3 and 8-11, another to positions 4-7 and 12-15, etc. That would take a total of 8 neurons, which is pretty much an efficient representation. The way it works in the dMEC is more like one neuron encoding all positions that are 1 (mod 3), another for 2 (mod 3), another for 0 (mod 3), another for 1 (mod 5), one for 2 (mod 5), and so on, for a total of 8 neurons again. These eight neurons respond to points on a lattice of spacing 3 or a lattice of spacing 5, and the response is narrowly confined. This is what the original seminar meant by residue number system encoding. The same extends to 2D, with the consequence that only on the order of log-of-the-number-of-positions neurons, instead of on the order of the-number-of-positions neurons, are needed to code a position. The advantage of a residue code over a binary (or n-ary) code is the possibility of using neural devices that have fairly similar properties (moduli 3, 5, etc.), instead of ones whose moduli must bridge exponentially larger gaps (2, 4, 8, etc.).</p>
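<p>The mod-3/mod-5 scheme can be made concrete in a few lines (an illustrative sketch of my own, indexing positions from 0 for convenience, with exhaustive matching standing in for a proper Chinese Remainder Theorem decoder):</p>

```python
# The 15-position example: residues mod 3 and mod 5 ("8 neurons")
# uniquely code positions 0..14, because 3 and 5 are coprime.

MODULI = (3, 5)

def encode(pos):
    """Activity pattern: one active neuron per modulus."""
    return tuple(pos % m for m in MODULI)

def decode(residues):
    """Find the unique position in 0..14 with the given residues."""
    for pos in range(MODULI[0] * MODULI[1]):
        if encode(pos) == tuple(residues):
            return pos

# Every position round-trips through its residue representation.
for pos in range(15):
    assert decode(encode(pos)) == pos
```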
<p>This is about the most elegant representation of a coordinate chart that one can think of. Even decoding seems easy: just sum all the responses onto an unfolded chart &#8212; the hippocampus, if I understand correctly &#8212; and choose the tallest peak.</p>
<p>There is a bit more complexity than described above, since the responses are not discretized in space, but are functions over a continuous space, and the residue arithmetic is not over a discrete set, either. I can&#8217;t find any experimental data on how responses of different dMEC neurons actually correlate, so how the coding works exactly is still speculation. But I can imagine this being much like random binning, and nested lattice Wyner-Ziv distributed rate-distortion coding. In fact, those are the first things that came to mind. With even a little bit of angular variance or half-phase lattice dithering, a small number of lattices would already be able to code for position with very low distortion by emulating random binning. I don&#8217;t think multiple rotated lattices of different scales (but non-nested) have been considered as a model for distributed rate-distortion coding. If properly chosen, the decoding complexity ought not to be that high. There is something to learn from the brain.</p>
]]></content:encoded>
			<wfw:commentRss>https://blog.yhuang.org/?feed=rss2&#038;p=58</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
