moon halo

http://apod.nasa.gov/apod/image/0812/moonhalo_casado_big.jpg Today I looked up at the night sky and there was this wonderfully full moon, but it was sitting in the middle of a huge, perfectly round disk opening into the heavens amidst the clouds. I wondered what the heck it was, thinking it might be the result of Earth's shadow.

It turns out this was a moon halo. The page says that the phenomenon is “familiar,” but I’ve never seen it in my life, and had I not looked up for no reason, I would have missed this one, too! By my hand measurement, it spanned 45° in diameter, which is a pretty big portion of the sky. Jupiter was also visible within the ring of the halo. Quite amazing.

on deepness

Cleverbot is a corpus-based chatbot capable of producing some fairly natural conversations by reusing responses from humans.

As you can see, it carries on just fine and can fool a casual observer. But the longer you carry on a conversation with it, the more apparent it becomes that Cleverbot is frustrating to talk to, and not so much because it isn’t human: after all, all of its responses are taken from human sources. If it weren’t so good at emulating a human, from whom you would expect more, you wouldn’t be frustrated.
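As a rough illustration of what “corpus-based” means here, consider a toy retrieval bot that answers each turn by finding the logged human prompt most similar to your input and parroting back the response that followed it. The matching rule and the tiny corpus below are made up for the sketch, not Cleverbot’s actual machinery, but they show the key property: every reply is picked independently, with no memory of your conversation.

```python
# Toy retrieval chatbot: each reply is the human response whose logged
# prompt best overlaps (word-wise) with the current input.
corpus = [
    ("hello", "Hi there! How are you?"),
    ("how are you", "Pretty good. What have you been up to lately?"),
    ("what is your favorite movie", "I watched Blade Runner again last week."),
    ("do you like music", "Yes, mostly jazz these days."),
]

def reply(user_input: str) -> str:
    words = set(user_input.lower().split())
    # Pick the corpus entry whose prompt shares the most words with the input.
    _, response = max(corpus, key=lambda pair: len(words & set(pair[0].split())))
    return response

# Each call is independent: the bot cannot refer back to anything you said
# earlier, which is one reason long exchanges stop making sense.
print(reply("hello there"))            # -> "Hi there! How are you?"
print(reply("so, do you like music"))  # -> "Yes, mostly jazz these days."
```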

Cleverbot is frustrating in two ways:

  • it isn’t interesting
  • it doesn’t make sense

In other words, it lacks deepness, like a shallow human. Why?
(Read the article)

On Penmanship in Chinese

I suppose good penmanship is the basis of good calligraphy, since calligraphy is mainly the addition of (variable) brush width to the structure of the characters. This bulk structure is really the key, and it is particularly difficult to get right without muscle memory. That’s why they tell you to trace character books over and over.

However, there is a way to work out this matter of structure from first principles (and perhaps arrive at a more distinctive style as a result), albeit with the tradeoff that you cannot be quick; you must be careful.
(Read the article)

resolving the St. Petersburg paradox

The St. Petersburg paradox is based on one of those gambling games where the usual model of using expected gain to decide whether to play gives a counter-intuitive result.

In the simplest of examples, you pay some entry fee to play the game; a counterparty puts $1 in a pot, then a coin is flipped repeatedly, with the counterparty doubling the pot on every flip, until “tail” comes up. You then receive the money in the pot. The expected gain of this game is infinite, regardless of the entry fee. So it would seem that one should always play, whatever amount is demanded as the entry fee. But, as the article points out, “few of us would pay even $25 to enter such a game.”
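To spell out where the infinite expectation comes from, read the setup above as: the game ends on flip k (the first “tail”) with probability 1/2^k, at which point the $1 pot has been doubled k times, so it pays $2^k. Then

$$
E \;=\; \sum_{k=1}^{\infty} \frac{1}{2^k}\cdot 2^k \;=\; \sum_{k=1}^{\infty} 1 \;=\; \infty .
$$

Under the other common convention, where only “heads” double the pot, the payoff after k flips is $2^{k-1} and each term is 1/2 instead; either way the sum diverges, so subtracting any finite entry fee still leaves an infinite expected gain.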
(Read the article)