plenoptic cameras and the meaning of photography

Raytrix introduced the R11 Lightfield Camera not too long ago. It is still low-res and expensive, but improved versions of these should eventually catch on — they make too much sense not to.

The idea of plenoptic cameras has been thrown around for quite a while. Instead of a conventional camera with a single lens focusing a single object plane onto the in-camera image plane (i.e. the sensor), a plenoptic camera attempts to capture enough additional information to reconstruct “all possible” images obtainable from the light entering the same aperture. The most talked-about application is subsequent refocusing; if it were just this, then multi-capture with a mechanical focus sweep on a conventional camera would suffice. Another is stereograms, but again, two spaced shots would suffice for that. A plenoptic camera does more in one shot and makes these merely special post-processing cases.

The simplest conception of a plenoptic camera is essentially an array of mini-cameras (a microlens plus a micropixel array for each logical pixel) that separately captures light from all directions at each point in the image. In between conventional cameras and plenoptic cameras are perhaps smarter, sparser, non-regular arrays like coded-aperture systems, which hark back to the old radio-astronomy literature. These have good signal-processing properties from a deconvolution perspective, but a full-array plenoptic camera like the R11 seems fully general, and with some future industrial scaling, the saved expense of a compromise may be inconsequential.
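To see why refocusing becomes a mere post-processing special case: once the full set of directional sub-images is captured, moving the virtual focal plane amounts to shifting each sub-aperture view in proportion to its offset from the aperture center and averaging. Here is a minimal numpy sketch of that shift-and-add idea; the `refocus` name and the `alpha` focal parameterization follow the common light-field literature and are not any Raytrix API.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add synthetic refocusing.

    lightfield: array of shape (U, V, H, W), one sub-aperture image
    per (u, v) direction sample. alpha sets the virtual focal plane
    (alpha = 1 reproduces the original focus).
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    shift = 1.0 - 1.0 / alpha
    for u in range(U):
        for v in range(V):
            # shift each directional view in proportion to its offset
            # from the aperture center, then average them all
            du = round((u - U // 2) * shift)
            dv = round((v - V // 2) * shift)
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 1` the shifts vanish and the result is just the average of all the sub-aperture views, i.e. the image at the original focus; varying `alpha` sweeps the synthetic focal plane without taking another exposure.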

Fine, so a plenoptic camera may make clever use of its given aperture size, but do we really get something for nothing? To answer that, first a digression.

watson v. mit

So, being at the event captured in the image, I got to ask a question toward the end. Actually I asked two questions. The first was whether Watson, after ringing in, would use the remaining 3 seconds or so to continue computing. Gondek said it would if that helped. In actual competition that doesn’t appear to be the case, since the buzz-in thresholding condition ensured that further computation would not have been helpful. The second was a follow-up on an identified weakness of Watson: learning “common sense” knowledge. I asked what path AI research would take to tackle such knowledge, which is, by its very definition, “not in the books.” Gondek said that IBM is building up semantic information from corpora (e.g. a “report” is something that can be “turned in” and “assessed,” etc.). That wasn’t exactly what I was asking, however.

My point was whether all “knowledge” is written down. There is such a thing as experiential “knowledge,” which humans take years to learn and be trained in through parenting (i.e., to “mature”). If only there were a handbook on life, or if life could be learned by reading a series of textbooks, then perhaps I’d believe that the kind of general-purpose AI most people are probably imagining (rather than an expert/Q&A system) could be achieved along the lines of current methods.

How to (maybe) get WordPress Stats to accept the API key

WordPress Stats is a nice little plugin for WordPress, one of those devious hidden-image IP trackers. Unfortunately, getting it to accept the API key doesn’t always work (even when the same key works for Akismet without complaint).

You may keep getting

Enter your API key to link this blog to your account. Be sure to use your own API key! Using any other key will lock you out of your stats.

even if the API key is correct. Or, if you hard-code the API key:

An API Key is present in the source code but it did not work.

The Stats Plugin is not working because it needs to be linked to a WordPress.com account.

I didn’t find a solution in any of the forums, so I looked at the stupid script some more. Basically, it uses the API key to get a “blog_id” (a database index, most likely) from WordPress.com and can’t find one. So I made up a blog_id in the code. That shut up the plugin, but of course stats weren’t tracked.

Finally, I went to WordPress.com, logged into the account, made a new garbage wordpress.com blog, then a bit later took the made-up blog_id out of the code, deactivated and reactivated the plugin, and … everything works. The external blog shows up in the “Global dashboard,” and the real blog_id is returned from the code. But if I take out the hard-coded API key, it stops working again.

This is definitely a problem with registering externally hosted blogs. To make it work: hard-code the API key, make sure there is at least one wordpress.com blog on the account, wait a little bit, then reactivate the Stats plugin.

interactive sites, many of which i’ve never heard of

Some marketing image and statistics


old problem about mirrors

Why do left and right flip but top and bottom do not in a mirror?
After all, the plane of mirroring is neutral to all directions parallel to the plane.



Ever since the makers of Ghost got bought by Symantec and Symantec got bought by Norton (or is it the other way around?), I have had an inkling of what Ghost might have become, through the unfortunate experience of having used Symantec/Norton Antivirus (8.0, I believe, is the version MIT offers).

I got a chance to use Ghost again. Ghost 10.0 that is. Unbelievable! What a piece of crap! I just wanted to image a disk, but now you have to run the ugly yellow UI in Windows — wait, you have to first install it in Windows so it can “help” you “automatically” “define” “restore points” so you can “backup your computer.” What does that user-fuddy gibberish mean?! Oh look here, I can be an “advanced” user and make a straight disk-to-disk copy (no disk to image?) but every time I click the button it wants to install .NET Runtime 1.1 first, what the …? And it keeps wanting me to activate the product and get “LiveUpdates.” Umrghh! Booting the CD up by itself gives me a patchy “recovery console.” No option to image disks in sight. Needless to say I junked the CD.

Fortunately the package tucks in another CD called “Ghost 2003” for “older” computers. So it turns out Ghost 2003 is the Ghost that I remember. Man, thank goodness for older computers… Snorton has totally killed Ghost. Caveat emptor.

hilly Seattle

Seattle is really long along the north-south axis. Not only that, anybody who has been in Seattle for a while will notice that lots of interesting geographic features run along the north-south axis: hills, peninsulas, lakes. I’m just guessing that retreating glaciers had something to do with it.

Hills, yes.

Yesterday I drove up a fairly steep hill called Phinney Ridge (really quite steep, but not super steep by Seattle standards). I was also going west, so suddenly I was reminded of this geographic and geologic fact of Seattle and thought … “Yeah, I must be hitting a large gradient against one of these hill-spines. Gee, it’s even called a ‘ridge.’ Wonder if I can skirt around it,” and so on.

In fact there are lots of descriptive place names that didn’t register with me — like, what is “Capitol Hill” anyway? — until I saw this image. I cut it out of the USGS data site, where you can play around with such things to your heart’s content. (I also took a look at a couple of alternatives but decided they sucked.)

Seattle is supposed to have been built originally on “seven hills” (cf. Rome), and I labeled them here in red numbers 1-7: First Hill, Capitol Hill, Queen Anne, Magnolia, Beacon Hill, Denny Hill (razed early on), and Crown Hill. I call BS on the seven-hills theory. Not only do some of these “hills” not look like distinct hills, some of them aren’t even that impressive. There are lots more hills around … and Crown Hill looks sorely out of place, like an afterthought to make the number 7.

I haven’t really been everywhere in Seattle, so I can’t say where the hills are the steepest — they probably keep records of this. Where I have been though … some parts of the eastern ridge of Capitol Hill (green 1), the Downtown (green 2), and Magnolia have made for hair-raising experiences in a manual transmission car. By comparison, Phinney Ridge (green 3) really isn’t so bad.

The only really flat part of Seattle is the industrial/stadium/international district along the shores of the Duwamish River, which can be seen in the image, flanked by West Seattle and Beacon Hill. Seattle could have developed along the only river in the city: Seattle was almost called Duwamps, after all … but no, people had to go live on hills instead of flat land (yeah ok fine, there was too little of it and it was an Asian ghetto from the days of Chin Chun Hock).

Today I can go no farther (part 6)

This is part of the hard disk recovery documentation.

Part 6.

Today I can go no farther (so I stopped)

The last days of this project are spent on two tiring tasks that do not gain me very much, but must be done to carry this project to its logical conclusion. One of these is to decrypt a few very small, but important NTFS-encrypted files. The other is to wring the last readable bits out of the broken Seagate drive by splitting all the error regions to isolate the unreadable regions as much as possible. These can proceed in parallel.
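The region-splitting task is essentially a bisection search over the bad sectors, in the style of ddrescue’s splitting phase: attempt to read a whole region, and if the read fails, split the region in half and recurse until the unreadable sectors are pinned down as tightly as possible. A sketch of the idea in Python; `BAD_SECTORS` and `read_region` are simulated stand-ins for the real raw-device reads, not anything from this project’s actual tooling.

```python
# Sketch of error-region splitting: bisect any region that fails to
# read until the unreadable sectors are isolated as tightly as possible.

BAD_SECTORS = {7, 8}  # simulated unreadable sectors

def read_region(lo, hi):
    """Simulate a bulk read of sectors [lo, hi); fails if any sector is bad."""
    return not any(s in BAD_SECTORS for s in range(lo, hi))

def split_errors(lo, hi, bad_regions):
    if lo >= hi or read_region(lo, hi):
        return  # empty or fully readable: nothing to isolate here
    if hi - lo == 1:
        bad_regions.append((lo, hi))  # isolated a single unreadable sector
        return
    mid = (lo + hi) // 2
    split_errors(lo, mid, bad_regions)  # readable halves are ruled out
    split_errors(mid, hi, bad_regions)  # quickly; failing halves split again

bad = []
split_errors(0, 16, bad)
print(bad)  # prints [(7, 8), (8, 9)]
```

The payoff is that every successful bulk read rules out a whole span at once, so the number of slow, error-prone reads against the dying drive stays roughly logarithmic in the region size rather than linear.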

The tide turns (part 5)

This is part of the hard disk recovery documentation.

Part 5.

The tide turns (rather quickly)

After the exceedingly annoying but ultimately inconsequential ext2 interlude, I’m back on track with the original problem of recovering data from the broken Seagate drive.