Archive for the 'Uncategorized' Category

senate voting model graph

There was a talk today that referenced this paper by Banerjee, El Ghaoui, and d’Aspremont on obtaining sparse graphical models for parameterized distributions.

An undirected graphical model encoding conditional independence relationships in Senate voting behavior was shown.

If two nodes A and B are connected only through a set of nodes C, then A and B are independent conditioned on C. Basically, if you want to predict anything about B from A and C, then C is enough, because A won’t tell you anything more.

As pretty as the graph looks, this is a rather odd visualization. Without seeing the (Ising) model parameters, especially whether the edge weights are positive or negative, the graph is hard to interpret, and the conclusions in the paper are especially questionable to me. In particular, being in the middle of this graph does not necessarily imply “moderation” or “independence” (unlike in, let’s say, this graph). We would expect moderates to exhibit weak dependency on either party’s large clique. But if, for example, the edge weight between Allen and B. Nelson is strongly negative (which it very well may be, since the two parties are not otherwise connected via negatively weighted edges), then the graph seems to imply that how the two parties vote can largely be predicted from the votes of the likes of Allen or B. Nelson; in that sense, they are the indicators for their parties, disagreeing on exactly those party-disambiguating issues.
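The paper fits a binary (Ising) model, but the conditional independence semantics are easiest to check numerically in the Gaussian case, where a zero in the precision (inverse covariance) matrix means exactly “conditionally independent given the rest.” A minimal sketch with made-up numbers, for a three-node chain A–C–B:

```python
import numpy as np

# Hypothetical 3-node Gaussian graphical model with chain structure A - C - B.
# Zeros in the precision matrix Theta encode conditional independence;
# here Theta[A, B] = 0, so A and B are independent given C.
A, C, B = 0, 1, 2
Theta = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  2.0]])
Sigma = np.linalg.inv(Theta)

# Marginally, A and B ARE correlated (connected through C)...
print(Sigma[A, B])  # → 0.25, nonzero

# ...but conditioning on C removes the dependence: the conditional
# covariance of (A, B) given C, via the Schur complement, has zero off-diagonal.
idx, c = [A, B], [C]
Sigma_cond = (Sigma[np.ix_(idx, idx)]
              - Sigma[np.ix_(idx, c)] @ np.linalg.inv(Sigma[np.ix_(c, c)]) @ Sigma[np.ix_(c, idx)])
print(Sigma_cond[0, 1])  # → 0.0: A and B are independent given C
```

The same reading applies to the senate graph: a missing edge is a claim about conditional, not marginal, independence, which is why the edge signs matter so much for interpretation.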

There is some additional funny stuff going on. According to the paper, a missing vote counts as a “no,” because they only solved the problem for binary and Gaussian distributions. I also count only about 80 nodes in there, while there are 100 senators. The graph structure also seems a bit too sparse, but this may be intentional, in order to drop weak dependencies from the graph. One does wonder, though, whether the results would have been any good without manual fudging.

Unrelatedly, this reminds me of another famous academic paper graph, the high school dating graph:

If you look carefully, there is some oddball stuff going on here, too.

fail

Must have been a piece of work by MIT students… Windsor Street near Mass. Ave.

employer of last resort

I’ve been reading about these “job guarantee” or “employer of last resort” theories, and they seem interesting. Basically, the government provides employment at a delta below the legal minimum wage for those who are unemployed, thereby absorbing excess labor into the public sector. The advantages are clear: it is certainly better than welfare, and it doesn’t compete with the private sector.

Why is this? Let’s reason about it in a crude way.

Krugman’s advice to Japan

So I came across one of Paul Krugman’s writings (more here) berating Japan for not taking drastic actions like quantitative easing back in the 1990s. It seems the debate Krugman is having with his imagined theoretical adversaries turns on whether over-capacity should be worked off by means of a long and painful period of under-employment, or whether capital should be injected to keep employment up in those parts of the economy not subject to over-capacity.

I think this is a false dichotomy. Surely, the first approach would mean all parts of the economy suffer together. But it is wrong to assume that the second approach has no side effects. In fact, unless sectors of the economy “not in over-capacity” can be clearly identified, and capital funneled directly to those parts of the economy or allocated to them by banks, the injection of capital will simply flow to all parts of the economy, including the parts ostensibly in over-capacity. Look at this complaint:

What really struck me in Skidelsky’s account, however, was the extent to which conventional opinion in the 1920s viewed high unemployment as a good thing, a sign that excesses were being corrected and discipline restored–so that even a successful attempt to reflate the economy would be a mistake. And one hears exactly the same argument now. As one ordinarily sensible Japanese economist said to me, “Your proposal would just allow those guys to keep on doing the same old things, just when the recession is finally bringing about change.”

This is exactly right. As we see over and over, human nature is such that, without pain, we do not learn from mistakes (in general), since by some definition of “mistake,” things that do not cause pain are not mistakes. Without austerity and pain, it is optimistic to assume the injected capital will not be used to go back to the old investment schemes and delay restructuring. It is also optimistic to assume politicians beholden to populist wishes will make painful cuts to the sectors in over-capacity at the same time as they dole out fresh capital to the sectors deserving it, both of which are necessary actions for a shortened recession and sustained recovery.

On top of that, what if, whether due to its interconnections or otherwise, there is over-capacity in every sector of the economy? Imagine the world built everything that anybody could possibly want for 10 years; then what, besides unemployment for 10 years? And so what, it isn’t such a bad thing then, is it? The Japanese aren’t doing badly, are they? So what if they haven’t been fully productive for 10 years, who cares? If they are happy with their standard of living, they can all work half-time for all I care.

Toyota recall

So Toyota thinks it has a drive-by-wire software bug now.

And it is just now installing the “smart gas pedal.” What is the “smart gas pedal” and what’s “smart” about it, you ask? Watch…

Smart Gas Pedals May Solve Floor-Mat Problem

By Christopher Jensen

Some automakers – primarily European – are using an unusual method to reduce the chances of unintended acceleration from something like a floor mat getting tangled up with the accelerator: smart gas pedals.

If the vehicle is moving and both the gas and brake pedals are being pushed at the same time, the computer tells the engine to ignore the gas pedal.

“It is an additional safety feature,” said Thomas Plucinsky, a BMW spokesman. “The brake takes precedence.”

Oh, really, the brake takes precedence. Wow.

I thought it was obvious and necessary that any system with safety concerns should be designed to produce a safe outcome in its failure state, rather than an unspecified one. Guess I was wrong.
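The rule the article describes fits in a few lines. A hypothetical sketch of brake-override logic (not Toyota’s or BMW’s actual implementation, and with made-up pedal units):

```python
def throttle_command(speed_mph, gas_pct, brake_pct):
    """Hypothetical brake-override rule: if the vehicle is moving and
    both pedals are pressed, ignore the gas pedal (brake takes precedence)."""
    moving = speed_mph > 0
    both_pressed = gas_pct > 0 and brake_pct > 0
    if moving and both_pressed:
        return 0.0  # fail safe: cut the throttle entirely
    return gas_pct  # otherwise pass the driver's request through

# A stuck floor mat holding the gas at 80% while the driver brakes hard:
print(throttle_command(speed_mph=60, gas_pct=80, brake_pct=100))  # → 0.0
```

The interesting part is not the conditional; it is choosing which pedal wins when the inputs conflict, i.e., specifying the failure state in advance.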

autotune and avatar

Although autotune is now used as a digital effect, it was originally used to correct pitch in songs. With it, singers can sing in perfect pitch, so long as they are not too far off to begin with. Indeed, autotune does not need to work in real time, and at a high level it is no different from an instrument synthesizer, but with the instrument sampled in real time. (Perhaps such a hybrid approach could render real acoustic instruments even more realistically, and make almost anybody a “great” musician.) As the automated portion of autotune’s capability improves, less and less of the singer’s input is needed, and one finds less and less need for the perfect singer, and more and more need for the perfect song and its performance intention — this is, after all, the essence of a creative work — not the much-valued virtuosity (prized for its “difficulty”) with which it is performed.
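The internals of commercial pitch correctors aren’t public, but the core “correction target” step is just snapping a detected frequency to the nearest note of the equal-temperament scale. A minimal sketch (the A4 = 440 Hz reference is an assumption; real products are more configurable, and also have to detect pitch and resynthesize the audio smoothly):

```python
import math

A4 = 440.0  # reference pitch in Hz (an assumption)

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-temperament pitch.
    This is only the correction target; the hard parts of pitch correction
    are detection and artifact-free resynthesis."""
    semitones = 12 * math.log2(freq_hz / A4)       # distance from A4 in semitones
    return A4 * 2 ** (round(semitones) / 12)        # round to the nearest semitone

# A slightly flat A4 (434 Hz) gets pulled up to 440 Hz:
print(snap_to_semitone(434.0))  # → 440.0
```

The “not too far off” caveat above is visible here: a note more than half a semitone sharp or flat snaps to the wrong target entirely.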

A similar thing has been taking place in motion picture production, with computer-assisted graphics taking over for effects and stunts. Lately, the production process for the movie Avatar has pushed this to a mini-plateau of sorts. Avatar, as you may recall, was produced by sampling the expressivity of the actors on a body grid, then re-rendering it in a very different form. Much like the human-controlled machines in the movie, the actors are just giving input to a machine, which follows the director’s desires. Again, as the technology improves, less and less of the actors’ input is needed, and eventually they, like the singers, will be unnecessary.

When it comes to the creative fields, as it does — I believe — in any field, the evolution of automation technology diminishes birth advantages, allows compartmentalization of skills, promotes specialization of skills, and therefore equalizes opportunities. The beneficiaries are people who engage in true creativity of the mind, both in the arts and in the engineering of the technology, while the losers are the human “performers”, save for the few truly great ones, who will be needed to go through the dehumanizing experience of being sampled as input for a machine.

So… study what a machine cannot do, or, study how to make a machine do that.

the genius of the rotary

Previously, I’ve written about the craziness of the roundabout, otherwise known as the rotary interchange. More and more, though, I find the rotary interchange to be one of the best-designed traffic devices around — indeed, much better than the traffic light and the stop sign.

The advantages of the rotary are several. Firstly, there is no need to wait a fixed amount of time to go: since feed traffic always yields to rotary traffic, the time spent waiting to enter the rotary dynamically adjusts to the density of traffic requiring interchange. Secondly, all traffic is treated fairly once inside the interchange, regardless of routing destination. Thirdly, intersections immediately generalize to more than four-way junctions — six-way, eight-way, or odd-number-way junctions are possible; indeed, that’s primarily where rotaries are used in the first place. Fourthly, “special” paths like U-turns are no longer special, and so are accommodated without any problem.

In short, rotary interchanges make smart use of decentralized, self-organizing decision-making, applying a simple local rule (yield to rotary traffic) to achieve complex results. That is genius.
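The local rule is simple enough to simulate. A toy, deterministic sketch with hypothetical numbers (a 12-cell ring, four entrances, three cars queued at each; real rotary dynamics are continuous, of course):

```python
# Toy rotary: the only rule is "a queued car enters only when the rotary
# cell just upstream of its entrance (and the entrance cell) is clear."
N = 12                                 # rotary ring discretized into 12 cells
entrances = {0: 3, 3: 3, 6: 3, 9: 3}   # entrance cell -> cars queued there
ring = {}                              # occupied cell -> cells left before exiting
served = 0

for step in range(60):
    # circulating cars advance one cell; cars done traveling exit the rotary
    ring = {(cell + 1) % N: left - 1 for cell, left in ring.items() if left > 1}
    # feed traffic yields to rotary traffic (the single local rule)
    for e in entrances:
        if entrances[e] > 0 and (e - 1) % N not in ring and e not in ring:
            ring[e] = 6                # each car travels half the ring, then exits
            entrances[e] -= 1
            served += 1

print(served)  # → 12: every queued car gets in, with no fixed-time signal
```

No cell coordinates with any other; waiting time emerges from local occupancy, which is exactly the “dynamically adjusts to traffic density” property above.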

a favorite day

My favorite composition so far, on a theme by cw.


…and a mechanically re-arranged brass version

earthquake prediction

Nowadays a large amount of geological and seismic data is collected. When earthquakes occur, people analyze this data to see if there are predictors: for example, some look for foreshocks or changes in wave propagation, and so on. It seems to me that the next step beyond passive data collection would be to send active probe impulses to find the current condition of faults and whether they will fail soon. Is this done or not?

In any case, “earthquake prediction” may be a misnomer. One can never predict the precise time of an earthquake. But with more data and detection of ever smaller features, one can give more granular probabilistic predictions. So instead of saying there is a probability \(p\) of an earthquake in the next 30 years, we may be able to say, at any given moment, that there is either a probability \(p_1 \ll p\) or a probability \(p_2 \gg p\) of one within the next year.
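The move from a fixed \(p\) to a time-varying \(p_1\) or \(p_2\) is just Bayesian updating on whatever features we can detect. A sketch with entirely made-up numbers, conditioning a one-year quake probability on a hypothetical precursor signal:

```python
# Hypothetical numbers: a base rate, plus the hit rate and false-alarm
# rate of some precursor detector (foreshock pattern, wave-speed change, ...).
p_quake = 0.01                  # prior: quake within the next year
p_signal_given_quake = 0.50     # precursor seen when a quake is coming
p_signal_given_no_quake = 0.02  # precursor seen anyway (false alarm)

def posterior(signal_seen):
    """Bayes' rule: update the one-year quake probability on the precursor."""
    if signal_seen:
        num = p_signal_given_quake * p_quake
        den = num + p_signal_given_no_quake * (1 - p_quake)
    else:
        num = (1 - p_signal_given_quake) * p_quake
        den = num + (1 - p_signal_given_no_quake) * (1 - p_quake)
    return num / den

p2 = posterior(True)   # signal observed: probability jumps well above the prior
p1 = posterior(False)  # no signal: probability drops below the prior
print(round(p1, 4), round(p2, 4))  # → 0.0051 0.2016
```

Even a mediocre detector splits the flat 30-year figure into a quiet regime \(p_1\) and an alarmed regime \(p_2\); better features just push the two further apart.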

time, effort, money

It is often said in certain situations that it is the thought that counts, but in fact, it is the amount of some limited resource expended that counts.

Are humans beyond such crassness? I submit that they are not, and to keep them pleased, expend we must.

Certainly that resource is not necessarily as transparent as money, but it is a resource all the same. After all, what is one born with? One is born with time, which is a limited resource. It doesn’t matter whether one converts time into money at an exchange rate known as a salary, or converts time into effort at an exchange rate known as physical ability; so long as one is not Superman, these exchange rates are finite, and thus the output is limited as well. One must wonder if it isn’t the finitude of the resource (time) that ultimately creates value, and if it isn’t the transfer of value that pleases. If so, what can we conclude about even the most benign and thoughtful gestures? That they are merely purchases made with a portion of one’s lifetime pool of resources?

There is some kind of corollary to this along the lines of time maximization as a life algorithm, but I don’t have time to think about it.
