a list of problems for finance

The system [of finance] is too complex to be run on error-strewn hunches and gut feelings, but current mathematical models don’t represent reality adequately. The entire system is poorly understood and dangerously unstable. The world economy desperately needs a radical overhaul and that requires more mathematics, not less.

This article in the Guardian is a little late to the party and has an intentionally misleading headline, but brings up some points that are usually too esoteric to survive in print:

Any mathematical model of reality relies on simplifications and assumptions. The Black-Scholes equation was based on arbitrage pricing theory, in which both drift and volatility are constant. This assumption is common in financial theory, but it is often false for real markets. The equation also assumes that there are no transaction costs, no limits on short-selling and that money can always be lent and borrowed at a known, fixed, risk-free interest rate. Again, reality is often very different.
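For reference, here is the standard form of the equation under discussion, for the price V(S, t) of a derivative on an underlying asset S; the constant volatility and constant risk-free rate are exactly the assumptions the quote flags:

```latex
% Black-Scholes PDE: V(S,t) is the derivative price, S the underlying price,
% sigma the volatility and r the risk-free rate, both assumed constant.
\[
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  + r S \frac{\partial V}{\partial S}
  - r V = 0
\]
```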

There are more false assumptions (Gaussianity of log-returns, complete markets, martingale price paths, etc.), but these are merely technical complaints, which can be patched (as many are doing). The real issue is, as the author notes, "… instability is common in economic models … mainly because of the poor design of the financial system." Namely, the models fail to account for behavioral effects that produce feedback, and that feedback gives rise to rather more fundamental problems, the kind that would require the "radical overhaul" alluded to in the opening quotation to resolve. There are some problems that could be tackled in this area.
(Read the article)

Google+ and its circles, a user-graph evolution

Eduardo … I’m talking about taking the entire social experience of college and putting it online.

When the movie "The Social Network" came out, this line caught my attention. I'm not sure whether this thesis — let's call it the "replication thesis" — was what Monsieur Zuckerberg actually had in mind or just something the screenwriter came up with, but it captures what actually undergirds the online social platforms of today.

In all likelihood, Zuckerberg did not at first intend Facebook to be more than its namesake — a dorm facebook. Just as, in all likelihood, Twitter was meant as no more than a status message broadcast system, at first. The fact that Facebook became something of a gathering place and Twitter became a “microblogging” service — in essence, taking over functions that used to be conducted in other ways — I think owed something to their use of a “correct” user graph for certain contexts. It was the user graph that allowed, then limited, the range of social functions that people were willing to port over to the online platform. With the undirected graph, Facebook (and clones) modeled something like a big gathering, maybe a party. With the directed graph, Twitter (and clones) modeled something a bit more nuanced, like a groupie-celebrity relationship. (Is it any surprise, then, that celebrities drove the latter’s popularity?)
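A toy sketch of the distinction (entirely my own illustration; the class and user names are made up): a friendship edge is mutual by construction, while a follow edge is one-way, and that single modeling choice determines which social relationships the platform can even express.

```python
# Toy sketch: the same users under two user-graph models.
# "Friendship" (Facebook-style) is an undirected graph: edges are mutual.
# "Following" (Twitter-style) is a directed graph: edges are one-way.

class UndirectedGraph:
    """Facebook-style: adding an edge makes it mutual by construction."""
    def __init__(self):
        self.neighbors = {}
    def add_edge(self, a, b):
        self.neighbors.setdefault(a, set()).add(b)
        self.neighbors.setdefault(b, set()).add(a)

class DirectedGraph:
    """Twitter-style: an edge from a to b does not imply one from b to a."""
    def __init__(self):
        self.follows = {}
    def add_edge(self, a, b):
        self.follows.setdefault(a, set()).add(b)

party = UndirectedGraph()
party.add_edge("alice", "bob")          # both are now each other's "friend"

fandom = DirectedGraph()
fandom.add_edge("fan1", "celebrity")    # the celebrity follows no one back
fandom.add_edge("fan2", "celebrity")

print(party.neighbors["bob"])                  # {'alice'}: mutual by design
print(fandom.follows.get("celebrity", set()))  # set(): asymmetric by design
```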

But I get the sense that neither Facebook nor Twitter truly believes in the replication thesis. They’ve construed their challenge narrowly as one of periodically pushing out new “things you could do,” most of which are nowadays ignored by users, or adding more places at which you could interact, but in the same way. They don’t see that users voluntarily do on a platform only those things that are compatible with their perception of the modeled social space. You can’t push anything on them any more than you can force people to play some game at a party. Yet I see no movement to revisit the user graph and better model real social relationships with all of their complexities. If left unchanged, the inevitable result will be that the range of social functions these platforms support stagnates, and therein should lie their eventual downfall. In fact, that probably solves the supposed “mystery” of Myspace’s decline, too. It is in this context that Google+ arrives.
(Read the article)

learning in social networks

There was this talk (by M. Dahleh) on modeling whether distributed learning occurs in a social network, i.e., is the crowd always right? The problem model was like this: there is a "truth" which is either 0 or 1, representing some binary preference. Then, in a connected graph representing a learning network, each node makes a binary decision (0 or 1 again) based on an independent noisy read on the "truth," as well as the decisions made by some or all of its neighbors who have already decided. (Each nodal decision is made once and is binding, so there is a predefined decision-making order among the nodes.)

This is an interesting question because, at first thought, one would think that in a large enough network, a sufficient number of independent reads on the truth would accumulate in the aggregate to allow at least the later-deciding nodes to get a really good estimate of the truth. This is the basis of the folk belief in the "wisdom of the crowd." However, this is not always what happens: once early decisions pile up on one side, later nodes can find it rational to discount their own noisy reads and follow the herd, so the independent information stops accumulating.
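Here is a minimal simulation in this spirit; it is my own sketch, not the talk's model: agents decide in a fixed order, each sees all earlier decisions, private signals are correct with probability p_correct, and the decision rule is a naive majority count over signal plus earlier decisions.

```python
import random

def cascade_trial(n_agents=100, p_correct=0.7, truth=1, seed=None):
    """Sequential binary decisions: each agent draws a private signal
    (correct with probability p_correct) and sees all earlier decisions.
    Rule: majority vote of (own signal + earlier decisions), breaking
    ties with the private signal."""
    rng = random.Random(seed)
    decisions = []
    for _ in range(n_agents):
        signal = truth if rng.random() < p_correct else 1 - truth
        votes_for_1 = sum(decisions) + signal
        votes_for_0 = (len(decisions) + 1) - votes_for_1
        if votes_for_1 > votes_for_0:
            decisions.append(1)
        elif votes_for_0 > votes_for_1:
            decisions.append(0)
        else:
            decisions.append(signal)  # tie: trust own signal
    return decisions

# Fraction of trials in which the *last* agent (who sees the most) is wrong.
# Once the decision margin reaches 2, every later agent follows the herd
# regardless of its own signal, so the error rate does not vanish as
# n_agents grows.
wrong = sum(cascade_trial(seed=s)[-1] != 1 for s in range(2000))
print(wrong / 2000)
```

Even with fairly reliable signals (p_correct = 0.7), the printed error rate of the best-informed, last-deciding agent stays at roughly 0.15 under this naive rule instead of vanishing, because a bad early run locks in a wrong herd. Whether smarter (Bayesian) agents and other network structures escape this is, roughly, the question the talk addressed.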

(Read the article)

senate voting model graph

There was a talk today that referenced this paper by Banerjee, El Ghaoui, and d’Aspremont on obtaining sparse graphical models for parameterized distributions.

An undirected graphical model of Senate voting behavior, encoding conditional independence relationships among senators, was shown.

If two nodes A and B are connected only through a set of nodes C, then A and B are independent conditioned on C. Basically, it says that if you want to predict anything about B from A and C, then C is enough, because A won't tell you anything more.

As pretty as the graph looks, this is a rather odd visualization. Without seeing the (Ising) model parameters, especially whether the edge weights are positive or negative, this graph is hard to interpret, and the conclusions in the paper are especially questionable to me. In particular, being in the middle of this graph does not necessarily imply "moderation" or "independence" (unlike in, let's say, this graph). We would expect moderates to exhibit weak dependence on either party's large clique. But if, for example, the edge weight between Allen and B. Nelson is strongly negative (which it very well may be, since the two parties are not otherwise connected via negatively weighted edges), then the graph seems to imply that how the two parties vote can largely be predicted from the votes of the likes of Allen or B. Nelson; in that sense, they are the indicators for their parties, disagreeing on exactly those party-disambiguating issues.
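For concreteness, the binary (Ising) model family in question has the form below; the drawn graph shows only where the edge weights are nonzero, which is why their signs matter so much for interpretation:

```latex
% Ising model over votes x_i in {-1, +1}: theta_i is a senator's individual
% bias and theta_ij the edge weight between senators i and j (absent edge = 0).
\[
p(x) \;\propto\; \exp\!\Big( \sum_{i} \theta_i x_i \;+\; \sum_{i<j} \theta_{ij}\, x_i x_j \Big)
\]
% theta_ij > 0 favors i and j voting alike; theta_ij < 0 favors them voting
% oppositely. The separation statement quoted above is this model's global
% Markov property.
```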

There is some additional funny stuff going on. According to the paper, a missing vote counts as a "no," because they only solved the problem for binary and Gaussian distributions. I also count only about 80 nodes in the graph, while there are 100 senators. The graph structure also seems a bit too sparse, though this may be intentional, in order to drop weak dependencies from the graph. One does wonder, though, whether the results would have been as good without some manual fudging.

Unrelatedly, this reminds me of another famous academic paper graph, the high school dating graph:

If you look carefully, there is some oddball stuff going on here, too.

saving vs. consumption as default actions

Lately, for good reason, there have been many advice columns telling people how to plan for personal financial goals. It always used to boggle my mind when I heard exhortations to save, where "save" is used in the sense of one action among several to choose from, parallel to things like "invest" or "work". Until I realized, some years back, that for some people, to save is exactly such a parallel action of choice.
(Read the article)

living in the cloud

Cloud computing is taking off. That’s like the first sentence of some recent “introduction” mumbo jumbo I wrote for some paper. There are of course different models of this.

One is to use all services that Google provides, which are entirely built on web applications. I don’t believe this is the right model.
(Read the article)

resolving the St. Petersburg paradox

The St. Petersburg paradox is based on one of those gambling games where the usual model of using expected gain to decide whether to play the game gives a counter-intuitive result.

In the simplest of examples, you pay some entry fee to play, a counterparty puts $1 in a pot, and then a coin is flipped repeatedly, with the counterparty doubling the pot on every flip, until "tail" comes up. You receive the money in the pot. The expected gain of this game is infinite, regardless of the initial entry fee. So it would seem that one should always play, no matter the amount demanded as an entry fee. But, as the article points out, "few of us would pay even $25 to enter such a game."
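The infinite expectation is a two-line computation, under one reading of the setup where the pot has been doubled k - 1 times (to 2^(k-1) dollars) by the time the first tail arrives on flip k:

```latex
% First tail on flip k has probability 2^{-k}; by then the $1 pot has been
% doubled k-1 times, so the payout is 2^{k-1} dollars.
\[
\mathbb{E}[\text{payout}]
  \;=\; \sum_{k=1}^{\infty} 2^{-k}\, 2^{\,k-1}
  \;=\; \sum_{k=1}^{\infty} \tfrac{1}{2}
  \;=\; \infty
\]
```

So no finite entry fee makes the expected gain negative, which is exactly the counter-intuitive part.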
(Read the article)

“rationally” exuberant

Ah, hahahaha! I just found this article by the nuts at the American Enterprise Institute from the late 1990s, reproduced below:

Stock Prices Are Still Far Too Low

By Kevin A. Hassett, James K. Glassman
Posted: Saturday, January 1, 2000

ON THE ISSUES
AEI Online (Washington)
Publication Date: March 17, 1999

The U.S. stock market, despite astonishing price appreciation over the past seventeen years, could triple or quadruple in value without exceeding its true worth.

(Read the article)