learning in social networks

There was this talk (by M. Dahleh) on modeling whether distributed learning occurs in a social network, i.e., is the crowd always right? The problem model was like this: there is a “truth” which is either 0 or 1, representing some binary preference. Then, in a connected graph representing a learning network, each node makes a binary decision (0 or 1 again) based on an independent noisy read on the “truth,” as well as the decisions already made by some or all of its neighbors. (Each nodal decision is made once and is binding, so there is a predefined decision-making order among the nodes.)
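To make the setup concrete, here is a minimal sketch in Python of that decision process, under simplifying assumptions that were not part of the talk: every node sees all earlier decisions (so the graph is effectively complete and the order fixed), each private signal is correct with probability `signal_accuracy`, and a node sides with the majority of what it sees, falling back on its own signal to break ties.

```python
import random

def run_network(truth, n_nodes=50, signal_accuracy=0.7, seed=None):
    rng = random.Random(seed)
    decisions = []
    for _ in range(n_nodes):
        # Independent noisy read on the truth.
        signal = truth if rng.random() < signal_accuracy else 1 - truth
        # One-shot, binding decision: side with the majority of the earlier
        # decisions plus the node's own signal; ties go to the own signal.
        votes_for_1 = sum(decisions) + signal
        votes_for_0 = len(decisions) + 1 - votes_for_1
        if votes_for_1 > votes_for_0:
            decision = 1
        elif votes_for_0 > votes_for_1:
            decision = 0
        else:
            decision = signal
        decisions.append(decision)
    return decisions

if __name__ == "__main__":
    for seed in range(5):
        decisions = run_network(truth=1, seed=seed)
        print(f"seed {seed}: fraction deciding correctly = "
              f"{sum(decisions) / len(decisions):.2f}")
```

Even with these friendly parameters, some seeds lock virtually the whole network onto the wrong answer within the first few decisions, which is exactly the failure the rest of the talk was about.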

This is an interesting question because, at first glance, one would think that in a large enough network, enough independent reads on the truth accumulate in the aggregate to let at least the later-deciding nodes form a very good estimate of the truth. This is the basis of the folk belief in the “wisdom of the crowd.” However, that is not always what happens.


The problem really lies in the fact that a social network is (inadvertently) running a rather constrained version of belief propagation, not the full-fledged belief propagation we would like, in which nodes share much more. In full belief propagation, nodes pass along entire distributional beliefs; here each node passes only a binary decision, so its estimation information is compressed and largely discarded locally. When the observed decisions are estimates compressed to the point of being heavily distorted and only weakly indicative of the truth, then in certain network topologies the propagated errors may never decay away. There is also the single-pass nature of the decision-making: nodes that decide wrongly never get a chance to correct themselves, so the network as a whole cannot be guaranteed to converge on the right answer. Of course, some cooperative protocol could fix this, but we can’t really assume all nodes in a social network are cooperative, or even non-adversarial.
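To see what the compression costs, here is a toy comparison (again with assumed parameters, not numbers from the talk). In one regime a node sees only its predecessors’ 0/1 decisions and naively counts them as if they were independent signals; in the other, each node passes along a running tally of the private signals seen so far, which is proportional to the posterior log-odds and so stands in for the full belief that real belief propagation would transmit.

```python
import random

# Toy sequential-decision simulation.
#   - binary regime: a node counts predecessors' 0/1 decisions as if they
#     were independent signals, so once decisions lean two to one side,
#     no later private signal can flip the outcome.
#   - full-belief regime: nodes pass on the net tally of private signals
#     (a sufficient statistic), so the last node pools every signal.

def simulate(truth, n_nodes, p, share_full_beliefs, rng):
    decisions = []
    signal_tally = 0  # net count of private signals: +1 for "1", -1 for "0"
    for _ in range(n_nodes):
        signal = truth if rng.random() < p else 1 - truth
        own = 1 if signal == 1 else -1
        if share_full_beliefs:
            signal_tally += own
            evidence = signal_tally
        else:
            evidence = own + sum(1 if d == 1 else -1 for d in decisions)
        decision = 1 if evidence > 0 else 0 if evidence < 0 else signal
        decisions.append(decision)
    return decisions

rng = random.Random(1)
for full in (False, True):
    wrong = sum(
        simulate(truth=1, n_nodes=100, p=0.6, share_full_beliefs=full, rng=rng)[-1] == 0
        for _ in range(1000)
    )
    label = "full beliefs" if full else "binary decisions"
    print(f"{label}: last node wrong in {wrong} of 1000 runs")
```

In the binary regime the last node stays wrong in a sizable fraction of runs no matter how long the chain gets, because the cascade locks in early; with the shared tally, the last node’s error probability keeps shrinking as the number of pooled signals grows.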

So the crowd is not always right. In fact, it can be manipulated by “excessively influential” nodes in bad network topologies, as some other results in the talk indicated. Learning is somewhat more robust in topologies where a small (but non-negligible) fraction of nodes are “independently observing” ones: nodes that don’t listen to other nodes but are listened to.
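Here is a toy way to see why those independent nodes help (my own illustrative reading, with made-up parameters, not the model from the talk): since they never herd, their decisions remain independent noisy reads on the truth, so anyone who can pick them out and aggregate them does much better than someone who just follows the crowd’s majority.

```python
import random

# Herding nodes side with the majority of all earlier decisions plus their
# own signal (ties go to the own signal); a fraction `indep_frac` of nodes
# ignore earlier decisions entirely. At the end we compare the majority of
# ALL decisions (what the herding crowd converges to) with the majority of
# only the independent nodes' decisions (a pool of genuinely fresh reads).

def run(truth, n_nodes, p, indep_frac, rng):
    decisions, indep_decisions = [], []
    for _ in range(n_nodes):
        signal = truth if rng.random() < p else 1 - truth
        if rng.random() < indep_frac:
            decision = signal                      # doesn't listen to anyone
            indep_decisions.append(decision)
        else:
            votes_for_1 = sum(decisions) + signal
            votes_for_0 = len(decisions) + 1 - votes_for_1
            if votes_for_1 > votes_for_0:
                decision = 1
            elif votes_for_0 > votes_for_1:
                decision = 0
            else:
                decision = signal                  # break ties with own signal
        decisions.append(decision)
    return decisions, indep_decisions

rng = random.Random(3)
runs = 1000
crowd_wrong = indep_wrong = 0
for _ in range(runs):
    decisions, indep = run(truth=1, n_nodes=200, p=0.7, indep_frac=0.1, rng=rng)
    crowd_wrong += sum(decisions) * 2 < len(decisions)
    indep_wrong += sum(indep) * 2 < len(indep) if indep else 0
print(f"majority of all decisions wrong:         {crowd_wrong} / {runs}")
print(f"majority of independent decisions wrong: {indep_wrong} / {runs}")
```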
