As bots become a larger part of our online culture, social network analysts are starting to incorporate bots as “participants” in experiments, with fascinating results that confirm earlier findings of cultural evolution research.
The cover of Nature on 18 May 2017 highlights a study demonstrating the value of “noise” in solving human coordination problems. In an experiment in which human participants (20 unique participants at a time, repeated over 200 times) interacted online with algorithmic bots, Hirokazu Shirado and Nicholas Christakis of Yale University introduced three levels of random, bot-driven choices into three different human network configurations, for a total of nine experimental conditions. Compared against the bot-free human control in each case, only one of the nine conditions helped the humans find a solution: the one in which bots generated 10% random noise from a central network position. Moreover, this advantage was greatest when the problem was globally difficult to solve, i.e. when there were only a couple dozen possible solutions as opposed to several hundred.
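To see intuitively why random moves can help, consider a toy version of a networked coordination game: each node repeatedly picks the choice that conflicts least with its neighbors, while designated “noisy” nodes occasionally pick at random. This is an illustrative sketch only, not the study's actual protocol; the graph, the greedy update rule, and all parameters here are my own assumptions:

```python
import random

def conflicts(colors, edges):
    """Count edges whose endpoints share a color (lower is better)."""
    return sum(1 for a, b in edges if colors[a] == colors[b])

def play(edges, n_nodes, n_colors=3, noise_nodes=(), noise=0.1,
         rounds=2000, seed=0):
    """Greedy color coordination: each round one random node picks the
    color that minimizes conflicts with its neighbors; nodes listed in
    `noise_nodes` instead pick a random color with probability `noise`.
    Returns the number of conflicting edges remaining."""
    rng = random.Random(seed)
    colors = [rng.randrange(n_colors) for _ in range(n_nodes)]
    neighbors = {i: [] for i in range(n_nodes)}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    for _ in range(rounds):
        if conflicts(colors, edges) == 0:
            break  # global coordination reached
        i = rng.randrange(n_nodes)
        if i in noise_nodes and rng.random() < noise:
            colors[i] = rng.randrange(n_colors)  # bot-style random move
        else:
            # myopic best response: the least-conflicting color
            colors[i] = min(range(n_colors),
                            key=lambda c: sum(colors[j] == c
                                              for j in neighbors[i]))
    return conflicts(colors, edges)
```

On an easy graph the greedy rule alone suffices; the point of the noise is to shake the network out of configurations that are locally stable for every individual but still conflicting globally.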
So, the insight is that a small amount of noise, introduced by centrally positioned bots, can help humans solve a simple coordination problem. This Yale study brings together several elements of cultural evolutionary theory. First, we need a balance between a majority who copy others and a minority who produce variation. Second, the network position of these variation-producers matters. Third, the difficulty of the problem affects the optimal balance of copiers and variation-producers in solving a collective problem.
In the Yale study, the noisy bots served the function of individual learners in cultural evolution theory: the variation they introduced helped search the space of possible solutions. Searching the evolutionary landscape for an optimal solution requires the right balance of social learning versus random variation. While the best balance depends on the difficulty of the collective problem, usually only a small amount of random variation is best.
About a decade ago, Mike O’Brien and Alex Mesoudi introduced an online game in which participants made projectile points by copying successful parts of other people's points and occasionally inventing their own ideas (variation). This study, and others like it, shows that a coordination problem for a group, such as devising the best projectile point, is best solved by roughly a 95%/5% split between social and individual learners, respectively. This is fairly close to the Yale study, which found that 10% random variation was ideal (perhaps 5% would have performed even better, as the Yale study tested only 0%, 10%, and 30%).
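The copy-versus-invent trade-off can be sketched with a toy simulation. Everything below, the made-up rugged landscape, the copy rule, and the parameters, is an illustrative assumption and not the Mesoudi–O'Brien experiment; the point is only that mostly-copying populations still need a few explorers to find good solutions:

```python
import random

def rugged_fitness(x):
    """Toy multi-peaked landscape over integers 0..99 (illustrative only)."""
    return (x % 17) + (x % 5) * 3 + (30 if x == 73 else 0)

def evolve(copy_rate=0.95, pop_size=50, steps=200, seed=1):
    """Each step every agent either copies the current best solution
    (social learning, probability `copy_rate`) or tries a random new
    one (individual learning). Returns the best fitness found."""
    rng = random.Random(seed)
    pop = [rng.randrange(100) for _ in range(pop_size)]
    for _ in range(steps):
        best = max(pop, key=rugged_fitness)
        pop = [best if rng.random() < copy_rate else rng.randrange(100)
               for _ in pop]
    return max(rugged_fitness(x) for x in pop)
```

With `copy_rate=1.0` the population is frozen at the best of its initial guesses; with a small minority of individual learners it keeps probing the landscape while the majority locks in whatever the explorers discover.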
The Yale experiment also showed that the bots (the variation producers) are most helpful when they occupy central network locations and when the collective problem is hard to solve. The first part, network centrality, has been shown in dozens of studies of information flow on networks. In a 2015 PNAS paper, for example, Damon Centola and Andrea Baronchelli studied an online coordination game played by pairs of players connected within a network. In each round, a random pair of connected players were asked to provide, independently, a name for an object or face. If the two names matched, the pair won points. In the next round, each played again with a new partner from their network. Players would often pick names that they could see were scoring well. The study showed that if players played only their network neighbors, different names became locally popular within the network. If players were paired through a network-wide mixing process, however, then very often a single name would ultimately sweep across the entire population, eliminating other variants. Within this homogeneously mixing population, Centola and Baronchelli discovered that “local failure accelerated global coordination”, the very same inference drawn in the Yale study. In cultural evolutionary theory, copiers can proliferate when the environment is predictable (i.e. not hard to solve), but more variation (individual learning) is needed to solve more complex problems or search “rugged” fitness landscapes.
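The dynamic Centola and Baronchelli observed is closely related to the classic “naming game” model from the statistical-physics literature. The sketch below is my own simplified, fully mixed variant (not their experimental protocol), showing how repeated pairwise matching lets a single name sweep a well-mixed population:

```python
import random

def naming_game(n_agents=30, steps=20000, seed=2):
    """Minimal naming game: a random speaker utters a name from its
    inventory (inventing a new one if the inventory is empty); if the
    listener also knows that name, both collapse their inventories to
    it, otherwise the listener learns it. Returns how many distinct
    names survive across the population."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_name = 0
    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(next_name)  # invent a new name
            next_name += 1
        name = rng.choice(sorted(inventories[speaker]))
        if name in inventories[listener]:
            # success: both parties drop all competing names
            inventories[speaker] = {name}
            inventories[listener] = {name}
        else:
            inventories[listener].add(name)
    return len({n for inv in inventories for n in inv})
```

Restricting pairs to network neighbors instead of sampling across the whole population would, as in the PNAS study, tend to leave different names locally popular rather than producing a single global convention.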