abstract: As a network of processing elements grows in size and complexity, must the computational resources of individual nodes also grow? Or is there a "universal processor" with limited state (say, 32 bits total) and limited reliability (say, one out of every hundred operations produces a wrong result) out of which we can build reliable, arbitrarily large networks of arbitrarily complex topology?
In this talk, after presenting some reasons why one might want to build systems out of such skimpy processing units, we'll review what is known and what appear to be the main open problems. In a nutshell, with a handful of unreliable bits, one can achieve far more than one might expect.
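As a hedged illustration of the underlying idea (not a construction from the talk), redundancy with majority voting shows how unreliable components can be combined into a more reliable whole: if each independent copy of a computation errs with probability p, a 3-way majority vote errs only when at least two copies err simultaneously, driving the error rate down roughly quadratically.

```python
# Illustrative sketch: error rate of a 3-way majority vote over
# independent unreliable units, each wrong with probability p.
# The figure p = 0.01 matches the "one in a hundred" rate in the abstract.
p = 0.01

# The majority is wrong iff exactly 2 copies err, or all 3 do.
p_majority = 3 * p**2 * (1 - p) + p**3

print(f"single-unit error rate:    {p:.6f}")           # 0.010000
print(f"3-way majority error rate: {p_majority:.6f}")  # 0.000298
```

With p = 0.01 the voted error rate falls from 1% to about 0.03%; repeating the construction shrinks it further, at the cost of more (still unreliable) units.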