What happens when, after a brief chat with a colleague, I re-enter my
office and visually perceive the hot, steaming mug of coffee that I left waiting on my desk? One possibility is that my brain receives a swathe of visual signals (imagine, for simplicity, an array of activated pixels) that rapidly specify a number of elementary features such as lines, edges, and colour patches. Those elementary features are then fed forward, progressively accumulated, and (where appropriate) bound together, yielding higher and higher-level types of information, culminating in an encoding of shapes and relations. At some point, these complex shapes and relations activate bodies of stored knowledge, turning the forward flow of sensation into world-revealing perception: the seeing of steaming, delicious coffee in (as it happens) a funky retro-green mug. Such a model, though simplistically expressed here, corresponds quite closely to traditional cognitive-scientific approaches that depict perception as a cumulative process of 'bottom-up' feature detection.1

Here is an alternative scenario. As I re-enter my office, my brain already commands a complex set of coffee-and-office-involving expectations. Glancing at my desk, a few rapidly processed visual