It don't mean a thing - Musical structure at the intersection between music theory, cognition and computation
Music constitutes a central aspect of human nature and is a cross-cultural universal. In recent years, music research has attracted interest across cognitive disciplines, ranging from linguistics, music, psychology, and computer science to evolution and anthropology (Patel, 2008; Rebuschat, Rohrmeier, Cross & Hawkins, 2011). Moreover, music is a highly complex stimulus that opens a rich window into the human mind: it involves hierarchical organization of meter and rhythm, interaction and entrainment, processing of multiple concurrent streams of events, complex sequential structure, predictive processing, and the generation and regulation of emotion. Music thus provides an ideal domain for studying many aspects of human auditory cognition and emotion, as well as the capacity to process complex temporal sequences, without interference from propositional semantics. This talk focuses on the syntactic complexity of music, its formal modeling, and the implications for cognitive representation and processing. Musical syntax is closely tied to the temporal unfolding of music and is of crucial importance for understanding and enjoying it: syntax becomes audible, for instance, when a passage of music elicits a sense of finality, or when listeners experience the flow of musical tension and release. I will present my recent work on a model of harmonic syntax, discuss empirical research investigating some of its predictions, and outline work in progress.