Music is one of the great unsolved scientific mysteries. Although most of us know what music is in a subjective sense, none of us really knows what it is in an objective sense. There has been a revival of "music science" in recent decades, but modern science remains profoundly ignorant about what music is, what it means (if anything) and why we respond to it the way we do.
In this book, Philip Dorrell presents his "super-stimulus" theory of music. The basic assumption of the theory is that music perception is really the perception of something else. This leads to the question: "What is it that is like music but is not music?" The only reasonable answer to that question, Dorrell argues, is "speech". It follows that "musicality" must be a perceived aspect of speech, and that music is a "super-stimulus" (or "ultra-normal" stimulus) for musicality.
Proceeding on this assumption, Dorrell analyses individual aspects of music. He assumes that each aspect is a super-stimulus for a corresponding cortical map, where each such cortical map performs a particular task in the perception of speech. (At this point in the analysis, no particular assumption is made about what "musicality" means or represents.) Several important discoveries emerge from this analysis. The first is that the apparent differences between the characteristics of speech and music can be reconciled: music has simultaneous pitch values (i.e. harmony) whereas speech does not, musical melodies are based on scales whereas speech "melodies" are mostly continuous, and musical rhythm is very regular and hierarchical whereas speech "rhythm" is mostly irregular. The second is the significance of musical symmetries, in particular pitch translation invariance and time scaling invariance, both of which imply a non-trivial implementation built into the brain's speech (and music) perceptual machinery. (For the sake of completeness, a total of six musical symmetries are analysed in the book.) The third, and perhaps the most significant because it seems to point the way to discovering the meaning of "musicality", is that "constant activity patterns" occur in cortical maps when they respond to music, but not when they respond to speech.
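To make the two symmetries concrete, here is a minimal sketch, not taken from the book, of how pitch translation invariance and time scaling invariance can be stated for a melody represented as a list of (pitch, duration) pairs; the function names and the example tune are illustrative assumptions.

```python
# Illustrative sketch (not from the book): two musical symmetries expressed as
# invariance properties of a melody given as (pitch, duration) pairs.
# Pitches are MIDI note numbers (semitones), durations are in seconds.

def intervals(melody):
    """Pitch differences between successive notes: unchanged by transposition."""
    return [b[0] - a[0] for a, b in zip(melody, melody[1:])]

def duration_ratios(melody):
    """Ratios of successive note lengths: unchanged by a change of tempo."""
    return [b[1] / a[1] for a, b in zip(melody, melody[1:])]

def transpose(melody, semitones):
    """Pitch translation: shift every pitch by the same interval."""
    return [(p + semitones, d) for p, d in melody]

def scale_time(melody, factor):
    """Time scaling: stretch or compress every duration by the same factor."""
    return [(p, d * factor) for p, d in melody]

# Opening of "Frere Jacques": C D E C, one beat each at 120 bpm.
tune = [(60, 0.5), (62, 0.5), (64, 0.5), (60, 0.5)]

# The perceived "same tune" survives both transformations.
assert intervals(tune) == intervals(transpose(tune, 7))                  # up a fifth
assert duration_ratios(tune) == duration_ratios(scale_time(tune, 1.5))   # slower tempo
```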
The book also analyses several additional issues, including octave translation invariance (which counts as one of the six symmetries), calibration (of pitch translation invariance and, analogously, of time scaling invariance), and repetition (an additional aspect of music not included in the analysis so far).
The final stage in the analysis is an attempt to develop a plausible explanation of the significance of constant activity patterns, and of why they cause the listener to feel the emotional effect of music. A plausible (if somewhat speculative) hypothesis is that constant activity patterns in the listener's brain are an "echo" of constant activity patterns in the speaker's brain, and that constant activity patterns in the speaker's brain indicate the speaker's level of "conscious arousal". The exact meaning of "conscious arousal" is uncertain, but it is assumed to be something that involves modal changes over large regions of the brain (which accounts for the increased constancy of activity patterns) and that reflects some aspect of mental state likely to be of interest to other people. The emotional response to perceived musicality rests on an assumption, in effect "hard-wired" into our brains by evolution, that if a speaker is consciously aroused and the content of their speech is emotionally charged, then the listener should take that content more seriously.
A final chapter in the book speculates about the effect that a scientific understanding of music will have on the existing music "industry": in the future, instead of buying music composed by brilliant composers and songwriters, we may simply run some software on our personal computers and press a "Compose New Music" button.