Abstract:
Language and music afford two instances of rich syntactic
structures processed by the human brain. Important differences in
the form, purpose, and use of syntactic structures in the two
domains suggest that the processing of structural relations in
language and music should be unrelated. However, recent
event-related brain potential (ERP) data suggest that some aspect
of syntactic processing is shared between the two domains. This
apparent contradiction can be resolved in a framework that adapts a
recent psycholinguistic theory of sentence processing ("Syntactic
Prediction Locality Theory") to the processing of musical
structure. Syntactic Prediction Locality Theory (Gibson, 1998,
Cognition 68(1): 1-76) provides a metric for structural integration
and memory costs during sentence processing. Combining this theory
with recent ERP results leads to a novel hypothesis: linguistic
and musical syntactic processing engage different cognitive
operations but rely on a common set of neural resources for
structural integration in working memory (the "shared
structural integration resource" hypothesis, or SSIR). The SSIR
hypothesis yields a counterintuitive prediction concerning musical
processing in aphasic individuals, namely that high- and
low-comprehending agrammatic Broca's aphasics should differ in
their musical syntactic processing abilities. This hypothesis
illustrates how comparing linguistic and musical syntactic
processing can serve as a useful tool for studying processing
specificity ("modularity") in cognitive neuroscience. This work
was supported by the Neurosciences Research Foundation.