Computational approaches to analyzing and generating comics

Abstract

The past decade has seen an increasing focus on visual narratives like comics as an area for investigating numerous facets of cognition across several subfields of Cognitive Science (Cohn & Magliano, 2019). While early work focused on applying linguistic theories to analyze the structure of these visual and multimodal narratives, empirical work has extended to methods in cognitive psychology and cognitive neuroscience. This research has illustrated how the visual representations of sequential images share structural properties with language and often overlap in their neurocognitive comprehension mechanisms (Cohn, 2019), albeit manifested in the visual-graphic modality rather than the verbal modality. Adjacent to this has been a growing focus on computational methods applied to comics (Augereau, Iwata, & Kise, 2018; Laubrock & Dunst, 2019). These include the use of computational modeling to analyze corpora of comics, the use of parsers to extract underlying properties of comics and their comprehension, and the programming of computational systems to generate novel comics. These approaches again provide opportunities to integrate various facets of cognition, as they combine analytical methods often used for text with those often applied to visual representations.

