Novel Worlds Conference 2018

Thursday, October 18 – Saturday, October 20, 2018
Montreal, QC

The aim of the conference is to begin the long overdue conversation between data-driven research and literary scholarship more generally. The particular theme of the conference will center around the question of “worlds”: How are we to think about the worlds of the novel, either the possible worlds constructed by novels or the worlds outside of the novel through which they circulate and help shape? What theoretical models are currently available and what work remains to be done? What can computational models contribute to this conversation and where are the limitations? As questions of scale enter more and more into our research, from notions of world literature to worlds of deep time, the time is ripe to begin the work of methodological bridging and identify common critical ground.

Read the abstracts below!

Presentations:

Mark Algee-Hewitt and Nichole Nomura, “Novel Worldbuilding: Representation and communication in 20th century genre fiction”

Marijeta Bozovic, “Knight Moves: Russifying Quantitative Literary Studies”

Susan Brown, Joel Cummings, Kim Martin, and Deb Stacey, “Women and Fiction, or the Angel in the Machine: Comparative Computational Literary History”

Wai Chee Dimock, “The Three-Body Problem: Computation and the Data of Translation”

Jim English, “Separate Worlds: Genre and Reception in the Age of Literary Superabundance”

Matt Erlin, “Is that Kafka!? Exploring Computational Approaches to Untranslatability”

Adam Hammond, “Is Fictional Dialogue Distinctive?”

Eric Hayot, “Poetry Worlds”

Berenike Herrmann, “Kafka’s worlds as data: Rediscovering the skillful un/making of certainty”

Fotis Jannidis, “Literature as a commodity – distant reading ‘dime novels’”

Priya Joshi, “The Invention of Archives:  Books, Histories, Readers”

Heather Love, “Realism and Realization in Patricia Highsmith’s The Price of Salt”

Rachel Sagner Buurma, “Smaller Worlds”

Hoyt Long, “Sampling World Literature”

Christina Lupton, “Michel Serres, Ali Smith and the Reader’s Time”

Laura Mandell, “Queer Worlds: Text-mining against Gender Normativity”

Geoffrey Rockwell and Stéfan Sinclair, “Thinking-Through the Body of Text”

Gisèle Sapiro, “Possible worlds as sets of possibilities: using MCA to explore literary narratives”

Richard Jean So, “The Geometry of Whiteness: A Data History of Race and the Post-war US Novel”

Stéfan Sinclair and Geoffrey Rockwell, “Dreamscape: Developing a New Geospatial Tool on Shifting Ground”

Ted Underwood, “Can We Map the World of Genres?”

Matthew Wilkens, “‘Too isolated, too insular’: American Literature and the World”

Abstracts:

Mark Algee-Hewitt and Nichole Nomura, “Novel Worldbuilding: Representation and communication in 20th century genre fiction”

The created worlds of novels exist on the borders between mimesis and invention, pulling from the real-world experience of the author and her readers to create a fictionalized setting for plot, theme, and narrative. But as the worlds built by authors become less reflective and more inventive, particularly in twentieth-century genres like Science Fiction and Fantasy, the strategies the author employs to educate her readers about the rules of the newly invented world, while maintaining the flow of narrative, change accordingly. In this project, we explore the narrative techniques, and microgeneric shifts, that allow authors of genre fiction to simultaneously create and communicate invented worlds. Pushing back on Bakhtin’s concept of the chronotope and its implicit connection with the real, we turn to text mining methods to identify and parse the lexical, syntactical and generic changes that authors use to put unreal worlds in contact with the lived experience of readers.

Marijeta Bozovic, “Knight Moves: Russifying Quantitative Literary Studies”

Ideas move strangely. Such was a central tenet of Russian Formalism and its many heirs—for the early twentieth-century Russian avant-garde of linguistic and literary research has had its own unexpected legacies. Today, the most provocative self-proclaimed heirs to the Formalists come from a very visible branch of what falls under the umbrella term of Digital Humanities: computer-assisted quantitative literary analysis.

As scholars working within Digital Humanities and Russian literary studies, my collaborators and I attempt to examine the suggestive continuities between Russian literary theory and contemporary quantitative literary analysis. Our aim in placing the two fields into dialogue is to provoke self-reflection and disquiet over method in both, rather than to gloss over inconsistencies and divergence.

Susan Brown, Joel Cummings, Kim Martin, and Deb Stacey, “Women and Fiction, or the Angel in the Machine: Comparative Computational Literary History”

Claims about the relationships between women and novels, and women writers and the form of the novel, are abundant; indeed the novel is bound up with the emergence of feminist literary criticism. Yet, once one moves beyond a very small set of writers and texts, it becomes very hard to assess larger claims about novels and gender, particularly some of the more interesting claims having to do with the relationship between novels and social formations. As a means of thinking through what would be required to do so, we will advance some exploratory work that draws on The Orlando Project’s data about women writers and their texts. We will compare high-level views of generic patterns in Orlando’s data with those found in other computational studies of the novel and women’s writing. We will explore questions prompted by these visualizations through what Alison Booth calls midrange reading of other visualizations that bring to the fore connections between the lives and cultural identities of the authors of those genres, as structured through an ontology produced for and from the Orlando data. Looking beyond direct connections between writers to links made by shared identities and social factors allows us to consider what kinds of inquiry are facilitated by different computational models. Comparisons are more evocative than conclusive, given that they will be based on incommensurable datasets, but that in itself is instructive of the challenges involved in trying to ask big questions.

Wai Chee Dimock, “The Three-Body Problem: Computation and the Data of Translation”

Beginning with Cixin Liu’s widely popular (and Hugo Award-winning) sci-fi novel, The Three-Body Problem, translated by Ken Liu, I explore the proliferating data of translation as an interesting problem for computation. The Three-Body Problem, generally considered a throwback to the earlier “alien contact” format and unconcerned with the eco-catastrophes dominating contemporary sci-fi, nonetheless becomes much more contemporary through the porousness of Ken Liu’s translation. I discuss Ken Liu’s exhibition, “Shanghai 2116: A City Under Water,” as a case of seepage interesting both on its own terms and as a challenge to machine learning.

Jim English, “Separate Worlds: Genre and Reception in the Age of Literary Superabundance”

This paper considers the economic and institutional circumstances of contemporary fiction production in light of some recent findings from a data crawl of the Goodreads social reading site. Notable trends in the c21 publishing industry include the massive number of new novels emerging each year; the rapid creation and saturation of new sub- and cross-generic niches (paranormal romance, urban fantasy, Christian erotic); the innovative use of social marketing tools such as Facebook Messenger; and the rise of Goodreads, the giant reception wing of Amazon, Inc., with its online curators, clubs, and competitions, and its various unique affordances. Judging from analysis of a random sample of 1,750 highly active Goodreads users, these developments may be running counter to the prevailing sociological narrative about cultural consumption, which stresses ever more fluid and permeable fields and more freely ranging and omnivorous consumers. The people who make heaviest use of the site – a group that includes a large fraction of today’s true enthusiasts of the novel form – do often describe themselves as wide-ranging and eclectic in their reading habits. But the patterns of taste actually observed on Goodreads consist of tight preference-clusters around established genres or genre-hybrids. Viewed through this analytical aperture, reading in the digital age is a scene of small and separate worlds, segregated neighborhoods of reception.

Matt Erlin, “Is that Kafka!? Exploring Computational Approaches to Untranslatability”

In the wake of the publication of Barbara Cassin’s Vocabulaire européen des philosophies (2004), rendered into English in 2014 as the Dictionary of Untranslatables, the category of untranslatability has emerged as a challenge to the easy globalism associated with certain models of World Literature. As Shaden Tageldin puts it in her 2014 contribution to the ACLA’s Ideas of the Decade, “If translatability has underpinned ‘efforts to revive World Literature’ within and against the discipline of comparative literature over the last decade (3), as Emily Apter has argued in Against World Literature (2013), surely its obverse—untranslatability—is a ghostwritten word of that decade and a watchword of the next.” Similar concerns can be found in recent work on translation theory, addressed, however, from a distinct perspective, one that foregrounds the untranslatable as a spur to creativity and advocates for translation as a world-disclosing act of interpretation. Lawrence Venuti has been a key figure in this context, arguing strongly for “foreignizing” over “domesticating” translations, because the former exercise “an ethnodeviant pressure on [target-language cultural] values to register the linguistic and cultural difference of the foreign text, sending the reader abroad” (1995).

What is missing from these and a number of other fascinating analyses is any engagement with research in computational translation studies, which has tended to focus, not on unique “cultural moments that deny transfer” and provide a nodal point for critical engagement, but on untranslatability as a universal feature of all languages. Corpus linguists and computer scientists have sought to develop conceptual categories and metrics that allow for a more precise determination of just what is being “lost” in translation as well as what non-typical features are being grafted onto the target languages. Of particular significance is the idea of “translation universals,” the notion that “translated texts, in any language, can be considered a dialect of that language” (Volansky, Ordan, and Wintner, 2013).

On the basis of a digital corpus of twelve English (for now) translations of Franz Kafka’s novella, Die Verwandlung, my presentation seeks to place these approaches into a productive dialogue with one another in order to disentangle a variety of translation effects. In a first step, I will offer some examples of how corpus-based approaches can clarify what we mean when we refer to a “foreignizing” translation, by enabling us to identify transformations of specific linguistic features. The second part of the discussion will suggest some corpus-based strategies for pinpointing the untranslatable, those moments in the original that resist easy assimilation into the target language. Ultimately, if we want to understand translations of Kafka’s fiction (and literary translation more generally) as a form of worldmaking that challenges or even extends the linguistic and cultural horizons of the cultures into which it is translated, then we need to be able to make distinctions: between effects that can be linked to alleged translation universals, effects that reflect the positionality of particular translators, effects that depend on specific features of the target languages, and effects that derive from something unique to Kafka. Of course this is a tall order, and one that may run up against the limits of both our current computational tools and the hermeneutic abilities of human readers, but I think it is a project worth pursuing.
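
As a rough illustration of the kind of corpus-based comparison sketched above (a generic stand-in, not Erlin’s actual pipeline), the following Python snippet computes a few indicators often discussed in connection with translation universals — mean sentence length, type-token ratio, and lexical density — across a hypothetical folder of translation files.

    import re
    from pathlib import Path

    # Hypothetical layout: one plain-text file per English translation of Die Verwandlung,
    # e.g. translations/muir_1933.txt, translations/bernofsky_2014.txt.
    TRANSLATIONS = Path("translations")

    # Crude stop list standing in for a proper function-word inventory.
    FUNCTION_WORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in",
                      "on", "at", "by", "for", "with", "as", "is", "was", "were"}

    def profile(text):
        """Return simple stylistic indicators for one translation."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        tokens = re.findall(r"[a-zA-Z']+", text.lower())
        content = [t for t in tokens if t not in FUNCTION_WORDS]
        return {
            "mean_sentence_length": len(tokens) / max(len(sentences), 1),
            "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),
            "lexical_density": len(content) / max(len(tokens), 1),
        }

    for path in sorted(TRANSLATIONS.glob("*.txt")):
        stats = profile(path.read_text(encoding="utf-8"))
        print(path.stem, {k: round(v, 3) for k, v in stats.items()})

Divergences among translations on such indicators could then be weighed against the four kinds of effects the abstract distinguishes.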

Adam Hammond, “Is Fictional Dialogue Distinctive?”

For the past several years, my work with Julian Brooke and Graeme Hirst has focused on using techniques in computational stylistics to explore the phenomenon of dialogism or multi-voicedness in literature. In this paper, we test the fundamental assumption on which this research has been premised: that dialogism exists at all. Beginning from Andrew Piper’s finding in Enumerations that “character-text” (the words used to describe characters) is surprisingly uniform within and across novels, we ask whether something similar might be true of “dialogue-text” (the words that characters say). Are authors able to produce fictional worlds in which characters speak in styles distinguishable from one another and their authors? Does the manner of writing dialogue differ markedly from one author to another? These questions turn out to be difficult to answer (and even to investigate) computationally. For novels, the difficulty of associating dialogue with characters at scale makes a full investigation of the problem impossible. By using our quantitative approach to literary style to compare novels’ narration against all dialogue, however, we are able to suggest that individual authors — as well as authors in particular periods — do seem to write dialogue distinctively. Turning to drama, where the problem of character-dialogue association is less severe, we employ a variety of classification techniques to show that individual characters’ dialogue can indeed be highly distinctive — and that, as expected, the degree of this distinctiveness varies widely between individual authors.
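
For drama, where speeches already come attributed to speakers, the classification experiment described above can be approximated with a standard text-classification pipeline. The sketch below is a generic illustration rather than the authors’ actual method; the speech list is a placeholder.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Placeholder data: (speaker, speech) pairs, e.g. parsed from TEI-encoded drama.
    # A real experiment needs many speeches per character.
    speeches = [
        ("HAMLET", "To be, or not to be, that is the question."),
        ("OPHELIA", "O, what a noble mind is here o'erthrown!"),
        # ...
    ]
    speakers = [s for s, _ in speeches]
    texts = [t for _, t in speeches]

    # Word-frequency features plus a linear classifier that predicts the speaker.
    model = make_pipeline(TfidfVectorizer(lowercase=True), LogisticRegression(max_iter=1000))

    # Cross-validated accuracy well above the majority-class baseline is one rough
    # indication that characters' dialogue is stylistically distinctive.
    scores = cross_val_score(model, texts, speakers, cv=5)
    print("mean accuracy:", scores.mean())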

Eric Hayot, “Poetry Worlds”

It is possible to describe diegetic worlds in formal terms. But what about nondiegetic worlds? Can they exist? How? To test the proposition I move away from those cultural modes where the fictional diegesis is “easy”—so, novels, video games, films—to a case where the entire question of the diegesis is up in the air: the lyric poem. I’ll talk briefly about the history of the lyric diegesis, and then focus in on one specific genre of it (the Chinese fu poem), using the latter as a test case for a series of questions about the possibility and nature of an asymptotically “formal” worldedness.

Berenike Herrmann, “Kafka’s worlds as data: Rediscovering the skillful un/making of certainty”

Franz Kafka is known for shaping fictional worlds marked by a profound uneasiness – rendered in terse prose. From a computational perspective, the attested uniqueness of Kafka’s work may be turned into a set of empirical questions: How exactly, in terms of recurrent word usage, does Kafka achieve his ‘tentative realism’ (Engel, 2010, p. 84)? How does he craft his prose to create alternative, but strangely realistic, worlds? What dimensions of his texts cater to the different types of emotions that guide and reinforce readers in co-building fictional worlds (Tan, 2008)? Traditional scholarship points to a unique combination of realistic and fantastic elements, and a particular technique of perspectivation.

In my contribution, taking up this lead, I quantitatively examine Kafka’s prose in comparison with a selection of German fictional narrative literature published by 43 authors between 1880 and 1930 (a total of 102 texts; 4.9 million words). Applying software originally created for social-psychological analyses of expressive writing (LIWC, Pennebaker et al., 2015), I examine the corpus in terms of fictional world building (see also Piper, 2016) and markers of literariness (van Peer, 1986). Using the 86 categories assigned by the German LIWC dictionary, I identify categories associated with (a) realistic world-building, (b) epistemic doubt and tentativeness, and (c) literariness.
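
The core operation in a LIWC-style analysis is dictionary lookup: each token is matched against category word lists, and a text is summarized as the share of tokens falling into each category. A minimal sketch follows; the tiny dictionary and the input filename are invented stand-ins for the licensed 86-category German LIWC lexicon and the actual corpus.

    import re
    from collections import Counter

    # Toy stand-in for the German LIWC dictionary: category -> list of word stems.
    DICTIONARY = {
        "tentative": ["vielleicht", "beinahe", "scheinbar", "irgendwie"],
        "certainty": ["immer", "niemals", "gewiss", "sicher"],
        "perception": ["sehen", "hör", "fühl", "blick"],
    }

    def liwc_profile(text):
        """Return the proportion of tokens matched by each category's stems."""
        tokens = re.findall(r"\w+", text.lower())
        counts = Counter()
        for token in tokens:
            for category, stems in DICTIONARY.items():
                if any(token.startswith(stem) for stem in stems):
                    counts[category] += 1
        total = max(len(tokens), 1)
        return {category: counts[category] / total for category in DICTIONARY}

    # Hypothetical plain-text file of Die Verwandlung; comparing this profile against
    # profiles for the 1880-1930 reference corpus grounds category-level claims.
    with open("die_verwandlung.txt", encoding="utf-8") as f:
        print(liwc_profile(f.read()))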

Fotis Jannidis, “Literature as a commodity – distant reading ‘dime novels’”

There is a field of popular fiction that doesn’t often attract the attention of German literary scholars: ‘dime novels’, sometimes also called pulp fiction because of the poor quality of the paper. ‘Dime novels’ are regarded as a specific group of texts, usually short genre novels (around 64 pages), sold largely at magazine kiosks rather than in bookshops. Millions of them are printed every year, but they don’t appear on any bestseller list because they don’t belong to the usual system of literary communication. There is not a lot of literary criticism on dime novels in Germany, and for a long time it perceived them mainly as part of the cheap escapist entertainment industry, targeting especially the lower classes and adapting to their specific skills and needs; in recent years, by contrast, the complexity of some of the series, and of fans’ communication about their dime novels, has been highlighted. A collaboration with the German National Library has made it possible to access 12,000 dime novels published in recent years and to explore their genres, topics and textual complexity in an attempt to reevaluate some of these research positions.

Priya Joshi, “The Invention of Archives:  Books, Histories, Readers”

This paper explores how archives of readership are fabricated from data often hard to make “hard” in order to more fully understand the history of the novel.  It is a meditation on method, and on how specific methods enable specific vistas.  Book historians, for instance, pursue data and books, production and product, markets and mentalités much as data-driven researchers and some historians of the novel have.   Yet, what book historians do with data is often distinct from the work of computational analytics and novel theorists.  The present paper’s focus on the invention of archives is a way to plot its ensuing meditation on method.

Heather Love, “Realism and Realization in Patricia Highsmith’s The Price of Salt”

Throughout her fiction, Patricia Highsmith describes the alienating and derealizing effects of widespread surveillance during the Cold War period. Her 1952 novel The Price of Salt also tracks these effects through recurrent tropes of detection, paranoia, and dissociation. At the same time, Highsmith offers a very different account of observation through the story of one woman’s erotic awakening.

When the novel opens, Therese Belivet perceives the world as a dead and mechanical spectacle. Although she is watchful, she can barely see the people or places around her; these scenes are either blurred by emotion or marked by a terrifying precision. When she falls in love with Carol, Therese begins to see the world as a concrete and living reality: a calm receptivity replaces the anxiety and repulsion that characterized her earlier response to ordinary social scenes. Her attention to Carol—informed by love, but marked by a paradoxical objectivity—shares features with more invasive and aggressive ways of looking, such as the unblinking gaze of the camera, the scanning of the cop on the beat, or the searching scrutiny of the social scientist. But these scenes also suggest how such apparently violent, detached, and objectifying practices can be driven by motives of fidelity and care. This essay tracks the relation between what Mark Seltzer has called Highsmith’s “forensic realism” and the novel’s plot of realization in order to reassess the ethics of observation in the period after WWII.

Rachel Sagner Buurma, “Smaller Worlds”

The rhetorical opposition between “close reading” and “distant reading” invites polar thinking, but in fact many of the quantitative and computational practices gathered up in the bag of “distant reading” actually help create forms that bring into view the medium-sized world, middle-distance scale, and extensible but bounded social group. Novel theory, on the other hand, works well for moving from the small and exemplary to the large and representative, but has more trouble in the middle. It would therefore be useful at this moment for us to turn to some of the mid-sized forms the genre of the novel offers us, and to ask how some computational methods might help make these novelistic forms more visible. The novel series, roman fleuve, or shared-world novel – examples include Trollope’s Barsetshire and Palliser series, Oliphant’s Chronicles of Carlingford, Galdos’s Episodios Nacionales, Arnold Bennett’s Clayhanger Family, and Balzac’s Human Comedy – is one example of a genre-specific form that quantitative analysis may help us to understand more fully than novel theory has allowed. In this paper I will discuss how computational methods might help us theorize and analyze groups of novels whose midsized worlds share uneven connections.

Hoyt Long, “Sampling World Literature”

Histories of world literature have long looked to the literary anthology as instrumental to the construction of world literature itself. In 1927, the Tokyo publisher Shinchosha joined this history when it released its hugely successful Collected Works of World Literature, a 57-volume series that billed itself as Japan’s Everyman’s Library. If anthologies such as this one raise important questions about the creation of world literary canons and their intended audiences, they also challenge us to situate their underlying principles of selection within broader national and international currents. In this talk, I use the anthology as a wedge with which to think about such currents as captured in bibliographic data. Specifically, I utilize an index of nearly 30,000 titles translated into Japanese between 1912 and 1955 to contextualize the contents of Shinchosha’s series. I examine how its sampling of world literature aligns with and deviates from the national market, both before and after publication, and consider what this tells us about the specific vision of world literature that editors were creating. Finally, I situate these national trends within international ones in order to think about what novel configurations of the literary world emerge in any one place out of the many accumulated acts of selection by individual translators.

Christina Lupton, “Michel Serres, Ali Smith and the Reader’s Time”

This paper looks at Michel Serres’s fantasies of a humanist scholar able to connect texts across time and out of order. Ideally, he suggests, written worlds become infinitely available to the humanist scholar in the present, freed from chronological structures of history. The logical extension of this fantasy is the promiscuous digital reader he imagines in his 2010 polemic, Thumbelina. My case, however, is for the way humanist reading practices reintroduce chronology, even as we fantasise about freeing ourselves from it. My case study will be Ali Smith’s How to Be Both, a novel that shares with Serres a commitment to the principle of historical disordering, but ultimately suggests the way that reading itself demands the conjugation of the text in time.

Laura Mandell, “Queer Worlds: Text-mining against Gender Normativity”

In The Trouble with Normal, Michael Warner hints that the novel form itself curtails the development of queer worlds insofar as novelistic closure is traditionally achieved through either marriage or death. Even during the late eighteenth and early nineteenth centuries, however, there were novels that ended “queerly,” establishing character roles and familial configurations that are not “normal” — Jacobin novels, for instance, written by women writers including Mary Wollstonecraft and Mary Hays. In this talk, I present a project designed to track gender categories, beyond the binary M/F, that emerged in a set of novels written by women between 1788 and 1810 (Garland’s series, The Feminist Controversy Project in England—44 titles, 90 volumes, including novels by Hays and Wollstonecraft). These emergent genders have been foreclosed by history — they won’t neatly map onto the gender categories that we have now. My goal in this project is to demonstrate how text mining can be used to experiment within the field of the history of sexuality by relying upon the research that has gone into understanding eighteenth-century conceptions of gender. The goal of the Feminist Controversy project is to propose a model for text-mining that allows definitional categories to emerge from literary and historical materials instead of being imposed anachronistically, based upon the way we divide up the world now.

Geoffrey Rockwell and Stéfan Sinclair, “Thinking-Through the Body of Text”

Our idea of text, and by extension, text analysis, is grounded in Cartesian assumptions about the separation of mind and body, content and form. The mind, or that with which we think, is the essential content, and the body is the peripheral form. Following Descartes’s Method (2006), we are suspicious of perceptions of form as a source of error while building our epistemology from the certainty of the cogito. What would text and text analysis look like if we didn’t start from a Cartesian model where the design of text is unimportant? How might we use computers to interact with texts instead?

This paper will be an experiment in following the critique of Philip Agre in Computation and Human Experience (1997). Agre argues that the field of AI is built on a Cartesian assumption about the division of thinking (cogitation) and body (action). Thinking is constitutive of the subject while the actions of the body are external. This assumption authorizes a tradition of computing to assume that modeling cogitation is modeling human intelligence. Worse, starting with Simon and Newell, Agre argues, AI has taken symbol processing to be an adequate description of cogitation, or what the mind does. This assumption is convenient as it allows computers to serve as a platform for modelling thinking intelligence. The work of AI becomes that of identifying forms of thinking (like playing chess) and then modelling those on a computer as a way of not just learning about computation, but of learning about human experience too. As Agre points out, the problem with symbol processing as thinking is that the world then also has to be represented symbolically for the AI to interact with, and that’s where much is lost. The AI really interacts with an artificial world, not the world we are thrown into. This may work in the case of chess, where the world of chess-playing rules and pieces can be represented satisfactorily, but it doesn’t work with the hurly-burly of human experience. By the late 1990s Agre, building on the work of Dreyfus (1979) and others, was suggesting a phenomenological turn back to the body and human intelligence as lived in a body interacting with the world.

Gisèle Sapiro, “Possible worlds as sets of possibilities: using MCA to explore literary narratives”

How can we encode possible fictional worlds for data analysis? This paper seeks to construct operative variables for elaborating idealtypes of possible worlds, using Multiple Correspondence Analysis. MCA is a powerful tool of geometric data analysis, which explores the relations between a large set of modalities. It is particularly well suited to field theory. Bourdieu conceives of fields as spaces of possibilities. The concept of field helps us grasp the aesthetic possibilities in a specific socio-historical configuration. Until recently, however, whereas computational textual analysis has emerged as a domain following Moretti’s “distant reading,” MCA was mainly used for external analysis: prosopography of writers (including their aesthetic position-taking), databases of books in translation, cultural practices of a literary festival’s audience. This paper proposes a research program for using MCA to explore fictional worlds as possible worlds by encoding properties of the works themselves, following Moretti’s distant reading approach, but with different tools. MCA can indeed help us display the space of possibilities of fictional possible worlds in a specific configuration of the literary field, not just through a lexicometric approach, as has been interestingly done by textual analysts, but by encoding properties of the narrative (using the fabula and syuzhet distinction), its structure and composition, its time and space settings, the point of view, the characters, etc. It could also help us compare literary narratives with other cultural forms of storytelling, and raise the issue of the relationship between models, norms and practice.
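
To make the mechanics concrete: MCA can be computed as correspondence analysis of the one-hot (indicator) matrix built from the encoded variables. The sketch below uses invented example codings (narration, ending, setting) purely for illustration; it is not Sapiro’s coding scheme.

    import numpy as np
    import pandas as pd

    # Invented categorical coding of a handful of narratives (illustrative only).
    works = pd.DataFrame({
        "narration": ["first_person", "third_person", "third_person", "first_person"],
        "ending":    ["marriage", "death", "open", "open"],
        "setting":   ["urban", "rural", "urban", "urban"],
    })

    # MCA = correspondence analysis of the indicator (one-hot) matrix.
    Z = pd.get_dummies(works).to_numpy(dtype=float)
    P = Z / Z.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses (works)
    c = P.sum(axis=0)                    # column masses (modalities)
    S = np.diag(r ** -0.5) @ (P - np.outer(r, c)) @ np.diag(c ** -0.5)
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

    # Coordinates of the works on the first two axes of the space of possibles.
    row_coords = np.diag(r ** -0.5) @ U[:, :2] * sigma[:2]
    print(pd.DataFrame(row_coords, columns=["axis_1", "axis_2"], index=works.index))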

Richard Jean So, “The Geometry of Whiteness: A Data History of Race and the Post-war US Novel”

This paper attempts to imagine a version of cultural analytics – the use of computers and quantitative models to study culture and literature – focused on race that is commensurable with critical race studies. The two are often seen as incompatible, the former relying on the quantification and creation of artificial categories of identity and the latter fixated on the deconstruction of those categories. The long history of the use of statistics (and more recently computational algorithms) to degrade racial minorities looms over any new attempts to quantify race and racial difference. I develop a case study focused on the post-war American novel and whiteness to argue that a conceptual and methodological meeting is possible between these two fields, one that generatively pushes the limits of both.

Stéfan Sinclair and Geoffrey Rockwell, “Dreamscape: Developing a New Geospatial Tool on Shifting Ground”

Humanities scholars are increasingly interested in studying named entities, such as geographical locations, in large-scale text collections. Unfortunately, best-of-breed named entity annotators still produce noisy and inaccurate data. We could use the data and ignore the problems, or, on the contrary, avoid the problems by working on other data. Dreamscape, a new Voyant Tools prototype, is an attempt at a compromise: to suggest the potential of working with geospatial data at scale while also critically recognizing the limitations and responsibilities of the scholar.
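
The noisiness referred to above is easy to reproduce with off-the-shelf tools. The sketch below — generic spaCy entity recognition plus Nominatim geocoding, not the Voyant/Dreamscape implementation, with a hypothetical input file — extracts place-like entities and geocodes them while keeping unresolved names visible for manual review rather than silently discarding them.

    import spacy
    from geopy.geocoders import Nominatim

    nlp = spacy.load("en_core_web_sm")                       # requires the model to be installed
    geolocator = Nominatim(user_agent="dreamscape-sketch")   # identify yourself to the service

    # Hypothetical input text.
    with open("novel.txt", encoding="utf-8") as f:
        doc = nlp(f.read())

    # Keep all candidate place annotations so false positives ("Florence" the person,
    # "Reading" the town) remain inspectable by the scholar.
    places = sorted({ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")})

    for name in places:
        location = geolocator.geocode(name)                  # rate-limit real requests (about 1/sec)
        if location is None:
            print(f"{name}: no match (flag for manual review)")
        else:
            print(f"{name}: ({location.latitude}, {location.longitude})")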

Ted Underwood, “Can We Map the World of Genres?”

Imagining the world of aesthetic affiliation as social “space” is not just a scholarly conceit: the metaphor has been embraced by a wide range of people who draw “maps” of the internet and of musical genre. But many of us also sense that the metaphor has limits. To explore those limits, this paper takes the project of mapping seriously, and tries to measure the proximity between fictional genres in several different ways. I conclude that maps of genre are worth drawing, and can teach us something about literary history, although the world of genre differs from physical space in fundamental ways—not just because it has more dimensions, but because the shortest path between two points may not always be a line.
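
One simple way to measure proximity between genres — a generic illustration, not Underwood’s own method, with hypothetical sample files — is to represent each genre by an aggregate word-frequency vector, compute pairwise cosine distances, and project the distance matrix into two dimensions for a map-like view.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import cosine_distances
    from sklearn.manifold import MDS

    # Hypothetical input: one aggregated text sample per genre.
    genre_texts = {
        "detective": open("samples/detective.txt", encoding="utf-8").read(),
        "gothic": open("samples/gothic.txt", encoding="utf-8").read(),
        "science_fiction": open("samples/science_fiction.txt", encoding="utf-8").read(),
        "sentimental": open("samples/sentimental.txt", encoding="utf-8").read(),
    }
    labels = list(genre_texts)

    counts = CountVectorizer(max_features=5000).fit_transform(genre_texts.values()).toarray()
    freqs = counts / counts.sum(axis=1, keepdims=True)    # relative word frequencies per genre
    distances = cosine_distances(freqs)

    # Flatten the high-dimensional distances into a 2-D "map"; the projection distorts,
    # which is part of the point about the limits of the spatial metaphor.
    coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(distances)
    for label, (x, y) in zip(labels, coords):
        print(f"{label}: ({x:.3f}, {y:.3f})")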

Matthew Wilkens, “‘Too isolated, too insular’: American Literature and the World”

Fiction by US authors published over the last two centuries responded — sometimes with alacrity — to changes in domestic demography and economics, but could be stubbornly resistant, in aggregate, to even very large changes in similar factors outside the United States. How should we quantify and explain this discrepancy? Does it represent, as has sometimes been argued, a fundamental lack of interest in global cultures on the part of American writers? Is the apparently strong domestic bias of US literature different in scale or in kind from that of other national literatures? This talk uses a corpus of more than 100,000 volumes of English-language fiction in combination with social and economic data from a range of sources to assess the changing factors that shaped large-scale literary attention — especially in the US and UK — from 1800 to the turn of the millennium.