The Journal of the AGLSP


 

Ben Read, David Isaak, Emma Holland, Josh Grgas, Emi Karydes, Soroa Lear, and Thor Madsen were all students at Reed College in Kris Cohen’s ART 525: Approaches to Media Studies course, part of the Masters of Liberal Studies (MALS) program. Isaak, Grgas, Holland, Karydes, and Madsen are graduate students in the program. Lear and Read were undergraduates taking the course, and both graduated in May 2021.

 
 
 

nonfiction

Let’s Be Disinterested Together:
Social Media, Personhood, and Control

Ben Read, David Isaak, Emma Holland, Josh Grgas, Emi Karydes, Soroa Lear, and Thor Madsen, Reed College 

<preface type = “abstract”>[1]

The present study addresses Simone Browne’s question, “How do we understand the body when it is made into data?”[2] Using a shared Instagram account, we explore the relationship between social media, data, and personhood through a collaborative portrait of a fictitious social media user. We contextualize this relationship as a social phenomenon by tracing two separate but interrelated genealogies of data transmission and control. We analyze how the Instagram algorithm responded to our behavior, and we wrestle with who or what exactly we created and how the account represents the group and is represented by our interests. Reckoning with this digital existence, we look at disinterest as a possible form of resistance to networked societies of control.

</preface>

 

<preface type = “introduction” subtitle = “data and the human”>

In Control: Digitality as Cultural Logic, Seb Franklin attempts to go beyond understanding how our contemporary socioeconomic reality has been changed by digital technology and computers, as many attempts to periodize the so-called information age have done. He suggests that those technological changes emerge from and are contingent on a longer history of political economy based in the broader logic of digitization. Franklin describes this digitality as a process of filtering, sorting, and organizing data or information “to render the world legible, recordable, and knowable.”[3] He writes: “This ontological digitality, separated from the machines and interfaces with which it has become synonymous, entails a fundamental process of discretization that can be purely conceptual as much as it can enable particular technological processes.”[4] Digitality operates as a conceptual framework that predates and, in some ways, acts as the precondition for the development of digital technology. The process of selective inclusion and exclusion that digitality transcribes in turn describes the regulation of who gets to be human, a question that has long sat at the nexus of the body and data.

One history of this relationship finds its origins in the political economy of transatlantic slavery and anti-blackness. In her essay “Mathematics Black Life,” Katherine McKittrick argues that the historical archive of data about enslaved people relegates black bodies to the status of objecthood rather than the status of human being, as enslaved people were recorded as cargo and property.[5] Insofar as it relies on the separation of enslaved people from their birthplace and each other, isolating them as objects to be shipped and circulated as commodities in the slave trade, the regime of slavery provides one example of how the discretizing logic of digitality has its roots in violent exclusion—in this case, the exclusion from the category of the human. Similarly, W. E. B. Du Bois writes about how sociological data collected during the Reconstruction era generalized trends about black life in America in such a way as to obscure or exclude what Franklin would call noise and what Du Bois calls the individual. Franklin describes the basic process of digitization as the “imposition of uniform, discrete steps onto continuous matter” which “allows for the exclusion of ‘noise’ by dividing [analog signals] into ‘desired’ bands and those that fall outside of them.”[6] In his essay “Sociology Hesitant,” Du Bois expresses his skepticism of sociology as a science that seeks to determine laws of human action based on observable information because it similarly excludes noise: “the evident incalculability in human action” that he represents as the figure of “the Individal Man [sic].”[7] Du Bois suggests that the individual moves within and against the general trends or truths which sociology, as a data science, claims. It is this factor of chance that Du Bois tried to represent in the unconventional data portraits which serve as the visual inspiration for the figures included here.

In her essay on how data interact with and constitute racial categories, Simone Browne brings these concerns about data as a means of regulating the human to bear on contemporary technology, asking, “How do we understand the body when it is made into data?” This question is, at a basic level, a question of digitality, and she suggests that the digitization of the racial subject “[alienates] the subject by producing a ‘truth’ about the body and one’s identity (or identities) despite the subject’s claims.”[8] Browne demonstrates as well the contingency of contemporary regimes of control on past conceptualizations of race, identity, and power; her understanding of digital epidermalization as “the exercise of power” by which surveillance technology defines personhood, bodies, and identity through biometric data follows from Frantz Fanon’s account of epidermalization as the process in which he is objectified and overdetermined by the white observer’s gaze that reduces him to his black skin.[9]

Through each of these cases, we can trace a legacy of slavery: one genealogy of filtering data in which making the body into data allows for the control or management of that body. In the mid-twentieth century, cybernetics emerged as another way of thinking about the relationship between data and the human, seeking to analyze information, systems, and behavior by collapsing the distinction between humans and machines.[10] This view of the individual and the body as mechanisms that process, communicate, and produce information in part precipitates the shift that Gilles Deleuze tracks from disciplinary societies to “societies of control,” from the factory to the corporation. Deleuze suggests that where institutions like the factory represented spaces of enclosure which served as “molds” for individuation, the corporation represents a system of “modulations” and management that “substitutes for the individual or numerical body the code of a ‘dividual’ material to be controlled.”[11] That is to say, as dividuals, we are divisible as “masses, samples, data, markets, or ‘banks,’” and having been divided, our material can be sorted, controlled, or digitized.[12]

Franklin draws on Deleuze to develop his understanding of control. Synthesizing Deleuze’s theorization of the societies of control with the technical understanding of control as a system of information processing and self-regulation, Franklin argues:

The logic of control as episteme describes a wholesale reconceptualization of the human and of social interaction under the assumption [...] that information storage, processing, and transmission [...] not only constitute the fundamental processes of biological and social life but can be instrumentalized to both model and direct the functional entirety of such forms of life.[13]

Digitality operates as a mechanism of this control. In this paper, we explore this understanding of control and how individuals are made into dividuals through the management of personal data in the context of the relationship between social media, algorithms, and personhood. Within this context, we are interested in “new forms of resistance” to the societies of control. As Deleuze asks, “Can we already grasp the rough outlines of these coming forms, capable of threatening the joys of marketing?”[14]

</preface>

 

<preface type = “methods” subtitle = “discoursenetwork3k”> 

As an exploration of the digital relationship between data and personhood, we attempted to both simulate and represent a fictional social media user. The creation of a data portrait, which encompasses both the simulation and representation of this persona, required collaborative planning and frequent communication. The group met three times over Zoom and used a Google Doc to capture ideas, ask questions, and document decisions about the project. The group decided to use Instagram as the social media platform for two reasons. First, it was the platform most familiar to the members of the group. Second, the platform allows multiple participants to log in with the same username and password from different IP addresses and manipulate the account at the same time. This permitted each member of the group to engage both passively and actively with the account without the fear of being locked out or marked as spam and deactivated by the platform. A persona was collectively and quickly created by the group using the raw materials from our course. The persona was named “Art Smith” because the graduate course originated in the Art department and Smith is a commonplace surname. The username was an updated homage to Alan Liu’s “discourse network 2000,”[15] and the email address referenced the course number. The group decided Art should be middle-aged to avoid being either very young or very old. The account was created at the first Zoom meeting, and a profile image was selected from the W. E. B. Du Bois data visualizations shown at the 1900 Paris Exposition.[16] A short bio for Art Smith was added in the days following the initial meeting: “Graphic designer working with sociology, storytelling, and social justice. Pronouns: they/them” (Fig. 1).


Instagram uses a variety of algorithms, classifiers, and processes, each with its own purpose. For the sake of simplicity, we will refer to each of these elements collectively as “the algorithm” throughout our observations. The app accumulates information about a user’s behavior and begins to learn their preferences. The developers call this information “signals.” There are six key factors that the Instagram algorithm accounts for as it learns from a user’s signals: interest, relationship, timeliness, frequency, following, and usage.[17] With regard to interest, the feed is based not only on who the user follows but also on the accounts and types of posts the user has liked historically. The algorithm is consistently trying to determine personal relationships by analyzing the user’s interactions (likes, direct messages, search history). If the goal of the account is to grow a social media following, then timeliness becomes important. The app tracks when each user posts to ensure that it is showing the user the latest posts with the highest engagement (likes, saves, comments, shares). Frequency refers less to how often new posts are created than to how often the account is used. For frequent scrollers, the feed will look more chronological as Instagram shows posts since the last visit. If the Instagram app is used less often, the feed will be sorted into what Instagram predicts the user will like, instead of chronologically. Usage and following are related because Instagram monitors both to ensure that users are not bots or spam accounts. The algorithm is designed to add and remove signals and predictions over time, working to get better at surfacing content based on each user’s interests.
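To make this description concrete, the sketch below models the six factors as a simple weighted sum. It is a conceptual illustration only: the Signals fields, the weights, and the rank_score function are our own stand-ins, not Instagram’s published implementation, which remains proprietary.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Per-post signals loosely mirroring the six factors described above (all 0-1)."""
    interest: float      # predicted affinity for this account and content type
    relationship: float  # strength of prior interaction with the poster
    timeliness: float    # recency of the post (newer is higher)
    frequency: float     # how often the viewer opens the app
    following: float     # how concentrated the viewer's follow graph is
    usage: float         # typical session length

# Illustrative weights only; Instagram does not publish how it weighs its signals.
WEIGHTS = {
    "interest": 0.35, "relationship": 0.25, "timeliness": 0.20,
    "frequency": 0.08, "following": 0.07, "usage": 0.05,
}

def rank_score(s: Signals) -> float:
    """Collapse the signals into a single feed-ranking score."""
    return sum(weight * getattr(s, name) for name, weight in WEIGHTS.items())

# Example: a timely post from a close connection, viewed by a frequent scroller.
print(rank_score(Signals(0.8, 0.9, 0.7, 0.9, 0.3, 0.5)))
```

The point of the toy model is simply that several heterogeneous behavioral signals get collapsed into one ordering of posts, which is why a shift in any single signal (for Art, mostly interest) can reshuffle the entire feed.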

After creating the account and adding the biographical statement, but before we posted, liked, and commented, we took one day to make baseline observations. On a user’s first day on the application, they have signaled few preferences, so the algorithm creates a set of predictions based on initial behavior, namely how likely the user is to interact with a post or profile. During the control day, the algorithm could not base any of its suggestions on the six core factors, so its main goal was to provide content that it considered the newest and most interesting based on popular engagement. For us, especially given the inclusion of “social justice” in the account bio, that meant multiple anti-racist accounts whose content had been steadily growing in popularity during the protests against police violence in the wake of the murder of George Floyd. Ironically, the algorithm seemed to pull the word “Art” from our account name as well as the profession “graphic designer” that we included in the bio field, and the “For You” content was a collection of visual art, tutorials on making art boards for design careers, Adobe Creative Suite how-to guides, and graphic art. From here, the key questions we decided to consider were (1) What topics are going to be suggested for Art? (2) Once the algorithm has created an initial profile of us, what sponsored advertisements will it push to Art?

The group used a semi-structured approach in which we all initially followed the same protocol but left room for spontaneous action. The project ran for a three-week period. Each member of the group was assigned a day of the week on which to actively engage with the account, creating posts and interacting with other users’ posts. Other members could observe the account on days to which they were not assigned but could not like, post, or comment. On their assigned day, each person first reviewed the “For You” section. Second, they reviewed the suggested followers. During the first week of the project, the designated group member for that day added the first five suggested accounts as well as additional “seed” accounts based on individual interests. After the first week, each member on their assigned day added only the first five suggested accounts. Third, they reviewed the timeline for posts, ads, stories, comments, etc. Finally, they added at least one image or video post. We decided to avoid Instagram Live because we did not want to present our faces to the algorithm for fear of influencing its perception of Art Smith. At the end of each session, a web form was filled out to collect qualitative and quantitative data about the experience. Screenshots were also used to capture visual information. To avoid potential cross-contamination from other online activities, group members either accessed Instagram through web browsers in private or incognito mode or used dedicated devices for the project.
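As a rough illustration of the record each daily session produced, a minimal sketch of a log entry follows. The field names, the CSV file, and the sample values are hypothetical reconstructions of our web form rather than the actual instrument.

```python
import csv
import os
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class SessionLog:
    """One member's daily observation record (field names are assumed, not the real form)."""
    day: date
    member: str                       # pseudonymous label rather than a real name
    suggested_accounts_followed: int
    posts_created: int
    likes_given: int
    likes_received: int
    sponsored_ads_noted: int
    notes: str = ""                   # free-text qualitative observations

def append_log(path: str, entry: SessionLog) -> None:
    """Append one session record to a shared CSV so entries can be pooled later."""
    row = asdict(entry)
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if new_file:
            writer.writeheader()
        writer.writerow(row)

# Hypothetical entry for one assigned day.
append_log("art_smith_log.csv", SessionLog(date(2021, 4, 12), "member1", 5, 1, 8, 3, 2, "first week"))
```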

Each member of the group approached “being” Art differently. Some attempted to embody Art based on the stated bio. Others thought of Art as a subsection of their own identity. Two members of the group expressed hesitancy about interacting with other users beyond likes because they did not know how to engage as Art. Experience using Instagram varied across the group, and this too resulted in different features being used and unexpected challenges in capturing precisely how the account changed over time. The data download feature from Instagram was incomplete.[18] Although most categories of data were cumulative, some only had so-called “recent” data. Most inexplicably, although not stated in the Instagram documentation, the data pertaining to “ads, profiles, and content you've seen” only appeared to go back one week. Also notable was that the data download did not include the likes Art received, only the likes given by Art. Likes received were recorded by the group in the web form.

</preface>

 

<argument title = “results” subtitle = “almost everyone wears clothes”>

A partial view of how the account changed over time is presented in Figure 2. The group consistently posted images, but other content, like commenting on other posts or stories, was less frequent. Ads were viewed at a slower rate than posts, with the self-reported survey data noting that it took approximately one week before the frequency of sponsored ads became notable. Likes received rose steadily, with a sudden uptick after we reposted a TikTok video of a noted YouTube personality speaking about gender. The third and final week of the project saw almost no new followers for Art.

Art’s overall engagement with the platform and the reciprocal engagement of others with Art are represented in Figure 3, an homage to another Du Bois data visualization from the Paris Exposition.

Art avoids self-identification beyond the stated bio by not revealing a somatic self—a face and body are not shown. The account, instead, sidesteps this type of self-construction by building a persona through interaction with other accounts, interests, and causes. The outline of Art is shaped through posts covering topics such as social justice, gender identity, self-actualization, and through making connections with like-minded accounts. As a result, for an average observer, it is more difficult to determine details such as age, family, or personal history than general political leanings or ideological footing. The result is something reflective of our group: educated, politically liberal, and, despite spending a semester together in an online classroom environment, still relatively opaque to each other in specific, personal ways. This opacity is both strategic (we do not want Instagram to identify the individuals who make up Art) and operative (we could have, for instance, chosen a proxy face to represent Art in the vein of catfishing). The profile became a collection of minds alternating between overlapping each other's content and nudging one another into slight deviations, which logically follows from the semi-structured methodology. Our interdisciplinary approach both attends to and breaks from the model of an empirical study because, like Art, it cannot be easily sorted. As a result, Art is incomplete, and they are not easily traceable or trackable.

The Instagram algorithm thrives on personal data. In her essay on how Big Data transform individuals into characters, Wendy Chun explains that this emphasis on personal data results from a more general shift toward “tethering on- and offline identities as the best and easiest way to foster responsibility and combat online aggression,” which is also a means of control or regulation.[19] An excellent example of the enforcement of identity tethering was one of our first obstacles in setting up the account: creating a new account requires a verification process, which was complicated (but not impossible) to work around in such a way as not to connect the account directly with an existing online data entity. What the algorithm requires to generate revenue is targeted advertising based on specific personal data. What we gave it instead was the loosely imagined persona we created for this project. We did not codify a predetermined set of interests or personality traits, which allowed for flexibility in the content posted and accounts followed, creating a nebulous cloud of data without distinct “neighborhood” edges.[20] We created Art to investigate how the Instagram algorithm would react to an account run by seven separate people acting as one. In this way Art was given agency and an indistinct form but not a cohesive or complete identity. Art has the illusion of agency, but from our perspective, Art is our class project, a digital object, the dividual data shadow of a fictional individual whose internal divisions and boundaries are simultaneously methodologically distinct and perceptually indistinct. Would it be possible for a person (with or without personal knowledge of the seven group members) or an algorithm to determine which posts were written by which person by looking at the captions alone? Members of the group reported uncertainty as to which other members created which posts. On the other hand, from Instagram’s perspective, Art is a steady and distinct stream of data. But which of these is the “real” Art Smith? In some ways, this is the wrong question; Art is real only insofar as they have been digitized.

Art’s opacity is further apparent in our account data, which included a file titled “Your Topics,” defined within the file as a “collection of topics determined by your activity on Instagram that is used to create recommendations for you in different areas of Instagram.” The list of topics included 145 unique results, ranging from the expected (“Poetry”; “Music”; “TV & Movies”; “Performing Arts TV & Movies”) to the perplexing (“Unofficial & Offbeat Holidays & Observances”; “Pants & Shorts”; “America’s Got Talent”; “Drinking Water”). The former group can be directly tied to the types of accounts Art follows and interacts with, whereas the latter reflects Art’s infancy as a data source. The lack of behavioral and biographical anchor points makes it challenging for Instagram to establish a pattern of consistent interests, resulting in a chaotic abundance of potential topics: a list that begs for the type of refinement that comes with more time and more data.
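For readers who want to examine a similar export of their own, the sketch below shows how such a topics file might be loaded and tallied. The file name (your_topics.json) and its layout are assumptions; Instagram’s download format is not documented in detail and changes over time.

```python
import json
from collections import Counter

def load_topics(path: str) -> list[str]:
    """Read a topics file, assumed to be either a bare JSON list of strings
    or an object of the form {"topics": [...]}."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return data if isinstance(data, list) else data.get("topics", [])

def summarize(topics: list[str]) -> None:
    """Count unique topics and crudely cluster them by leading word."""
    unique = sorted(set(topics))
    print(f"{len(unique)} unique topics")
    leading = Counter(t.split()[0] for t in unique if t.strip())
    for word, count in leading.most_common(5):
        print(f"  {word}: {count}")

summarize(load_topics("your_topics.json"))
```

Even this crude tally echoes the point above: with so few behavioral anchor points, the list reads less like a profile than like a sample of available categories.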

A review of “Ad Interests” offers similar results. This list, totaling 319 ad categories, begins with “Online Shopping,” “Dresses,” and “Shopping and Fashion” before expanding into topics such as “SUVs,” “Photography,” “Meditation,” “High-intensity Interval Training,” “Clairvoyance,” and “Human Spaceflight.” Despite Art’s progressive politics—perhaps the account’s most distinct public characteristic—this list includes prominent conservative figures such as “Ben Shapiro,” “Rush Limbaugh,” “Ivanka Trump,” and the generic “Conservatism in the United States,” whereas left-leaning political content is absent. Although the “Ad Interests” list included political figures, no group member noted any political ads. The inclusion of conservative ideologues in the list of ad interests tells us more about Instagram’s default ad listings and sources of revenue than anything about Art’s burgeoning behavior. For a new, opaque account, ads for clothing are expected (almost everyone wears clothes), but even with this universally safe target, algorithmic confusion was apparent. During the active management of the account, group members noted the transition from male-focused clothing ads to more gender-neutral apparel as Instagram seemingly struggled to understand Art and gain their consumerist attention. Toward the end of the project, sponsored ads widened to become more potentially inclusive: self-help, furniture, and career development. Seven users participating in a single account generated varied signals, forcing the application to continually shift its predictions about the user’s persona. As each person’s use of the platform shifted, the algorithm struggled to weigh signals and determine how to respond to the user’s interests.

</argument>

 

<argument title = “discussion” subtitle = “disinterested together”>

What does it mean to have interests? To be interested? In the cybernetic imaginary that Orit Halpern outlines in Beautiful Data: A History of Vision and Reason since 1945, individual, machinic bodies are understood as black boxes, defined by what they do instead of what they are.[21] What the individual as a black box does comes to describe what they are. In the context of the Instagram algorithm, what Art does is perform their interests. Especially in the short time span of our project and the limited interaction between our account and others via direct messages, interest became the most salient of Instagram’s six criteria, determined by who we followed, what kind of content we shared, and what posts we liked or otherwise engaged with. In the data downloaded from Instagram, these interests manifest as “Your Topics” or “Ad Interests.” So the question remains: what does it mean to have interests?

To be interested is to be invested. In Fred Moten and Stefano Harney’s The Undercommons: Fugitive Planning & Black Study, the two thinkers articulate a relationship between interest in the colloquial sense of finding something interesting and the financial sense of having to pay interest, a relationship based on the role of governance. Governance seeks control through regulating interests of both kinds. Governance, or in this case, the algorithm as a form of governance, requires your interest as investment or participation or a form of being tethered to the system. Moten and Harney see interest as a kind of immaterial labor that the system demands from individuals to make it work, and governance creates these interests as much as it manages them. The two write, “Governance operates through the apparent auto-generation of these interests,”[22] as Instagram did during our control period by suggesting popular accounts to follow. In other words, “interests are solicited, offered up, and accumulated.”[23] These accumulated interests constitute the dividual material by which identity can be digitized and controlled. The kinds of black box or digital identities that come into existence through the governance of interests, such as Art, are “subjectivities of interests.”[24]

The flip side of this question is: what does it mean to be disinterested? This question is especially relevant to our project, because our collective attitude toward the Instagram account can be described as disinterested insofar as we were more interested in how to construct and understand Art than in the content itself. The algorithm seemed to grasp at our interests, producing a list of 319 different ad interests, an effect of both the variety of our collective interests and our disinterest in actual content. The sheer variety and the internally contradictory nature of the ad interests the algorithm generated demonstrate an impediment to, if not a failure of, digitality insofar as our interests are understood as the data that allow us to be digitized, to be rendered intelligible as a discrete individual. In some ways, to be interested—which is to be intelligible—is to be white. That is, whiteness allows individuals to be defined by their interests instead of their race and to have good credit in the eyes of governance, to continue the financial metaphor. The Instagram algorithm constructs individuals in the image of whiteness by governing their interests. Moten and Harney describe “the condition of being without interests,” of refusing to participate or invest in the system that wants to control you, as criminality insofar as “governance is understood as the criminalisation of being without interests.”[25] As a refusal to be governed, this criminal disinterest, which is also historically associated with blackness, is one means of resistance that synthesizes many of the other means of resistance offered by media scholars in the face of Big Data. The two write:

Whom do we mean when we say “there’s nothing wrong with us”? The fat ones. The ones who are out of all compass however precisely they are located…. The ones who manage to evade self-management in the enclosure. The ones without interest who bring the muted noise and mutant grammar of the new general interest by refusing…. Our cousins. All our friends.[26]

In this list of the various forms of collectivity that escape the enclosure of governance, Moten and Harney describe this being without interests in terms familiar to media theory. By describing this collective “us,” a collective that cannot be divided or dividuated, as “the fat ones,” Moten and Harney link disinterest with what Katherine Behar calls bigness, obesity, or conatus—a form of slowness that refuses the accelerationist demands of data.[27] This description of “the ones without interest” is also related to the anarchist tactics of Black data or Black Ops that Shaka McGlotten describes as a politics without specific demands, only “furious refusals.”[28] The denial of the interests Instagram generates from our data is at once a furious refusal and a humorous moment of disidentification. Our confusion at the list that supposedly represents our own interests constitutes an opportunity or a possibility for rethinking both personhood and resistance in terms that in turn refuse legibility. What this looks like is an open question, especially when data reshape relationships as interest in each other. Maybe it means logging out. But for now, let’s be disinterested together. 

</argument>

 

<argument title = “conclusion” subtitle = “nonsensically scattered sociality”>

This leads us to ask how we can exist in relation (this collective “us”) without having those relations become the very thing that data aggregation uses to turn us into an imitation of community, one made through the acts of (in)dividuation on which Big Data builds itself. This relates to John Cheney-Lippold’s claim, following Rosi Braidotti, that corporations and algorithms make us “as if” we were subjects, where “constraints on our algorithmic identities position us within a unique kind of subject relation,” using data about how and with whom we interact to make our data useful to corporate needs.[29] Our relationality—one aspect of our existence that, as an infrastructural support, often offers us refuge and solace from the ugly demands of civic life—becomes a major source of vulnerability online. Dividual identities, then, are created by and for Big Data through an aggregation of the relationships we have, which can be mined for relevant information. This form of community as we experience it on social media is what Moten calls elsewhere a “necropolitical imitation of life,” making our relationality an oversimplified project of capitalist utility.[30] This is the danger of the optimistic or even utopian promise that social media will bring us closer together, even as it mines our relationships for information and exploits our interest in one another as investment in the structure of control. That is, social media asks us to make the forms of collectivity we inhabit legible to algorithms by performing our interests through likes, comments, and messages.

Here we return to Deleuze’s question about what power we have to threaten the joys of marketing. Art Smith is ultimately a relational assemblage in which our relationships to one another were able to remain illegible—or at least not be aggregated as a means of creating a dividual identity. Not only this, but our relationships also played a large part in why the algorithm never quite “figured us out.” This is not to say that Art was not simplified by the algorithm or targeted by specific ads, but the ads never quite made sense, as we were too scattered to be synthesized into something profitable. Although there was active communication and community behind the account, these communications were illegible to the algorithm because they happened separately from Instagram. Furthermore, the legible relations Art built—the accounts we interacted with—were semi-intentionally and nonsensically scattered. Our collective disinterest in the content of the social media account facilitated this form of social life that confused the algorithm as we untethered our offline community from our online identity. In this way, we played with data or governance’s “necropolitical imitation of life” by imitating sociality: following people and accounts that could not be built into a relational web because they did not cohere in relation to each other or to us. The relational aspect of Art remained opaque. In the absence of the branded mode of individuality that social media feeds to us and that we in turn feed back into it, how might we reimagine relationships that cannot be sorted or forms of personhood that cannot be discretized? This is not to overly romanticize the analog or promote a nostalgic desire to return to the days before the internet, especially given the contingency and continuity of the logic of control, but rather to offer a provisional, contingent form of collectivity that dwells both within and outside the framework of digitality, in this case within and outside of the (in)dividual named Art Smith.

</argument>


[1] This paper reports on a collaborative project for Kris Cohen’s “Approaches to Media Studies” course and follows the lead of Alan Liu’s “Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse” in using XML tags to demarcate the sections of an academic paper and to filter our data into an empirical presenting dividual.

[2] Simone Browne, “Digital Epidermalization: Race, Identity and Biometrics,” Critical Sociology 36, no. 1 (January 2010): 144.

[3] Seb Franklin, Control: Digitality as Cultural Logic (Cambridge, MA: The MIT Press, 2015), xix. 

[4] Ibid. 

[5] Katherine McKittrick, “Mathematics Black Life,” The Black Scholar 44, no. 2 (2014): 17.

[6] Franklin, xx. 

[7] W. E. B. Du Bois, “Sociology Hesitant,” Boundary 2 27, no. 3 (2000): 41.

[8] Browne, 133–135.

[9] Ibid.

[10] Orit Halpern, Beautiful Data: A History of Vision and Reason since 1945 (Durham, NC: Duke University Press, 2014), 39–78.

[11] Gilles Deleuze, “Postscript on the Societies of Control,” October 59 (1992): 4, 7.

[12] Deleuze, 5. 

[13] Franklin, xviii. 

[14] Deleuze, 7. 

[15] Alan Liu, “Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse,” Critical Inquiry 31, no. 1 (September 2004): 50.

[16] Whitney Battle-Baptiste and Britt Rusert (eds.), W. E. B. Du Bois’s Data Portraits: Visualizing Black America (New York: Princeton Architectural Press, 2018), Plate 22.

[17] Alfred Lua, “How the Instagram Algorithm Works: Everything You Need to Know,” Buffer Library, February 16, 2021, https://buffer.com/library/instagram-feed-algorithm/.

[18] We downloaded the data for the first time two weeks into the project only to discover that not all usage data were included. If we were to repeat the study, we would download the data weekly.

[19] Wendy Hui Kyong Chun, “Big Data as Drama,” ELH 83, no. 2 (2016): 375.

[20] Chun, 370–71.

[21] Halpern, 44.

[22] Fred Moten and Stefano Harney, The Undercommons: Fugitive Planning & Black Study (New York: Minor Compositions, 2013), 54.

[23] Ibid., 55.

[24] Ibid., 56.

[25] Ibid., 57.

[26] Ibid., 52.

[27] Katherine Behar, Bigger Than You: Big Data and Obesity (Santa Barbara, CA: Punctum Books, 2016), 40.

[28] Shaka McGlotten, “Black Data,” S&F Online: Traversing Technologies, February 13, 2014, https://sfonline.barnard.edu/traversing-technologies/shaka-mcglotten-black-data/.

[29] John Cheney-Lippold, We Are Data: Algorithms and the Making of Our Digital Selves (New York: New York University Press, 2017), 154.

[30] Fred Moten, “Blackness and Nothingness (Mysticism in the Flesh),” South Atlantic Quarterly 112, no. 4 (2013): 740.

Copyright © 2022 by Association of Graduate Liberal Studies Programs