It’s said to be quite powerful. I won’t pretend to know how it works—my understanding is that they don’t know either, which is a clever alibi. I hear that it’s a specter haunting our world. In fact, I hear that it knows that specters haunt worlds (but only at the start of an essay), which could be another way of saying that it has an uncanny grasp of cliché. It’s something like a meta-specter, really, haunting our hauntings. It’s apparently there, even if you can’t see it, lurking behind incipient fascism, pervasive misogyny, drone warfare, and denialism of all kinds. “I need the algorithm,” Jesse Eisenberg’s Mark Zuckerberg said to Andrew Garfield’s Eduardo Saverin with the determination of an addict. The Algorithm. Never has something—some agent—had such a determining force on human actions and desires, at least since the discovery of the libido or the invention of the printing press or the belief in God.
And yet, it’s only an equation. Or, per the dictionary, “a procedure or set of rules used in calculation and problem solving.” But as it passed from Eisenberg’s mouth to Garfield’s ear—The Algorithm, I need it—it was so much more. When The Social Network was released in 2010, the Algorithm was the procedure or set of rules that turned teenagers into creators and millionaires into billionaires, solving problems but also generating them. And since then, it’s only become more impressive.
Of course, there is no such thing as “The Algorithm” or even “the Facebook algorithm”—nor “the TikTok algorithm,” which is said to be better. It’s both a synecdoche and a hypostatization, like if I called my car “my wheels” and then insisted they were reinventing everything. The Algorithm—sometimes short for algorithmic recommendations, sometimes a stand-in for social media or the internet in toto—does seem to know what I want to see, if not exactly what I want to know. It’s easy to get the impression that it knows everything, but also only what I’ve told it. It homogenizes, and it silos. It’s the commons, but with gatekeepers. There’s never been anything like it! But it’s really just an extension of Enlightenment rationalism. It’s all there in Leibniz. None of this is strictly true, but it’s all become truism. If only there were some method of thinking—some procedure or set of rules used in calculation and problem solving—that could help us work through these contradictions.
When trying to compute what all this computing has done to our lives, it is tempting to fall into forms of technological determinism—a slur lobbed at those who have described complex cultural conditions as the result of technological invention. The internet has made the world a global village. Social media is making teenagers suicidal. The verb “to make” is doing just as much theoretical work as the technologies themselves in these crude examples, which endow hardware and software with more agency than their developers and users.
At its most flagrant, technological determinism relies on an anti-dialectical algorithm: add computing to culture, and culture changes as a result. This would be a useful formula if only “computing” and “culture” weren’t constantly moving targets, deeply entangled but irreducible to each other. Where does computing end and culture begin? After all, there would be no Facebook without Zuckerberg and Saverin, without Harvard and its servers, its Houses and their databases—to say nothing of the staff that makes it all run; without, for that matter, Eisenberg and Garfield, the cult of white male American nerdery and its fantasy of revenge; without bandwidth and fiber-optic micro cable, venture capital and patents, legislators and workarounds, data storage and content moderation, Foxconn and its wage slavery, and so on and so on.
As much as technological determinism offers to simplify a staggeringly complex network of relations and capital, it is not altogether simple. It may not count for much as media theory, but it does account for something: for the feeling that some force of nature has, with a whoosh and a bang, disrupted norms of cultural production and distribution; for the feeling that we users—now also addicts—have had no say in the disruption of these norms but have to adapt to them. Oftentimes our jobs depend on them. Sometimes we have been nudged into them. (The Algorithm is good at nudging.) Mostly we have just wanted to keep up with what is sometimes called “the social” as it migrated online. We have been delighted by the convenience of connection and overtaxed by the work of keeping up. We feel plugged in and self-estranged. We want recognition and fear surveillance; now even the richest and whitest of us are experiencing the insecurity of not being able to control the terms by which we “feel seen.” It is tempting to describe this brave new world—governed by tech moguls, if not by The Algorithm per se—as a brand-new culture. Call it Algorithmic Culture. Many people do.
More than its near-cognates Internet Culture or Network Culture, Algorithmic Culture seems particularly disposed to the seductions of technological determinism. Predictive by nature and anthropomorphic by design, algorithmic recommendation systems draw strategic attention to the agency of their operations over that of the humans who have been populating their datasets. We feed them inputs, sure, but it is no coincidence that their outputs are “feeds” too, making it hard to know who—or what—is feasting on whom.
The phrase Algorithmic Culture was first used in academic circles by media theorist Alexander R. Galloway in 2006, but his book, Gaming: Essays on Algorithmic Culture, left it undefined. It never appears after the title page. In his new book, Algorithmic Culture Before the Internet, media studies professor Ted Striphas army crawls his way through intellectual history into a definition, drawing the long wake of the word algorithm (from ninth-century Baghdad to the nineteenth century’s Royal Asiatic Society) into proximity with that of culture (from Matthew Arnold to Clifford Geertz). One gets the sense from Striphas that, like backward-facing magnets, these words have circled each other, at least since the 1960s, unable to quite connect until Netflix’s algorithmic systems started recommending movies (in 2000) and Twitter first started asking its subscribers “What are you doing?” and posting the answers online (in 2006). Until, in other words, some version of “culture” became “content.”
Ultimately, Striphas does offer a formal, two-part definition of Algorithmic Culture, highly flexible and impeccably dialectical:
First, as the use of computational processes to sort, classify, and prioritize people, places, objects and ideas; and second, as the repertoires of thought, conduct, expression, and feeling that flow from and back into those processes.
Algorithmic systems reflect and shape the inputs of human agents, with something like technology mediating something like ideology, and vice versa. This sounds right, but it doesn’t capture that feeling—floating somewhere between mania and motion sickness—that everything has changed. Everything from dating to policing, from shopping to electoral politics. It’s a feeling that is bound to accelerate as artificial intelligence gets better at approximating human labor (physical, intellectual, emotional). It’s that tech-determinism feeling.
Two new books by Kyle Chayka and Taylor Lorenz try to account for the contours and texture of Algorithmic Culture, broadly and narrowly defined, with Chayka most persistently working to capture the feeling of waning agency in the face of computation, which he calls “algorithmic anxiety.” Both writers have been active observers of life online from their posts at legacy media institutions, The New Yorker and The Washington Post, respectively. Chayka’s book, Filterworld: How Algorithms Flattened Culture, has a strongly articulated thesis: algorithms have made culture homogeneous, repetitive, less interesting, and therefore less rewarding to consumers. The result is a “frictionless” experience of culture in which convenience and profit outpace discretion and taste.
It is pretty clear what Chayka means by algorithms: the processes by which some bit of online content “goes viral” or is given weighted priority over other bits of content on privately owned, publicly traded web applications. The anxiety that algorithms visit upon Chayka—and which he identifies as a cultural condition—is a fundamentally humanistic one. Why should any of us trust a black box to define our tastes? One could argue that taste has always been a black box and that having it has always depended less on liking the right things than on aligning oneself with the right people. But even though Chayka spends a chapter on theories of taste that passes through Pierre Bourdieu, he does not seem to believe this. Instead, Filterworld elaborates an ethics of personal taste that is peer-to-peer networked, not informed by Twitter feeds or For You Pages, demography or capital.
Over six chapters, Chayka interviews artists and curators whose work he understands to have been distorted or displaced by algorithmic recommendation systems: writers who try to “game” The Algorithm, pandering to its logic of “likeability” to get their work more widely read; musicians who resent that an outlier in their discography—a parody pop song—blew up on Spotify, seeing in the app’s popularity feedback loops a portent of cultural fascism. For Chayka, “algorithmic” is not merely a mock aesthetic category or a new way of calling things commercial or insipid (“did a bot write this?”). Instead, he warns that algorithmic recommendation systems are actively shaping contemporary aesthetics and diminishing cultural production in a way that is different, not just in scale but in kind, from the age-old logics of commercialism.
If it is mostly clear what Chayka means by algorithms, it can be hard to know what he means by culture. What exactly is being “flattened” and filtered into uniformity? Mostly Chayka means cultural artifacts—songs, movies, essays, tweets—which may explain why he frequently refers to “pieces of culture,” in the manner of “pieces of content.” At other times, culture refers to systems of valuation (we defer to “algorithmic taste”); sometimes it is more like an ecosystem or ambiance (we live an “algorithmic life,” held in “algorithmic space,” subject to “algorithmic government”). All of this is culture—or, at least, all of it is the terrain of Cultural Studies, as defined by one of the field’s founding fathers, Raymond Williams (who, it should probably be said, was a vocal and sophisticated opponent of technological determinism). But has all of it really been flattened, or made “frictionless,” by algorithms? The argument is so big that counterexamples glare through the gaps in its sprawling surface.
Take fiction. Chayka writes, “Young writers often find ways to cultivate public presences online even before they enter MFA programs, on Twitter, Instagram, or TikTok. They subject their voices to the force of social media flattening.” What could this mean? Did young writers—say, before 2006—have autonomous voices, independent of social norms, economic imperatives, and technologies of production? Can’t MFA programs be called a homogenizing force on U.S. fiction? (They famously have been.) And is “the force of social media” on authorial voice so singular? Does social media have a house style? Is it the same across all tech platforms and publics? The reader could get into the weeds with Chayka about any number of his claims about algorithmic flatness, but there are more pressing questions to ask, like: what does this idea of Algorithmic Culture as totalizing and oppressive do? Whose interests is it serving?
In pursuit of an answer, we might start by looking at Chayka’s vocabulary, which offers an echo—with a nineteen-year time lag—of that other book about flatness and the internet, Thomas L. Friedman’s The World Is Flat, a triumphalist account of web-induced globalization and free trade. Both authors picture the internet as a “frictionless” surface on which capital and culture can skate from China and India to Wall Street and Silicon Valley, but for Friedman, flatness implies not cultural banality but equal opportunity. Friedman is a self-proclaimed technological determinist (“guilty as charged,” he wrote in The World Is Flat, nose-thumbing italics included). For him, frictionlessness is inherent to web-based commerce and constitutes its primary virtue. It welcomes new markets into a global economy, levels playing fields, increases competition and transnational collaboration—all as a result of open-source software and outsourced labor. Friedman’s valuations and objects of inquiry differ from Chayka’s; in Filterworld, algorithmic frictionlessness is an aesthetic scourge, making coffee shops in Bucharest look like coffee shops in Bushwick and the songs recommended by Spotify all sound like “ambient synth washes.” But their diagnosis is the same: the world is flat. First the economy, then the culture, now all algorithmicized.
For Friedman, “the biggest source of friction” interrupting The Algorithm’s world-flattening is national and religious identity. He writes:
The more the flattening forces reduce friction and barriers, the sharper the challenge they will pose to the nation-state and to the particular cultures, values, national identities, democratic traditions, and bonds of restraint that have historically provided some protection and cushioning for workers and communities. Which do we keep and which do we let melt away into the air so we can all collaborate more easily?
It’s a leading question. Friedman takes for granted a culture of deregulation, melting away state protection for “workers and communities” since the thaw of the 1970s and 1980s. As a result, the state actors who might introduce friction into The Algorithm’s slip-’n-slide economy remain just as “airy” as the “flattening forces” that have eclipsed them in his narrative.
Chayka values the friction of cultural particularity, not because it impedes capital’s flow, as Friedman seems to bemoan, but as a pledge of aesthetic value: friction is that uphill effort required to record a favorite episode on VHS tape or to discover a favorite band from a friend instead of an Apple Music playlist. Seeking out friction rewards what Chayka calls “the organic development of culture,” not “flatness and sameness, the aesthetics that are the most transmissible across the networks of digital platforms.” Here cultural complexity seems correlated to file density, as if eclecticism would jam the servers and break the internet.
For this reason, Chayka’s book (in which flat is bad and friction is good) might appear to offer a rebuke of sorts to Friedman’s (in which flat is good and friction is bad). But their shared technological determinism leads the writers to compatible, compatibly degraded political visions. Theirs is a political arena with such limited scope as to be essentially external to the workings of the internet and the production of capital, financial and cultural. Whatever culture is to Chayka—whether individual “pieces” or dominant aesthetics—it seems to exist outside of politics, which is why it can be said to grow “organically” or to be debased by a technological invention, rather than by the social and institutional forces guiding its development and distribution.
This is not to say that Chayka completely excludes politics from view. In fact, he spends forty pages describing various legislative efforts in the United States and the European Union to regulate social media platforms and enforce algorithmic transparency. But because his sense of aesthetics and technology is so divorced from the political and social contexts of their emergence, he is unequipped to intervene meaningfully in this terrain. The best he can do is throw up his hands—like Friedman does above—and declare, “a law can force a platform to ban problematic content, but it can’t make Spotify recommend more challenging or creatively interesting playlists of music.”
Ultimately, Chayka’s recommendation for fleshing out the flatness of Algorithmic Culture is to reinvest human agency into the narrative he has crafted to eliminate it. He calls for a return to human curators, not algorithmic ones: go to MoMA, subscribe to the Criterion Collection (for art house movies) or Idagio (for classical music). He suggests identifying the DJ of a song that YouTube recommends to you, after which you can “pay them a tip for their cultural curation” or “buy a digital copy of one of the songs or albums that are included.” In other words, be more active consumers. There is a strong strand of cultural conservatism that runs through Chayka’s recommendations—one that sees social forces as degrading to mass culture and yet exogenous to high culture. It is a conservatism that is strikingly compatible with Friedman’s neoliberalism, since neither proposes any real speedbumps to capital in the name of regulation or good taste.
It is a strange impulse to consider one’s own Twitter feed or For You Page “flat,” “generic,” or “impersonal.” Algorithmic recommendation systems are, if literally nothing else, personalized. This is not to say that they do not regress into genre categories: Netflix “knows” that I like “Critically Acclaimed LGBTQ+ Movies”; Twitter “knows” that I waste my time on petty disputes between academics and pictures of cats in bodegas; Instagram often tries to sell me bras for small-breasted women. There is some genre of person—call it a demographic—coming into focus here that, when reproduced for my convenience online, can often anticipate what I want to watch, share, and buy. It invariably feels reductive—such is the nature of genre constraints—which is perhaps to say that it feels “flattening.” But I would not call it impersonal any more than I would call any form of advertising impersonal. What is unique about predictive algorithmic recommendation systems is that they are rebranding demography as subjectivity—and a coherent, consistent form of subjectivity at that. This page is For You (as you have been), not For Someone Like You (as you seem to be), nor even For Someone You Might Be (if only you buy this product).
To imagine one’s recommendations as wholesale “generic,” rather than generic to one’s demographic, is a step stranger. It may be a tacit acknowledgment of one’s own claim to cultural dominance—the world might look flat if you are looking at it from above—or a concession to one’s own frictionless passage through space, online and off. A U.S. passport, white skin, straightness, cis-masculinity: it is easy enough to attach these characteristics to the internet itself, along with its cultural output. Most histories of the internet do, aligning The Algorithm with its most famous founders and profiteers: Bill Gates, Steve Jobs, Larry Page and Sergey Brin, Mark Zuckerberg, Jack Dorsey, Elon Musk, and so on and so on.
What’s most remarkable about Taylor Lorenz’s new book, Extremely Online: The Untold Story of Fame, Influence, and Power on the Internet, is that these men go mostly unnamed. Rather than taking the familiar narrative of web-induced uniformity for granted, Lorenz disarticulates it by shifting our focus. In her finely reported social history of social media, she gives algorithmic systems and their engineers little credit for the current state of online culture. Other names surface instead: Heather Armstrong, Kirsten “Kiki” Ostrenga, Julia Allison, Bree Avery, Aliza Licht, Cates Holderness, Liz Eswein, Amber Venz, Olivia Palermo, Emma Chamberlain, and Charli and Dixie D’Amelio.
The purported thesis of Extremely Online is that social media platforms have become entertainment industries. The companies whose founders envisioned their apps as portals for online “friendship” have had to redirect their resources to the content creator economy, building out their platforms to prioritize “entertainment” over “connection.” The many naysayers who complained, “I don’t want to know what my friends are eating for breakfast,” when Facebook unveiled its “status update” feature in 2006, have been partially vindicated. As it turns out, people do want to know—and watch—what some people are eating for breakfast: call them influencers, online personalities, or scammers. Their preferred title is “creator.” And some of them get paid millions of dollars a year to eat their breakfasts online.
Lorenz establishes this “war” between rival social media business models early on in Extremely Online. She maps them onto U.S. cities and major platforms: Silicon Valley is the home of the Facebook model, in which apps promote friends’ connections; New York and Los Angeles produced the Myspace model, in which apps host creators’ music, dances, confessionals, and skits. That the Myspace model has won this war will come as a surprise to many, since the company lost its own battle against Facebook in the mid-aughts.
But this internecine internet feud is eclipsed, in Lorenz’s reporting, by a shadow thesis of which she seems largely unaware: that the engine for this corporate revolution—this rerouting of business models and marketing campaigns from “connection” to “content creation”—has consistently been white women. White women producers (as mommy bloggers, Myspace scene queens, bedroom vloggers, and Instagram influencers); white women marketers (as pioneers of sponsored content, brand ambassadorships, and affiliate marketing programs); and white women as consumers have all fueled the content creator economy, boosting its supply and demand. Enter: Heather Armstrong, Kirsten “Kiki” Ostrenga, Julia Allison, Bree Avery, Aliza Licht, Cates Holderness, Liz Eswein, Amber Venz, Olivia Palermo, Emma Chamberlain, and Charli and Dixie D’Amelio.
Lorenz’s history begins with Armstrong, a mother of two in Salt Lake City who registered the domain name Dooce.com in 2001. A Mormon-raised web developer, Armstrong was fired from her job in 2002, after her employer learned that she had been blogging about her coworkers alongside posts about postpartum depression, breastfeeding, and the misogyny of the LDS Church. Newly the matriarch of a single-income family, Armstrong began running ads on her highly trafficked site in 2004 to compensate for her lost salary. The result, as Lorenz describes it, was “a tidal wave of backlash” from readers. Armstrong was exploiting the intimacy she had developed with them, coopting the unpaid work of mothering (and mommy blogging) for an income.
In Lorenz’s telling, these familiar attacks on a “relatable” white woman were just the growing pangs of the content creator economy. Lorenz credits Armstrong as its trailblazer, developing some of the first sponsored content—like when Verizon subsidized her home office renovation—and some of the earliest affiliate marketing programs, which offer creators commissions for the products they recommend. By 2005, Armstrong’s site had become so profitable that her husband quit his job, becoming her manager and effectively the first “Instagram boyfriend” several years before Instagram existed. Lorenz also credits mommy bloggers in general, and Armstrong in particular, with inspiring changes to the Federal Trade Commission’s disclosure guidelines in 2008, requiring bloggers to announce when they are being compensated for product promotion.
In April 2017, the FTC doubled down on its enforcement of these guidelines during what Lorenz calls “peak Instagram,” when some creators—led by the Kardashian sisters—were drawing hundreds of thousands of dollars per sponsored post, without ever signaling that their “content” was in fact advertising. Lorenz demonstrates how contrary to Instagram’s mission statement this high-profit posting was—and how influential white women, historically positioned to enter their girl boss eras, were to its “peak” economy. Mike Krieger and Kevin Systrom, Instagram’s founders, didn’t include devices for monetizing content in their earliest versions of the app: there were no ads for sale and no tools for paid promotions built into its design. They were following the model of Facebook and Twitter, which prioritized growth over monetization—at least at the beginning. But Systrom was also “strongly against” advertising, Lorenz reports, not wanting Instagram’s “wall of beautiful images” to become “a billboard.”
Lorenz highlights two white women, Liz Eswein and Amber Venz, who built Instagram empires by helping brands “find a side door” to advertising on the app. Eswein’s company, Mobile Media Lab (founded in 2012), was an early influencer agency, connecting users with high follower counts to brands for campaigns. Venz’s company, LIKEtoKNOW.it (founded in 2011), developed an affiliate program for influencers. In 2013, Instagram executives, submitting to the terms of the economy already thriving on their platform, began selling their own ads. In 2018, they began allowing businesses to add product links. And in 2021, they added their own “native affiliate marketing tool,” allowing creators to earn commissions from sales, cutting out Venz’s middleman organization, which is now valued at more than $2 billion. If, for many users, Instagram’s aesthetic has been dominated by white women clichés—avocados, pumpkin spice latte art, and any of the other tropes featured in Bo Burnham’s 2021 parody song “White Woman’s Instagram”—this is not because of The Algorithm’s will to flatness. It may be because white women have been directing the flow of capital through the app since its founding.
Of course, not all creators are white women. One begins to get the sense, throughout Extremely Online, that just as Chayka’s eulogy for friction is circumscribed by his own cultural positioning, Lorenz’s reporting may be overdetermined by her own For You Page. She never calls attention to the attention she gives white women in her narrative. Even so, Lorenz’s version of algorithmic myopia is less analytically limiting than Chayka’s; her narrow view reveals far more. No, not all creators are white women—but the richest ones do tend to be. “Black mothers and mothers representing other marginalized identities were not granted the most lucrative brand deals” in the early mommy blog ecosystem, Lorenz writes. “This bias went well beyond blogs and would become an even bigger concern as social media became more visual.”
The Algorithm did not invent anti-Blackness, nor even algorithmic racism. Lorenz has a stubborn resistance to abstraction, and she rarely connects her anecdotes to the social forces that helped to shape them, sacrificing interpretation at the altar of the fact. But as she describes the individuals and institutions that have shaped the culture of social media (partner programs, content houses, multichannel networks, the FTC), you begin to intuit the reciprocal mediation of new technology and existing ideology.
The platform that Armstrong used was new; the digital tools that Eswein and Venz developed were new too. But the convergence of actors and factors that shaped their development and use are as old as mass-market advertising itself. Mothers control up to 85 percent of household spending in the United States, and white women have been the face of this market and its targeted consumers since the nineteenth century. Algorithmic recommendation systems are changing things; targeted advertising is no doubt offering companies finer tools with which to capture “niche” demographics and hawk products For You. But the fine line between aspiration and relatability that has always fueled the American entertainment and advertising industries—and that has always been most available to white Americans for toeing—remains their stimulus for sales.
Caught between the privations of patriarchy and the privileges of white supremacy—between the conflicting imperatives of aspiration and relatability—these highly public white women can fall into forms of depression and anxiety that are, among other things, highly salable. Armstrong sold it, giving her readers unfettered access to her mental illness (including a long history of alcoholism) until her suicide earlier this year. The D’Amelio sisters—TikTok sensations whose bedroom dancing during the pandemic resulted in a combined 208 million followers and deals with Amazon, Prada, Hulu, and Dunkin’ Donuts—are selling it now. The first season of their Hulu show devoted hours to Charli’s anxiety and Dixie’s suicidal ideation. (The Algorithm, Eisenberg said to Garfield, like Mephistopheles whispering into Faust’s ear, I need it.)
In 2021, Frances Haugen, now known as the Facebook whistleblower, reported to Congress that the multi-billion-dollar corporation where she worked had been sitting on studies revealing the harmful effects of its apps on teenagers, especially teenage girls. Among teens who reported suicidal ideation, 13 percent of British users and 6 percent of American users traced the desire to kill themselves to Instagram, one study reported. And roughly 32 percent of teenage girls said that “when they felt bad about their bodies, Instagram made them feel worse.” In February 2023, the Centers for Disease Control and Prevention released a report stating that 57 percent of teenage girls felt persistently sad or hopeless in 2021, up from 36 percent in 2011. Thirty percent of girls in ninth through twelfth grade said that they “seriously contemplated suicide,” also up from 19 percent a decade earlier. Marketing is not new, nor is its deleterious effect on young women’s self-image. To acknowledge this historical continuity is not to undermine the above statistics. Technological determinism may be a logical fallacy, but it is not an affective one: the feeling of lost agency—and even lost hope—in Algorithmic Culture appears to be quite real.
Although the CDC report made no mention of social media as the causative agent of these dire trends—perhaps the global pandemic, the yawning income gap, and looming climate disaster played a role?—it has been widely cited as the motivation for Utah’s two new social media regulation bills, the first of their kind. Brad Wilcox, director of something called the Institute for Family Studies, seems to be the source of this campaign, connecting the CDC report to the demand for Utah’s legislation. The state’s governor, Spencer Cox, has retweeted Wilcox and linked to his writing about the CDC report in his own essay in National Review.
The bills, which Cox signed into law last March, are the only real impediment to growth that any U.S. legislative body has placed in front of tech companies. Despite the best efforts of Lina M. Khan, Biden’s FTC chair, to break up their monopolies, the burden of negotiating social media misinformation, hate speech, and unregulated data sharing remains in the hands of American consumers. We have to be our own curators, as Chayka would have it, and our own safety nets.
All of us, that is, except Utah’s teens. Cox’s legislation requires parental consent for minors’ social media accounts, gives parents access to those accounts, and creates a default “curfew” on social media access at 10:30 p.m. It limits direct messaging to minors, restricts their accounts from search results, and bans companies from collecting data from them and advertising to them. What’s more, the laws, which will go into effect in March 2024, prohibit social media companies from using unnamed “addictive design features” and impose a $250,000 fine “for each violation” plus a $2,500 penalty per child “exposed to an addictive feature.” No one knows how Utah intends to implement or enforce these various prohibitions, fines, and penalties, nor even whether they are legal. We do know that teenage girls—mostly teenage white girls, as Utah is 87 percent white—are meant to be protected by them.
Some have heralded the laws as victories for “parents’ rights,” a dog whistle for culture warriors on the right who use children as the technology for a programmatic rollback of civil rights. Others see the laws as an infringement of children’s privacy and free speech. Some, more modestly, are just relieved that someone, somewhere, is doing something. It’s a dangerous wager, a true devil’s bargain, placing political hope in the hands of paternalists acting on behalf of white girls. It’s dangerous but familiar. This is Algorithmic Culture, for us, as we have been: updating American norms, phobias, incentives, and risks—and staying the same.
Anna Shechtman is a Klarman Fellow at Cornell University and author of The Riddles of the Sphinx, about the history of the crossword puzzle and the sexual politics of wordplay. Her writing has appeared in Artforum, The New Yorker, The New York Review of Books, Slate, and Los Angeles Review of Books, where she is an editor-at-large.