Digital Literacy Demonstration of Application

Abstract

My project proposal details the development of a Digital Literacy Demonstration of Application (DOA) blog, designed to challenge the culture of “technochauvinism” and algorithmic oppression prevalent in online education. It rejects the “autonomous model” of literacy, which treats technology as a neutral tool, and instead advocates for a critical digital literacy framework that recognizes how algorithmic systems shape thinking, identity, and institutional power.

Drawing on theoretical foundations such as Brian Street’s ideological model, Walter Ong’s cognitive restructuring, and Ruha Benjamin’s “New Jim Code,” my project explores how digital platforms are deeply embedded in social hierarchies and power dynamics. Through case studies like the International Baccalaureate scandal and the impact of machine bias in facial recognition, my project exposes the human costs of prioritizing machine logic over human judgment. Utilizing WordPress as a platform for digital intervention, I analyze the “digital grammars” of platforms including YouTube and Spotify to empower students to move from passive technical fluency to active critical agency.

Implementation strategies prioritize equity and access through mobile-friendly design, multimodal content, and plain language to dismantle “ivory tower” barriers for marginalized populations. Ultimately, this intervention seeks to achieve cognitive justice by restoring human agency to the center of the pedagogical experience, equipping students to interrogate, resist, and transform the digital platforms that shape their lives.

Project Mission and Personal Vision

As an Online Writing Instructor and a digital literacy critic, I have created a Digital Literacy Demonstration of Application blog to actively challenge the culture of “technochauvinism” that dominates online education. This culture is built on the flawed assumption that computational solutions—such as AI proctoring tools or automated grading algorithms—are inherently superior to human judgment and wisdom. Through my WordPress-based project, I intend to reject the view that digital literacy is simply a checklist of technical skills and instead advocate for a critical approach that recognizes the dangers of relying solely on technology. Positioned as a direct intervention against algorithmic oppression, my blog aims to move students away from the “autonomous model” of literacy, where technology is seen as a neutral tool, and toward a framework that critically examines how algorithmic systems shape our thinking and identities and reinforce the status quo at the expense of marginalized individuals.

In today’s classrooms and online assessment environments, we are stifled by pervasive technical chauvinism—a belief so deeply rooted that it causes educators to favor machine logic over the rich complexities of human experience. My project confronts the reality that AI systems and automated grading models are often inconsistent, unfair, and more biased than the people they attempt to replace. By dismantling the protective veil of these algorithms, my blog seeks to expose the human cost of systemic failures, such as those seen in the International Baccalaureate scandal, where reliance on automated systems prioritized institutional stability at the expense of individual dignity. Rather than merely providing information, my blog serves as an act of intervention against algorithmic oppression, challenging the myths that uphold the status quo and advocating for a digital literacy that centers human agency, critical awareness, and cognitive justice.

Building on this critical stance, my blog aims to achieve three essential objectives. First, it encourages students to move from passive recipients to active participants in digital literacy, empowering them to engage thoughtfully with technology rather than simply adapting to it. Second, the WordPress platform serves as more than a space for writing—it acts as a digital intervention that confronts technochauvinism, emphasizing the importance of human perspectives. By doing so, it reframes students not as mere data points subject to algorithmic judgments, but as individuals whose experiences deserve recognition beyond the confines of biased and inflexible automated systems. Finally, my project directly addresses the issue of algorithmic “black boxes,” illuminating the hidden mechanisms and unexplained biases within these systems. This exposure challenges and disrupts the structures that perpetuate algorithmic oppression, advocating for transparency and cognitive justice in digital education.

Definition: Critical digital literacy is a critically engaged, socially embedded process of interacting with digital Discourses, where technology is not viewed as a neutral tool but as a system that actively shapes thinking, identity, and access. It calls for an awareness of the ways institutional power, platform design, and algorithmic structures—such as AI proctoring and automated grading—can reinforce bias and perpetuate the status quo. Rather than simply mastering technical skills, critical digital literacy centers human agency, challenges the myths of technological neutrality, and advocates for cognitive justice by confronting algorithmic oppression and prioritizing individual dignity and critical consciousness.

Theoretical Foundations: Literacy as Socially Situated Practice

My Digital Literacy Demonstration of Application (DOA), built as a WordPress Online Writing Instruction (OWI) blog, exists to dismantle the widespread misconception that digital literacy consists merely of mastering technical skills—to move the needle from technical fluency to critical agency. The “technical skills” misconception is dangerous: it treats digital literacy like a driver’s manual, where you learn the right buttons to push and you’re all set. This prevailing “autonomous model” treats digital abilities as neutral, isolated competencies—an approach that fails to capture the complex realities I have encountered in both physical classrooms and virtual learning environments. My experience has consistently revealed that digital literacy is shaped by social, cultural, and institutional forces that extend far beyond the mechanics of technology itself. To dismantle that myth, my blog will illustrate that true digital literacy in an OWI environment involves understanding that a WordPress post isn’t just a blog entry; it is a rhetorical move within a specific system of algorithmic pragmatics, and that negotiating one’s identity means recognizing that themes, tags, and other tools are performed for both human users and internet crawlers. I also intend to highlight the labor students do to reconcile their personal voice with platform expectations.

My perspectives align closely with the ideological model articulated by Brian Street. His model asserts that literacy is not a neutral skill but a socially situated practice, deeply embedded within networks of power, culture, and identity. This ideological model speaks to my own observations: access to digital tools and educational opportunities is never simply a matter of learning to operate interfaces—it is a process of negotiating the expectations, norms, and power dynamics that define those digital spaces. These structures can serve as gateways for empowerment or barriers that reinforce exclusion, depending on how they are designed and enacted.

Creating my DOA blog is far more than assembling a “how-to” manual for WordPress or digital platforms. I see it as a critical intervention in what has become the “platformization of education.” Broussard uses this term to describe the growing dominance of digital platforms in shaping educational experiences, often reducing complex learning and teaching processes to algorithm-driven routines and standardized interfaces. By developing this blog, I aim to challenge the assumption that technology is simply a neutral conduit for information. Instead, I want to highlight how digital platforms actively influence not only what students learn, but how they think, interact, and express themselves within educational spaces.

Drawing from Walter Ong’s insight that technology has the power to “restructure consciousness,” I believe it is crucial to help students understand that the tools they use—whether for learning, communicating, or creating—actively shape how they think and what they are able to express. Ong’s concept of cognitive restructuring suggests that our minds adapt to the affordances and limitations of digital environments, which can fundamentally alter our ways of knowing, reasoning, and participating. In the context of online writing instruction, this means that students are not just learning to use WordPress—they are navigating a space where their identities, voices, and agency are shaped by the platform’s design and its embedded algorithms.

In designing this blog, I’m consciously working to make students aware of how interface architecture influences their learning journey. The layout, navigation, and features of WordPress are not accidental; they are constructed to guide users toward certain actions, limit other possibilities, and collect data in ways that serve institutional interests or reinforce dominant norms. As Collin Bjork points out, these interfaces are “never neutral,” and I want to ensure students recognize this fact and feel empowered to question and shape the platforms that mediate their experiences.

My hope is that this blog will encourage students to see digital literacy not as a passive skill set, but as an active, ongoing negotiation with the systems that structure their lives. Digital literacy, in this context, involves understanding how technology, institutional power, and platform design intersect to create opportunities and barriers for expression, learning, and engagement. Through reflective practice, critical analysis, and collaborative exploration, students can begin to challenge the myths of technological neutrality and advocate for more equitable, transparent, and inclusive digital environments. Ultimately, my blog’s purpose is to inspire students to become active participants in the digital world, equipped with the tools and mindset to interrogate, resist, and transform the platforms that shape their educational journeys.

The pedagogical stance of my project is rooted in the synthesis of foundational literacy theories that frame digital engagement as a cognitive and ideological act. The following table illustrates how these theories are being applied within my DOA blog to challenge the “autonomous model” of technical proficiency.

Theorist | Core Concept | Application
Walter Ong | Cognitive Restructuring | My blog will explore how generative AI leads to “distributed cognition,” where the machine and human co-author, fundamentally altering the writer’s consciousness.
Brian Street | Ideological Model | Rejects the “autonomous model” (neutral skills) to demonstrate how digital communication is embedded in power relations and institutional norms.
James Paul Gee | Discourse Theory | Frames digital literacy as “Discourse participation,” where students perform competence and professional identity within academic and algorithmic spaces.
Harvey Graff | The Literacy Myth | Serves as a cautionary lens, reminding students that simply providing WordPress access does not guarantee social mobility or equitable outcomes.

Understanding Machine Bias (The Noble and Kim Lens)

The concept of “machine bias” is central to understanding how digital platforms and algorithms shape our online experiences. Broussard’s work highlights that these biases are not accidental or inconsequential; rather, they are deeply embedded within the design and operation of digital systems. One of the most significant contributions to this discussion comes from Safiya Noble’s Algorithms of Oppression. Noble’s research demonstrates that search engines, such as Google, do not simply reflect an objective reality. Through systematic analysis of search results, she reveals how commercial interests and political agendas influence what users see, often perpetuating and reinforcing racist stereotypes and social inequalities. This evidence challenges the assumption that digital tools are neutral, urging us to critically examine the power dynamics at play in everyday technologies.

To illustrate real-world effects of algorithmic bias, my blog will feature experiments such as Ming’s search for the term “beauty” in different languages. The results for “beauty” in English versus Chinese reveal starkly different images and narratives, filtered through cultural and algorithmic lenses. This phenomenon, known as “algorithmic pragmatics,” shows that algorithms are shaped by—and actively shape—cultural norms and expectations. By presenting such experiments, students can be guided to see how search engines and digital platforms do more than deliver information; they curate and construct realities based on underlying assumptions and priorities.

Furthermore, my blog will emphasize the importance of critically evaluating digital sources. Students often assume that search results are unbiased and comprehensive. However, what appears as a neutral outcome is frequently a “filtered reality,” shaped by commercial interests and designed to maximize engagement or profit. This creates “filter bubbles,” where users are presented with content tailored to their preferences, sometimes at the expense of accuracy or diversity of perspectives. Teaching students to recognize these filters is crucial for fostering critical digital literacy, empowering them to question the information they encounter and to seek out more reliable, balanced sources.

Critical Interventions: Dismantling Technochauvinism and Machine Bias

The central objective of my project is to expose how technology “predicts the status quo,” essentially taking a closer look at how algorithmic governance prioritizes institutional stability over individual fairness. This means that, rather than actively working to challenge existing inequalities, many technological systems and AI-driven platforms tend to reinforce patterns already present in society. In educational settings, for example, algorithms are often programmed to favor statistical consistency, which can inadvertently perpetuate biases against marginalized groups and overlook the unique circumstances of individual students.

We must confront the reality that AI systems are often less consistent than the humans they aim to replace; for example, research into student assessments shows that while AI grading achieves 90% accuracy, human grading maintains a superior 95% accuracy rate. This 5% gap represents a significant margin for error that often falls upon the most vulnerable. The implications of this difference are profound: when an algorithm makes an error, it is frequently viewed as a necessary trade-off for efficiency and objectivity, rather than as a correctable mistake that impacts real lives. As a result, students who are already at a disadvantage—whether due to socioeconomic status, race, or other factors—are more likely to suffer from these misjudgments, deepening educational inequities.
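The scale of that accuracy gap is easy to underestimate. A minimal sketch, using the 90%/95% figures discussed above and an invented cohort size, shows how a seemingly small percentage difference translates into hundreds of additional misgraded students, errors that the text argues fall disproportionately on the most vulnerable:

```python
# Illustrative sketch only: the 90%/95% accuracy figures come from the
# discussion above; the cohort size is invented for illustration.

def misgraded(cohort_size: int, accuracy: float) -> int:
    """Expected number of students receiving an erroneous grade."""
    return round(cohort_size * (1 - accuracy))

cohort = 10_000
ai_errors = misgraded(cohort, 0.90)      # 1000 students misgraded by AI
human_errors = misgraded(cohort, 0.95)   # 500 students misgraded by humans
extra_errors = ai_errors - human_errors  # 500 additional misjudgments

print(f"AI grading errors:            {ai_errors}")
print(f"Human grading errors:         {human_errors}")
print(f"Extra errors from automation: {extra_errors}")
```

The point of the sketch is rhetorical as much as arithmetical: each unit in `extra_errors` is a student with little or no avenue of appeal.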

We face the dehumanization of error: when teachers make a mistake, there is a path for appeal. Human graders can recognize context, listen to appeals, and adjust their assessments based on nuanced understanding. However, when algorithms or monitoring software fail, the error is often treated as a statistical necessity rather than a life-altering flaw. There is rarely a clear route for students to contest their results or explain their circumstances to an automated system. This lack of recourse removes the possibility for empathy or individualized correction, effectively silencing those most affected by algorithmic misjudgments.

Systems protect their own objectivity by ignoring the nuances of a student’s individual journey. The pursuit of mathematical fairness and standardized outcomes often comes at the expense of recognizing the diversity of student experiences and needs. Ultimately, this approach elevates institutional priorities above personal justice, reinforcing a cycle where technology serves to maintain the status quo rather than promote equitable outcomes. By highlighting these issues, my project calls for a critical reevaluation of how we deploy and trust AI in educational and other institutional contexts, advocating for greater transparency, accountability, and opportunities for human intervention.

To illustrate the profound human consequences of algorithmic failure and bias, my blog will delve into the International Baccalaureate (IB) grading scandal, a vivid example of how statistical formulas and automated decision-making can undermine academic integrity and student agency. The scandal unfolded when scores for thousands of students were determined not by their individual performance, but by an algorithm that factored in the historical success rates, geographic location, and economic status of their schools. This approach stripped students of the opportunity to have their work fairly evaluated and led to widespread frustration and outrage.

The case of Isabel Castañeda is especially telling. Despite being a top performer, Isabel received failing scores because her school’s economic and geographic data were weighted more heavily than her actual academic achievements. Her story highlights how automated systems can erase years of dedication, and how such formulas disproportionately impact students from marginalized backgrounds, deepening educational inequities. Isabel’s experience is not an isolated incident; it is emblematic of a larger trend where the pursuit of statistical consistency in educational algorithms overrides the need for individualized assessment and fairness.
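To make the mechanism concrete, here is a minimal, hypothetical sketch of a school-context-weighted scoring formula of the kind described above. All weights, scores, and the function name are invented for illustration; this is not the IB’s actual model. The sketch simply shows that once a school’s historical average is weighted more heavily than a student’s own work, a top performer at a historically low-scoring school is mathematically guaranteed a depressed grade:

```python
# Hypothetical illustration: weights and scores are invented, not the IB's model.

def predicted_grade(student_score: float, school_history: float,
                    w_student: float = 0.3, w_school: float = 0.7) -> float:
    """Blend a student's own score with their school's historical average.

    When w_school > w_student, school context dominates individual achievement.
    """
    return w_student * student_score + w_school * school_history

# A top performer (7/7) at a school whose historical average is 3/7:
print(round(predicted_grade(7.0, 3.0), 2))  # 4.2 -- failing despite individual excellence
```

No amount of individual excellence can overcome the school-history term while its weight dominates, which is precisely the dynamic Isabel Castañeda’s case illustrates.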

Even more disturbing is the situation faced by Robert McDaniel in Chicago. Although McDaniel had a practically clean record, he was targeted by predictive policing software—an algorithmic system that uses historical crime data and personal information to forecast individuals who are likely to become involved in criminal activity. The software flagged him as a potential risk, resulting in repeated harassment from police. This not only disrupted McDaniel’s life but also led his neighbors to believe he was cooperating with authorities as a “snitch.” That suspicion ultimately ended in McDaniel being shot. His case starkly illustrates the life-altering and sometimes deadly consequences of machine bias, especially when systems operate with little transparency or recourse for those affected.

The IB scandal was further exacerbated by the massive leak of exam materials, with more than 45,000 downloads reported across subjects such as Mathematics, Business Management, and Chemistry. That breach undermined the credibility of the assessment process and left many students feeling that their two years of preparation and hard work were meaningless. For those in marginalized communities, who lacked access to the leaked materials and contended with systemic barriers, the sense of betrayal was especially acute. Those students faced a dual disadvantage: first, from the algorithmic bias embedded in score calculation, and second, from the inequities introduced by the leak.

These incidents are not mere technical glitches or isolated miscalculations; they are demonstrations of what scholars term the “New Jim Code”—a system where technological solutions perpetuate or exacerbate existing social inequalities under the guise of objectivity and efficiency. Rather than correcting injustices, these digital tools often reinforce them, making it harder for vulnerable individuals to seek redress or have their voices heard. By spotlighting these stories, my blog aims to challenge the narrative that technology is inherently neutral, and to advocate for greater transparency, accountability, and the integration of human judgment in systems that profoundly impact people’s lives.

The New Jim Code in education manifests in AI-driven assessment tools that frequently reinforce systemic disparities under the guise of mathematical objectivity. Recent research has identified the following bias scores in AI grading accuracy:

  • Socioeconomic Status: -0.3
  • Gender: -0.2
  • Race/Ethnicity: 0.1
  • Note: Negative scores indicate systematic undervaluation of student work.

These bias scores reveal how the New Jim Code affects AI grading systems: tools intended to be impartial and efficient can instead perpetuate longstanding inequities. For instance, students from lower socioeconomic backgrounds are systematically undervalued, receiving scores that do not accurately reflect their abilities or achievements. Gender bias further intensifies these disparities, often resulting in under-recognition of female students’ work. Meanwhile, racial and ethnic bias—though sometimes less overt—can manifest in ways that either undervalue or artificially elevate certain groups, distorting fair assessment.
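The bias scores listed above can be sketched in a few lines. This is a deliberately simplified, hypothetical model (the grading scale, attribute names, and student profiles are invented; only the offset values come from the list above), but it makes the core injustice visible: two students submitting identical work receive different algorithmic grades.

```python
# Hypothetical sketch: offsets mirror the bias scores listed above
# (negative = systematic undervaluation); scale and profiles are invented.

BIAS_OFFSETS = {
    "low_socioeconomic_status": -0.3,
    "female": -0.2,
    "minoritized_race_ethnicity": 0.1,  # can distort in either direction
}

def biased_score(true_score: float, attributes: list[str]) -> float:
    """Apply each attribute's offset to an otherwise accurate score (0-10 scale)."""
    score = true_score + sum(BIAS_OFFSETS.get(a, 0.0) for a in attributes)
    return max(0.0, min(10.0, score))

# Identical work, different algorithmic grades:
print(biased_score(8.0, []))                                      # 8.0
print(biased_score(8.0, ["low_socioeconomic_status", "female"]))  # 7.5
```

The offsets are small per assignment, but because they are systematic rather than random, they compound across a transcript in one direction only.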

The glaring consequences of these biases, whether implicit, cognitive, or algorithmic, are not merely academic; they have real-world impact, shaping how access to opportunities, scholarships, and future successes are prioritized. When AI systems are implemented without careful scrutiny and mechanisms for human oversight, the errors they generate become institutionalized, leaving students with little recourse. As illustrated by cases like Isabel Castañeda and Robert McDaniel, algorithmic misjudgments can erase years of hard work and, in some instances, lead to life-altering and even tragic outcomes.

Addressing the New Jim Code in education requires more than technical fixes; it demands a fundamental shift toward transparency, accountability, and the integration of human judgment. By recognizing and challenging these systemic biases, educators and policymakers can work to ensure that AI tools support equitable outcomes rather than reinforce the status quo. It is imperative that we advocate for systems that not only measure achievement but also honor the diversity and complexity of every student’s experience.

The New Jim Code: Facial Recognition and the Justice System

Ruha Benjamin’s concept of the “New Jim Code” offers a critical framework for understanding how modern algorithms, often presented as neutral or fair, reinforce and amplify racial inequality. The reality is unsettling: Benjamin’s work demonstrates that the very systems built to “improve” society can actually reinforce the same racial hierarchies we claim to be overcoming. In the context of facial recognition and criminal justice, this isn’t an abstract risk—it’s a lived reality for those who are misidentified, wrongly surveilled, and unfairly targeted by so-called objective algorithms.

Themes about “Recognizing Bias in Facial Recognition” and “Machine Fairness and the Justice System” are addressed through Ruha Benjamin’s concept of the “New Jim Code.” Benjamin argues that technological systems reproduce social and racial biases under the guise of objectivity. Students and colleagues claim that “data doesn’t lie,” but Benjamin’s framework challenges us to ask pointed questions: whose data, collected by whom, and for what purpose? My skepticism continues to grow as I read and watch news stories depicting how facial recognition has led to wrongful arrests, especially among Black and Brown individuals. These are not glitches—they are signals that the systems were never built with true equity in mind.

I believe that teaching about these issues isn’t just about technical literacy; it is also about ethical literacy. When students explore the New Jim Code, they’re not only learning to spot bias in machines—they’re learning to question the narratives and motivations behind technology’s development and implementation. Benjamin’s work encourages me to frame OWI discussions around power, history, and accountability.

It is not enough to simply provide access to technology in our digital orientation activities. As Harvey Graff’s critique of the “literacy myth” suggests, handing someone a computer or a smartphone doesn’t guarantee a fair shot at success—or justice. I intend to remind my students that technical access is only one piece of the puzzle; awareness of structural and systemic barriers is critical if we want to ensure digital equity.

Dismantling Structural Obstacles:

My DOA acknowledges that “access to technology… does not guarantee equitable outcomes,” reflecting Harvey Graff’s critique of the “literacy myth”. His perspective challenges the assumption that simply introducing computers or smartphones into educational settings will level the playing field for all students. In reality, access is only the starting point; true equity involves understanding and addressing the structural barriers—such as socioeconomic status, cultural capital, and systemic bias—that influence who benefits from digital tools and who is left behind. I intend to design activities that ask students to reflect on their own digital journeys—who gets to participate, who gets left out, and why? These activities might include guided discussions, reflective essays, or collaborative projects that encourage students to examine their personal experiences with technology and consider the broader social context. By surfacing these questions, we will begin to dismantle the idea that technology alone is a social equalizer. Instead, we’ll foster a critical awareness that digital literacy is intertwined with issues of power, access, and opportunity. Through this approach, students will not only gain technical skills but also develop the ethical and systemic awareness needed to advocate for more just and inclusive digital environments.

Systemic Awareness:

I have heard digital literacy being compared to a “literacy of survival.” Just as the Green Book once provided Black travelers with information needed to navigate a segregated America, my blog aims to equip students with the critical skills to survive in a digital world shaped by automated systems—some of which can reproduce and even deepen existing inequities. The Green Book was a tool for resilience, helping people avoid harm and find safe passage; in much the same way, critical digital literacy today must serve as a protective guide. This means teaching students not only how to use technology effectively, but also how to recognize, question, and resist the biases embedded within algorithms and platforms. By cultivating a “literacy of survival,” students gain the confidence to challenge unjust systems, advocate for fair treatment, and make informed decisions in digital spaces that often lack transparency and accountability. The goal is for students to leave not just as consumers or users of technology, but as critical thinkers and advocates for justice in the digital age. They should feel empowered to interrogate the motivations behind technological innovation, to demand greater equity in design, and to contribute to conversations about ethical technology use. Ultimately, this approach will prepare students to navigate—and transform—an environment where technology is both a tool and a terrain of social struggle.

In short, engaging with the New Jim Code isn’t just an academic exercise—it’s a call to action. It compels us to recognize how power, prejudice, and technology intersect, and to commit to building digital environments that are truly just for all. The concept of the New Jim Code, as articulated by Ruha Benjamin, challenges us to see technology not as inherently neutral or fair, but as a system that can reinforce social hierarchies if left unchecked. By addressing these issues within educational settings, we move beyond surface-level digital access and toward a deeper systemic awareness. This includes fostering discussions about who is included or excluded by digital infrastructures, encouraging students to reflect on their own experiences with technology, and designing activities that promote critical engagement with the ethical dimensions of digital tools. When students and educators work together to confront algorithmic bias and advocate for transparency and accountability, they help shape a digital landscape that respects the diversity and complexity of every user’s experience. In this sense, confronting the New Jim Code becomes an ongoing responsibility—one that requires vigilance, empathy, and collective action to ensure digital justice is not only a goal, but a reality.

Artifact Analysis: Navigating Digital Grammars

My project was introduced as an Ignite Talk—a high-stakes genre requiring 20 visual-heavy slides at 15 seconds each. This format challenged me, as the presenter, to distill complex arguments into concise, visually compelling moments, demanding both clarity and creativity. The rapid pacing not only heightened audience engagement but also forced a disciplined rhetorical approach, where every second and image must serve a clear purpose.

In my Ignite Talk, I focused on dissecting the “digital grammars” of three key platforms: WordPress, YouTube, and Spotify. “Digital grammars” refers to the unique set of rules, conventions, and affordances each platform provides, shaping how users communicate, construct meaning, and participate in digital spaces. My analysis examined how these platforms’ underlying designs—ranging from hyperlink structures and nonlinear navigation to audio cues—actively shape the rhetoric of their users.

By exploring these platforms, I aimed to reveal how their technical and aesthetic choices influence not only what is possible to say but also how users negotiate their identities and agency online. The Ignite Talk format provided an ideal vehicle for this exploration: its visual and time constraints mirrored the constraints and possibilities found within digital environments themselves. Through this lens, my project illuminated the often-invisible ways that platform architecture dictates patterns of rhetorical engagement, participation, and even exclusion.

Artifact Analysis Summaries

WordPress: Hyperlink Agency and Identity-Shaping:

  • WordPress stands out as a platform designed to support academic rigor and intellectual exploration, offering users the ability to craft content that goes beyond surface-level engagement. Its structure encourages the creation of dense, thoughtfully organized paragraphs that invite deeper reading and reflection. Central to WordPress’s digital grammar is the concept of “hyperlink agency,” which empowers students not merely to consume information but to actively curate, connect, and contextualize their own digital narratives.
  • This agency is exercised through the strategic placement of hyperlinks, which can guide readers to relevant sources, expand on arguments, and weave together complex webs of meaning. By choosing what to link, how to frame those connections, and which resources to highlight, students assert control over the representation of their academic and professional identities. The process transforms them from passive recipients of information into active participants, architects, and even gatekeepers of their online presence.
  • Additionally, WordPress’s affordances—such as customizable layouts, tagging systems, and comment features—further support identity-shaping. Students can tailor their blogs to reflect their interests, values, and expertise, fostering a sense of ownership and authenticity. This personalization is not just aesthetic; it signals to readers and potential employers the student’s ability to engage critically with digital tools and to communicate effectively in professional contexts.
  • In this way, WordPress becomes a microcosm for digital literacy, illustrating how technical and rhetorical choices intersect to shape both discourse and self-presentation. The platform’s emphasis on hyperlink agency mirrors broader trends in digital communication, where the ability to navigate, synthesize, and direct attention is central to participation in academic and algorithmic spaces. As students build and refine their WordPress sites, they will be practicing the competencies needed to perform their professional identities and negotiate their agency within an increasingly interconnected, digital world.

YouTube: Nonlinear Reading Practices:

  • By utilizing rapid cuts and visual storytelling, YouTube caters to broad demographics. However, its reliance on “nonlinear reading practices” and high pacing can impose a significant cognitive load. Unlike traditional platforms that encourage sequential, text-based engagement, YouTube’s digital grammar is characterized by fragmented, visually driven content. Users are prompted to jump between ideas, follow hyperlinks embedded in videos, or navigate playlists and suggested content that disrupt linear progression. This approach enables viewers to construct personalized pathways through information, fostering creativity and flexibility in learning. Yet, the speed and diversity of visual cues demand constant attention and synthesis, which may overwhelm less experienced users or those with limited digital literacy. Furthermore, the platform’s algorithmic recommendations often amplify this nonlinear experience, guiding users toward related or tangential topics, sometimes at the expense of depth and sustained focus. In effect, YouTube’s architecture both empowers and challenges its audience, shaping how knowledge is accessed, interpreted, and retained in digital spaces.

Spotify: Oral Literacy and Tone-Focused Rhetoric:

  • Spotify distinguishes itself from other platforms by prioritizing oral communication over visual or textual modes. Its digital grammar is shaped by audio cues—intonation, pace, volume, and rhythm—which foster a sense of conversational intimacy and emotional connection. This environment is particularly effective for storytelling, interviews, and narrative-driven content, where the nuances of voice and tone convey meaning beyond what written text can offer. Listeners engage with content in a way that feels personal and immediate, often forming a deeper bond with creators through recurring podcasts or music playlists.
  • However, this emphasis on oral literacy and tone presents challenges when it comes to instructional clarity and educational content. Complex pedagogical tasks that require step-by-step guidance, visual aids, or precise terminology can be difficult to communicate through audio alone. Unlike platforms such as WordPress, which allow for hyperlinks, diagrams, and structured text, Spotify’s format restricts the ability to reference external materials or provide detailed explanations. The absence of visual support means that listeners must rely entirely on their auditory processing skills, which may lead to misunderstandings or gaps in knowledge, especially for abstract or technical subjects.
  • Additionally, Spotify’s affordances—such as playlist curation, episode tagging, and interactive features like polls or Q&A—do offer some avenues for engagement and identity-shaping. Creators can tailor their podcasts to reflect their interests and expertise, while listeners can personalize their subscriptions and playlists to align with their values and learning preferences. Despite these options, the platform’s primary reliance on sound limits the scope for deeper academic exploration and critical engagement, often favoring entertainment, inspiration, or casual learning over rigorous, structured instruction.
  • In summary, Spotify’s digital grammar encourages rhetorical practices centered on voice, tone, and oral storytelling. While this approach is powerful for building community and sharing experiences, it can constrain the clarity and depth needed for complex educational tasks. The platform thus illustrates both the possibilities and limitations of audio-centered communication in digital spaces, shaping how users perform their identities and negotiate agency through sound.

Connecting Scholarly Influences

Broussard Concept | Influential Framework | How It Shows Up
Machine Bias | Algorithms of Oppression | Use of bias case studies and the “beauty” results experiment to spark critical conversations.
Justice System | The New Jim Code | Encourage students to find and question structural biases embedded in platforms.
Facial Recognition | Surveillance Networks | Goes beyond tool training, focusing on critical awareness of monitoring and surveillance.
Imaginary Grades | Platformization of Education | Reddit and WordPress are sites for critiquing educational norms and deficit views.
Ability & Technology | Multiliteracies & Mobile Equity | Ensure content is multimodal and accessible, using plain language for diverse abilities and equitable access.

Instructional Goals: Identity-as-Pedagogy and Algorithmic Pragmatics

Identity-as-Pedagogy

I intend to focus on how students negotiate their digital identities by exploring the tension between authentic self-expression and the performance requirements of institutional and algorithmic discourses. When students enter these spaces, they aren’t just learning; they are building a brand that satisfies two masters. They are encouraged to navigate platform expectations, develop an academic tone, and adhere to citation norms. In pursuing these goals, students are asked to adopt the persona of an “objective scholar,” suppressing their natural voice and cultural vernacular to fit a standardized mold of “intelligence.” My blog will teach them to evaluate how the architecture of digital interfaces carries rhetorical weight and encodes assumptions about them as users. By engaging with these practices, students are empowered to perform professional identities in public-facing digital environments, equipping them to understand and manage their presence and credibility online.

Algorithmic Pragmatics: The Algorithmic Hustle

Students negotiating digital platforms’ algorithmic discourses must navigate what is deemed valuable, visible, or permitted within these systems. These environments create pressure to demonstrate confidence and expertise, even when students are still learning or simply sharing ideas. As they optimize their online identities for engagement and professional credibility, the need to perform certainty can be especially taxing. There is minimal space for vulnerability or uncertainty: institutions often regard doubt as a lack of rigor, while algorithms render doubt invisible because it does not trend. Consequently, students frequently present a polished, confident version of themselves—often at the expense of their genuine, evolving, and authentic selves.

Students are taught to evaluate AI not as an oracle, but as a “47-year-old compliance lawyer”—the sterile, average voice that results from training data such as the US Patent and Trademark Office dataset. My approach will prompt students to critically assess AI outputs for the mediocrity inherent in statistical averaging. They will learn to dissect AI-generated paragraphs to uncover the missing nuanced human voice and to recognize that true insight often lies beyond algorithmic summaries. Students will be guided to negotiate co-authorship by treating AI as a fallible collaborator rather than an objective truth-teller, fostering a more thoughtful and critical engagement with technology in their academic and professional pursuits.

Implementation Strategy: Equity and Access on WordPress

To ensure this project reaches marginalized populations and dismantles “ivory tower” barriers, I have implemented the following features:

  • Mobile-Friendly Design: Prioritizing access for students whose primary or only reliable device is a smartphone.
  • Plain Language and Multimodal Content: Accommodating diverse literacy levels through text, audio, and video.
  • Public Access: Providing content freely without requiring institutional logins or “pay-to-play” barriers.
  • Cognitive Signposts/Scaffolding: Utilizing clear navigational markers to help novice or multilingual readers negotiate the dense paragraph structures of WordPress.

Critical Digital Literacy

Digital literacy is an ongoing negotiation, not a fixed destination. As we navigate an era of rapid AI integration, we must prioritize “cognitive justice” and challenge the dominant social attitudes of computer science that treat “whiteness” as an unremarked optimum. The goal of this project is to restore human agency to the center of the pedagogical experience, ensuring that students do not just use tools, but critically examine who profits from them and what biases they contain.

This means encouraging students to question the invisible forces that shape their digital experiences, including algorithms that determine what is visible, valuable, or permissible online. Rather than accepting these systems as neutral or authoritative, students will be taught to interrogate the ways in which algorithms reinforce existing power structures and marginalize certain voices. By cultivating an environment where vulnerability and uncertainty are valued, we create space for authentic learning and self-expression—countering the pressure to always perform certainty or expertise.

Furthermore, my project emphasizes the importance of democratizing access to digital platforms and educational resources. Through strategies such as mobile-friendly design, plain language, multimodal content, and public accessibility, I aim to dismantle barriers that have historically excluded marginalized populations from full participation. By providing clear cognitive signposts and scaffolding, I intend to support novice and multilingual readers in navigating complex digital texts, fostering greater equity in digital literacy.

Ultimately, the future of critical digital literacy depends on empowering students to become thoughtful, reflective co-authors in their interactions with technology. They will be encouraged to treat AI not as an infallible oracle, but as a collaborator whose outputs must be scrutinized for mediocrity and missing nuance. By recognizing that every algorithm is a social choice, students can learn to resist being reduced to mere data points and instead assert their agency, insight, and humanity within digital spaces. This approach can ensure that education is not confined to the “ivory tower,” but is accessible, relevant, and responsive to the needs of all learners—especially those most impacted by algorithmic bias and oppression.

Final Reflection

I believe we must move education out of the ivory tower and into the real world. My work is driven by the conviction that those most impacted by algorithmic oppression—those whose potential is discarded by a -0.3 socioeconomic bias score—must have a seat at the table. We cannot have faith in technology that treats human complexity as a “glitch.” I demand a pedagogy that prioritizes democratic participation and recognizes that every algorithm is a social choice. My goal is for students to realize they aren’t just using a platform; they are navigating a world where they must fight to be seen as more than just a data point.

References

Bawden, D. (2008). Origins and concepts of digital literacy. In C. Lankshear & M. Knobel (Eds.), Digital literacies: Concepts, policies and practices (pp. 17–32). Peter Lang.

Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.

Bjork, C. (2018). Integrating usability testing with digital rhetoric in OWI. Computers and Composition, 49, 1–14.

Bjork, O. (2011). Digital literacy in the university writing classroom. Computers and Composition, 28(4), 247–261.

Byrd, A. J., & Oleksiak, T. (2021). Contingent faculty and the remaking of higher education. University Press of Colorado.

Carthon, J. M. B. (2019). Addressing systemic inequities in education. Journal of Education Policy, 34(4), 567–582.

DePew, K. E. (2015). Preparing for the rhetoricity of OWI. In B. L. Hewett & K. E. DePew (Eds.), Foundational practices of online writing instruction (pp. 477–504). WAC Clearinghouse.

Gee, J. P. (2015). Social linguistics and literacies: Ideology in discourses (5th ed.). Routledge.

Graff, H. J. (1979). The literacy myth: Literacy and social structure in the nineteenth-century city. Academic Press.

Lankshear, C., & Knobel, M. (Eds.). (2008). Digital literacies: Concepts, policies and practices. Peter Lang.

Lankshear, C., & Knobel, M. (2011). New literacies: Everyday practices and social learning (3rd ed.). Open University Press.

Laquintano, J. (2020). Linguistic justice and AI-mediated writing. College Composition and Communication, 72(2), 243–270.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Ong, W. J. (1982). Orality and literacy: The technologizing of the word. Methuen.

Selber, S. A. (2004). Multiliteracies for a digital age. Southern Illinois University Press.

Selfe, C. L. (1999). Technology and literacy in the twenty-first century: The importance of paying attention. Southern Illinois University Press.

Street, B. V. (1984). Literacy in theory and practice. Cambridge University Press.

Warnock, S. (2009). Teaching writing online: How and why. National Council of Teachers of English. 

Yang, A. (2022). AI authorship and digital composition. Computers and Composition, 65, 102–118.
