A quiet but potent experiment unfolded on LinkedIn in November, sparking a widespread conversation about algorithmic fairness and the subtle ways artificial intelligence might perpetuate existing societal biases. At the heart of the inquiry was a collective effort by women professionals who, prompted by declining engagement on their posts, launched what they termed the #WearthePants campaign. This grassroots investigation involved numerous individuals temporarily altering their LinkedIn profiles to identify as male and observing the subsequent impact on their content’s reach and visibility.
The Genesis of a Digital Experiment
The catalyst for the #WearthePants initiative was a growing anecdotal perception among women users that their professional content was receiving significantly reduced exposure on the platform. Michelle, a product strategist with over 10,000 followers whose identity is known to TechCrunch but who requested anonymity for this article, recounted a perplexing disparity. Despite her substantial following, her posts often garnered impressions comparable to those of her husband, who commanded a far smaller audience of approximately 2,000 followers. The discrepancy, she noted, persisted even when she ghostwrote content for him, leading her to suspect gender as a potential, albeit unwelcome, variable.
In a direct test of this hypothesis, Michelle changed her profile’s gender to male and her name to Michael. The results, she reported, were striking: a 200% surge in impressions and a 27% increase in engagement. Her experience was echoed by Marilynn Joyner, a founder who had consistently posted on LinkedIn for two years. Joyner observed a marked decline in her posts’ visibility in recent months. Upon switching her profile gender from female to male, her impressions reportedly jumped by 238% within a single day. Similar findings were independently reported by a host of other women, including Megan Cornish, Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, and Lucy Ferguson, among others, further fueling the conversation around potential algorithmic bias.
The #WearthePants movement itself originated with entrepreneurs Cindy Gallop and Jane Evans, who initially recruited two male counterparts to post identical content. Their aim was to compare engagement on posts attributed to men versus women, especially given that their combined following of over 150,000 dwarfed the roughly 9,400 followers of the two men involved. Gallop reported that her post reached a mere 801 people, while the identical content posted by a male colleague reached 10,408 individuals, more than his entire follower count. These preliminary findings spurred wider participation, leading many women, including those like Joyner who use LinkedIn to market their businesses, to voice concerns and demand accountability from the platform.
LinkedIn’s Algorithmic Evolution and the Rise of AI
LinkedIn, established in 2002, has evolved from a simple online resume repository into the world’s preeminent professional networking platform, boasting hundreds of millions of users globally. Its core function revolves around connecting professionals, fostering career development, and enabling thought leadership through content sharing. As with all major social platforms, algorithms play a critical role in curating the user experience, determining what content appears in one’s feed, and influencing the visibility of posts.
The current wave of concern follows a significant announcement in August by Tim Jurka, LinkedIn’s Vice President of Engineering. Jurka disclosed that the platform had "more recently" integrated Large Language Models (LLMs) into its content surfacing mechanisms. This adoption of advanced artificial intelligence was intended to enhance the relevance and utility of content delivered to users, ideally improving the overall feed experience. However, it also coincided with the reported drops in engagement that prompted the #WearthePants experiment.
The integration of LLMs represents a broader industry trend. Many digital platforms are increasingly relying on sophisticated AI to personalize content, predict user preferences, and manage the vast influx of daily information. While promising enhanced user experience and efficiency, this shift also introduces new complexities and potential pitfalls, particularly concerning the inherent biases that can be encoded within AI systems trained on human-generated data. The opaque nature of these "black box" algorithms often makes it challenging for external observers, and sometimes even internal developers, to fully comprehend their decision-making processes.
The "Black Box" of AI Bias
In response to the mounting allegations of gender bias, LinkedIn issued a statement asserting that its "algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed." The company further clarified that "a side-by-side snapshot of your own feed updates that are not perfectly representative, or equal in reach, do not automatically imply unfair treatment or bias" within the Feed. Sakshi Jain, LinkedIn’s Head of Responsible AI and Governance, reiterated this stance in a November post, emphasizing that demographic data is solely utilized for internal testing purposes, ensuring that content from diverse creators competes on equal footing and that the scrolling experience remains consistent across various audiences.
However, experts in social algorithm design and data ethics offer a more nuanced perspective. Brandeis Marshall, a data ethics consultant, highlights that platforms operate through "an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly." While explicit sexism may not be directly programmed, implicit biases can readily emerge. Marshall suggests that simply changing a profile photo or name is but one of many levers influencing algorithmic behavior, with user interaction history and content engagement patterns also playing significant roles. "What we don’t know of is all the other levers that make this algorithm prioritize one person’s content over another. This is a more complicated problem than people assume," she observed.
A critical thread in the analysis of AI bias points to the origins of the training data itself. Researchers have consistently found evidence of human biases, including sexism and racism, embedded in popular LLMs. This is largely because these models learn from vast datasets of human-generated content, which inherently reflect societal prejudices, and because human involvement in post-training or reinforcement learning phases can inadvertently reinforce those biases. Marshall articulated the concern succinctly, stating that most of these platforms "innately have embedded a white, male, Western-centric viewpoint" due to the demographics of those who primarily train and develop the models. How any individual company implements its AI systems, however, typically remains shrouded in the proprietary secrecy of the "algorithmic black box."
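One common way researchers probe for this kind of embedded bias is a counterfactual "swap test": take the same piece of text, swap only the gendered names and pronouns, and check whether a scoring model rates the two versions differently. The sketch below illustrates that methodology in Python; the `score_post` stub, the term list, and the threshold are hypothetical stand-ins invented for illustration, not LinkedIn's system or any published audit.

```python
# Illustrative sketch of a counterfactual "swap test" for gendered bias in a
# text scorer. `score_post` is a hypothetical placeholder for whatever model
# assigns a quality/relevance score to a post; it is NOT LinkedIn's system.

GENDER_SWAPS = {
    "she": "he", "her": "his", "hers": "his", "herself": "himself",
    "Michelle": "Michael",  # example name pair echoing the article's experiment
}

def swap_gender_terms(text: str) -> str:
    """Return a counterfactual copy of `text` with gendered terms swapped.
    A real audit would handle casing, punctuation, and both swap directions."""
    words = text.split()
    swapped = [GENDER_SWAPS.get(w, GENDER_SWAPS.get(w.lower(), w)) for w in words]
    return " ".join(swapped)

def score_post(text: str) -> float:
    """Trivial length-based stub so the sketch runs; stands in for a learned
    (e.g., LLM-based) ranker whose internals we cannot see."""
    return min(len(text) / 280.0, 1.0)

def swap_test(posts: list[str], tolerance: float = 0.05) -> list[tuple[str, float]]:
    """Flag posts whose score shifts by more than `tolerance` after the swap.
    A systematic shift in one direction across many posts would suggest the
    scorer has absorbed gendered signals from its training data."""
    flagged = []
    for post in posts:
        gap = score_post(swap_gender_terms(post)) - score_post(post)
        if abs(gap) > tolerance:
            flagged.append((post, gap))
    return flagged

if __name__ == "__main__":
    post = "Michelle shares what she learned leading her first product launch."
    # With a trivial stub the gap is near zero; the point is the methodology.
    print(score_post(post), score_post(swap_gender_terms(post)))
    print(swap_test([post]))
```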
Beyond Gender: Broader Implications of Algorithmic Shifts
While the #WearthePants experiment focused on gender, the underlying issues have broader implications for how professional identity and communication are valued in the digital sphere. Michelle, after her successful "Michael" experiment, concluded that the system wasn’t "explicitly sexist" but rather appeared to treat communication styles commonly associated with women as "a proxy for lower value." This observation points to the subtler problem of implicit bias, in which an algorithm may favor certain linguistic or structural patterns without ever referencing gender directly.
Stereotypical male writing styles are often perceived as more concise, direct, and authoritative, whereas those typically associated with women are sometimes imagined to be softer, more emotional, or more collaborative. If an LLM is inadvertently trained to boost content that aligns with these male communication stereotypes, it creates a subtle, implicit bias that can disadvantage other valid and effective communication styles. As previous reports have indicated, many LLMs are indeed riddled with such ingrained biases.
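To see how a style preference can act as a proxy without any explicit gender signal, consider a toy heuristic that rewards short sentences and penalizes hedging language. Everything below, including the feature list and weights, is invented for illustration; it sketches the proxy mechanism being described, not how LinkedIn's ranker actually works.

```python
# Hypothetical illustration of style-as-proxy bias: a scorer keyed to surface
# features associated with "direct" writing never looks at the author's gender,
# yet systematically discounts other valid styles.

import re

HEDGES = {"maybe", "perhaps", "just", "i think", "i feel", "sort of", "kind of"}

def directness_score(text: str) -> float:
    """Toy 'directness' heuristic: shorter sentences and fewer hedges score higher."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.lower().split()
    avg_len = len(words) / max(len(sentences), 1)
    hedge_hits = sum(text.lower().count(h) for h in HEDGES)
    return max(0.0, 1.0 - 0.02 * avg_len - 0.1 * hedge_hits)

if __name__ == "__main__":
    direct = "Ship the feature. It cut churn by 12%."
    hedged = "I think we should maybe ship the feature, since it seems to have cut churn by about 12%."
    # Same underlying claim, very different scores under a style-keyed heuristic.
    print(directness_score(direct), directness_score(hedged))
```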
Sarah Dean, an assistant professor of computer science at Cornell, underscored the complexity, noting that platforms like LinkedIn frequently consider entire user profiles, alongside engagement behavior, when determining content promotion. This encompasses details like job history and typical content interactions. Dean explained that "someone’s demographics can affect ‘both sides’ of the algorithm — what they see and who sees what they post." LinkedIn corroborates this, stating its AI systems analyze hundreds of signals, including insights from a user’s profile, network, and activity, to determine content visibility.
The platform also emphasizes that user behavior significantly shapes the feed, with daily shifts in what people click, save, and engage with, along with preferences for different content formats, naturally influencing what appears. Chad Johnson, a sales expert active on LinkedIn, characterized the recent algorithmic changes as a deprioritization of superficial engagement metrics like likes, comments, and reposts. He suggested the LLM system "no longer cares how often you post or at what time of day," but rather prioritizes "whether your writing shows understanding, clarity, and value." This shift, while potentially promoting higher-quality content, also introduces new, less transparent criteria for success.
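Taken together, Dean’s and Johnson’s descriptions amount to a ranking score built from many signals, with the weight shifting away from raw engagement counts and toward model-estimated relevance and quality. The sketch below makes that idea concrete with invented signal names and weights; it is a hypothetical illustration of the reweighting concept, not LinkedIn's actual feature set or scoring function.

```python
# Purely hypothetical sketch of the kind of reweighting Johnson describes:
# engagement still counts, but content-quality signals dominate. Signal names
# and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class PostSignals:
    likes: int
    comments: int
    reposts: int
    relevance_to_viewer: float   # e.g., topical match with the viewer's interests, 0..1
    clarity_and_value: float     # e.g., a model-estimated quality score, 0..1

def legacy_score(s: PostSignals) -> float:
    """Old-style heuristic dominated by engagement counts."""
    return 1.0 * s.likes + 2.0 * s.comments + 3.0 * s.reposts

def quality_weighted_score(s: PostSignals) -> float:
    """Reweighted heuristic: a well-targeted, clearly written post can outrank
    a post that merely accumulated likes."""
    engagement = 0.1 * s.likes + 0.3 * s.comments + 0.3 * s.reposts
    quality = 60.0 * s.relevance_to_viewer + 40.0 * s.clarity_and_value
    return engagement + quality

if __name__ == "__main__":
    viral_but_shallow = PostSignals(likes=400, comments=30, reposts=20,
                                    relevance_to_viewer=0.2, clarity_and_value=0.3)
    niche_but_useful = PostSignals(likes=15, comments=5, reposts=1,
                                   relevance_to_viewer=0.9, clarity_and_value=0.8)
    print(legacy_score(viral_but_shallow), legacy_score(niche_but_useful))          # 520.0 28.0
    print(quality_weighted_score(viral_but_shallow), quality_weighted_score(niche_but_useful))  # 79.0 89.3
```

Under the invented weights, the niche but well-written post overtakes the viral one, which is the kind of shift users describe, for better or worse, as the new criteria become less transparent.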
The Search for Transparency and the Path Forward
The mixed user experiences following the algorithmic changes highlight a pervasive sentiment of confusion and frustration. Shailvi Wakhulu, a data scientist who averaged a post a day for five years, expressed demotivation after seeing her thousands of impressions dwindle to mere hundreds, a sentiment shared by her husband. While some men also reported engagement drops, others noted significant increases, attributing their success to writing on "specific topics for specific audiences," which they believe the new algorithm rewards.
However, the experience of Brandeis Marshall, who is Black, adds another layer to the bias discussion. She observed that posts about her professional expertise often perform more poorly than those specifically related to her race. "If Black women only get interactions when they talk about black women but not when they talk about their particular expertise, then that’s a bias," she asserted, pointing to another area where implicit biases might amplify existing social narratives rather than reward expertise on its merits.
LinkedIn acknowledges the increased competition on its platform, citing a 15% year-over-year increase in posting and a 24% rise in comments. The company advises that content focusing on professional insights, career lessons, industry news and analysis, and educational information related to work, business, and the economy tends to perform well.
Ultimately, the core demand from many users, like Michelle, is for greater transparency. The "black box" nature of content-ranking algorithms, however, presents a significant challenge. Companies often guard these algorithms as proprietary trade secrets, fearing that full disclosure would invite manipulation and gaming of the system, undermining the very fairness they strive for. This tension between commercial confidentiality and public accountability means that complete algorithmic transparency remains a distant, perhaps unattainable goal for users navigating an ever-evolving landscape of professional digital networking. The ongoing debate underscores the need for continuous vigilance, ethical development, and robust testing of AI systems to mitigate unintended biases and foster genuinely equitable digital environments.




