The Signal Beneath The System
- Khahlil Louisy
- Jun 25
- 9 min read
Updated: Jul 25

In 2016, Microsoft released Tay, an AI chatbot designed to learn from conversations with users on Twitter. The system was elegant in its simplicity: machine learning algorithms would analyze human interactions and gradually become more sophisticated through exposure to real conversation. But within 24 hours, Tay was posting inflammatory and offensive content, and Microsoft shut it down, calling the result "a coordinated attack by a subset of users."
Here's what Microsoft missed: Tay wasn't broken by hackers; it was working exactly as designed. The system had learned from its environment, an environment that included real human behavior, real human biases, and real human cruelty. The algorithm didn't malfunction; instead, it revealed something uncomfortable about the social system it was embedded in.
We talk about systems as if they were machines: predictable, mechanical, governed by clear rules and logical processes. But inside every system, whether it's an algorithm, an organization, or a market, are people. And people bring stories, biases, relationships, and patterns that no flowchart can capture. Learning to read these human signals is fundamental to capturing what's really happening beneath the surface.
Facebook, perhaps, provides a more salient example. In 2004, Mark Zuckerberg launched the platform with a simple mission: to connect college students on campus. The platform was designed to be a digital directory, a way for students to find and connect with classmates. It was clean, simple, and focused. Twenty years later, Facebook has become something entirely different. It is now a global infrastructure for information, politics, commerce, and social connection that shapes elections, influences mental health, and determines what billions of people see and believe. Zuckerberg didn't set out to build a machine for political polarization or teenage anxiety, yet that's partly what he created. The system he built wasn't the system he designed, and the signals that could have warned him were there from the beginning, if he had known how to read them.
This is the fundamental challenge facing everyone who builds systems today, whether you're launching a startup, creating a new organization, or designing public policies. You think you're building the system you designed, but what you're actually building is the system that emerges from the collision between your design and human reality.
Most builders are so focused on their blueprint, the features, the org charts, the policy frameworks, that they miss the signals telling them what they're actually creating. They optimize for their intended outcomes while ignoring the unintended consequences that often become the real story. Learning to read these signals isn't just about avoiding disasters; it's about understanding the difference between building what you want and building what the world needs. Though, of course, a more pessimistic view may be that the builder does in fact recognize these signals and chooses to ignore them altogether.
The mythology of systems-as-machines runs deep in how we think about everything from corporations to governments to artificial intelligence. We diagram org charts as if they explain how decisions actually get made, write algorithms as if code were ideology-free, and design markets as if they operate in some frictionless theoretical space. This mechanical thinking isn't just wrong, it's dangerous: it obscures the human dynamics that actually drive outcomes, making us blind to the signals that could help us understand what's really going on. Policymakers drafting theories of change for intervention programs often fall victim to this.
Consider how hiring algorithms were supposed to remove human bias from recruitment, with companies like Amazon building systems to screen resumes and rank candidates objectively. The problem? The AI was trained on historical hiring data, which reflected decades of human biases. The algorithm learned to penalize resumes that included words like "women's" (as in "women's chess club captain") because the historical data showed that men were more likely to be hired. The system wasn't neutral; it was amplifying and automating existing biases while hiding them behind a veneer of technological objectivity. The signal beneath the system was the accumulated weight of human prejudice, but the mechanical framing made it invisible.
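To make the mechanism concrete, here is a minimal, hypothetical sketch, built on entirely synthetic data, of how a model trained on biased historical hiring decisions ends up penalizing a gendered term. It is not Amazon's system or anyone else's; it only shows how the bias gets learned.

```python
# Minimal, hypothetical sketch: train a resume screener on historically
# biased hiring decisions and inspect what it learns. All data is synthetic
# and exists only to illustrate the mechanism, not any real system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" outcomes: in the past, resumes mentioning the
# women's chess club were hired less often, for reasons unrelated to skill.
resumes = [
    "captain of chess club, python, statistics",            # hired
    "chess club member, java, databases",                   # hired
    "captain of women's chess club, python, statistics",    # not hired
    "women's chess club member, java, databases",           # not hired
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" is negative: the model has
# encoded the historical bias, not any signal about ability.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
```

The negative weight on "women" comes from nowhere but the historical labels; the model has simply memorized who used to get hired.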
Every technological system carries the DNA of the social system that created it. The trick is learning to see the human patterns embedded in the code, the processes, and the outcomes. Take recommendation algorithms on social media platforms. On the surface, they seem like optimization engines designed to show users content they'll find engaging, but dig deeper and they reveal themselves to be mirrors of human psychology, reflecting our tendency toward confirmation bias, our appetite for outrage, and our tribal instincts.
The engineers at Facebook didn't set out to polarize society; they set out to maximize engagement, which seemed like a reasonable proxy for user satisfaction. The human signal they missed, though, was that anger and fear are among the most engaging emotions. The algorithm learned to serve us content that made us mad because mad people click, share, and comment more than happy people. The real signal beneath the Facebook system wasn't "users want to see diverse perspectives" or "users want accurate information"; it was "users respond strongly to content that confirms their existing beliefs and triggers emotional reactions." The algorithm is working perfectly; it simply was, and still is, optimizing for the wrong thing.
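A toy sketch of that dynamic, with invented probabilities and weights rather than any platform's real model: when the objective is engagement alone, outrage wins the ranking by construction.

```python
# Toy ranker with invented probabilities and weights, not any platform's
# real model. The objective only "sees" engagement, so outrage wins.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float    # predicted probability of a click
    p_share: float    # predicted probability of a share
    p_comment: float  # predicted probability of a comment

def engagement_score(post: Post) -> float:
    # Nothing here rewards accuracy, nuance, or civic health.
    return 1.0 * post.p_click + 3.0 * post.p_share + 2.0 * post.p_comment

feed = [
    Post("Calm, accurate local news report", 0.05, 0.01, 0.01),
    Post("Outrage bait about the other side", 0.20, 0.15, 0.25),
    Post("Friend's vacation photos", 0.10, 0.02, 0.05),
]

# Rank the feed purely by predicted engagement.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 2), post.text)
```

Nothing in the objective knows about truth or civic health, so nothing in the ranking can account for them.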
Algorithmic Bias as Social Archaeology
Here's a radical way to think about algorithmic bias: it's not a bug, it's a feature. Not in the sense that it's desirable, but in the sense that it reveals something that was already there. Algorithms don't create bias; they uncover and amplify biases that exist in the social systems they're trained on.
When criminal justice algorithms disproportionately flag Black defendants as high-risk for reoffending, they're not malfunctioning; they're reflecting the biases embedded in arrest records, sentencing patterns, and policing practices. These algorithms become a form of social archaeology, revealing the accumulated impact of decades of discriminatory practices.
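One way to see this, under deliberately simplified assumptions, is a toy simulation in which two groups have identical true reoffense rates but face different levels of police scrutiny. A score computed from arrest records alone reports different "risk" for the two groups, because arrests measure policing as much as behavior.

```python
# Hypothetical simulation: two groups with identical true reoffense rates,
# but one is policed (and therefore re-arrested) more heavily. A score
# built from arrest records learns the policing pattern, not the behavior.
import random

random.seed(0)

def observed_risk(policing_rate: float, n: int = 100_000) -> float:
    rearrests = 0
    for _ in range(n):
        reoffends = random.random() < 0.30                      # same true rate for both groups
        detected = reoffends and random.random() < policing_rate
        rearrests += detected
    return rearrests / n

# "Risk" measured from arrest data diverges even though behavior is identical.
print("Group A observed risk:", observed_risk(policing_rate=0.6))
print("Group B observed risk:", observed_risk(policing_rate=0.3))
```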
The same pattern shows up everywhere. When image recognition systems have trouble identifying people with darker skin, they're reflecting the fact that their training data came from a world where photography and digital imaging were developed primarily by and for white people. When voice recognition systems struggle with accents, they're reflecting the linguistic biases of their creators and training data. The signal beneath these systems is the social reality of who gets to be "normal" in our society, and the technology makes visible patterns of exclusion that were previously hidden or deniable.
As technological and social systems become more intertwined, we're seeing new forms of network effects that amplify human behavior in unexpected ways. The technical term is "algorithmic amplification," but the human reality is that our individual biases and behaviors get magnified and accelerated by the systems we interact with.
Consider how conspiracy theories spread online. The technical explanation focuses on algorithmic recommendation systems and filter bubbles, but the human signal is more complex: people are drawn to stories that make them feel special, that explain complex problems with simple villains, and that provide a community of like-minded others. The algorithm doesn't create these human needs, but it does make it easier for people to find each other and for extreme ideas to reach mainstream audiences. The system amplifies existing human patterns in ways that can quickly spiral out of control, and this is why purely technical solutions to social problems often fail. You can't fix misinformation by tweaking recommendation algorithms if you don't address the underlying human need for meaning, community, and simple explanations for complex problems.
Reading Power Dynamics in Data
One of the most important skills for reading systems is understanding how power dynamics show up in data and processes. Who gets to define what counts as normal? Whose experiences are captured in the training data? Whose voices are included in the decision-making process? These questions matter because systems inevitably reflect the power structures of the organizations and societies that create them. When tech companies build AI systems with teams that are predominantly white, male, and from affluent backgrounds, those systems will reflect that perspective. When government agencies design social services with input only from administrators and politicians, those systems will reflect their priorities rather than the needs of the people they're supposed to serve. The most important signals beneath any system are always: whose perspectives are included, whose are excluded, and how does that shape what the system optimizes for?
Every system embeds stories about how the world works and what matters. Corporate org charts tell stories about hierarchy and authority, market structures tell stories about value and worth, and algorithms tell stories about what's normal and what's deviant. These stories matter because they shape behavior. When performance management systems focus on individual metrics, they tell a story that success is about individual achievement rather than collective effort. When social media platforms prioritize engagement metrics, they tell a story that attention is more valuable than truth or civic health.
Perhaps the most important dynamic to understand is how humans and systems create feedback loops that amplify both positive and negative patterns. Humans create systems that reflect their biases and assumptions; those systems then shape human behavior, which creates new data that reinforces the original biases. This is why diversity in AI teams isn't just about fairness, it's about system performance. Homogeneous teams create systems that work well for people like them but fail for everyone else. Those failures create bad data, which makes the systems worse, which then creates more failures. Breaking these feedback loops requires intentional intervention at multiple levels, including diverse teams, representative data, inclusive design processes, and ongoing monitoring for unintended consequences.
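Here is a deliberately crude sketch of that loop, with invented numbers: a product trained mostly on one group's data works better for that group, so that group keeps using it, so the next training set is even more skewed.

```python
# Deliberately crude model of a bias feedback loop; every number is invented.
# A product trained mostly on one group's data works better for that group,
# so that group keeps using it, so the next training set is even more skewed.
def run_feedback_loop(initial_share: float = 0.7, rounds: int = 5) -> None:
    data_share = initial_share  # fraction of training data from the majority group
    for r in range(1, rounds + 1):
        accuracy_majority = 0.5 + 0.5 * data_share          # more data, better accuracy
        accuracy_minority = 0.5 + 0.5 * (1 - data_share)    # less data, worse accuracy
        # Usage (and hence next round's training data) roughly tracks how
        # well the product works for each group.
        total = accuracy_majority * data_share + accuracy_minority * (1 - data_share)
        data_share = accuracy_majority * data_share / total
        print(f"round {r}: minority accuracy={accuracy_minority:.2f}, "
              f"next-round majority data share={data_share:.2f}")

run_feedback_loop()
```

Each round, the minority group's accuracy falls and its share of the training data shrinks; nothing malicious happens at any single step, yet the gap widens.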
The same principles apply to understanding organizational systems. Companies create elaborate processes, policies, and structures that look rational on paper but often work very differently in practice. Consider how many organizations have "open door" policies that theoretically allow anyone to raise concerns with senior leadership. The policy exists, the process is documented, but the human signal often tells a different story. Who actually uses these channels? What happens to people who raise uncomfortable issues? And how do informal power networks shape who gets heard? Reading organizational systems requires looking past the official charts and policies to understand the informal networks, cultural norms, and incentive structures that actually drive behavior. The signal beneath those systems might be fear, favoritism, or genuine commitment to values, but rarely is it what the employee handbook says it is.
So now we have an integration dilemma. As technological and social systems become more integrated, the challenge of reading signals becomes more complex but also more important, because we're not just dealing with human bias or algorithmic bias; we're dealing with hybrid systems where human and machine elements amplify each other in unpredictable ways. This integration is happening everywhere: criminal justice systems that combine human judgment with algorithmic risk assessment, healthcare systems that use AI to support medical decision-making, and financial systems that blend human traders with automated trading algorithms. Each of these hybrid systems creates new forms of accountability gaps. When something goes wrong, it's often unclear whether the problem was human error, algorithmic bias, or the interaction between them, which makes it easier for organizations to avoid responsibility while making it harder for individuals to seek redress.
But how do you develop better "signal intelligence," which we can define as the ability to read the human patterns beneath systems? I would like to propose some practical approaches (a toy example of the first one follows this list):
First, look for gaps between stated and revealed preferences. What do people say they value versus what they actually reward? What behaviors do systems incentivize versus what they claim to promote?
Second, pay attention to edge cases and exceptions. How does the system handle situations it wasn't designed for? Who gets excluded or harmed by the normal operation of the system?
Third, follow the data trail. What data is collected, what's ignored, and how do those choices shape outcomes? Who decides what counts as success?
Fourth, map the informal networks. Who actually has influence? How do decisions really get made? What are the unwritten rules?
And finally, listen to the people closest to the impact. The people using systems day-to-day often understand their real dynamics better than the people who designed them.
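As a toy illustration of the first practice, with every name and number invented, you can compare an organization's stated values against what its promotion data actually rewards:

```python
# Toy example of the first practice; every name and number is invented.
# Compare what an organization says it values with what its promotion
# data shows it actually rewards.
stated_values = ["collaboration", "mentorship", "individual output"]

# Hypothetical records: how promoted employees scored on each value in the
# year before promotion.
promotions = [
    {"collaboration": 0.4, "mentorship": 0.3, "individual output": 0.90},
    {"collaboration": 0.5, "mentorship": 0.2, "individual output": 0.80},
    {"collaboration": 0.3, "mentorship": 0.4, "individual output": 0.95},
]

revealed = {
    value: sum(p[value] for p in promotions) / len(promotions)
    for value in stated_values
}

# If "individual output" dominates, the system's real story is not the one
# in the employee handbook.
for value, score in sorted(revealed.items(), key=lambda kv: -kv[1]):
    print(f"{value}: rewarded at {score:.2f}")
```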
The Future of Human-System Integration
As we go deeper into an era where the line between human and machine decision-making is increasingly blurred, AI systems are becoming more sophisticated at mimicking human judgment, while humans are becoming more dependent on algorithmic support for complex decisions. This integration creates new opportunities and new risks. On one hand, we can potentially design systems that combine the best of human intuition and machine analysis; on the other, we risk creating systems that are too complex for anyone to fully understand or control. The solution lies in maintaining human agency and accountability even as we delegate more decisions to automated systems. This requires not just technical safeguards but also organizational processes that keep humans in the loop and social norms that hold people responsible for the systems they create and deploy.
Marshall McLuhan famously said that "the medium is the message," meaning that the characteristics of communication technologies matter as much as the content they carry. In our age of algorithmic systems, we might say "the signal is the message."
The human patterns embedded in our systems, including the biases, assumptions, and power dynamics, are not just bugs to be fixed; they're signals that tell us who we are as a society, what we value, and how we relate to each other. Learning to read these signals is not just a technical skill, it's a form of social literacy that's becoming essential for navigating a world where human and machine systems are ever more intertwined. Whether you're a policymaker trying to understand the unintended consequences of new regulations, a business leader trying to build more inclusive organizations, or a citizen trying to make sense of the algorithmic systems that shape your daily life, the ability to see the human story beneath the mechanical surface is crucial.
The systems around us are not neutral machines; they're extensions of human choices, human biases, and human values, and the signal beneath the system is always, ultimately, us. If we want to build better systems, we need to start by understanding ourselves better. The most important question isn't "how do we fix the algorithm?" It's "what kind of people do we want to be, and how do we design systems that help us become that?"
The signal beneath the system is pointing toward an answer; we just need to learn how to read it.



