Tech built by white engineers is uncritical of its own biases and is thus dangerous for Black folk.

-Daniel Johnson

by Daniel Johnson

The tech world is an incredibly white space, and there is a mountain of evidence showing why. Hiring practices fall in line with broader findings that American hiring culture is still virulently anti-Black. Any cultural space conducive to homogeneity creates and maintains blind spots. In this case, those blind spots are sometimes literal: technology that is supposed to recognize Black faces and Black skin tones is often unable to do so. In other cases, Blackness becomes hypervisible or imbued with racial stereotypes.

This also means that technology used to build databases, such as criminal record databases paired with facial recognition software, will remain biased against Black people. As The Guardian explains, tech built by white engineers is uncritical of its own biases and is thus dangerous for Black folk: “Law enforcement agencies often don’t review their software to check for baked-in racial bias – and there aren’t laws or regulations forcing them to. In some cases,… law enforcement agencies are even obscuring the fact that they’re using such software.”

In addition to these failures, technology that claims to be able to “read” emotion also fails the racial bias test. A study described in The Conversation ran two facial analysis programs on photos of 400 NBA players and found that they consistently assigned Black players more negative emotional scores than white players, regardless of whether or not they smiled in the images. “Across all the NBA pictures, the same pattern emerges. On average, Face++ rates black faces as twice as angry as white faces. Face API scores black faces as three times more contemptuous than white faces. After matching players based on their smiles, both facial analysis programs are still more likely to assign the negative emotions of anger or contempt to black faces.”

What this means, ultimately, is that unless there is a disruption in who writes the code and who oversees the programming and testing of these features, new technology throughout the sector will function as yet another arena in which Black people must find ways to combat white supremacy and anti-Blackness.

Recently, Data for Black Lives held a conference at MIT that broached, among other things, surveillance, community-held data, activism, safety, and policy creation. The presenters were diverse, with over 80% being people of color and 60% being women, numbers in direct contradiction to how most of the tech industry is structured.

Part of the appeal of this conference was the introduction of groups like Black in AI, founded by Timnit Gebru and a group of her friends. Gebru told MIT Technology Review: “The reason diversity is really important in AI, not just in data sets but also in researchers, is that you need people who just have this social sense of how things are. We are in a diversity crisis for AI. In addition to having technical conversations, conversations about law, conversations about ethics, we need to have conversations about diversity in AI. We need all sorts of diversity in AI. And this needs to be treated as something that’s extremely urgent.” She went on to detail how difficult it is for her as a Black woman working in artificial intelligence. At a conference she attended in 2016, out of 8,500 attendees, she counted only six other Black people.

Since artificial intelligence is being hailed as the future by tech outlets such as WIRED, its development has significant implications for Black people if we aren’t allowed to shape its direction. Without our input, AI will be built in ways the state can adapt and justify to crush us.

Groups like COLET (Collective for Liberation, Ecology, and Technology) are organizing against the inclusion of Black people in facial recognition software. A self-described community-based, radical feminist, and anti-capitalist collective, they are passionate about this work. Their blog reads:

I consider it obvious that most if not all data collected by police to serve their inherently racist mission will be severely biased. It is equally clear to me that no technology under police control will be used to hold police accountable or to benefit Black folks or other oppressed people. Even restricting our attention to machine learning in the so-called ‘justice’ system, examples abound of technology used to harm us, such as racist predictive models used by the courts to determine bail and sentencing decisions — matters of freedom and captivity, life and death. Accordingly, I have no reason to support the development or deployment of technology which makes it easier for the state to recognize and surveil members of my community. Just the opposite: by refusing to don white masks, we may be able to gain some temporary advantages by partially obscuring ourselves from the eyes of the white supremacist state.

It is this reading that most closely aligns with a vision of radical freedom from the ever-watchful eyes of the state. It comes from abolitionists who wish to push the discussion beyond a framing of inclusion and toward one of being absolutely free of the state’s prying eyes and heavy penalization.

The state weaponizes technology against Black bodies. Having Black people design software that can very well be turned into objects of our destruction is not freedom, and certainly is not something that those who see abolition as the way forward can live with.

There is a tension between creating technology that is mindful of Black existence and contending with the fact that, no matter what technology is created, it is capable of being perverted by the state into an instrument of anti-Black harm.

That tension is the difference between pursuing neoliberal solutions to state violence in the assumed interest of fairness and creating radical solutions to state violence. It is the difference between building programs that do not tip the scales against Black people and disenrolling or disengaging from the state’s collection of both the digital and the corporeal Black body.

If the state is allowed the neoliberal solutions it desires to piggyback off of, then its aim of destroying as many Black bodies as possible will be well within reach. This is an outcome that those of us who are deeply committed to the liberation of Black people will never cosign, and it is why abolition work will need to take technology into account moving forward.


Daniel Johnson studies English and creative writing at Sam Houston State University. In his spare time, he likes to visit museums and listen to trap music. His work can be found at The Root, Black Youth Project, Racebaitr, Those People, and Afropunk.