Erin White

Trans-inclusive design for the Prosocial Design Network

May 13, 2025

The kind folks at the Prosocial Design Network asked me to be a guest for April’s “pro-social,” a very low-key virtual gathering for folks interested in creating more inclusive digital spaces.

More about PDN:

The Prosocial Design Network connects research to practice toward a world in which online spaces are healthy, productive, respect human dignity, and improve society.

Here’s their recap of the event, and a video of our Q&A segment (15 minutes).

They shared the questions in advance, which I very much appreciated! Here are my prepared notes - we certainly didn’t cover it all during the call.

What principles should be front of mind in designing inclusive digital spaces, particularly social spaces?

First off, hire people with different lived experiences from yours. Hire trans people. Hire Black people. Hire disabled people. Hire disabled Black trans people. Let them cook. Listen to them. Otherwise you are, as my wife says, “Pissing into the wind.”

Prioritize accessibility. Ensure spaces are accessible for users on many devices, using different device settings, in different contexts in the real world, including with assistive technologies. Too often, accessibility is an afterthought. Shift left and allow it to drive your design and architecture decisions from the jump. For social apps, this includes setting smart defaults - e.g. requiring folks to add alt text if they’re uploading images.
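
Here’s a rough sketch of what that kind of default could look like - the names (ImageUpload, validateUpload) are mine for illustration, not from any particular framework:

```typescript
// Hypothetical upload check: alt text is required by default,
// not buried as an optional extra.
interface ImageUpload {
  file: File;
  altText: string;
}

function validateUpload(upload: ImageUpload): string[] {
  const errors: string[] = [];
  if (upload.altText.trim().length === 0) {
    // Block publishing until the image has a description,
    // rather than silently posting an unlabeled image.
    errors.push("Please describe this image for folks using screen readers.");
  }
  return errors;
}
```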

Keep your tech stack light and boring. Design for a 4-year-old Android phone on a 3G connection, with bandwidth paid for by the megabyte. Bloated pages take longer to load, which harms or disincentivizes participation from folks on slower connections or older tech.
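
One way to hold that line is a performance budget checked in CI. Here’s a minimal sketch in the shape of Lighthouse’s budget.json - the numbers are my own assumptions, so tune them to your actual audience:

```typescript
// Illustrative performance budget (Lighthouse budget.json shape).
// resourceSizes budgets are in kilobytes; timings budgets are in milliseconds.
const budget = [
  {
    path: "/*",
    resourceSizes: [
      { resourceType: "script", budget: 150 }, // JavaScript payload
      { resourceType: "total", budget: 500 },  // everything on the page
    ],
    timings: [
      { metric: "interactive", budget: 5000 }, // time to interactive on a slow connection
    ],
  },
];

export default budget;
```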

Design for trust, privacy and safety. Design for people to be able to protect their privacy, control what they share and what they see.

Allow people to define themselves. The way you do it ain’t the way everybody else does it.

You may have noticed this isn’t necessarily specific to trans-inclusive design. That’s because this is the kind of work that, by considering folks in marginalized positions, benefits everyone. It’s the curb cut effect for accessibility AND privacy AND safety AND inclusion. By focusing our design on the margins we include everyone between them too.

Since you wrote your article in 2019, what fails do sites continue to make when it comes to trans-inclusive design?

The biggest fail I continue to see is that folks are asking for gender or sex information at all, because it is usually not needed. Asking usually means this data is being collected into a database somewhere and brokered for money.

I don’t need to tell you my gender to book a hotel. Why are you asking for it?

The unnecessary asking for gender is getting worse now that we’re seeing a rollback of the progress inclusive design had made over the past few years. We’d been doing so well! The U.S. Web Design System had a really thoughtful pattern for asking about gender that was starting to roll out across government forms. But now agencies are in the process of removing that inclusive pattern and replacing it with a binary option for sex.
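
For reference, the spirit of that inclusive pattern looks something like this sketch - my own illustration of the approach, not the literal USWDS code: free text, clearly optional, with an explicit way to decline.

```typescript
// Illustrative sketch of an inclusive gender question:
// only ask when there's a real need, let people self-describe,
// and always offer a way to decline.
interface GenderQuestion {
  label: string;
  required: false;    // never force an answer
  inputType: "text";  // free text, not a dropdown of someone else's categories
  declineOption: string;
}

const genderQuestion: GenderQuestion = {
  label: "What is your gender?",
  required: false,
  inputType: "text",
  declineOption: "Prefer not to answer",
};
```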

These design system changes come on top of removing all references to being trans from websites and no longer offering services or information for trans people. It’s a very literal erasure of trans identity. It’s really upsetting, scary, and for trans folks, existential.

I encourage practitioners to plan ahead for the moment when you are asked to do something that you know is wrong. That day will come. What will you say? What will you say no to? What’s your red line?

What new concerns do you have with AI and do you have any advice for tech folk?

I have a lot of concerns with AI. I do think there are useful applications for the technology, but 99.99% of the applications out there are either actively predatory, passively harmful, gratuitous and mid, or all of the above. And they are all harming the environment and our health.

  1. Garbage in, garbage out. AI is pattern recognition. And the patterns it’s trained on are filled with bias! Bias harms people who are in the minority. According to a recent study out of Stanford:

    “synthetically generated texts from five of the most pervasive LMs …perpetuate harms of omission, subordination, and stereotyping for minoritized individuals with intersectional race, gender, and/or sexual orientation identities.” - Laissez-Faire Harms: Algorithmic Biases in Generative Language Models (2024)

  2. …and this includes code. When AI is trained on design patterns or code that is widely popular, but that also includes a lot of code that’s inaccessible or unusable, the resulting code is also inaccessible or unusable. We should also be extremely wary of any AI tool that claims it can refactor a codebase written in a language that most modern coders are not using.

  3. AI is a tool of capitalism and state violence. Generative AI is being used to consolidate, analyze, and generate information in a way that can be used to surveil, prosecute, incarcerate, and kill people.

  4. AI is seen as a smart humanoid. People tend to believe algorithms more than each other as task complexity increases - but we also tend to view AI as human-like. We anthropomorphize AI tools by giving them human-like names or designing them as chat prompts (rather than command prompts or even search boxes), which leads us to believe that we are talking with another living being rather than a computer. It also leads some folks to think that AI will become sentient. It won’t, actually - but if enough people believe it has, the effect is much the same, which is perhaps worse.

  5. AI is mid. And by that, I mean that what it produces is functionally a middle-of-the-road, average, non-“edge case” output. This flattens our differences and creates a “norm” which actually does not exist. Individual people aren’t “normal”, but AI sure likes to tell us that’s a thing, and that really harms people who are far from that norm. Saying that everyone is the same denies the fact that we are all weird as hell. It’s our differences that make us stronger, more creative, better.

  6. Critique is painted as fear. Proponents of AI say that skeptics are “afraid” of AI or don’t understand it. I, for one, am not afraid of it - I’m frustrated by how folks are positioning it as the solution to all our problems. I do understand it! I know too much. Dismissing AI detractors as “fearful” allows proponents to dismiss valid critique outright rather than engage with it. It’s a strawman argument.

    If you are AI-critique curious:

My AI wishlist for technologists

If you don’t need to use AI, don’t. Do something else. Turn off default settings that include AI. Switch your search engine to DuckDuckGo and turn off AI features. Turn off Apple Intelligence. Turn off Google Gemini. Take a harm-reduction approach to your tech use. (FWIW, this is my approach to eating animal food products. I’m not vegan or even completely vegetarian, but I don’t build my food habits around animal products, which reduces how many animal products I consume.)

Don’t make AI your main thing. Charles Eames said, “Never delegate understanding.” Don’t rely on AI alone to make decisions about what’s true, certainly not for core parts of your work.

Understand the bias that ships with your LLM. Do everything you can to critically evaluate outputs for inaccessible, biased or otherwise harmful content. Right-size your models and turn down the “creativity” setting.
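
As one concrete example, here’s a sketch using the OpenAI TypeScript SDK - the idea transfers to other providers, and the model name is illustrative, not a recommendation:

```typescript
import OpenAI from "openai";

// Reads OPENAI_API_KEY from the environment.
const client = new OpenAI();

// A lower temperature turns down the "creativity": outputs stay closer
// to the most likely, more predictable completions. Right-size by picking
// the smallest model that does the job.
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  temperature: 0.2,
  messages: [
    { role: "user", content: "Summarize this alt text guidance in two sentences." },
  ],
});

console.log(response.choices[0].message.content);
```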

Advocate for sustainable, safe AI, including regulation and environmental mitigation measures. Individual choices get us down the road a piece, but what we really need is to mitigate the impacts at a high level.

Engage your discomfort. If someone critiques AI and it makes you uncomfortable, listen to understand and be open to changing your mind. Most of the folks who are warning about the harms of AI are minoritized people - Black and brown women, queer and trans people. Believe them!

Are there any questions you think researchers could help answer regarding trans-inclusive design?

This is an excellent question. Some of the things I’d ask folks to understand include…

What are ways we can design for trust and safety? How can we create digital spaces where people feel safe? What are some of the ways we can foster trustworthiness?

What would trans-informed design look like? How can we use the very concept of transness - boundary-crossing, liminality, non-binary thinking - to expand our thinking about how technologies can be used, and to what ends?

Oliver Haimson is studying this very thing, and his new book Trans Technologies is available for free, open access, from MIT Press.

How might trans-inclusive digital design change IRL service design? We’re already seeing this as part of our work in Civic Tech, moving from automation to true digital transformation. We all know that real-world constraints map to technological design choices. How then do we transform the tech stack and use that to change our very service delivery model?