‘Exclusion compounds’: Women in tech push to shape AI before it’s too late

Speakers at the Women in Tech Regatta in Seattle said AI is at risk of repeating longstanding patterns of exclusion, from biased hiring tools to gaps in who's shaping AI strategy.

Panelists during a session at the Women in Tech Regatta in Seattle on Wednesday. From left, moderator Sarah Studer of the University of Washington, Maria Martin of Nordstrom, Nandita Krishnan of Adobe, and Anya Edelstein of Highspot. (WiT Regatta Photo)

Women have long been left out of the datasets and decisions shaping everything from car safety to medical diagnoses. Industry leaders warn a rushed approach to artificial intelligence risks repeating those patterns.

That was a central message at this week’s Women in Tech Regatta in Seattle, where speakers urged earlier and broader participation in AI development as adoption accelerates.

“Exclusion compounds over time and becomes much harder to detect,” Anya Edelstein, learning experiences manager at Seattle-based Highspot, said during an AI leadership panel on Wednesday. “If your perspective isn’t taken into account in the room when those decisions are initially made, it’s harder to make a change later down the road.”

Over the past few years, researchers have sought to mitigate the failures of machine-learning models trained on biased or skewed datasets, including misdiagnosis of kidney failure in women. In the meantime, women worldwide are about 20% less likely than men to engage with AI tools, furthering the training disparity.

In the tech field, at least, the AI gender gap seems to be closing. It’s a noteworthy shift as companies race toward automation at scale, and as concerns about misinformation and data security swirl around Anthropic and OpenAI going public.

Women are leading AI strategy – with caution

Most women in senior roles (80%) are driving AI strategy in the workplace, where they prioritize responsible adoption over speed, according to a poll of more than 1,700 industry leaders published earlier this month by Chief, a women-focused leadership network.

This is often in contrast to company pressures to deploy AI tools and strategies at an increasingly rapid pace, said Maria Martin, product management director at Nordstrom. 

“There’s less runway between a decision getting made, and a decision scaling,” Martin said at the panel Wednesday. “It’s important to get ahead and get involved early.”

In the group of women Chief surveyed, 71% were first at their companies to flag AI risks.  

“If we’re not intentionally creating interventions every step along the way,” said Edelstein, “bias has an opportunity to creep in.”

Getting women into the room

The problem with bringing qualified women into AI leadership and decision-making spaces may start with hiring. At least two-thirds of recruiters use AI to screen candidates, a process shown to reproduce race and gender bias, often intersectionally. 

Attendees connect at the Women in Tech Regatta in Seattle on Wednesday. (Courtesy of WiT Regatta)

In 2024, researchers at the University of Washington found that AI resume screeners chose masculine names over feminine ones 89% of the time, and white-associated names over Black-associated names 85% of the time. A year later, UW researchers found that hiring managers mirror their AI models’ biases.

Women and people of color face pressures to assimilate and code-switch – like using a race- and gender-neutral name on a resume – before they even enter the office. Once they’re hired, it’s about finding the right people for support, said Cynthia Tee, a longtime engineering leader and computer scientist.

Tee suggests more industry leaders adopt a sponsorship model, which demands greater intention – and carries more tangible risk and cost – than typical workplace allyship.

“Keep insisting on promoting people who deserve it,” Tee said during a panel about navigating workplace dynamics. “Keep bringing more diverse people through your hiring pipelines. Keep bringing up people whose voices are not heard.”

The AI conversation is for everyone

There can be a confidence barrier to understanding or using AI, partially due to the industry’s “black box” design. Nandita Krishnan, a data scientist at Adobe who builds apps on the side, suggests setting time aside every week to read up on the latest news and experiment with automating daily tasks. 

“If you’re vibe coding, do it in a manner that makes the software still secure,” she said at the panel with Edelstein and Martin. “When you’re building out AI systems, it’s very prone to hallucinate. Add something to ground the LLMs, and give your agent this fact or database of knowledge to make sure it does not derail.” 
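Krishnan’s advice – giving an agent a fact base to keep it from derailing – can be sketched in a few lines. The fact store, keyword retrieval, and prompt format below are illustrative assumptions, not any specific product’s API:

```python
# Minimal sketch of "grounding" an LLM prompt with a small fact store,
# in the spirit of Krishnan's advice. The facts and prompt wording are
# hypothetical examples.

FACT_STORE = {
    "return_policy": "Items may be returned within 30 days with a receipt.",
    "support_hours": "Support is available 9am-5pm PT, Monday through Friday.",
}

def retrieve_facts(question: str) -> list[str]:
    """Naive keyword retrieval: return facts whose key terms appear in the question."""
    hits = []
    for key, fact in FACT_STORE.items():
        if any(word in question.lower() for word in key.split("_")):
            hits.append(fact)
    return hits

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved facts so the model answers from known data,
    reducing the chance it invents ("hallucinates") an answer."""
    facts = retrieve_facts(question)
    context = "\n".join(f"- {fact}" for fact in facts) or "- (no matching facts)"
    return (
        "Answer using ONLY the facts below. If the facts don't cover the "
        "question, say you don't know.\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt("What is your return policy?")
print(prompt)
```

In practice the keyword lookup would be replaced by retrieval over a real database or document index, but the shape is the same: the model is instructed to answer only from supplied facts rather than from memory.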

Participation in AI decision-making isn’t limited to technical expertise. Edelstein suggests establishing personal values around AI – in areas like education, healthcare and the environment – and seeking out industry leaders or companies whose approach aligns with them.

Many workers are learning AI out of fear of being left behind, she added, but curiosity leads to better outcomes. 

“If we can shift a lot of the perceptions around AI,” she said, “that is the first step to bringing more people into the conversation.”
