
National AI Day: 7 Thought Leaders On Making the AI Industry More Equitable

by The Techronicler Team

National AI Day is the perfect occasion for a yearly check-in: a day to celebrate how far artificial intelligence has come and to think about where it’s headed. 

It’s not just about cool new tech breakthroughs—it’s a chance to ask: Is AI working for everyone? 

This day serves as a notable benchmark for progress, pushing us to make sure AI’s benefits reach all corners of the globe. 

AI’s changing everything—economies, healthcare, education, you name it—but if we don’t include diverse voices in building it, we’re just doubling down on old inequalities. That’s a problem. 

If AI only reflects a narrow slice of the world, it leaves too many people behind and hands too much power to a few. 

By focusing on equity, we can tap into AI’s potential to tackle big issues—like climate justice, teaching in multiple languages, or making healthcare accessible to all. 

In this piece, the Techronicler team is proud to bring together seven young, diverse, and brilliant thought leaders to share their perspectives on a core question we should all remember to ask more often: 

How do we make the AI industry more equitable? 

Their reflections reveal something deeper than use cases and scenarios. 

Read on!

As a designer from India now shaping global platforms in California, I’ve seen how AI can transform lives—from empowering farmers with data to amplifying voices through analytics. 

But its potential hinges on one truth: AI’s only as good as the people building it. 

If the creators of AI don’t reflect the world we live in and respect its diversity, the systems they design bake exclusion into the code, no matter how advanced the tech.

Bias isn’t just a technical glitch; it’s an experience that hits hard. In my work on global tools, I’ve seen how a wrong assumption—about language, device, or context—can alienate millions and render a product irrelevant to them.

AI must be interrogated, not blindly trusted. 

I use AI to stress-test designs, ensuring they work across languages, reading levels, and accessibility needs. We must keep asking: Who’s missing from this dataset? Whose story isn’t told? If we don’t, we risk amplifying harm instead of opportunity.

Equity? It’s not just about handing out opportunity. It demands action beyond that. Open-source models are a start, but real change means investing in community-driven projects, designing for regions with shaky internet, and focusing on use cases like multilingual education or rural governance. 

My experience founding a startup for farmers showed me that innovation must meet people where they are—not where tech businesses assume they should be.

As a mentor to underrepresented designers, I’ve learned that equity begins early, and with exposure. It’s not just about teaching how to use AI, but how to challenge it. Through my children’s books rooted in Kodava culture, I try to plant these seeds early, showing young folks how their stories belong in tech’s future.

Design is where equity takes shape. Every choice we make—defaults, edge cases, strategy—either builds trust or erodes it. AI should help us empathize, not automate exclusion. As designers, we hold the power to make inclusion visible, ensuring AI serves not just the privileged few, but the world we all share. 

Let’s build with intention, inclusion, and empathy, together!

About Tej Kalianda:
Tej Kalianda is a UX designer with 15 years of experience shaping products across enterprise, consumer, and AI-powered platforms. With an academic background spanning environmental engineering and design, she brings a systems-thinking approach to her work, infused with mindfulness, sustainability, and social impact. For nearly a decade, Tej has been part of the Silicon Valley tech ecosystem, working at the intersection of emerging technologies and real human needs. She’s currently at Google, designing for Search at scale, and has previously taken multiple products from 0 to 1 at Citrix and PayPal. Deeply curious about how AI can enhance the nuances of human experience, she’s especially interested in the role of interaction design in building more inclusive and responsible AI systems.

There need to be tighter regulations targeting algorithmic bias (by race, gender, socioeconomic status, and more), whether in hiring, health diagnostics, social media image generation, or other areas.

As of now, a select number of countries and corporations dominate AI development, so democratizing access would be incredibly helpful, especially opening opportunities for marginalized communities and advocates to contribute to the design and applications.

We must also address the invisible laborers who label AI data, as these are typically underpaid workers in developing nations, and true equity would mean compensating them with higher wages and giving them more control over their data.

Transparency should be mandatory—what I like to think of as nutrition labels for AI—disclosing data origins, who processed the data, and known failure modes like facial recognition’s racial bias against darker skin tones. Companies must then commit to fixing these failures, not just acknowledging them.

About Beverlyn Tsai:
Beverlyn Tsai is a rising sophomore and a Presidential and Viterbi Scholar at the University of Southern California majoring in Computer Science and Business Administration with an AI Applications minor. She co-leads AthenaHacks, Southern California’s premier women-centric hackathon, supports corporate outreach for the Society of Women Engineers as an officer, and works as a Learning Assistant for an AI programming course. At USC Information Sciences Institute’s HUMANS Lab in the AI Department, Beverlyn leverages GPT-4o and OpenCV to detect AI images and identify superspreaders, and she applies web scraping, tweetNLP, and the Mann-Whitney U test to analyze emotional sentiment in AI versus non-AI political image tweets—research that is crucial to understanding how AI-generated political media influences public opinion, trust, and election integrity.

I think the AI industry has a long way to go when it comes to equity. So much of this technology is built without the people it impacts most at the table, especially Indigenous communities. 

I’d love to see more funding and training go directly to Indigenous people, especially women, to not just use AI but to actually build it. That means investing in community-led education, research, and infrastructure so these tools can actually reflect the languages, values, and priorities of our communities.

Personally, I’m working on projects that show how AI can support cultural and mental health, like using machine learning for Anishinaabemowin language prediction or VR for reconnecting youth to land. 

However, I’ve had to piece together resources on my own. 

I want to live in a world where Native girls can see themselves as engineers and builders of our digital future, not just users of someone else’s design.

About Madeline Gupta:
Madeline Gupta is a recent graduate from Yale University where she studied how digital tools can increase community wellness around the globe. Her most recent projects are a virtual reality video game focused on land re-creation for her tribal nation, the Sault Ste. Marie Tribe of Chippewa Indians, and a statistical exploration into how large language models can contribute to Indigenous language education and preservation. This fall, she is starting as a software engineer at Google. She has worked as an intern at Zillow, Apple, and Kode with Klossy, and her work has previously been featured by TEDx, NBC, and the United Nations.

One important step is to invest in educational programs, especially those that serve underrepresented and disadvantaged communities, that focus on AI literacy, development, and workforce-relevant skills. Equipping more people from diverse backgrounds with access to AI education is one of the most direct ways to close the equity gap in the AI industry. 

It’s also important to uplift professional communities like Rewriting the Code (RTC) that not only advocate for inclusive representation in tech, but also provide mentorship, training, and career support focused on AI and data-related roles. These types of communities help build belonging and access where it’s often missing. 

Equity must also be built into the design and training of AI systems. That means ensuring training datasets represent a wide range of populations and perspectives, not just a narrow or dominant few. The accountability piece is just as critical: humans still play an essential role in identifying and correcting biases within data and models. To do that well, however, the teams reviewing and testing these systems must also reflect a broad mix of lived experiences. 

Finally, it’s important to recognize that every team member, regardless of background, brings unique strengths to the development and responsible use of AI. An equitable AI industry isn’t just about who gets a seat at the table; it’s about valuing and leveraging each person’s insight to create better, safer, and more impactful technology.

About Angela Cao: Angela Cao is a Rewriting the Code (RTC) member based in Houston and a data scientist at Memorial Hermann Health Systems, where she leads high-impact AI and analytics projects to drive data-informed decisions in healthcare. She holds a Master of Data Science from Rice University and double Bachelor of Science degrees in Computer Science and Mathematics from the University of Texas at Austin. Angela is also a co-founder and board member of Women Who Do Data (W2D2) since its inception in 2024, where she leads initiatives to support and advance women and underrepresented minorities in Data and AI.

To build a more equitable AI industry, we need curated community programs that teach practical AI skills and expand access to the workforce. 

One of these communities I look up to is Rewriting the Code, which hosts curated professional development workshops on topics such as best practices for leveraging AI and what it takes to break into the AI industry.

At the same time, workplaces should openly discuss how AI can free up time for more meaningful, human-centered work. 

By combining external skill-building with internal culture shifts, we can create a more inclusive and empowering AI future.

About Monica Para: Monica Para is a tech content creator and an early career member of Rewriting The Code. She is very passionate about diversity and sharing accessible resources in the tech and startup sectors. Her project, ChiMaps, is an AI-powered map that highlights startup and venture capital firms across the Chicago tech ecosystem. She aims to make tech more inclusive and navigable for all through content, community, and data-driven tools.

I believe equity in the AI industry begins with access—to education, resources, and representation. Many talented individuals, especially those from underrepresented communities, are excluded due to systemic barriers in academia and industry. We should promote open-source tools, free online courses, and community-based mentorship programs in underserved regions.

Internships, conferences, and publishing opportunities are often gatekept by networks of privilege. As a grad student, I see the need for transparent selection processes, funded opportunities, and targeted outreach to ensure that diverse candidates can enter and thrive in AI careers.

About Chahana Dahal: Chahana Dahal is a Computer Science graduate with a Data Science minor from Westminster University, where she completed her degree in just three years. She was selected for the Google Computer Science Research Mentorship Program (CSRMP), which started her research journey in AI/ML. Her work on knowledge graph completion with RelatE is under review for NeurIPS 2025, and she is currently developing a Federated RAG framework using large language models. She also presented her independently proposed AI-powered education framework at AAAI 2024 and previously served as a Machine Learning Engineer at Omdena, contributing to adaptive AI tutors for refugee education. She plans to begin her graduate degree in ML in fall 2025.

The field of artificial intelligence (AI) is rapidly advancing, and with it, so is companies’ adoption of AI. This carries a high potential for increased inequities across various industries.

As AI is embraced in most industries, we need to focus on mitigating inequities and biases, and I believe the best way to do so is to take an interdisciplinary approach to AI.

Governments and organizations need to push for more stringent policies before integrating AI.

For example, a company creating an AI product geared towards kindergarten students should follow a mandatory policy whereby it has to pass various checks created by experts with both broad and niche specialties in fields such as early childhood education, child psychology, and social work before launching the product or making major changes to it.

The widespread use of AI is still in its infancy, and we have already seen drastic shifts in society, both positive and negative, but in order for the positives to outweigh the negatives, we have to take a holistic and robust approach to AI integration.

About Deneille Guiseppi:
Deneille Guiseppi is a software developer with a background in web development, data analysis, and software testing. She graduated from McGill University with a bachelor’s in Computer Science and a minor in Mathematics. Her pursuit to meld technology and social impact has led her to Public Invention, an organization that pursues humanitarian invention projects, and Apart Research, an AI safety lab. Through the Rewriting the Code network, she entered Apart Research’s Women in Safety Hackathon, and alongside her teammate Siya Singh, she is currently seeking support through Apart Research’s Studio to continue building SafeRLHub, an interactive educational resource on reinforcement learning for reasoning models and AI agents and their safety risks.

Our chief editor, Stanley Anto, shares his thoughts:

Let’s keep the conversation going. Make National AI Day more than just a day to geek out over AI’s latest tricks—let’s use it as a reminder to check ourselves and make sure this tech is lifting everyone up and enhancing more lives. 

The insights from these seven young thought leaders show us that equity in AI isn’t just a nice-to-have; it’s the key to unlocking a world where innovation doesn’t leave anyone behind. 

If we want AI to tackle the big stuff—climate justice, education for all, or healthcare that actually reaches everyone—we’ve got to keep pushing for diversity in who’s at the table, coding, designing, and dreaming up what’s next. 

Let’s take this energy from National AI Day and commit to building an AI future that’s as inclusive as it is groundbreaking, where every voice counts and every community thrives!

On behalf of the Techronicler community of readers, we thank these leaders and experts for taking the time to share valuable insights that stem from years of experience and in-depth expertise in their respective niches.

If you wish to showcase your experience and expertise, participate in industry-leading discussions, and add visibility and impact to your personal brand and business, get in touch with the Techronicler team to feature in our fast-growing publication. 
