AI is here to stay

How can you learn more about embracing these tools for positive social change?

My journey with AI began when I was a child who loved building things. In 2001, while I was doing my A-levels, my dad took me to a university open day. I was captivated by a robotics demo and decided then that I wanted to study Artificial Intelligence and Computer Science. When I started my degree in 2002, hardly anyone knew what AI was, and I was one of only four women on the course. It was challenging, but I was determined.

From the very beginning I was passionate about creating AI tools that could make a real difference in people’s lives. I imagined medical diagnostics powered by AI, search and rescue robots for disaster zones, and robots exploring space. I was also aware of how bias in data could create skewed algorithms, and as a social justice activist I wanted to tackle this problem head-on.

When I graduated in 2006, AI was still mostly an academic pursuit with innovation happening mainly in America and Japan. Jobs in the UK were scarce, so I became a software developer. Over time I grew disillusioned with how AI was being used mainly to drive sales or social media engagement. I longed to build something with real social impact. In 2019, working with an oncologist, I began developing an AI tool to improve pathways for lung cancer diagnosis and treatment.

Since then AI has exploded with the rise of Large Language Models and Generative AI. But the problems I first noticed as a student have only multiplied. Biased algorithms continue to cause real harm, and we have also seen new issues emerge such as environmental damage, workforce exploitation and the displacement of communities. A lack of diversity in AI development only deepens these risks.

For me, abstaining from AI is not the answer. If marginalised communities step away, the technology will continue to evolve without us, and the harms will grow. Instead I believe in building AI for good. I am inspired by the ways AI can support people with disabilities, assist neurodivergent users, and create the kinds of life-changing tools I dreamt about years ago. Earlier this year I was invited to a parliamentary roundtable on AI and ethics, focusing on the experiences of marginalised groups. That conversation encouraged me to expand beyond healthcare. I developed Beyond the Colonial Mind, a GPT that educates people about racism while centring Black and Brown voices, and my team is also working on a new tool to help psychotherapy trainees engage more ethically with racialised clients.

I believe ethical AI is something we can bring into WRKWLL’s work. It aligns naturally with WRKWLL’s philosophy of offering thoughtful, assistive tools that help clients engage with complex conversations about race, gender, sexuality, disability and more. AI can also support WRKWLL’s values in other ways: collaborative tools that turn workshop conversations into shared insights, equitable tools that reveal hidden patterns of exclusion, explorative tools that let clients test new ideas and perspectives, impactful tools that measure and visualise change, and joyful tools that help teams celebrate progress and collective wins. For me, AI is not about replacing people. It is about equipping us with tools that amplify fairness, imagination and connection.


If you would like to know more about my work, you can find it here:
Femma Ashraf
Astronomical AI website
Beyond the Colonial Mind GPT
Parliamentary Round Table Blog
Being a Brown Female Founder (five-part blog series)

Femma Ashraf

Specialist in AI, research, software development, and medtech, bridging technical innovation with culturally attuned mental health practices.