Algorithms are ubiquitous in digital platforms and now shape much of the online experience of children and adolescents. Although designed to optimize user interactions, they often perpetuate biases that expose youth to harmful stereotypes, reinforce echo chambers, and limit access to diverse perspectives and opportunities. These biases pose critical challenges to digital safety: by shaping content recommendations, social interactions, and targeted advertising, they influence young users’ perceptions, behaviors, and mental wellbeing. This presentation examines the implications of algorithmic bias for the digital lives of youth, situating the discussion within the broader discourse on equity and inclusivity in technology. Drawing on empirical studies and interdisciplinary insights, it highlights the mechanisms through which algorithmic systems replicate and amplify societal biases. Key areas of concern include biased search algorithms, discriminatory content curation, and inequities in data-driven decision-making that disproportionately affect marginalized groups. To address these challenges, the presentation advocates a multi-pronged approach: fostering algorithmic literacy among youth so they can engage critically with digital content; designing algorithms ethically and inclusively; ensuring transparency in algorithmic decision-making; and building stakeholder collaboration toward responsible governance of digital technologies. By advancing these strategies, the presentation aims to contribute to safer and more equitable digital ecosystems that uphold diversity, inclusivity, and youth empowerment. The discussion invites educators, technologists, and policymakers to work together on the pressing issue of algorithmic bias in safeguarding the digital wellbeing of future generations.