This article originally featured in MediaCat Magazine.
Right now the AI scene is buzzing with potential
Businesses are aggressively racing to incorporate AI into their operations, heralding a new era of efficiency. But this relentless pursuit of innovation risks severing the connection to what matters most: the people.

OpenAI has just unveiled its latest development, Sora, a text-to-video tool that generates hyper-realistic footage from simple text prompts. While many celebrate this long-awaited leap in AI, many are also mourning the 'death' of something else: trust. The dark underside of innovative AI is that it erodes trust in the life behind the art.

Take the rise of AI influencers on social media, for example. These pixel-perfect, algorithmically generated personalities are raking in cash for brands, with no sleep or coffee breaks needed. Meanwhile, real-life influencers are left competing against tireless, ever-perfect virtual counterparts.
There’s a term I recently discovered, FOBO: fear of becoming obsolete
With AI technology on the rise, staff are increasingly worried about their roles becoming obsolete. This fear is twofold: the very practical loss of a role (one that, studies show, is most likely to belong to someone from an already marginalised community) and a wider fear of being left behind. The mass adoption of AI demands a level of digital literacy and agility that promises success to those quick to adopt, and threatens to leave behind those who struggle to keep up with the pace of change. Implementing AI across a business is actually less about the technology itself and more about the approach and strategy behind its use.
Is AI being developed to rid people of their jobs, or to enhance their day-to-day work and open up space for innovative thinking? Are we designing AI tools to dismantle bias and mitigate harm, or is the pace of development glossing over key ethical questions? Are we leveraging AI tools to uplift marginalised groups in the workplace, or are we pushing people out? And crucially, are we equipping our teams with the skills to navigate this new AI-enhanced landscape, or are we leaving them in the dark?

A Diversity, Equity and Inclusion (DEI) lens is acutely needed across all AI strategies, yet it is missing from many conversations around AI, even in the ethical AI space. DEI must be a foundational principle if we are to embed AI systems that put people first and to build trust that AI can benefit all. Applying a DEI lens means prioritising equity, fairness and inclusivity from the get-go.
It means centring marginalised communities in AI's design and deployment, tailoring training programmes so that all employees can upskill in AI, and committing to equitable outcomes through AI usage, even if that means taking a hit to the bottom line. In the world of AI video and influencer marketing, that's not likely to mean removing AI video and influencers altogether. But it might mean ensuring rigorous control over potentially harmful content created by these tools. It could mean working with minority-led businesses to shape marketing strategies, or ring-fencing budgets to commit to working with human influencers and creative strategists.
Embracing bold technologies demands equally bold strategies
With AI development outpacing regulatory frameworks, businesses must take the lead in crafting people-centred AI strategies that leave no one behind. AI has the potential to be the ultimate leveller: democratising access to information and resources, automating routine tasks and making room for innovation. Anchored in a commitment to fair outcomes, AI can become a pivotal force for equality.
But realising this promise depends on businesses’ collective commitment to addressing not just the technological possibilities, but the human implications, so that they don’t lose sight of what truly matters in their quest for relevance.