Though Artificial Intelligence (AI) and Large Language Models (LLMs) have been evolving rapidly over the last two decades, 2023 has certainly been a landmark year, with an unprecedented number of everyday people accessing and leveraging these technologies.
For example, according to recent statistics, ChatGPT has over 100 million users, and 12.31% of them are in the United States. The disruption is here, and we are embracing it. We are also investing in it: the latest numbers show an uptick in AI deal counts, unicorn births, and $100M+ mega-rounds in Q2'23.
At MindMoves, we envision a sustainable future where organizations design and develop technology with greater purpose to benefit people, profits, and the planet. And we are not alone.
AI and LLMs are making the world more accessible.
AI can transform human health and potential! About 16% of the global population lives with a disability, yet accessibility is not always equitable. Last month, a paralyzed woman spoke for the first time in 18 years using AI technology developed at the University of California. In essence, a brain-computer interface decodes her brain signals and converts them into synthesized speech, which a digital avatar then speaks aloud while also recreating natural facial movements. (The video in the link is truly amazing!)
In the same month, ElevenLabs released Eleven Multilingual v2, a foundational AI speech model that can “automatically identify nearly 30 written languages and generate speech in them with an unprecedented level of authenticity.” This will make content dramatically more accessible across many industries. But think of what this means for educational institutions: they can use AI to instantly provide students with accurate audio content in nearly 30 languages.
But are AI and LLMs safe to use?
AI technology is developing faster than policy, but governments, researchers, and companies are taking conscious steps to ensure AI and LLMs are developed and used responsibly. Under President Biden's leadership, seven major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) formally committed to new standards for safety, security, and trust. Cornell University is exploring the potential of automatically correcting LLMs. Protect AI has raised $35 million to “help customers build a safer AI-powered world.” Finally, open-source models can help ensure AI safety through public oversight, much as it does for Wikipedia.
Clearly, leaders from across the globe are rising together to ensure that the good of AI outweighs the bad. We must all stay vigilant and ahead of the learning curve. As an individual, you can also help mitigate risks to businesses and society by applying Responsible AI principles.
Call to action.
One thing is certain: AI will disrupt the workforce, and we need humans in the loop and engaged in reskilling (acquiring new skills and, in some cases, using them to change occupations). A recent IBM study suggests that 40% of the workforce will need to reskill over the next three years as AI and automation are implemented. While entry-level employees may see the biggest shift, leaders and managers will also be affected. New roles in prompt engineering, algorithmic verification, AI safety, and democratized software development have already taken root (click here for a robust list of valuable resources). We are at an inflection point: the accelerated pace of technology requires accelerated change leadership.
MindMoves offers a comprehensive suite of AI and change management services designed to transform and elevate your business. We can help you craft an AI strategy that aligns with your business objectives, customize LLMs for your specific needs, manage complex AI projects with expert oversight, or augment your team with AI capabilities. Contact us to receive a complimentary 30-minute consultation. AI is here. How will you respond?