When ChatGPT burst onto the market this year, one thing immediately became clear: generative artificial intelligence (AI) and other AI technologies are major disruptors of how people work and how businesses operate. According to IBM’s 2022 Global AI Adoption Index, 35% of businesses report using AI, and an additional 42% are exploring AI use. For leaders and businesses, it’s no longer a question of whether to adopt AI technologies but how.
The businesses at the forefront of this revolution are adopting AI with both speed and care. Speed is necessary to remain competitive: preliminary evidence suggests that integrating AI technologies into the way we work can expand creativity, improve data analysis, and automate certain tasks in ways that boost employee productivity. Care is necessary to safeguard data and privacy, correct for the “hallucination” of facts and references, and mitigate bias.
Deciding which AI technologies to adopt, and how quickly to implement them, can be challenging. In developing an AI strategy, we’ve found three considerations that can help businesses capitalize on new opportunities while moving judiciously:
- Build on your strengths
- Support workforce adoption
- Navigate risks wisely
Here is how Propeller is integrating these ideas internally and with our clients.
# 1. Start by investing in AI tech that accelerates what’s already working at your company.
When ChatGPT was first released, people caught up in the excitement were eager to ask: “What can we build around AI?”
But for business leaders looking to build business lines that last, a wiser question might be: “What are our existing strengths and opportunities, and how can AI help accelerate our path?”
At Propeller, for example, our strength is hiring and growing management consultants who help businesses thrive through change. So, when we considered how to leverage AI, we knew we wanted to use AI to up-level the work our consultants were already doing for clients.
We are targeting investment toward building foundational AI literacy for all our employees and then toward up-leveling consultants who specialize in areas most likely to be early adopters of AI. We are also considering improvements for our business operations with AI tools that integrate with our existing technology stack.
Other companies will need different approaches, depending on their competitive strengths. When advising a big tech client recently on their AI strategy and activation, for example, we emphasized opportunities to reevaluate existing product roadmaps for AI enhancements. We also helped them pilot AI code-writing tools within existing engineering teams. In other words, we focused on how AI could build on their strengths to further the competitive advantage of their core offerings.
Related Content: Driving AI Tech Opportunity Management for a Global Tech Company
# 2. Support your workforce in adopting the new capabilities AI tech demands.
When people first got access to calculators, they didn’t stop doing math. Instead, they had to learn new skills for solving the old problems and to discern what new kinds of problems they could now tackle.
Similarly, employees will need to learn how to do their current job more effectively by leveraging AI.
It may be tempting to dismiss the learning curve for AI tech as inconsequential for all but those directly involved in programming; however, AI is likely to become ubiquitous, and those who lack the skills to access its full potential will likely fall behind their peers. Just as “digital literacy” is essential for employees to perform well in their jobs today, inclusive AI literacy will be essential in the future. According to a study in Harvard Business Review, for 80% of U.S. workers, at least 10% of their tasks could be done faster by an AI without compromising quality; for 19% of workers, that figure is as high as 50%.
At Propeller, we are supporting our workforce by building a learning curriculum that blends synchronous and on-demand content. We aim to help all our employees learn to prompt generative AIs effectively and to collaborate with these technologies as we deliver for our clients. We believe this will maximize the advantage we get from these technologies and, in turn, the value we deliver to clients. It will also ensure our consultants and directors have hands-on experience with the technologies we advise clients on and partner with them to deploy. While the focus of our learning curriculum is prompt engineering (how to interact with an AI bot to generate more impactful responses), we also cover capabilities, responsible design and use, limitations, governance, and risk.
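To make “prompt engineering” concrete, here is a minimal sketch of the kind of structured prompt we mean, written against the OpenAI Python client (openai >= 1.0). The model name and the research-assistant scenario are illustrative assumptions, not our actual curriculum or tooling; the teachable levers are the role framing, the grounding instruction, the explicit output format, and the low temperature.

```python
# Minimal prompt-engineering sketch using the OpenAI Python client (openai >= 1.0).
# The model name and scenario are illustrative assumptions, not Propeller tooling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

meeting_notes = "Vendor A missed two delivery dates. Support backlog doubled in Q3."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    temperature=0.2,      # lower temperature favors focused, repeatable output
    messages=[
        {
            # Role framing: tell the model who it is and how to behave.
            "role": "system",
            "content": (
                "You are a management-consulting research assistant. "
                "Answer only from the material provided; if the material is "
                "insufficient, say so rather than guessing."
            ),
        },
        {
            # Task framing: explicit output format plus the grounding material.
            "role": "user",
            "content": (
                "List the biggest operational risks in the notes below as a "
                f"numbered list, one sentence each.\n\nNOTES:\n{meeting_notes}"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Compared with pasting the notes in and asking “thoughts?”, this structure constrains the model to the provided material and a predictable format, which is exactly the kind of skill that transfers across tools.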
Related Content: Making Big Organizational and Tech Transformations? Don’t Forget About Your People.
Various industries and companies may need to pursue a blend of externally provided and internally customized content. A customer service team using chatbots, for example, may need to understand both how AI tools can enhance the customer experience and where those tools are limited in language comprehension. Such a team would need a learning program that pairs off-the-shelf content on the AI product they are using with internally customized content on how to apply that product to different use cases in accordance with company values and policies.
# 3. Deliberate the risks and apply mitigations before taking action.
While many business leaders understand there are risks associated with deploying AI technologies, they may not know how those risks compare to the risks of other software platforms. The fact is, AI technologies may expose businesses to risks at a level and magnitude well beyond what they’ve seen in the past.
AI technologies can analyze large datasets in novel ways in seconds. While this can be powerful for businesses looking for patterns or key insights about customers, operations, and more, it also means that access to information needs to be considered critically.
Similarly, an AI with vast data access could identify patterns or draw conclusions that violate individual privacy or compromise proprietary information in ways we haven’t seen previously. Companies will need to update their data governance policies and processes to account for how users and AI technologies will interact with their data. In some cases, it may even be appropriate to use modern tools like data clean rooms to analyze how an AI tool interacts with company or user data and how it solves problems.
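As one concrete illustration of what updating those processes can look like, here is a minimal sketch of a redaction gate that scrubs obvious personal identifiers before any text leaves the company for an external AI service. The patterns and the send_to_ai stand-in are hypothetical and deliberately simple; a real control would pair this with access restrictions, vendor terms, and audit logging.

```python
# Illustrative sketch only: a pattern-based redaction gate that scrubs obvious
# personal identifiers before text is sent to an external AI vendor. The
# patterns below are deliberately simple examples, not a complete PII policy.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_to_ai(prompt: str) -> str:
    # Hypothetical stand-in for a call to an external AI API.
    return f"(model response to: {prompt!r})"

raw = "Customer Jane Roe (jane.roe@example.com, 555-867-5309) reports a billing error."
print(send_to_ai(redact(raw)))
# The vendor sees "[EMAIL REDACTED]" and "[PHONE REDACTED]", not the raw identifiers.
```

The same gate pattern extends to account numbers, client names, or any identifier a data governance review flags as off-limits for external processing.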
In addition, there are not yet comprehensive regulations around how AI vendors use the information input into their systems, though the Executive Order President Biden signed last month paves the way for new testing and safety standards. In the meantime, companies using AI technologies are at risk of unintentionally releasing proprietary or protected data. The White House AI guidance on data privacy opens with, “You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.” While this “Blueprint for an AI Bill of Rights” is not enforceable, it provides a useful benchmark against which companies can vet potential AI systems, opening areas for vendor negotiation and for internal risk acceptance or mitigation. Without comprehensive diligence up front, organizations could face serious ethical, customer, and legal troubles.
Another risk is that AI technologies can produce biased or inaccurate information, including “hallucinated” references and facts that can appear real but are not. A recent study found AI chatbots invent information at least 3% of the time, and some as much as 27% of the time. Employees need to be trained on these technological weaknesses and how to notice them when they occur. Additionally, companies have a moral, if not legal, responsibility to perform their own preemptive and ongoing testing (enlisting technical, ethical, and legal experts) to probe for potential biases, inaccuracies, and misuse.
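What “preemptive and ongoing testing” looks like will vary, but a minimal version is a regression suite of questions with known-correct facts, run against the model on a schedule, with any answer that omits the reference fact routed to human review. The sketch below illustrates the shape of such a check; the ask_model stub is a hypothetical stand-in for a real model call, and genuine evaluations need far larger, more adversarial test sets built with the experts mentioned above.

```python
# Toy sketch of an ongoing accuracy check for an AI assistant: a small set of
# questions with known-correct facts, flagging any answer that omits them.
# `ask_model` is a hypothetical stand-in for a real model call.
from typing import Callable

# Each case pairs a question with a substring the answer must contain.
REGRESSION_CASES = [
    ("What year was the Blueprint for an AI Bill of Rights published?", "2022"),
    ("How many days are in a leap year?", "366"),
]

def audit(ask_model: Callable[[str], str]) -> list[str]:
    """Run every case and return the questions whose answers failed the check."""
    failures = []
    for question, required_fact in REGRESSION_CASES:
        answer = ask_model(question)
        if required_fact not in answer:
            failures.append(question)  # route to human review
    return failures

# Demo with a deterministic stub so the sketch runs without any API access.
canned = {
    "What year was the Blueprint for an AI Bill of Rights published?":
        "It was published in October 2022.",
    "How many days are in a leap year?":
        "A leap year has 365 days.",  # deliberately wrong, to show flagging
}
flagged = audit(lambda q: canned[q])
print(f"{len(flagged)} of {len(REGRESSION_CASES)} answers flagged:", flagged)
```

Running a suite like this before deployment, and re-running it whenever the vendor updates the model, turns “watch out for hallucinations” from advice into a process.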
Companies that provide transparency on how they approach decisions around risk and bias, for example by publishing an ethical statement and providing plain-language disclosures, can help assure clients and customers that their interests are being protected. At Propeller, we are releasing an AI Statement of Ethics this year and updating our Employee Handbook with new policies on the use of AI technologies.
# Developing an AI Strategy: Bringing It All Together
Part of what makes developing an AI strategy so complicated for businesses is that it requires inputs from many different parts of the organization, including Tech, HR, Legal, Product, and more. At Propeller, we recently supported a cross-functional Task Force to jumpstart our AI strategy. Similarly, we recently supported a big tech client in creating a strategy, a prioritized roadmap, and the foundation for a new AI Center of Excellence. Standing up a temporary entity to organize around AI can be an effective way to bring together different parts of a company and deliberate on the implications of AI from varying perspectives, which can lead to more innovative ideas and help surface potential impacts.
In some ways, AI technologies are a revolution unlike anything we’ve seen before; in other ways, they are an opportunity for business leaders to practice the tried-and-true fundamentals: move strategically, invest in technology and people together, and align business decision-making with customer benefits.