Broad top-down mandates to use AI fail because they’re too vague to act on, while unmanaged employee experimentation can expose sensitive data to unauthorized parties. Molly Lebowitz of Propeller and Anthony Prestia of TerraTrue argue that successful AI adoption requires identifying bona fide use cases and establishing clear human checkpoints — and making it easier for employees to experiment safely rather than trying to shut down experimentation.
Generative AI has moved out of specialist teams and into everyday work, with adoption now spanning finance, marketing, product, operations and people teams. Employees encounter large language models not only through their personal ChatGPT or Claude accounts but also through AI features embedded in the business software they already rely on for email, collaboration and HR.
As usage spreads across the enterprise, pressure for quick results follows close behind. In many cases, AI platform adoption happens without shared intent, clear ownership or alignment to real work.
Adoption is often driven from two directions. From the top, broad mandates tell people to “use AI” in hopes of creating value, whether by reducing cost, improving efficiency or increasing output. From the bottom, employees experiment with personal LLM accounts and AI-powered features inside sanctioned tools. Each scenario introduces new privacy and security risks, burdensome compliance reviews and employee concerns about what AI adoption will mean for their jobs.
Both approaches can fail for the same reason: lack of intentional design.
Successful AI adoption depends less on the sophistication of the models than on the intentionality of the approach. Organizations need to be deliberate about where large language models create meaningful value today and align safeguards to the risk and impact of each use case. Equally critical is engaging employees in that effort by clearly explaining changes, providing approved tools, sharing concrete examples and listening to the people closest to the work.
When these elements are missing, adoption stalls or introduces risk without return. In practice, secure AI adoption at scale is a leadership and change management challenge, not a purely technical one.
In the full article, Molly and Anthony explore:
- Why broad AI mandates often fail to produce real business outcomes
- How leaders can identify credible AI use cases today
- Practical ways to manage shadow AI and data privacy risks
- The role of human checkpoints in responsible AI adoption
- Why scaling AI successfully is ultimately a leadership and change management challenge
Read the full article on Corporate Compliance Insights, co-written by Molly Lebowitz, Vice President of Industries at Propeller, and Anthony Prestia, Vice President of Privacy at TerraTrue, a data privacy platform.