The license is not the finish line. Here's what we're actually tracking.
Most knowledge workers have tried AI. A much smaller number have built it into the actual rhythm of how they work. The gap between "tried it" and "changed how I work" is where capability building happens. Capability is not the same as access. Not even close. That's where most organizations are stuck.
By the time your L&D team builds a course on a tool, the tool has changed. The organizations getting ahead aren't building better training programs. They're building better learning cultures. Places where trying, failing, and trying again is how learning works.
That gap isn't getting smaller. And it isn't technical. It's judgment. People don't know which tasks AI is good for. They don't know when to trust it. They don't know when to push back.
Search is changing. AI systems are increasingly the front door to how clients and customers find vendors, firms, and partners. Organizations that don't understand this are building beautiful rooms nobody walks into.
Organizations build a real AI workflow, something that saves time, and then route it through a review process designed for human-generated output. The workflow ends up slower than the process it replaced. Nobody uses it. The ROI case collapses. This isn't an AI problem. It's a process design problem. But it shows up in every pilot debrief we've seen.
Three friction points we're watching.
AI transcription tools solve the notes problem. They don't solve the meeting problem. The meeting problem is a decision problem: who's in the room, and what authority they carry when they leave. That's a design problem, not a software one.
Most knowledge workers aren't drowning in tasks. They're drowning in information they can't convert into a decision. AI is genuinely good at synthesis. But only if you know what question you're asking. The bottleneck moved from finding information to knowing what to do with it.
When AI saves someone two hours a week, what happens to those two hours? In most organizations: more work. The capacity fills. Capability never compounds. The exhaustion stays. This is a leadership question, not a tool question.
Organizations design a space, then use it differently six months later. The plan doesn't survive contact with the people. TracE exists because of that problem. Space utilization data told a completely different story than the floor plan assumed. If you don't know how your space is being used, you can't design it well. The same logic applies to AI: if you don't know how your people are actually using it, you can't build the conditions for it to work.
GEO started with a different friction: most organizations design for collaboration but schedule for focus. The physical environment and the work patterns were pulling in opposite directions. GEO tries to close that gap. The right question isn't "does this space work?" It's "what kind of work is this space asking people to do?"
The cohort program is our answer to the capability problem. Not tool access. Every organization has that. The bet is that if you give people a structured way to try, fail, reflect, and try again alongside colleagues doing the same thing, something compounds. Not just skill. Permission.
How would you describe your relationship with AI right now?

If you're skeptical: Good. The skeptics are usually right about something. They just don't know what yet. Start here: pick one task your team does repeatedly that involves a first draft of anything. A summary, a proposal section, a status update. Run it through AI once before your next meeting. Don't evaluate the tool. Evaluate whether the output changes the conversation. That's a leadership experiment, not a commitment.

If you've tried it and feel stuck: The stuck feeling is usually a question problem. You're not sure what to ask. Try this: describe a recent moment of friction. Something slow, something repetitive, something you dreaded. That description is your first prompt. Paste it into an AI tool and see what it does with it.

If you're already using it: The next level is judgment, not volume. Ask yourself: what am I using AI for where I consistently have to heavily edit or verify the output? That's the signal. Either the prompt needs work, or this is a task AI isn't suited for. Learning to tell the difference is the skill.
What would change about how you work if AI was genuinely good at the thing that slows you down most?