AI Toolkit

This resource is for the MillerKnoll dealer network.

AI and the Workplace

Everyone has access.
Not everyone has capability.
That's the gap.

The license is not the finish line. Here's what we're actually tracking.

AI has a footprint. This toolkit helps you use it deliberately.

Sustainability Toolkit →

What We're Seeing

Five signals worth tracking.

Not a comprehensive list. The ones that actually matter for work.

1

Adoption is high. Daily use is not.

Most knowledge workers have tried AI. A much smaller number have built it into the actual rhythm of how they work. The gap between "tried it" and "changed how I work" is where capability building happens. That's where most organizations are stuck.

2

AI is moving faster than training programs can chase it.

By the time your L&D team builds a course on a tool, the tool has changed. The organizations getting ahead aren't building better training programs. They're building better learning cultures. Places where trying, failing, and trying again is how learning works.

3

The skills gap is widening.

0%
Workers who say they lack the skills to use AI effectively.
— Charter, 2024

That number isn't getting smaller. The gap isn't technical. It's judgment. People don't know which tasks AI is good for. They don't know when to trust it. They don't know when to push back.

4

Generative Engine Optimization is changing how organizations get found.

Search is changing. AI systems are increasingly the front door to how clients and customers find vendors, firms, and partners. Organizations that don't understand this are building beautiful rooms nobody walks into.

5

The approval layer is where AI goes to die.

Organizations build a real AI workflow, something that saves time, and then route it through a review process designed for human-generated output. The workflow ends up slower than the process it replaced. Nobody uses it. The ROI case collapses. This isn't an AI problem. It's a process design problem. But it shows up in every pilot debrief we've seen.

What This Means for Work Design

MillerKnoll makes the places where people work. That means when work changes, we pay attention.

Three friction points we're watching.

Friction 1

Meeting overload.

AI transcription tools solve the notes problem. They don't solve the meeting problem. The meeting problem is a decision problem: who's in the room, what authority they have when they leave. That's a design problem, not a software one.

Friction 2

Synthesis debt.

Most knowledge workers aren't drowning in tasks. They're drowning in information they can't convert into a decision. AI is genuinely good at synthesis. But only if you know what question you're asking. The bottleneck moved from finding information to knowing what to do with it.

Friction 3

Time savings without intention.

When AI saves someone two hours a week, what happens to those two hours? In most organizations: more work. The capacity fills. Capability never compounds. The exhaustion stays. This is a leadership question, not a tool question.

MillerKnoll in the Room

What we keep running into.

Organizations design a space, then use it differently six months later. The plan doesn't survive contact with the people. TracE exists because of that problem. Space utilization data told a completely different story than the floor plan assumed. If you don't know how your space is being used, you can't design it well. The same logic applies to AI: if you don't know how your people are actually using it, you can't build the conditions for it to work.

GEO started with a different friction: most organizations design for collaboration but schedule for focus. The physical environment and the work patterns were pulling in opposite directions. GEO tries to close that gap. The right question isn't "does this space work?" It's "what kind of work is this space asking people to do?"

The cohort program is our answer to the capability problem. Not tool access. Every organization has that. The bet is that if you give people a structured way to try, fail, reflect, and try again alongside colleagues doing the same thing, something compounds. Not just skill. Permission.

Bad outputs aren't failure. They're the curriculum.

The Reading List

Ten things worth reading right now.

Updated monthly. Each one gets one sentence: what it is, why it matters.

→ Items coming soon. Harvey updates this list monthly.

Where to Start

Pick your path.

How would you describe your relationship with AI right now?

The Skeptic's Path

Good. The skeptics are usually right about something. They just don't know what yet.

Start here: pick one task your team does repeatedly that involves a first draft of anything. A summary, a proposal section, a status update. Run it through AI once before your next meeting. Don't evaluate the tool. Evaluate whether the output changes the conversation. That's a leadership experiment, not a commitment.

  • Pick one task. Run it once.
  • Note what you had to fix. That's your calibration data.
  • Come back when you've done it three times.

Curious but Stuck

The stuck feeling is usually a question problem. You're not sure what to ask.

Try this: describe a recent moment of friction. Something slow, something repetitive, something you dreaded. That description is your first prompt. Paste it into an AI tool and see what it does with it. If you'd rather script it, there's a minimal sketch after the list below.

  • Name the friction. Write it in one sentence.
  • Paste it as a prompt. Don't overthink the format.
  • Edit what comes back. The editing is the learning.
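
If someone on your team would rather run this as a script than in a chat window, the pattern is identical. Below is a minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in your environment; the model name and the friction sentence are illustrative placeholders, not recommendations.

    # A minimal sketch of the friction-as-prompt exercise. Assumes the OpenAI
    # Python SDK (pip install openai) and an OPENAI_API_KEY environment variable.
    # The model name and the friction sentence are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    # Name the friction in one sentence. That sentence is the whole prompt.
    friction = (
        "Every Monday I spend an hour turning last week's project notes "
        "into a status update nobody reads past the first paragraph."
    )

    # Paste it as a prompt. No special format, no extra instructions.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works here
        messages=[{"role": "user", "content": friction}],
    )

    # Edit what comes back. The editing is the learning.
    print(response.choices[0].message.content)

The point isn't the script. It's that a plain one-sentence friction description needs no prompt engineering to produce something worth editing.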

Going Deeper

The next level is judgment, not volume.

Ask yourself: what am I using AI for where I consistently have to heavily edit or verify the output? That's the signal. Either the prompt needs work, or this is a task AI isn't suited for. Learning to tell the difference is the skill.

  • Audit your last 10 AI tasks. Which ones required heavy editing?
  • Pick one. Rewrite the prompt from scratch.
  • Track whether the output-to-edit ratio improves. One rough way to score it is sketched below.
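
To make that audit more than a gut check, score each task by how much of the AI draft survived your edits. Below is a minimal sketch using Python's standard difflib; it assumes you saved each AI draft and the version you actually shipped as plain-text files, and the file names and the 0.8 threshold are illustrative, not official guidance.

    # Rough output-to-edit scoring: compare each AI draft to the shipped version.
    # Standard library only. File names and the 0.8 threshold are illustrative.
    import difflib

    def survival_ratio(draft: str, final: str) -> float:
        """0..1 similarity between the AI draft and the shipped version.
        Higher means less of the draft had to be rewritten."""
        return difflib.SequenceMatcher(None, draft, final).ratio()

    tasks = [
        ("meeting summary", "summary_draft.txt", "summary_final.txt"),
        ("proposal section", "proposal_draft.txt", "proposal_final.txt"),
    ]

    for name, draft_path, final_path in tasks:
        with open(draft_path) as d, open(final_path) as f:
            score = survival_ratio(d.read(), f.read())
        verdict = "holding up" if score >= 0.8 else "heavy editing: rework the prompt or retire the task"
        print(f"{name}: {score:.2f} ({verdict})")

The score itself matters less than the trend. If the same task keeps scoring low after a prompt rewrite, that's your signal it may not be an AI task.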

What would change about how you work if AI was genuinely good at the thing that slows you down most?