This toolkit is a starting point, not a verdict. If you're using AI (and you are), you're generating compute load. Some of it is necessary. Some of it isn't. The work is learning to tell the difference. Not using less. Using deliberately.
Not a guilt gauge; a compass. It shows where typical individual, team, and enterprise usage sits relative to common benchmarks. Ranges, not false precision.
The goal isn't to use less AI. It's to use it with the same intentionality you'd bring to any material decision.
See how AI is reshaping the workplace more broadly.
AI and the Workplace →

Most teams don't have a clear picture of what AI they're running, how often, or for what. That's not negligence. It's how tools get adopted. You install them, you use them, you don't inventory them.
But AI tools aren't free to run. They consume energy, and the consumption varies widely depending on which model you use, how often you run it, and where it runs.
Start here: map your AI touchpoints. Which tools does your team use? How often? For what types of tasks? This isn't a data science exercise. It's a ten-minute conversation with a piece of paper.
The most honest proxy for compute, where platforms expose it. More tokens = more compute = more energy. Imprecise. Still better than nothing.
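Where a platform doesn't expose token counts, a back-of-envelope estimate still beats nothing. The sketch below assumes roughly four characters per token for English prose, a common rule of thumb rather than an exact tokenizer, and the workflow figures are placeholders:

```python
def estimate_tokens(char_count: int) -> int:
    """Rough token count: ~4 characters per token for English prose.

    A rule of thumb, not a tokenizer -- use the platform's real
    usage numbers wherever they exist.
    """
    return max(1, char_count // 4)


# Placeholder workflow: 1,200-char prompts, 2,000-char responses,
# 50 requests a day, 22 working days a month.
per_request = estimate_tokens(1200) + estimate_tokens(2000)
monthly = per_request * 50 * 22
```

Even this crude math makes recurring workflows comparable: a daily-summary bot and a one-off research prompt stop looking like the same kind of usage.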
This is where most of the variance lives. Running a frontier model on a three-word email subject line isn't just a sustainability issue. It's a waste of compute and money. The right model for the right task is the highest-leverage decision most teams aren't making deliberately.
Best for: complex reasoning, multi-step synthesis, nuanced judgment calls. High cost per token.
Best for: drafting, summarizing, Q&A, most daily work tasks. Good quality, lower cost.
Best for: classification, extraction, structured data, simple single-turn tasks. Lowest compute cost.
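The tiering above can be made deliberate with a few lines of routing logic. A minimal sketch, where the tier names and the task taxonomy are illustrative, not any vendor's API:

```python
# Map task types to the cheapest tier that handles them well.
# Tier names and categories are assumptions for illustration.
TIER_FOR_TASK = {
    "classification": "small",
    "extraction": "small",
    "structured_data": "small",
    "drafting": "mid",
    "summarization": "mid",
    "qa": "mid",
    "multi_step_reasoning": "frontier",
    "nuanced_judgment": "frontier",
}


def pick_tier(task_type: str) -> str:
    """Route a task to a model tier.

    Unknown tasks default to the mid tier rather than escalating
    to the frontier tier by habit.
    """
    return TIER_FOR_TASK.get(task_type, "mid")
```

The design choice worth noting is the default: when in doubt, route down and escalate on failure, not the reverse.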
The cloud-versus-local split matters more as capable on-device models arrive. On-device is generally more efficient for simple tasks; cloud is necessary for complex ones. Knowing the difference is an emerging skill worth building now.
Underused as a frame. When AI replaces a process that had its own footprint: a business trip, a print run, a redundant approval cycle. That displacement counts. Name it. Track it.
GPT-4 for a subject line is like driving a semi truck to pick up a sandwich. Use a smaller model for simple, high-frequency tasks. Most platforms let you choose. Most people don't.
Before deploying a new AI workflow, name one thing it replaces. That replacement had a footprint too, and the net matters. Then ask: what model does this actually need? What's the frequency? If the answers point toward a high-compute, high-frequency workflow that doesn't replace anything, that's the conversation to have before you build, not after.
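Naming what a workflow replaces turns the question into simple arithmetic. A sketch of that net-footprint framing, where every figure is a placeholder to show the calculation, not a measured value:

```python
def net_kgco2e(workflow_kgco2e_per_month: float,
               replaced_kgco2e_per_month: float) -> float:
    """Net monthly footprint of a new AI workflow.

    Positive means the workflow adds emissions on net;
    negative means it displaces more than it adds.
    """
    return workflow_kgco2e_per_month - replaced_kgco2e_per_month


# Placeholder example: a workflow estimated at 12 kgCO2e/month
# that replaces a process estimated at 40 kgCO2e/month.
example_net = net_kgco2e(12.0, 40.0)
```

A high-compute workflow that replaces nothing shows up immediately as a positive number with no offsetting term, which is exactly the conversation to have before building.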
These are reasonable questions. Vendors who won't answer them are telling you something.
The Aeron wasn't designed to be comfortable. It was designed because sitting in most office chairs for eight hours causes real physical harm, and nobody had treated that as a design problem worth solving seriously. The chair came from the conviction that material decisions and human outcomes are inseparable. You don't get to call something well-designed if it damages the body it's built for.
We're not neutral on AI's environmental impact. It's real, underdisclosed, and worth naming honestly. The answer isn't to use less AI. It's to use it with the same intentionality we bring to any material decision. Internally, we frame it as three disciplines:
Right model for the right task, every time.
Where it's possible, run heavy inference when the grid is cleaner.
If you can't estimate the environmental cost of a project, you're not ready to run it.
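The three disciplines compose into a single pre-run gate. A minimal sketch, assuming a hypothetical grid-intensity feed and a made-up cleanliness threshold; in practice you'd wire in a real regional carbon-intensity source:

```python
from typing import Optional


def ready_to_run(estimated_kwh: Optional[float],
                 grid_gco2_per_kwh: float,
                 clean_threshold: float = 200.0,
                 deferrable: bool = True) -> bool:
    """Gate a heavy inference job on the three disciplines.

    - No energy estimate at all -> not ready to run.
    - Deferrable job on a dirty grid -> wait for a cleaner window.
    - Otherwise -> go. Threshold and units are illustrative.
    """
    if estimated_kwh is None:
        return False  # can't estimate the cost: not ready to run it
    if deferrable and grid_gco2_per_kwh > clean_threshold:
        return False  # defer until the grid is cleaner
    return True
```

The point isn't the specific numbers; it's that "we couldn't estimate it" becomes a blocking state instead of a shrug.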
We don't accept vendor carbon credits as a substitute for real measurement. We don't think you should either.
See the broader picture on AI and work.
AI and the Workplace →

What sustainability question do you want us to answer next?