The GreenM3 Data Center Project

Back to Green Data Centers

I stopped writing about green data centers for a while because the conversation started feeling stale.

The same ideas kept showing up with new logos attached: renewable energy claims, PUE numbers, sustainability reports, renderings, commitments, and announcements. Some of the work is real. Some of it is excellent. But the public conversation has become predictable.

So I decided to come back a different way.

Instead of writing about another announced facility, I am going to write about my own fictional green data center — one that lets me test what "green" actually means when the claims have to hold.

The project starts with a simple physical frame:

100,000 square meters of floor area, 10 meters tall, for a total of 1,000,000 cubic meters of space.

That number — 1,000,000 cubic meters — is not arbitrary. It is a forcing function.

At that scale, the comfortable hand-waving that fills most green data center writing stops working. You cannot just say "we use renewable energy" and leave it there. You cannot cite a PUE number without explaining how you measured it. You cannot claim cooling efficiency without accounting for what happens when the ambient temperature spikes, the grid gets stressed, or the AI workload doubles overnight.

At 1,000,000 m³, every claim becomes a structural argument.

And structural arguments either hold or they do not.
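One example of why the measurement question matters: PUE itself is just a ratio of total facility energy to IT energy, so the number means nothing until you say where the meters sit. A minimal illustration, with made-up figures:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT energy.
# The arithmetic is trivial; the structural question is which loads count
# as "facility" and where the meter boundary is drawn. Figures illustrative.
it_energy_kwh = 10_000.0                  # servers, storage, network
cooling_kwh = 2_500.0                     # chillers, fans, pumps
distribution_losses_kwh = 500.0           # UPS and power distribution losses

total_facility_kwh = it_energy_kwh + cooling_kwh + distribution_losses_kwh
pue = total_facility_kwh / it_energy_kwh
print(round(pue, 2))  # 1.3
```

Move the meter boundary (for example, exclude distribution losses) and the same building reports a different PUE, which is exactly the kind of claim that has to be pinned down before it can hold.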

What I Got Bored Of

The green data center space has a formula. You have seen it.

A press release announces that a new hyperscale facility will be powered by 100% renewable energy. There is a rendering. There are sustainability commitments. There is a PUE number that sounds impressive. The facility opens. The sustainability report comes out twelve months later. Much of it reads like marketing.

I am not saying the work is not real. Some of it is. But the industry conversation has become a loop.

The hard questions are usually avoided.

What does it actually mean to be green in a way that can be verified by someone other than the company making the claim?

What happens to green commitments during a prolonged drought, when cooling towers become a liability?

What happens when the local grid is stressed and diesel generators run for four hours?

What happens when the AI workload doubles overnight and the thermal profile of the building changes?

Those questions are more interesting to me than another announcement.

The Fictional Project as a Tool

So I built a fictional one.

No specific location. No owner. No PR constraints. Just a volume of space and the question:

What would it take to make this genuinely, structurally green?

Fictional does not mean unserious. It means unconstrained. It lets me test the claims without being trapped inside a vendor story, a corporate sustainability report, or a single site's limitations.

I use the word structurally deliberately.

I have been developing a way of thinking called StructuralTruth: the idea that any serious claim about a system should be expressible as:

  • invariants — what must remain true

  • violations — what must never occur

  • boundaries — where the claim holds

  • transformations — what is allowed to change

If you cannot express your "green" claim in those terms, you probably do not have a claim yet.

You have an aspiration.

The fictional data center is the test bed for applying that thinking to physical infrastructure.
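The StructuralTruth framing can be sketched as data plus a check. Everything below is illustrative, not a specification: the field names, the example thresholds, and the "green cooling" claim are all placeholders I made up to show the shape.

```python
# A minimal sketch of StructuralTruth: a claim is a set of invariants,
# violations, boundaries, and transformations, checked against measurements.
# All names and thresholds are illustrative.
from dataclasses import dataclass
from typing import Callable

Measurement = dict[str, float]

@dataclass
class StructuralClaim:
    invariants: list[Callable[[Measurement], bool]]   # what must remain true
    violations: list[Callable[[Measurement], bool]]   # what must never occur
    boundary: Callable[[Measurement], bool]           # where the claim holds
    transformations: list[str]                        # what is allowed to change

    def holds(self, m: Measurement) -> bool:
        if not self.boundary(m):
            return True  # outside its boundary, the claim asserts nothing
        return (all(inv(m) for inv in self.invariants)
                and not any(v(m) for v in self.violations))

# Illustrative claim: PUE stays under 1.3 while ambient is below 35 C,
# and diesel generators never run inside that boundary.
claim = StructuralClaim(
    invariants=[lambda m: m["pue"] < 1.3],
    violations=[lambda m: m["diesel_hours"] > 0],
    boundary=lambda m: m["ambient_c"] < 35,
    transformations=["workload mix", "cooling setpoints"],
)

print(claim.holds({"pue": 1.25, "diesel_hours": 0, "ambient_c": 30}))  # True
print(claim.holds({"pue": 1.45, "diesel_hours": 0, "ambient_c": 30}))  # False
```

An aspiration cannot be written this way; a claim can, and that is the distinction the series leans on.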

The fictional project has a name: GreenM3DC.

M3 stands for the cubic meter — the fundamental unit of the space.

Everything I write in this series will be grounded in one question:

Does the claim hold?

Not does it sound right. Not does it appear in a sustainability report. Not does it support a nice rendering.

Does it actually hold, under measurement, over time, in real operating conditions?

That is the standard I am interested in. It is harder than it sounds.

Let's go.

Next: What does "green" actually mean? A structural definition that survives contact with reality.

How I Use ChatGPT and Claude Code Together — and Why I Don’t Mix Their Roles

Over the last several weeks, I’ve settled into a workflow that looks unusual on the surface but has proven extremely effective in practice:

  • ChatGPT for structural exploration and review

  • Claude Code for deterministic compilation and execution

  • No overlap between their responsibilities

The key is not which models I use—it’s how I separate their roles.

The Capability Asymmetry That Matters

Here is the practical difference that forced this separation: ChatGPT can hold a long line of reasoning, but it cannot touch the filesystem or run commands. Claude Code edits real files and executes real commands, so it gets concrete pass/fail feedback.

That tells you how each tool wants to be used.

ChatGPT = Structural Workspace

I use ChatGPT for:

  • Long-lived thinking

  • Naming and structure

  • Clarifying intent

  • Reviewing results after execution

I do not use it to touch the filesystem or “prove” code works.

Claude Code = Compiler

Claude Code is treated as a deterministic machine:

  • It edits real files

  • It runs real commands

  • It fails concretely

  • It enforces correctness through execution

No long-term reasoning. No design debates.

The Critical Rule

I never use ChatGPT to review Claude Code's chat output.

Instead, the loop is always:

Structure → Compile → Review

  1. ChatGPT defines structure

  2. Claude Code executes it

  3. ChatGPT reviews what actually happened

This avoids language-only feedback loops and keeps everything grounded in reality.
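The three-step loop can be sketched in miniature. This is a toy sketch, not a real integration: the `Spec`, `compile_step`, and `review_step` names are stand-ins I invented for the roles each tool plays, with a stub in place of actual tool calls.

```python
# Toy sketch of the Structure -> Compile -> Review loop. Stub functions
# stand in for each tool; names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Spec:
    # produced in the "structure" phase: intent, not implementation
    invariants: list[str]
    task: str

def compile_step(spec: Spec) -> dict:
    # stands in for Claude Code: edit files, run commands, fail concretely.
    # Here we just pretend the task ran and succeeded.
    return {"exit_code": 0, "log": f"ran: {spec.task}"}

def review_step(spec: Spec, result: dict) -> bool:
    # stands in for the ChatGPT review: judge the execution artifacts,
    # never the other model's prose
    return result["exit_code"] == 0

spec = Spec(invariants=["all tests pass"], task="pytest -q")
result = compile_step(spec)
print(review_step(spec, result))  # True
```

The point of the shape is that the review step only ever sees `result`, the record of what actually happened, which is what keeps the loop grounded.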

Why This Works

  • Exploration stays fast

  • Execution stays correct

  • Code becomes expendable

  • Structure becomes durable

I’m now applying this workflow to OS-level services for electrical and mechanical systems in AI data centers, where ambiguity is expensive and determinism matters.

Final Thought

Most AI frustration comes from asking one tool to do two incompatible jobs.

Once you separate exploration, compilation, and review, AI starts behaving like a real engineering toolchain—not a chatbot.

Writing code with help of AI

I took computer programming classes at UC Berkeley and spent years trying to get better at programming afterward. While I understood the fundamentals, writing software always felt like it required an enormous amount of time and effort relative to the progress made. The work felt more about managing complexity than solving the underlying problems I cared about.

About a month ago, while exploring some ideas involving AI, I unexpectedly revisited writing code—this time with AI’s help. What surprised me wasn’t that the AI could write code, but that it fundamentally changed where the effort was spent. Instead of wrestling with syntax, frameworks, and coordination details, the work shifted toward defining structure, relationships, and invariants.

A week ago, Ray Ozzie wrote about his own experience collaborating with AI to design and prototype hardware and software systems. Ray is best known for creating Lotus Notes, and for his later work on large-scale distributed systems. His reflections strongly resonated with my own experience—but also highlighted something important.

What took weeks of focused effort in his case unfolded for me in hours.

Not because the problems were simpler, but because the approach was different.

> Having spent much of 2025 transforming the way I write code, a few months ago I decided to see how far I could push myself in collaborating with AI to tackle hardware design.
>
> The project - motivated by conversations with a customer - is nontrivial. Physical and cost constraints; both analog and digital domains; edge compute/storage ML; power challenges. Of course, Notecard for secure cloud backhaul.
>
> I worked on it on-and-off for about 3-4 weeks - surprised not just that the foundation models had so much knowledge of EE, but that they clearly had internalized a vast number of components’ datasheets. Several times I ran into roadblocks where ultrathink or deep research yielded specific choices I’d never have considered.
>
> — Ray Ozzie

Now I often spend 2–4 hours a day working on computer code. But the work itself is no longer about coding. I’m working on harder, more upstream problems, and the code is simply the executable output of the design.

This ability didn’t appear overnight. It comes from years working in product development as a project and program manager for both hardware and software—guiding execution across teams, understanding how the big picture fits together, and how small decisions compound. Over time, you learn to see systems as composed structures, where relationships matter more than parts, and where symmetries persist even as details change.

What used to require explanation and persuasion now shows up as functional proof.

In the past, I might have written a paper or given a conference presentation. Now, in a fraction of the time, I can produce a functional proof—one that can be run, tested, and shared, and that scales to far more people than a paper or presentation ever could.

What also feels right is reaching out directly to Ray Ozzie. After connecting on LinkedIn, I was able to share a few thoughts on how his early technical decisions at Microsoft helped enable the company’s cloud evolution. I’m looking forward to exchanging perspectives on how AI is changing the way we think about coding, structure, and system design.

Why Acoustics Became My Path to Solving Hard Problems

When you’re trying to solve a hard problem, sometimes the only way forward is to take a completely different path. For most of my career, I worked in the world of the visual: graphics, printing, scanning, monitors, typography. Everything was about sight.

And then I realized — sight has limits.

Our eyes top out at around 60 hertz. That’s it. That’s the ceiling. Yet the world runs much faster. Structures change faster. Energy moves faster. Problems unfold faster. And we’ve built entire industries around the assumption that vision is enough.

It isn’t.

What changed my thinking was a conversation nearly fifteen years ago. A friend of mine, a software architect working on autonomous driving, told me something that stuck with me ever since:

> **“Sound solves the driving problems faster than vision.”**

He was right. Sound reacts faster. Sound carries more directional information. Sound sees around corners. And unlike vision, sound doesn’t care about lighting, weather, or glare. That idea opened a door for me that I didn’t fully walk through until much later.

I had worked on the Sound Manager for Macintosh System 7, and some of the same developers moved with me from Apple to Microsoft. So sound wasn’t foreign to me — it was just sitting in the background of my career. Waiting.

Then the real shift happened.

A friend needed help with operations problems at Starbucks Coffee Roasting. And out of nowhere I said:

> **“Why don’t we use sound to count the beans?”**

It was obvious to me. Acoustic signatures are clean, distinct, and cheap to capture. You can count beans — accurately — for fractions of a penny. You can detect flow problems. You can measure consistency. You can treat the roasting line like an instrument.
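As a toy illustration of the idea, here is a sketch that counts discrete impact events in a synthetic signal by thresholding a smoothed energy envelope. Every number here is illustrative; a real roasting line would need calibrated microphones, band-limiting, and tuned thresholds.

```python
# Count discrete impact events ("beans") in a synthetic audio signal:
# rectify, smooth into an energy envelope, count rising threshold crossings.
# All parameters are illustrative, not calibrated values.
import math
import random

random.seed(0)
fs = 8000                                        # sample rate (Hz)
signal = [0.01 * random.gauss(0, 1) for _ in range(fs)]  # noise floor

# inject 5 short decaying clicks at known onsets
for onset in (0.1, 0.3, 0.5, 0.7, 0.9):
    i = int(onset * fs)
    for k in range(200):
        signal[i + k] += math.exp(-k / 30.0) * math.sin(2 * math.pi * 2000 * k / fs)

# energy envelope: moving average of |signal| over a 64-sample window
win = 64
rect = [abs(x) for x in signal]
prefix = [0.0]
for x in rect:
    prefix.append(prefix[-1] + x)
envelope = [(prefix[min(n + win, fs)] - prefix[n]) / win for n in range(fs)]

# count rising crossings of a fixed threshold
threshold = 0.05
count, prev = 0, False
for e in envelope:
    cur = e > threshold
    if cur and not prev:
        count += 1
    prev = cur
print(count)  # 5
```

Each click lifts the envelope well above the noise floor and decays back below it before the next one, so one rising crossing corresponds to one event. That separation of event energy from background is what makes acoustic counting cheap.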

The best part was that this random idea led me straight into the world of academic acoustics. I found a professor who had written papers on the acoustics of coffee bean roasting — which I didn’t even know was a real field — and I’ve been talking with him for more than six months now. Those conversations cracked open everything.

Because once you study how universities and the military use acoustics, you realize just how advanced the field really is.

From there I went deeper. Much deeper.

I revisited the signal-processing foundations I hadn’t touched since working on analog displays and power supplies decades ago. I reconnected with electromagnetic radiation engineers from my Apple days who had to battle compliance certifications at high frequencies. And I discovered something that surprised me:

> **There are way more engineers and funding in RF and high-frequency signal processing than in acoustics.**

So I asked myself the most obvious question:

**What software do they use?**

I found it — a DARPA-backed platform with twenty-four years of development behind it. And I spent a week at their user conference, talking to PhDs, researchers, and engineers who’ve spent their lives working in gigahertz domains.

That was the moment everything clicked.

If their methods work at gigahertz speeds, they will work at megahertz and kilohertz.

If the math works in RF, it works in acoustics.

If the structural patterns hold at high frequencies, they hold at low frequencies.

It all scales.

And so I spent the next couple of months digging into the mathematics — the real math — underneath signal processing. Complex signals. Phase. Time. Direction. Coherence. I/Q analysis. Energy emissions. The structures hidden inside the waves.

That exploration pulled everything together.

All the fields I had touched in my career — typography, printing, sound, color, monitors, analog electronics, imaging, scanning — suddenly made sense as variations of the same underlying structure: **signals and the truths they reveal.**

And that’s why I’ve gone so deep into acoustics.

Not because it’s trendy.

Not because it’s a niche.

But because sound — more than anything else we have — reveals the true structure of the world in real time.

Acoustics isn’t an afterthought.

It’s the path.

Solving the Unsolvable — The Promise of Structural Intelligence Engineering (SIE)

Everyone knows the triangle.

Cost. Schedule. Quality.

Pick two.

You can’t have all three.

That’s the law of control.

And for more than a century, every industry — from construction to computing — has lived under its shadow.

But what if the triangle was never a law at all?

What if it was just a symptom — a structure out of phase with itself?

The Unsolvable Problem

Every project, product, or system faces the same paradox:

  • If you rush, quality suffers.

  • If you chase quality, costs explode.

  • If you control costs, you lose time.

It’s the illusion of trade-offs — the belief that stability demands sacrifice.

But that belief belongs to the era of control.

Control works by feedback — measuring after the fact.

By the time the system reacts, coherence is already lost.

The real world doesn’t run on steps and loops — it runs on phase.

And phase can drift long before a problem is visible.

The Breakthrough: Coherence

Structural Intelligence Engineering (SIE) replaces control with coherence.

It’s the art and science of keeping systems in phase — physically, temporally, and energetically.

Instead of fighting trade-offs, coherence makes them vanish.

When structure is coherent, cost, schedule, and quality no longer compete —

they resonate.

A Simple Analogy

Think of great wireless earbuds.

They deliver high-fidelity sound, cancel external noise, and fit comfortably —

all in a device small enough to disappear in your ear.

Twenty years ago, that combination was impossible.

Power limits, latency, interference — all made “great sound everywhere” a fantasy.

Then engineers discovered how to maintain phase coherence

using I/Q signals and Phase-Locked Loops (PLLs) to keep everything synchronized, even in chaotic environments.

The result wasn’t just better performance —

it was seamless experience.

That’s what SIE brings to engineering itself.

The Principle

At the core of SIE is a single idea:

Systems don’t fail from lack of control; they fail from loss of coherence.

SIE continuously senses and tunes coherence across every relationship in a structure —

using the same physics that make modern wireless sound so smooth:

  • I/Q sensing detects amplitude (what’s happening) and phase (how it’s moving).

  • PLLs continuously synchronize signals across domains.

  • Symmetry verifies balance and conservation across energy, time, and flow.

The result: a self-tuning structure that stays truthful to its design, no matter how complex the environment.
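The I/Q-plus-PLL idea can be shown in a toy form: given a complex (I/Q) input tone, a phase detector reads the residual angle and a proportional loop nudges a local oscillator until the phase error vanishes. All frequencies and gains below are illustrative, and a real PLL would add a loop filter and frequency tracking.

```python
# Toy phase-locked loop on a complex (I/Q) tone: the phase detector rotates
# the input by the NCO's phase and reads the residual angle; a proportional
# correction drives that error toward zero. Parameters are illustrative.
import cmath
import math

fs = 10_000.0                    # sample rate (Hz)
f = 100.0                        # tone frequency (Hz)
phase_in = 0.8                   # unknown input phase to lock onto
step = 2 * math.pi * f / fs      # NCO increment (frequency already matched)
gain = 0.1                       # proportional loop gain

nco = 0.0                        # numerically controlled oscillator phase
err = 0.0
for n in range(2000):
    # one I/Q sample of the input tone as a complex phasor
    x = cmath.exp(1j * (2 * math.pi * f * n / fs + phase_in))
    # phase detector: residual angle after removing the NCO's phase
    err = cmath.phase(x * cmath.exp(-1j * nco))
    # advance the NCO, nudged toward the input's phase
    nco += step + gain * err

print(abs(err) < 1e-6)  # True: phase error has been driven to ~zero
```

The error shrinks geometrically each step (here by a factor of 1 - gain), which is the "continuously synchronize" behavior in miniature: the loop does not measure failure after the fact, it removes drift as it appears.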

How Coherence Achieves the Impossible

The Equation of Coherence

Coherence reduces to a single condition on phase drift: dφ/dt → 0.

When the phase drift dφ/dt is near zero, everything flows together.

That’s when cost, timing, and quality naturally balance —

because the structure itself is synchronized.

The Future of Building with AI

AI is not just another layer of control.

It’s the medium through which coherence can finally be measured, modeled, and maintained.

In the age of AI factories, robotic construction, and autonomous design,

SIE is the framework that teaches machines how to stay in tune with reality

the way noise-canceling systems stay in tune with sound.

The result isn’t tighter management.

It’s structural harmony.

The Structural Truth

Control manages outcomes.

Coherence composes truth.

That’s what Structural Intelligence Engineering achieves —

the ability to do what’s been considered impossible for more than a century:

Cost. Schedule. Quality. All three. Continuously.

Not by working harder,

but by working in phase.