When you’re trying to solve a hard problem, sometimes the only way forward is to take a completely different path. For most of my career, I worked in the world of the visual: graphics, printing, scanning, monitors, typography. Everything was about sight.
And then I realized — sight has limits.
Our eyes top out at around 60 hertz, the flicker-fusion threshold. That's it. That's the ceiling. Yet the world runs much faster. Structures change faster. Energy moves faster. Problems unfold faster. And we've built entire industries around the assumption that vision is enough.
It isn’t.
What changed my thinking was a conversation nearly fifteen years ago. A friend of mine, a software architect working on autonomous driving, told me something that has stuck with me ever since:
> **“Sound solves the driving problems faster than vision.”**
He was right. Sound reacts faster. Sound carries more directional information. Sound sees around corners. And unlike vision, sound doesn’t care about lighting, weather, or glare. That idea opened a door for me that I didn’t fully walk through until much later.
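To make the directional claim concrete: with just two microphones, the bearing of a sound source falls out of the time difference of arrival between them. Here is a minimal sketch in Python; the sample rate, mic spacing, and synthetic click are all invented for illustration, not taken from any real driving stack.

```python
import numpy as np

FS = 48_000   # sample rate in Hz (assumed for the demo)
C = 343.0     # speed of sound in air, m/s
D = 0.20      # microphone spacing in metres (assumed)

def bearing_from_tdoa(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate source bearing (degrees off broadside) from two mic signals."""
    # Cross-correlate to find the lag (in samples) that best aligns the signals.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # samples; equals t_left - t_right
    dt = lag / FS                              # time difference of arrival, seconds
    # Geometry: c * dt = d * sin(theta). Clip to keep arcsin in range.
    return np.degrees(np.arcsin(np.clip(C * dt / D, -1.0, 1.0)))

# Synthetic test: the same click reaches the left mic 5 samples earlier,
# so the source sits toward the left and the angle comes out negative.
rng = np.random.default_rng(0)
click = rng.standard_normal(64)
left = np.concatenate([click, np.zeros(100)])
right = np.concatenate([np.zeros(5), click, np.zeros(95)])
print(f"estimated bearing: {bearing_from_tdoa(left, right):.1f} degrees")
```

Cross-correlation finds the lag, and simple geometry turns the lag into an angle. That is the whole trick, and it works in the dark.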
I had worked on the Sound Manager for Apple's System 7, and some of the same developers moved with me from Apple to Microsoft. So sound wasn't foreign to me; it was just sitting in the background of my career. Waiting.
Then the real shift happened.
A friend needed help with operations problems at Starbucks Coffee Roasting. And out of nowhere I said:
> **“Why don’t we use sound to count the beans?”**
It was obvious to me. Acoustic signatures are clean, distinct, and cheap to capture. You can count beans — accurately — for fractions of a penny. You can detect flow problems. You can measure consistency. You can treat the roasting line like an instrument.
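As a sketch of how simple that counting can be: threshold the short-time energy envelope of the microphone signal and count rising edges, with a short dead time so one impact never counts twice. Everything below, including the sample rate, threshold, dead time, and the synthetic clicks, is invented for illustration; it shows the shape of the idea, not the system itself.

```python
import numpy as np

def count_impacts(signal: np.ndarray, fs: int,
                  threshold: float = 0.3, dead_time_s: float = 0.01) -> int:
    """Count discrete impact events (beans hitting a surface) by
    thresholding the signal's short-time energy envelope."""
    # Rectify and smooth over ~2 ms to get a coarse energy envelope.
    window = max(1, int(0.002 * fs))
    envelope = np.convolve(np.abs(signal), np.ones(window) / window, mode="same")
    above = envelope > threshold * envelope.max()
    # Rising edges mark new events; a dead time suppresses double counts.
    edges = np.flatnonzero(~above[:-1] & above[1:])
    count, last = 0, -np.inf
    for e in edges:
        if e - last >= dead_time_s * fs:
            count += 1
            last = e
    return count

# Synthetic check: ten clicks, 50 ms apart, buried in light noise.
fs = 44_100
rng = np.random.default_rng(1)
sig = 0.01 * rng.standard_normal(fs)            # one second of background noise
for k in range(10):
    start = int((0.05 + 0.05 * k) * fs)
    sig[start:start + 200] += np.hanning(200) * rng.uniform(0.5, 1.0)
print(count_impacts(sig, fs))                   # prints 10
```

A real roasting line would need calibration for flow rate and overlapping impacts, but the core loop really is this small, which is where the fractions-of-a-penny economics come from.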
The best part was that this random idea led me straight into the world of academic acoustics. I found a professor who had written papers on the acoustics of coffee bean roasting — which I didn’t even know was a real field — and I’ve been talking with him for more than six months now. Those conversations cracked open everything.
Because once you study how universities and the military use acoustics, you realize just how advanced the field really is.
From there I went deeper. Much deeper.
I revisited the signal-processing foundations I hadn't touched since working on analog displays and power supplies decades ago. I reconnected with electromagnetic-compatibility engineers from my Apple days who had battled compliance certifications at high frequencies. And I discovered something that surprised me:
> **There are far more engineers, and far more funding, in RF and high-frequency signal processing than in acoustics.**
So I asked myself the most obvious question:
**What software do they use?**
I found it — a DARPA-backed platform with twenty-four years of development behind it. And I spent a week at their user conference, talking to PhDs, researchers, and engineers who’ve spent their lives working in gigahertz domains.
That was the moment everything clicked.
If their methods work at gigahertz speeds, they will work at megahertz and kilohertz.
If the math works in RF, it works in acoustics.
If the structural patterns hold at high frequencies, they hold at low frequencies.
It all scales.
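The reason it scales is that the core mathematics, Fourier analysis, correlation, filtering, never asks what the carrier frequency is; only the antennas, microphones, and samplers change. As a toy illustration, the same FFT peak-picker below estimates a 1 kHz audio tone and a 50 MHz RF-style tone with nothing changed but the sample rate (both signals are synthetic and all the numbers are invented):

```python
import numpy as np

def dominant_frequency(x: np.ndarray, fs: float) -> float:
    """Return the strongest frequency in x via an FFT peak.
    The computation is identical whether fs is kilohertz or gigahertz."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))  # windowed magnitude spectrum
    return np.fft.rfftfreq(len(x), d=1.0 / fs)[np.argmax(spectrum)]

n = 4096
for fs, f0, label in [(48e3, 1e3, "audio"), (2.4e9, 50e6, "RF")]:
    t = np.arange(n) / fs
    tone = np.sin(2 * np.pi * f0 * t)
    print(f"{label}: true {f0:g} Hz, estimated {dominant_frequency(tone, fs):g} Hz")
```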
And so I spent the next couple of months digging into the mathematics — the real math — underneath signal processing. Complex signals. Phase. Time. Direction. Coherence. I/Q analysis. Energy emissions. The structures hidden inside the waves.
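For anyone who hasn't met I/Q analysis: the analytic signal repackages a real waveform as a complex one whose real and imaginary parts are the in-phase and quadrature components, and amplitude, phase, and instantaneous frequency fall straight out of it. A small sketch, using an invented chirp and SciPy's Hilbert transform:

```python
import numpy as np
from scipy.signal import hilbert

# An invented chirp: frequency sweeps from 100 Hz to 400 Hz over one second.
fs = 8_000
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * (100 * t + 150 * t**2))

# The analytic signal packs the real waveform into I (real) and Q (imaginary).
analytic = hilbert(x)
i, q = analytic.real, analytic.imag

envelope = np.hypot(i, q)                      # instantaneous amplitude, sqrt(I^2 + Q^2)
phase = np.unwrap(np.arctan2(q, i))            # instantaneous phase, radians
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency, Hz

print(f"frequency near t=0.1 s: {inst_freq[int(0.1 * fs)]:.0f} Hz")  # ~130 Hz
print(f"frequency near t=0.9 s: {inst_freq[int(0.9 * fs)]:.0f} Hz")  # ~370 Hz
```

The chirp's true instantaneous frequency is 100 + 300t Hz, so the two printed values land near 130 and 370. That phase-unwrapping move is the same whether the wave is a coffee bean's impact or a radar return.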
That exploration pulled everything together.
All the fields I had touched in my career — typography, printing, sound, color, monitors, analog electronics, imaging, scanning — suddenly made sense as variations of the same underlying structure: **signals and the truths they reveal.**
And that’s why I’ve gone so deep into acoustics.
Not because it’s trendy.
Not because it’s a niche.
But because sound — more than anything else we have — reveals the true structure of the world in real time.
Acoustics isn’t an afterthought.
It’s the path.