GitHub for Knowledge Work Is the Wrong Metaphor. You Need a Refinery, Not a Repo.

Mesa is pitching version control for expert knowledge. But you can't version what was never structured in the first place.

You have 20 years of judgment in your head and nowhere clean to put it. Client calls, Loom recordings, Slack threads, a Notion graveyard, three unfinished SOPs. The chaos is real, and every week it compounds while someone else's model gets smarter off the exhaust.

The cost is not abstract. A senior expert billing $400/hr who spends six hours a week re-explaining the same frameworks to team members, clients, or an AI chat window is burning $124,800 a year on repetition. That is before you count the deals you lost because you could not scale yourself into the room.
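
The arithmetic behind that figure is worth making explicit:

```python
# Annual cost of repetition for a senior expert, using the article's figures.
RATE_PER_HOUR = 400    # billing rate, $/hr
HOURS_PER_WEEK = 6     # time spent re-explaining the same frameworks
WEEKS_PER_YEAR = 52

annual_repetition_cost = RATE_PER_HOUR * HOURS_PER_WEEK * WEEKS_PER_YEAR
print(annual_repetition_cost)  # 124800
```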

So the real question is not how to organize the chaos. It is what becomes possible when your judgment is structured, queryable, and owned by you. What does a practice look like when your IP answers questions at 3am, trains agents on your terms, and licenses to platforms instead of getting scraped by them?

I'm Matt Cretzman. I've spent the last two years building that stack. And I want to tell you why the metaphor everyone is reaching for right now is wrong.

Mesa, GitHub, and the Seduction of Version Control

Mesa is getting attention for pitching version control for knowledge work. The pitch lands because experts feel the chaos in their bones. Git solved a real problem for engineers: multiple people, one artifact, and a need for history and merges. If you squint, knowledge work looks similar. Multiple drafts, multiple contributors, no source of truth.

But squinting is the problem. Version control assumes the artifact already exists.

Git does not help you decide what the code should do. It helps you track it once written. Applied to expert knowledge, version control assumes you already have structured, committable IP. Most experts do not. They have lived judgment trapped in conversation, calls, intuition, and pattern recognition that has never been extracted, let alone written down.

Versioning that? You are versioning someone else's extraction pipeline. The transcript a tool made from your call. The summary an LLM generated from your Loom. The SOP a junior wrote by watching you. You are tracking changes on a derivative while the original, the actual judgment, stays locked in your head and leaks out one client call at a time.

That is not ownership. That is custody of the copy.

The Three Extractions and Where Repos Fit

I write about three extraction economies in my book. Worth naming them here because the repo metaphor lives inside one of them.

First, LLMs ingested expert knowledge without consent. The UK copyright fight last year made it clear: 88% of surveyed creators wanted protection, the government punted, the models kept training. Their terms.

Second, platforms like Mercor pay experts hourly to dump IP into training sets. Clean transaction, clean conscience, and your life's work becomes a line item in someone else's moat. Still their terms.

Third, expert-owned extraction. You structure the judgment, you own the artifact, you control distribution and revenue. Your terms.

A repo sits in the first two economies by default. If the artifact you are versioning was produced by someone else's pipeline, the pipeline owner wins. Version control is downstream of extraction. Whoever owns the extraction owns the economy.

This is the thing most experts miss. The upstream moment, where lived judgment becomes structured IP, is the moment where ownership is decided. Everything downstream is just accounting.

What a Refinery Actually Is

Crude oil is not gasoline. It has to be refined. Expert judgment is not IP. Same problem.

A refinery is upstream of the repo. It is the system that takes the raw material, your calls, your decisions, your pattern recognition, your weird non-obvious intuitions, and turns it into structured, queryable, licensable knowledge. Not transcripts. Not summaries. Structured cards with provenance, context, and the judgment layer intact.

That is what I built Skill Refinery to do. The Knowledge Delivery System, or KDS, is the refinery stack. It sits upstream of every tool, every agent, every repo you will ever use. The output is cards you own, indexed the way you think, connected to the decisions they came from.
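
What might such a card look like? A minimal sketch, where the field names are my own illustration, not the actual KDS schema:

```python
from dataclasses import dataclass, field

# Illustrative shape of a refined knowledge card. The fields are an
# assumption for this sketch, not the real KDS format: the point is that
# the judgment layer and its provenance travel with the card.
@dataclass
class KnowledgeCard:
    context: str          # the situation in which the judgment applies
    input: str            # what the expert was looking at
    judgment: str         # the call actually made, and why it is non-obvious
    outcome: str          # what happened, so the pattern can be trusted
    provenance: str       # which call or decision this was refined from
    tags: list[str] = field(default_factory=list)  # indexed the way you think

card = KnowledgeCard(
    context="enterprise renewal, champion just left",
    input="client asks for a 30% discount to close by quarter end",
    judgment="hold price, trade scope instead; discounts reset every future anchor",
    outcome="renewal closed at full rate",
    provenance="2023-04 pricing call",
    tags=["pricing", "negotiation"],
)
```

A transcript cannot be queried this way; a card can.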

Once you have that, the repo question becomes trivial. Version control on refined, structured IP is useful. Version control on raw exhaust is theater.

Why MCP Changed the Math

When Anthropic launched the Model Context Protocol in November 2024, a lot of people saw a developer convenience. I saw the rails. MCP is an open standard for how models talk to tools and knowledge sources. It means your refined IP does not have to live inside one vendor's walled garden.

In December 2025 Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation. That matters. It is no longer a one-vendor bet. The rails are now neutral infrastructure, which means the expert who refines their own IP can plug that IP into any model, any agent, any client, on their terms.

This is the part Mesa and the version-control-for-knowledge crowd are not selling you. A repo locks you into a workflow. An MCP-native refinery lets your IP move. You refine once, serve everywhere. The refinery becomes the source of truth. The agents become the delivery mechanism. You stay in the middle, compensated, cited, and in control.
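
Under the hood, MCP rides on JSON-RPC 2.0. As a rough illustration of the shape of that exchange, not the real SDK or spec methods, serving refined IP amounts to answering structured requests from a store you control; the method name and card store here are my invention:

```python
import json

# Toy sketch of the idea behind MCP: an agent sends a structured JSON-RPC
# request, and the expert's refinery answers from cards the expert owns.
# The "cards/get" method and in-memory store are hypothetical, for
# illustration only; they are not part of the MCP specification.
CARDS = {
    "pricing-discount": "Hold price; trade scope, never rate.",
}

def handle_request(raw: str) -> str:
    req = json.loads(raw)
    card = CARDS.get(req["params"]["id"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"card": card} if card else {"error": "not found"},
    })

reply = handle_request(json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "cards/get", "params": {"id": "pricing-discount"},
}))
```

Swap the model or the agent on the other end and nothing on your side changes. That is what protocol-level portability buys.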

The Numbers Experts Keep Missing

Let me anchor this in real math for a senior expert.

Say you run a specialty consulting practice. $400/hr rate. 1,200 billable hours a year. $480,000 gross. You spend roughly 15% of those hours repeating yourself: onboarding, explaining the same framework for the forty-eighth time. That is 180 hours, $72,000 of pure repetition cost.

Now refine. Three months of work to get your top 50 decision patterns into the refinery. Those 50 cards feed an agent that handles 80% of the repetition. You reclaim 144 of those 180 hours. At your rate that is $57,600 recovered in year one, and every year after.
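
The same math, laid out step by step with the article's figures:

```python
# Year-one recovery from refining, using the figures above.
RATE = 400                # $/hr
BILLABLE_HOURS = 1200
REPETITION_SHARE = 0.15   # share of hours spent repeating yourself
AGENT_COVERAGE = 0.80     # fraction of repetition the agent absorbs

repetition_hours = BILLABLE_HOURS * REPETITION_SHARE   # 180 hours
repetition_cost = repetition_hours * RATE              # $72,000
reclaimed_hours = repetition_hours * AGENT_COVERAGE    # 144 hours
recovered_value = reclaimed_hours * RATE               # $57,600
```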

But that is the small number. The bigger number is what refined IP lets you do: productize, license, train junior staff against a real knowledge base, serve inbound leads at 3am, and yes, license access to platforms instead of getting extracted by them. Even a conservative licensing arrangement on refined expert IP clears six figures for a single practitioner.

A repo does none of that. A repo stores. A refinery produces.

The Quiet Part

I believe the judgment you have accumulated is a gift and a calling, not a commodity. You did not spend twenty years building pattern recognition so an anonymous training run could dilute it into the average.

Stewardship of that gift means structuring it, owning it, and deciding the terms on which it moves through the world. That is not a branding exercise. That is integrity with what you were given.

Move on.

What To Actually Do

Stop looking for a better filing system. Stop versioning your exhaust. Start upstream.

First, identify the 30 to 50 decisions that carry most of your value. Not topics. Decisions. The calls you make that a smart generalist cannot.

Second, refine them. Context, input, judgment applied, outcome, provenance. Structured enough that an agent can retrieve them. Owned enough that a platform cannot claim them.

Third, route them through MCP-native rails so the IP is portable across whatever model wins the next eighteen months. Do not bet on a vendor. Bet on the protocol.

Fourth, put the refinery in front of every agent, tool, and client touchpoint in your business. The refinery is the spine. Everything else is a limb.

That is the category. Not GitHub for knowledge work. A refinery for expert judgment, with the repo sitting downstream where it belongs.

I'm writing a book about this. On Whose Terms: The New Expert Economy and the Fight for What You Know. If the thesis resonates, join the launch list.

If you want to see what the stack looks like built out, the full breakdown lives at mattcretzman.com.

Keep Building,

— Matt
