Hi, I'm Sal.

I design operating systems for organizations. Not software. The actual architecture that determines how information flows, how decisions get made, and how teams coordinate across competing priorities. I've done that across Fortune 500 enterprise, PE-backed SaaS, and AI-first startups.

What's different now is that AI is the infrastructure layer, not just another tool in the stack. I build systems where AI handles the commodity work and humans stay focused on judgment, context, and the parts that actually require thinking. Not automation for its own sake. Architecture that makes both sides stronger because they exist together.

That's integrative thinking applied to system design. Roger Martin coined the term. I've spent my career living it.

If you're curious about how I think, you can explore my ideas, look at the systems I've built, or read some of my writing. Or you can just ask.

Interview Me

Trained on how I think, what I've built, and how I approach work. Treat it like a first conversation.

Built with Claude by Anthropic

Thinking

The AI Problem, Explained in a Kitchen

For most of my career, I got really good at cooking. I learned the knives, the pans, the timing. I got comfortable in my kitchen.

Then AI came along. But AI didn't change my kitchen. What it did was give me ten more hands to prepare the same meal I had made dozens of times before.

That changed everything.

Not because I don't know how to cook. I do. But when you suddenly have twelve hands, the skill that matters most isn't the cooking anymore. It's the coordination. What should all those hands be doing, when, and why? How do they work together without knocking things over?

That's not a tool problem. That's an architecture problem.

Three things changed when AI arrived. Tools used to be passive; now they participate. They don't change the space itself, but they change how we operate in it. And they increase the rate of creation so sharply that curation becomes the new competitive advantage.

Those shifts are what most organizations are getting wrong right now. They're making the same meal, in the same kitchen, with ten more hands, and no change to the workflows, decision chains, or coordination patterns built for a world with two hands and passive tools. The result is speed without structure. Output without discernment. Scale without quality.

AI made creation easy. It made curation essential.

I design the operating systems that make curation possible at scale.


Integrative Thinking: Beyond Synthesis to Transformation

Not compromise. Not balance. Sublation: holding opposing models in tension until they produce something better than either one alone.

Roger Martin at the Rotman School built the framework. Integrative thinking: the discipline of holding two opposing models in tension long enough to produce a resolution that's superior to either one. Most organizations pick a side. Speed or governance. Automation or judgment. Scale or accountability. Integrative thinking refuses the choice. It designs for both.

What I do takes that further. In Hegel's language, the word is sublation. Not synthesis as compromise. Transformation. The output isn't a blend of two inputs. It's something structurally new that couldn't have existed without the tension between them.

That instinct shows up in everything I build. Four patterns repeat:

01 Governance precedes execution in every system.

Constraints load before capabilities. AI reads its boundaries before it touches anything. That's not caution. That's engineering. A system without constraints produces output. A system with constraints produces trustworthy output.

02 AI functions as infrastructure, not as a tool.

The mental shift isn't "how do I use AI?" It's "what operating system does this domain need, and where does AI fit in the architecture?" Tools do what you tell them. Infrastructure constrains, routes, remembers, coordinates, and judges.

03 Memory is engineered into every workspace.

AI models are stateless. They forget everything between sessions. Most practitioners accept this and move on. I couldn't. Every system I build has a memory architecture: context files that load before AI acts, intelligence briefs that accumulate, persistent trackers that maintain state across sessions. A system that can't retrieve what it learned last week isn't a system. It's a tool you have to re-teach every morning.
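The memory pattern above can be sketched in a few lines. This is a hedged, minimal illustration, not my production code: the file names, the `REQUIRED_READS` list, and the `call_model` parameter are all hypothetical stand-ins for whatever context files and model interface a real workspace uses.

```python
# Hypothetical sketch of a session-memory layer: context loads before the
# model acts, and what the session produces is written back for next time.
# File names and call_model() are illustrative, not a real API.
from pathlib import Path

REQUIRED_READS = ["constraints.md", "intelligence_brief.md", "tracker_state.md"]

def load_context(workspace: Path) -> str:
    """Concatenate the context files that must load before any AI action."""
    parts = []
    for name in REQUIRED_READS:
        f = workspace / name
        if f.exists():
            parts.append(f"## {name}\n{f.read_text()}")
    return "\n\n".join(parts)

def run_session(workspace: Path, task: str, call_model) -> str:
    context = load_context(workspace)      # memory in: loaded before the model acts
    output = call_model(f"{context}\n\nTask: {task}")
    brief = workspace / "intelligence_brief.md"
    with brief.open("a") as f:             # memory out: state persists across sessions
        f.write(f"\n- {task}: {output[:120]}\n")
    return output
```

The design choice is the ordering: retrieval happens before the model sees the task, and the write-back happens unconditionally after, so next week's session starts from this week's state instead of a blank slate.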

04 The same architectural signature transfers across radically different domains.

Govern, route, synthesize, execute, augment. That sequence appeared independently in a 3,000-contact recruiting pipeline, a faceless content business, a skills assessment tool, a go-to-market diagnostic, and a 28-pattern career operations system. The problems were different. The architecture converged.


What the Evidence Shows

When I mapped 130 operational patterns across 8 workspaces to a cognitive taxonomy designed for evaluating AI systems, the data told a story I hadn't planned to tell. Here's what it showed about how I build.

I think in constraints before capabilities. Every one of my 8 production workspaces starts with governance. Behavioral boundaries. Decision gates where AI stops and a human chooses. DO NOT lists. Constitutional documents that load at the start of every session. The constraint architecture always comes first. Reasoning accounts for 19 patterns, the highest count in the heatmap, and most of them are judgment layers that determine what ships and what doesn't.

I design for the tension, not past it. The operating model I derived from my own work holds five roles in productive tension: governance constrains execution, routing eliminates ambiguity, synthesis transforms raw input into rules (not summaries), execution stays inside defined corridors, and augmentation puts human judgment precisely where irreversible decisions happen. Those roles don't resolve the tension between AI speed and human authority. They make both sides stronger because they exist together.

I build institutional memory by instinct. 18 memory patterns across 8 workspaces. The third-highest cognitive ability in the portfolio. 99-file knowledge bases, persistent context specifications, intelligence briefs with declared authority hierarchies, single-source-of-truth documents loaded as Required Reads before any AI action. This wasn't a deliberate strategy. It was an instinct that repeated until the data made it visible.

I operate at compressed timelines without compressing quality. A 12-pattern GTM diagnostic tool (framework, scorecards, article, outreach note) deployed as a single atomic commit. A 636-line product marketing site built in one AI session. A content business architecture with 5 engines, 12 API keys, and a 10-stage production pipeline designed as a single orchestrated system. The speed comes from fluency with the infrastructure, not from shortcuts.

I build systems that monitor themselves. 14 metacognition patterns. Self-assessment before publish, confidence scoring, pipeline health monitoring, evaluator disagreement detection. Systems that know what they don't know. Frankly, this is the capability most organizations skip entirely. And it's the one that determines whether AI scales safely or just scales.

I work across domains without losing architectural coherence. 8 completely independent workspaces: startup operations, company culture, content business, portfolio engineering, product marketing, skills assessment, go-to-market diagnostics, career management. Same five roles. Same governance-first instinct. Same memory architecture. The signature transferred without modification to the underlying design principles.

I know where I'm thin. Learning loops and social cognition are my two weakest cognitive layers. My systems are excellent at Day 1 judgment. They're less equipped for Day 90 improvement. The feedback mechanisms that would make the systems genuinely adaptive over time are designed but not yet instrumented. I say that because the point of mapping your work to a taxonomy isn't just to see what you've built. It's to see what you haven't.

The pattern across 130 operational patterns is clear. I don't build AI tools. I build AI operating systems. The architecture is governance-first, memory-rich, structurally reasoned, and designed so that AI and human judgment make each other stronger. Not because I set out to build that way. Because the problems I'm drawn to require it.

Have a question about how I think?

Systems

What I Actually Do

I'm not an AI engineer. I don't write the models.

I'm not a traditional operator. I don't just use the tools.

I'm the person who designs the operating layer between AI capabilities and organizational outcomes. Governance, routing, memory, coordination, judgment architecture. That layer barely exists in most organizations. The people who can design it are rare. And the gap between organizations that have it and organizations that don't is widening every quarter.

The rest of this page is the evidence. The architecture I derived from 130 operational patterns across 8 independent workspaces. How those patterns map to a cognitive taxonomy designed for evaluating AI systems. And the working systems you can experience yourself.


The Architecture

Five roles AI plays when it becomes infrastructure. Derived from 130 patterns across 8 production workspaces.

01 Govern.

Define what AI can and cannot do before it touches anything. Constraints precede capabilities. Every workspace has documents, rules, or constitutional files that load before AI executes. This is the rarest instinct in AI adoption right now, and the most important one for organizations trying to scale safely.

02 Route.

Detect what you're dealing with and dispatch it to the right process. AI classifies inputs, confirms the classification, and sends work to the correct pipeline without the human having to specify. This is the connective tissue between governance and execution.
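The routing role reduces to a small dispatch structure. A minimal sketch, assuming a classifier and a pipeline registry; the input classes, keyword rules, and handler names here are hypothetical (a real router would use an AI classifier with a confirmation step rather than keyword matching).

```python
# Illustrative router: classify an input, then dispatch it to the pipeline
# registered for that class. Classes and handlers are hypothetical.
from typing import Callable

PIPELINES: dict[str, Callable[[str], str]] = {}

def pipeline(kind: str):
    """Register a handler for one input class."""
    def register(fn):
        PIPELINES[kind] = fn
        return fn
    return register

def classify(text: str) -> str:
    # Stand-in for an AI classifier; keyword rules keep the sketch runnable.
    if "resume" in text.lower():
        return "candidate"
    return "general"

def route(text: str) -> str:
    kind = classify(text)
    handler = PIPELINES.get(kind, PIPELINES["general"])  # unknown classes fall back
    return handler(text)

@pipeline("candidate")
def score_candidate(text: str) -> str:
    return "scored"

@pipeline("general")
def triage(text: str) -> str:
    return "triaged"
```

The fallback on unknown classes is the point: ambiguity is resolved by the router, not pushed back onto the human.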

03 Synthesize.

Transform raw, high-volume input into structured, decision-ready intelligence. Transcripts become strategic insights. Research becomes operational rules. Scattered sources become a coherent knowledge base with declared authority hierarchies. This is where AI's leverage is highest and where most organizations leave the most value on the table.

04 Execute.

Once governance is set, routing is defined, and synthesis is complete, AI runs the operation at machine speed inside defined corridors. The human doesn't execute. The human designed the system that executes.

05 Augment.

At decision points that have irreversible downstream consequences, AI presents analysis, scores, and recommendations. The human decides. The system is designed so the human sees a recommendation, not a blank slate.
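The five roles compose into a single ordered loop. The sketch below is purely schematic, every function body is a hypothetical stand-in, and the point is the sequence, not the internals: constraints check first, the human decision point comes last.

```python
# Minimal composition of the five roles, in order. All internals are
# illustrative placeholders.
def govern(task: str) -> str:
    # Constraints load and check before anything else runs.
    assert "delete" not in task, "outside the defined corridor"
    return task

def route(task: str) -> tuple:
    # Classify and pick a pipeline (hardcoded here for the sketch).
    return ("report", task)

def synthesize(item: tuple) -> dict:
    # Raw input becomes a decision-ready spec, not a summary.
    kind, task = item
    return {"kind": kind, "rule": f"Summarize then score: {task}"}

def execute(spec: dict) -> dict:
    # Machine-speed work inside the corridor governance defined.
    return {"draft": spec["rule"], "score": 0.82}

def augment(result: dict):
    # The human decides at the irreversible step; they see a
    # recommendation (the score), not a blank slate.
    return result if result["score"] >= 0.8 else None

def run(task: str):
    return augment(execute(synthesize(route(govern(task)))))
```

Note that `govern` can halt the whole chain before any capability fires, which is the "constraints precede capabilities" property in executable form.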

Those names came from the work itself. No borrowed framework. No academic reference. Just the patterns that kept repeating across systems built to solve completely different problems.


Why This Architecture Is Rare

In March 2026, Google DeepMind published a cognitive taxonomy for measuring progress toward AGI. Ten cognitive abilities derived from decades of psychology, neuroscience, and cognitive science, built to evaluate whether AI systems had achieved general capability.

When I mapped my five roles against their framework, the structures aligned across all ten dimensions.

I hadn't seen the paper. My framework used none of the same language. The convergence wasn't designed. It was discovered.

What that means practically: the cognitive abilities that determine whether an AI system is genuinely capable are the same ones that determine whether the operational layer a human builds around AI will hold. Reasoning, memory, planning, metacognition. These aren't just what makes AI smart. They're what makes AI systems reliable. A workflow without memory repeats the same mistakes every session. A system without metacognition can't catch its own errors. A pipeline without planning collapses under load.

Most organizations are building fast. Few are building sound. The difference is architecture. Specifically: governance before execution, memory at every layer, routing that eliminates ambiguity, synthesis that produces rules, not summaries, and human judgment placed precisely where the stakes are highest.

That's what 130 patterns across 8 workspaces looks like when the same design mind applies the same principles to completely different problems.


130 Patterns. 10 Cognitive Abilities.

Every pattern I built, mapped against the cognitive taxonomy Google DeepMind designed to measure progress toward AGI.

[Interactive heatmap: 130 patterns built, 10/10 abilities covered, 8 independent workspaces. Rows are workspaces; columns are the ten abilities (PER, GEN, ATT, LRN, MEM, RSN, MTC, EXF, PSV, SOC); cell shading indicates pattern count, fewer to more.]

What the Map Reveals

The heatmap doesn't just show coverage. It shows where my work concentrates. And the concentration tells a story about the kind of AI problems I'm drawn to.

RSN
Reasoning: 19 patterns
This one showed up everywhere. Scoring logic, gap analysis, framework selection, bias prevention. Every system I build has a judgment layer. AI generates, but structured reasoning decides what ships. Mind you, that's not a philosophical preference. It's a design constraint. Without it, you get output without discernment.
GEN
Generation: 19 patterns
Coordinated multi-engine output, production pipelines, specification-driven asset creation. Not "AI writes things." Architectures where generation is governed, templated, and quality-controlled before it reaches anyone. Said differently: I don't build tools that produce. I build systems that curate what gets produced.
MEM
Memory: 18 patterns
99-file knowledge bases, persistent context specifications, conversation history, decision archives. AI systems without memory repeat the same mistakes every session. Every workspace I build has a memory architecture. A system that can't retrieve what it learned last week isn't a system. It's a tool you have to re-teach every morning.
EXF
Executive Functions: 14 patterns
Pipeline orchestration, parallel track coordination, multi-system automation. This is the coordination problem from the kitchen analogy. When you go from two hands to twelve, the skill that matters isn't the cooking anymore. It's making sure all those hands work together without knocking things over.
MTC
Metacognition: 14 patterns
Self-assessment before publish, confidence scoring, pipeline health monitoring, evaluator disagreement detection. Frankly, this is the one most organizations skip entirely. Systems that know what they don't know. That's the difference between AI that's useful and AI that's dangerous.

The pattern is clear. The densest coverage falls on the abilities that separate AI tools from AI systems: reasoning, orchestration, memory, and self-monitoring. These are the capabilities most organizations are missing. And they're the ones that define whether AI scales safely or just scales.

130 patterns. 8 workspaces. 10 cognitive abilities. Built independently, mapped retrospectively.
Taxonomy: Google DeepMind, "Measuring Progress Toward AGI: A Cognitive Taxonomy" (March 2026)


8 Workspaces. 130 Patterns.

None were built from a shared template. Each was built to solve a different problem. The architecture converged anyway.

Startup Operations 18 patterns

3,000+ pipeline contacts managed, 130+ candidates scored, investor intelligence pipeline, real-time strategic analysis during live executive decisions.

Company Culture Infrastructure 15 patterns

38+ hours of CEO transcripts processed into structured strategic intelligence, ticket-as-prompt system turning every work item into AI-executable instructions, SAFE and NTD communication frameworks making all organizational communication machine-parseable by design.

AI Content Business 21 patterns

5 AI engines orchestrated through a shared data layer, 10-stage production pipeline from research to published video, faceless brand architecture where AI voice replaces human presenter, rubric-driven judgment system producing scored analysis at near-zero marginal cost.

Portfolio Site 14 patterns

141-line system prompt functioning as a compressed, queryable version of a person, serverless AI backend where one API call is the entire product, observability pipeline capturing what questions humans choose to ask an AI.

Product Marketing 15 patterns

636-line context-window-optimized single file where every section carries its strategic purpose as a directive for future AI sessions, AI-generated visual assets shipping as production artifacts, no git history because the AI session is the version control.

Skills Assessment Tool 12 patterns

Dual AI evaluators that never share context to prevent anchoring bias, 7 skill domains powered by one engine through config-as-code prompts, anti-gaming controls protecting the integrity of AI evaluation, badge and credential pipeline where one assessment becomes a LinkedIn post, a downloadable SVG, and a structured data object simultaneously.

GTM AI Diagnostic 12 patterns

18-question maturity assessment producing a scored profile across 6 categories, closed-loop artifact system where the tool, three scorecards, a LinkedIn article, and an outreach note each make the others more credible, entire system shipped as a single atomic commit.

Career Operations 28 patterns

6 automation rules governing 5 interlocking trackers, 99-file knowledge base, 47 sources distilled into 6 operational intelligence briefs with declared authority hierarchies, universal input router handling 15+ signal types, 3-gate automated screening engine.


Experience the Architecture

These aren't demos. They're working systems built on the architecture above.

GTM AI Readiness Diagnostic

Organizations keep saying their GTM teams are AI-ready. They're not. In a future where vendors deliver outcomes rather than seat licenses, the gap between ready and not ready will determine who survives. Take 5 minutes and find out where you actually stand.

Interactive ~5 minutes

Trash Shield

25% of US homes have an under-mount pull-out cabinet for their garbage bin. The fundamental flaw in that design is that garbage falls into the cabinet space behind the bin. Trash Shield clicks in within 5 seconds and eliminates the problem permanently. From concept to provisional patent in 90 days using AI-native design.

Product Patent Pending

Verified Candidate Skills Assessment

Resumes say what people claim they can do. This proves it. Candidates demonstrate how they actually think through real problems across 7 skill domains and receive a verified proficiency rating on a 1-5 scale. Hiring managers get evidence, not assertions. Built on a dual-evaluator architecture in which one AI assesses and a separate AI scores, and the two never share context, eliminating the anchoring bias that plagues traditional screening.
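The dual-evaluator idea reduces to a strict information boundary between two stages. A hedged sketch, with deterministic stand-ins for both AI calls; the function names and scoring rules are hypothetical, not the tool's actual logic.

```python
# Illustrative dual-evaluator boundary: the assessor sees only the answer,
# the scorer sees only the assessment. Neither sees the other's context,
# so the score can't anchor on the raw answer. All logic is a stand-in.
def assess(answer: str) -> str:
    """Evaluator 1: qualitative assessment from the candidate's answer alone."""
    return "structured" if "because" in answer else "unsupported"

def score(assessment: str) -> int:
    """Evaluator 2: numeric score from the assessment alone, never the answer."""
    return 4 if assessment == "structured" else 2

def evaluate(answer: str) -> int:
    assessment = assess(answer)   # input: answer only
    return score(assessment)      # input: assessment only
```

The separation is enforced by the call signatures themselves: `score` has no parameter through which the raw answer could reach it.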

AI-Powered 7 skill domains 1-5 certification

Want to know more about how these were built?

Writing

Published

Thinking out loud about integrative design, AI infrastructure, and how organizations change when they stop treating AI as a feature.

Anthropic Hired a Philosopher to Write Claude's Soul
Tracing integrative thinking from Hegel through Roger Martin to Anthropic's constitutional AI — and what it means for how organizations design their relationship with AI.
Your Humanities Degree Is a Ticket to Tech
Why the skills that seem least technical — interpretation, synthesis, ethical reasoning — are exactly what AI-era organizations need most.
The Renaissance Generalist: Why Chief of Staff Roles Are Critical in the AI Age
The death and rebirth of the generalist. Why the people who thrive with AI won't be the specialists — they'll be the ones who see across boundaries.
From US Army to Sales & HR Leadership
Servant leadership, personal growth, and why great leaders learn who you are instead of recreating you in their image.

Coming Next

Longer pieces drawing on the case study and framework work.

The 5 Roles AI Plays When You Stop Treating It Like a Chatbot
A framework for designing AI as infrastructure — govern, route, synthesize, execute, augment.
I Audited 8 Repositories and Found 130 Patterns of AI-as-Infrastructure
What AI-native work actually looks like when you open the hood.
Integrative Thinking in the Age of AI
Why the organizations that thrive with AI won't be the ones that automate the most.

Want to discuss any of these ideas?