Research

Frameworks and methodologies developed from 15+ years of building and scaling data and AI capabilities in complex, regulated enterprises. These aren't theoretical models — they're codified patterns from operational experience at organisations like Westpac and Cochlear.

The Strategy-Execution Gap

Organisations spend millions on strategy development, then fail at execution. The standard explanations are incomplete. This paper presents a diagnostic methodology for identifying execution failure patterns in organisational liminal spaces — the gaps between strategy and delivery where accountability is ambiguous.

12 min read · Feb 2026

A Practitioner's Data Product Taxonomy

The term 'data product' has become meaninglessly broad. This taxonomy classifies data products by the organisational function they serve and the decision type they support — from operational datasets to strategic intelligence layers — giving teams a shared vocabulary for portfolio planning and investment decisions.

10 min read · Sep 2025

Defensive and Offensive Data Strategy

Organisations tend to over-index on either defensive data management (governance, compliance, quality) or offensive data exploitation (analytics, AI, monetisation) — rarely both. This framework provides a diagnostic for assessing your current balance and a sequencing model for building both capabilities without paralysing either.

11 min read · Dec 2025

The Pilot-to-Production Chasm

The overwhelming majority of AI models that 'work' in a pilot never reach production. The standard explanation is technical debt. The actual explanation is an execution gap that follows predictable organisational patterns. This paper maps the chasm between AI pilot and production deployment, identifies the failure layers that pilots are designed to avoid, and provides a diagnostic for organisations stuck in perpetual pilot mode.

11 min read · Feb 2026

The AI Governance Paradox

Organisations in regulated industries know how to govern. They've spent decades building control environments for risk, compliance, and operational integrity. When AI arrives, the instinct is to apply that same machinery. The result is governance that is technically defensible but practically paralysing. This paper examines the paradox at the heart of enterprise AI governance and proposes a tiered model that matches governance intensity to actual risk.

12 min read · Feb 2026

GenAI and the Data Foundation Reckoning

For a decade, organisations tolerated poor data foundations because previous analytics paradigms were forgiving enough to work around them. Generative AI is not forgiving. RAG on ungoverned data produces confident nonsense. Fine-tuning on inconsistent data amplifies the worst patterns. GenAI is the forcing function that finally makes the data foundation problem impossible to defer, and the organisations that treated Tier 1 data products as optional are discovering this the hard way.

11 min read · Feb 2026