Research
Frameworks and methodologies developed from 15+ years of building and scaling data and AI capabilities in complex, regulated enterprises. These aren't theoretical models — they're codified patterns from operational experience at organisations like Westpac and Cochlear.
The Strategy-Execution Gap
Organisations spend millions on strategy development, then fail at execution. The standard explanations are incomplete. This paper offers a diagnostic methodology for identifying the execution failure patterns that emerge in the liminal spaces between strategy and delivery.
A Practitioner's Data Product Taxonomy
The term 'data product' has become meaninglessly broad. This taxonomy classifies data products by the organisational function they serve and the decision type they support — from operational datasets to strategic intelligence layers — giving teams a shared vocabulary for portfolio planning and investment decisions.
Defensive and Offensive Data Strategy
Organisations tend to over-index on either defensive data management (governance, compliance, quality) or offensive data exploitation (analytics, AI, monetisation) — rarely both. This framework provides a diagnostic for assessing your current balance and a sequencing model for building both capabilities without paralysing either.
The Pilot-to-Production Chasm
The overwhelming majority of AI models that 'work' in a pilot never reach production. The standard explanation is technical debt. The actual explanation is an execution gap that follows predictable organisational patterns. This paper maps the chasm between AI pilot and production deployment, identifies the failure layers that pilots are designed to avoid, and provides a diagnostic for organisations stuck in perpetual pilot mode.
The AI Governance Paradox
Organisations in regulated industries know how to govern. They've spent decades building control environments for risk, compliance, and operational integrity. When AI arrives, the instinct is to apply that same machinery. The result is governance that is technically defensible but practically paralysing. This paper examines the paradox at the heart of enterprise AI governance and proposes a tiered model that matches governance intensity to actual risk.
GenAI and the Data Foundation Reckoning
For a decade, organisations tolerated poor data foundations because previous analytics paradigms were forgiving enough to work around them. Generative AI is not forgiving. RAG on ungoverned data produces confident nonsense. Fine-tuning on inconsistent data amplifies the worst patterns. GenAI is the forcing function that finally makes the data foundation problem impossible to defer, and the organisations that treated Tier 1 data products as optional are discovering this the hard way.