This talk describes recent progress on integrating compact representations of learnt shape, computational anatomy, learnt motion, the biomechanics of tissue deformation, and multi-scale analysis into diagnostic imaging and image-guided interventions. Our core technology, and our starting point, is establishing spatial correspondence through image registration. This work has inspired a new approach to the use and development of imaging technology. The conventional medical imaging paradigm is to acquire the best image that scanning technology can provide, given the constraints of patient workflow, and then, as a separate process, to interpret or analyse the image data for diagnosis or to guide an intervention. A more logical alternative is to optimise image data collection and information processing for the diagnostic state of the patient to be determined, or for the therapeutic procedure to be undertaken. Our approach to developing this paradigm is illustrated with applications in neuroscience and in the detection and treatment of cancers of the breast, lung, liver, colon and prostate.