This paper investigates the use of the amount and structure of talk as a basis for automatic classification of patient case discussions in multidisciplinary medical team meetings recorded in a real-world setting. We model patient case discussions as vocalisation graphs, building on research in interaction analysis and social psychology. These graphs are "content free" in that they encode only patterns of vocalisation and silence. Because the technique presented in this paper does not rely on automatic transcription, it is an attractive complement to more sophisticated speech processing methods as a means of indexing medical team meetings. We show that, despite the simplicity of the underlying representation, accurate classification performance (F1 = 0.98 for medical patient case discussions, and F1 = 0.97 for surgical case discussions) can be achieved with a simple k-nearest-neighbour classifier when vocalisations are represented at the level of individual speakers. We conclude by discussing possible applications of the method in health informatics for the storage and retrieval of multimedia medical meeting records.
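To make the approach concrete, the following is a minimal sketch (not the paper's actual pipeline) of classifying a discussion from content-free vocalisation patterns: a talk sequence is encoded as normalised transition frequencies between vocalisation states (speakers and silence), and a query is labelled by its nearest neighbour. The state labels, feature encoding, and toy training data are all illustrative assumptions.

```python
from collections import Counter
from itertools import product
import math

def vocalisation_features(events, states):
    """Encode a sequence of vocalisation events as normalised
    transition frequencies between states (speakers plus silence)."""
    counts = Counter(zip(events, events[1:]))
    total = max(sum(counts.values()), 1)
    return [counts[(a, b)] / total for a, b in product(states, states)]

def knn_classify(query, examples, k=1):
    """Label a feature vector by majority vote among its k nearest
    labelled examples, using Euclidean distance."""
    dists = sorted((math.dist(query, feats), label)
                   for feats, label in examples)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

STATES = ("A", "B", "-")  # two hypothetical speakers plus silence ("-")

# Toy labelled discussions: one dominated by a long monologue,
# one with rapid turn-taking (labels are purely illustrative).
train = [
    (vocalisation_features("AAAA-AAAA-BB", STATES), "medical"),
    (vocalisation_features("A-B-A-B-A-B-", STATES), "surgical"),
]
query = vocalisation_features("AAA-AAAA-B-A", STATES)
print(knn_classify(query, train))  # -> "medical" (closer to the monologue pattern)
```

Note that the representation never touches what is said: only who is vocalising, and when, enters the feature vector, which is what makes the method transcription-free.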