Caramazza, Chialant, Capasso, and Miceli (2000) describe two aphasic patients with selectively impaired processing of vowels and consonants, respectively. Neither impairment could be captured in terms of the sonority hierarchy or a feature-level analysis. Caramazza et al. claim that this dissociation demonstrates the separate representation of the categories of vowels and consonants in speech processing. We present two connectionist models of the processing of phonological representations; both spontaneously develop separable processing of vowels and consonants. Each model has two hidden layers and takes as input vowels and consonants represented in terms of their phonological distinctive features. In the first model, feature bundles are presented one at a time, and the hidden layers must combine their outputs to reproduce a unified copy of each feature bundle. In the second model, a “fine-coded” layer receives information about each feature bundle in isolation, while a “coarse-coded” layer receives information about each feature bundle in the context of the preceding and following feature bundles. Coarse coding facilitated the processing of vowels, and fine coding the processing of consonants. These models show that separable processing of vowels and consonants is an emergent effect of modular processors operating on feature-based representations. We argue that it is not necessary to postulate an independent level of representation for the consonant/vowel distinction, separate from phonological distinctive features.
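The architecture of the second model can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' implementation: the feature set, layer sizes, weight initialisation, and the `SplitCodingModel` class are all assumptions made here for concreteness, and the training procedure is omitted. The sketch shows only the split pathway: a fine-coded hidden layer that sees the current feature bundle in isolation, and a coarse-coded hidden layer that sees the bundle flanked by its predecessor and successor, with the two layers' outputs combined to reconstruct the current bundle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distinctive-feature vectors (illustrative only; not the
# feature set used in the paper), e.g. [voiced, consonantal, high, back, nasal].
PHONEMES = {
    "p": np.array([0, 1, 0, 0, 0], dtype=float),
    "a": np.array([1, 0, 0, 1, 0], dtype=float),
    "n": np.array([1, 1, 0, 0, 1], dtype=float),
}

N_FEAT = 5   # features per bundle (assumed)
N_HID = 8    # units per hidden layer (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SplitCodingModel:
    """Forward pass of the split fine/coarse architecture: the fine-coded
    layer receives the current feature bundle alone; the coarse-coded layer
    receives the previous, current, and next bundles concatenated; their
    outputs are combined to reconstruct the current bundle."""

    def __init__(self):
        self.W_fine = rng.normal(0, 0.1, (N_FEAT, N_HID))        # current bundle only
        self.W_coarse = rng.normal(0, 0.1, (3 * N_FEAT, N_HID))  # prev + current + next
        self.W_out = rng.normal(0, 0.1, (2 * N_HID, N_FEAT))     # combined -> output

    def forward(self, prev, cur, nxt):
        fine = sigmoid(cur @ self.W_fine)
        coarse = sigmoid(np.concatenate([prev, cur, nxt]) @ self.W_coarse)
        combined = np.concatenate([fine, coarse])
        return sigmoid(combined @ self.W_out)  # reconstructed feature bundle

model = SplitCodingModel()
p, a, n = PHONEMES["p"], PHONEMES["a"], PHONEMES["n"]
recon = model.forward(p, a, n)  # reconstruct /a/ in the context /p_n/
print(recon.shape)  # (5,)
```

Under this sketch, training the network to reproduce each bundle would let the coarse pathway exploit segmental context (useful for vowels, which are more predictable from their neighbours) while the fine pathway preserves segment-internal detail (useful for consonants).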