Improvements in visual hyperacuity are a key focus of research on perceptual learning. Of particular interest have been the specificity of hyperacuity learning to the particular features of the trained stimuli, as well as the disruption of learning that occurs in some cases when different stimulus features are trained together. The implications of these phenomena for the underlying learning mechanisms are still debated; however, there is a marked absence of computational models that explore them in a unified way. Here we implement a computational learning model based on reweighting and extend it to enable direct comparison, by means of simulations, with a variety of existing psychophysical data. We find that this very simple model can account for a diverse set of findings, such as the disruption of learning of one task by practice on a similar task, as well as the transfer of learning across both tasks and stimulus configurations under certain conditions. These simulations help explain existing results in the literature and provide important insights and predictions regarding the reliability of different hyperacuity tasks and stimuli. They also shed light on the model's limitations, for example in accounting for temporal aspects of training procedures or the dependence of learning on contextual stimuli, which future research will need to address.