The Use of Caching in Decoupled Multiprocessors with Shared Memory

T.J. Harris, N.P. Topham

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In the following we evaluate the costs and benefits of using a cache memory with a decoupled architecture supporting shared memory, in both the uniprocessor and multiprocessor cases. Firstly, we identify the performance bottleneck of such architectures, which we define as Loss of Decoupling costs. We show that in both uniprocessor and multiprocessor machines with high latency such costs can greatly affect performance. We then assess the ability of a cache to reduce loss of decoupling costs in both uniprocessors and multiprocessors. Through the use of graphical tools we provide an intuition for the behaviour of such decoupled machines. In the multiprocessor case we define the target model of shared memory and introduce various coherency schemes to implement it. Each coherency scheme is then evaluated experimentally. We show that hardware coherence schemes can improve the performance of such architectures, though the relationship between hit rate and performance is substantially d...
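As a rough illustration of the Loss of Decoupling idea described above (not taken from the paper), the following minimal Python sketch models an access/execute pair communicating through a FIFO queue: the access unit slips ahead issuing loads, and a synchronisation point exposes the outstanding memory latency. The event names, queue structure, and latency figure are hypothetical.

# Minimal sketch of decoupling and Loss-of-Decoupling stalls (illustrative only).
from collections import deque

MEM_LATENCY = 20          # assumed memory latency in cycles (hypothetical value)

def run(trace):
    """trace: list of ('load',) or ('lod',) events.
    A 'lod' event models a Loss-of-Decoupling point, where the access
    unit needs a value only the execute unit can produce, so the two
    units must synchronise and the memory latency becomes visible."""
    load_queue = deque()   # completion times of loads issued ahead by the access unit
    cycle, lod_stalls = 0, 0
    for (kind,) in trace:
        if kind == 'load':
            # access unit runs ahead: issue the load and keep going
            load_queue.append(cycle + MEM_LATENCY)
            cycle += 1
        else:  # 'lod'
            # synchronise: wait for every outstanding load to return
            if load_queue:
                ready = max(load_queue)
                lod_stalls += max(0, ready - cycle)
                cycle = max(cycle, ready)
                load_queue.clear()
            cycle += 1
    return cycle, lod_stalls

if __name__ == "__main__":
    # ten prefetched loads followed by one synchronisation point
    total, stalled = run([('load',)] * 10 + [('lod',)])
    print(f"cycles={total}, cycles lost to loss of decoupling={stalled}")

With no 'lod' events the latency of the ten loads is fully hidden; the single synchronisation point exposes most of it, which is the behaviour the abstract attributes to high-latency machines.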
Original language: English
Title of host publication: Proceedings of Int. Workshop on Large-Scale Shared Memory Systems
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 16
Publication status: Published - Apr 1994
