Distributed Shared Memory

-must decide placement
place memory (pages) close to relevant processes
-must decide migration
when to copy memory (pages) from remote to local
-must decide sharing rules
how concurrent accesses to shared memory are ordered and synchronized
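A minimal sketch of the per-page bookkeeping these three decisions imply; the struct and field names are hypothetical, not from any particular system:

    #include <stdint.h>

    /* Hypothetical per-page record of a DSM layer's placement,
     * migration, and sharing decisions. */
    enum sharing_policy { SP_EXCLUSIVE, SP_READ_SHARED, SP_WRITE_SHARED };

    struct dsm_page {
        uint64_t            vaddr;        /* page-aligned shared address */
        int                 home_node;    /* placement: node that manages the page */
        int                 current_node; /* migration: where the data lives right now */
        enum sharing_policy policy;       /* sharing rules for concurrent access */
    };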

Reminder: distributed file systems

Client
-sends requests to file service

Caching
-improves performance (as seen by clients) and scalability

Servers
-own and manage state (files)
-provide service (file access)

Each node…
-“owns” part of the state => memory
-provides service
memory reads/writes from any node
consistency protocols
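One common way to let any node name any page (an assumed encoding, not spelled out above) is to build global DSM addresses from a node ID plus a local offset:

    #include <stdint.h>

    /* Hypothetical global-address layout: top 16 bits name the home
     * node, the low 48 bits are an offset into that node's memory. */
    #define NODE_SHIFT 48

    static inline int      dsm_home(uint64_t gaddr)   { return (int)(gaddr >> NODE_SHIFT); }
    static inline uint64_t dsm_offset(uint64_t gaddr) { return gaddr & ((1ULL << NODE_SHIFT) - 1); }
    static inline uint64_t dsm_addr(int node, uint64_t off) { return ((uint64_t)node << NODE_SHIFT) | off; }

With this split, any access to a global address immediately identifies which node to contact.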

permits scaling beyond single-machine memory limits
– more shared memory at lower cost
– slower overall memory access
– commodity interconnect technologies support this (RDMA)

Hardware vs Software DSM
Hardware-supported
– relies on interconnect
– OS manages larger physical memory
– NICs translate remote memory accesses to messages
– NICs involved in all aspects of memory management; support atomics
Software-supported
– everything done by software
– OS or language runtime
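A sketch of the classic software-only trick: map the shared region inaccessible and let the page-fault (SIGSEGV) handler fetch data before retrying the access. fetch_remote_page() is a hypothetical messaging helper; a real system would also distinguish read from write faults:

    #include <signal.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096

    /* Hypothetical helper: pull the current copy from the page's home node. */
    extern void fetch_remote_page(void *page);

    static void dsm_fault(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)ctx;
        void *page = (void *)((uintptr_t)info->si_addr & ~(uintptr_t)(PAGE_SIZE - 1));
        fetch_remote_page(page);                           /* bring data local */
        mprotect(page, PAGE_SIZE, PROT_READ | PROT_WRITE); /* let the access retry */
    }

    void dsm_init(void *region, size_t len)
    {
        struct sigaction sa = {0};
        sa.sa_sigaction = dsm_fault;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);
        mprotect(region, len, PROT_NONE); /* first touch of any page traps */
    }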

Application access algorithms
-single reader / single writer (SRSW)
-multiple readers / single writer (MRSW)
-multiple readers / multiple writers (MRMW)

Performance considerations
DSM performance metric == access latency
Achieving low latency through… migration
-makes sense for SRSW, since only a single node accesses the data
-requires data movement
Replication (caching)
-more general
-requires consistency management
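The notes above reduce to a one-line policy; a toy sketch in C (detecting the access pattern is assumed to happen elsewhere):

    enum access_pattern { SRSW, MRSW, MRMW };
    enum strategy       { MIGRATE, REPLICATE };

    /* Migrate only when a single node reads and writes the data;
     * otherwise replicate and pay for consistency management. */
    enum strategy choose_strategy(enum access_pattern p)
    {
        return (p == SRSW) ? MIGRATE : REPLICATE;
    }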

DSM Design: Consistency management
DSM ~ shared memory in SMPs
in SMPs:
– write-invalidate
– write-update

DSM Design: Consistency management
Push invalidations when data is written to…
Pull modification info periodically…
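A sketch of the push variant under write-invalidate, assuming a hypothetical send_invalidate() primitive and a per-page bitmap (copyset) of caching nodes:

    #include <stdint.h>

    /* Hypothetical messaging primitive: tell `node` to drop its cached copy. */
    extern void send_invalidate(int node, uint64_t page);

    /* Eagerly invalidate every cached copy before a local write completes;
     * a write-update variant would push the new data instead. */
    void on_local_write(uint64_t page, uint64_t copyset, int self)
    {
        for (int node = 0; node < 64; node++)
            if ((copyset & (1ULL << node)) && node != self)
                send_invalidate(node, page);
    }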

if MRMW…
– need local caches for performance
– home node drives coherence
– all nodes responsible for part of distributed memory management

“Home” node
– keeps state: pages accessed, modifications, caching enabled/disabled, locked…
– tracks the current “owner” (the node with exclusive write access; may differ from the home node)
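A sketch of the per-page state a home node might keep; the field names are illustrative and mirror the list above:

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative directory entry maintained by a page's home node. */
    struct dsm_dir_entry {
        uint64_t page;         /* which page this entry describes */
        int      owner;        /* node currently holding write ownership */
        uint64_t copyset;      /* bitmap of nodes caching the page */
        uint64_t access_count; /* accesses observed (for policy decisions) */
        bool     dirty;        /* modified since last write-back? */
        bool     cacheable;    /* caching enabled/disabled */
        bool     locked;       /* entry locked during a coherence operation */
    };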

Consistency model == agreement between memory (state) and upper software layers

“memory behaves correctly if and only if software follows specific rules”