Server design

-persistent metadata m1, m2, ..., mn
-normal data structures + code
=>
create external data segments to back persistent data structures
-apps manage their persistence needs

Designer’s choice to use a single data segment or multiple data segments

RVM Primitives
Initialization
– initialize(options)
– map(region, options)
– unmap(region)

Body of server code
– begin_xact (tid, restore_mode)
– set_range(tid, addr, size)
– end_xact(tid, commit_mode)
– abort_xact(tid)

GC to reduce log space
– flush()
– truncate()

begin_xact(tid, mode);
	set_range(tid, base_addr, #bytes);
	write metadata m1
	write metadata m2
end_xact(tid, mode);
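
A minimal C sketch of this transaction pattern, using no-op stand-in stubs so it compiles; the real RVM library's names and signatures differ.

#include <stdio.h>
#include <stddef.h>

/* Stand-in stubs for the primitives above (not the real RVM API). */
typedef int tid_t;
static tid_t begin_xact(int restore_mode)          { (void)restore_mode; return 1; }
static void  set_range(tid_t t, void *a, size_t n) { (void)t; (void)a; (void)n; }  /* LRVM records the old values here */
static void  end_xact(tid_t t, int commit)         { (void)t; (void)commit; }      /* LRVM forces the redo log to disk */

/* Persistent metadata, assumed to be mapped in from an external data segment. */
struct metadata { int m1; int m2; };
static struct metadata meta;

int main(void)
{
	tid_t tid = begin_xact(0 /* restore mode */);
	set_range(tid, &meta, sizeof meta);  /* declare the range this transaction may modify */
	meta.m1 = 42;                        /* in-place writes to persistent VM */
	meta.m2 = 43;
	end_xact(tid, 1 /* commit */);       /* changes survive a crash once the log reaches disk */
	printf("m1=%d m2=%d\n", meta.m1, meta.m2);
	return 0;
}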

No action by LRVM for the writes themselves
LRVM creates the redo log in memory

Redo log
-the changed values of the different regions between begin_xact and end_xact

Commit
-forces the in-memory redo log to disk; the window of vulnerability lasts until the force completes

Log truncation
-read the redo log and apply the changes to the data segments

Log based striping

Client memory holds the log segment; log fragments plus a parity fragment go over the LAN to the storage servers (see the parity sketch below)

Stripe group (e.g., one group of servers for segments x, y, z and another for L, M, N)
-a subset of the servers forms a stripe group
-parallel client activities
-increased availability
-efficient log cleaning
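
A self-contained C sketch of the log-fragment parity idea; the fragment size and stripe width below are made up for illustration.

#include <stdio.h>
#include <string.h>

#define FRAG_SIZE  8   /* bytes per fragment (illustrative) */
#define DATA_FRAGS 3   /* data fragments per stripe */

/* XOR the data fragments to get the parity fragment; any single lost
 * fragment can then be rebuilt by XOR-ing the survivors. */
static void compute_parity(unsigned char frags[DATA_FRAGS][FRAG_SIZE],
                           unsigned char parity[FRAG_SIZE])
{
	memset(parity, 0, FRAG_SIZE);
	for (int f = 0; f < DATA_FRAGS; f++)
		for (int i = 0; i < FRAG_SIZE; i++)
			parity[i] ^= frags[f][i];
}

int main(void)
{
	unsigned char frags[DATA_FRAGS][FRAG_SIZE];
	unsigned char parity[FRAG_SIZE];

	/* Pretend this is one in-memory log segment chopped into fragments. */
	memset(frags, 0, sizeof frags);
	strcpy((char *)frags[0], "mods_x");
	strcpy((char *)frags[1], "mods_y");
	strcpy((char *)frags[2], "mods_z");

	compute_parity(frags, parity);  /* fragments + parity go to the stripe group */
	printf("parity[0]=0x%02x\n", parity[0]);
	return 0;
}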

Cache coherence
-single writer, multiple readers
-file block is the unit of coherence
-write: client requests a token and receives it; the manager revokes tokens held by other clients (see the token sketch below)
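
A toy C sketch of single-writer/multiple-reader token management at a manager; this only illustrates the revocation idea (client count and state names are invented), not xFS's actual protocol.

#include <stdio.h>

#define NCLIENTS 4

/* Per-file-block token state kept at the manager. */
enum token { NONE, READ_TOKEN, WRITE_TOKEN };
static enum token tokens[NCLIENTS];

/* Write request: revoke every other client's token (single writer),
 * then hand the write token to the requester. */
static void grant_write(int requester)
{
	for (int c = 0; c < NCLIENTS; c++)
		if (c != requester)
			tokens[c] = NONE;            /* manager revokes */
	tokens[requester] = WRITE_TOKEN;     /* requester receives the token */
}

/* Read request: only an existing writer needs to be downgraded. */
static void grant_read(int requester)
{
	for (int c = 0; c < NCLIENTS; c++)
		if (tokens[c] == WRITE_TOKEN)
			tokens[c] = READ_TOKEN;      /* multiple readers may coexist */
	tokens[requester] = READ_TOKEN;
}

int main(void)
{
	grant_read(0);
	grant_read(1);
	grant_write(2);                      /* revokes clients 0 and 1 */
	for (int c = 0; c < NCLIENTS; c++)
		printf("client %d: %d\n", c, tokens[c]);
	return 0;
}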

Log cleaning
-log segments evolve as writes to file blocks accumulate
– the blocks may belong to different files

Unix File System
{filename, offset} -> i-node -> data blocks on disk

Client node actions
filename -> manager map (mmap) -> metadata manager node
Manager node actions
filename -> file directory -> i-number -> i-map -> i-node address (a log segment ID) -> stripe group map -> storage servers -> i-node -> data block addresses (log segment IDs) -> stripe group map -> storage servers -> data blocks

LRVM
– persistent memory layer in support of system services
Rio Vista
– performance-conscious design of persistent memory
Quicksilver
– making recovery a first-class citizen in OS design

Persistence
– needed by many OS subsystems
– make virtual memory persistent
– easy for subsystem designers to use, and performant
– use persistent logs to record changes to VM
– the log of changes replaces making all of VM persistent

Implementation

Write(x)
DSM + OS cooperate: on the first write to x, create a twin of the page; the original stays writable
At release: diff the modified page X’ against the twin, run-length encode the diff, then write-protect the page again
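
A self-contained C sketch of the twin-and-diff idea on a toy page; the run-length encoding is simplified to (offset, length) runs of changed bytes.

#include <stdio.h>
#include <string.h>

#define PAGE 16  /* toy page size */

/* On the first write the DSM layer makes a twin (copy) of the page and leaves
 * the original writable; at release it diffs the two and keeps only the
 * changed runs. */
static int make_diff(const unsigned char *twin, const unsigned char *page,
                     int runs[][2])  /* each run is (offset, length) */
{
	int nruns = 0, i = 0;
	while (i < PAGE) {
		if (twin[i] != page[i]) {
			int start = i;
			while (i < PAGE && twin[i] != page[i])
				i++;
			runs[nruns][0] = start;
			runs[nruns][1] = i - start;
			nruns++;
		} else {
			i++;
		}
	}
	return nruns;
}

int main(void)
{
	unsigned char page[PAGE] = "aaaaaaaaaaaaaaa";
	unsigned char twin[PAGE];
	int runs[PAGE][2];

	memcpy(twin, page, PAGE);  /* twin created on the first write */
	page[3] = 'X';             /* writes inside the critical section */
	page[4] = 'Y';
	page[9] = 'Z';

	int n = make_diff(twin, page, runs);  /* computed at release time */
	for (int r = 0; r < n; r++)
		printf("run: offset=%d len=%d\n", runs[r][0], runs[r][1]);
	return 0;
}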

Non-page-based DSM
shared reads/writes handled by DSM software
– library-based: annotate shared variables; coherence actions inserted at the point of access by the API calls / language runtime
– structured DSM: API for structured data; coherence actions taken on the API calls into the DSM library

First NFS
-Sun, 1985

clients -> LAN -> centralized servers (e.g., a server for student files), each with a file cache

DFS
-no central server
-each file distributed across several nodes
-DFS implemented across all the disks in the network

Preliminaries: Striping a file to multiple disks
-increase I/O bandwidth by striping to parallel disks
-Failure protection by ECC

Log Structured File System
– buffer changes to multiple files in one contiguous log segment data structure
– the log segment is written to disk once it fills up or periodically; this solves the small-write problem
– e.g., mods to x and mods to y are buffered, then the log segment is written to disk

Zebra File System (UC Berkeley)
-combines LFS + RAID
-uses commodity hardware
-stripes by log segment across multiple nodes’ disks, in software

xFS – a DFS
-log-based striping, cooperative caching, dynamic management of data, subsetting of storage servers (stripe groups), distributed log cleaning

Traditional NFS with centralized server(s)
– memory contents
* metadata, file cache, client caching directory

Software DSM

Address space partitioned; address equivalence across nodes
distributed ownership of pages
contact the owner of a page to get the current copy

application -> global virtual memory abstraction -> DSM software implementation -> local physical memories

LRC with multi-writer coherence protocol
P1: lock(L);
	pages x, y, z modified in the critical section => diffs Xd, Yd, Zd
	unlock(L);
P3: lock(L); access to x => fetch diff Xd at the point of access; unlock(L);

Implementation
write(x): a twin of the page is created; the original stays writable
at release: diff against the twin, run-length encoded (see the twin/diff sketch earlier)

Systems from Components

specify -> code -> optimize -> deploy
IOA, OCaml, NuPrl

digging deeper: from spec to implementation

Spring Operating System
-> the Spring system; its ideas carried over into Java RMI and EJB

How to innovate in the OS space?
-brand new OS? or better implementation of a known OS?
Marketplace needs
-large complex server software
=> take the “Intel inside” approach… in this case “Unix inside” for box makers like Sun Microsystems

Spring Approach
-strong interfaces; open, flexible, and extensible
-connect to other machines; microkernel (Nucleus: threads + IPC)
-everything else outside the kernel

Nucleus – “microkernel” of Spring
Domain, door table, door IDs, nucleus, door, target domain
Object invocation across the network
-client domain and server domain on different nodes; proxy A and proxy B, together with the two nuclei, extend the door call across the network

client domain -> door -> nucleus -> front object -> ACL check -> underlying object (see the door-table sketch below)
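
A toy, single-address-space C sketch of a door table and a door call mediated by a "nucleus"; the names and structures are illustrative, not Spring's actual interfaces.

#include <stdio.h>
#include <stddef.h>

/* A door maps a door id to an entry point in a target domain. */
typedef void (*entry_fn)(int arg);
struct door { entry_fn entry; const char *target_domain; };

static void file_server_open(int arg) { printf("server domain: open(%d)\n", arg); }

static struct door door_table[4] = {
	[0] = { file_server_open, "file-server" },
};

/* "Nucleus": validate the door id, then transfer control to the target domain. */
static void door_call(int door_id, int arg)
{
	if (door_id < 0 || door_id >= 4 || door_table[door_id].entry == NULL) {
		printf("nucleus: invalid door\n");
		return;
	}
	printf("nucleus: upcall into %s\n", door_table[door_id].target_domain);
	door_table[door_id].entry(arg);  /* control transfer into the server domain */
}

int main(void)
{
	door_call(0, 42);  /* client domain invokes door 0 */
	door_call(3, 7);   /* no door installed: rejected by the nucleus */
	return 0;
}
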

Sources of overhead in RPC

marshaling
data copying
control transfer
protocol processing

Three copies
-client stub
-kernel buffer
-DMA to the controller

Reducing copies?
-marshal into the kernel buffer directly
-shared descriptors between the client stub and the kernel (see the sketch below)
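
A C sketch of the shared-descriptor idea: the stub records (address, length) pairs and a simulated kernel gathers them into its buffer with a single copy; names are invented for illustration.

#include <stdio.h>
#include <string.h>

/* A shared descriptor: instead of copying arguments into a contiguous message
 * in the stub, record where each argument lives and how long it is. */
struct desc { const void *start; size_t len; };

/* Simulated kernel side: walk the descriptors and gather the data
 * straight into the kernel buffer (one copy instead of two). */
static size_t kernel_gather(const struct desc *d, int n, char *kbuf, size_t cap)
{
	size_t off = 0;
	for (int i = 0; i < n; i++) {
		if (off + d[i].len > cap)
			break;
		memcpy(kbuf + off, d[i].start, d[i].len);
		off += d[i].len;
	}
	return off;
}

int main(void)
{
	int x = 7;
	double y = 3.14;
	struct desc args[] = { { &x, sizeof x }, { &y, sizeof y } };
	char kbuf[64];

	size_t n = kernel_gather(args, 2, kbuf, sizeof kbuf);
	printf("marshaled %zu bytes with one copy\n", n);
	return 0;
}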

Routing on the internet
Application
transport
network
link
physical

Active networks idea: the application hands its payload plus QoS hints to an augmented protocol stack (protocol stack++), which emits packets of the form IP-hdr | code | payload; routers execute the carried code to pick the destination(s).
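
A toy C struct suggesting the capsule layout sketched above; the field names are illustrative.

#include <stdio.h>

/* Toy "capsule": an IP-like header extended with code carried alongside the
 * payload, which a router could execute to pick the next hop(s). */
struct capsule {
	unsigned int dst;       /* nominal destination        */
	unsigned int qos;       /* QoS tag from the app       */
	const char  *code;      /* routing code in the packet */
	const char  *payload;   /* application payload        */
};

int main(void)
{
	struct capsule c = { 0x0a000001, 2, "duplicate_to_group()", "hello" };
	printf("dst=%x qos=%u code=%s payload=%s\n", c.dst, c.qos, c.code, c.payload);
	return 0;
}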

Potential Apps
protocol independent multicast
reliable multicast
congestion notification
private IP (PIP)
anycasting

Distributed system definition

N1, N2, … Nn
LAN/WAN
fiber, cable, satellite
no physical shared memory between nodes

Even a cluster is a distributed system

Belief!
-processes are sequential
-> events within a process are totally ordered
h->i, f->g, d->e, …
-a send happens before the corresponding receive
-> a->b, e->f, …

Lamport’s logical clock
each node knows
* its own events
* its communication events
Lamport’s logical clock conditions
* monotonic increase of a node’s own event times: Ci(a) < Ci(b) if a precedes b on node i
* Ci(a) < Cj(d) for a send event a and its corresponding receive event d
total order: Lamport’s logical clocks plus PID to break ties (see the sketch below)
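
A minimal C sketch of the clock rules and the PID tie-break.

#include <stdio.h>

/* Lamport logical clock for one node (illustrative). */
struct node { int pid; int clock; };

/* Local (or send) event: monotonically increase the node's own time. */
static int local_event(struct node *n) { return ++n->clock; }

/* Receive event: jump past the sender's timestamp. */
static int receive_event(struct node *n, int ts)
{
	n->clock = (n->clock > ts ? n->clock : ts) + 1;
	return n->clock;
}

/* Total order: compare timestamps, break ties with the PID. */
static int before(int ta, int pa, int tb, int pb)
{
	return ta < tb || (ta == tb && pa < pb);
}

int main(void)
{
	struct node a = { 1, 0 }, b = { 2, 0 };
	int send_ts = local_event(&a);            /* a sends at time 1    */
	int recv_ts = receive_event(&b, send_ts); /* b receives at time 2 */
	printf("send=%d recv=%d send-before-recv=%d\n",
	       send_ts, recv_ts, before(send_ts, a.pid, recv_ts, b.pid));
	return 0;
}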

Latency
Elapsed time
Throughput
Events per unit time
Bandwidth: a throughput measure
RPC performance
Hardware overhead
Software overhead
Focus of this lesson
How to reduce software overhead

Components of RPC Latency
1. Client call
2. Controller latency
3. Time on wire
4. Interrupt handling
5. Server setup to execute the call

MPI’s interface

MPI_Init(&argc, &argv);
MPI_Barrier(MPI_COMM_WORLD);
MPI_Finalize();

Optimize counter barrier (MPI)

#include <stdlib.h>
#include <stdio.h>
#include <mpi.h>
#include "gtmpi.h"

static MPI_Status* status_array;
static int P;

void gtmpi_init(int num_threads){
	P = num_threads;
	status_array = (MPI_Status*) malloc((P - 1)* sizeof(MPI_Status));
}

void gtmpi_barrier(){
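	/* Naive barrier over MPI: every process sends a zero-byte message to every
	 * other process, then waits to receive one from each of them
	 * (O(P^2) messages per barrier episode). */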
	int vpid, i;

	MPI_Comm_rank(MPI_COMM_WORLD, &vpid);

	for(i = 0; i < vpid; i++)
		MPI_Send(NULL, 0, MPI_INT, i, 1, MPI_COMM_WORLD);
	for(i = vpid + 1; i < P; i++)
		MPI_Send(NULL, 0, MPI_INT, i, 1, MPI_COMM_WORLD);

	for(i = 0; i < vpid; i++)
		MPI_Recv(NULL, 0, MPI_INT, i, 1, MPI_COMM_WORLD, &status_array[i]);
	for(i = vpid + 1; i < P; i++)
		MPI_Recv(NULL, 0, MPI_INT, i, 1, MPI_COMM_WORLD, &status_array[i-1]);	
}
void gtmpi_finalize(){
	if(status_array != NULL){
		free(status_array);
	}
}

Optimize tree barrier (OpenMP)

#include <stdlib.h>
#include <stdio.h>
#include <omp.h>
#include "gtmp.h"

/*
From the MCS Paper: A sense-reversing centralized barrier

shared count : integer := P
shared sense : Boolean := true
processor private local_sense : Boolean := true

local_sense := not local_sense // each processor toggles its own sense
if fetch_and_decrement(&count) = 1
	count := P
	sense := local_sense // last processor toggles the global sense
else
	repeat until sense = local_sense
*/
typedef struct _node_t{
	int k;
	int count;
	int locksense;
	struct _node_t* parent;
} node_t;

static int num_leaves;
static node_t* nodes;

void gtmp_barrier_aux(node_t* node, int sense);

node_t* _gtmp_get_node(int i){
	return &nodes[i];
}

void gtmp_init(int num_threads){
	int i, v, num_nodes;
	node_t* curnode;

	/*Setting constants */
	v = 1;
	while( v < num_threads)
		v *= 2;

	num_nodes = v - 1;
	num_leaves = v/2;

	/* Setting up the tree */
	nodes = (node_t*) malloc(num_nodes * sizeof(node_t));

	for(i = 0; i < num_nodes; i++){
		curnode = _gtmp_get_node(i);
		curnode->k = i < num_threads - 1 ? 2 : 1;
		curnode->count = curnode->k;
		curnode->locksense = 0;
		curnode->parent = _gtmp_get_node((i-1)/2);
	}

	curnode = _gtmp_get_node(0);
	curnode->parent = NULL;
}

void gtmp_barrier(){
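	/* Each thread maps to a leaf (threads share a leaf when there are more
	 * threads than leaves) and signals its arrival there; the last arrival at
	 * a node recurses to its parent, and on the way back each node flips its
	 * locksense, releasing the threads spinning on it. */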
	node_t* mynode;
	int sense;

	mynode = _gtmp_get_node(num_leaves - 1 + (omp_get_thread_num() % num_leaves));
	sense = !mynode->locksense;
	gtmp_barrier_aux(mynode, sense);		
}

void gtmp_barrier_aux(node_t* node, int sense){
	int test;

#pragma omp critical
{
	test = node->count;
	node->count--;
}

	if( 1 == test ){
		if(node->parent != NULL)
			gtmp_barrier_aux(node->parent, sense);
		node->count = node->k;
		node->locksense = !node->locksense;
	}
	while(node->locksense != sense);
}
void gtmp_finalize(){
	free(nodes);
}

Shared Memory Multiprocessor

OS for parallel machines
memory + interconnection network (ICN); challenges:
– NUMA effects
– deep memory hierarchy
– false sharing

principles
cache-conscious decisions
limit shared system data structures
keep memory accesses local
CPU – local memory – ICN – remote memory

CPU -> VPN -> TLB lookup -> miss -> page table lookup -> miss -> locate file on disk -> I/O -> page frame allocated -> (VPN, PFN) -> page table update -> TLB update -> page fault service complete

Parallel OS + page fault service
Easy scenario
-multiprocess workload
N1: T1 … Nn: Tn
* threads independent
* page tables distinct
* no serialization
Hard scenario
-multithreaded workload
one process, threads T1, T2, T3, T4, shared address space
T1, T3 on N1; T2, T4 on N2
* address space shared
* page table shared
* shared entries in the processors’ TLBs

Recipe for scalable structure in a parallel OS
for every subsystem
-determine the functional needs of that service
-to ensure concurrent execution of the service
* minimize shared data structures
less sharing -> more scalable
– where possible, replicate / partition system data structures
-> less locking
-> more concurrency

Tornado’s secret sauce: the clustered object
object reference: illusion of a single object; under the covers, multiple representations (see the sketch below)
Degree of clustering
– choice of the implementor of the service
* e.g., singleton rep, or one rep per core
* PPC (protected procedure call) for consistency
TLB miss -> Process object -> Region -> FCM -> COR / DRAM manager -> page frame
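
A toy C sketch of the clustered-object idea, using a per-CPU counter as the replicated representation; the structure and names are invented for illustration.

#include <stdio.h>

#define NCPUS 4

/* Callers hold one object reference, but under the cover the state is
 * replicated per CPU; operations that need the true total combine the reps.
 * The degree of clustering (one rep, one per CPU, one per group) is the
 * implementor's choice. */
struct counter_rep   { long value; };
struct clustered_ctr { struct counter_rep rep[NCPUS]; };

static void inc(struct clustered_ctr *c, int cpu) { c->rep[cpu].value++; }

static long total(const struct clustered_ctr *c)
{
	long sum = 0;
	for (int i = 0; i < NCPUS; i++)
		sum += c->rep[i].value;  /* combine the per-CPU reps */
	return sum;
}

int main(void)
{
	static struct clustered_ctr c;                   /* zero-initialized */
	inc(&c, 0); inc(&c, 1); inc(&c, 1); inc(&c, 3);  /* local, uncontended updates */
	printf("total = %ld\n", total(&c));
	return 0;
}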