Lecture 01 Introduction
Agenda
- Try to get people to the front?
- Introduce self.
- Write name on board, pronounce name.
- Today
- Break the ice.
- Lay out the structure for the course, including the boring stuff.
- Jump right in and start discussing MapReduce.
Intro
Assessment Exercise
Organization
Do [organization stuff](01-org).
Main Topics
- The calendar will flow through these topics (we'll visit these ideas
  several times and in various orders, but this is the intended order in
  which the papers focus on them):
- Messaging, remote interaction (RPC)
- Fault-tolerance, replication, and consensus (Raft)
- Primary-backup replication (GFS)
- Fault-tolerant large-scale compute (MapReduce, Spark)
- Consistency/consistency models (Bayou, Dynamo)
- Real-world consistency and scaling (Scaling Memcached at Facebook)
- Transactions (Thor, Spanner, Argus)
- Byzantine fault-tolerance, P2P (PBFT, Bitcoin)
- Other possible topics: verifying distributed systems (Verdi)
Discussion
Labs
Lab submission is weird; walk through that.
focus: fault tolerance and consistency -- central to distrib sys
- lab 1: MapReduce
- labs 2 through 4: storage servers
- progressively more sophisticated (tolerate more kinds of faults)
- progressively harder too!
- patterned after real systems, e.g. MongoDB
- end up with core of a real-world design for 1000s of servers
what you'll learn from the labs
- easy to listen to lecture / read paper and think you understand
- building forces you to really understand
- you'll have to do some design yourself
- we supply skeleton, requirements, and tests
- you'll have substantial scope to solve problems your own way
- you'll get experience debugging distributed systems
- tricky due to concurrency, unreliable messages
we've tried to ensure that the hard problems have to do w/ distrib sys
- not e.g. fighting against language, libraries, etc.
- thus Go (type-safe, garbage collected, slick RPC library)
- thus fairly simple services (mapreduce, key/value store)
grades depend on how many test cases you pass
- we give you the tests, so you know whether you'll do well
- careful: if it usually passes, but occasionally fails,
chances are it will fail when we run it
Lab 1: MapReduce
- framework for parallel programming on 1000s of computers
- help you get up to speed on Go and distributed programming
- first exposure to some fault tolerance
- motivation for better fault tolerance in later labs
- motivating app for many papers
- popular distributed programming framework
- with many intellectual children
MapReduce computational model
- programmer defines Map and Reduce functions
- input is key/value pairs, divided into splits
- perhaps lots of files, k/v is filename/content
- Where do the k/v pairs come from?
- Usually massive shared FS (GFS, see FDS lecture).
- MR needs to know how to parse the files to convert them into k/v pairs.
// Apply a function to each input key/value pair; each application produces a
// list of key/value pairs, possibly with different types than the input.
map :: (k1, v1) -> [(k2, v2)]
// For all values v2 emitted by map that share a common key k2, apply a
// function that 'merges' them into a list of v2s.
reduce :: (k2, [v2]) -> [v2]
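To make those signatures concrete, here is a minimal Go rendering of them;
the names (KeyValue, MapF, ReduceF) are illustrative, not the lab's actual API.

package mr

// KeyValue is one (k, v) pair flowing between Map and Reduce.
type KeyValue struct {
    Key   string
    Value string
}

// MapF takes one input split (its name and its contents) and emits a list of
// intermediate key/value pairs.
type MapF func(name string, contents string) []KeyValue

// ReduceF takes one intermediate key plus every value emitted for that key,
// and returns the merged output values for that key.
type ReduceF func(key string, values []string) []string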
Distributed grep
map :: (linenum, string) -> [(linenum, string)]
map (l, s) = if s contains "search-term" then [(l, s)] else []
reduce :: (linenum, [string]) -> [string]
reduce (l, ss) = ["Match on line " ++ l ++ ": " ++ head ss]
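The same grep, sketched in Go against the types above (inputs are keyed by
line number, as in the pseudocode); the hard-coded search term is only for
illustration.

package mr

import "strings"

// grepMap emits (line number, line) for every line containing the search
// term; grepReduce turns each matching line into one output string.
func grepMap(lineNum string, line string) []KeyValue {
    if strings.Contains(line, "search-term") {
        return []KeyValue{{Key: lineNum, Value: line}}
    }
    return nil // no match: emit nothing
}

func grepReduce(lineNum string, lines []string) []string {
    return []string{"Match on line " + lineNum + ": " + lines[0]}
}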
Sum values for all matching keys:
Input Map -> a,1 b,7 c,9
Input Map ->     b,2
Input Map -> a,3     c,7
              |   |   |
              |   |   --> Reduce -> c,16
              |   ------> Reduce -> b,9
              ----------> Reduce -> a,4
Map(k, v)
  split v into words
  for each word w
    emit(w, "1")
Reduce(k, v)
  emit(len(v))
- What does MR framework do for word count?
- [master, input files, map workers, map output, reduce workers, output files]
  input files:
    f1: a b
    f2: b c
  send "f1" to map worker 1
    Map("f1", "a b") -> <a 1> <b 1>
  send "f2" to map worker 2
    Map("f2", "b c") -> <b 1> <c 1>
  framework waits for Map jobs to finish
  workers sort Map output by key
  framework tells each reduce worker which keys to reduce
    worker 1: a
    worker 2: b
    worker 2: c
  each reduce worker pulls the Map output it needs from the Map workers
    worker 1 pulls "a" Map output from every Map worker
  each reduce worker calls Reduce once for each of its keys
    worker 1: Reduce("a", [1])    -> 1
    worker 2: Reduce("b", [1, 1]) -> 2
              Reduce("c", [1])    -> 1
Why is the MR framework convenient?
- programmer only needs to think about the core work, the Map and Reduce
  functions, and does not have to worry about network communication,
  failure, etc.
- the grouping by key between Map and Reduce fits
some applications well (e.g., word count), since
it brings together data needed by the Reduce.
- but some applications don't fit well, because MR only allows this one
  pattern of communication between the different parts of the application,
  e.g. word count where the output must be sorted by frequency.
Why might MR have good performance?
- Map and Reduce functions run in parallel on different workers
- Nx workers -> divide run-time by N
- But rarely quite that good:
- move map output to reduce workers
- stragglers
- read/write network file system
What about failures?
- People use MR with 1000s of workers and vast inputs
- Suppose each worker only crashes once per year
- So a big MR job is very likely to suffer worker failures
- Other things can go wrong:
- Worker may be slow
- Worker CPU may compute incorrectly
- Master may crash
- Parts of the network may fail, lose packets, etc.
- Map or Reduce or framework may have bugs in software
Tools for dealing with failure?
- retry -- if worker fails, run its work on another worker
- replicate -- run each Map and Reduce on two workers
- replace -- for long-term health
- MapReduce uses all of these
Puzzles for retry
- how do we know when to retry?
- can we detect when Map or Reduce worker is broken?
- can we detect incorrect worker output?
- can we distinguish a worker that has failed from one that is up but behind a lossy network?
- why is retry correct?
- what if Map produces some output, then crashes?
- will we get duplicate output?
- what if we end up with two of the same Map running?
- in general, calling a function twice is not the same as calling it once
- why is it OK for Map and Reduce?
Helpful assumptions
- One must make assumptions, otherwise too hard
- No bugs in software
- No incorrect computation: a worker either produces correct output
  or nothing -- i.e., we assume workers are fail-stop.
- Master doesn't crash
- Map and Reduce are pure functions on their arguments
  - they don't secretly read/write files, talk to each other,
    send/receive network messages, etc.
lab 1 has four parts:
- Part I: do the I/O for Map and Reduce
- Part II: just Map() and Reduce() for word count
- Part III: we give you most of a distributed multi-server framework;
  you fill in the master code that hands out the work
  to a set of worker threads.
- Part IV: make the master cope with crashed workers by re-trying.
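Here's a hedged sketch of the scheduling idea behind Parts III and IV: hand
each task to an idle worker, and if the worker fails, put the task back so
another worker retries it. The names (schedule, runOnWorker, idleWorkers) are
made up for illustration; the lab's real master talks to workers over RPC and
its interface differs.

package mr

import "sync"

// schedule hands each task to an idle worker and retries tasks whose worker
// fails. idleWorkers should be buffered (capacity >= number of workers) so
// returning a finished worker to it never blocks. runOnWorker reports whether
// the task completed successfully.
func schedule(tasks []int, idleWorkers chan string, runOnWorker func(worker string, task int) bool) {
    pending := make(chan int, len(tasks)) // tasks not yet completed
    for _, t := range tasks {
        pending <- t
    }
    var wg sync.WaitGroup
    wg.Add(len(tasks))
    go func() {
        for task := range pending {
            worker := <-idleWorkers // wait for a free worker
            go func(worker string, task int) {
                if runOnWorker(worker, task) {
                    wg.Done()             // task completed
                    idleWorkers <- worker // worker is free again
                } else {
                    pending <- task // worker failed: retry the task elsewhere
                }
            }(worker, task)
        }
    }()
    wg.Wait()      // every task has completed on some worker
    close(pending) // let the dispatching goroutine exit
}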
Part II: main/wc.go
- stubs for Map and Reduce
- you fill them out to implement word count
- Map argument is a string, a big chunk of the input file
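One possible shape for the filled-in functions, written against the sketch
signatures used earlier in these notes; the lab skeleton's exact names and
types may differ, so treat this as a sketch rather than the solution.

package mr

import (
    "strconv"
    "strings"
    "unicode"
)

// wcMap splits its chunk of the input file into words and emits ("word", "1")
// once per occurrence.
func wcMap(document string, contents string) []KeyValue {
    words := strings.FieldsFunc(contents, func(r rune) bool {
        return !unicode.IsLetter(r) // split on anything that isn't a letter
    })
    kvs := []KeyValue{}
    for _, w := range words {
        kvs = append(kvs, KeyValue{Key: w, Value: "1"})
    }
    return kvs
}

// wcReduce receives every "1" emitted for one word and returns the count.
func wcReduce(word string, counts []string) []string {
    return []string{strconv.Itoa(len(counts))}
}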
demo of solution to Part II
./wc master kjv12.txt sequential
more mrtmp.kjv12.txt-1-2
more mrtmp.kjv12.txt