Improve instructions in README.md, remove error msg in scripts.
qianl15 committed May 24, 2018
1 parent fb9f774 commit f7e31c0
Showing 3 changed files with 60 additions and 69 deletions.
47 changes: 39 additions & 8 deletions README.md
@@ -26,14 +26,12 @@ one of the memcached worker threads.

### How do I use memtier_skewsyn benchmark?

1. Clone this repo and switch to the skewSynthetic branch.
1. Clone this repo to your target directory ${MEMTIER_SKEWSYN_DIR}.
The default branch is `skewSynthetic`.

```
MEMTIER_SKEWSYN_DIR=${HOME}/memtier_skewsyn
git clone https://github.com/PlatformLab/memtier_skewsyn.git ${MEMTIER_SKEWSYN_DIR}
cd ${MEMTIER_SKEWSYN_DIR}
git fetch
git checkout skewSynthetic
```

2. Use the `scripts/prepare.sh` to install PerfUtils and compile memtier
@@ -49,7 +47,7 @@ By default, logs will be saved in `${MEMTIER_SKEWSYN_DIR}/exp_logs`
1) Skew benchmark, only uses 1 client machine:
```
./runSkew.sh <server> <key-min> <key-max> <data-size> <iterations> <skew_bench> <log directory prefix: arachne/origin>
./runSkew.sh <server> <key-min> <key-max> <data-size> <iterations> <skew_bench> <log directory prefix>
```
For example:
```
@@ -59,12 +57,41 @@ By default, logs will be saved in `${MEMTIER_SKEWSYN_DIR}/exp_logs`
2) Colocation benchmark, uses 2 (or more) client machines (we assume memtier_skewsyn
is installed at the same path on every machine, or that the machines share the directory via NFS):
```
./runSynthetic.sh <server> <key-min> <key-max> <data-size> <iterations> <synthetic_bench> <videos[0/1]> <prefix: arachne/origin> [list of clients...]
./runSynthetic.sh <server> <key-min> <key-max> <data-size> <iterations> <synthetic_bench> <video[0/1]> <log directory prefix> [list of clients...]
```
For example:
```
./runSynthetic.sh ${server} 1000000 9000000 200 1 workloads/Synthetic16.bench 0 arachne_0vid ${client1}
```
If you set the `<video>` option to 0, the benchmark runs without background video processes.
If you are interested in the video colocation workload, please follow the
instructions in the [following section](#how-to-play-with-video-colocation-workload).
4. Parse logs:
For the skew benchmark, the logs are located in the
`${MEMTIER_SKEWSYN_DIR}/exp_logs/<log directory prefix>_iters<iterations>_skew_logs/`
directory.
For the colocation benchmark, the logs are in
`${MEMTIER_SKEWSYN_DIR}/exp_logs/<log directory prefix>_iters<iterations>_synthetic_logs/`.
Inside the directory there are two subdirectories: `latency/` and `throughput/`.
`latency/` contains the latency log files from the main client machine.
`throughput/` holds throughput logs for both the main client machine and the ${client1}
machine. The structures are as follows:
```
latency/ (skew workload does not record latencies)
|---- latency_iter1.csv
|---- latency_iter2.csv
|...

throughput/
|---- qps_iter1.csv
|---- qps_iter1_${client1}.csv (for colocation workload)
|...
```
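The per-iteration CSVs can be combined for offline analysis. Below is a minimal sketch that merges every `qps_iter*.csv` in a throughput directory into one file, tagging each row with its source iteration. The `<second>,<qps>` row format in the toy files is an assumption for illustration only; the real column layout is whatever memtier_benchmark writes.

```shell
# Build a toy throughput/ directory as a stand-in for real logs.
# The "<second>,<qps>" row format is an assumption for this demo.
demo=/tmp/demo_throughput
mkdir -p "$demo"
printf '1,100000\n2,120000\n' > "$demo/qps_iter1.csv"
printf '1,95000\n2,110000\n'  > "$demo/qps_iter2.csv"

# Merge all iterations into one combined file, prefixing each row
# with the name of the iteration it came from.
for f in "$demo"/qps_iter*.csv; do
    iter=$(basename "$f" .csv)
    sed "s/^/${iter},/" "$f"
done > "$demo/combined.csv"

cat "$demo/combined.csv"
# qps_iter1,1,100000
# qps_iter1,2,120000
# qps_iter2,1,95000
# qps_iter2,2,110000
```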
### How to play with video colocation workload?
@@ -83,6 +110,8 @@ suppose the directory would be `${videoDir}`.
in `${videoDir}/PerfUtils`, install nasm and x264 in `${videoDir}/install`.
It will also automatically update the `$PATH` in `~/.bashrc`. By default, logs
will be saved to `${videoDir}/exp_logs`.
This script can take a couple of minutes, especially the video downloading
part.
2. Test video encoding alone by running:
```
@@ -104,6 +133,8 @@ modify the `videoPATH` parameter in the `runSynthetic.sh` file. Then simply run:
```
./runSynthetic.sh ${server} 1000000 9000000 200 1 workloads/Synthetic16.bench 1 arachne_1vid ${client1}
```
You will find the corresponding log directories in `${videoDir}/exp_logs` and
`${MEMTIER_SKEWSYN_DIR}/exp_logs`.
You will find the corresponding log directories in
`${videoDir}/exp_logs/<log directory prefix>_iters<iterations>_synthetic_logs/` and
`${MEMTIER_SKEWSYN_DIR}/exp_logs/<log directory prefix>_iters<iterations>_synthetic_logs/`.
The log directory structures are the same as we described before.
66 changes: 14 additions & 52 deletions scripts/runSkew.sh
@@ -5,9 +5,8 @@
scriptname=`basename "$0"`
if [[ "$#" -lt 7 ]]; then
echo "Usage: $scriptname <server> <key-min> <key-max> <data-size> " \
"<iterations> <skew_bench> <prefix: arachne/origin>" \
"[list of clients...]"
echo "Example: $scriptname n1 100000 200000 32 5 workloads/Skew.bench arachne n4..."
"<iterations> <skew_bench> <logDirPrefix>"
echo "Example: $scriptname n1 1000000 9000000 200 1 workloads/Skew16.bench arachne_test"
exit
fi

@@ -41,7 +40,7 @@ runlog=exp_logs/${prefix}_iters${iters}_skew_runlog.log
echo "Saving logs to: $logdir"
mkdir -p $logdir
rm -rf $logdir/* # Clear previous logs
rm $runlog
rm -f $runlog

# Load data into Memcached and warm up
cmd="bash ${scriptPATH}/loadonly.sh $server $keymin $keymax $datasize $keyprefix"
@@ -53,43 +52,8 @@ read -p "Press enter to continue"
# Execute experiments multiple times
for iter in `seq 1 $iters`;
do
# $9... will be client addresses, start clients!
if [[ "$#" -gt 7 ]]; then
# shift previous ones
for argn in `seq 1 7`;
do
shift
done

clientlogdir=$dirPATH/$logdir
clientbench=$dirPATH/$benchfile

for client in "$@";
do
echo "Starting client: $client ..."
clientqpsfile=${qpsprefix}_iter${iter}_${client}.csv
clientRunLog=$dirPATH/${runlog}_${client}
cmd="$dirPATH/memtier_benchmark -s $server \
-p 11211 -P memcache_binary --clients $clients --threads $threads \
--ratio $ratio --pipeline=$pipeline \
--key-prefix=$keyprefix \
--key-minimum=$keymin --key-maximum=$keymax \
--data-size=$datasize --random-data --hide-histogram \
--key-pattern=$keypattern --run-count=1 \
--distinct-client-seed --randomize \
--test-time=1 -b \
--config-file=$clientbench --ir-dist=$irdist \
--log-dir=${clientlogdir} \
--log-qpsfile=$clientqpsfile"
#echo $cmd
sshcmd="ssh -p 22 $client \"nohup $cmd > $clientRunLog 2>&1 < /dev/null &\""
echo $sshcmd
ssh -p 22 $client "nohup $cmd > $clientRunLog 2>&1 < /dev/null & "
done
fi

logqpsfile=${qpsprefix}_iter${iter}.csv
#latencyfile=${latencyprefix}_iter${iter}.csv
cmd="./memtier_benchmark -s $server -p 11211 -P memcache_binary \
--clients $clients --threads $threads --ratio $ratio \
--pipeline=$pipeline \
@@ -101,11 +65,9 @@ do
--config-file=$benchfile --ir-dist=$irdist --log-dir=$logdir \
--log-qpsfile=$logqpsfile \
--videos=$videos"
#--log-latencyfile=$latencyfile \

# execute the command
echo $cmd
# $cmd >> $runlog 2>&1
$cmd 2>&1 | tee -a $runlog
done

@@ -117,16 +79,16 @@ mkdir -p $qpsdir
mkdir -p $latencydir

# Clean those directories
rm $qpsdir/*
rm $latencydir/*
mv $logdir/${qpsprefix}_iter* $qpsdir
mv $logdir/${latencyprefix}_iter* $latencydir
rm -f $qpsdir/*
rm -f $latencydir/*
mv $logdir/${qpsprefix}_iter* $qpsdir > /dev/null 2>&1
mv $logdir/${latencyprefix}_iter* $latencydir > /dev/null 2>&1

# Merge the data
cmd="scripts/mergeStats.py ${qpsdir} ${prefix}_iters${iters}_qps"
echo $cmd
$cmd >> $runlog 2>&1

cmd="scripts/mergeStats.py ${latencydir} ${prefix}_iters${iters}_latency"
echo $cmd
$cmd >> $runlog 2>&1
#cmd="scripts/mergeStats.py ${qpsdir} ${prefix}_iters${iters}_qps"
#echo $cmd
#$cmd >> $runlog 2>&1
#
#cmd="scripts/mergeStats.py ${latencydir} ${prefix}_iters${iters}_latency"
#echo $cmd
#$cmd >> $runlog 2>&1
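The commit message notes removing error messages from the scripts: the `rm -f` and `mv ... > /dev/null 2>&1` edits above silence harmless failures on the first run, when no previous logs exist yet. A small demonstration of why the flags matter (the paths here are scratch names for the demo):

```shell
# Plain rm on a missing file prints an error and returns non-zero;
# rm -f succeeds silently, keeping the run log clean.
rm /tmp/no_such_file_demo 2>/dev/null || echo "rm failed"
rm -f /tmp/no_such_file_demo && echo "rm -f ok"

# Similarly, mv with an unmatched glob fails loudly: the shell passes
# the pattern through literally and mv finds no such file. Redirecting
# its output, as the scripts now do, hides that first-run noise.
mkdir -p /tmp/empty_src_demo /tmp/dst_demo
mv /tmp/empty_src_demo/qps_iter* /tmp/dst_demo > /dev/null 2>&1 \
    || echo "mv failed, but silently"
```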
16 changes: 7 additions & 9 deletions scripts/runSynthetic.sh
@@ -7,7 +7,7 @@ if [[ "$#" -lt 8 ]]; then
echo "Usage: $scriptname <server> <key-min> <key-max> <data-size>" \
"<iterations> <synthetic_bench> <video[0/1]> <prefix: arachne/origin>" \
"[list of clients...]"
echo "Example: $scriptname n1 100000 200000 32 5 workloads/LoadEstimator.bench 1 arachne n3 n4..."
echo "Example: $scriptname n1 1000000 9000000 200 1 workloads/Synthetic16.bench 0 arachne_0vid n3"
exit
fi

@@ -42,7 +42,7 @@ runlog=exp_logs/${prefix}_iters${iters}_synthetic_runlog.log
echo "Saving logs to: $logdir"
mkdir -p $logdir
rm -rf $logdir/* # Clear previous logs
rm $runlog
rm -f $runlog

# Load data into Memcached and warm up
cmd="bash $scriptPATH/loadonly.sh $server $keymin $keymax $datasize $keyprefix"
@@ -95,10 +95,9 @@ do
pipeline=1
irdist="UNIFORM"
benchfile=workloads/Synthetic16-master.bench
#benchfile=workloads/Synthetic16_1sec-master.bench
fi

# Master usually run in non-blocking mode
# Start the main master node
logqpsfile=${qpsprefix}_iter${iter}.csv
latencyfile=${latencyprefix}_iter${iter}.csv
cmd="./memtier_benchmark -s $server -p 11211 -P memcache_binary \
@@ -116,7 +115,6 @@ do

# execute the command
echo $cmd
# $cmd >> $runlog 2>&1
$cmd 2>&1 | tee -a $runlog
done

@@ -128,10 +126,10 @@ mkdir -p $qpsdir
mkdir -p $latencydir

# Clean those directories
rm $qpsdir/*
rm $latencydir/*
mv $logdir/${qpsprefix}_iter* $qpsdir
mv $logdir/${latencyprefix}_iter* $latencydir
rm -f $qpsdir/*
rm -f $latencydir/*
mv $logdir/${qpsprefix}_iter* $qpsdir > /dev/null 2>&1
mv $logdir/${latencyprefix}_iter* $latencydir > /dev/null 2>&1

# Merge the data
#cmd="scripts/mergeStats.py ${qpsdir} ${prefix}_iters${iters}_qps"
