SUMMARY
-------

apiary is a load-testing framework for MySQL, written in Python, that replays
captured queries, simulating production load patterns.

A QueenBee process feeds sequences of one or more messages into an AMQP queue,
and one or more WorkerBee processes retrieve sequences from the queue, send the
associated messages to the specified target host, and report their results to
the BeeKeeper to be tallied. The QueenBee prints a summary of the progress
every 15 seconds and when all the WorkerBees are done. The QueenBee and
WorkerBees work together to mimic production load patterns as closely as
possible, including timing.

Query timing can be slowed down or sped up by an arbitrary factor using the
--speedup option. In testing, Apiary has reproduced 30,000 queries per second
on a powerful test system. An ideal system has at least 4 CPU cores that are
as fast as possible; fewer, faster cores are preferable to more, slower ones.
The gating factor tends to be RabbitMQ.

REQUIREMENTS
------------

* RabbitMQ >= 1.6

* lsprof (if you want to use --profile)
  http://codespeak.net/svn/user/arigo/hack/misc/lsprof/

* py-amqplib (only tested with an implementation of the 0-8 spec)
  http://hg.barryp.org/py-amqplib/file/

* maatkit or percona-toolkit (either may be available in your Linux distribution)
  http://www.percona.com/software/percona-toolkit


RABBITMQ SETUP
--------------

To configure a local, running instance of RabbitMQ, execute the following:

    sudo bin/apiary_setup_rabbitmq.sh

This will delete the apiary vhost and user, re-add them, and then set up
the appropriate permissions. This must be run as root.


TUTORIAL
--------

This tutorial assumes you have MySQL running on localhost with no password for
the root user. Adapt the commands below appropriately if that is not the case.

1. Load the included sample data. This will create an "apiary_demo" database:

    mysql -u root < doc/examples/demo.sql

2. Use tcpdump to capture MySQL traffic for consumption by pt-query-digest:

    sudo tcpdump -i lo port 3306 -s 65535 -x -n -q -tttt > /tmp/tcpdump.out

3. In a separate terminal, generate some queries against the test data:

    while read query; do \
        mysql -u root -h localhost --protocol=TCP -e "$query" apiary_demo > /dev/null; \
        sleep 0.1; \
    done < doc/examples/demo_queries.sql

4. Stop the tcpdump process that you started in step 2.

5. Turn the tcpdump data into a query digest using percona-toolkit:

    pt-query-digest --type=tcpdump --no-report --print /tmp/tcpdump.out > /tmp/apiary_query_digest.txt

   Use mk-query-digest if you have maatkit instead of percona-toolkit; the two
   work identically.

6. Convert the query digest into a sequence file:

    bin/genjobs /tmp/apiary_query_digest.txt > /tmp/apiary_test.jobs

7. Run apiary:

    bin/apiary --workers 10 --mysql-user root --mysql-db apiary_demo --mysql-host localhost /tmp/apiary_test.jobs

8. Apiary will fork QueenBee, WorkerBee, and BeeKeeper processes internally.
   They will work together to read and execute your traffic. Apiary will very
   quickly print this message:

    Waiting for workers to complete jobs and terminate (may take up to 300 seconds)...

   This is because the QueenBee prefills the job queue with up to 300 seconds
   of query traffic to make sure that the WorkerBees never run out of jobs to
   run.

9. You should see a summary of the results as queries are being executed and
   after all queries have been replayed.
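The --speedup scaling mentioned in the SUMMARY can be sketched in Python. This
is an illustrative model of the timing behavior, not apiary's actual internals;
the function name and signature are invented for the example. Given the capture
timestamps of the queries in a sequence, a WorkerBee only needs to divide each
original inter-query gap by the speedup factor to decide how long to sleep
before sending the next query:

```python
def scaled_delays(timestamps, speedup=1.0):
    """Return the sleep interval to wait before each query.

    timestamps -- original capture times of the queries, in seconds
    speedup    -- > 1 replays faster than production, < 1 replays slower
    """
    delays = []
    prev = None
    for t in timestamps:
        # The first query is sent immediately; every later query waits
        # for the original gap, compressed or stretched by the factor.
        delays.append(0.0 if prev is None else (t - prev) / speedup)
        prev = t
    return delays

# Queries captured at t = 0 s, 1 s, and 3 s, replayed twice as fast:
print(scaled_delays([0.0, 1.0, 3.0], speedup=2.0))  # [0.0, 0.5, 1.0]
```

With speedup=1.0 the replay reproduces the original pacing exactly, which is
how production load patterns are mimicked "including timing".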


NOTES
-----

I found this command line useful to reset between tests:

    sudo pkill -f rabbitmq; pkill -f bin/apiary; sudo /etc/init.d/rabbitmq-server start; sudo bin/apiary_setup_rabbitmq.sh; watch -n 1 sudo rabbitmqctl list_queues -p /apiary

I found these networking settings useful with large query volumes:

    sudo bash -c 'echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse'
    sudo bash -c 'echo 1024 65535 > /proc/sys/net/ipv4/ip_local_port_range'

Otherwise, you may start to see errors like "Could not connect to MySQL host"
or "resource not available", because the kernel will quickly run out of local
ports it's willing to use.
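A back-of-the-envelope sketch of why the port range matters: a closed TCP
connection's local port stays unusable in TIME_WAIT (60 seconds by default on
Linux), so the sustained new-connection rate to a single destination is roughly
the number of ephemeral ports divided by the TIME_WAIT duration. The numbers
below assume the common Linux defaults; check your own kernel's settings:

```python
def max_conn_rate(port_lo, port_hi, time_wait=60.0):
    """Rough ceiling on new connections per second to one destination:
    each local port is tied up for time_wait seconds after close."""
    return (port_hi - port_lo + 1) / time_wait

# Typical Linux default ephemeral range:
print(round(max_conn_rate(32768, 60999)))  # 471 connections/second
# Widened range from the sysctl setting above:
print(round(max_conn_rate(1024, 65535)))   # 1075 connections/second
```

At tens of thousands of queries per second, many short-lived MySQL connections
can blow past the default ceiling quickly, which is what the errors above
reflect; widening the range and enabling tcp_tw_reuse both raise it.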