SUMMARY
-------

apiary is a load-testing framework for MySQL, written in Python, that replays
captured queries, simulating production load patterns.

A QueenBee process feeds sequences of one or more messages into an AMQP queue,
and one or more WorkerBee processes retrieve sequences from the queue, send the
associated messages to the specified target host, and report their results to
the BeeKeeper to be tallied. The QueenBee prints a summary of the progress
every 15 seconds and when all the WorkerBees are done. The QueenBee and
WorkerBees work together to attempt to mimic production load patterns as
closely as possible, including timing.

Query timing can be slowed down or sped up by an arbitrary factor using the
--speedup option. In testing, Apiary has reproduced 30,000 queries per second
when run on a powerful test system. An ideal system has at least 4 CPU cores
that are as fast as possible; fewer, faster cores are preferable to more,
slower ones. The gating factor tends to be RabbitMQ.
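
For example, assuming the jobs file produced in the tutorial below, a run that
replays the capture at a different rate might look like this (the meaning of
the numeric factor is an assumption here; check bin/apiary --help for the
exact semantics of --speedup):

  bin/apiary --speedup 2.0 --workers 10 /tmp/apiary_test.jobs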

REQUIREMENTS
------------

* RabbitMQ >= 1.6

* lsprof (if you want to use --profile)
  http://codespeak.net/svn/user/arigo/hack/misc/lsprof/

* py-amqplib (only tested with an implementation of the 0-8 spec)
  http://hg.barryp.org/py-amqplib/file/

* maatkit or percona-toolkit (either may be available in your Linux distribution)
  http://www.percona.com/software/percona-toolkit

RABBITMQ SETUP
--------------

To configure a local, running instance of RabbitMQ, execute the following:

  sudo bin/apiary_setup_rabbitmq.sh

This will delete the apiary vhost and user, re-add them, and then set up
the appropriate permissions. This must be run as root.
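
If you prefer to set RabbitMQ up by hand, the script's steps are roughly
equivalent to rabbitmqctl commands like the ones below. This is only a sketch:
the vhost name /apiary matches the one used in the NOTES section, but the user
name, password, and exact permissions shown here are assumptions; the script
itself is authoritative.

  sudo rabbitmqctl delete_vhost /apiary
  sudo rabbitmqctl delete_user apiary
  sudo rabbitmqctl add_vhost /apiary
  sudo rabbitmqctl add_user apiary apiary   # user name and password are assumptions
  sudo rabbitmqctl set_permissions -p /apiary apiary ".*" ".*" ".*"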

TUTORIAL
--------

This tutorial assumes you have mysql running on localhost with no password for
the root user. Adapt the commands below appropriately if that is not the case.

1. Some sample data is included. This will create an "apiary_demo" database:

   mysql -u root < doc/examples/demo.sql
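
   If you want to confirm the import worked, listing the new database's tables
   is a quick check (this step is optional and not part of the original
   tutorial):

   mysql -u root -e "SHOW TABLES" apiary_demo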

2. Use tcpdump to capture mysql traffic for consumption by pt-query-digest:

   sudo tcpdump -i lo port 3306 -s 65535 -x -n -q -tttt > /tmp/tcpdump.out

3. In a separate terminal, generate some queries against the test data:

   while read query; do \
     mysql -u root -h localhost --protocol=TCP -e "$query" apiary_demo > /dev/null; \
     sleep 0.1; \
   done < doc/examples/demo_queries.sql

4. Stop the tcpdump process that you started in step 2.

5. Turn the tcpdump data into a query digest using percona-toolkit:

   pt-query-digest --type=tcpdump --no-report --print /tmp/tcpdump.out > /tmp/apiary_query_digest.txt

   Use mk-query-digest if you have maatkit instead of percona-toolkit. They
   both work identically.

6. Convert the query digest into a sequence file:

   bin/genjobs /tmp/apiary_query_digest.txt > /tmp/apiary_test.jobs
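
   Optionally, sanity-check that the sequence file came out non-empty before
   replaying it (the file format is specific to apiary and not described here):

   wc -l /tmp/apiary_test.jobs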

7. Run apiary:

   bin/apiary --workers 10 --mysql-user root --mysql-db apiary_demo --mysql-host localhost /tmp/apiary_test.jobs

8. Apiary will fork QueenBee, WorkerBee, and BeeKeeper processes internally.
   They will work together to read and execute your traffic. Apiary will very
   quickly print this message:

   Waiting for workers to complete jobs and terminate (may take up to 300 seconds)...

   This is because the QueenBee prefills the job queue with up to 300 seconds
   of query traffic to make sure that the WorkerBees never run out of jobs to
   run.

9. You should see a summary of the results as queries are being executed and
   after all queries have been replayed.

NOTES
-----

I found this command line useful to reset between tests:

  sudo pkill -f rabbitmq; pkill -f bin/apiary; sudo /etc/init.d/rabbitmq-server start; \
    sudo bin/apiary_setup_rabbitmq.sh; watch -n 1 sudo rabbitmqctl list_queues -p /apiary

I found these networking settings were useful with large query volumes:

  sudo bash -c 'echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse'
  sudo bash -c 'echo 1024 65535 > /proc/sys/net/ipv4/ip_local_port_range'

Otherwise, you may start to see errors like "Could not connect to MySQL host"
or "resource not available", because the kernel will quickly run out of local
ports it's willing to use.
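
The same two settings can also be applied with sysctl (and placed in
/etc/sysctl.conf if you want them to persist across reboots):

  sudo sysctl -w net.ipv4.tcp_tw_reuse=1
  sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"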