/docsite/rst/guide_rax.rst


Rackspace Cloud Guide
=====================

.. _introduction:

Introduction
````````````

.. note:: This section of the documentation is under construction. We are in the process of adding more examples about the Rackspace modules and how they work together. Once complete, there will also be examples for Rackspace Cloud in `ansible-examples <http://github.com/ansible/ansible-examples/>`_.

Ansible contains a number of core modules for interacting with Rackspace Cloud.

The purpose of this section is to explain how to put Ansible modules together
(and use inventory scripts) to use Ansible in a Rackspace Cloud context.

Prerequisites for using the rax modules are minimal. In addition to ansible itself,
all of the modules require and are tested against pyrax 1.5 or higher.
You'll need this Python module installed on the execution host.

pyrax is not currently available in many operating system
package repositories, so you will likely need to install it via pip:

.. code-block:: bash

    $ pip install pyrax

The following steps will often execute from the control machine against the Rackspace Cloud API, so it makes sense
to add localhost to the inventory file. (Ansible may not require this manual step in the future):

.. code-block:: ini

    [localhost]
    localhost ansible_connection=local

In playbook steps, we'll typically be using the following pattern:

.. code-block:: yaml

    - hosts: localhost
      connection: local
      gather_facts: False
      tasks:
.. _credentials_file:

Credentials File
````````````````

The `rax.py` inventory script and all `rax` modules support a standard `pyrax` credentials file that looks like:

.. code-block:: ini

    [rackspace_cloud]
    username = myraxusername
    api_key = d41d8cd98f00b204e9800998ecf8427e

Setting the environment parameter ``RAX_CREDS_FILE`` to the path of this file tells Ansible where to load
this information from.

More information about this credentials file can be found at
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md#authenticating
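For example, you might export the variable in your shell before invoking Ansible; the ``~/.raxpub`` path below is just an illustration and should point wherever you saved the file:

.. code-block:: bash

    # Point the rax modules and the rax.py inventory script at the credentials file
    $ export RAX_CREDS_FILE=~/.raxpub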
.. _virtual_environment:

Running from a Python Virtual Environment (Optional)
++++++++++++++++++++++++++++++++++++++++++++++++++++

Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.

There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the python binary will live at /usr/bin/python. This is done via the interpreter line in modules; however, when the inventory variable 'ansible_python_interpreter' is set, Ansible will use that specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on 'localhost', or perhaps running via 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:

.. code-block:: ini

    [localhost]
    localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python

.. note::

    pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.
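If Ansible runs from the virtualenv shown above, a minimal sketch of installing pyrax into that same environment (reusing the same ``/path/to/ansible_venv`` placeholder path) would be:

.. code-block:: bash

    # Install pyrax into the virtualenv that ansible_python_interpreter points at
    $ /path/to/ansible_venv/bin/pip install pyrax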
.. _provisioning:

Provisioning
````````````

Now for the fun parts.

The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:

    - Avoiding installing the pyrax library on remote nodes
    - No need to encrypt and distribute credentials to remote nodes
    - Speed and simplicity

.. note::

   Authentication with the Rackspace-related modules is handled by specifying
   your username and API key as environment variables, passing them as module
   arguments, or specifying the location of a credentials file.
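As a sketch of the environment-variable approach (assuming the ``RAX_USERNAME`` and ``RAX_API_KEY`` variables recognized by the rax modules; the values below are placeholders), you could export the credentials before running a play:

.. code-block:: bash

    # Export credentials for this shell session instead of passing them as module arguments
    $ export RAX_USERNAME=myraxusername
    $ export RAX_API_KEY=d41d8cd98f00b204e9800998ecf8427e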
Here is a basic example of provisioning an instance in ad-hoc mode:

.. code-block:: bash

    $ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes" -c local

Here's what it would look like in a playbook, assuming the parameters were defined in variables:

.. code-block:: yaml

    tasks:
      - name: Provision a set of instances
        local_action:
            module: rax
            name: "{{ rax_name }}"
            flavor: "{{ rax_flavor }}"
            image: "{{ rax_image }}"
            count: "{{ rax_count }}"
            group: "{{ group }}"
            wait: yes
        register: rax
The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory.
.. code-block:: yaml

    - name: Add the instances we created (by public IP) to the group 'raxhosts'
      local_action:
          module: add_host
          hostname: "{{ item.name }}"
          ansible_ssh_host: "{{ item.rax_accessipv4 }}"
          ansible_ssh_pass: "{{ item.rax_adminpass }}"
          groupname: raxhosts
      with_items: rax.success
      when: rax.action == 'create'
With the host group now created, the next play in this playbook could configure servers belonging to the raxhosts group.
.. code-block:: yaml

    - name: Configuration play
      hosts: raxhosts
      user: root
      roles:
        - ntp
        - webserver
The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us
to the next section.

.. _host_inventory:

Host Inventory
``````````````

Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, etc. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.

In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
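A hypothetical layout for such a mixed inventory directory (the ``hosts`` filename here is just an illustration) might be prepared like this:

.. code-block:: bash

    $ mkdir inventory
    $ cp rax.py inventory/            # dynamic inventory script
    $ chmod +x inventory/rax.py       # scripts must be executable
    $ cp hosts inventory/             # static INI inventory file
    $ chmod -x inventory/hosts        # INI files must not be executable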
.. _raxpy:

rax.py
++++++

To use the Rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentials file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.

.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``.

.. note:: Users of :doc:`tower` will note that dynamic inventory is natively supported by Tower, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps::

    $ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup

``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma separated list of regions.
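For instance, a hedged example limiting the generated inventory to the ORD and IAD regions might look like:

.. code-block:: bash

    # Only servers in the listed regions will appear in the inventory
    $ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD,IAD ansible all -i rax.py -m ping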
When using ``rax.py``, you will not have a 'localhost' defined in the inventory.

As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this would be to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it.

Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead
of an individual file will cause ansible to evaluate each file in that directory for inventory.
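The static file in that directory (hypothetically named ``inventory/localhost``) could contain nothing more than the entry shown earlier:

.. code-block:: ini

    [localhost]
    localhost ansible_connection=local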
Let's test our inventory script to see if it can talk to Rackspace Cloud.

.. code-block:: bash

    $ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup

Assuming things are properly configured, the ``rax.py`` inventory script will output information similar to the
following, which will be utilized for inventory and variables.
.. code-block:: json

    {
        "ORD": [
            "test"
        ],
        "_meta": {
            "hostvars": {
                "test": {
                    "ansible_ssh_host": "1.1.1.1",
                    "rax_accessipv4": "1.1.1.1",
                    "rax_accessipv6": "2607:f0d0:1002:51::4",
                    "rax_addresses": {
                        "private": [
                            {
                                "addr": "2.2.2.2",
                                "version": 4
                            }
                        ],
                        "public": [
                            {
                                "addr": "1.1.1.1",
                                "version": 4
                            },
                            {
                                "addr": "2607:f0d0:1002:51::4",
                                "version": 6
                            }
                        ]
                    },
                    "rax_config_drive": "",
                    "rax_created": "2013-11-14T20:48:22Z",
                    "rax_flavor": {
                        "id": "performance1-1",
                        "links": [
                            {
                                "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
                                "rel": "bookmark"
                            }
                        ]
                    },
                    "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
                    "rax_human_id": "test",
                    "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
                    "rax_image": {
                        "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
                        "links": [
                            {
                                "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
                                "rel": "bookmark"
                            }
                        ]
                    },
                    "rax_key_name": null,
                    "rax_links": [
                        {
                            "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                            "rel": "self"
                        },
                        {
                            "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                            "rel": "bookmark"
                        }
                    ],
                    "rax_metadata": {
                        "foo": "bar"
                    },
                    "rax_name": "test",
                    "rax_name_attr": "name",
                    "rax_networks": {
                        "private": [
                            "2.2.2.2"
                        ],
                        "public": [
                            "1.1.1.1",
                            "2607:f0d0:1002:51::4"
                        ]
                    },
                    "rax_os-dcf_diskconfig": "AUTO",
                    "rax_os-ext-sts_power_state": 1,
                    "rax_os-ext-sts_task_state": null,
                    "rax_os-ext-sts_vm_state": "active",
                    "rax_progress": 100,
                    "rax_status": "ACTIVE",
                    "rax_tenant_id": "111111",
                    "rax_updated": "2013-11-14T20:49:27Z",
                    "rax_user_id": "22222"
                }
            }
        }
    }
.. _standard_inventory:

Standard Inventory
++++++++++++++++++

When utilizing a standard INI-formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.

This can be achieved with the ``rax_facts`` module and an inventory file similar to the following:

.. code-block:: ini

    [test_servers]
    hostname1 rax_region=ORD
    hostname2 rax_region=ORD

.. code-block:: yaml

    - name: Gather info about servers
      hosts: test_servers
      gather_facts: False
      tasks:
        - name: Get facts about servers
          local_action:
            module: rax_facts
            credentials: ~/.raxpub
            name: "{{ inventory_hostname }}"
            region: "{{ rax_region }}"
        - name: Map some facts
          set_fact:
            ansible_ssh_host: "{{ rax_accessipv4 }}"
While you don't need to know how it works, it may be interesting to know what kind of variables are returned.

The ``rax_facts`` module provides facts such as the following, which match the ``rax.py`` inventory script:

.. code-block:: json
    {
        "ansible_facts": {
            "rax_accessipv4": "1.1.1.1",
            "rax_accessipv6": "2607:f0d0:1002:51::4",
            "rax_addresses": {
                "private": [
                    {
                        "addr": "2.2.2.2",
                        "version": 4
                    }
                ],
                "public": [
                    {
                        "addr": "1.1.1.1",
                        "version": 4
                    },
                    {
                        "addr": "2607:f0d0:1002:51::4",
                        "version": 6
                    }
                ]
            },
            "rax_config_drive": "",
            "rax_created": "2013-11-14T20:48:22Z",
            "rax_flavor": {
                "id": "performance1-1",
                "links": [
                    {
                        "href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
                        "rel": "bookmark"
                    }
                ]
            },
            "rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
            "rax_human_id": "test",
            "rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
            "rax_image": {
                "id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
                "links": [
                    {
                        "href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
                        "rel": "bookmark"
                    }
                ]
            },
            "rax_key_name": null,
            "rax_links": [
                {
                    "href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                    "rel": "self"
                },
                {
                    "href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
                    "rel": "bookmark"
                }
            ],
            "rax_metadata": {
                "foo": "bar"
            },
            "rax_name": "test",
            "rax_name_attr": "name",
            "rax_networks": {
                "private": [
                    "2.2.2.2"
                ],
                "public": [
                    "1.1.1.1",
                    "2607:f0d0:1002:51::4"
                ]
            },
            "rax_os-dcf_diskconfig": "AUTO",
            "rax_os-ext-sts_power_state": 1,
            "rax_os-ext-sts_task_state": null,
            "rax_os-ext-sts_vm_state": "active",
            "rax_progress": 100,
            "rax_status": "ACTIVE",
            "rax_tenant_id": "111111",
            "rax_updated": "2013-11-14T20:49:27Z",
            "rax_user_id": "22222"
        },
        "changed": false
    }
Use Cases
`````````

This section covers some additional usage examples built around a specific use case.

.. _example_1:

Example 1
+++++++++

Create an isolated cloud network and build a server

.. code-block:: yaml
    - name: Build Servers on an Isolated Network
      hosts: localhost
      connection: local
      gather_facts: False
      tasks:
        - name: Network create request
          local_action:
            module: rax_network
            credentials: ~/.raxpub
            label: my-net
            cidr: 192.168.3.0/24
            region: IAD
            state: present

        - name: Server create request
          local_action:
            module: rax
            credentials: ~/.raxpub
            name: web%04d.example.org
            flavor: 2
            image: ubuntu-1204-lts-precise-pangolin
            disk_config: manual
            networks:
              - public
              - my-net
            region: IAD
            state: present
            count: 5
            exact_count: yes
            group: web
            wait: yes
            wait_timeout: 360
          register: rax
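To run the play, you might point ``ansible-playbook`` at the inventory directory described earlier; the playbook filename below is just an illustration:

.. code-block:: bash

    $ RAX_CREDS_FILE=~/.raxpub ansible-playbook -i inventory/ isolated-network.yml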
.. _example_2:

Example 2
+++++++++

Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html

.. code-block:: yaml
    ---
    - name: Build environment
      hosts: localhost
      connection: local
      gather_facts: False
      tasks:
        - name: Load Balancer create request
          local_action:
            module: rax_clb
            credentials: ~/.raxpub
            name: my-lb
            port: 80
            protocol: HTTP
            algorithm: ROUND_ROBIN
            type: PUBLIC
            timeout: 30
            region: IAD
            wait: yes
            state: present
            meta:
              app: my-cool-app
          register: clb

        - name: Network create request
          local_action:
            module: rax_network
            credentials: ~/.raxpub
            label: my-net
            cidr: 192.168.3.0/24
            state: present
            region: IAD
          register: network

        - name: Server create request
          local_action:
            module: rax
            credentials: ~/.raxpub
            name: web%04d.example.org
            flavor: performance1-1
            image: ubuntu-1204-lts-precise-pangolin
            disk_config: manual
            networks:
              - public
              - private
              - my-net
            region: IAD
            state: present
            count: 5
            exact_count: yes
            group: web
            wait: yes
          register: rax

        - name: Add servers to web host group
          local_action:
            module: add_host
            hostname: "{{ item.name }}"
            ansible_ssh_host: "{{ item.rax_accessipv4 }}"
            ansible_ssh_pass: "{{ item.rax_adminpass }}"
            ansible_ssh_user: root
            groupname: web
          with_items: rax.success
          when: rax.action == 'create'

        - name: Add servers to Load balancer
          local_action:
            module: rax_clb_nodes
            credentials: ~/.raxpub
            load_balancer_id: "{{ clb.balancer.id }}"
            address: "{{ item.rax_networks.private|first }}"
            port: 80
            condition: enabled
            type: primary
            wait: yes
            region: IAD
          with_items: rax.success
          when: rax.action == 'create'

    - name: Configure servers
      hosts: web
      handlers:
        - name: restart nginx
          service: name=nginx state=restarted

      tasks:
        - name: Install nginx
          apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
          notify:
            - restart nginx

        - name: Ensure nginx starts on boot
          service: name=nginx state=started enabled=yes

        - name: Create custom index.html
          copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
                owner=root group=root mode=0644
.. _advanced_usage:

Advanced Usage
``````````````

.. _awx_autoscale:

Autoscaling with Tower
++++++++++++++++++++++

:doc:`tower` also contains a very nice feature for auto-scaling use cases.
In this mode, a simple curl script can call a defined URL and the server will "dial out" to the requester
and configure an instance that is spinning up. This can be a great way to reconfigure ephemeral nodes.
See the Tower documentation for more details.

A benefit of using the callback in Tower over pull mode is that job results are still centrally recorded
and less information has to be shared with remote hosts.
.. _pending_information:

Orchestration in the Rackspace Cloud
++++++++++++++++++++++++++++++++++++

Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example (a sketch of the first scenario follows the list):

* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively
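As a hedged sketch of the first scenario above (the ``clb_id`` and ``clb_node_id`` variables are hypothetical and would be registered or looked up elsewhere), a rolling update could drain one server at a time out of the load balancer, update it, and return it to the pool:

.. code-block:: yaml

    - name: Rolling update behind a Cloud Load Balancer
      hosts: web
      serial: 1
      tasks:
        - name: Drain this node from the load balancer
          local_action:
            module: rax_clb_nodes
            credentials: ~/.raxpub
            load_balancer_id: "{{ clb_id }}"
            node_id: "{{ clb_node_id }}"
            condition: draining
            wait: yes
            region: IAD

        - name: Update and verify the web server
          apt: pkg=nginx state=latest update_cache=yes

        - name: Return the node to the load balancer pool
          local_action:
            module: rax_clb_nodes
            credentials: ~/.raxpub
            load_balancer_id: "{{ clb_id }}"
            node_id: "{{ clb_node_id }}"
            condition: enabled
            wait: yes
            region: IAD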