
/boto-2.5.2/docs/source/cloudwatch_tut.rst

.. _cloudwatch_tut:

==========
CloudWatch
==========
First, make sure you have something to monitor. You can either create a
LoadBalancer or enable monitoring on an existing EC2 instance. To enable
monitoring, you can either call the monitor_instance method on the
EC2Connection object or call the monitor method on the Instance object.
It takes a while for the monitoring data to start accumulating, but once
it does, you can do this::
    >>> import boto
    >>> c = boto.connect_cloudwatch()
    >>> metrics = c.list_metrics()
    >>> metrics
    [Metric:NetworkIn,
     Metric:NetworkOut,
     Metric:NetworkOut(InstanceType,m1.small),
     Metric:NetworkIn(InstanceId,i-e573e68c),
     Metric:CPUUtilization(InstanceId,i-e573e68c),
     Metric:DiskWriteBytes(InstanceType,m1.small),
     Metric:DiskWriteBytes(ImageId,ami-a1ffb63),
     Metric:NetworkOut(ImageId,ami-a1ffb63),
     Metric:DiskWriteOps(InstanceType,m1.small),
     Metric:DiskReadBytes(InstanceType,m1.small),
     Metric:DiskReadOps(ImageId,ami-a1ffb63),
     Metric:CPUUtilization(InstanceType,m1.small),
     Metric:NetworkIn(ImageId,ami-a1ffb63),
     Metric:DiskReadOps(InstanceType,m1.small),
     Metric:DiskReadBytes,
     Metric:CPUUtilization,
     Metric:DiskWriteBytes(InstanceId,i-e573e68c),
     Metric:DiskWriteOps(InstanceId,i-e573e68c),
     Metric:DiskWriteOps,
     Metric:DiskReadOps,
     Metric:CPUUtilization(ImageId,ami-a1ffb63),
     Metric:DiskReadOps(InstanceId,i-e573e68c),
     Metric:NetworkOut(InstanceId,i-e573e68c),
     Metric:DiskReadBytes(ImageId,ami-a1ffb63),
     Metric:DiskReadBytes(InstanceId,i-e573e68c),
     Metric:DiskWriteBytes,
     Metric:NetworkIn(InstanceType,m1.small),
     Metric:DiskWriteOps(ImageId,ami-a1ffb63)]
The list_metrics call will return a list of all of the available metrics
that you can query against. Each entry in the list is a Metric object.
As you can see from the list above, some of the metrics are generic metrics
and some have Dimensions associated with them (e.g. InstanceType=m1.small).
The Dimension can be used to refine your query. So, for example, I could
query the metric Metric:CPUUtilization, which would create the desired
statistic by aggregating CPU utilization data across all available sources
of information, or I could refine that by querying the metric
Metric:CPUUtilization(InstanceId,i-e573e68c), which would use only the data
associated with the instance identified by the instance ID i-e573e68c.

Because I'm only monitoring a single instance in this example, the set
of metrics available to me is fairly limited. If I were monitoring many
instances, using many different instance types and AMIs, and also several
load balancers, the list of available metrics would grow considerably.
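To make the aggregation behaviour concrete, here is a small, self-contained
sketch of how a Dimension narrows the set of observations that feed a
statistic. This is plain Python with no CloudWatch calls; the instance IDs
and CPU values are made up for illustration:

```python
# Illustration only: each tuple mimics one observation that CloudWatch
# would fold into a statistic (metric name, dimensions, value).
observations = [
    ('CPUUtilization', {'InstanceId': 'i-e573e68c'}, 12.0),
    ('CPUUtilization', {'InstanceId': 'i-aaaa0001'}, 40.0),
    ('CPUUtilization', {'InstanceId': 'i-bbbb0002'}, 80.0),
]

# No Dimension: the Average aggregates across every source.
overall = sum(v for _, _, v in observations) / len(observations)

# With an InstanceId Dimension: only the matching instance contributes.
subset = [v for _, dims, v in observations
          if dims.get('InstanceId') == 'i-e573e68c']
refined = sum(subset) / len(subset)
```

The undimensioned average blends all three instances, while the
dimensioned one reflects only the single instance you asked about.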
Once you have the list of available metrics, you can actually
query the CloudWatch system for that metric. Let's choose the CPU
utilization metric for our instance::
    >>> m = metrics[5]
    >>> m
    Metric:CPUUtilization(InstanceId,i-e573e68c)
The Metric object has a query method that lets us actually perform
the query against the collected data in CloudWatch. To call that,
we need a start time and end time to control the time span of data
that we are interested in. For this example, let's say we want the
data for the previous hour::
    >>> import datetime
    >>> end = datetime.datetime.now()
    >>> start = end - datetime.timedelta(hours=1)
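One thing worth noting: CloudWatch reports timestamps in UTC (see the
trailing ``Z`` in the sample output further down), so on a machine whose
clock isn't set to UTC you may prefer to build the window from utcnow().
A minimal sketch:

```python
import datetime

# CloudWatch timestamps come back in UTC, so build the query window in
# UTC as well to avoid a timezone-offset surprise in the results.
end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)
```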
We also need to supply the Statistic that we want reported and
the Units to use for the results. The Statistic can be one of these
values::

    ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount']

And Units must be one of the following::

    ['Seconds', 'Percent', 'Bytes', 'Bits', 'Count',
     'Bytes/Second', 'Bits/Second', 'Count/Second']
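Anything outside these lists will be rejected by the service, so it can be
handy to validate the arguments client-side before making the request. A
minimal sketch using the value lists above (check_query_args is a
hypothetical helper for illustration, not part of boto):

```python
STATISTICS = ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount']
UNITS = ['Seconds', 'Percent', 'Bytes', 'Bits', 'Count',
         'Bytes/Second', 'Bits/Second', 'Count/Second']

def check_query_args(statistic, unit):
    # Fail fast, before a round trip to the CloudWatch API.
    if statistic not in STATISTICS:
        raise ValueError('bad statistic: %r' % statistic)
    if unit not in UNITS:
        raise ValueError('bad unit: %r' % unit)

check_query_args('Average', 'Percent')  # fine
```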
The query method also takes an optional parameter, period. This
parameter controls the granularity (in seconds) of the data returned.
The smallest period is 60 seconds and the value must be a multiple
of 60 seconds. So, let's ask for the average as a percent::
    >>> datapoints = m.query(start, end, 'Average', 'Percent')
    >>> len(datapoints)
    60
Our period was 60 seconds and our duration was one hour, so
we should get 60 data points back, and we can see that we did.
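The arithmetic generalizes: the number of datapoints you get back is the
window length divided by the period. A quick sketch (expected_datapoints
is a hypothetical helper for illustration):

```python
from datetime import timedelta

def expected_datapoints(window, period=60):
    # period must be a multiple of 60 seconds, and at least 60.
    assert period >= 60 and period % 60 == 0
    return int(window.total_seconds() // period)
```

With the default 60-second period, a one-hour window yields 60 points;
asking for period=300 over the same window would yield 12.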
Each element in the datapoints list is a DataPoint object,
which is a simple subclass of a Python dict object. Each
DataPoint object contains all of the information available
about that particular data point::
    >>> d = datapoints[0]
    >>> d
    {u'Average': 0.0,
     u'SampleCount': 1.0,
     u'Timestamp': u'2009-05-21T19:55:00Z',
     u'Unit': u'Percent'}
My server obviously isn't very busy right now!
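Because each DataPoint behaves like a plain dict, ordinary dict operations
work on the results. A small sketch using a plain dict shaped like the
sample output above (a real DataPoint, being a dict subclass, supports the
same access patterns):

```python
# Shaped like the DataPoint sample above.
d = {u'Average': 0.0,
     u'SampleCount': 1.0,
     u'Timestamp': u'2009-05-21T19:55:00Z',
     u'Unit': u'Percent'}

# Plain dict-style access pulls out individual fields.
average = d['Average']
unit = d['Unit']
fields = sorted(d.keys())
```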