.. _cloudwatch_tut:

==========
CloudWatch
==========

First, make sure you have something to monitor.  You can either create a
LoadBalancer or enable monitoring on an existing EC2 instance.  To enable
monitoring, you can either call the monitor_instance method on the
EC2Connection object or call the monitor method on the Instance object.

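
As a minimal sketch of that step, here is a small wrapper around the
monitor_instance call described above.  The instance ID is hypothetical, and
the connection is passed in so the same function works against a real
boto EC2Connection or a stand-in:

```python
def enable_monitoring(conn, instance_id):
    """Turn on CloudWatch monitoring for the given instance.

    conn is expected to provide a monitor_instance method, as boto's
    EC2Connection does; the return value is whatever the connection
    reports back about the monitoring request.
    """
    return conn.monitor_instance(instance_id)

# Typical usage (requires AWS credentials; the instance ID is made up):
#   import boto
#   conn = boto.connect_ec2()
#   enable_monitoring(conn, 'i-e573e68c')
```
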
It takes a while for the monitoring data to start accumulating but once
it does, you can do this::

    >>> import boto
    >>> c = boto.connect_cloudwatch()
    >>> metrics = c.list_metrics()
    >>> metrics
    [Metric:NetworkIn,
     Metric:NetworkOut,
     Metric:NetworkOut(InstanceType,m1.small),
     Metric:NetworkIn(InstanceId,i-e573e68c),
     Metric:CPUUtilization(InstanceId,i-e573e68c),
     Metric:DiskWriteBytes(InstanceType,m1.small),
     Metric:DiskWriteBytes(ImageId,ami-a1ffb63),
     Metric:NetworkOut(ImageId,ami-a1ffb63),
     Metric:DiskWriteOps(InstanceType,m1.small),
     Metric:DiskReadBytes(InstanceType,m1.small),
     Metric:DiskReadOps(ImageId,ami-a1ffb63),
     Metric:CPUUtilization(InstanceType,m1.small),
     Metric:NetworkIn(ImageId,ami-a1ffb63),
     Metric:DiskReadOps(InstanceType,m1.small),
     Metric:DiskReadBytes,
     Metric:CPUUtilization,
     Metric:DiskWriteBytes(InstanceId,i-e573e68c),
     Metric:DiskWriteOps(InstanceId,i-e573e68c),
     Metric:DiskWriteOps,
     Metric:DiskReadOps,
     Metric:CPUUtilization(ImageId,ami-a1ffb63),
     Metric:DiskReadOps(InstanceId,i-e573e68c),
     Metric:NetworkOut(InstanceId,i-e573e68c),
     Metric:DiskReadBytes(ImageId,ami-a1ffb63),
     Metric:DiskReadBytes(InstanceId,i-e573e68c),
     Metric:DiskWriteBytes,
     Metric:NetworkIn(InstanceType,m1.small),
     Metric:DiskWriteOps(ImageId,ami-a1ffb63)]

The list_metrics call returns a list of all of the available metrics
that you can query against.  Each entry in the list is a Metric object.
As you can see from the list above, some of the metrics are generic
and some have Dimensions associated with them (e.g. InstanceType=m1.small).
A Dimension can be used to refine your query.  So, for example, I could
query the metric Metric:CPUUtilization, which would compute the desired
statistic by aggregating CPU utilization data across all available sources,
or I could refine that by querying the metric
Metric:CPUUtilization(InstanceId,i-e573e68c), which would use only the data
associated with the instance identified by the instance ID i-e573e68c.

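
To make the refinement concrete, here is a small helper that picks out
metrics by name and, optionally, by one dimension.  The (name, dimensions)
pairs below are a simplified stand-in for boto's Metric objects, not boto's
actual representation:

```python
def select_metrics(metrics, name, dim_name=None, dim_value=None):
    """Return metrics matching a name and, optionally, one dimension.

    Each metric is modeled as a (name, dimensions) pair, where
    dimensions is a dict such as {'InstanceId': 'i-e573e68c'}.
    """
    chosen = []
    for metric_name, dims in metrics:
        if metric_name != name:
            continue
        if dim_name is not None and dims.get(dim_name) != dim_value:
            continue
        chosen.append((metric_name, dims))
    return chosen

# A few entries shaped like the list above.
metrics = [
    ('CPUUtilization', {}),                            # aggregate metric
    ('CPUUtilization', {'InstanceId': 'i-e573e68c'}),  # per-instance
    ('CPUUtilization', {'InstanceType': 'm1.small'}),  # per-instance-type
    ('NetworkIn', {'InstanceId': 'i-e573e68c'}),
]

# Only the CPU metric scoped to our one instance survives the filter.
per_instance = select_metrics(metrics, 'CPUUtilization',
                              'InstanceId', 'i-e573e68c')
```
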
Because, for this example, I'm only monitoring a single instance, the set
of metrics available to me is fairly limited.  If I were monitoring many
instances, using many different instance types and AMIs, as well as several
load balancers, the list of available metrics would grow considerably.

Once you have the list of available metrics, you can actually
query the CloudWatch system for that metric.  Let's choose the CPU
utilization metric for our instance::

    >>> m = metrics[5]
    >>> m
    Metric:CPUUtilization(InstanceId,i-e573e68c)

The Metric object has a query method that lets us actually perform
the query against the collected data in CloudWatch.  To call it,
we need a start time and an end time to control the time span of data
that we are interested in.  For this example, let's say we want the
data for the previous hour::

    >>> import datetime
    >>> end = datetime.datetime.utcnow()
    >>> start = end - datetime.timedelta(hours=1)

We also need to supply the Statistic that we want reported and
the Units to use for the results.  The Statistic can be one of these
values::

    ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount']

And Units must be one of the following::

    ['Seconds', 'Percent', 'Bytes', 'Bits', 'Count',
     'Bytes/Second', 'Bits/Second', 'Count/Second']

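
An invalid statistic or unit only fails once the request reaches the
service, so a quick client-side check can save a round trip.  This is just
a convenience sketch using the value lists above, not part of boto's API:

```python
# Allowed values, copied from the lists above.
STATISTICS = {'Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount'}
UNITS = {'Seconds', 'Percent', 'Bytes', 'Bits', 'Count',
         'Bytes/Second', 'Bits/Second', 'Count/Second'}

def check_query_args(statistic, unit):
    """Raise ValueError unless both arguments are values CloudWatch accepts."""
    if statistic not in STATISTICS:
        raise ValueError('bad statistic: %r' % statistic)
    if unit not in UNITS:
        raise ValueError('bad unit: %r' % unit)
```
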
The query method also takes an optional parameter, period.  This
parameter controls the granularity (in seconds) of the data returned.
The smallest period is 60 seconds and the value must be a multiple
of 60 seconds.  So, let's ask for the average as a percent::

    >>> datapoints = m.query(start, end, 'Average', 'Percent')
    >>> len(datapoints)
    60

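
The number of datapoints you get back follows directly from the time span
and the period, so you can sanity-check a result like the one above with a
little arithmetic.  This helper is an illustration, not part of boto:

```python
import datetime

def expected_datapoints(start, end, period=60):
    """Number of datapoints a query should return: duration / period.

    period is in seconds and, per the constraint described above, must
    be a multiple of 60.
    """
    if period < 60 or period % 60:
        raise ValueError('period must be a multiple of 60 seconds')
    return int((end - start).total_seconds() // period)

# One hour at a 60-second period -> 60 datapoints, matching the
# query output above.  (The timestamps here are arbitrary.)
end = datetime.datetime(2009, 5, 21, 20, 55)
start = end - datetime.timedelta(hours=1)
count = expected_datapoints(start, end)
```
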
Our period was 60 seconds and our duration was one hour, so
we should get 60 datapoints back, and we can see that we did.
Each element in the datapoints list is a DataPoint object,
which is a simple subclass of a Python dict object.  Each
DataPoint object contains all of the information available
about that particular data point::

    >>> d = datapoints[0]
    >>> d
    {u'Average': 0.0,
     u'SampleCount': 1.0,
     u'Timestamp': u'2009-05-21T19:55:00Z',
     u'Unit': u'Percent'}

My server obviously isn't very busy right now!
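
Because each DataPoint behaves like a plain dict, ordinary Python is enough
to summarize a query result.  Here is a sketch over a few hand-made
datapoints shaped like the one above (the values are invented for the
example):

```python
def summarize(datapoints, key='Average'):
    """Return (mean, peak) of the chosen statistic across the datapoints."""
    values = [d[key] for d in datapoints]
    return sum(values) / len(values), max(values)

# Datapoints shaped like the query result above, with made-up values.
sample = [
    {'Average': 0.0, 'SampleCount': 1.0,
     'Timestamp': '2009-05-21T19:55:00Z', 'Unit': 'Percent'},
    {'Average': 2.0, 'SampleCount': 1.0,
     'Timestamp': '2009-05-21T19:56:00Z', 'Unit': 'Percent'},
    {'Average': 4.0, 'SampleCount': 1.0,
     'Timestamp': '2009-05-21T19:57:00Z', 'Unit': 'Percent'},
]

mean, peak = summarize(sample)  # mean 2.0, peak 4.0
```
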