diff --git a/docs/image/flips-monitoring-architecture.jpg b/docs/image/flips-monitoring-architecture.jpg
index ae90dc89bbd2311bc5934579b4e9107e9ea5879b..50f64ec113a024f630c2b6ae71c13cba5bc4a0e1 100644
Binary files a/docs/image/flips-monitoring-architecture.jpg and b/docs/image/flips-monitoring-architecture.jpg differ
diff --git a/docs/image/flips-monitoring-architecture2.jpg b/docs/image/flips-monitoring-architecture2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..f5471767227a0059d51829ddb3c756de573ae820
Binary files /dev/null and b/docs/image/flips-monitoring-architecture2.jpg differ
diff --git a/docs/monitoring.md b/docs/monitoring.md
index 7f996042786020d64066c8010dd028096f23b190..704588691b09cabcd94898fd1e9521cf70a3505e 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -1,74 +1,97 @@
-## FLAME Configuration and Monitoring Specification
+# **FLAME CLMC Information Model Specification**
 
-This document describe the low-level configuration and monitoring specification for cross-layer management and control within the FLAME platform. 
+This document describes the configuration and monitoring specification for cross-layer management and control within the FLAME platform. All information measured by the CLMC aims to improve management and control decisions made by the platform against defined performance criteria such as increasing QoE or reducing costs.
 
-### Principles
+## **Overview**
 
-#### Configuration Data
+This section provides an overview of the FLAME CLMC information model considering the following elements:
 
-Briefly describe
+* Media Service Information
+* Configuration
+* Monitoring
+* Information Security
+* Privacy
 
-* the characteristics of configuration data as ways to describe the structure of the system over time.  
-* how configuration data provides context for measurements of system behaviour
-* the lifecycle of configuration data within the platform and how it is used.
-* the type of configuration data
 
-Configuration includes:
+### Media Service (https://gitlab.it-innovation.soton.ac.uk/mjb/flame-clmc/issues/2)
 
-* Capacity (servers and networks)
-* Media Service (sfc and sf)
-* Topology (nodes and links)
-* Allocation (Media Service Instance, Service Function Instance, Surrogate Instance)
-* Basic State (up, down, etc.)
+The FLAME architecture defines a media service as "An Internet accessible service supporting processing, storage and retrieval of content resources hosted and managed by the FLAME platform". A media service consists of one or more media components (also known as Service Functions) that are composed together to create an overall Service Function Chain. SFs are realised through the instantiation of virtual machines (or containers) based on resource management policy. Multiple VMs may be instantiated for each SF to create surrogate SFs that balance load and deliver against performance targets.
 
-#### Monitoring Data
+Media services are described using a template structured according to the TOSCA specification (http://docs.oasis-open.org/tosca/TOSCA/v1.0/TOSCA-v1.0.html). A TOSCA template includes all of the information needed for the FLAME orchestrator to instantiate a media service. This includes all SFs, the links between SFs, and server resource configuration information. The Alpha version of the FLAME platform is based on the current published TOSCA specification. Future developments will extend the TOSCA specification (known as TOSCA++) to meet FLAME requirements such as higher-level KPIs and location-based constraints.
 
-Briefly descirbe:
+The current TOSCA template provides the initial structure of the Media Service information model through specified service and resource configuration. Within this structure, system components are instantiated whose runtime characteristics are measured to inform management processes. Measurements relate to individual SFs as well as aggregated measurements structured according to the context of deployment (e.g. media service, platform, etc.). Measurements are made by monitoring processes deployed with system components.
 
-* the characteristics of monitoring data as ways to measure the behaviour of the system overtime including usage and performance
-* how measurements relate to resources within the configured system
-* the lifecycle of monitoring data within the platform and how it is used
-* the type of monitoring data
+The media information model in relation to the high-level media service lifecycle is shown in the diagram below. The lifecycle includes processes for packaging, orchestration and management/control. Each stage in the process creates context for decisions and measurements within the next stage of the lifecycle: packaging creates the context for orchestration, orchestration creates the context for service instantiation, etc. In the diagram, the green concepts provide the context for filtering and queries whilst the yellow concepts are the measurement data providing runtime characteristics.
 
-Usage metrics
+![FLAMEContext](/docs/image/flame-context.jpg)
+
+The primary measurement point in the model is the VM/Container instance as this is the realisation of computational processes running on the platform. A VM/Container is one or more processes running on a physical or virtual host, with ports connecting to other VM/Container instances over network links. The VM/Container has measurement processes running to capture different views on the VM/Container, including the network, host and service. The acquisition of these different views of the VM/Container together is a key element of the cross-layer information required for management and control. The measurements about a VM/Container are captured by different processes running on the VM or container but are brought together by common context, allowing the information to be integrated, correlated and analysed.
+
+### Configuration (https://gitlab.it-innovation.soton.ac.uk/mjb/flame-clmc/issues/3)
+
+Configuration information describes the structure of the system over time. Each system component has a configuration lifecycle that defines configuration states and transitions between states.
+
+Configuration information can include:
+
+* Capacity (e.g. servers, networks)
+* Topology (e.g. nodes and links)
+* Resource Allocation (e.g. cpus, memory, data IO, etc)
+* SF State (e.g. port up, port down, sf placed, sf booted)
+* Media Service (software)
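The configuration lifecycle mentioned above can be sketched as a simple state/transition table. This is a minimal sketch: the state and event names are illustrative, drawn from the SF State examples in the list, not a defined FLAME lifecycle.

```python
# Illustrative SF configuration lifecycle as a transition table.
# States and events are assumptions based on the examples above
# (placed, booted, port up/down); the real lifecycle is TBD.

SF_TRANSITIONS = {
    ("unplaced", "place"): "placed",
    ("placed", "boot"): "booted",
    ("booted", "port_up"): "connected",
    ("connected", "port_down"): "booted",
    ("booted", "unplace"): "unplaced",
}

def next_state(state: str, event: str) -> str:
    """Return the next configuration state, or raise on an invalid transition."""
    try:
        return SF_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} from {state}")
```

Recording each transition as a time-stamped configuration measurement would allow the time spent in each state to be queried later.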
+
+### Monitoring (https://gitlab.it-innovation.soton.ac.uk/mjb/flame-clmc/issues/8)
+
+Monitoring measures the behaviour of the system over time, including metrics associated with usage and performance. Measurements are made within the context of a configuration.
+
+Usage monitoring information can include:
 
 * network resource usage
 * host resource usage
 * service usage
 
-Performance metrics:
+Performance monitoring information can include:
 
 * cpu/sec
 * throughput
 * response time
 * etc.
 
-#### Measurements Model
+### Information Security (https://gitlab.it-innovation.soton.ac.uk/mjb/flame-clmc/issues/25) (TBC Stephen Phillips)
+
+*to be completed*
+
+### Data Subject (https://gitlab.it-innovation.soton.ac.uk/mjb/flame-clmc/issues/24) (TBC Stephen Phillips)
+
+*to be completed*
+
+## **Measurement Model**
 
-##### General 
+### General 
 
-The measurement model is based on a time-series model using the TICK stack from influxdata. The data model is based on the line protocol which has the format
+The measurement model is based on a time-series model defined by the influxdata TICK stack, called the line protocol. The protocol defines a format for measurement samples which can be combined to create series.
 
 `<measurement>[,<tag-key>=<tag-value>...] <field-key>=<field-value>[,<field2-key>=<field2-value>...] [unix-nano-timestamp]`
 
-Each series has
+Each series has:
 
 * a name "measurement"
 * 0 or more tags for configuration context
 * 1 or more fields for the measurement values
 * a timestamp.
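As a minimal sketch, a line protocol sample can be assembled from these four parts. The measurement and tag names used here are illustrative, and escaping of spaces/commas in values is omitted for brevity.

```python
import time

def _fmt_field(value):
    """Format a field value per line protocol conventions
    (integers get an 'i' suffix, strings are double-quoted)."""
    if isinstance(value, bool):
        return "true" if value else "false"
    if isinstance(value, int):
        return f"{value}i"
    if isinstance(value, str):
        return f'"{value}"'
    return str(value)

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Serialise one sample: <measurement>[,tag=value...] field=value[,...] [timestamp]."""
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={_fmt_field(v)}" for k, v in fields.items())
    ts = time.time_ns() if ts_ns is None else ts_ns
    return f"{measurement}{tag_str} {field_str} {ts}"

# Hypothetical sample: a service latency measurement with two context tags
print(to_line_protocol("service_response",
                       {"sf": "mpeg-dash", "location": "soton"},
                       {"latency_ms": 42},
                       1515583926868000000))
```

This produces a single line such as `service_response,location=soton,sf=mpeg-dash latency_ms=42i 1515583926868000000`, which is the unit that series are built from.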
 
-The model will be used to report configuration and monitoring data. In general, tags are used to provide configuration context for measurement values stored in fields. The tags are structured to provide queries by dimensions defined in the FLAME architecture. Tags are automatically indexed by InfluxDB. Global tags are automatically inserted by contexualised agents collecting data from monitoring processes. The global tags used across different measurements are a key part of the database design. Although, InfluxDB is schemaless database allowing arbirtary measurement fields to be stored (e.g. allowing for a media component to have a set of specific metrics), using common global tags allows the aggregation of measurements across time with common context. Although similar to SQL influx is not a relational database and the primary key for all measuremetns is time. Further schema design recommendations can be found here:
+The model is used to report both configuration and monitoring data. In general, tags are used to provide configuration context for measurement values stored in fields. The tags are structured to support queries by KPIs and dimensions defined in the FLAME architecture.
 
-https://docs.influxdata.com/influxdb/v1.4/concepts/schema_and_data_layout/
+Tags are automatically indexed by InfluxDB. Global tags can be automatically inserted by contextualised agents collecting data from monitoring processes. The global tags used across different measurements are a key part of the database design. Although InfluxDB is a schemaless database allowing arbitrary measurement fields to be stored (e.g. allowing for a media component to have a set of specific metrics), using common global tags allows the aggregation of measurements across time with common context.
 
-##### Temporal Measurements
+Although its query language is similar to SQL, InfluxDB is not a relational database, and the primary key for all measurements is time. Further schema design recommendations can be found here: https://docs.influxdata.com/influxdb/v1.4/concepts/schema_and_data_layout/
+
+### Temporal Measurements (TBC Simon Crowle)
 
 Monitoring data must have time-stamp values that are consistent and synchronised across the platform. This means that all VMs hosting SFs should have a synchronised system clock, or at least (and more likely) a means by which a millisecond offset from the local time can be retrieved so that a 'platform-correct' time value can be calculated.
 
 Describe approaches to integrate temporal measurements, time as a primary key, etc.
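The millisecond-offset idea above can be sketched as follows. The function name and the sign convention for the offset are assumptions; the offset itself would come from an NTP-like exchange with the platform clock source.

```python
import time

def platform_time_ns(offset_ms):
    """Return a 'platform-correct' unix-nano timestamp by applying the node's
    known millisecond offset from platform time to its local clock.
    Sign convention (assumed): offset_ms > 0 means the local clock lags
    platform time, so the offset is added to the local reading."""
    return time.time_ns() + offset_ms * 1_000_000
```

An agent would call this when stamping each sample so that timestamps from different nodes can be compared as a single time axis.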
 
-##### Spatial Measurements
+### Spatial Measurements (TBC Simon Crowle)
 
 Location can be provided in two forms: labelled (tag) and numeric (longitude and latitude as decimal degrees). Note that the location label is likely to be a _global tag_.
 
@@ -96,9 +119,29 @@ If tags are used then measurements of GPS coordinates will need to be translated
 
 Matching on tags is limited to exact matches and potentially spatial hierarchies (e.g. country.city.street). Using a coordinate system allows for mathematical functions to be developed (e.g. proximity functions).
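As an example of such a proximity function, the haversine formula gives the great-circle distance between two points expressed as latitude/longitude in decimal degrees:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def proximity_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given as
    latitude/longitude in decimal degrees (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))
```

A function like this could, for example, rank surrogate locations by distance from a client's reported coordinates, something tag matching alone cannot express.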
 
-##### Measurement Context
+## **Decisions**
+
+### Service Management Decisions
+
+Capacity Decision Context
+
+* Provision compute node
+* Provision network provider
+
+Media Service Decision Context
+
+* Place SF
+* Unplace SF
+* Boot SF
+* Connect SF
+* Route to SF against criteria
+* Change resource allocation to SF 
 
-Monitoring data is collected to support design, management and control decisions. The link between decisions and data is through queries applied to contextual information stored with measurement values.  
+Health Checks?
+
+### Decision Context
+
+Monitoring data is collected to support service design, management and control decisions. The link between decisions and data is through queries and rules applied to contextual information stored with measurement values.  
 
 ![MeasurementContext](/docs/image/measurement-context.jpg)
 
@@ -113,24 +156,22 @@ To support this query the following measurement would be created:
 Designing the context for measurements is an important step in the schema design. This is especially important when measurements from multiple monitoring sources need to be integrated and processed to provide data for queries and decisions. The key design principles adopted include:
 
 * identify common context across different measurements
-* use the same identifiers and naming conventions for context across different measurements
+* where possible use the same identifiers and naming conventions for context across different measurements
 * organise the context into hierarchies that are automatically added to measurements during the collection process
 
 ![CommonContext](/docs/image/common-measurement-context.jpg)
 
 The following figure shows the general structuring approach for two measurements A and B. Data points in each series have a set of tags that share a common context along with a specific context related to the measurement values.
 
-Now let’s look at the FLAME platform context for measurements within the FLAME platform. In the diagram below core of the model is the VM/Container instance as the primary measurement point as this is the  realisation of computational processes running on the platform. A VM/Container is aone or more process running on a physical or virtual host with ports connecting to other VM/Container instances over network links. The VM/Container has measurement processes running to capture different views on the VM/Container include the network, host, and service. The acquisition of these different views of the VM/Container together are a key element of the cross-layer information required for management and control.  The measurements about a VM/Container are captured by different processes running on the VM or container but are brought together by comon context allowing the information to be integrated, correlated and analysed. 
+![FLAMEMeasurements](/docs/image/flame-measurements.jpg)
 
-We consider three views on the VM/Container instance including (in orange)
+The measurement model considers three views on the VM/Container instance with field values covering:
 
 * service: specific metrics associated within the SF (either media component or platform component) 
 * network: data usage TX/RX, latency, jitter, etc.
 * host: cpu, memory, storage, storage I/O, etc.
 
-![FLAMEContext](/docs/image/flame-context.jpg)
-
-All of the measurements on a specific VM/Container instance share a common context (green) that includes
+All of the measurements on a specific VM/Container instance share a common context that includes tag values:
 
 * sfc – an orchestration template
 * sfc_instance – an instance of the orchestration template
@@ -140,32 +181,29 @@ All of the measurements on a specific VM/Container instance share a common conte
 * server – a physical or virtual server for hosting VM instances
 * location – the location of the server
 	
-By including this context with service, network and host measurements it is possible to support a wide range of temporal queries associated with SFC’s whether they are Media Services or the Platform components . By adopting the same convention for identifiers it is possible to combine measurements across service, network and host to create new series that allows exploration of different aspects of the VM instance.
+By including this context with service, network and host measurements it is possible to support a wide range of temporal queries associated with SFCs. By adopting the same convention for identifiers it is possible to combine measurements across service, network and host to create new series that allow exploration of different aspects of the VM instance, including cross-layer queries.
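A cross-layer combination of this kind can be sketched as a join of two series on their shared context tags and timestamps. The tag and field names here are illustrative, and a real deployment would perform this with InfluxDB queries rather than an in-memory join.

```python
# Sketch: correlate two measurement series (e.g. service latency and
# network RX bytes) via their shared global-tag context and timestamp.
# A data point is modelled as a (tags, fields, timestamp) tuple.

COMMON_TAGS = ("sfc", "sfc_instance", "sf", "sf_instance", "vm")

def context_key(tags, ts):
    """Key a point by its common-context tag values plus timestamp."""
    return tuple(tags.get(t) for t in COMMON_TAGS) + (ts,)

def join_series(series_a, series_b):
    """Merge fields of points that share the same common context and timestamp."""
    index = {context_key(tags, ts): fields for tags, fields, ts in series_b}
    joined = []
    for tags, fields, ts in series_a:
        match = index.get(context_key(tags, ts))
        if match is not None:
            joined.append((tags, {**fields, **match}, ts))
    return joined
```

The point of the sketch is that the join only works because both series carry the same identifiers for sfc, sf, vm, etc., which is exactly why the common naming convention matters.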
 
-Give a worked example across service and network measurements
+Give a worked example across service and network measurements based on the mpeg-dash service
 
-* Decide on the measurement of interest and how it's calculated from a series of one or more other measurements (i.e. the function)
+* Decide on the service management decisions and time scales
+* Decide on the measurements of interest that are needed to make the decisions
+* Decide how measurements are calculated from a series of one or more other measurements 
 * Decide on time window for the series and sample rate
 * Decide on interpolation approach for data points in the series
 
 Discuss specific tags
 
-![FLAMEMeasurements](/docs/image/flame-measurements.jpg)
+## **Architecture**
 
-### Architecture
-
-The monitoring model uses an agent based approach with hierarchical aggregation used as required. The general architecture is shown in the diagram below.
+### General
+
+The monitoring model uses an agent-based approach with hierarchical aggregation used as required for different time scales of decision making. The general architecture is shown in the diagram below.
 
 ![AgentArchitecture](/docs/image/agent-architecture.jpg)
 
-For monitoring a service function, an agent is deployed on each of the container/VM implementing a SF. The agent is deployed by the orchestrator when the SF is provisioned. The agent is configured with a set of input plugins that collect measurements from three aspects of the SF including network, host and SF usage/perf. The agent is configured with a set of global tags that are inserted for all measurements made by the agent on the host.
+To monitor an SF, an agent is deployed on each container/VM implementing the SF. The agent is deployed by the orchestrator when the SF is provisioned. The agent is configured with a set of input plugins that collect measurements from the three viewpoints of network, host and service. The agent is configured with a set of global tags that are inserted for all measurements made by the agent on the host.
 
-Telegraf agent-based monitoring with the following plugins potentially relevant for integration with FLAME
+Telegraf offers a wide range of input plugins for integration with relevant monitoring processes:
 
-* Telegraf AMQP: https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/amqp_consumer
-* Telegrapf http json: https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/httpjson
-* Telegraf http listener: https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/http_listener 
-* Telegraf Bespoke Plugin: https://www.influxdata.com/blog/how-to-write-telegraf-plugin-beginners/
 * Telegraf Existing Plugins for common services, relevant plugins include
 * Network Response https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/net_response: could be used to perform basic network monitoring
  * nstat https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/nstat : could be used to monitor the network
@@ -174,38 +212,39 @@ Telegraf agent-based monitoring with the following plugins potentially relevant
  * SNMP https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/snmp: could be used to monitor flows
 * sysstat https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/sysstat: could be used to monitor hosts
 
-Agents:
+Telegraf also offers transport-level plugins for integration with 3rd-party monitoring processes:
 
-* deployed at monitoring points (e.g surrogates and other network elements)
-* insert contextual metadata as tags into measurements
-* But how does this relate to the Mona agents?
+* Telegraf AMQP: https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/amqp_consumer
+* Telegraf http json: https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/httpjson
+* Telegraf http listener: https://github.com/influxdata/telegraf/tree/release-1.5/plugins/inputs/http_listener 
+* Telegraf Bespoke Plugin: https://www.influxdata.com/blog/how-to-write-telegraf-plugin-beginners/
 
-Hierarchical monitoring and scalability considerations
+The architecture considers hierarchical monitoring and scalability; for example, AMQP can be used to buffer monitoring information, whilst InfluxDB can be used to provide intermediate aggregation points when used with Telegraf input and output plugins.
 
-* AMQP can be used to buffer monitoring info
-* InfluxDB can be used to provide aggregation points when used with Telegraf input and output plugin
-* But how does this relate to the pub/sub and mySQL aggregator in FLIPS?
+### Integration with FLIPS Monitoring
 
-Using FLIPS monitoring
+FLIPS offers a scalable pub/sub system for distributing monitoring data. The architecture is described in the POINT monitoring specification https://drive.google.com/file/d/0B0ig-Rw0sniLMDN2bmhkaGIydzA/view. Some observations can be made:
 
-FLIPS offers a hightly scalable pub/sub system. We'll most likely need to use this in place of RabbitMQ for the infrastructure monitoring. The 
-monitoring specification is here:
+* MOOSE and the CLMC provide similar functions in the architecture; the CLMC will not have access to MOOSE but will need to subscribe to data points provided by FLIPS
+* The APIs for Moly and Blackadder are not provided, therefore it's not possible to critically assess the correct implementation approach for agents and monitoring data distribution
+* Individual data points need to be aggregated into measurements
+* It's likely that we'll have to use the blackadder API for distribution of monitoring data, replacing messaging systems such as RabbitMQ, with all buffering and pub/sub deployed on the nodes themselves rather than a central service.
 
-https://drive.google.com/file/d/0B0ig-Rw0sniLMDN2bmhkaGIydzA/view
+There are a few architectural choices. The first, below, uses Moly as an integration point for monitoring processes via a Telegraf output plugin, with data inserted into InfluxDB using a blackadder API input plugin on another Telegraf agent running on the CLMC. In this case, managing the subscriptions to nodes and data points is difficult. In addition, some data points will be individual values from FLIPS monitoring whilst others will be in line protocol format from Telegraf. For the FLIPS data points, a new input plugin would be required to aggregate individual data points into time-series measurements.
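That aggregation step could be sketched as grouping individual data points that share a source and timestamp into multi-field samples. The data point shape used here is an assumption, since the FLIPS formats are not specified.

```python
from collections import defaultdict

def aggregate_datapoints(points):
    """Group individual (source, metric, value, timestamp) data points,
    as FLIPS might publish them, into multi-field measurement samples
    keyed by source and timestamp. Field names are illustrative."""
    samples = defaultdict(dict)
    for source, metric, value, ts in points:
        samples[(source, ts)][metric] = value
    return [
        {"source": source, "timestamp": ts, "fields": fields}
        for (source, ts), fields in sorted(samples.items())
    ]
```

Each resulting sample could then be serialised to line protocol with the node's global tags attached before being published onwards.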
 
 ![FLIPSAgentArchitecture](/docs/image/flips-monitoring-architecture.jpg)
 
-**Trust in measurements**
+The second (currently preferred) choice only sends line protocol format over the wire. Here we develop Telegraf input and output plugins for blackadder, benefiting from the scalable nature of the pub/sub system rather than introducing RabbitMQ as a central server. In this case the agent on each node would be configured with input plugins for service, host and network. We'd deploy a new Telegraf input plugin for FLIPS data points on the node's agent by subscribing to blackadder locally, and then publish the aggregated measurement using the line protocol back over blackadder to the CLMC. FLIPS can still publish data to MOOSE as required.
 
-If the agent is deployed in a VM/container that a tenant has root access then a tenant could change the configuration to fake measurements associated with network and host in an attempt gain benefit. This is a security risk. Some ideas include
+![FLIPSAgentArchitecture](/docs/image/flips-monitoring-architecture2.jpg)
 
-* Deploy additional agents on hosts rather than agents to measure network and VM performance. Could be hard to differentiate between the different SFs deployed on a host
-* Generate a hash from the agent configuration file that's checked within the monitoring message. Probably too costly and not part of the telegraf protocol
-* Use unix permissions (e.g. surrogates are deployed within root access to them)
+The pub/sub protocol still needs some work as we don't want the CLMC to have to subscribe to nodes as they start and stop. We want the nodes to register with a known CLMC and then start publishing data to the CLMC according to a monitoring configuration (e.g. sample rate, etc.). So we want a "monitoring topic" that nodes publish to and that the CLMC can pull data from. This topic is on the CLMC itself and not the nodes. Reading the FLIPS specification, it seems that this is not how the nodes currently distribute data, although this could be wrong.
+
+## **Measurements Summary**
 
-## Configuration Measurement Summary
+### Configuration
 
-|Context|Measurement|Description
+|Decision Context|Measurement|Description|
 |---|---|---|
 |Capacity|host_resource|the compute infrastructure allocation to the platform|
 |Capacity|network_resource|the network infrastructure allocation to the platform|
@@ -215,12 +254,9 @@ If the agent is deployed in a VM/container that a tenant has root access then a
 |Media Service|vm_host_config|compute resources allocated to a VM|
 |Media Service|net_port_config|networking constraints on port on a VM|
 
-*Need to refer to TOSCA here*
-
-## Usage and Performance Measurement Summary
+### Monitoring
 
-
-|Context|Measurement|Description
+|Decision Context|Measurement|Description|
 |---|---|---|
 |Platform|nap_data_io|nap data io at byte, ip and http levels|
 |Platform|nap_fqdn_perf|fqdn request rate and latency|
@@ -238,7 +274,7 @@ If the agent is deployed in a VM/container that a tenant has root access then a
 |Media Service|service|vm service perf metrics|
 
 
-## Capacity 
+## Capacity Measurements 
 
 Capacity measurements measure the size of the infrastructure slice available to the platform that can be allocated on demand to tenants.
 
@@ -258,7 +294,7 @@ network_resource measures the overall capacity of the network available to the p
 
 `network_resource,slice_id="",network_id="", bandwidth=(integer),X=(integer),Y=(integer),Z=(integer) timestamp`
 
-## Platform
+## Platform Measurements 
 
 Platform measurements measure the configuration, usage and performance of platform components.
 
@@ -307,11 +343,11 @@ Fields
 
 **clmc**
 
-## Media Service 
+## Media Service Measurements 
 
 Media service measurements measure the configuration, usage and performance of media service instances deployed by the platform.
 
-### Service Function Chain
+### Service Function Chain Measurements
 
 **sfc_config**
 
@@ -508,3 +544,13 @@ Link Tags
 **link_perf**
 
 link perf is measured at the nodes, related to end_to_end_latency. Needs further work.
+
+# Other Issues
+
+**Trust in measurements**
+
+If the agent is deployed in a VM/container to which a tenant has root access, then the tenant could change the configuration to fake measurements associated with network and host in an attempt to gain benefit. This is a security risk. Some ideas include:
+
+* Deploy additional agents on the hosts rather than in the VMs to measure network and VM performance. Could be hard to differentiate between the different SFs deployed on a host
+* Generate a hash from the agent configuration file that's checked within the monitoring message. Probably too costly and not part of the Telegraf protocol
+* Use unix permissions (e.g. surrogates are deployed without root access to them)
\ No newline at end of file