diff --git a/README.md b/README.md
index 1b183b66002ab3039d62df7bb8cd1d7fe41de494..b0dbab9c9fc4db926b31a0a07c15acc18b1e615c 100644
--- a/README.md
+++ b/README.md
@@ -50,55 +50,45 @@ https://gitlab.it-innovation.soton.ac.uk/mjb/flame-clmc/blob/integration/docs/ad
 
 tbd
 
-#### Installation
+#### Testing
 
-To set up the adaptive streaming use case scenario 
+Testing is implemented using pytest. 
 
-`vagrant up`
+The installation script is here:
 
-This will provision the following VMs clmc, ipendpoint1, ipendpoint2, nap1, nap2
+`test/services/pytest/install.sh`
 
-The **clmc** vm includes influx, Kapacitor and Chronograf. The following ports forwarded to the clmc VM from the host machine are as follows:
+The tests follow these conventions:
 
-* Influx: 8086 
-* Chronograf: 8888
-* Kapacitor: 9092
-
-#### Running the simulation
+* Tests are written in Python using pytest
+* Related tests are grouped into a Python module `test/<testmodule>` to create a suite of tests. All tests are stored in files named test_*.py; there can be many tests per file and many files per module
+* Each test module has an rspec.yml that provides the baseline "fixture" for the tests in the module
+* Tests are executed against fixtures. Fixtures are modular "setups" created for a test and inserted into the Python code using dependency injection, which offers more flexibility than classic xUnit-style testing. The baseline deployment is created using `vagrant up` with an appropriate rspec, and the pytest fixture reads the rspec.yml and makes the configuration available to the test
+* Tests are executed from a guest VM (not the host) in the repo root using the command `pytest test/<testmodule>`
+* Pytest scans the directory for tests in all files named test_*.py and runs them
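A minimal sketch of the fixture convention described above. All names here are illustrative; in the real suite the fixture would read `test/<testmodule>/rspec.yml` (e.g. with `yaml.safe_load()`), whereas this sketch inlines the parsed result so it is self-contained:

```python
# Hypothetical conftest.py sketch (names are illustrative, not the
# project's actual fixture). The real fixture would load the module's
# rspec.yml; here the parsed contents are inlined for self-containment.
import pytest

# Stand-in for the parsed contents of an rspec.yml baseline fixture.
RSPEC = {
    "hosts": [
        {"name": "clmc-service", "ip_address": "192.168.50.10"},
        {"name": "ipendpoint1", "ip_address": "192.168.50.11"},
        {"name": "ipendpoint2", "ip_address": "192.168.50.12"},
    ]
}

@pytest.fixture(scope="module")
def rspec_config():
    """Injected into any test in the module that names it as a parameter."""
    return RSPEC

def test_clmc_service_present(rspec_config):
    # pytest supplies rspec_config via dependency injection
    names = [host["name"] for host in rspec_config["hosts"]]
    assert "clmc-service" in names
```

Running `pytest test/<testmodule>` from the repo root would then discover any test_*.py files in the module and inject the fixture into each test by parameter name.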
 
-SSH into the CLMC server
-
-`vagrant ssh clmc`
-
-Run a python script to generate the test data sets
+#### Creating a deployment for a test
 
-`python3 vagrant/src/mediaServiceSim/simulator_v2.py`
+To set up a simulation of the adaptive streaming use case scenario, run
 
+`vagrant --fixture=streaming-sim -- up`
 
-#### Java/Unit Test Framework (Not currently used)
-A Java/JUnit test framework has been developed to provide concrete examples of the CLMC monitoring specification. To build and run this test framework you will need:
+This will provision the following VMs: clmc-service, ipendpoint1 and ipendpoint2
 
-1. The CLMC TICK stack installed and running (provided as a Vagrant solution in this project)
+The **clmc-service** VM includes InfluxDB, Kapacitor and Chronograf. The following ports are forwarded from the clmc-service VM to the host machine:
 
-2. Java JDK 1.8+ installed
-3. Maven 3+ installed
-  - Optionally a Java IDE installed, such as NetBeans
-
-##### Building the test framework
-
-1. Clone this project (obviously)
+* InfluxDB: 8086
+* Chronograf: 8888
+* Kapacitor: 9092
 
-2. Open the Maven project (\<flame-clmc root\>\src\clmc-spec) in your Java IDE or navigate to POM.xml file in command line
+#### Running the streaming-sim test
 
-3. Check the monSpecTestConfig.properties file (src/main/resources) matches your TICK stack set-up. It's likely to.  
+**needs to be updated once we have this in pytest format**
 
-4.  Build the project (this should automatically build and run the tests)
-    > From the command line: mvn test
+SSH into the CLMC server
 
-##### Extending the test framework
-This test framework is easily extendible. There are two simple tests already ready for you to explore:
+`vagrant --fixture=streaming-sim -- ssh clmc-service`
 
-* BasicInputTest.java - This tries sending some basic metrics about a host to InfluxDB
-* BasicQueryTest.java - This tries querying InfluxDB about host metrics
+Run a python script to generate the test data sets
 
-Each test case uses resources in the project to send test data or execute queries. In the first case the resource '/src/main/resources/inputs/host_resource_input' is a file with example InfluxDB Line Protocol statements. In the second test case, the file '/src/main/resources/host_resource_query' contains queries that are executed against InfluxDB. The results of these are currently just output to the console.
\ No newline at end of file
+`python3 /vagrant/test/streaming-sim/StreamingSim.py`
diff --git a/Vagrantfile b/Vagrantfile
index d388c51ee4a4cfd8cd7bf87cf1b28480421c72cc..b0d51525f21193cac18c30fee3eebbe8063cdd74 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -27,29 +27,29 @@ require 'getoptlong'
 require 'yaml'
 
 # Custom options:
-#   --infra <infradir>
+#   --fixture <fixturedir>
 
 # Set defaults
-DEFAULT_INFRA = "full"
+DEFAULT_FIXTURE = "streaming"
 
 # Define custom options
 opts = GetoptLong.new(
-  [ '--infra', GetoptLong::OPTIONAL_ARGUMENT]
+  [ '--fixture', GetoptLong::OPTIONAL_ARGUMENT]
 )
 
 # Retrieve custom option values
-infra = DEFAULT_INFRA
+fixture = DEFAULT_FIXTURE
 opts.each do |opt, arg|
  case opt
-   when '--infra'
-    infra = arg    
+   when '--fixture'
+    fixture = arg    
  end
 end
 
 # load custom config file
-puts "loading custom infrastructure configuration: #{infra}"
-puts "custom config file: /infra/#{infra}/rspec.yml"
-host_rspec_file = "infra/#{infra}/rspec.yml"
+puts "loading custom infrastructure configuration: #{fixture}"
+puts "custom config file: /test/#{fixture}/rspec.yml"
+host_rspec_file = "test/#{fixture}/rspec.yml"
 hosts = YAML.load_file(host_rspec_file)
 
 # Start creating VMS using xenial64 as the base box
diff --git a/docs/TestScenarios.md b/docs/TestScenarios.md
index 4f10c29392dfe34d10027f25570010483f3b75ab..d9758c908ca1d5bfcde7038eb340a2d06b374923 100644
--- a/docs/TestScenarios.md
+++ b/docs/TestScenarios.md
@@ -13,7 +13,11 @@
 | show all metrics for a database | ```influx -execute 'SHOW MEASUREMENTS ON testDB'``` |
 | show all databases | ```inflix -execute 'SHOW DATABASES'``` |
 
+### Using Chronograf
 
+Open ```http://localhost:8888/sources/1/chronograf/data-explorer``` and log in with:
+* user: telegraf
+* password: metricsmetricsmetrics
 
 ### Scenario 1 - Linear user load increase
 
diff --git a/infra/full/rspec.yml b/infra/full/rspec.yml
deleted file mode 100644
index 06acf9ac6c4421600da2fabd9d00ecd901a1432d..0000000000000000000000000000000000000000
--- a/infra/full/rspec.yml
+++ /dev/null
@@ -1,99 +0,0 @@
-hosts:
-  - name: clmc-service
-    cpus: 1
-    memory: 2048
-    disk: "10GB"
-    forward_ports:
-      - guest: 8086
-        host: 8086
-      - guest: 8888
-        host: 8888
-      - guest: 9092
-        host: 9092
-    ip_address: "192.168.50.10"
-  - name: apache1
-    cpus: 1
-    memory: 2048
-    disk: "10GB"
-    service_name: "apache"
-    forward_ports:
-      - guest: 80
-        host: 8081
-    ip_address: "192.168.50.11"
-    location: "DC1"
-    sfc_id: "MS_Template_1"
-    sfc_id_instance: "MS_I1"
-    sf_id: "adaptive_streaming"
-    sf_id_instance: "adaptive_streaming_I1"
-    ipendpoint_id: "adaptive_streaming_I1_apache1"
-    influxdb_url: "http://192.168.50.10:8086"
-    database_name: "CLMCMetrics"
-  - name: apache2
-    cpus: 1
-    memory: 2048
-    disk: "10GB"
-    service_name: "apache"
-    forward_ports:
-      - guest: 80
-        host: 8082
-    ip_address: "192.168.50.12"
-    location: "DC2"
-    sfc_id: "MS_Template_1"
-    sfc_id_instance: "MS_I1"
-    sf_id: "adaptive_streaming"
-    sf_id_instance: "adaptive_streaming_I1"
-    ipendpoint_id: "adaptive_streaming_I1_apache2"
-    influxdb_url: "http://192.168.50.10:8086"
-    database_name: "CLMCMetrics"      
-    
-  - name: nginx
-    cpus: 1
-    memory: 2048
-    disk: "10GB"
-    service_name: "nginx"
-    forward_ports:
-      - guest: 80
-        host: 8083
-    ip_address: "192.168.50.13"
-    location: "DC1"
-    sfc_id: "MS_Template_1"
-    sfc_id_instance: "MS_I1"
-    sf_id: "adaptive_streaming"
-    sf_id_instance: "adaptive_streaming_nginx_I1"
-    ipendpoint_id: "adaptive_streaming_nginx_I1_apache1"
-    influxdb_url: "http://192.168.50.10:8086"
-    database_name: "CLMCMetrics"
-  - name: mongo
-    cpus: 1
-    memory: 2048
-    disk: "10GB"
-    service_name: "mongo"
-    forward_ports:
-      - guest: 80
-        host: 8084
-    ip_address: "192.168.50.14"
-    location: "DC1"
-    sfc_id: "MS_Template_1"
-    sfc_id_instance: "MS_I1"
-    sf_id: "metadata_database"
-    sf_id_instance: "metadata_database_I1"
-    ipendpoint_id: "metadata_database_I1_apache1"
-    influxdb_url: "http://192.168.50.10:8086"
-    database_name: "CLMCMetrics" 
-  - name: ffmpeg
-    cpus: 1
-    memory: 2048
-    disk: "10GB"
-    service_name: "ffmpeg"
-    forward_ports:
-      - guest: 80
-        host: 8085
-    ip_address: "192.168.50.14"
-    location: "DC1"
-    sfc_id: "MS_Template_1"
-    sfc_id_instance: "MS_I1"
-    sf_id: "metadata_database"
-    sf_id_instance: "metadata_database_I1"
-    ipendpoint_id: "metadata_database_I1_apache1"
-    influxdb_url: "http://192.168.50.10:8086"
-    database_name: "CLMCMetrics" 
diff --git a/scripts/influx/install-clmc-agent.sh b/scripts/influx/install-clmc-agent.sh
deleted file mode 100755
index ab3d0bdcecd807e2323da45807e62c8eb2a17060..0000000000000000000000000000000000000000
--- a/scripts/influx/install-clmc-agent.sh
+++ /dev/null
@@ -1,70 +0,0 @@
-#!/bin/bash
-#/////////////////////////////////////////////////////////////////////////
-#//
-#// (c) University of Southampton IT Innovation Centre, 2017
-#//
-#// Copyright in this software belongs to University of Southampton
-#// IT Innovation Centre of Gamma House, Enterprise Road,
-#// Chilworth Science Park, Southampton, SO16 7NS, UK.
-#//
-#// This software may not be used, sold, licensed, transferred, copied
-#// or reproduced in whole or in part in any manner or form or in or
-#// on any media by any person other than in accordance with the terms
-#// of the Licence Agreement supplied with the software, or otherwise
-#// without the prior written consent of the copyright owners.
-#//
-#// This software is distributed WITHOUT ANY WARRANTY, without even the
-#// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-#// PURPOSE, except where stated in the Licence Agreement supplied with
-#// the software.
-#//
-#//      Created By :            Michael Boniface
-#//      Created Date :          13/12/2017
-#//      Created for Project :   FLAME
-#//
-#/////////////////////////////////////////////////////////////////////////
-
-# Install telegraf
-if [ "$#" -ne 9 ]; then
-    echo "Error: illegal number of arguments: "$#
-      echo "Usage: install-clmc-agent.sh TELEGRAF_CONF_FILE LOCATION SFC_ID SFC_ID_INSTANCE SF_ID SF_ID_INSTANCE IP_ENDPOINT_ID INFLUXDB_URL DATABASE_NAME"
-      exit 
-fi
-
-TELEGRAF_CONF_FILE=$1
-LOCATION=$2
-SFC_ID=$3
-SFC_ID_INSTANCE=$4
-SF_ID=$5
-SF_ID_INSTANCE=$6
-IP_ENDPOINT_ID=$7
-INFLUXDB_URL=$8
-DATABASE_NAME=$9
-
-if [ ! -f $TELEGRAF_CONF_FILE]; then
-    echo "Error: Telegraf conf template file not found: "$TELEGRAF_CONF_FILE
-    exit
-fi
-
-wget https://dl.influxdata.com/telegraf/releases/telegraf_1.3.2-1_amd64.deb
-dpkg -i telegraf_1.3.2-1_amd64.deb
-
-# Copy configuration
-echo "Telegraf config file: " $TELEGRAF_CONF_FILE
-cp $TELEGRAF_CONF_FILE /etc/telegraf/telegraf.conf
-
-echo "INFLUXDB_URL: " $INFLUXDB_URL
-echo "DATABASE_NAME: " $DATABASE_NAME
-
-# Replace template parameters
-sed -i 's/{{LOCATION}}/'$LOCATION'/g' /etc/telegraf/telegraf.conf
-sed -i 's/{{SFC_ID}}/'$SFC_ID'/g' /etc/telegraf/telegraf.conf
-sed -i 's/{{SFC_ID_INSTANCE}}/'$SFC_ID_INSTANCE'/g' /etc/telegraf/telegraf.conf
-sed -i 's/{{SF_ID}}/'$SF_ID'/g' /etc/telegraf/telegraf.conf
-sed -i 's/{{SF_ID_INSTANCE}}/'$SF_ID_INSTANCE'/g' /etc/telegraf/telegraf.conf
-sed -i 's/{{IP_ENDPOINT_ID}}/'$IP_ENDPOINT_ID'/g' /etc/telegraf/telegraf.conf
-sed -i 's|{{INFLUXDB_URL}}|'$INFLUXDB_URL'|g' /etc/telegraf/telegraf.conf
-sed -i 's/{{DATABASE_NAME}}/'$DATABASE_NAME'/g' /etc/telegraf/telegraf.conf
-
-# Start telegraf
-systemctl start telegraf
\ No newline at end of file
diff --git a/scripts/influx/install-clmc-service.sh b/scripts/influx/install-clmc-service.sh
deleted file mode 100755
index 42e247ad17b20eba67cdc425891446d8ee83ea99..0000000000000000000000000000000000000000
--- a/scripts/influx/install-clmc-service.sh
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/bin/bash
-#/////////////////////////////////////////////////////////////////////////
-#//
-#// (c) University of Southampton IT Innovation Centre, 2017
-#//
-#// Copyright in this software belongs to University of Southampton
-#// IT Innovation Centre of Gamma House, Enterprise Road,
-#// Chilworth Science Park, Southampton, SO16 7NS, UK.
-#//
-#// This software may not be used, sold, licensed, transferred, copied
-#// or reproduced in whole or in part in any manner or form or in or
-#// on any media by any person other than in accordance with the terms
-#// of the Licence Agreement supplied with the software, or otherwise
-#// without the prior written consent of the copyright owners.
-#//
-#// This software is distributed WITHOUT ANY WARRANTY, without even the
-#// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-#// PURPOSE, except where stated in the Licence Agreement supplied with
-#// the software.
-#//
-#//      Created By :            Michael Boniface
-#//      Created Date :          13/12/2017
-#//      Created for Project :   FLAME
-#//
-#/////////////////////////////////////////////////////////////////////////
-
-# install python for the simulator
-apt-get update
-apt-get -y install python
-
-# install influx
-wget https://dl.influxdata.com/influxdb/releases/influxdb_1.2.4_amd64.deb
-dpkg -i influxdb_1.2.4_amd64.deb
-
-# install kapacitor
-wget https://dl.influxdata.com/kapacitor/releases/kapacitor_1.3.1_amd64.deb
-dpkg -i kapacitor_1.3.1_amd64.deb
-
-# install Chronograf
-wget https://dl.influxdata.com/chronograf/releases/chronograf_1.3.3.0_amd64.deb
-dpkg -i chronograf_1.3.3.0_amd64.deb
diff --git a/scripts/influx/start-clmc-service.sh b/scripts/influx/start-clmc-service.sh
deleted file mode 100755
index f92c6b5eaf0c93b5a98585b4aab4182d09e2360e..0000000000000000000000000000000000000000
--- a/scripts/influx/start-clmc-service.sh
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/bin/bash
-#/////////////////////////////////////////////////////////////////////////
-#//
-#// (c) University of Southampton IT Innovation Centre, 2018
-#//
-#// Copyright in this software belongs to University of Southampton
-#// IT Innovation Centre of Gamma House, Enterprise Road,
-#// Chilworth Science Park, Southampton, SO16 7NS, UK.
-#//
-#// This software may not be used, sold, licensed, transferred, copied
-#// or reproduced in whole or in part in any manner or form or in or
-#// on any media by any person other than in accordance with the terms
-#// of the Licence Agreement supplied with the software, or otherwise
-#// without the prior written consent of the copyright owners.
-#//
-#// This software is distributed WITHOUT ANY WARRANTY, without even the
-#// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-#// PURPOSE, except where stated in the Licence Agreement supplied with
-#// the software.
-#//
-#//      Created By :            Simon Crowle
-#//      Created Date :          03/11/2018
-#//      Created for Project :   FLAME
-#//
-#/////////////////////////////////////////////////////////////////////////
-
-echo Starting TICK stack services...
-
-systemctl start influxdb
-systemctl start kapacitor
-systemctl start chronograf
\ No newline at end of file
diff --git a/scripts/influx/telegraf_ipendpoint_template.conf b/scripts/influx/telegraf_ipendpoint_template.conf
deleted file mode 100644
index 2358dcca5bfcd48d4b45e0e1ccd316357f1e4ba7..0000000000000000000000000000000000000000
--- a/scripts/influx/telegraf_ipendpoint_template.conf
+++ /dev/null
@@ -1,112 +0,0 @@
-# Telegraf configuration
-
-# Telegraf is entirely plugin driven. All metrics are gathered from the
-# declared inputs, and sent to the declared outputs.
-
-# Plugins must be declared in here to be active.
-# To deactivate a plugin, comment out the name and any variables.
-
-# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
-# file would generate.
-
-# Global tags can be specified here in key="value" format.
-[global_tags]
-  # location of the data centre
-  location="{{LOCATION}}"
-  # media service template id
-  sfc="{{SFC_ID}}"
-  # media service instance
-  sfc_i="{{SFC_ID_INSTANCE}}"
-  # service function type
-  sf="{{SF_ID}}"
-  # service function instance id
-  sf_i="{{SF_ID_INSTANCE}}"
-  # ipendpoint id aka surrogate instance
-  ipendpoint="{{IP_ENDPOINT_ID}}"
-
-# Configuration for telegraf agent
-[agent]
-  ## Default data collection interval for all inputs
-  interval = "10s"
-  ## Rounds collection interval to 'interval'
-  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
-  round_interval = true
-
-  ## Telegraf will cache metric_buffer_limit metrics for each output, and will
-  ## flush this buffer on a successful write.
-  metric_buffer_limit = 1000
-  ## Flush the buffer whenever full, regardless of flush_interval.
-  flush_buffer_when_full = true
-
-  ## Collection jitter is used to jitter the collection by a random amount.
-  ## Each plugin will sleep for a random time within jitter before collecting.
-  ## This can be used to avoid many plugins querying things like sysfs at the
-  ## same time, which can have a measurable effect on the system.
-  collection_jitter = "0s"
-
-  ## Default flushing interval for all outputs. You shouldn't set this below
-  ## interval. Maximum flush_interval will be flush_interval + flush_jitter
-  flush_interval = "10s"
-  ## Jitter the flush interval by a random amount. This is primarily to avoid
-  ## large write spikes for users running a large number of telegraf instances.
-  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
-  flush_jitter = "0s"
-
-  ## Logging configuration:
-  ## Run telegraf in debug mode
-  debug = false
-  ## Run telegraf in quiet mode
-  quiet = false
-  ## Specify the log file name. The empty string means to log to stdout.
-  logfile = "G:/Telegraf/telegraf.log"
-
-  ## Override default hostname, if empty use os.Hostname()
-  hostname = ""
-
-
-###############################################################################
-#                                  OUTPUTS                                    #
-###############################################################################
-
-# Configuration for influxdb server to send metrics to
-[[outputs.influxdb]]
-  # The full HTTP or UDP endpoint URL for your InfluxDB instance.
-  # Multiple urls can be specified but it is assumed that they are part of the same
-  # cluster, this means that only ONE of the urls will be written to each interval.
-  # urls = ["udp://127.0.0.1:8089"] # UDP endpoint example
-  urls = ["{{INFLUXDB_URL}}"] # required
-  # The target database for metrics (telegraf will create it if not exists)
-  database = "{{DATABASE_NAME}}" # required
-  # Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-  # note: using second precision greatly helps InfluxDB compression
-  precision = "s"
-
-  ## Write timeout (for the InfluxDB client), formatted as a string.
-  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
-  timeout = "5s"
-  # username = "telegraf"
-  # password = "metricsmetricsmetricsmetrics"
-  # Set the user agent for HTTP POSTs (can be useful for log differentiation)
-  # user_agent = "telegraf"
-  # Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
-  # udp_payload = 512
-
-
-###############################################################################
-#                                  INPUTS                                     #
-###############################################################################
-# # Influx HTTP write listener
-[[inputs.http_listener]]
-  ## Address and port to host HTTP listener on
-  service_address = ":8186"
-
-  ## timeouts
-  read_timeout = "10s"
-  write_timeout = "10s"
-
-  ## HTTPS
-  #tls_cert= "/etc/telegraf/cert.pem"
-  #tls_key = "/etc/telegraf/key.pem"
-
-  ## MTLS
-  #tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]
diff --git a/src/clmc-spec/.classpath b/src/clmc-spec/.classpath
deleted file mode 100644
index 6d7587a819e638a7f25352b31dcd0b4e876e42da..0000000000000000000000000000000000000000
--- a/src/clmc-spec/.classpath
+++ /dev/null
@@ -1,31 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<classpath>
-	<classpathentry kind="src" output="target/classes" path="src/main/java">
-		<attributes>
-			<attribute name="optional" value="true"/>
-			<attribute name="maven.pomderived" value="true"/>
-		</attributes>
-	</classpathentry>
-	<classpathentry excluding="**" kind="src" output="target/classes" path="src/main/resources">
-		<attributes>
-			<attribute name="maven.pomderived" value="true"/>
-		</attributes>
-	</classpathentry>
-	<classpathentry kind="src" output="target/test-classes" path="src/test/java">
-		<attributes>
-			<attribute name="optional" value="true"/>
-			<attribute name="maven.pomderived" value="true"/>
-		</attributes>
-	</classpathentry>
-	<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.8">
-		<attributes>
-			<attribute name="maven.pomderived" value="true"/>
-		</attributes>
-	</classpathentry>
-	<classpathentry kind="con" path="org.eclipse.m2e.MAVEN2_CLASSPATH_CONTAINER">
-		<attributes>
-			<attribute name="maven.pomderived" value="true"/>
-		</attributes>
-	</classpathentry>
-	<classpathentry kind="output" path="target/classes"/>
-</classpath>
diff --git a/src/clmc-spec/.project b/src/clmc-spec/.project
deleted file mode 100644
index 061590a24bdd8b6ea607dbd0d3090242f5a9ac00..0000000000000000000000000000000000000000
--- a/src/clmc-spec/.project
+++ /dev/null
@@ -1,23 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<projectDescription>
-	<name>clmc-spec</name>
-	<comment></comment>
-	<projects>
-	</projects>
-	<buildSpec>
-		<buildCommand>
-			<name>org.eclipse.jdt.core.javabuilder</name>
-			<arguments>
-			</arguments>
-		</buildCommand>
-		<buildCommand>
-			<name>org.eclipse.m2e.core.maven2Builder</name>
-			<arguments>
-			</arguments>
-		</buildCommand>
-	</buildSpec>
-	<natures>
-		<nature>org.eclipse.jdt.core.javanature</nature>
-		<nature>org.eclipse.m2e.core.maven2Nature</nature>
-	</natures>
-</projectDescription>
diff --git a/src/clmc-spec/.settings/org.eclipse.core.resources.prefs b/src/clmc-spec/.settings/org.eclipse.core.resources.prefs
deleted file mode 100644
index 839d647eef851c560a9854ff81d9caa1df594ced..0000000000000000000000000000000000000000
--- a/src/clmc-spec/.settings/org.eclipse.core.resources.prefs
+++ /dev/null
@@ -1,5 +0,0 @@
-eclipse.preferences.version=1
-encoding//src/main/java=UTF-8
-encoding//src/main/resources=UTF-8
-encoding//src/test/java=UTF-8
-encoding/<project>=UTF-8
diff --git a/src/clmc-spec/.settings/org.eclipse.jdt.core.prefs b/src/clmc-spec/.settings/org.eclipse.jdt.core.prefs
deleted file mode 100644
index 714351aec195a9a572640e6844dcafd51565a2a5..0000000000000000000000000000000000000000
--- a/src/clmc-spec/.settings/org.eclipse.jdt.core.prefs
+++ /dev/null
@@ -1,5 +0,0 @@
-eclipse.preferences.version=1
-org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.8
-org.eclipse.jdt.core.compiler.compliance=1.8
-org.eclipse.jdt.core.compiler.problem.forbiddenReference=warning
-org.eclipse.jdt.core.compiler.source=1.8
diff --git a/src/clmc-spec/.settings/org.eclipse.m2e.core.prefs b/src/clmc-spec/.settings/org.eclipse.m2e.core.prefs
deleted file mode 100644
index f897a7f1cb2389f85fe6381425d29f0a9866fb65..0000000000000000000000000000000000000000
--- a/src/clmc-spec/.settings/org.eclipse.m2e.core.prefs
+++ /dev/null
@@ -1,4 +0,0 @@
-activeProfiles=
-eclipse.preferences.version=1
-resolveWorkspaceProjects=true
-version=1
diff --git a/src/clmc-spec/.vscode/settings.json b/src/clmc-spec/.vscode/settings.json
deleted file mode 100644
index e0f15db2eb22b5d618150277e48b741f8fdd277a..0000000000000000000000000000000000000000
--- a/src/clmc-spec/.vscode/settings.json
+++ /dev/null
@@ -1,3 +0,0 @@
-{
-    "java.configuration.updateBuildConfiguration": "automatic"
-}
\ No newline at end of file
diff --git a/src/clmc-spec/pom.xml b/src/clmc-spec/pom.xml
deleted file mode 100644
index fa2828ef27c6176ba2ff74793d3acb9e65ef8965..0000000000000000000000000000000000000000
--- a/src/clmc-spec/pom.xml
+++ /dev/null
@@ -1,51 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-    <modelVersion>4.0.0</modelVersion>
-    <groupId>uk.ac.soton.itinnovation.flame</groupId>
-    <artifactId>clmc-spec</artifactId>
-    <version>0.1-SNAPSHOT</version>
-    <packaging>jar</packaging>
-    
-    <properties>
-        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
-        <maven.compiler.source>1.8</maven.compiler.source>
-        <maven.compiler.target>1.8</maven.compiler.target>
-    </properties>
-    
-    <dependencies>
-        
-        <dependency>
-            <groupId>org.influxdb</groupId>
-            <artifactId>influxdb-java</artifactId>
-            <version>2.8</version>
-        </dependency>
-        
-        <dependency>
-            <groupId>junit</groupId>
-            <artifactId>junit</artifactId>
-            <version>4.12</version>
-            <scope>test</scope>
-        </dependency>
-        
-        <dependency>
-            <groupId>org.hamcrest</groupId>
-            <artifactId>hamcrest-core</artifactId>
-            <version>1.3</version>
-            <scope>test</scope>
-        </dependency>
-        
-        <dependency>
-            <groupId>org.slf4j</groupId>
-            <artifactId>slf4j-api</artifactId>
-            <version>1.7.21</version>
-        </dependency>
-        
-        <dependency>
-            <groupId>org.slf4j</groupId>
-            <artifactId>slf4j-simple</artifactId>
-            <version>1.7.21</version>
-        </dependency>
-    
-    </dependencies>
-   
-</project>
\ No newline at end of file
diff --git a/src/clmc-spec/src/main/resources/inputs/host_resource_input b/src/clmc-spec/src/main/resources/inputs/host_resource_input
deleted file mode 100644
index 39441c9cc0cd3f182b070ea5cd5710bcee4993e8..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/main/resources/inputs/host_resource_input
+++ /dev/null
@@ -1,9 +0,0 @@
-# Host Resource metrics
-# 
-# host_resource node_id,sf_inst_id,sf_id,sfc_inst_id,sfc_id,server_id,location cpus,memory,storage <timestamp>
-#
-# NOTE: commas and spaces are significant in the line protocol
-
-host_resource,node_id=1,sf_inst_id=1,sf_id=1,sfc_inst_id=1,sfc_id=1,server_id=1,location="Bristol_DC" cpus=16,memory=256,storage=1024 1513778835385000000
-host_resource,node_id=1,sf_inst_id=2,sf_id=1,sfc_inst_id=1,sfc_id=1,server_id=1,location="Bristol_DC" cpus=16,memory=256,storage=1024 1513778836385000000
-host_resource,node_id=1,sf_inst_id=3,sf_id=1,sfc_inst_id=1,sfc_id=1,server_id=2,location="Bristol_DC" cpus=8,memory=128,storage=1024 1513778837385000000
\ No newline at end of file
diff --git a/src/clmc-spec/src/main/resources/monSpecTestConfig.properties b/src/clmc-spec/src/main/resources/monSpecTestConfig.properties
deleted file mode 100644
index 54042b375a276101bd225752f105d5dfcd54ac88..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/main/resources/monSpecTestConfig.properties
+++ /dev/null
@@ -1,8 +0,0 @@
-# Test settings
-clearDBOnExit=false
-
-# InfluxDB
-influxDB_EP=http://localhost:8086
-influxDB_UN=root
-influxDB_PW=root
-influxDB_DB=clmcTestDB
diff --git a/src/clmc-spec/src/main/resources/queries/host_resource_query b/src/clmc-spec/src/main/resources/queries/host_resource_query
deleted file mode 100644
index 45c8c16ee14a6b8ac1a20ea4a5e1a4d032da6585..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/main/resources/queries/host_resource_query
+++ /dev/null
@@ -1,4 +0,0 @@
-# Host Resource Basic queries
-
-SELECT "node_id", "cpus", "memory", "storage" FROM "clmcTestDB"."autogen"."host_resource"
-
diff --git a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/AlphaTestSuite.java b/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/AlphaTestSuite.java
deleted file mode 100644
index 26888bc7314bbbef5c3909284f5664b551a62806..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/AlphaTestSuite.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/////////////////////////////////////////////////////////////////////////
-//
-// © University of Southampton IT Innovation Centre, 2017
-//
-// Copyright in this software belongs to University of Southampton
-// IT Innovation Centre of Gamma House, Enterprise Road, 
-// Chilworth Science Park, Southampton, SO16 7NS, UK.
-//
-// This software may not be used, sold, licensed, transferred, copied
-// or reproduced in whole or in part in any manner or form or in or
-// on any media by any person other than in accordance with the terms
-// of the Licence Agreement supplied with the software, or otherwise
-// without the prior written consent of the copyright owners.
-//
-// This software is distributed WITHOUT ANY WARRANTY, without even the
-// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-// PURPOSE, except where stated in the Licence Agreement supplied with
-// the software.
-//
-//      Created By :            Simon Crowle
-//      Created Date :          19/12/2017
-//      Created for Project :   FLAME
-//
-/////////////////////////////////////////////////////////////////////////
-package uk.ac.soton.innovation.flame.clmc.monspec.test;
-
-import uk.ac.soton.innovation.flame.clmc.monspec.test.base.TestSuiteBase;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.runner.RunWith;
-import org.junit.runners.Suite;
-
-
-
-
-@RunWith(Suite.class)
-@Suite.SuiteClasses({
-    uk.ac.soton.innovation.flame.clmc.monspec.test.BasicInputTest.class,
-    uk.ac.soton.innovation.flame.clmc.monspec.test.BasicQueryTest.class
-})
-public class AlphaTestSuite extends TestSuiteBase 
-{
-
-    @BeforeClass
-    public static void setUpClass() throws Exception 
-    { 
-        TestSuiteBase.setUpClass(); 
-    }
-
-    @AfterClass
-    public static void tearDownClass() throws Exception 
-    {
-        TestSuiteBase.tearDownClass();
-    }
-
-    @Before
-    @Override
-    public void setUp() throws Exception
-    {
-        super.setUp();
-    }
-
-    @After
-    @Override
-    public void tearDown() throws Exception
-    {
-        super.setUp();
-    }
-    
-}
diff --git a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/BasicInputTest.java b/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/BasicInputTest.java
deleted file mode 100644
index 662b0be38f67a4cfa4d52b798e9e441836c59dbd..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/BasicInputTest.java
+++ /dev/null
@@ -1,77 +0,0 @@
-/////////////////////////////////////////////////////////////////////////
-//
-// © University of Southampton IT Innovation Centre, 2017
-//
-// Copyright in this software belongs to University of Southampton
-// IT Innovation Centre of Gamma House, Enterprise Road, 
-// Chilworth Science Park, Southampton, SO16 7NS, UK.
-//
-// This software may not be used, sold, licensed, transferred, copied
-// or reproduced in whole or in part in any manner or form or in or
-// on any media by any person other than in accordance with the terms
-// of the Licence Agreement supplied with the software, or otherwise
-// without the prior written consent of the copyright owners.
-//
-// This software is distributed WITHOUT ANY WARRANTY, without even the
-// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-// PURPOSE, except where stated in the Licence Agreement supplied with
-// the software.
-//
-//      Created By :            Simon Crowle
-//      Created Date :          19/12/2017
-//      Created for Project :   FLAME
-//
-/////////////////////////////////////////////////////////////////////////
-package uk.ac.soton.innovation.flame.clmc.monspec.test;
-
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import uk.ac.soton.innovation.flame.clmc.monspec.test.base.BaseTest;
-
-
-
-/**
- * BasicInputTest class tests inputting data series into InfluxDB
- * 
- * IMPORTANT NOTE: for test automation under Maven, ensure your classes are labelled either
- * Test* or *Test. Not doing this means Maven will miss this test class.
- */
-public class BasicInputTest extends BaseTest
-{
-    
-    public BasicInputTest() 
-    {}
-    
-    @BeforeClass
-    public static void setUpClass() 
-    {}
-    
-    @AfterClass
-    public static void tearDownClass()
-    {}
-    
-    @Before
-    @Override
-    public void setUp()
-    {
-        super.setUp();
-    }
-    
-    @After
-    @Override
-    public void tearDown()
-    {
-        super.tearDown();
-    }
-
-    @Test
-    public void testInputSeries()
-    {
-        writeProtocolLines( "host_resource_input" );
-        
-        logger.info( "Completed Test Input Series" );
-    }
-}
diff --git a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/BasicQueryTest.java b/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/BasicQueryTest.java
deleted file mode 100644
index 862dffa80bd087187dba6d86f012de3ab6210bc9..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/BasicQueryTest.java
+++ /dev/null
@@ -1,75 +0,0 @@
-/////////////////////////////////////////////////////////////////////////
-//
-// © University of Southampton IT Innovation Centre, 2017
-//
-// Copyright in this software belongs to University of Southampton
-// IT Innovation Centre of Gamma House, Enterprise Road, 
-// Chilworth Science Park, Southampton, SO16 7NS, UK.
-//
-// This software may not be used, sold, licensed, transferred, copied
-// or reproduced in whole or in part in any manner or form or in or
-// on any media by any person other than in accordance with the terms
-// of the Licence Agreement supplied with the software, or otherwise
-// without the prior written consent of the copyright owners.
-//
-// This software is distributed WITHOUT ANY WARRANTY, without even the
-// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-// PURPOSE, except where stated in the Licence Agreement supplied with
-// the software.
-//
-//      Created By :            Simon Crowle
-//      Created Date :          20-Dec-2017
-//      Created for Project :   FLAME
-//
-/////////////////////////////////////////////////////////////////////////
-
-package uk.ac.soton.innovation.flame.clmc.monspec.test;
-
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import uk.ac.soton.innovation.flame.clmc.monspec.test.base.BaseTest;
-
-
-
-/**
- * BasicQueryTest class tests querying InfluxDB using data from BasicInputTest
- * 
- * IMPORTANT NOTE: for test automation under Maven, ensure your classes are labelled either
- * Test* or *Test. Not doing this means Maven will miss this test class.
- */
-public class BasicQueryTest extends BaseTest
-{
-    public BasicQueryTest() 
-    {}
-    
-    @BeforeClass
-    public static void setUpClass() 
-    {}
-    
-    @AfterClass
-    public static void tearDownClass()
-    {}
-    
-    @Before
-    @Override
-    public void setUp()
-    {
-        super.setUp();
-    }
-    
-    @After
-    @Override
-    public void tearDown()
-    {
-        super.tearDown();
-    }
-
-    @Test
-    public void querySoloSeries()
-    {
-        executeQueries( "host_resource_query", true );
-    }
-}
diff --git a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/BaseTest.java b/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/BaseTest.java
deleted file mode 100644
index 4b3c724ece094c47ff3432a774dd2a8a88169de4..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/BaseTest.java
+++ /dev/null
@@ -1,137 +0,0 @@
-/////////////////////////////////////////////////////////////////////////
-//
-// © University of Southampton IT Innovation Centre, 2017
-//
-// Copyright in this software belongs to University of Southampton
-// IT Innovation Centre of Gamma House, Enterprise Road, 
-// Chilworth Science Park, Southampton, SO16 7NS, UK.
-//
-// This software may not be used, sold, licensed, transferred, copied
-// or reproduced in whole or in part in any manner or form or in or
-// on any media by any person other than in accordance with the terms
-// of the Licence Agreement supplied with the software, or otherwise
-// without the prior written consent of the copyright owners.
-//
-// This software is distributed WITHOUT ANY WARRANTY, without even the
-// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-// PURPOSE, except where stated in the Licence Agreement supplied with
-// the software.
-//
-//      Created By :            Simon Crowle
-//      Created Date :          19/12/2017
-//      Created for Project :   FLAME
-//
-/////////////////////////////////////////////////////////////////////////
-package uk.ac.soton.innovation.flame.clmc.monspec.test.base;
-
-import java.util.ArrayList;
-import java.util.List;
-import org.influxdb.InfluxDB;
-import org.junit.After;
-import org.junit.AfterClass;
-import static org.junit.Assert.assertTrue;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-/**
- * BaseTest is an abstract class used to set up InfluxDB resource before
- * executing tests proper. Derive from this class and use protected methods.
- */
-public abstract class BaseTest 
-{
-    protected Logger   logger = LoggerFactory.getLogger( BaseTest.class );
-    protected InfluxDB influxDB;
-    
-    public BaseTest() 
-    {}
-    
-    @BeforeClass
-    public static void setUpClass() 
-    {}
-    
-    @AfterClass
-    public static void tearDownClass() 
-    {}
-    
-    @Before
-    public void setUp()
-    {
-        try
-        {
-            // Create test database if it does not already exist
-            influxDB = InfluxUtility.getInfluxDBConnection();
-            String dbName = InfluxUtility.getTestConfig().getProperty( "influxDB_DB" );
-            
-            if ( !influxDB.databaseExists(dbName) )
-                influxDB.createDatabase( dbName );
-            
-            influxDB.setDatabase( dbName );
-        }
-        catch ( Exception ex )
-        {
-            logger.error( "Failed to get InfluxDB connection: " + ex.getMessage() );
-            assertTrue( false );
-        }
-    }
-    
-    @After
-    public void tearDown() 
-    {}
-    
-    // Protected methods -------------------------------------------------------
-    /**
-     * Attempts to write InfluxDB Line Protocol lines to InfluxDB
-     * 
-     * @param seriesLabel - label of input resource (see /src/main/resources/inputs)
-     */
-    protected void writeProtocolLines( String seriesLabel )
-    {
-        try
-        {
-            // Get line protocols (will throw if non-existent or empty)
-            List<String> protocols = InfluxUtility.getInputSeries( seriesLabel );
-            
-            for ( String lp : protocols )
-                influxDB.write( lp );
-        }
-        catch ( Exception ex )
-        {
-            String err = "Failed writing protocol lines: " + ex.getMessage();
-            logger.error( err );
-            
-            assertTrue( false );
-        }
-    }
-    
-    /**
-     * Attempts to execute one or more queries with InfluxDB
-     * 
-     * @param queryLabel - name of query input file resource (see /src/main/resources/queries)
-     * @param printOut   - boolean to indicate whether results are logged out to console
-     * @return           - Array of query results (per series set)
-     */
-    protected List<List<String>> executeQueries( String queryLabel, boolean printOut )
-    {
-        List<List<String>> uberResult = new ArrayList<>();
-        
-        try
-        {
-            // Get test query set (will throw if non-existent or empty)
-            List<String> queries = InfluxUtility.getQueries( queryLabel );
-            
-            for ( String q : queries )
-                uberResult.add(InfluxUtility.executeQuery(q,printOut) );
-        }
-        catch ( Exception ex )
-        {
-            String err = "Failed to execute queries: " + queryLabel;
-            logger.error( err );
-            
-            assertTrue( false );
-        }
-        
-        return uberResult;
-    }
-}
diff --git a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/InfluxUtility.java b/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/InfluxUtility.java
deleted file mode 100644
index 0204b09cffec76f1596efee12b65aabb641b9ca7..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/InfluxUtility.java
+++ /dev/null
@@ -1,309 +0,0 @@
-/////////////////////////////////////////////////////////////////////////
-//
-// © University of Southampton IT Innovation Centre, 2017
-//
-// Copyright in this software belongs to University of Southampton
-// IT Innovation Centre of Gamma House, Enterprise Road, 
-// Chilworth Science Park, Southampton, SO16 7NS, UK.
-//
-// This software may not be used, sold, licensed, transferred, copied
-// or reproduced in whole or in part in any manner or form or in or
-// on any media by any person other than in accordance with the terms
-// of the Licence Agreement supplied with the software, or otherwise
-// without the prior written consent of the copyright owners.
-//
-// This software is distributed WITHOUT ANY WARRANTY, without even the
-// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-// PURPOSE, except where stated in the Licence Agreement supplied with
-// the software.
-//
-//      Created By :            Simon Crowle
-//      Created Date :          19-Dec-2017
-//      Created for Project :   FLAME
-//
-/////////////////////////////////////////////////////////////////////////
-
-package uk.ac.soton.innovation.flame.clmc.monspec.test.base;
-
-import java.io.BufferedReader;
-import java.io.InputStream;
-import java.io.InputStreamReader;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Properties;
-import java.util.concurrent.TimeUnit;
-import org.influxdb.InfluxDB;
-import org.influxdb.InfluxDBFactory;
-import org.influxdb.dto.Query;
-import org.influxdb.dto.QueryResult;
-import org.influxdb.dto.QueryResult.Result;
-import org.influxdb.dto.QueryResult.Series;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-
-
-/**
- * Helper class for accessing InfluxDB and processing results 
- */
-public class InfluxUtility
-{
-    private static Logger     logger = LoggerFactory.getLogger(InfluxUtility.class );
-    private static Properties testConfig;
-    private static InfluxDB   influxDB;
-    
-    
-    public static Properties getTestConfig()
-    {
-        if ( testConfig == null )
-        {
-            InputStream is = null;
-            try
-            {
-                testConfig = new Properties();
-                
-                is = InfluxUtility.class.getResourceAsStream( "/monSpecTestConfig.properties" );
-                
-                if ( is != null )
-                {
-                    testConfig.load( is );
-                    logger.info( "Found test configuration OK" );
-                }
-                else
-                    throw new Exception( "Failed to find test configuration" );
-            }
-            catch ( Exception ex )
-            {
-                String err = "Could not find test configuration: " + ex.getMessage();
-                logger.error( err );
-            }
-            finally
-            {
-                try
-                {
-                    if ( is != null )
-                        is.close();
-                }
-                catch ( Exception ex )
-                { logger.warn( "Had problems closing resource input stream" ); }
-            }
-        }
-        
-        return testConfig;
-    }
-    
-    public static InfluxDB getInfluxDBConnection() throws Exception
-    {
-        InfluxDB result = null;
-        
-        try
-        {
-            if ( influxDB == null )
-            {
-                Properties config = getTestConfig();
-                logger.info( "Trying to connect to Influx: " + config.getProperty("influxDB_EP") );
-        
-                influxDB = InfluxDBFactory.connect( config.getProperty("influxDB_EP"), 
-                                                    config.getProperty("influxDB_UN"),
-                                                    config.getProperty("influxDB_PW") );
-                
-                result = influxDB;
-                logger.info( "Influx connection OK" );
-            }
-            else
-                result = influxDB;
-        }
-        catch ( Exception ex )
-        {
-            String err = "Failed to load InfluxDB configuration: " + ex.getMessage();
-            logger.error( err, ex );
-            
-            throw new Exception( err );
-        }
-        
-        return result;
-    }
-    
-    public static List<String> getInputSeries( String label ) throws Exception
-    {
-        // Safety first
-        if ( label == null || label.isEmpty() )
-            throw new Exception( "Could not get input series: label invalid" );
-        
-        return tryGetTestFile( "/inputs/" + label );
-    }
-    
-    public static List<String> getQueries( String label ) throws Exception
-    {
-        // Safety first
-        if ( label == null || label.isEmpty() )
-            throw new Exception( "Could not get queries: label invalid" );
-        
-        return tryGetTestFile( "/queries/" + label );
-    }
-    
-    public static List<String> executeQuery( String query, boolean printOut ) throws Exception
-    {
-        List<String> result = null;
-        
-        // Safety first
-        if ( query == null || query.isEmpty() )
-            throw new Exception( "Could not execute query: invalid input param" );
-        
-        if ( influxDB == null )
-            throw new Exception( "Could not execute query: no InfluxDB connection" );
-        
-        // Query it
-        Query fluxQuery = new Query( query, testConfig.getProperty("influxDB_DB") );
-        QueryResult qResult = influxDB.query( fluxQuery, TimeUnit.NANOSECONDS );
-        
-        // Safety
-        if ( qResult == null )
-            throw new Exception( "Got null InfluxDB query result for: " + query );
-        else
-            result = parseFluxQueryResult( qResult );
-        
-        if ( printOut )
-        {
-            String rOut = "";
-            for ( String line : result )
-                rOut += line + "\n";
-            
-            logger.info( rOut );
-        }
-        
-        return result;
-    }
-    
-    // Private methods ---------------------------------------------------------
-    private static List<String> tryGetTestFile( String path ) throws Exception
-    {
-        List<String> result = new ArrayList<>();
-        
-        // Safety first
-        if ( path == null || path.isEmpty() )
-            throw new Exception( "Could not get test file: path invalid" );
-        
-        InputStream    is = null;
-        BufferedReader br = null;
-        
-        try
-        {
-            // Load resource as stream
-            is = InfluxUtility.class.getResourceAsStream( path );
-            
-            if ( is != null )
-            {
-                // Read into list
-                br = new BufferedReader( new InputStreamReader(is) );
-                String nextLine;
-                
-                while( (nextLine = br.readLine()) != null )
-                {
-                    if ( !nextLine.startsWith("#") && !nextLine.isEmpty() )
-                        result.add( nextLine );
-                }
-            }
-            else
-                throw new Exception( "Failed to find test file" );
-        }
-        catch ( Exception ex )
-        {
-            String err = "Could not find test file: " + ex.getMessage();
-            logger.error( err );
-        }
-        // Clean up
-        finally
-        {
-            try
-            {
-                if ( br != null )
-                    br.close();
-                
-                if ( is != null )
-                    is.close();
-            }
-            catch ( Exception ex )
-            { logger.warn( "Had problems closing resource input stream: " + ex.getMessage() ); }
-        }
-        
-        // Final safety check
-        if ( result.isEmpty() )
-            throw new Exception( "Found no lines in test file" );
-        
-        return result;
-    }
-    
-    /**
-     * Parses the query result and formats it as tab-delimited text
-     * 
-     * @param queryResult  - The structured query result returned by InfluxDB
-     * @return             - A list of rows in Line Protocol format
-     */
-    private static List<String> parseFluxQueryResult( QueryResult queryResult )
-    {
-        List<String> result = new ArrayList<>();
-        
-        for ( Result res : queryResult.getResults() )
-        {
-            for ( Series series : res.getSeries() )
-            {
-                // Result series name
-                result.add( series.getName() );
-                
-                // TODO: Unclear what the tags collection is for at the moment
-                
-                // Column titles
-                String colNames = "";
-                int volInd = 0;
-                for ( String colName : series.getColumns() )
-                {
-                    // Special formatting for time value
-                    if ( volInd == 0 )
-                        colNames += colName + "\t\t\t";
-                    else
-                        colNames += colName + "\t";
-                    
-                    ++volInd;
-                }
-                    
-                
-                colNames = colNames.substring( 0, colNames.length() -1 ); // Trim trailing tab
-                result.add( colNames );
-                
-                // Results
-                for ( List<Object> rowList : series.getValues() )
-                {
-                    String valList = "";
-                    volInd = 0;
-                    
-                    for ( Object val : rowList )
-                    {
-                        // Special formatting for time value
-                        // Note: some nasty rounding errors in nanosecond number component
-                        // for now round down to millisecond
-                        if ( volInd == 0 )
-                        {
-                            Double tD = (Double) val;
-                            Double msFrac = tD / 1000000;
-                            Long tL = msFrac.longValue();
-                            tL *= 1000000;
-                            
-                            valList += tL.toString() + "\t";
-                            
-                        }
-                        else
-                            valList += val.toString() + "\t";
-                        
-                        ++volInd;
-                    }
-                    
-                    valList = valList.substring( 0, valList.length() -1 ); // Trim trailing tab
-                    result.add( valList );
-                }
-            }
-        }
-        
-        return result;
-    }
-}
diff --git a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/TestSuiteBase.java b/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/TestSuiteBase.java
deleted file mode 100644
index 965f11170ed312d2ab21ddd83866e96731382fe2..0000000000000000000000000000000000000000
--- a/src/clmc-spec/src/test/java/uk/ac/soton/innovation/flame/clmc/monspec/test/base/TestSuiteBase.java
+++ /dev/null
@@ -1,94 +0,0 @@
-/////////////////////////////////////////////////////////////////////////
-//
-// © University of Southampton IT Innovation Centre, 2017
-//
-// Copyright in this software belongs to University of Southampton
-// IT Innovation Centre of Gamma House, Enterprise Road, 
-// Chilworth Science Park, Southampton, SO16 7NS, UK.
-//
-// This software may not be used, sold, licensed, transferred, copied
-// or reproduced in whole or in part in any manner or form or in or
-// on any media by any person other than in accordance with the terms
-// of the Licence Agreement supplied with the software, or otherwise
-// without the prior written consent of the copyright owners.
-//
-// This software is distributed WITHOUT ANY WARRANTY, without even the
-// implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-// PURPOSE, except where stated in the Licence Agreement supplied with
-// the software.
-//
-//      Created By :            Simon Crowle
-//      Created Date :          19/12/2017
-//      Created for Project :   FLAME
-//
-/////////////////////////////////////////////////////////////////////////
-package uk.ac.soton.innovation.flame.clmc.monspec.test.base;
-
-import java.util.Properties;
-import org.influxdb.InfluxDB;
-import org.junit.After;
-import org.junit.AfterClass;
-import org.junit.Before;
-import org.junit.BeforeClass;
-import org.junit.runner.RunWith;
-import org.junit.runners.Suite;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
-
-
-@RunWith(Suite.class)
-@Suite.SuiteClasses({})
-/**
- * TestSuiteBase is an abstract class used to set up Influx test suites. Derive 
- * from this class to create InfluxDB test suites
- */
-public abstract class TestSuiteBase
-{
-    private static Logger logger = LoggerFactory.getLogger( TestSuiteBase.class );
-    
-    @BeforeClass
-    public static void setUpClass() throws Exception 
-    {
-        InfluxUtility.getInfluxDBConnection(); // throws if connection fails
-        
-        destroyTestDatabase();
-    }
-
-    @AfterClass
-    public static void tearDownClass() throws Exception
-    {
-        Properties config = InfluxUtility.getTestConfig();
-        
-        if ( "true".equals( config.getProperty("clearDBOnExit")) )
-            destroyTestDatabase();
-    }
-
-    @Before
-    public void setUp() throws Exception
-    {
-        
-    }
-
-    @After
-    public void tearDown() throws Exception
-    {
-        
-    }
-    
-    // Private methods ---------------------------------------------------------
-    private static void destroyTestDatabase()
-    {
-        try
-        {
-            String dbName = InfluxUtility.getTestConfig().getProperty( "influxDB_DB" );
-            InfluxDB influxDB = InfluxUtility.getInfluxDBConnection();
-            
-            if ( influxDB.databaseExists( dbName ) )
-                influxDB.deleteDatabase( dbName );
-        }
-        catch ( Exception ex )
-        {
-            logger.error( ex.getMessage() );
-        }
-    }
-}
diff --git a/src/mediaServiceSim/LineProtocolGenerator.py b/src/mediaServiceSim/LineProtocolGenerator.py
deleted file mode 100644
index 3d4b07736b3fa3b318754d411aaeb1d91aa2f537..0000000000000000000000000000000000000000
--- a/src/mediaServiceSim/LineProtocolGenerator.py
+++ /dev/null
@@ -1,307 +0,0 @@
-# line protocol
-
-# Method to create a full InfluxDB request statement (based on partial statement from client)
-import uuid
-from random import random, randint
-
-
-# Reports TX and RX, scaling on requested quality
-def generate_network_report(received_bytes, sent_bytes, time):
-    # Measurement
-    result = 'net_port_io'
-    # Tags
-    result += ',port_id=enps03 '
-    # Fields
-    result += 'RX_BYTES_PORT_M=' + str(received_bytes) + ","
-    result += 'TX_BYTES_PORT_M=' + str(sent_bytes)
-    # Timestamp
-    result += ' ' + str(_getNSTime(time))
-
-    # Measurement
-    #print(result)
-    return result
-
-
-# Formats VM config
-def generate_vm_config(state, cpu, mem, storage, time):
-    # metric
-    result = 'vm_res_alloc'
-    # Tags
-    result += ',vm_state=' + quote_wrap(state)
-    result += ' '
-    # Fields
-    result += 'cpu=' + str(cpu)
-    result += ',memory=' + quote_wrap(mem)
-    result += ',storage=' + quote_wrap(storage)
-
-    # Time
-    result += ' ' + str(_getNSTime(time))
-
-    print(result)
-    return result
-
-
-# Reports cpu usage, scaling on requests
-def generate_cpu_report(cpu_usage, cpu_active_time, cpu_idle_time, time):
-    result = 'cpu_usage'
-    # Tag
-    result += ' '
-    # field
-    result += 'cpu_usage='+str(cpu_usage)
-    result += ',cpu_active_time='+str(cpu_active_time)
-    result += ',cpu_idle_time='+str(cpu_idle_time)
-    result += ' '
-    # Time
-    result += str(_getNSTime(time))
-    print(result)
-    return result
-
-
-# Reports response times, scaling on number of requests
-def generate_mpegdash_report(resource, requests, avg_response_time, peak_response_time, time):
-    # Measurement
-    result = 'mpegdash_service'
-    # Tags
-    result += ',cont_nav=\"' + str(resource) + "\" "
-    # Fields
-
-    # result += 'cont_rep=' + str(quality) + ','
-    result += 'requests=' + str(requests) + ','
-    result += 'avg_response_time=' + str(avg_response_time) + ','
-    result += 'peak_response_time=' + str(peak_response_time)
-    # Timestamp
-    result += ' ' + str(_getNSTime(time))
-    print(result)
-    return result
-
-#ipendpoint_route,ipendpoint_id,cont_nav=FQDN HTTP_REQUESTS_FQDN_M, NETWORK_FQDN_LATENCY timestamp
-def generate_ipendpoint_route(resource, requests, latency, time):
-    # Measurement
-    result = 'ipendpoint_route'
-    # Tags
-    result += ',cont_nav=\"' + str(resource) + "\" "
-    # Fields
-
-    # result += 'cont_rep=' + str(quality) + ','
-    result += 'http_requests_fqdn_m=' + str(requests) + ','
-    result += 'network_fqdn_latency=' + str(latency)
-    # Timestamp
-    result += ' ' + str(_getNSTime(time))
-    #print(result)
-    return result
-
-# Influx requires string field values to be quoted; this utility wraps a value in double quotes
-def quote_wrap(value):
-    return '"' + str(value) + '"'
-
-
-# InfluxDB likes to have time-stamps in nanoseconds
-def _getNSTime(time):
-    # Convert to nano-seconds
-    timestamp = int(1000000000*time)
-    #print("timestamp", timestamp)
-    return timestamp
-
-# DEPRECATED
-# ____________________________________________________________________________
-
-# DEPRECATED: old structure, not part of new spec
-def _generateClientRequest(cReq, id, time):
-    # Tags first
-    result = 'sid="' + str(id) + '",' + cReq
-
-    # Fields
-    # No additional fields here yet
-
-    # Timestamp
-    result += ' ' + str(_getNSTime(time))
-
-    # Measurement
-    return 'request,' + result
-
-
-# Method to create a full InfluxDB response statement
-# DEPRECATED: old structure, not part of new spec
-def _generateServerResponse(reqID, quality, time, cpuUsage, qualityDifference):
-    # Tags first
-    result = ' '
-
-    # Fields
-    result += 'quality=' + str(quality) + ','
-    result += 'cpuUsage=' + str(cpuUsage) + ','
-    result += 'qualityDifference=' + str(qualityDifference) + ','
-    result += 'requestID="' + str(reqID) + '",'
-    result += 'index="' + str(uuid.uuid4()) + '"'
-
-    # Timestamp
-    result += ' ' + str(_getNSTime(time))
-
-    # Measurement
-    # print('response'+result)
-    return 'response' + result
-
-
-
-# Formats server config
-def _generateServerConfig(ID, location, cpu, mem, storage, time):
-    # metric
-    result = 'host_resource'
-    # Tags
-    result += ',slice_id=' + quote_wrap(ID)
-    result += ',location=' + quote_wrap(location)
-    result += ' '
-    # Fields
-    result += 'cpu=' + str(cpu)
-    result += ',memory=' + quote_wrap(mem)
-    result += ',storage=' + quote_wrap(storage)
-
-    # Time
-    result += ' ' + str(_getNSTime(time))
-
-    print(result)
-    return result
-
-
-
-# Format port config
-def _configure_port(port_id, state, rate, time):
-    # metric
-    result = 'net_port_config '
-    # Fields
-    result += 'port_id=' + quote_wrap('enps' + port_id)
-    result += ',port_state=' + quote_wrap(state)
-    result += ',tx_constraint=' + quote_wrap(rate)
-    result += ' '
-
-    # Time
-    result += str(_getNSTime(time))
-
-    print(result)
-    return result
-
-
-# Format service function config
-def _configure_service_function(state, max_connected_clients):
-    # measurement
-    result = 'mpegdash_service_config'
-    # tags
-    result += ',service_state='+quote_wrap(state)
-    result += ' '
-    # fields
-    result += 'max_connected_clients='+str(max_connected_clients)
-
-    return result
-
-
-
-# Reports memory usage, scaling on requests
-def generate_mem_report(requests, total_mem, time):
-    # Measurement
-    result = 'mem'
-    result += ' '
-    # field
-    used = randint(0, min(100,5*requests))
-    available = 100-used
-    result += 'available_percent='+str(available)
-    result += ',used_percent='+str(used)
-    result += ',total='+str(total_mem)
-    result += ' '
-    # Time
-    result += str(_getNSTime(time))
-    print(result)
-    return result
-
-
-# Formats compute node config
-def generate_compute_node_config(slice_id, location, node_id, cpus, mem, storage, time):
-    # Measurement
-    result = 'compute_node_config'
-    # CommonContext Tag
-    result += ',slice_id='+quote_wrap(slice_id)
-    # Tag
-    result += ',location='+quote_wrap(location)
-    result += ',comp_node_id='+quote_wrap(node_id)
-    result += ' '
-    # field
-    result += 'cpus='+str(cpus)
-    result += ',memory='+str(mem)
-    result += ',storage='+str(storage)
-    result += ' '
-    # Time
-    result += str(_getNSTime(time))
-    print(result)
-    return result
-
-
-# Formats network resource config
-def generate_network_resource_config(slice_id, network_id, bandwidth, time):
-    # Measurement
-    result = 'network_resource_config'
-    # Meta Tag
-    result += ',slice_id='+quote_wrap(slice_id)
-    # Tag
-    result += ',network_id='+quote_wrap(network_id)
-    result += ' '
-    # field
-    result += 'bandwidth='+str(bandwidth)
-    result += ' '
-    # Time
-    result += str(_getNSTime(time))
-    print(result)
-    return result
-
-
-# Formats network interface config
-def generate_network_interface_config(slice_id, comp_node_id, port_id, rx_constraint, tx_constraint, time):
-    # Measurement
-    result = 'network_interface_config'
-    # Meta Tag
-    result += ',slice_id='+quote_wrap(slice_id)
-    # Tags
-    result += ',comp_node_id='+quote_wrap(comp_node_id)
-    result += ',port_id='+quote_wrap(port_id)
-    result += ' '
-    # field
-    result += 'rx_constraint='+str(rx_constraint)
-    result += ',tx_constraint='+str(tx_constraint)
-    result += ' '
-    # Time
-    result += str(_getNSTime(time))
-    print(result)
-    return result
-
-
-# Format SF instance config
-def generate_sf_instance_surrogate_config(loc, sfc, sfc_i, sf_package, sf_i, cpus, mem, storage, time):
-    # Measurement
-    result = 'sf_instance_surrogate_config'
-    # Meta Tag
-    result += ',location='+quote_wrap(loc)
-    result += ',sfc='+quote_wrap(sfc)
-    result += ',sfc_i='+quote_wrap(sfc_i)
-    result += ',sf_package='+quote_wrap(sf_package)
-    result += ',sf_i='+quote_wrap(sf_i)
-    result += ' '
-    # field
-    result += 'cpus='+str(cpus)
-    result += ',memory='+str(mem)
-    result += ',storage='+str(storage)
-    result += ' '
-    # Time
-    result += str(_getNSTime(time))
-    print(result)
-    return result
-
-
-# Formats context container as part of other line protocol generators
-def service_function_measurement(measurement, service_function_context):
-    result = measurement
-    result += ',sfc='+quote_wrap(service_function_context.sfc)
-    result += ',sfc_i='+quote_wrap(service_function_context.sfc_i)
-    result += ',sf_package='+quote_wrap(service_function_context.sf_package)
-    result += ',sf_i='+quote_wrap(service_function_context.sf_i)
-
-    return result
-
-
-
diff --git a/src/mediaServiceSim/serviceSim.py b/src/mediaServiceSim/serviceSim.py
deleted file mode 100644
index 2cdc993af6c0d0b543abbc7b5cd396a3a786fd8e..0000000000000000000000000000000000000000
--- a/src/mediaServiceSim/serviceSim.py
+++ /dev/null
@@ -1,437 +0,0 @@
-# coding: utf-8
-## ///////////////////////////////////////////////////////////////////////
-##
-## © University of Southampton IT Innovation Centre, 2018
-##
-## Copyright in this software belongs to University of Southampton
-## IT Innovation Centre of Gamma House, Enterprise Road, 
-## Chilworth Science Park, Southampton, SO16 7NS, UK.
-##
-## This software may not be used, sold, licensed, transferred, copied
-## or reproduced in whole or in part in any manner or form or in or
-## on any media by any person other than in accordance with the terms
-## of the Licence Agreement supplied with the software, or otherwise
-## without the prior written consent of the copyright owners.
-##
-## This software is distributed WITHOUT ANY WARRANTY, without even the
-## implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
-## PURPOSE, except where stated in the Licence Agreement supplied with
-## the software.
-##
-##      Created By :            Simon Crowle
-##      Created Date :          03-01-2018
-##      Created for Project :   FLAME
-##
-##///////////////////////////////////////////////////////////////////////
-
-from random import random, randint
-
-import math
-import time
-import datetime
-import uuid
-import urllib.parse
-import urllib.request
-import LineProtocolGenerator as lp
-
-
-# DemoConfig is a configuration class used to set up the simulation
-class DemoConfig(object):
-    def __init__(self):
-        self.LOG_DATA = False  # Log data sent to INFLUX if true
-        self.ITERATION_STRIDE = 10  # Number of seconds of requests/responses sent to INFLUXDB per HTTP POST
-        self.SEG_LENGTH = 4  # Each MPEG segment encodes 5 seconds worth of frames (assume double-buffering)
-        self.MAX_SEG = (30 * 60) / (self.SEG_LENGTH + 1)  # 30 mins
-        self.MIN_QUALITY = 5  # Minimum quality requested by a client
-        self.MAX_QUALITY = 9  # Maximum quality requested by a client
-        self.MIN_SERV_RESP_TIME = 100  # Minimum time taken for server to respond to a request (ms)
-        self.CLIENT_START_DELAY_MAX = 360  # Randomly delay clients starting stream by up to 6 minutes
-
-
-dc = DemoConfig()
-
-
-# DemoClient is a class that simulates the behaviour of a single client requesting video from the server
-class DemoClient(object):
-    def __init__(self):
-        self.startRequestOffset = randint(0,
-                                          dc.CLIENT_START_DELAY_MAX)  # Random time offset before requesting 1st segment
-        self.numSegRequests = dc.MAX_SEG - randint(0, 50)  # Randomly stop client watching all of video
-        self.id = uuid.uuid4()  # Client's ID
-        self.currSeg = 1  # Client's current segment
-        self.nextSegCountDown = 0  # Count-down before asking for next segment
-        self.qualityReq = randint(dc.MIN_QUALITY, dc.MAX_QUALITY)  # Randomly assigned quality for this client
-        self.lastReqID = None  # ID used to track last request made by this client
-
-    def getQuality(self):
-        return self.qualityReq
-
-    def getLastRequestID(self):
-        return self.lastReqID
-
-    def iterateRequest(self):
-        result = None
-
-        # If the time offset before asking for 1st segment is through and there are more segments to get
-        # and it is time to get one, then create a request for one!
-        if (self.startRequestOffset == 0):
-            if (self.numSegRequests > 0):
-                if (self.nextSegCountDown == 0):
-
-                    # Generate a request ID
-                    self.lastReqID = uuid.uuid4()
-
-                    # Start building the InfluxDB statement
-                    # tags first
-                    result = 'cid="' + str(self.id) + '",'
-                    result += 'segment=' + str(self.currSeg) + ' '
-
-                    # then fields
-                    result += 'quality=' + str(self.qualityReq) + ','
-                    result += 'index="' + str(self.lastReqID) + '"'
-
-                    # Update this client's segment tracking
-                    self.currSeg += 1
-                    self.numSegRequests -= 1
-                    self.nextSegCountDown = dc.SEG_LENGTH
-                else:
-                    self.nextSegCountDown -= 1
-        else:
-            self.startRequestOffset -= 1
-
-        # Return the _partial_ InfluxDB statement (server will complete the rest)
-        return result
-
-
-# Used to tell influx to launch or teardown a database (DB name overwritten by telegraf)
-class DatabaseManager:
-    def __init__(self, influx_url, db_name):
-        self.influx_url = influx_url
-        self.influx_db = db_name
-
-    def database_up(self):
-        self._createDB()
-
-    def database_teardown(self):
-        self._deleteDB()
-
-    def _createDB(self):
-        self._sendInfluxQuery('CREATE DATABASE ' + self.influx_db)
-
-    def _deleteDB(self):
-        self._sendInfluxQuery('DROP DATABASE ' + self.influx_db)
-
-    def _sendInfluxQuery(self, query):
-        query = urllib.parse.urlencode({'q': query})
-        query = query.encode('ascii')
-        req = urllib.request.Request(self.influx_url + '/query', query)
-        urllib.request.urlopen(req)
-
-
-# Used to allocate clients to servers
-class ClientManager:
-    def __init__(self, servers):
-        self.servers = servers
-    def generate_new_clients(self, amount):
-        assigned_count = 0
-        while(assigned_count < amount):
-            for server in self.servers:
-                if(assigned_count < amount):
-                    server.assign_client(DemoClient())
-                    assigned_count += 1
-
-
-# Simulates nodes not connected directly to clients (e.g. telegraf)
-class Node:
-    def __init__(self, influxurl, influxdb, input_cpu):
-        self.influx_url = influxurl
-        self.influx_db = influxdb
-        self.report_cpu = input_cpu
-    def iterateService(self):
-        if self.report_cpu:
-            self._sendInfluxData(lp.generate_CPU_report(0))
-            self._sendInfluxData(lp.generate_mem_report(10, 0))
-
-    # Private Methods
-    # ________________________________________________________________
-
-    # This is duplicated from DemoServer, should probably be refactored
-    def _sendInfluxData(self, data):
-        data = data.encode()
-        header = {'Content-Type': 'application/octet-stream'}
-        req = urllib.request.Request(self.influx_url + '/write?db=' + self.influx_db, data, header)
-        urllib.request.urlopen(req)
-
-# Container for common SF tags, used as part of generating SF usage reports
-
-
-# DemoServer is the class that simulates the behaviour of the MPEG-DASH server
-class DemoServer(object):
-    def __init__(self, si, db_url, db_name, server_id, server_location):
-        self.influxDB = db_name  # InfluxDB database name
-        self.id = uuid.uuid4()  # MPEG-DASH server ID
-        self.simIterations = si  # Number of iterations to make for this simulation
-        self.influxURL = db_url  # InfluxDB connection URL
-        self.currentTime = int(round(time.time() * 1000))  # The current time
-        self._configure(server_id, server_location)
-        self.clients = []
-
-    def shutdown(self):
-        print("Shutting down")
-        self.configure_VM('stopping')
-
-    def assign_client(self, new_client):
-        self.clients.append(new_client)
-        print('Number of clients: ' + str(len(self.clients)))
-
-    def configure_server(self, server_id, server_location):
-        print("Configuring Servers")
-        server_conf_block = []
-        server_conf_block.append(lp._generateServerConfig(server_id, server_location, 8, '100G', '1T',
-                                                          self._selectDelay(0)))
-
-        #ids = ['A', 'B', 'C']
-        #locations = ['locA', 'locB', 'locC']
-        #for i, id in enumerate(ids):
-        #    server_conf_block.append(
-        #        lp._generateServerConfig(id, locations[i], 8, '100G', '1T', self._selectDelay(len(ids))))
-        self._sendInfluxDataBlock(server_conf_block)
-
-    def configure_VM(self, state):
-        print("Configuring VM node")
-        self._sendInfluxData(self._generateVM(state, 1))
-
-    def configure_ports(self):
-        print("Configuring Servers")
-        server_conf_block = []
-        for i in range(0, 10):
-            server_conf_block.append(lp._configure_port())
-        self._sendInfluxDataBlock(server_conf_block)
-
-    def shutdown_VM(self):
-        print("Shutting down VM nodes")
-        VM_conf_block = []
-        self._generateVMS('stopping', 10, VM_conf_block)
-
-        self._sendInfluxDataBlock(VM_conf_block)
-
-    def iterateService(self):
-        # The simulation will run through 'X' iterations of the simulation
-        # each time this method is called. This allows request/response messages to be
-        # batched and sent to the InfluxDB in sensible sized blocks
-        return self._executeServiceIteration(dc.ITERATION_STRIDE)
-
-    def _executeServiceIteration(self, count):
-
-        requestBlock = []
-        responseBlock = []
-        networkBlock = []
-        SFBlock = []
-        totalDifference = sumOfclientQuality = percentageDifference = 0
-
-        # Keep going until this stride (count) completes
-        while (count > 0):
-            count -= 1
-
-            # Check we have some iterations to do
-            if (self.simIterations > 0):
-                # First record clients that request segments
-                clientsRequesting = []
-
-                # Run through all clients and see if they make a request
-                for client in self.clients:
-
-                    # Record request, if it was generated
-                    cReq = client.iterateRequest()
-                    if cReq is not None:
-                        clientsRequesting.append(client)
-                        requestBlock.append(lp._generateClientRequest(cReq, self.id, self.currentTime))
-
-                # Now generate request statistics
-                clientReqCount = len(clientsRequesting)
-
-                # Create a single CPU usage metric for this iteration
-                cpuUsagePercentage = self._cpuUsage(clientReqCount)
-
-                # Now generate responses, based on stats
-                for client in clientsRequesting:
-                    # Generate some quality and delays based on the number of clients requesting for this iteration
-                    qualitySelect = self._selectQuality(client.getQuality(), clientReqCount)
-                    delaySelect = self._selectDelay(clientReqCount) + self.currentTime
-                    qualityDifference = client.getQuality() - qualitySelect
-                    totalDifference += qualityDifference
-                    # print('totalDifference = ' + str(totalDifference) +'\n')
-                    sumOfclientQuality += client.getQuality()
-                    # print('sumOfclientQuality = ' + str(sumOfclientQuality) + '\n')
-                    percentageDifference = int((totalDifference * 100) / sumOfclientQuality)
-                    # print('percentageOfQualityDifference = ' + str(percentageDifference) + '%')
-
-                    responseBlock.append(lp._generateServerResponse(client.getLastRequestID(), qualitySelect,
-                                                                    delaySelect, cpuUsagePercentage,
-                                                                    percentageDifference))
-                    SFBlock.append(lp._generateMpegDashReport('https://netflix.com/scream', qualitySelect, delaySelect))
-
-                    networkBlock.append(lp._generateNetworkReport(sumOfclientQuality, delaySelect))
-                # Iterate the service simulation
-                self.simIterations -= 1
-                self.currentTime += 1000  # advance 1 second
-
-        # If we have some requests/responses to send to InfluxDB, do it
-        if (len(requestBlock) > 0 and len(responseBlock) > 0):
-            self._sendInfluxDataBlock(requestBlock)
-            self._sendInfluxDataBlock(responseBlock)
-            self._sendInfluxDataBlock(networkBlock)
-            self._sendInfluxDataBlock(SFBlock)
-            print("Sending influx data blocks")
-
-        return self.simIterations
-
-    def _generateVM(self, state, delay):
-        return lp._generateVMConfig(state, 1, '100G', '1T', self._selectDelay(delay))
-
-    # 'Private' methods ________________________________________________________
-    def _configure(self, server_id, server_location):
-        print("Configuring")
-        self.configure_VM('starting')
-        self.configure_VM('running')
-        #time.sleep(0.1)
-        self.configure_server(server_id, server_location)
-        self._sendInfluxData(lp._configure_port('01', 'running', '1GB/s', self.currentTime))
-        self._sendInfluxData(lp._configure_service_function('starting', 100))
-        #time.sleep(0.1)
-        self._sendInfluxData(lp._configure_service_function('running', 100))
-
-    def _cpuUsage(self, clientCount):
-        cpuUsage = randint(0, 10)
-
-        if (clientCount < 20):
-            cpuUsage += 5
-        elif (clientCount >= 20 and clientCount < 40):
-            cpuUsage += 10
-        elif (clientCount >= 40 and clientCount < 60):
-            cpuUsage += 15
-        elif (clientCount >= 60 and clientCount < 80):
-            cpuUsage += 20
-        elif (clientCount >= 80 and clientCount < 110):
-            cpuUsage += 30
-        elif (clientCount >= 110 and clientCount < 150):
-            cpuUsage += 40
-        elif (clientCount >= 150 and clientCount < 200):
-            cpuUsage += 55
-        elif (clientCount >= 200 and clientCount < 300):
-            cpuUsage += 70
-        elif (clientCount >= 300):
-            cpuUsage += 90
-
-        return cpuUsage
-
-    # Rule to determine a response quality, based on the current number of clients requesting
-    def _selectQuality(self, expectedQuality, clientCount):
-
-        result = dc.MAX_QUALITY
-
-        if (clientCount < 50):
-            result = 8
-        elif (clientCount >= 50 and clientCount < 100):
-            result = 7
-        elif (clientCount >= 100 and clientCount < 150):
-            result = 6
-        elif (clientCount >= 150 and clientCount < 200):
-            result = 5
-        elif (clientCount >= 200 and clientCount < 250):
-            result = 4
-        elif (clientCount >= 250 and clientCount < 300):
-            result = 3
-        elif (clientCount >= 300):
-            result = 2
-
-        # Give the client what it wants if possible
-        if (result > expectedQuality):
-            result = expectedQuality
-
-        return result
-
-    # Rule to determine a delay, based on the current number of clients requesting
-    def _selectDelay(self, cCount):
-
-        result = dc.MIN_SERV_RESP_TIME
-
-        if (cCount < 50):
-            result = 150
-        elif (cCount >= 50 and cCount < 100):
-            result = 200
-        elif (cCount > 100 and cCount < 200):
-            result = 500
-        elif (cCount >= 200):
-            result = 1000
-
-        # Perturb the delay a bit
-        result += randint(0, 20)
-
-        return result
-
-    # InfluxDB data send methods
-    # -----------------------------------------------------------------------------------------------
-
-    def _sendInfluxData(self, data):
-        data = data.encode()
-        header = {'Content-Type': 'application/octet-stream'}
-        req = urllib.request.Request(self.influxURL + '/write?db=' + self.influxDB, data, header)
-        urllib.request.urlopen(req)
-
-    def _sendInfluxDataBlock(self, dataBlock):
-        msg = ''
-        for stmt in dataBlock:
-            msg += stmt + '\n'
-
-        try:
-            if (dc.LOG_DATA == True):
-                print(msg)
-
-            self._sendInfluxData(msg)
-
-        except urllib.error.HTTPError as ex:
-            print("Error calling: " + str(ex.url) + "..." + str(ex.msg))
-
-
-# Entry point
-# -----------------------------------------------------------------------------------------------
-print("Preparing simulation")
-# Iterations is time in seconds for each server to simulate
-iterations = 3000
-# port 8086: Direct to DB specified
-# port 8186: To telegraf, telegraf specifies DB
-start_time = time.localtime()
-database_manager = DatabaseManager('http://localhost:8186', 'testDB')
-# Set up InfluxDB (need to wait a little while)
-database_manager.database_teardown()
-time.sleep(2)
-database_manager.database_up()
-time.sleep(2)
-# configure servers
-demoServer_southampton = DemoServer(iterations, 'http://localhost:8186', 'testDB', "Server1", "Southampton")
-demoServer_bristol = DemoServer(iterations, 'http://localhost:8186', 'testDB', "Server2", "Bristol")
-telegraf_node = Node('http://localhost:8186', 'testDB', True)
-server_list = [demoServer_southampton, demoServer_bristol]
-client_manager = ClientManager(server_list)
-client_manager.generate_new_clients(20)
-
-# Start simulation
-print("Starting simulation")
-while True:
-    for server in server_list:
-        itCount = server.iterateService()
-    telegraf_node.iterateService()
-    pcDone = round((itCount / iterations) * 100)
-
-    print("Simulation remaining (%): " + str(pcDone) + " \r", end='')
-
-    if itCount == 0:
-        break
-
-for server in server_list:
-    server.shutdown()
-print("\nFinished")
-end_time = time.localtime()
-print("Started at {0} ended at {1}, total run time {2}s".format(time.strftime('%X', start_time), time.strftime('%X', end_time), time.mktime(end_time) - time.mktime(start_time)))
-
diff --git a/src/mediaServiceSim/simulator_v2.py b/src/mediaServiceSim/simulator_v2.py
deleted file mode 100644
index 0182e75dc99b9e9f28ffad87a0d4d40e5929d67b..0000000000000000000000000000000000000000
--- a/src/mediaServiceSim/simulator_v2.py
+++ /dev/null
@@ -1,203 +0,0 @@
-import LineProtocolGenerator as lp
-import time
-import urllib.parse
-import urllib.request
-import sys
-import random
-
-# Simulation parameters
-TICK_TIME = 1
-DEFAULT_REQUEST_RATE_INC = 1
-DEFAULT_REQUEST_RATE_INC_PERIOD = 10 
-SIMULATION_TIME_SEC = 60*60
-
-# CLMC parameters
-INFLUX_DB_URL = 'http://192.168.50.10:8086'
-AGENT_URL1 = 'http://192.168.50.11:8186'
-AGENT_URL2 = 'http://192.168.50.12:8186'
-
-# Simulator for services
-class sim:
-    def __init__(self, influx_url):
-        # We don't need this as the db is CLMC metrics
-        self.influx_db = 'CLMCMetrics'
-        self.influx_url = influx_url
-        # Teardown DB from previous sim and bring it back up
-        self._deleteDB()
-        self._createDB()
-
-
-    def run(self, simulation_length_seconds):
-        start_time = time.time()-SIMULATION_TIME_SEC
-        sim_time = start_time
-
-        # segment_size : the length of video requested at a time
-        # bit_rate: MPEG-2 High 1080p 25fps = 80Mbps
-        ip_endpoints = [{'agent_url': AGENT_URL1, 'location': 'DC1', 'cpu': 16,
-                        'mem': '8GB', 'storage': '1TB', 'request_queue': 0, 'request_arrival_rate': 0,
-                        'segment_size': 2, 'video_bit_rate': 80, 'packet_size': 1500},
-                        {'agent_url': AGENT_URL2, 'location': 'DC2', 'cpu': 4, 
-                        'mem': '8GB', 'storage': '1TB', 'request_queue': 0, 'request_arrival_rate': 0, 
-                        'segment_size': 2, 'video_bit_rate': 80, 'packet_size': 1500}
-                        ]
-
-        # Simulate configuration of the ipendpoints
-        # endpoint state -> [mu, sigma] parameters of a normal distribution (seconds)
-        config_delay_dist = {"placing": [10, 0.68], "booting": [10, 0.68], "connecting": [10, 0.68]}
-
-        # Place endpoints
-        max_delay = 0              
-        for ip_endpoint in ip_endpoints:
-            delay_time = self._changeVMState(sim_time, ip_endpoint, config_delay_dist['placing'][0], config_delay_dist['placing'][0]*config_delay_dist['placing'][1], 'placing', 'placed')
-            if delay_time > max_delay:
-                max_delay = delay_time
-        sim_time += max_delay
-
-        # Boot endpoints
-        max_delay = 0        
-        for ip_endpoint in ip_endpoints:
-            delay_time = self._changeVMState(sim_time, ip_endpoint, config_delay_dist['booting'][0], config_delay_dist['booting'][0]*config_delay_dist['booting'][1], 'booting', 'booted')
-            if delay_time > max_delay:
-                max_delay = delay_time            
-        sim_time += max_delay
-
-        # Connect endpoints
-        max_delay = 0     
-        for ip_endpoint in ip_endpoints:
-            delay_time = self._changeVMState(sim_time, ip_endpoint, config_delay_dist['connecting'][0], config_delay_dist['connecting'][0]*config_delay_dist['connecting'][1], 'connecting', 'connected')
-            if delay_time > max_delay:
-                max_delay = delay_time
-        sim_time += max_delay
-   
-        request_arrival_rate_inc = DEFAULT_REQUEST_RATE_INC
-        request_queue = 0
-        inc_period_count = 0
-        for i in range(simulation_length_seconds):        
-            for ip_endpoint in ip_endpoints:
-                request_processing_time = 0
-                cpu_time_available = 0
-                requests_processed = 0
-                max_requests_processed = 0
-                cpu_active_time = 0
-                cpu_idle_time = 0
-                cpu_usage = 0
-                cpu_load_time = 0
-                avg_response_time = 0
-                peak_response_time = 0
-
-                # linear inc to arrival rate
-                if inc_period_count >= DEFAULT_REQUEST_RATE_INC_PERIOD:
-                    ip_endpoint['request_arrival_rate'] += request_arrival_rate_inc
-                    inc_period_count = 0
-                else:
-                    inc_period_count += 1
-                # add new requests to the queue
-                ip_endpoint['request_queue'] += ip_endpoint['request_arrival_rate']
-
-                # time to process one second of video (ms) in the current second
-                request_processing_time = int(random.normalvariate(10, 10*0.68))
-                if request_processing_time <= 10:
-                    request_processing_time = 10
-                # time depends on the length of the segments in seconds
-                request_processing_time *= ip_endpoint['segment_size']
-
-                # amount of cpu time (ms) per tick
-                cpu_time_available = ip_endpoint['cpu']*TICK_TIME*1000
-                max_requests_processed = int(cpu_time_available/request_processing_time)
-                # calc how many requests processed
-                if ip_endpoint['request_queue'] <= max_requests_processed:
-                    # processed all of the requests
-                    requests_processed = ip_endpoint['request_queue']
-                else:
-                    # processed the maximum number of requests
-                    requests_processed = max_requests_processed
-
-                # calculate cpu usage
-                cpu_active_time = int(requests_processed*request_processing_time)
-                cpu_idle_time = int(cpu_time_available-cpu_active_time)
-                cpu_usage = cpu_active_time/cpu_time_available
-                self._sendInfluxData(ip_endpoint['agent_url'], lp.generate_cpu_report(cpu_usage, cpu_active_time, cpu_idle_time, sim_time))
-
-                # calc network usage metrics
-                bytes_rx = 2048*requests_processed           
-                bytes_tx = int(ip_endpoint['video_bit_rate']/8*1000000*requests_processed*ip_endpoint['segment_size'])
-                self._sendInfluxData(ip_endpoint['agent_url'], lp.generate_network_report(bytes_rx, bytes_tx, sim_time))                
-
-                # time to process all of the requests in the queue
-                peak_response_time = ip_endpoint['request_queue']*request_processing_time/ip_endpoint['cpu']
-                # mid-range 
-                avg_response_time = (peak_response_time+request_processing_time)/2
-                self._sendInfluxData(ip_endpoint['agent_url'], lp.generate_mpegdash_report('http://localhost/server-status?auto', ip_endpoint['request_arrival_rate'], avg_response_time, peak_response_time, sim_time))
-
-                # needs to be calculated properly; set to 5 ms for now
-                network_request_delay = 0.005
-
-                # calculate network response delays (2km link, 100Mbps)
-                network_response_delay = self._calcNetworkDelay(2000, 100, ip_endpoint['packet_size'], ip_endpoint['video_bit_rate'], ip_endpoint['segment_size'])
-
-                e2e_delay = network_request_delay + (avg_response_time/1000) + network_response_delay
-
-                self._sendInfluxData(ip_endpoint['agent_url'], lp.generate_ipendpoint_route('http://localhost/server-status?auto', ip_endpoint['request_arrival_rate'], e2e_delay, sim_time))
-
-                # remove requests processed off the queue
-                ip_endpoint['request_queue'] -= int(requests_processed)            
-
-            sim_time += TICK_TIME
-        end_time = sim_time
-        print("Simulation Finished. Start time {0}. End time {1}. Total time {2}".format(start_time,end_time,end_time-start_time))
-
-    # distance: metres
-    # bandwidth: Mbps
-    # packet_size: bytes
-    # tx_video_bit_rate: Mbps
-    # segment_size: seconds
-    def _calcNetworkDelay(self, distance, bandwidth, packet_size, tx_video_bit_rate, segment_size):
-        response_delay = 0
-
-        # propagation delay = distance/speed (e.g. 2000 metres / 2*10^8 m/s for optical fibre)
-        propagation_delay = distance/(2*100000000)
-        # packetisation delay = ip packet size (bits)/tx rate (e.g. 100Mbps with 0% packet loss)
-        packetisation_delay = (packet_size*8)/(bandwidth*1000000)
-        # print('packetisation_delay:', packetisation_delay)
-        # total number of packets to be sent
-        packets = (tx_video_bit_rate*1000000)/(packet_size*8)
-        # print('packets:', packets)
-        response_delay = packets*(propagation_delay+packetisation_delay)
-        # print('response_delay:', response_delay)
-
-        return response_delay     
-
-    def _changeVMState(self, sim_time, ip_endpoint, mu, sigma, transition_state, next_state):
-        delay_time = 0
-    
-        self._sendInfluxData(ip_endpoint['agent_url'], lp.generate_vm_config(transition_state, ip_endpoint['cpu'], ip_endpoint['mem'], ip_endpoint['storage'], sim_time))   
-        
-        delay_time = random.normalvariate(mu, sigma)        
-        
-        self._sendInfluxData(ip_endpoint['agent_url'], lp.generate_vm_config(next_state, ip_endpoint['cpu'], ip_endpoint['mem'], ip_endpoint['storage'], sim_time+delay_time))
-
-        return delay_time
-
-    def _createDB(self):
-        self._sendInfluxQuery(self.influx_url, 'CREATE DATABASE ' + self.influx_db)
-
-
-    def _deleteDB(self):
-        self._sendInfluxQuery(self.influx_url, 'DROP DATABASE ' + self.influx_db)
-
-
-    def _sendInfluxQuery(self, url, query):
-        query = urllib.parse.urlencode({'q': query})
-        query = query.encode('ascii')
-        req = urllib.request.Request(url + '/query', query)
-        urllib.request.urlopen(req)
-
-    def _sendInfluxData(self, url, data):
-        data = data.encode()
-        header = {'Content-Type': 'application/octet-stream'}
-        req = urllib.request.Request(url + '/write?db=' + self.influx_db, data, header)
-        urllib.request.urlopen(req)  
-
-simulator = sim(INFLUX_DB_URL)
-simulator.run(SIMULATION_TIME_SEC)
-
diff --git a/test/__init__.py b/test/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..44f772595799f5fe338534918c95e23e08e80464
--- /dev/null
+++ b/test/__init__.py
@@ -0,0 +1 @@
+#!/usr/bin/python3
\ No newline at end of file
diff --git a/test/services/loadtest-streaming/install.sh b/test/services/loadtest-streaming/install.sh
index 8c1f2fb318ec9d627034187441fe17d03d0f5edd..7d6ef6ddc357283c53da5f0475a517002c685f14 100755
--- a/test/services/loadtest-streaming/install.sh
+++ b/test/services/loadtest-streaming/install.sh
@@ -23,10 +23,8 @@
 #//      Created for Project :   FLAME
 #//
 #/////////////////////////////////////////////////////////////////////////
-
 set -euo pipefail
 
 echo "REPO_ROOT:"$REPO_ROOT
-
 eval '$REPO_ROOT/test/services/vlc/install.sh'
-eval '$REPO_ROOT/test/services/jmeter/install.sh'
+eval '$REPO_ROOT/test/services/pytest/install.sh'
\ No newline at end of file
diff --git a/scripts/apache/install-apache.sh b/test/services/pytest/install.sh
old mode 100755
new mode 100644
similarity index 88%
rename from scripts/apache/install-apache.sh
rename to test/services/pytest/install.sh
index 735fc0a46e4dbe491ce82edba7b5aeb17d84c005..0c04381e4c56837c24fda25c67747ee577e47477
--- a/scripts/apache/install-apache.sh
+++ b/test/services/pytest/install.sh
@@ -19,11 +19,10 @@
 #// the software.
 #//
 #//      Created By :            Michael Boniface
-#//      Created Date :          23/01/2018
+#//      Created Date :          24/02/2018
 #//      Created for Project :   FLAME
 #//
 #/////////////////////////////////////////////////////////////////////////
-
-# Install apache
-sudo apt-get update
-sudo apt-get -y install apache2
\ No newline at end of file
+apt-get update
+apt-get -y install python-pip python-dev build-essential
+pip install pytest pyyaml
\ No newline at end of file
diff --git a/test/streaming-sim/LineProtocolGenerator.py b/test/streaming-sim/LineProtocolGenerator.py
index 3d4b07736b3fa3b318754d411aaeb1d91aa2f537..5d7914f797b5024e74949a8ff5bab01457b4e2e5 100644
--- a/test/streaming-sim/LineProtocolGenerator.py
+++ b/test/streaming-sim/LineProtocolGenerator.py
@@ -1,3 +1,5 @@
+#!/usr/bin/python3
+
 # line protocol
 
 # Method to create a full InfluxDB request statement (based on partial statement from client)
diff --git a/test/streaming-sim/StreamingSim.py b/test/streaming-sim/StreamingSim.py
index 0182e75dc99b9e9f28ffad87a0d4d40e5929d67b..2a375523af36c041db39dcfd69383a0c150ce860 100644
--- a/test/streaming-sim/StreamingSim.py
+++ b/test/streaming-sim/StreamingSim.py
@@ -1,3 +1,5 @@
+#!/usr/bin/python3
+
 import LineProtocolGenerator as lp
 import time
 import urllib.parse
diff --git a/test/streaming-sim/VerifySimResults.py b/test/streaming-sim/VerifySimResults.py
index 2060a23d578d2b8cb278f9678a98b8e8430c92d5..5bf40672aa8a2f0594fb83202133b7743f5671fc 100644
--- a/test/streaming-sim/VerifySimResults.py
+++ b/test/streaming-sim/VerifySimResults.py
@@ -1,3 +1,5 @@
+#!/usr/bin/python3
+
 import sys
 import urllib.parse
 import urllib.request
diff --git a/test/streaming-sim/__init__.py b/test/streaming-sim/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..44f772595799f5fe338534918c95e23e08e80464
--- /dev/null
+++ b/test/streaming-sim/__init__.py
@@ -0,0 +1 @@
+#!/usr/bin/python3
\ No newline at end of file
diff --git a/test/streaming-sim/conftest.py b/test/streaming-sim/conftest.py
new file mode 100644
index 0000000000000000000000000000000000000000..de3ebaef1ef1c53b3471690b8a4f98af766266e1
--- /dev/null
+++ b/test/streaming-sim/conftest.py
@@ -0,0 +1,14 @@
+#!/usr/bin/python3
+
+import pytest
+import yaml
+
+@pytest.fixture(scope="module",
+                params=[{'config1': {'rspec': 'test/streaming-sim/rspec.yml', 'id': 'myid'}}])
+def streaming_sim_config(request):
+    """Returns the service configuration deployed for the streaming simulation test. In future this needs to be a parameterised fixture shared with other rspec.yml based tests"""
+    print(request.param['config1']['rspec'])
+    print(request.param['config1']['id'])    
+    with open(request.param['config1']['rspec'], 'r') as stream:
+        data_loaded = yaml.safe_load(stream)
+    return data_loaded
\ No newline at end of file
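The `streaming_sim_config` fixture above returns the parsed rspec.yml as nested dicts and lists, and pytest injects it into any test that names it as an argument. A minimal sketch of the shape a test receives (the host values here are hypothetical, mirroring the rspec.yml in this diff):

```python
# Hypothetical parsed rspec.yml, shaped like what streaming_sim_config returns.
streaming_sim_config = {
    "hosts": [
        {"name": "clmc-service", "ip_address": "192.168.50.10"},
        {"name": "ipendpoint1", "ip_address": "192.168.50.11"},
        {"name": "ipendpoint2", "ip_address": "192.168.50.12"},
    ]
}

# Tests index into the host list, as test_rspec.py does.
names = [host["name"] for host in streaming_sim_config["hosts"]]
print(names)  # -> ['clmc-service', 'ipendpoint1', 'ipendpoint2']
```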
diff --git a/infra/streaming-sim/rspec.yml b/test/streaming-sim/rspec.yml
similarity index 98%
rename from infra/streaming-sim/rspec.yml
rename to test/streaming-sim/rspec.yml
index cd62eebf3f4e06842d28f0ae76d25f9f205a18b7..5709115b88f6b201e56278aa45933ef7f34ab071 100644
--- a/infra/streaming-sim/rspec.yml
+++ b/test/streaming-sim/rspec.yml
@@ -28,7 +28,7 @@ hosts:
     ipendpoint_id: "adaptive_streaming_I1_apache1"
     influxdb_url: "http://192.168.50.10:8086"
     database_name: "CLMCMetrics"
-  - name: ipendpoint12
+  - name: ipendpoint2
     cpus: 1
     memory: 2048
     disk: "10GB"
diff --git a/test/streaming-sim/test_rspec.py b/test/streaming-sim/test_rspec.py
new file mode 100644
index 0000000000000000000000000000000000000000..efecd68bf0eac6c89b23320453d0b5425edc0c67
--- /dev/null
+++ b/test/streaming-sim/test_rspec.py
@@ -0,0 +1,17 @@
+#!/usr/bin/python3
+
+import pytest
+import os
+
+def test_service_names(streaming_sim_config):
+    print(streaming_sim_config['hosts'][0]['name'])
+    assert streaming_sim_config['hosts'][0]['name'] == 'clmc-service'
+    assert streaming_sim_config['hosts'][1]['name'] == 'ipendpoint1'
+    assert streaming_sim_config['hosts'][2]['name'] == 'ipendpoint2'            
+
+def test_ping(streaming_sim_config):
+    """This test will only run on linux due to using os.system library"""
+    for x in streaming_sim_config['hosts']:
+        print(x['ip_address'])
+        response = os.system("ping -c 1 " + x['ip_address'])
+        assert response == 0     
\ No newline at end of file
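`os.system` returns the raw wait status and only behaves as expected here on Linux. A hedged, stdlib-only alternative using `subprocess.run` that also copes with the `ping` binary being absent (the helper name is ours, not from the repo):

```python
import subprocess

def host_reachable(ip, count=1, timeout=2):
    """True if a single ICMP ping to ip succeeds (Linux ping flags assumed)."""
    try:
        result = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout), ip],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    except OSError:
        # ping binary not installed
        return False
    return result.returncode == 0

# Usage mirroring test_ping: assert host_reachable(x['ip_address'])
```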
diff --git a/test/streaming/__init__.py b/test/streaming/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..44f772595799f5fe338534918c95e23e08e80464
--- /dev/null
+++ b/test/streaming/__init__.py
@@ -0,0 +1 @@
+#!/usr/bin/python3
\ No newline at end of file
diff --git a/test/streaming/conftest.py b/test/streaming/conftest.py
new file mode 100644
index 0000000000000000000000000000000000000000..4fe3bb9dc1298df611392594940a7efca95d2ddf
--- /dev/null
+++ b/test/streaming/conftest.py
@@ -0,0 +1,11 @@
+#!/usr/bin/python3
+
+import pytest
+import yaml
+
+@pytest.fixture(scope="module")
+def streaming_config():
+    """Returns the service configuration deployed for the streaming test. In future this needs to be a parameterised fixture shared with other rspec.yml based tests"""
+    with open("test/streaming/rspec.yml", 'r') as stream:
+        data_loaded = yaml.safe_load(stream)
+    return data_loaded
\ No newline at end of file
diff --git a/test/streaming/report.sh b/test/streaming/report.sh
new file mode 100644
index 0000000000000000000000000000000000000000..ad1251a7cb908a97b4bb97834fc39c455cabbc5d
--- /dev/null
+++ b/test/streaming/report.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+
+# This script reads stdin and expects the output of cvlc.
+# It is used by the run.sh script and receives the output of the cvlc client.
+# It counts occurrences of the frame "dropping" error and, for every 10 seen, posts a message to telegraf reporting "another 10" errors.
+
+if [ "$#" -ne 1 ]; then
+    echo "Error: illegal number of arguments: "$#
+    echo "Usage: report.sh <client number>"
+    exit 
+fi
+
+COUNTER=$1
+TELEGRAF=http://localhost:8186
+
+ERR_COUNT=0
+while read line; do
+  if [[ $line = *"dropping"* ]]; then
+    ERR_COUNT=$(($ERR_COUNT + 1))
+    # report once per 10 dropped-frame errors, not on every line read
+    if [ $((ERR_COUNT % 10)) -eq 0 ]; then
+      curl -i -XPOST "${TELEGRAF}/write?precision=s" --data-binary "vlc,client=${COUNTER} drop_error=10 $(date +%s)" >& /dev/null
+    fi
+  fi
+done
\ No newline at end of file
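report.sh writes an InfluxDB line-protocol point to Telegraf's HTTP listener on port 8186. A sketch of building the same point in Python (measurement, tag, and field names are taken from the script; the helper function itself is ours):

```python
import time

def drop_error_point(client, errors=10, timestamp=None):
    """Build the line-protocol point report.sh posts: measurement 'vlc',
    tag 'client', field 'drop_error', second-precision timestamp."""
    ts = int(time.time()) if timestamp is None else timestamp
    return "vlc,client={} drop_error={} {}".format(client, errors, ts)

print(drop_error_point(0, timestamp=1518691643))
# -> vlc,client=0 drop_error=10 1518691643
```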
diff --git a/infra/streaming/rspec.yml b/test/streaming/rspec.yml
similarity index 98%
rename from infra/streaming/rspec.yml
rename to test/streaming/rspec.yml
index fa428cc6b989d8ffa672a76011df5f6191caa253..b1291a381c3ca2eb09078958dec0e0f29706a7a3 100644
--- a/infra/streaming/rspec.yml
+++ b/test/streaming/rspec.yml
@@ -53,7 +53,7 @@ hosts:
     forward_ports:
       - guest: 80
         host: 8083
-    ip_address: "192.168.50.12"
+    ip_address: "192.168.50.13"
     location: "DC1"
     sfc_id: "MS_Template_1"
     sfc_id_instance: "MS_I1"
diff --git a/test/streaming/run.sh b/test/streaming/run.sh
index 2c8c930eee923ed84e22894ad5cf0f43cd865dfc..359204945167465d1920d88d6dc84930787fd9f1 100755
--- a/test/streaming/run.sh
+++ b/test/streaming/run.sh
@@ -31,7 +31,7 @@ if [ "$#" -ne 3 ]; then
 fi
 
 # create test directories
-TEST_FOLDER=$(date +%Y%m%d%H%M%S)
+TEST_FOLDER=$(date +%Y%m%d%H%M%S) 
 TEST_RUN_DIR=$1
 TEST_DIR=$TEST_RUN_DIR"/streaming/"$TEST_FOLDER
 echo "Test directory: "$TEST_DIR
@@ -47,8 +47,8 @@ STREAM_URI=$2
 COUNTER=0
 MAX_CLIENTS=$3
 while [  $COUNTER -lt $MAX_CLIENTS ]; do
-  cvlc -Vdummy --no-audio $STREAM_URI &>$TEST_DIR/stdout$COUNTER &
- # cvlc -Vdummy --no-audio --verbose=0 --file-logging --logfile=$TEST_DIR/vlc-log$COUNTER.txt $STREAM_URI &
+  # run cvlc headless, redirect stderr into stdout, pipe that into the report.sh script
+  cvlc -Vdummy --no-audio $STREAM_URI 2>&1 | /home/ubuntu/flame-clmc/test/streaming/report.sh ${COUNTER} &
   sleep 1
   let COUNTER=COUNTER+1 
 done
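The new run.sh invocation relies on `2>&1 |` so that cvlc's stderr (where the "dropping" errors appear) reaches report.sh's stdin. A minimal demonstration of that redirection, independent of cvlc:

```shell
# stderr is merged into stdout before the pipe, so the reader sees both streams
OUT=$({ echo "frame ok"; echo "dropping frame" >&2; } 2>&1 | grep -c "dropping")
echo "$OUT"  # -> 1
```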
diff --git a/test/streaming/stop.sh b/test/streaming/stop.sh
index b9953899d5d08b3ead91d60438532524e5a19525..b332fe3b1d7d1e9ff2e974cb59b036e2252d0bc9 100755
--- a/test/streaming/stop.sh
+++ b/test/streaming/stop.sh
@@ -1,3 +1,4 @@
 #!/bin/bash
 
-for pid in $(ps -ef | grep "/usr/bin/vlc" | awk '{print $2}'); do kill -9 $pid; done
\ No newline at end of file
+for pid in $(ps -ef | grep "/usr/bin/vlc" | awk '{print $2}'); do kill -9 $pid; done
+# TODO: 'killall vlc' should work; needs testing
\ No newline at end of file
diff --git a/test/streaming/test_rspec.py b/test/streaming/test_rspec.py
new file mode 100644
index 0000000000000000000000000000000000000000..ea21ea9054143172ba27ab7492a4d4077d6a667d
--- /dev/null
+++ b/test/streaming/test_rspec.py
@@ -0,0 +1,19 @@
+#!/usr/bin/python3
+
+import pytest
+import os
+
+def test_service_names(streaming_config):
+    print(streaming_config['hosts'][0]['name'])
+    assert streaming_config['hosts'][0]['name'] == 'clmc-service'
+    assert streaming_config['hosts'][1]['name'] == 'nginx1'
+    assert streaming_config['hosts'][2]['name'] == 'nginx2' 
+    assert streaming_config['hosts'][3]['name'] == 'loadtest-streaming'            
+
+def test_ping(streaming_config):
+    """This test will only run on linux"""
+    for x in streaming_config['hosts']:
+        print(x['ip_address'])
+        response = os.system("ping -c 1 " + x['ip_address'])
+        assert response == 0     
+    
diff --git a/test/streaming/testplan.jmx b/test/streaming/testplan.jmx
deleted file mode 100644
index 065e14c0f91203bf0d6a4a80a751bbf0b89a502c..0000000000000000000000000000000000000000
--- a/test/streaming/testplan.jmx
+++ /dev/null
@@ -1,67 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<jmeterTestPlan version="1.2" properties="3.2" jmeter="3.3 r1808647">
-  <hashTree>
-    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
-      <stringProp name="TestPlan.comments"></stringProp>
-      <boolProp name="TestPlan.functional_mode">false</boolProp>
-      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
-      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
-        <collectionProp name="Arguments.arguments"/>
-      </elementProp>
-      <stringProp name="TestPlan.user_define_classpath"></stringProp>
-    </TestPlan>
-    <hashTree>
-      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="StreamingGroupApache1" enabled="true">
-        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
-        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
-          <boolProp name="LoopController.continue_forever">false</boolProp>
-          <intProp name="LoopController.loops">-1</intProp>
-        </elementProp>
-        <stringProp name="ThreadGroup.num_threads">50</stringProp>
-        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
-        <longProp name="ThreadGroup.start_time">1518691643000</longProp>
-        <longProp name="ThreadGroup.end_time">1518691643000</longProp>
-        <boolProp name="ThreadGroup.scheduler">true</boolProp>
-        <stringProp name="ThreadGroup.duration">20</stringProp>
-        <stringProp name="ThreadGroup.delay">0</stringProp>
-      </ThreadGroup>
-      <hashTree>
-        <SystemSampler guiclass="SystemSamplerGui" testclass="SystemSampler" testname="VLC Client" enabled="true">
-          <boolProp name="SystemSampler.checkReturnCode">false</boolProp>
-          <stringProp name="SystemSampler.expectedReturnCode">0</stringProp>
-          <stringProp name="SystemSampler.command">cvlc</stringProp>
-          <elementProp name="SystemSampler.arguments" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
-            <collectionProp name="Arguments.arguments">
-              <elementProp name="" elementType="Argument">
-                <stringProp name="Argument.name"></stringProp>
-                <stringProp name="Argument.value">-Vdummy</stringProp>
-                <stringProp name="Argument.metadata">=</stringProp>
-              </elementProp>
-              <elementProp name="" elementType="Argument">
-                <stringProp name="Argument.name"></stringProp>
-                <stringProp name="Argument.value">--no-audio</stringProp>
-                <stringProp name="Argument.metadata">=</stringProp>
-              </elementProp>
-              <elementProp name="" elementType="Argument">
-                <stringProp name="Argument.name"></stringProp>
-                <stringProp name="Argument.value">http://192.168.50.11/test_video/stream.mpd</stringProp>
-                <stringProp name="Argument.metadata">=</stringProp>
-              </elementProp>
-            </collectionProp>
-          </elementProp>
-          <elementProp name="SystemSampler.environment" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
-            <collectionProp name="Arguments.arguments"/>
-          </elementProp>
-          <stringProp name="SystemSampler.directory"></stringProp>
-          <stringProp name="SystemSampler.stdout">stdout${__threadNum}</stringProp>
-          <longProp name="SystemSampler.timeout">20000</longProp>
-        </SystemSampler>
-        <hashTree/>
-      </hashTree>
-    </hashTree>
-    <WorkBench guiclass="WorkBenchGui" testclass="WorkBench" testname="WorkBench" enabled="true">
-      <boolProp name="WorkBench.save">true</boolProp>
-    </WorkBench>
-    <hashTree/>
-  </hashTree>
-</jmeterTestPlan>