diff --git a/docs/clmc-service.md b/docs/clmc-service.md
index bc5d6723d7c32077334b07865661e8f7f80357cb..8f8c1ef30dfdedf5b39a4fe5e84c25a3ec3a30a0 100644
--- a/docs/clmc-service.md
+++ b/docs/clmc-service.md
@@ -62,41 +62,6 @@ with **/clmc-service** so that the nginx reverse proxy server (listening on port
 
 ## Alerts API Endpoints
 
-* **GET** ***/alerts?sfc={service function chain id}&sfci={service function chain instance id}&policy={policy id}&trigger={trigger id}***
-
-    This API method can be used to retrieve the generated alert task and alert topic identifiers during the processing of an alerts specification document.
-    These identifiers can then be used to interact with the Kapacitor HTTP API for further configiuration or modification of alerts - https://docs.influxdata.com/kapacitor/v1.4/working/api/.
-
-    * Request:
-    
-        Expects a URL query string with the request parameters - **sfc**, **sfci**, **policy** and **trigger**. The given parameters must match the values used in the alerts specification
-        document. Otherwise, a wrong ID will be returned.
-        
-    * Request URL Examples:
-    
-        **/alerts?sfc=MSDemo&sfci=MSDemo-premium&policy=requests_diff&trigger=low_requests**
-        
-        **/alerts?sfc=SimpleMediaService&sfci=SimpleMediaService-1&policy=rtt_deviation&trigger=increase_in_rtt**
-     
-    * Response
-    
-        The response of this request is a JSON-formatted content, which contains the task and topic identifiers, along with the Kapacitor
-        API endpoints to use for configuring the given task, topic and the respective handlers.
-        
-        Returns a 400 Bad Request if the URL query string parameters are invalid or otherwise incorrect.
-        
-    * Response Body Example:
-    
-        ```json
-        {
-            "task_identifier": "094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
-            "topic_identifier": "094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
-            "task_api_endpoint": "/kapacitor/v1/tasks/094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
-            "topic_api_endpoint": "/kapacitor/v1/alerts/topics/094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
-            "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388/handlers"
-        }
-        ```
-
 * **POST** ***/alerts***
 
     This API method can be used to send an alert specification document, which is then used by the CLMC service to create
@@ -179,6 +144,139 @@ with **/clmc-service** so that the nginx reverse proxy server (listening on port
         }
         ```
 
+* **PUT** ***/alerts***
+
+    This API method can be used to send an alert specification document, which is then used by the CLMC service to create or update
+    alert tasks and subscribe alert handlers to those tasks in Kapacitor. For further information on the alert specification
+    document, please check the [CLMC Alert Specification Documentation](AlertsSpecification.md).
+    
+    The request and response format of this method is identical to that of the **POST /alerts** API endpoint, the only difference being that existing alert tasks
+    and handlers are re-created rather than causing a duplication error.
+
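As a sketch of how a client might call this endpoint programmatically (the `localhost:9080` address and file names are placeholders; the **alert-spec** and **resource-spec** form field names follow the POST /alerts request format, which this endpoint shares, as exercised by the test suite in this changeset):

```python
# Sketch only - not part of the CLMC service. Builds a multipart PUT /alerts
# request without sending it, so the payload can be inspected first.
import requests

def build_put_alerts_request(alert_spec_yaml, resource_spec_yaml, base_url="http://localhost:9080"):
    """Prepare (but do not send) a multipart PUT /alerts request."""
    request = requests.Request(
        "PUT", base_url + "/alerts",
        files={"alert-spec": ("alert-specification.yaml", alert_spec_yaml),
               "resource-spec": ("resource-specification.yaml", resource_spec_yaml)})
    return request.prepare()

# a prepared request can later be sent with requests.Session().send(prepared)
```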
+* **GET** ***/alerts/{sfc_id}/{sfc_instance_id}***
+
+    This API method can be used to fetch all alerts that are registered for a specific service function chain instance, identified
+    by the ***sfc_id*** and ***sfc_instance_id*** URL parameters.
+    
+    * Request:
+    
+        Expects two URL path parameters - **sfc_id** and **sfc_instance_id**. The given parameters should uniquely
+        identify the service function chain instance for which the registered alerts information will be returned.
+        
+    * Request URL Examples:
+    
+        **/alerts/MSDemo/MSDemo_1**
+        
+        **/alerts/SimpleMediaService/SimpleMediaService_1**
+    
+    * Response:
+    
+        The response of this request is a JSON-formatted content, which contains a list of alert objects each of which represents
+        a registered alert and contains the alert policy ID, trigger ID and Kapacitor resources including handlers.
+    
+    * Response Example:
+        
+        ```json
+        [
+            {
+                "policy": "<TOSCA policy ID>",
+                "trigger": "<TOSCA trigger ID>",
+                "handlers": ["<list of URLs that receive alerts through POST requests>"],
+                "task_identifier": "<Kapacitor task identifier>",
+                "topic_identifier": "<Kapacitor topic identifier>",
+                "task_api_endpoint": "/kapacitor/v1/tasks/<Kapacitor task identifier>",
+                "topic_api_endpoint": "/kapacitor/v1/alerts/topics/<Kapacitor topic identifier>",
+                "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/<Kapacitor topic identifier>/handlers"
+            }
+        ]
+        ```
+
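As a sketch of how a client might consume this response, the following helper collects the handler URLs registered for a given policy across all of its triggers (`handlers_for_policy` is an illustrative name, not part of the service):

```python
def handlers_for_policy(alerts, policy_id):
    """Collect the handler URLs across all triggers of the given policy.

    `alerts` is the parsed JSON list returned by GET /alerts/{sfc_id}/{sfc_instance_id},
    shaped like the response example above.
    """
    return sorted(
        handler
        for alert in alerts
        if alert["policy"] == policy_id
        for handler in alert["handlers"]
    )
```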
+* **DELETE** ***/alerts***
+
+    This API method can be used to send an alert specification document, which is then used by the CLMC service to delete
+    alert tasks and deregister alert handlers in Kapacitor. Essentially, it is a clean-up endpoint for the alerts API. 
+    For further information on the alert specification document, please check the [CLMC Alert Specification Documentation](AlertsSpecification.md).
+
+    * Request:
+    
+        Expects a YAML-formatted file in the request referenced with ID ***alert-spec*** representing the TOSCA alert specification 
+        document. The alert specification document is then parsed with the OpenStack TOSCA parser (https://github.com/openstack/tosca-parser/tree/master/toscaparser)
+        and validated against the CLMC alerts specification schema (again check [documentation](AlertsSpecification.md) for more info on this).
+
+    * Example for sending a request with curl:
+    
+        `curl -X DELETE -F "alert-spec=@alert-specification.yaml" http://localhost:9080/alerts`
+        
+        where **alert-specification.yaml** is the path to the alerts specification file.
+    
+    * Response:
+    
+        The response of this request is a JSON-formatted content, which contains two lists - one for the alerts that were found in the TOSCA specification
+        and then deleted from Kapacitor, and one for the respective alert handlers that were deleted from Kapacitor.
+                        
+        Returns a 400 Bad Request if the request does not contain a YAML file referenced with ID **alert-spec**.
+        
+        Returns a 400 Bad Request if the alert specification file is not a valid YAML file.
+        
+        Returns a 400 Bad Request if the alert specification file cannot be parsed with the TOSCA parser.
+        
+        Returns a 400 Bad Request if the alert specification file fails validation against the CLMC alerts specification schema.
+    
+    * Response Body Example:
+    
+        ```json
+        {
+          "deleted_alerts": [{"policy": "scale_nginx_policy", "trigger": "high_requests"}, 
+                             {"policy": "scale_nginx_policy", "trigger": "increase_in_active_requests"},
+                             {"policy": "scale_nginx_policy", "trigger": "increase_in_running_processes"}, 
+                             {"policy": "deadman_policy", "trigger": "no_measurements"}], 
+          "deleted_handlers": [{"policy": "scale_nginx_policy", "trigger": "increase_in_active_requests", "handler": "flame_sfemc"},
+                               {"policy": "scale_nginx_policy", "trigger": "increase_in_running_processes", "handler": "flame_sfemc"},
+                               {"policy": "deadman_policy", "trigger": "no_measurements", "handler": "flame_sfemc"},
+                               {"policy": "scale_nginx_policy", "trigger": "high_requests", "handler": "http://172.40.231.200:9999/"},
+                               {"policy": "scale_nginx_policy", "trigger": "increase_in_active_requests", "handler": "http://172.40.231.200:9999/"},
+                               {"policy": "scale_nginx_policy", "trigger": "increase_in_running_processes", "handler": "http://172.40.231.200:9999/"},
+                               {"policy": "deadman_policy", "trigger": "no_measurements", "handler": "http://172.40.231.200:9999/"}]
+        }
+        ```
+
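A client may want to verify what was removed; the following sketch summarises a DELETE /alerts response per policy (`deletion_summary` is an illustrative helper, not part of the service; it operates on the parsed JSON shaped like the response body example above):

```python
from collections import Counter

def deletion_summary(response):
    """Count deleted triggers and deleted handler subscriptions per policy."""
    return {
        "alerts_per_policy": dict(Counter(alert["policy"] for alert in response["deleted_alerts"])),
        "handlers_per_policy": dict(Counter(handler["policy"] for handler in response["deleted_handlers"])),
    }
```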
+* **GET** ***/alerts?sfc={service function chain id}&sfci={service function chain instance id}&policy={policy id}&trigger={trigger id}***
+    
+    (Deprecated - please use *GET /alerts/{sfc_id}/{sfc_instance_id}* instead)
+    
+    This API method can be used to retrieve the alert task and alert topic identifiers generated during the processing of an alerts specification document.
+    These identifiers can then be used to interact with the Kapacitor HTTP API for further configuration or modification of alerts - https://docs.influxdata.com/kapacitor/v1.4/working/api/.
+
+    * Request:
+    
+        Expects a URL query string with the request parameters - **sfc**, **sfci**, **policy** and **trigger**. The given parameters must match the values used in the alerts specification
+        document; otherwise, identifiers that do not correspond to any existing alert task will be returned.
+        
+    * Request URL Examples:
+    
+        **/alerts?sfc=MSDemo&sfci=MSDemo-premium&policy=requests_diff&trigger=low_requests**
+        
+        **/alerts?sfc=SimpleMediaService&sfci=SimpleMediaService-1&policy=rtt_deviation&trigger=increase_in_rtt**
+     
+    * Response:
+    
+        The response of this request is a JSON-formatted content, which contains the task and topic identifiers, along with the Kapacitor
+        API endpoints to use for configuring the given task, topic and the respective handlers.
+        
+        Returns a 400 Bad Request if the URL query string parameters are invalid or otherwise incorrect.
+        
+    * Response Body Example:
+    
+        ```json
+        {
+            "task_identifier": "094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
+            "topic_identifier": "094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
+            "task_api_endpoint": "/kapacitor/v1/tasks/094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
+            "topic_api_endpoint": "/kapacitor/v1/alerts/topics/094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388",
+            "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/094f23d6e948c78e9fa215528973fb3aeefa5525898626c9ea049dc8e87a7388/handlers"
+        }
+        ```
+
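As the response example shows, the Kapacitor API endpoint fields are derived from the task and topic identifiers by simple path concatenation, so a client can reconstruct them locally (a sketch - `kapacitor_endpoints` is an illustrative helper, not part of the service):

```python
def kapacitor_endpoints(task_identifier, topic_identifier):
    """Derive the Kapacitor API endpoint paths from the returned identifiers."""
    return {
        "task_api_endpoint": "/kapacitor/v1/tasks/{0}".format(task_identifier),
        "topic_api_endpoint": "/kapacitor/v1/alerts/topics/{0}".format(topic_identifier),
        "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/{0}/handlers".format(topic_identifier),
    }
```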
 ## Graph API Endpoints
 
 * **Assumptions**
diff --git a/src/service/clmcservice/__init__.py b/src/service/clmcservice/__init__.py
index 6b5c6e48116c354697b98c227f0c0e3903791205..c17e71b9506d71736ab7eeeba4ce71af9d4150eb 100644
--- a/src/service/clmcservice/__init__.py
+++ b/src/service/clmcservice/__init__.py
@@ -76,6 +76,7 @@ def main(global_config, **settings):
 
     # add routes of the Alerts Configuration API
     config.add_route('alerts_configuration', '/alerts')
+    config.add_route('alerts_configuration_instance', '/alerts/{sfc_id}/{sfc_instance_id}')
 
-    config.scan()  # This method scans the packages and finds any views related to the routes added in the app configuration
+    config.scan()  # this method scans the packages and finds any views related to the routes added in the app configuration
     return config.make_wsgi_app()
diff --git a/src/service/clmcservice/alertsapi/tests.py b/src/service/clmcservice/alertsapi/tests.py
index 8fd2ac04547db1eb5876df0f6d4afd44bdfb05e2..a24b5970154dd664885908286e714dd4d7e7092e 100644
--- a/src/service/clmcservice/alertsapi/tests.py
+++ b/src/service/clmcservice/alertsapi/tests.py
@@ -36,7 +36,7 @@ from requests import get, delete
 from toscaparser.tosca_template import ToscaTemplate
 
 # CLMC-service imports
-from clmcservice.alertsapi.utilities import adjust_tosca_definitions_import
+from clmcservice.alertsapi.utilities import adjust_tosca_definitions_import, SFEMC
 from clmcservice.alertsapi.alerts_specification_schema import validate_clmc_alerts_specification
 from clmcservice.alertsapi.views import AlertsConfigurationAPI
 from clmcservice import ROOT_DIR
@@ -61,8 +61,8 @@ class TestAlertsConfigurationAPI(object):
         A fixture to implement setUp/tearDown functionality for all tests by initializing configuration structure for the web service
         """
 
-        self.registry = testing.setUp()
-        self.registry.add_settings({"kapacitor_host": "localhost", "kapacitor_port": 9092, "sfemc_fqdn": "sfemc.localhost", "sfemc_port": 8081})
+        self.config = testing.setUp()
+        self.config.add_settings({"kapacitor_host": "localhost", "kapacitor_port": 9092, "sfemc_fqdn": "sfemc.localhost", "sfemc_port": 8081})
 
         yield
 
@@ -149,38 +149,32 @@ class TestAlertsConfigurationAPI(object):
 
     def test_alerts_config_api_post(self, app_config):
         """
-        Tests the POST API endpoint of the alerts configuration API responsible for receiving alerts specifications.
+        Tests the POST API endpoint of the alerts configuration API responsible for creating alerts.
 
         Test steps are:
-            * Traverse all valid TOSCA Alerts Specifications in the
-                src/service/clmcservice/resources/tosca/test-data/clmc-validator/valid and src/service/clmcservice/resources/tosca/test-data/tosca-parser/valid
-            * Sending a valid TOSCA Alert Specification to the view responsible for configuring Kapacitor
+            * Traverse all valid TOSCA Alerts Specifications and TOSCA Resource Specifications in the
+                src/service/clmcservice/resources/tosca/test-data/clmc-validator/valid and src/service/clmcservice/resources/tosca/test-data/tosca-parser/valid folders
+            * Send a valid TOSCA Alert Specification to the view responsible for configuring Kapacitor and creating alerts (POST request)
             * Check that Kapacitor alerts, topics and handlers are created with the correct identifier and arguments
+            * Check that the API returns the duplication errors if the same alerts specification is sent through a POST request
+            * Check that the API returns no duplication errors if the same alerts specification is sent through a PUT request
+            * Clean up the registered alerts
 
         :param app_config: fixture for setUp/tearDown of the web service registry
         """
 
-        test_folder = "clmc-validator"
-        alerts_test_data_path = join(dirname(ROOT_DIR), *["resources", "tosca", "test-data", test_folder, "valid"])
-        resources_test_data_path = join(dirname(ROOT_DIR), *["resources", "tosca", "test-data", "resource-spec"])
+        kapacitor_host = self.config.registry.settings["kapacitor_host"]
+        kapacitor_port = self.config.registry.settings["kapacitor_port"]
 
-        for alerts_test_file in listdir(alerts_test_data_path):
-            alert_spec_abs_path = join(alerts_test_data_path, alerts_test_file)
+        # test all of the test files provided by the path_generator_testfiles function
+        for alert_spec_file_paths, valid_resource_spec_file_paths, invalid_resource_spec_file_paths in path_generator_testfiles():
 
-            if not isfile(alert_spec_abs_path):
-                continue  # skip directories
-
-            print("Testing file {0} in folder {1}".format(alerts_test_file, test_folder))
-
-            valid_resources_test_file = alerts_test_file.replace("alerts", "resources_valid")
-            invalid_resources_test_file = alerts_test_file.replace("alerts", "resources_invalid")
-            valid_resource_spec_abs_path = join(resources_test_data_path, valid_resources_test_file)
-            invalid_resource_spec_abs_path = join(resources_test_data_path, invalid_resources_test_file)
-
-            print("Test uses resource spec. files {0} and {1}".format(valid_resources_test_file, invalid_resources_test_file))
+            alert_spec_abs_path, alerts_test_file = alert_spec_file_paths  # absolute path and name of the alerts spec. file
+            valid_resource_spec_abs_path, valid_resources_test_file = valid_resource_spec_file_paths  # absolute path and name of a valid resource spec. file
+            invalid_resource_spec_abs_path, invalid_resources_test_file = invalid_resource_spec_file_paths  # absolute path and name of an invalid resource spec. file
 
             with open(alert_spec_abs_path) as alert_spec:
-                # first send an inconsistent resource spec
+                # first send an inconsistent resource spec, expecting bad request
                 with open(invalid_resource_spec_abs_path) as invalid_resource_spec:
                     request = testing.DummyRequest()
                     request.POST['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)
@@ -191,12 +185,16 @@ class TestAlertsConfigurationAPI(object):
                     except HTTPBadRequest:
                         pass  # we expect this to happen
 
+                # reset the read pointer of the alert specification file since it was already read once
                 alert_spec.seek(0)
+
+                # extract the alert specification data in a structured way
+                sfc, sfc_instance, alerts = extract_alert_configuration_data(alert_spec, self.config.registry.settings["sfemc_fqdn"], self.config.registry.settings["sfemc_port"])
+                alert_spec.seek(0)  # reset the read pointer to the beginning again (the extraction in the previous step had to read the full file)
+
                 # then send a consistent resource spec
                 with open(valid_resource_spec_abs_path) as valid_resource_spec:
                     request = testing.DummyRequest()
-                    sfc, sfc_instance, alert_ids, topic_handlers = extract_alert_spec_data(alert_spec)
-                    alert_spec.seek(0)
                     request.POST['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)  # a simple mock class is used to mimic the FieldStorage class
                     request.POST['resource-spec'] = FieldStorageMock(valid_resources_test_file, valid_resource_spec)
                     clmc_service_response = AlertsConfigurationAPI(request).post_alerts_specification()
@@ -206,33 +204,8 @@ class TestAlertsConfigurationAPI(object):
             assert "triggers_specification_errors" not in clmc_service_response, "Unexpected error was returned for triggers specification"
             assert "triggers_action_errors" not in clmc_service_response, "Unexpected error was returned for handlers specification"
 
-            # traverse through all alert IDs and check that they are created within Kapacitor
-            for alert_id, alert_type in alert_ids:
-                kapacitor_response = get("http://localhost:9092/kapacitor/v1/tasks/{0}".format(alert_id))
-                assert kapacitor_response.status_code == 200, "Alert with ID {0} was not created - test file {1}.".format(alert_id, alerts_test_file)
-                kapacitor_response_json = kapacitor_response.json()
-                assert "link" in kapacitor_response_json, "Incorrect response from kapacitor for alert with ID {0} - test file {1}".format(alert_id, alerts_test_file)
-                assert kapacitor_response_json["status"] == "enabled", "Alert with ID {0} was created but is disabled - test file {1}".format(alert_id, alerts_test_file)
-                assert kapacitor_response_json["executing"], "Alert with ID {0} was created and is enabled, but is not executing - test file {1}".format(alert_id, alerts_test_file)
-                assert kapacitor_response_json["type"] == alert_type,  "Alert with ID {0} was created with the wrong type - test file {1}".format(alert_id, alerts_test_file)
-
-            # check that all topic IDs were registered within Kapacitor
-            topic_ids = list(topic_handlers.keys())
-            kapacitor_response = get("http://localhost:9092/kapacitor/v1/alerts/topics")
-            assert kapacitor_response.status_code == 200, "Kapacitor couldn't return the list of created topics - test file {0}".format(alerts_test_file)
-            kapacitor_response_json = kapacitor_response.json()
-            kapacitor_defined_topics = [topic["id"] for topic in kapacitor_response_json["topics"]]
-            assert set(topic_ids).issubset(kapacitor_defined_topics), "Not all topic IDs were created within kapacitor - test file {0}".format(alerts_test_file)
-
-            # check that all handler IDs were created and each of them is subscribed to the correct topic ID
-            for topic_id in topic_handlers:
-                for handler_id, handler_url in topic_handlers[topic_id]:
-                    kapacitor_response = get("http://localhost:9092/kapacitor/v1/alerts/topics/{0}/handlers/{1}".format(topic_id, handler_id))
-                    assert kapacitor_response.status_code == 200, "Handler with ID {0} for topic with ID {1} doesn't exist - test file {2}".format(handler_id, topic_id, alerts_test_file)
-                    kapacitor_response_json = kapacitor_response.json()
-                    assert kapacitor_response_json["id"] == handler_id, "Incorrect ID of handler {0} in the Kapacitor response - test file {1}".format(handler_id, alerts_test_file)
-                    assert kapacitor_response_json["kind"] == "post", "Incorrect kind of handler {0} in the Kapacitor response - test file {1}".format(handler_id, alerts_test_file)
-                    assert kapacitor_response_json["options"]["url"], "Incorrect url of handler {0} in the Kapacitor response - test file {1}".format(handler_id, alerts_test_file)
+            # traverse through all alerts and check that they are created along with their respective handlers within Kapacitor
+            check_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port, alerts_test_file)
 
             # send the same spec again to check that error messages are returned (because of ID duplication)
             with open(alert_spec_abs_path) as alert_spec:
@@ -243,13 +216,243 @@ class TestAlertsConfigurationAPI(object):
             assert (sfc, sfc_instance) == (clmc_service_response["service_function_chain_id"], clmc_service_response["service_function_chain_instance_id"]), \
                 "Incorrect extraction of metadata for file {0}". format(alerts_test_file)
 
-            assert len(clmc_service_response["triggers_specification_errors"]) == len(alert_ids), "Expected errors were not returned for triggers specification"
-            handlers_count = sum([len(topic_handlers[topic]) for topic in topic_handlers])
+            assert len(clmc_service_response["triggers_specification_errors"]) == len(alerts), "Expected errors were not returned for triggers specification"
+            handlers_count = sum([len(alert["handlers"]) for alert in alerts])
             assert len(clmc_service_response["triggers_action_errors"]) == handlers_count, "Expected errors were not returned for handlers specification"
 
-            clear_kapacitor_alerts(alert_ids, topic_handlers)
+            # send the same request but as a PUT method instead of POST
+            with open(alert_spec_abs_path) as alert_spec:
+                with open(valid_resource_spec_abs_path) as valid_resource_spec:
+                    request.params['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)  # a simple mock class is used to mimic the FieldStorage class
+                    request.params['resource-spec'] = FieldStorageMock(valid_resources_test_file, valid_resource_spec)
+                    clmc_service_response = AlertsConfigurationAPI(request).put_alerts_specification()
+
+            # no errors are expected now, since the PUT request must update existing alerts
+            assert (sfc, sfc_instance) == (clmc_service_response["service_function_chain_id"], clmc_service_response["service_function_chain_instance_id"]), \
+                "Incorrect extraction of metadata for file {0}". format(alerts_test_file)
+            assert "triggers_specification_errors" not in clmc_service_response, "Unexpected error was returned for triggers specification"
+            assert "triggers_action_errors" not in clmc_service_response, "Unexpected error was returned for handlers specification"
+
+            # traverse through all alerts and check that they were not deleted by the PUT request
+            check_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port, alerts_test_file)
+
+            # clean up at the end of the test
+            clean_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port)
+
+    def test_alerts_config_api_put(self, app_config):
+        """
+        Tests the PUT API endpoint of the alerts configuration API responsible for creating or updating alerts.
+
+        Test steps are:
+            * Traverse all valid TOSCA Alerts Specifications and TOSCA Resource Specifications in the
+                src/service/clmcservice/resources/tosca/test-data/clmc-validator/valid and src/service/clmcservice/resources/tosca/test-data/tosca-parser/valid folders
+            * Send a valid TOSCA Alert Specification to the view responsible for configuring Kapacitor and creating/updating alerts (PUT request)
+            * Check that Kapacitor alerts, topics and handlers are created with the correct identifier and arguments
+            * Clean up the registered alerts
+
+        :param app_config: fixture for setUp/tearDown of the web service registry
+        """
+
+        kapacitor_host = self.config.registry.settings["kapacitor_host"]
+        kapacitor_port = self.config.registry.settings["kapacitor_port"]
+
+        # test all of the test files provided by the path_generator_testfiles function, ignoring the last element of each result - the invalid resource spec. files
+        for alert_spec_file_paths, valid_resource_spec_file_paths, _ in path_generator_testfiles():
+            alert_spec_abs_path, alerts_test_file = alert_spec_file_paths  # absolute path and name of the alerts spec. file
+            resource_spec_abs_path, resources_test_file = valid_resource_spec_file_paths  # absolute path and name of a valid resource spec. file
+
+            with open(alert_spec_abs_path) as alert_spec:
+                # extract alert configuration data
+                sfc, sfc_instance, alerts = extract_alert_configuration_data(alert_spec, self.config.registry.settings["sfemc_fqdn"], self.config.registry.settings["sfemc_port"])
+                alert_spec.seek(0)  # reset the read pointer to the beginning again (the extraction in the previous step had to read the full file)
+
+                # send valid alert and resource spec to create the alerts with a PUT request (can be used for updating or creating)
+                with open(resource_spec_abs_path) as resource_spec:
+                    request = testing.DummyRequest()
+                    request.params['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)  # a simple mock class is used to mimic the FieldStorage class
+                    request.params['resource-spec'] = FieldStorageMock(resources_test_file, resource_spec)
+                    clmc_service_response = AlertsConfigurationAPI(request).put_alerts_specification()
+
+            # no errors are expected, since the PUT request creates the alerts if they do not exist yet
+            assert (sfc, sfc_instance) == (clmc_service_response["service_function_chain_id"], clmc_service_response["service_function_chain_instance_id"]), \
+                "Incorrect extraction of metadata for file {0}".format(alerts_test_file)
+            assert "triggers_specification_errors" not in clmc_service_response, "Unexpected error was returned for triggers specification"
+            assert "triggers_action_errors" not in clmc_service_response, "Unexpected error was returned for handlers specification"
+
+            # traverse through all alerts and check that they were created by the PUT request
+            check_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port, alerts_test_file)
+
+            # clean up at the end of the test
+            clean_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port)
+
+    def test_alerts_config_api_get(self, app_config):
+        """
+        Tests the GET API endpoint of the alerts configuration API responsible for fetching registered alerts for a specific SFC instance.
+
+        Test steps are:
+            * Traverse all valid TOSCA Alerts Specifications and TOSCA Resource Specifications in the
+                src/service/clmcservice/resources/tosca/test-data/clmc-validator/valid and src/service/clmcservice/resources/tosca/test-data/tosca-parser/valid folders
+            * Send a valid TOSCA Alert Specification to the view responsible for configuring Kapacitor and creating alerts
+            * Send a valid TOSCA Alert Specification to the view responsible for getting the created alerts
+            * Check that all alerts and Kapacitor resources (task, topic, handler urls) have been correctly returned
+            * Clean up the alerts
+
+        :param app_config: fixture for setUp/tearDown of the web service registry
+        """
+
+        kapacitor_host = self.config.registry.settings["kapacitor_host"]
+        kapacitor_port = self.config.registry.settings["kapacitor_port"]
+        sfemc_fqdn = self.config.registry.settings["sfemc_fqdn"]
+        sfemc_port = self.config.registry.settings["sfemc_port"]
+        fqdn_prefix = "http://{0}:{1}/sfemc/event".format(sfemc_fqdn, sfemc_port)
+
+        # test all of the test files provided by the path_generator_testfiles function, ignoring the last element of each result - the invalid resource spec. files
+        for alert_spec_file_paths, valid_resource_spec_file_paths, _ in path_generator_testfiles():
+
+            alert_spec_abs_path, alerts_test_file = alert_spec_file_paths  # absolute path and name of the alerts spec. file
+            resource_spec_abs_path, resources_test_file = valid_resource_spec_file_paths  # absolute path and name of a valid resource spec. file
+
+            with open(alert_spec_abs_path) as alert_spec:
+
+                # extract alert configuration data
+                sfc, sfc_instance, alerts = extract_alert_configuration_data(alert_spec, self.config.registry.settings["sfemc_fqdn"], self.config.registry.settings["sfemc_port"])
+                alert_spec.seek(0)  # reset the read pointer to the beginning again (the extraction in the previous step had to read the full file)
+
+                # send a GET request for registered alerts, expecting an empty list since the alerts have not been created yet
+                request = testing.DummyRequest()
+                request.matchdict["sfc_id"] = sfc
+                request.matchdict["sfc_instance_id"] = sfc_instance
+                response = AlertsConfigurationAPI(request).get_alerts()
+                assert response == [], "Incorrect response when fetching registered alerts, expecting empty list"
+
+                # send valid alert and resource spec to create the alerts to be fetched afterwards through a GET request
+                with open(resource_spec_abs_path) as resource_spec:
+                    request = testing.DummyRequest()
+                    request.POST['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)  # a simple mock class is used to mimic the FieldStorage class
+                    request.POST['resource-spec'] = FieldStorageMock(resources_test_file, resource_spec)
+                    AlertsConfigurationAPI(request).post_alerts_specification()
+
+            # send a GET request for registered alerts, expecting the newly created alerts
+            request = testing.DummyRequest()
+            request.matchdict["sfc_id"] = sfc
+            request.matchdict["sfc_instance_id"] = sfc_instance
+            response = AlertsConfigurationAPI(request).get_alerts()
+
+            # restructure the extracted alerts data to be comparable with the response of the clmc service
+            expected_alerts = map(lambda alert_object: {
+                "policy": alert_object["policy"], "trigger": alert_object["trigger"],
+                # make sure handlers are sorted and replace URLs starting with the SFEMC FQDN prefix with the SFEMC label
+                "handlers": sorted(map(lambda handler: SFEMC if handler.startswith(fqdn_prefix) else handler, alert_object["handlers"].values())),
+                "task_identifier": alert_object["task"], "topic_identifier": alert_object["topic"],
+                "task_api_endpoint": "/kapacitor/v1/tasks/{0}".format(alert_object["task"]),
+                "topic_api_endpoint": "/kapacitor/v1/alerts/topics/{0}".format(alert_object["topic"]),
+                "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/{0}/handlers".format(alert_object["topic"])
+            }, alerts)
+            expected_alerts = sorted(expected_alerts, key=lambda x: x["trigger"])
+            # sort the handlers of each alert in the response to ensure comparison order is correct
+            for alert in response:
+                alert["handlers"] = sorted(alert["handlers"])
+
+            # compare the actual response with the expected response
+            assert sorted(response, key=lambda x: x["trigger"]) == expected_alerts, "Incorrect result returned from a GET alerts request"
+
+            clean_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port)
+
+    def test_alerts_config_api_delete(self, app_config):
+        """
+        Tests the DELETE API endpoint of the alerts configuration API responsible for deleting alerts specifications.
+
+        Test steps are:
+            * Traverse all valid TOSCA Alerts Specifications and TOSCA Resource Specifications in the
+                src/service/clmcservice/resources/tosca/test-data/clmc-validator/valid and src/service/clmcservice/resources/tosca/test-data/tosca-parser/valid folders
+            * Send a valid TOSCA Alert Specification to the view responsible for configuring Kapacitor and creating alerts
+            * Send a valid TOSCA Alert Specification to the view responsible for deleting the created alerts if they exist
+            * Check that all Kapacitor resources (task, topic, handler) have been deleted
+
+        :param app_config: fixture for setUp/tearDown of the web service registry
+        """
+
+        kapacitor_host = self.config.registry.settings["kapacitor_host"]
+        kapacitor_port = self.config.registry.settings["kapacitor_port"]
+        sfemc_fqdn = self.config.registry.settings["sfemc_fqdn"]
+        sfemc_port = self.config.registry.settings["sfemc_port"]
+
+        # test all of the test files provided by the path_generator_testfiles function, ignoring the last result, which contains the invalid resource spec. files
+        for alert_spec_file_paths, valid_resource_spec_file_paths, _ in path_generator_testfiles():
+
+            alert_spec_abs_path, alerts_test_file = alert_spec_file_paths  # absolute path and name of the alerts spec. file
+            resource_spec_abs_path, resources_test_file = valid_resource_spec_file_paths  # absolute path and name of a valid resource spec. file
+
+            with open(alert_spec_abs_path) as alert_spec:
+
+                # extract the alert specification data, ignore SFC and SFC instance IDs
+                _, _, alerts = extract_alert_configuration_data(alert_spec, sfemc_fqdn, sfemc_port)
+                alert_spec.seek(0)  # reset the read pointer to the beginning again (the extraction in the previous step had to read the full file)
+
+                # send valid alert and resource spec to create the alerts to be deleted afterwards
+                with open(resource_spec_abs_path) as resource_spec:
+                    request = testing.DummyRequest()
+                    request.POST['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)  # a simple mock class is used to mimic the FieldStorage class
+                    request.POST['resource-spec'] = FieldStorageMock(resources_test_file, resource_spec)
+                    AlertsConfigurationAPI(request).post_alerts_specification()
+
+            # ensure that these resources (tasks, topics, handlers) exist in Kapacitor
+            # traverse through all alerts and check that everything is created
+            check_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port, alerts_test_file)
+
+            with open(alert_spec_abs_path) as alert_spec:
+                # now send the alert spec for deletion and check that everything is deleted in Kapacitor
+                request = testing.DummyRequest()
+                request.params['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)
+                response = AlertsConfigurationAPI(request).delete_alerts_specification()
+
+            # restructure the extracted alerts data to be comparable with the response of the clmc service
+            deleted_alerts = map(lambda alert_object: {"policy": alert_object["policy"], "trigger": alert_object["trigger"]}, alerts)
+            deleted_alerts = sorted(deleted_alerts, key=lambda x: x["trigger"])
+            deleted_handlers = []
+            for alert in alerts:
+                policy = alert["policy"]
+                trigger = alert["trigger"]
+                for handler_id in alert["handlers"]:
+                    handler_url = alert["handlers"][handler_id]
+                    if handler_url.startswith("http://{0}:{1}/sfemc/event".format(sfemc_fqdn, sfemc_port)):
+                        handler_url = SFEMC
+                    deleted_handlers.append({"policy": policy, "trigger": trigger, "handler": handler_url})
+            deleted_handlers = sorted(deleted_handlers, key=lambda x: (x["trigger"], x["handler"]))
+
+            # assert that the response is as expected, containing the deleted alerts and handlers
+            assert response.keys() == {"deleted_alerts", "deleted_handlers"}, "Incorrect response format"
+            assert sorted(response["deleted_alerts"], key=lambda x: x["trigger"]) == deleted_alerts, "Incorrect result for deleted alerts"
+            assert sorted(response["deleted_handlers"], key=lambda x: (x["trigger"], x["handler"])) == deleted_handlers, "Incorrect result for deleted handlers"
+
+            # ensure that these resources (tasks, topics, handlers) no longer exist in Kapacitor
+            # traverse through all alerts and check that everything is deleted from Kapacitor
+            for alert in alerts:
+                # check the alert task
+                alert_id = alert["task"]
+                kapacitor_response = get("http://{0}:{1}/kapacitor/v1/tasks/{2}".format(kapacitor_host, kapacitor_port, alert_id))
+                assert kapacitor_response.status_code == 404, "Alert with ID {0} was not deleted - test file {1}.".format(alert_id, alerts_test_file)
+
+                # check the alert topic
+                topic_id = alert["topic"]
+                kapacitor_response = get("http://{0}:{1}/kapacitor/v1/alerts/topics/{2}".format(kapacitor_host, kapacitor_port, topic_id))
+                assert kapacitor_response.status_code == 404, "Topic with ID {0} was not deleted - test file {1}".format(topic_id, alerts_test_file)
+
+                # check handlers
+                for handler_id in alert["handlers"]:
+                    kapacitor_response = get("http://{0}:{1}/kapacitor/v1/alerts/topics/{2}/handlers/{3}".format(kapacitor_host, kapacitor_port, topic_id, handler_id))
+                    assert kapacitor_response.status_code == 404, "Handler with ID {0} for topic with ID {1} was not deleted - test file {2}".format(handler_id, topic_id, alerts_test_file)
+
+            # now send a second delete request to ensure that nothing else is deleted
+            with open(alert_spec_abs_path) as alert_spec:
+                request = testing.DummyRequest()
+                request.params['alert-spec'] = FieldStorageMock(alerts_test_file, alert_spec)
+                response = AlertsConfigurationAPI(request).delete_alerts_specification()
 
+            assert response == {"deleted_alerts": [], "deleted_handlers": []}, "Incorrect response after a second delete"
 
+
+# The implementation below contains utility methods and classes used by the tests above.
 class FieldStorageMock(object):
 
     def __init__(self, filename, file):
@@ -264,12 +467,47 @@ class FieldStorageMock(object):
         self.file = file
 
 
-def extract_alert_spec_data(alert_spec):
+def path_generator_testfiles():
+    """
+    A utility generator function for traversing the valid CLMC alert specifications and their respective valid/invalid resource specifications.
+
+    :return: a generator object that yields triples of (absolute path, file name) pairs - one each for the alert spec., the valid resource spec. and the invalid resource spec.
     """
-    A utility function to extract the expected alert, handler and topic identifiers from a given alert specification.
+
+    test_folder = "clmc-validator"
+    alerts_test_data_path = join(dirname(ROOT_DIR), *["resources", "tosca", "test-data", test_folder, "valid"])
+    resources_test_data_path = join(dirname(ROOT_DIR), *["resources", "tosca", "test-data", "resource-spec"])
+
+    # traverse through all files in the clmc-validator/valid folder (expected to be valid TOSCA alert specifications)
+    for alerts_test_file in listdir(alerts_test_data_path):
+        alert_spec_abs_path = join(alerts_test_data_path, alerts_test_file)
+
+        if not isfile(alert_spec_abs_path):
+            continue  # skip directories
+
+        print("Testing file {0} in folder {1}".format(alerts_test_file, test_folder))
+
+        # the respective resource specification consistent with this alert spec. will have the same name, with "alerts"
+        # being replaced by "resources_valid" for valid spec or "resources_invalid" for invalid spec
+        valid_resources_test_file = alerts_test_file.replace("alerts", "resources_valid")
+        invalid_resources_test_file = alerts_test_file.replace("alerts", "resources_invalid")
+        valid_resource_spec_abs_path = join(resources_test_data_path, valid_resources_test_file)
+        invalid_resource_spec_abs_path = join(resources_test_data_path, invalid_resources_test_file)
+
+        print("Test uses resource spec. files {0} and {1}".format(valid_resources_test_file, invalid_resources_test_file))
+
+        yield (alert_spec_abs_path, alerts_test_file), (valid_resource_spec_abs_path, valid_resources_test_file), (invalid_resource_spec_abs_path, invalid_resources_test_file)
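The file-naming convention described in the comments above can be sketched in isolation; the file name here is hypothetical, but the `replace` transformation matches the generator's logic:

```python
# Minimal sketch of the resource spec. naming convention (hypothetical file name):
# a resource spec. shares the alert spec. name, with "alerts" swapped for
# "resources_valid" or "resources_invalid"
alerts_test_file = "alerts_test_config-1.yaml"

valid_resources_test_file = alerts_test_file.replace("alerts", "resources_valid")
invalid_resources_test_file = alerts_test_file.replace("alerts", "resources_invalid")

print(valid_resources_test_file)    # resources_valid_test_config-1.yaml
print(invalid_resources_test_file)  # resources_invalid_test_config-1.yaml
```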
+
+
+def extract_alert_configuration_data(alert_spec, sfemc_fqdn, sfemc_port):
+    """
+    A utility function to extract the expected alert, handler and topic identifiers (Kapacitor resources) from a given alert specification.
 
     :param alert_spec: the alert specification file (file object)
-    :return: a tuple containing sfc_id and sfc_instance_id along with a list and a dictionary of generated IDs (alert IDs (list), topic IDs linked to handler IDs (dict))
+    :param sfemc_fqdn: FQDN of SFEMC
+    :param sfemc_port: port number of SFEMC
+
+    :return: a tuple of the SFC ID, the SFC instance ID and a list of alert objects, each containing policy ID, trigger ID, handlers, generated task/topic ID and alert type
     """
 
     version = splitext(alert_spec.name)[0].split("-")[-1]  # take the ending number of the alert spec file
@@ -280,8 +518,7 @@ def extract_alert_spec_data(alert_spec):
     # sfc, sfc_instance = tosca_tpl.tpl["metadata"]["sfc"], tosca_tpl.tpl["metadata"]["sfci"]
     sfc, sfc_instance = tosca_tpl.tpl["metadata"]["servicefunctionchain"], "{0}_1".format(tosca_tpl.tpl["metadata"]["servicefunctionchain"])
 
-    alert_ids = []  # saves all alert IDs in a list
-    topic_handlers = {}  # saves all topics in a dictionary, each topic is linked to a list of handler pairs (a handler pair consists of handler id and handler url)
+    alerts = []  # saves every alert object
 
     for policy in tosca_tpl.policies:
         policy_id = policy.name
@@ -292,22 +529,23 @@ def extract_alert_spec_data(alert_spec):
 
             topic_id = "{0}\n{1}\n{2}\n{3}".format(sfc, sfc_instance, policy_id, trigger_id)
             topic_id = AlertsConfigurationAPI.get_hash(topic_id)
-            topic_handlers[topic_id] = []
 
             alert_id = topic_id
             alert_type = get_alert_type(event_type, alert_period_integer)
-            alert_ids.append((alert_id, alert_type))
 
+            alert_handlers = {}
             for handler_url in trigger.trigger_tpl["action"]["implementation"]:
 
-                if handler_url == "flame_sfemc":
-                    handler_url = "http://sfemc.localhost:8081/sfemc/event/{0}/{1}/{2}".format(sfc, policy_id, "trigger_id_{0}".format(version))
+                if handler_url == SFEMC:
+                    handler_url = "http://{0}:{1}/sfemc/event/{2}/{3}/{4}".format(sfemc_fqdn, sfemc_port, sfc, policy_id, "trigger_id_{0}".format(version))
 
                 handler_id = "{0}\n{1}\n{2}\n{3}\n{4}".format(sfc, sfc_instance, policy_id, trigger_id, handler_url)
                 handler_id = AlertsConfigurationAPI.get_hash(handler_id)
-                topic_handlers[topic_id].append((handler_id, handler_url))
+                alert_handlers[handler_id] = handler_url
+
+            alerts.append({"policy": policy_id, "trigger": trigger_id, "task": alert_id, "topic": topic_id, "type": alert_type, "handlers": alert_handlers})
 
-    return sfc, sfc_instance, alert_ids, topic_handlers
+    return sfc, sfc_instance, alerts
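The identifier derivation above can be sketched on its own; `get_hash` here is a hypothetical stand-in for `AlertsConfigurationAPI.get_hash`, assumed to be a SHA-256 hex digest (consistent with the 64-character identifiers shown in the docs), and the SFC/policy/trigger names are illustrative:

```python
import hashlib


def get_hash(value):
    # hypothetical stand-in for AlertsConfigurationAPI.get_hash (assumed SHA-256 hex digest)
    return hashlib.sha256(value.encode("utf-8")).hexdigest()


sfc, sfc_instance, policy_id, trigger_id = "MSDemo", "MSDemo_1", "requests_diff", "low_requests"

# the topic ID is scoped per SFC instance, policy and trigger; the task ID reuses it
topic_id = get_hash("{0}\n{1}\n{2}\n{3}".format(sfc, sfc_instance, policy_id, trigger_id))

# each handler ID additionally includes the handler URL, so distinct handlers hash differently
handler_id = get_hash("{0}\n{1}\n{2}\n{3}\n{4}".format(sfc, sfc_instance, policy_id, trigger_id, "http://handler.example:8080/"))
```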
 
 
 def get_alert_type(event_type, alert_period):
@@ -321,7 +559,7 @@ def get_alert_type(event_type, alert_period):
     """
 
     if event_type in AlertsConfigurationAPI.DUAL_VERSION_TEMPLATES:
-        if alert_period <= AlertsConfigurationAPI.STREAM_PERIOD_LIMIT:
+        if alert_period < AlertsConfigurationAPI.STREAM_PERIOD_LIMIT:
             return "stream"
         else:
             return "batch"
@@ -330,22 +568,70 @@ def get_alert_type(event_type, alert_period):
         return events[event_type]
 
 
-def clear_kapacitor_alerts(alert_ids, topic_handlers):
+def check_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port, alerts_test_file):
+    """
+    Check that all Kapacitor resources are created with the expected values.
+
+    :param alerts: the list of alert objects to test
+    :param kapacitor_host: hostname of kapacitor
+    :param kapacitor_port: port number of kapacitor
+    :param alerts_test_file: name of the current test file
+    """
+
+    for alert in alerts:
+
+        alert_id = alert["task"]
+        alert_type = alert["type"]
+        topic_id = alert["topic"]
+
+        # check the Kapacitor task
+        kapacitor_response = get("http://{0}:{1}/kapacitor/v1/tasks/{2}".format(kapacitor_host, kapacitor_port, alert_id))
+        assert kapacitor_response.status_code == 200, "Alert with ID {0} was not created - test file {1}.".format(alert_id, alerts_test_file)
+        kapacitor_response_json = kapacitor_response.json()
+        assert "link" in kapacitor_response_json, "Incorrect response from kapacitor for alert with ID {0} - test file {1}".format(alert_id, alerts_test_file)
+        assert kapacitor_response_json["status"] == "enabled", "Alert with ID {0} was created but is disabled - test file {1}".format(alert_id, alerts_test_file)
+        assert kapacitor_response_json["executing"], "Alert with ID {0} was created and is enabled, but is not executing - test file {1}".format(alert_id, alerts_test_file)
+        assert kapacitor_response_json["type"] == alert_type, "Alert with ID {0} was created with the wrong type - test file {1}".format(alert_id, alerts_test_file)
+
+        # check the Kapacitor topic
+        kapacitor_response = get("http://{0}:{1}/kapacitor/v1/alerts/topics/{2}".format(kapacitor_host, kapacitor_port, topic_id))
+        assert kapacitor_response.status_code == 200, "Topic with ID {0} was not created - test file {1}".format(topic_id, alerts_test_file)
+        kapacitor_response_json = kapacitor_response.json()
+        assert kapacitor_response_json["id"] == topic_id, "Topic {0} was created with incorrect ID - test file {1}".format(topic_id, alerts_test_file)
+
+        # check that all handler IDs were created and each of them is subscribed to the correct topic ID
+        for handler_id in alert["handlers"]:
+            handler_url = alert["handlers"][handler_id]
+            kapacitor_response = get("http://{0}:{1}/kapacitor/v1/alerts/topics/{2}/handlers/{3}".format(kapacitor_host, kapacitor_port, topic_id, handler_id))
+            assert kapacitor_response.status_code == 200, "Handler with ID {0} for topic with ID {1} doesn't exist - test file {2}".format(handler_id, topic_id, alerts_test_file)
+            kapacitor_response_json = kapacitor_response.json()
+            assert kapacitor_response_json["id"] == handler_id, "Incorrect ID of handler {0} in the Kapacitor response - test file {1}".format(handler_id, alerts_test_file)
+            assert kapacitor_response_json["kind"] == "post", "Incorrect kind of handler {0} in the Kapacitor response - test file {1}".format(handler_id, alerts_test_file)
+            assert kapacitor_response_json["options"]["url"] == handler_url, "Incorrect url of handler {0} in the Kapacitor response - test file {1}".format(handler_id, alerts_test_file)
+
+
+def clean_kapacitor_alerts(alerts, kapacitor_host, kapacitor_port):
     """
     A utility function to clean up Kapacitor from the configured alerts, topics and handlers.
 
-    :param alert_ids: the list of alert IDs to delete
-    :param topic_handlers: the dictionary of topic and handlers to delete
+    :param alerts: the list of alert objects along with handlers and generated identifiers
+    :param kapacitor_host: Kapacitor hostname
+    :param kapacitor_port: Kapacitor port number
     """
 
-    for alert_id, _ in alert_ids:
-        kapacitor_response = delete("http://localhost:9092/kapacitor/v1/tasks/{0}".format(alert_id))  # delete alert
+    for alert in alerts:
+
+        # delete the alert task
+        alert_id = alert["task"]
+        kapacitor_response = delete("http://{0}:{1}/kapacitor/v1/tasks/{2}".format(kapacitor_host, kapacitor_port, alert_id))  # delete alert
         assert kapacitor_response.status_code == 204
 
-    for topic_id in topic_handlers:
-        for handler_id, handler_url in topic_handlers[topic_id]:
-            kapacitor_response = delete("http://localhost:9092/kapacitor/v1/alerts/topics/{0}/handlers/{1}".format(topic_id, handler_id))  # delete handler
+        # delete the handlers
+        topic_id = alert["topic"]
+        for handler_id in alert["handlers"]:
+            kapacitor_response = delete("http://{0}:{1}/kapacitor/v1/alerts/topics/{2}/handlers/{3}".format(kapacitor_host, kapacitor_port, topic_id, handler_id))  # delete handler
             assert kapacitor_response.status_code == 204
 
-        kapacitor_response = delete("http://localhost:9092/kapacitor/v1/alerts/topics/{0}".format(topic_id))  # delete topic
+        # delete the alert topic
+        kapacitor_response = delete("http://{0}:{1}/kapacitor/v1/alerts/topics/{2}".format(kapacitor_host, kapacitor_port, topic_id))  # delete topic
         assert kapacitor_response.status_code == 204
diff --git a/src/service/clmcservice/alertsapi/views.py b/src/service/clmcservice/alertsapi/views.py
index d794e656293407823bf9bf64668e32c0192b6d25..729c55d014c129eeb52c61aeb89145e8e6119568 100644
--- a/src/service/clmcservice/alertsapi/views.py
+++ b/src/service/clmcservice/alertsapi/views.py
@@ -31,7 +31,7 @@ from pyramid.httpexceptions import HTTPBadRequest
 from pyramid.view import view_defaults, view_config
 from yaml import load, YAMLError
 from toscaparser.tosca_template import ToscaTemplate
-from requests import post
+from requests import post, get, delete
 
 # CLMC-service imports
 from clmcservice.alertsapi.utilities import adjust_tosca_definitions_import, TICKScriptTemplateFiller, fill_http_post_handler_vars, get_resource_spec_policy_triggers, get_alert_spec_policy_triggers
@@ -63,6 +63,7 @@ class AlertsConfigurationAPI(object):
     @view_config(route_name='alerts_configuration', request_method='GET')
     def get_alerts_hash(self):
         """
+        (DEPRECATED - superseded by the GET /alerts/<sfc>/<sfc_instance> API method)
         Retrieves hash value for alerts task, topic and handlers based on sfc, sfci, policy and trigger IDs
         """
 
@@ -88,76 +89,194 @@ class AlertsConfigurationAPI(object):
             "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/{0}/handlers".format(topic_id)
         }
 
-    @view_config(route_name='alerts_configuration', request_method='POST')
-    def post_alerts_specification(self):
+    @view_config(route_name='alerts_configuration_instance', request_method='GET')
+    def get_alerts(self):
         """
-        The view for receiving and configuring alerts based on the TOSCA alerts specification document. This endpoint must also receive the TOSCA resources specification document for validation.
+        The view for retrieving all alerts and Kapacitor resources registered for a specific service function chain instance.
+
+        :return: a list of objects, each representing an alert and containing the policy ID, trigger ID, task ID, topic ID and the respective Kapacitor API endpoints
+        """
+
+        kapacitor_host, kapacitor_port = self.request.registry.settings['kapacitor_host'], self.request.registry.settings['kapacitor_port']
+        sfemc_fqdn, sfemc_port = self.request.registry.settings['sfemc_fqdn'], self.request.registry.settings['sfemc_port']
+        fqdn_prefix = "http://{0}:{1}/sfemc/event".format(sfemc_fqdn, sfemc_port)
+
+        # fetch the URL parameters
+        sfc_id, sfc_instance_id = self.request.matchdict["sfc_id"], self.request.matchdict["sfc_instance_id"]
+
+        # get all tasks from kapacitor
+        kapacitor_tasks_url = "http://{0}:{1}/kapacitor/v1/tasks".format(kapacitor_host, kapacitor_port)
+        all_tasks = get(kapacitor_tasks_url).json()["tasks"]
+
+        # fetch all alerts that are configured for this SFC instance
+        sfc_instance_tasks = []
+
+        # traverse every registered task and check if it is configured for this SFC instance
+        for task in all_tasks:
+
+            # get the configured variables of this alert
+            task_config = task["vars"]
+
+            # if configured for this SFC instance
+            if task_config["sfc"]["value"] == sfc_id and task_config["sfci"]["value"] == sfc_instance_id:
+
+                task_id = task["id"]
+                topic_id = task_id
+                policy_id = task_config["policy"]["value"]
+                trigger_id = task_config["eventID"]["value"]
+
+                kapacitor_handlers_relative_url = "/kapacitor/v1/alerts/topics/{0}/handlers".format(topic_id)
+                kapacitor_handlers_full_url = "http://{0}:{1}{2}".format(kapacitor_host, kapacitor_port, kapacitor_handlers_relative_url)
+                handlers = [handler_obj["options"]["url"] for handler_obj in get(kapacitor_handlers_full_url).json()["handlers"]]
+                handlers = list(map(lambda handler: SFEMC if handler.startswith(fqdn_prefix) else handler, handlers))
+
+                # add it to the list of alerts for this SFC instance
+                sfc_instance_tasks.append({"policy": policy_id, "trigger": trigger_id, "handlers": handlers,
+                                           "task_identifier": task_id, "topic_identifier": topic_id,
+                                           "task_api_endpoint": "/kapacitor/v1/tasks/{0}".format(task_id),
+                                           "topic_api_endpoint": "/kapacitor/v1/alerts/topics/{0}".format(topic_id),
+                                           "topic_handlers_api_endpoint": kapacitor_handlers_relative_url})
+
+        return sfc_instance_tasks
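The task-filtering step above can be isolated into a sketch; the payload shape (`vars` with `sfc`/`sfci` values) mirrors what the view reads from Kapacitor, while the sample tasks and IDs are hypothetical:

```python
def filter_tasks_for_instance(tasks, sfc_id, sfc_instance_id):
    # keep only the tasks whose configured variables match the requested SFC instance
    return [task for task in tasks
            if task["vars"]["sfc"]["value"] == sfc_id and task["vars"]["sfci"]["value"] == sfc_instance_id]


tasks = [
    {"id": "task-1", "vars": {"sfc": {"value": "MSDemo"}, "sfci": {"value": "MSDemo_1"}}},
    {"id": "task-2", "vars": {"sfc": {"value": "OtherSFC"}, "sfci": {"value": "OtherSFC_1"}}},
]

matching = filter_tasks_for_instance(tasks, "MSDemo", "MSDemo_1")  # only "task-1" matches
```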
+
+    @view_config(route_name='alerts_configuration', request_method='DELETE')
+    def delete_alerts_specification(self):
+        """
+        The view for deleting alerts based on a TOSCA alerts specification document.
+
+        :return: a dictionary with the deleted alerts and handlers
 
         :raises HTTPBadRequest: if the request doesn't contain a (YAML) file input referenced as alert-spec representing the TOSCA Alerts Specification
         """
 
         kapacitor_host, kapacitor_port = self.request.registry.settings['kapacitor_host'], self.request.registry.settings['kapacitor_port']
+        sfemc_fqdn, sfemc_port = self.request.registry.settings['sfemc_fqdn'], self.request.registry.settings['sfemc_port']
+        fqdn_prefix = "http://{0}:{1}/sfemc/event".format(sfemc_fqdn, sfemc_port)
 
-        alert_spec_reference = self.request.POST.get('alert-spec')
-        resource_spec_reference = self.request.POST.get('resource-spec')
+        # since this is not a POST request, the alert specification file must be fetched from request.params rather than request.POST
+        alert_spec_reference = self.request.params.get('alert-spec')
 
-        # check that the resource specification file was sent
-        if not hasattr(resource_spec_reference, "file") or not hasattr(resource_spec_reference, "filename"):
-            raise HTTPBadRequest("Request to this API endpoint must include a (YAML) file input referenced as 'resource-spec' representing the TOSCA Resource Specification.")
+        # parse the alert specification file into a tosca template (including validation)
+        tosca_tpl = self._parse_alert_spec(alert_spec_reference)
 
-        try:
-            resource_spec_sfc, resource_spec_sfc_i, resource_spec_policy_triggers = get_resource_spec_policy_triggers(resource_spec_reference)
-        except Exception as e:
-            log.error("Couldn't extract resource specification event IDs due to error: {0}".format(e))
-            raise HTTPBadRequest("Couldn't extract resource specification event IDs - invalid TOSCA resource specification.")
+        # sfc, sfc_instance = tosca_tpl.tpl["metadata"]["sfc"], tosca_tpl.tpl["metadata"]["sfci"]
+        sfc = tosca_tpl.tpl["metadata"]["servicefunctionchain"]
+        sfc_instance = "{0}_1".format(sfc)
 
-        # check that the alerts specification file was sent
-        if not hasattr(alert_spec_reference, "file") or not hasattr(alert_spec_reference, "filename"):
-            raise HTTPBadRequest("Request to this API endpoint must include a (YAML) file input referenced as 'alert-spec' representing the TOSCA Alert Specification.")
+        alerts = []
+        handlers = []
 
-        # extract alert specification file and filename
-        alerts_input_filename = alert_spec_reference.filename
-        alerts_input_file = alert_spec_reference.file
+        for policy in tosca_tpl.policies:
+            for trigger in policy.triggers:
+                event_id = trigger.name
+                policy_id = policy.name
 
-        if not alerts_input_filename.lower().endswith('.yaml'):
-            raise HTTPBadRequest("Request to this API endpoint must include a (YAML) file input referenced as 'alert-spec' representing the TOSCA Alerts Specification.")
+                # generate topic and alert identifiers
+                topic_id = "{0}\n{1}\n{2}\n{3}".format(sfc, sfc_instance, policy_id, event_id)  # scoped per service function chain instance (no two sfc instances report to the same topic)
+                topic_id = self.get_hash(topic_id)
+                task_id = topic_id
 
-        # parse the alerts specification file
-        try:
-            alerts_yaml_content = load(alerts_input_file)
-            adjust_tosca_definitions_import(alerts_yaml_content)
-        except YAMLError as err:
-            log.error("Couldn't parse user request file {0} to yaml format due to error: {1}".format(alerts_input_filename, err))
-            log.error("Invalid content is: {0}".format(alerts_input_file.read()))
-            raise HTTPBadRequest("Request alert specification file could not be parsed as valid YAML document.")
+                # delete alert task
+                kapacitor_api_task_url = "http://{0}:{1}/kapacitor/v1/tasks/{2}".format(kapacitor_host, kapacitor_port, task_id)
+                if get(kapacitor_api_task_url).status_code == 200:
+                    delete(kapacitor_api_task_url)
+                    alerts.append({"policy": policy_id, "trigger": event_id})
 
-        try:
-            tosca_tpl = ToscaTemplate(yaml_dict_tpl=alerts_yaml_content)
-        except Exception as e:
-            log.error(e)
-            raise HTTPBadRequest("Request alert specification file could not be parsed as a valid TOSCA document.")
+                # get all alert handlers
+                kapacitor_api_topic_handlers_url = "http://{0}:{1}/kapacitor/v1/alerts/topics/{2}/handlers".format(kapacitor_host, kapacitor_port, topic_id)
+                http_response = get(kapacitor_api_topic_handlers_url)
+                if http_response.status_code != 200:
+                    continue  # if the topic doesn't exist, continue with the other triggers
 
-        valid_alert_spec = validate_clmc_alerts_specification(tosca_tpl.tpl)
-        if not valid_alert_spec:
-            raise HTTPBadRequest("Request alert specification file could not be validated as a CLMC TOSCA alerts specification document.")
+                # delete alert handlers
+                http_handlers = http_response.json()['handlers']
+                for handler in http_handlers:
+
+                    original_http_handler_url = handler["options"]["url"]
+                    if original_http_handler_url.startswith(fqdn_prefix):
+                        original_http_handler_url = SFEMC
+
+                    handler_id = handler["id"]
+
+                    kapacitor_api_handler_url = "http://{0}:{1}/kapacitor/v1/alerts/topics/{2}/handlers/{3}".format(kapacitor_host, kapacitor_port, topic_id, handler_id)
+                    delete(kapacitor_api_handler_url)
+                    handlers.append({"policy": policy_id, "trigger": event_id, "handler": original_http_handler_url})
+
+                # delete alert topic
+                kapacitor_api_topic_url = "http://{0}:{1}/kapacitor/v1/alerts/topics/{2}".format(kapacitor_host, kapacitor_port, topic_id)
+                delete(kapacitor_api_topic_url)
+
+        return {"deleted_alerts": alerts, "deleted_handlers": handlers}
+
+    @view_config(route_name='alerts_configuration', request_method='PUT')
+    def put_alerts_specification(self):
+        """
+        The view for receiving and configuring alerts based on the TOSCA alerts specification document. This endpoint must also receive the TOSCA resources specification document for validation.
+        A PUT request will update any existing alerts and handlers that were registered before.
+
+        :return: a dictionary with a msg and optional keys for errors encountered while interacting with Kapacitor
+
+        :raises HTTPBadRequest: if the request doesn't contain a (YAML) file input referenced as alert-spec representing the TOSCA Alerts Specification
+        :raises HTTPBadRequest: if the request doesn't contain a (YAML) file input referenced as resource-spec representing the TOSCA Resources Specification
+        """
+
+        return self._create_alerts_specification(patch_duplicates=True)
+
+    @view_config(route_name='alerts_configuration', request_method='POST')
+    def post_alerts_specification(self):
+        """
+        The view for receiving and configuring alerts based on the TOSCA alerts specification document. This endpoint must also receive the TOSCA resources specification document for validation.
+        A POST request will not try to update existing alerts and handlers and will simply return the error message from Kapacitor.
+
+        :return: a dictionary with a msg and optional keys for errors encountered while interacting with Kapacitor
+
+        :raises HTTPBadRequest: if the request doesn't contain a (YAML) file input referenced as alert-spec representing the TOSCA Alerts Specification
+        :raises HTTPBadRequest: if the request doesn't contain a (YAML) file input referenced as resource-spec representing the TOSCA Resources Specification
+        """
+
+        return self._create_alerts_specification(patch_duplicates=False)
+
+    def _create_alerts_specification(self, patch_duplicates=False):
+        """
+        The base method for creating (and optionally updating) alerts from an alerts specification document.
+
+        :param patch_duplicates: (defaults to False) if set to True, existing resources will be overwritten
+
+        :return: a dictionary with a msg and optional keys for errors encountered while interacting with Kapacitor
+        """
+
+        kapacitor_host, kapacitor_port = self.request.registry.settings['kapacitor_host'], self.request.registry.settings['kapacitor_port']
+
+        if patch_duplicates:  # implying a PUT request
+            alert_spec_reference = self.request.params.get('alert-spec')
+            resource_spec_reference = self.request.params.get('resource-spec')
+        else:  # implying a POST request
+            alert_spec_reference = self.request.POST.get('alert-spec')
+            resource_spec_reference = self.request.POST.get('resource-spec')
 
+        # parse the resource specification file and extract the required information
+        resource_spec_sfc, resource_spec_sfc_instance, resource_spec_policy_triggers = self._parse_resource_spec(resource_spec_reference)
+
+        # parse the alert specification file into a tosca template (including validation)
+        tosca_tpl = self._parse_alert_spec(alert_spec_reference)
+
+        # extract the information needed from the alert spec. for validation between the two TOSCA documents
         alert_spec_policy_triggers = get_alert_spec_policy_triggers(tosca_tpl)
-        # TODO next release - uncomment
+
         # sfc, sfc_instance = tosca_tpl.tpl["metadata"]["sfc"], tosca_tpl.tpl["metadata"]["sfci"]
         sfc = tosca_tpl.tpl["metadata"]["servicefunctionchain"]
         sfc_instance = "{0}_1".format(sfc)
 
         # do validation between the two TOSCA documents
-        self._compare_alert_and_resource_spec(sfc, sfc_instance, alert_spec_policy_triggers, resource_spec_sfc, resource_spec_sfc_i, resource_spec_policy_triggers)
+        self._compare_alert_and_resource_spec(sfc, sfc_instance, alert_spec_policy_triggers, resource_spec_sfc, resource_spec_sfc_instance, resource_spec_policy_triggers)
 
         db = sfc  # database per service function chain, named after the service function chain ID
 
-        # two lists to keep track of any errors while interacting with the Kapacitor HTTP API
-        alert_tasks_errors = []
-        alert_handlers_errors = []
-
-        # iterate through every policy and extract all triggers of the given policy
-        self._config_kapacitor_alerts(tosca_tpl, sfc, sfc_instance, db, kapacitor_host, kapacitor_port, resource_spec_policy_triggers, alert_tasks_errors, alert_handlers_errors)
+        # iterate through every policy and extract all triggers of the given policy - the returned lists of errors will be empty if no errors were
+        # encountered while interacting with the Kapacitor HTTP API; the patch_duplicates flag controls whether existing resources are recreated
+        alert_tasks_errors, alert_handlers_errors = self._config_kapacitor_alerts(tosca_tpl, sfc, sfc_instance, db,
+                                                                                  kapacitor_host, kapacitor_port, resource_spec_policy_triggers,
+                                                                                  patch_duplicates=patch_duplicates)
 
         return_msg = {"msg": "Alerts specification has been successfully validated and forwarded to Kapacitor", "service_function_chain_id": sfc,
                       "service_function_chain_instance_id": sfc_instance}
@@ -205,7 +324,7 @@ class AlertsConfigurationAPI(object):
         if len(missing_policy_triggers) > 0:
             raise HTTPBadRequest("Couldn't match the following policy triggers from the alerts specification with triggers defined in the resource specification: {0}".format(missing_policy_triggers))
 
-    def _config_kapacitor_alerts(self, tosca_tpl, sfc, sfc_instance, db, kapacitor_host, kapacitor_port, resource_spec_policy_triggers, alert_tasks_errors, alert_handlers_errors):
+    def _config_kapacitor_alerts(self, tosca_tpl, sfc, sfc_instance, db, kapacitor_host, kapacitor_port, resource_spec_policy_triggers, patch_duplicates=False):
         """
         Configures the alerts task and alert handlers within Kapacitor.
 
@@ -216,12 +335,17 @@ class AlertsConfigurationAPI(object):
         :param kapacitor_host: default host is localhost (CLMC service running on the same machine as Kapacitor)
         :param kapacitor_port: default value to use is 9092
         :param resource_spec_policy_triggers: the extracted policy-trigger strings from the resource specification
-        :param alert_tasks_errors: the list for tracking errors while interacting with Kapacitor tasks
-        :param alert_handlers_errors: the list for tracking errors while interacting with Kapacitor alert handlers
+        :param patch_duplicates: (defaults to False) if set to True, any duplication errors will be handled by first deleting the existing resource and then creating it again
 
-        :return: the list of successfully registered event identifiers
+        :return: two lists - the errors encountered while interacting with Kapacitor tasks and the errors encountered while interacting with Kapacitor alert handlers
         """
 
+        kapacitor_api_tasks_url = "http://{0}:{1}/kapacitor/v1/tasks".format(kapacitor_host, kapacitor_port)
+
+        # two lists to keep track of any errors while interacting with the Kapacitor HTTP API
+        alert_tasks_errors = []
+        alert_handlers_errors = []
+
         for policy in tosca_tpl.policies:
             for trigger in policy.triggers:
                 event_id = trigger.name
@@ -255,11 +379,11 @@ class AlertsConfigurationAPI(object):
                 # generate topic and alert identifiers
                 topic_id = "{0}\n{1}\n{2}\n{3}".format(sfc, sfc_instance, policy_id, event_id)  # scoped per service function chain instance (no two sfc instances report to the same topic)
                 topic_id = self.get_hash(topic_id)
-                alert_id = topic_id
+                task_id = topic_id
 
                 # check whether the template needs to be a stream or a batch
                 if event_type in self.DUAL_VERSION_TEMPLATES:
-                    if alert_period_integer <= self.STREAM_PERIOD_LIMIT:
+                    if alert_period_integer < self.STREAM_PERIOD_LIMIT:
                         template_id = "{0}-stream-template".format(event_type)
                         event_type = "{0}_stream".format(event_type)
                     else:
@@ -273,9 +397,8 @@ class AlertsConfigurationAPI(object):
                                                                             alert_period=alert_period, topic_id=topic_id, event_id=event_id, where_clause=where_clause)
 
                 # create and activate alert task through the kapacitor HTTP API
-                kapacitor_api_tasks_url = "http://{0}:{1}/kapacitor/v1/tasks".format(kapacitor_host, kapacitor_port)
                 kapacitor_http_request_body = {
-                    "id": alert_id,
+                    "id": task_id,
                     "template-id": template_id,
                     "dbrps": [{"db": db, "rp": "autogen"}],
                     "status": "enabled",
@@ -289,22 +412,38 @@ class AlertsConfigurationAPI(object):
                 log.info(response_content, response.status_code)
 
                 # track all reported errors
-                if response_content.get("error", "") != "":
-                    alert_tasks_errors.append({
-                        "policy": policy_id,
-                        "trigger": event_id,
-                        "error": response_content.get("error")
-                    })
+                if response_content.get("error", "") != "":  # Kapacitor always returns an error key which is set to an empty string if successful
+                    capture_error = True
+                    # check whether a Kapacitor task with the given identifier already exists; if so, delete it and create it again
+                    kapacitor_api_tasks_instance_url = "{0}/{1}".format(kapacitor_api_tasks_url, task_id)
+                    if patch_duplicates and get(kapacitor_api_tasks_instance_url).status_code == 200:
+                        # this will only happen if the patch_duplicates flag is set to True
+                        delete(kapacitor_api_tasks_instance_url)
+                        # an alternative is to use the PATCH API endpoint instead of deleting and recreating the task; however,
+                        # the PATCH method requires the task to be disabled and re-enabled for changes to take effect (more HTTP requests)
+                        response = post(kapacitor_api_tasks_url, json=kapacitor_http_request_body)
+                        response_content = response.json()
+                        capture_error = response_content.get("error", "") != ""
+
+                    # if the error was not handled by patching an existing Kapacitor task, then add it in the list of errors
+                    if capture_error:
+                        alert_tasks_errors.append({
+                            "policy": policy_id,
+                            "trigger": event_id,
+                            "error": response_content["error"]
+                        })
 
                 # extract http handlers
                 http_handlers = trigger.trigger_tpl["action"]["implementation"]
 
                 # subscribe all http handlers to the created topic
                 self._config_kapacitor_alert_handlers(kapacitor_host, kapacitor_port, sfc, sfc_instance, policy_id, resource_spec_trigger_id, topic_id, event_id,
-                                                      http_handlers, alert_handlers_errors)
+                                                      http_handlers, alert_handlers_errors, patch_duplicates=patch_duplicates)
+
+        return alert_tasks_errors, alert_handlers_errors
 
     def _config_kapacitor_alert_handlers(self, kapacitor_host, kapacitor_port, sfc, sfc_i, policy_id, trigger_id,
-                                         topic_id, event_id, http_handlers, alert_handlers_errors):
+                                         topic_id, event_id, http_handlers, alert_handlers_errors, patch_duplicates=False):
         """
         Handles the configuration of HTTP Post alert handlers.
 
@@ -318,9 +457,11 @@ class AlertsConfigurationAPI(object):
         :param event_id: name of trigger
         :param http_handlers: list of handlers to subscribe
         :param alert_handlers_errors: the list for tracking errors while interacting with Kapacitor alert handlers
+        :param patch_duplicates: (defaults to False) if set to True, any duplication errors will be handled by first deleting the existing resource and then creating it again
         """
 
         kapacitor_api_handlers_url = "http://{0}:{1}/kapacitor/v1/alerts/topics/{2}/handlers".format(kapacitor_host, kapacitor_port, topic_id)
+
         for http_handler_url in http_handlers:
 
             # check for flame_sfemc entry, if found replace with sfemc FQDN
@@ -337,12 +478,90 @@ class AlertsConfigurationAPI(object):
             log.info(response_content, response.status_code)
 
             if response_content.get("error", "") != "":
-                alert_handlers_errors.append({
-                    "policy": policy_id,
-                    "trigger": event_id,
-                    "handler": http_handler_url,
-                    "error": response_content.get("error")
-                })
+                capture_error = True
+                # check whether a Kapacitor handler with the given identifier already exists; if so, delete it and create it again
+                kapacitor_api_handlers_instance_url = "{0}/{1}".format(kapacitor_api_handlers_url, handler_id)
+                if patch_duplicates and get(kapacitor_api_handlers_instance_url).status_code == 200:
+                    # this will only happen if the patch_duplicates flag is set to True
+                    delete(kapacitor_api_handlers_instance_url)
+                    response = post(kapacitor_api_handlers_url, json=kapacitor_http_request_body)
+                    response_content = response.json()
+                    capture_error = response_content.get("error", "") != ""
+
+                # if the error was not handled by patching an existing Kapacitor handler, then add it in the list of errors
+                if capture_error:
+                    alert_handlers_errors.append({
+                        "policy": policy_id,
+                        "trigger": event_id,
+                        "handler": http_handler_url,
+                        "error": response_content["error"]
+                    })
+
+    def _parse_alert_spec(self, alert_spec_reference):
+        """
+        Parses an alert specification file into a TOSCA template and validates it against the CLMC alerts specification schema.
+
+        :param alert_spec_reference: the alert specification file received in the request
+
+        :return: the parsed tosca template
+        """
+
+        # check that the alerts specification file was sent correctly
+        if not hasattr(alert_spec_reference, "file") or not hasattr(alert_spec_reference, "filename"):
+            raise HTTPBadRequest("Request to this API endpoint must include a (YAML) file input referenced as 'alert-spec' representing the TOSCA Alerts Specification.")
+
+        # extract alert specification file and filename
+        alerts_input_filename = alert_spec_reference.filename
+        alerts_input_file = alert_spec_reference.file
+
+        # allow only .yaml and .yml extensions for filename
+        if not alerts_input_filename.lower().endswith('.yaml') and not alerts_input_filename.lower().endswith('.yml'):
+            raise HTTPBadRequest("Request to this API endpoint must include a (YAML) file input referenced as 'alert-spec' representing the TOSCA Alerts Specification.")
+
+        # parse the alerts specification file
+        try:
+            alerts_yaml_content = load(alerts_input_file)
+            adjust_tosca_definitions_import(alerts_yaml_content)
+        except YAMLError as err:
+            log.error("Couldn't parse user request file {0} to yaml format due to error: {1}".format(alerts_input_filename, err))
+            log.error("Invalid content is: {0}".format(alerts_input_file.read()))
+            raise HTTPBadRequest("Request alert specification file could not be parsed as valid YAML document.")
+
+        # convert to tosca template
+        try:
+            tosca_tpl = ToscaTemplate(yaml_dict_tpl=alerts_yaml_content)
+        except Exception as e:
+            log.error(e)
+            raise HTTPBadRequest("Request alert specification file could not be parsed as a valid TOSCA document.")
+
+        # validate against CLMC spec.
+        valid_alert_spec = validate_clmc_alerts_specification(tosca_tpl.tpl)
+        if not valid_alert_spec:
+            raise HTTPBadRequest("Request alert specification file could not be validated as a CLMC TOSCA alerts specification document.")
+
+        return tosca_tpl
+
+    def _parse_resource_spec(self, resource_spec_reference):
+        """
+        Parses a resource specification file into a TOSCA template and extracts the information needed to compare it with the CLMC alerts specification.
+
+        :param resource_spec_reference: the resource specification file received in the request
+
+        :return: sfc ID, sfc instance ID, list of policy-trigger identifiers found in the resource spec.
+        """
+
+        # check that the resource specification file was sent
+        if not hasattr(resource_spec_reference, "file") or not hasattr(resource_spec_reference, "filename"):
+            raise HTTPBadRequest("Request to this API endpoint must include a (YAML) file input referenced as 'resource-spec' representing the TOSCA Resource Specification.")
+
+        # extract the required information from the resource specification and return an error if an exception was encountered
+        try:
+            resource_spec_sfc, resource_spec_sfc_instance, resource_spec_policy_triggers = get_resource_spec_policy_triggers(resource_spec_reference)
+        except Exception as e:
+            log.error("Couldn't extract resource specification event IDs due to error: {0}".format(e))
+            raise HTTPBadRequest("Couldn't extract resource specification event IDs - invalid TOSCA resource specification.")
+
+        return resource_spec_sfc, resource_spec_sfc_instance, resource_spec_policy_triggers
 
     @staticmethod
     def get_hash(message):
@@ -351,7 +570,7 @@ class AlertsConfigurationAPI(object):
 
         :param message: the message to hash
 
-        :return: the value of the has
+        :return: the value of the hash
         """
 
         byte_str = bytes(message, encoding="utf-8")
diff --git a/src/test/clmctest/alerts/conftest.py b/src/test/clmctest/alerts/conftest.py
index 02d004df047f89dbbe44c2fd3e3d32830a14b1ea..ae23443fc9443a668b34fd7fabe83566b65cef71 100644
--- a/src/test/clmctest/alerts/conftest.py
+++ b/src/test/clmctest/alerts/conftest.py
@@ -31,13 +31,9 @@ from shutil import rmtree
 from signal import SIGKILL
 from json import load
 from pkg_resources import resource_filename
-from requests import delete, get
 from clmctest.alerts.alert_handler_server import LOG_TEST_FOLDER_PATH
 
 
-KAPACITOR_PORT = 9092
-
-
 @fixture(scope="module")
 def rspec_config():
     """
@@ -55,23 +51,11 @@ def rspec_config():
 
 
 @fixture(scope="module")
-def set_up_tear_down_fixture(rspec_config):
+def set_up_tear_down_fixture():
     """
     Set up/tear down fixture for the alerts integration test.
     """
 
-    global KAPACITOR_PORT
-
-    kapacitor_host = None
-    for host in rspec_config:
-        if host["name"] == "clmc-service":
-            kapacitor_host = host["ip_address"]
-            break
-
-    assert kapacitor_host is not None
-
-    kapacitor_url = "http://{0}:{1}".format(kapacitor_host, KAPACITOR_PORT)
-
     if exists(LOG_TEST_FOLDER_PATH):
         rmtree(LOG_TEST_FOLDER_PATH)  # clean out the log directory
     makedirs(LOG_TEST_FOLDER_PATH)  # create the log directory
@@ -89,19 +73,3 @@ def set_up_tear_down_fixture(rspec_config):
     kill(process_id, SIGKILL)
     if exists(LOG_TEST_FOLDER_PATH):
         rmtree(LOG_TEST_FOLDER_PATH)
-
-    print("Deleting Kapacitor tasks, topics and handlers that were created for this test...")
-    # get all tasks from kapacitor (that were created in this test) and delete them
-    kapacitor_tasks = get("{0}/kapacitor/v1/tasks".format(kapacitor_url)).json()["tasks"]
-    kapacitor_task_links = [task["link"]["href"] for task in kapacitor_tasks]
-    for task_link in kapacitor_task_links:
-        delete("{0}{1}".format(kapacitor_url, task_link))
-
-    # get all topics and handlers from kapacitor (that were created in this test) and delete them
-    kapacitor_topics = get("{0}/kapacitor/v1/alerts/topics".format(kapacitor_url)).json()["topics"]
-    for topic in kapacitor_topics:
-        topic_handlers = get("{0}{1}".format(kapacitor_url, topic["handlers-link"]["href"])).json()["handlers"]
-        for handler in topic_handlers:
-            delete("{0}{1}".format(kapacitor_url, handler["link"]["href"]))
-
-        delete("{0}{1}".format(kapacitor_url, topic["link"]["href"]))
diff --git a/src/test/clmctest/alerts/test_alerts.py b/src/test/clmctest/alerts/test_alerts.py
index 8f5ca464693428791094994753fb737d200e5b93..60c0d598ff13c486dc71e2bc4c5216a9ece8ddaa 100644
--- a/src/test/clmctest/alerts/test_alerts.py
+++ b/src/test/clmctest/alerts/test_alerts.py
@@ -21,9 +21,9 @@
 ##      Created Date :          22-08-2018
 ##      Created for Project :   FLAME
 """
-
+import datetime
 from time import sleep, strptime
-from requests import post, get
+from requests import post, get, delete, put
 from os import listdir
 from os.path import join, dirname
 from json import load
@@ -84,11 +84,14 @@ class TestAlerts(object):
     def test_alert_triggers(self, rspec_config, set_up_tear_down_fixture):
         """
         Test is implemented using the following steps:
-            * Send clmc service a TOSCA alert spec. file
-            * Wait 15 seconds for Kapacitor to configure and start executing the defined tasks
+            * Send a POST request to the clmc service with the TOSCA alert spec. and resource spec. files
+            * Check that the registered alerts can be fetched with a GET request
+            * Wait 10 seconds for Kapacitor to configure and start executing the defined tasks
             * Send some test requests to nginx to increase the load
-            * Wait 20 seconds for alerts to be triggered
+            * Wait 15 seconds for alerts to be triggered
             * Check that 4 log files have been created - one for each alert defined in the alert spec.
+            * Send a DELETE request to the clmc service with the TOSCA alert spec. file
+            * Check that the returned lists of deleted handlers and alerts are correct
 
         :param rspec_config: fixture from conftest.py
         """
@@ -105,6 +108,7 @@ class TestAlerts(object):
             if clmc_service_host is not None and nginx_host is not None:
                 break
 
+        # create the alerts with a POST request
         print("Sending alerts specification to clmc service...")
         alerts_spec = join(dirname(__file__), "alerts_test_config.yaml")
         resources_spec = join(dirname(__file__), "resources_test_config.yaml")
@@ -119,8 +123,50 @@ class TestAlerts(object):
         assert "triggers_specification_errors" not in clmc_service_response, "Unexpected error was returned for triggers specification"
         assert "triggers_action_errors" not in clmc_service_response, "Unexpected error was returned for handlers specification"
 
+        sfc, sfc_instance = "MS_Template_1", "MS_Template_1_1"
+        assert (sfc, sfc_instance) == (clmc_service_response["service_function_chain_id"], clmc_service_response["service_function_chain_instance_id"])
         print("Alert spec sent successfully")
 
+        # check that the alerts can be fetched with a GET request
+        print("Validate that the alerts were registered and can be fetched with a GET request.")
+        response = get("http://{0}/clmc-service/alerts/{1}/{2}".format(clmc_service_host, sfc, sfc_instance))
+        assert response.status_code == 200
+        clmc_service_response = response.json()
+        clmc_service_response = sorted(clmc_service_response, key=lambda x: x["trigger"])  # sort by trigger so that the response can be compared to what's expected
+
+        # sort the handlers of returned alerts to ensure comparison order is correct
+        for alert in clmc_service_response:
+            alert["handlers"] = sorted(alert["handlers"])
+
+        # compare actual response with expected response
+        assert clmc_service_response == [
+            {"policy": "scale_nginx_policy", "trigger": "high_requests", "task_identifier": "46fb8800c8a5eeeb04b090d838d475df574a2e6d854b5d678fc981c096eb6c1b",
+             "handlers": ["http://172.40.231.200:9999/"],
+             "topic_identifier": "46fb8800c8a5eeeb04b090d838d475df574a2e6d854b5d678fc981c096eb6c1b",
+             "task_api_endpoint": "/kapacitor/v1/tasks/46fb8800c8a5eeeb04b090d838d475df574a2e6d854b5d678fc981c096eb6c1b",
+             "topic_api_endpoint": "/kapacitor/v1/alerts/topics/46fb8800c8a5eeeb04b090d838d475df574a2e6d854b5d678fc981c096eb6c1b",
+             "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/46fb8800c8a5eeeb04b090d838d475df574a2e6d854b5d678fc981c096eb6c1b/handlers"},
+            {"policy": "scale_nginx_policy", "trigger": "increase_in_active_requests", "task_identifier": "7a9867f9270dba6635ac3760a3b70bc929f5bd0f3bf582e45d27fbd437f528ca",
+             "handlers": ["flame_sfemc", "http://172.40.231.200:9999/"],
+             "topic_identifier": "7a9867f9270dba6635ac3760a3b70bc929f5bd0f3bf582e45d27fbd437f528ca",
+             "task_api_endpoint": "/kapacitor/v1/tasks/7a9867f9270dba6635ac3760a3b70bc929f5bd0f3bf582e45d27fbd437f528ca",
+             "topic_api_endpoint": "/kapacitor/v1/alerts/topics/7a9867f9270dba6635ac3760a3b70bc929f5bd0f3bf582e45d27fbd437f528ca",
+             "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/7a9867f9270dba6635ac3760a3b70bc929f5bd0f3bf582e45d27fbd437f528ca/handlers"},
+            {"policy": "scale_nginx_policy", "trigger": "increase_in_running_processes", "task_identifier": "f5edaeb27fb847116be749c3815d240cbf0d7ba79aee1959daf0b3445a70f2c8",
+             "handlers": ["flame_sfemc", "http://172.40.231.200:9999/"],
+             "topic_identifier": "f5edaeb27fb847116be749c3815d240cbf0d7ba79aee1959daf0b3445a70f2c8",
+             "task_api_endpoint": "/kapacitor/v1/tasks/f5edaeb27fb847116be749c3815d240cbf0d7ba79aee1959daf0b3445a70f2c8",
+             "topic_api_endpoint": "/kapacitor/v1/alerts/topics/f5edaeb27fb847116be749c3815d240cbf0d7ba79aee1959daf0b3445a70f2c8",
+             "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/f5edaeb27fb847116be749c3815d240cbf0d7ba79aee1959daf0b3445a70f2c8/handlers"},
+            {"policy": "deadman_policy", "trigger": "no_measurements", "task_identifier": "f7dab6fd53001c812d44533d3bbb6ef45f0d1d39b9441bc3c60402ebda85d320",
+             "handlers": ["flame_sfemc", "http://172.40.231.200:9999/"],
+             "topic_identifier": "f7dab6fd53001c812d44533d3bbb6ef45f0d1d39b9441bc3c60402ebda85d320",
+             "task_api_endpoint": "/kapacitor/v1/tasks/f7dab6fd53001c812d44533d3bbb6ef45f0d1d39b9441bc3c60402ebda85d320",
+             "topic_api_endpoint": "/kapacitor/v1/alerts/topics/f7dab6fd53001c812d44533d3bbb6ef45f0d1d39b9441bc3c60402ebda85d320",
+             "topic_handlers_api_endpoint": "/kapacitor/v1/alerts/topics/f7dab6fd53001c812d44533d3bbb6ef45f0d1d39b9441bc3c60402ebda85d320/handlers"}
+        ], "Incorrect response for GET alerts request"
+        print("Alert spec validated successfully")
+
         print("Wait 10 seconds for Kapacitor stream/batch tasks to start working...")
         sleep(10)
 
@@ -136,6 +182,7 @@ class TestAlerts(object):
         alert_logs = listdir(LOG_TEST_FOLDER_PATH)
         assert len(alert_logs) == 4, "4 log files must have been created - one for each alert defined in the specification."
 
+        # check the content of each log file
         for alert_log in alert_logs:
             alert_log_path = join(LOG_TEST_FOLDER_PATH, alert_log)
 
@@ -149,3 +196,116 @@ class TestAlerts(object):
                 valid = False
 
             assert valid, "Alert log content is invalid - {0}".format(alert_log_path)
+
+        # delete the alerts with a DELETE request
+        with open(alerts_spec, 'rb') as alerts:
+            files = {'alert-spec': alerts}
+            response = delete("http://{0}/clmc-service/alerts".format(clmc_service_host), files=files)
+
+        assert response.status_code == 200, "Incorrect status code returned after deleting the alert specification"
+
+        json_response = response.json()
+        # sort by trigger to ensure comparison order is correct
+        assert sorted(json_response["deleted_alerts"], key=lambda x: x['trigger']) == [{"policy": "scale_nginx_policy", "trigger": "high_requests"}, {"policy": "scale_nginx_policy", "trigger": "increase_in_active_requests"},
+                                                                                       {"policy": "scale_nginx_policy", "trigger": "increase_in_running_processes"}, {"policy": "deadman_policy", "trigger": "no_measurements"}], \
+            "Incorrect list of deleted alerts"
+        # sort by handler and trigger to ensure comparison order is correct
+        assert sorted(json_response["deleted_handlers"], key=lambda x: (x['handler'], x['trigger'])) == [{"policy": "scale_nginx_policy", "trigger": "increase_in_active_requests", "handler": "flame_sfemc"},
+                                                                                                         {"policy": "scale_nginx_policy", "trigger": "increase_in_running_processes", "handler": "flame_sfemc"},
+                                                                                                         {"policy": "deadman_policy", "trigger": "no_measurements", "handler": "flame_sfemc"},
+                                                                                                         {"policy": "scale_nginx_policy", "trigger": "high_requests", "handler": "http://172.40.231.200:9999/"},
+                                                                                                         {"policy": "scale_nginx_policy", "trigger": "increase_in_active_requests", "handler": "http://172.40.231.200:9999/"},
+                                                                                                         {"policy": "scale_nginx_policy", "trigger": "increase_in_running_processes", "handler": "http://172.40.231.200:9999/"},
+                                                                                                         {"policy": "deadman_policy", "trigger": "no_measurements", "handler": "http://172.40.231.200:9999/"}], \
+            "Incorrect list of deleted handlers"
+
+    def test_alerts_update_request(self, rspec_config):
+        """
+        Test is implemented using the following steps:
+            * Send a POST request to the clmc service with the TOSCA alert spec. and resource spec. files
+            * Send a PUT request to the clmc service with the same TOSCA alert spec. and resource spec. files
+            * Check that the alerts have a "created" timestamp that is later than the timestamp of the alerts during the POST request,
+                implying that the alerts were re-created during the PUT request
+
+        :param rspec_config: fixture from conftest.py
+        """
+
+        clmc_service_host = None
+        for host in rspec_config:
+            if host["name"] == "clmc-service":
+                clmc_service_host = host["ip_address"]
+                break
+
+        # create the alerts with a POST request
+        print("Sending alerts specification to clmc service...")
+        alerts_spec = join(dirname(__file__), "alerts_test_config.yaml")
+        resources_spec = join(dirname(__file__), "resources_test_config.yaml")
+
+        with open(alerts_spec, 'rb') as alerts:
+            with open(resources_spec, 'rb') as resources:
+                files = {'alert-spec': alerts, 'resource-spec': resources}
+                response = post("http://{0}/clmc-service/alerts".format(clmc_service_host), files=files)
+        assert response.status_code == 200
+        clmc_service_response = response.json()
+        assert "triggers_specification_errors" not in clmc_service_response, "Unexpected error was returned for triggers specification"
+        assert "triggers_action_errors" not in clmc_service_response, "Unexpected error was returned for handlers specification"
+        sfc, sfc_instance = "MS_Template_1", "MS_Template_1_1"
+        assert (sfc, sfc_instance) == (clmc_service_response["service_function_chain_id"], clmc_service_response["service_function_chain_instance_id"])
+        print("Alert spec sent successfully")
+
+        # find the latest timestamp of the registered alerts
+        max_post_timestamp = 0
+        tasks = get("http://{0}/kapacitor/v1/tasks".format(clmc_service_host)).json()["tasks"]
+        for timestamp in tasks_timestamps(tasks, sfc, sfc_instance):
+            max_post_timestamp = max(max_post_timestamp, timestamp)
+
+        delay = 2  # seconds
+        print("Sleeping {0} seconds to ensure a difference between the timestamps when creating the alerts and when updating them...".format(delay))
+        sleep(delay)
+
+        # update the alerts with a PUT request and check that the "created" metadata is updated, implying that the alerts were re-created
+        print("Sending alerts specification to clmc service for updating...")
+        with open(alerts_spec, 'rb') as alerts:
+            with open(resources_spec, 'rb') as resources:
+                files = {'alert-spec': alerts, 'resource-spec': resources}
+                response = put("http://{0}/clmc-service/alerts".format(clmc_service_host), files=files)
+        assert response.status_code == 200
+        clmc_service_response = response.json()
+        assert "triggers_specification_errors" not in clmc_service_response, "Unexpected error was returned for triggers specification"
+        assert "triggers_action_errors" not in clmc_service_response, "Unexpected error was returned for handlers specification"
+        sfc, sfc_instance = "MS_Template_1", "MS_Template_1_1"
+        assert (sfc, sfc_instance) == (clmc_service_response["service_function_chain_id"], clmc_service_response["service_function_chain_instance_id"])
+        print("Alert spec updated successfully")
+
+        # find the earliest timestamp of the updated alerts
+        min_put_timestamp = float("inf")
+        tasks = get("http://{0}/kapacitor/v1/tasks".format(clmc_service_host)).json()["tasks"]
+        for timestamp in tasks_timestamps(tasks, sfc, sfc_instance):
+            min_put_timestamp = min(min_put_timestamp, timestamp)
+
+        print("Latest timestamp during the POST request: {0}; earliest timestamp during the PUT request: {1}".format(max_post_timestamp, min_put_timestamp))
+        assert min_put_timestamp - max_post_timestamp >= delay, "There is an alert that wasn't updated properly with a PUT request"
+
+        # delete the alerts with a DELETE request
+        with open(alerts_spec, 'rb') as alerts:
+            files = {'alert-spec': alerts}
+            delete("http://{0}/clmc-service/alerts".format(clmc_service_host), files=files)
+
+
+def tasks_timestamps(all_tasks, sfc_id, sfc_instance_id):
+    """
+    Yields the "created" timestamps (as POSIX seconds) of the Kapacitor tasks related to the given SFC and SFC instance.
+
+    :param all_tasks: the full list of tasks from kapacitor
+    :param sfc_id: SFC identifier
+    :param sfc_instance_id: SFC instance identifier
+    """
+
+    for task in all_tasks:
+        # get the configured variables of this alert
+        task_config = task["vars"]
+        # if configured for this SFC instance
+        if task_config["sfc"]["value"] == sfc_id and task_config["sfci"]["value"] == sfc_instance_id:
+            created_datestr = task["created"][:26]  # strip the timezone suffix and truncate to microsecond precision, as strptime's %f parses at most 6 digits
+            task_created_timestamp = datetime.datetime.strptime(created_datestr, "%Y-%m-%dT%H:%M:%S.%f")
+            yield task_created_timestamp.timestamp()
diff --git a/src/test/clmctest/services/minio/install.sh b/src/test/clmctest/services/minio/install.sh
index cf07bc138a3b1a4150154bce78d439c0ba4b1bca..15abf0a8d7d71c678bd6b29bffa659e05a437197 100755
--- a/src/test/clmctest/services/minio/install.sh
+++ b/src/test/clmctest/services/minio/install.sh
@@ -47,6 +47,8 @@ cd /usr/local/go/src/github.com/minio
 git clone https://github.com/minio/minio
 
 cd minio
+# pin the minio version to a known working release
+git checkout tags/RELEASE.2019-03-27T22-35-21Z
 go install -v -ldflags "$(go run buildscripts/gen-ldflags.go)"
 
 # check minio configuration available