Distributed Hash Table (DHT) In Structured Peer-to-Peer SIP Overlay


A Distributed Hash Table (DHT) is used in a structured peer-to-peer overlay for resource location and discovery. In the SIP world, the resource can be the contact list (physical addresses) associated with an AOR (a SIP/SIPS URI that points to a user on a domain, i.e. the user's virtual address). The mappings between the AORs and the contact lists are distributed amongst the peers in the SIP overlay. In this way we get distributed registrar functionality for SIP.

There should be an interface for the DHT so it can be used without caring about the implementation and so it can easily integrate with the SIP server, allowing the server to work in P2P mode. Basically, the functions that need to be implemented are: Join (join the overlay), Leave (leave the overlay), Lookup (search for a resource using the AOR as a key), and Store (store the resource). For example, the lookup function works as follows (a small sketch in Python is shown after the list):

  • The AOR is used as a key to find the appropriate node/peer: the hash value of the key determines the responsible node/peer ID.
  • The selected node/peer then performs a normal hash table (HT) lookup to get the value (the resource, i.e. the contact list).
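Here is a minimal Python sketch of that interface. It is illustrative only: the peer list is a flat, static list and the responsible peer is chosen by a simple modulo over the hashed AOR, whereas a real overlay (e.g. chord-reload) routes the lookup and handles join/leave, replication and churn. All names are hypothetical.

import hashlib

class Peer:
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.table = {}                      # local hash table; could be backed by Berkeley DB

    def store(self, aor, contacts):          # Store: keep the {AOR: contact list} mapping locally
        self.table[aor] = contacts

    def lookup(self, aor):                   # normal hash-table lookup on the selected peer
        return self.table.get(aor)

class Overlay:
    def __init__(self, peers):
        self.peers = peers

    def _responsible_peer(self, aor):        # hash the AOR (the key) to find the node/peer
        key = int(hashlib.sha1(aor.encode()).hexdigest(), 16)
        return self.peers[key % len(self.peers)]

    def store(self, aor, contacts):
        self._responsible_peer(aor).store(aor, contacts)

    def lookup(self, aor):
        return self._responsible_peer(aor).lookup(aor)

overlay = Overlay([Peer(i) for i in range(4)])
overlay.store("sip:alice@example.com", ["sip:alice@192.0.2.10:5060"])
print(overlay.lookup("sip:alice@example.com"))   # ['sip:alice@192.0.2.10:5060']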

The mappings {AOR, Contact List} can be stored persistently on disk in a key-value database such as Berkeley DB.
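As a rough illustration (using Python's built-in dbm module as a stand-in for a Berkeley DB-style key-value store; the file name is arbitrary):

import dbm
import json

# Persist the {AOR: contact list} mappings in an on-disk key-value store.
with dbm.open("registrar.db", "c") as db:
    db["sip:alice@example.com"] = json.dumps(["sip:alice@192.0.2.10:5060"])
    contacts = json.loads(db["sip:alice@example.com"])
    print(contacts)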

The IETF standards body is working on P2P-SIP. There is a working group called p2psip where you can find the Internet-Drafts (working documents) related to P2P-SIP. For example, RELOAD (RFC 6940) is a signaling protocol for resource location and discovery. It specifies "chord-reload" as a mandatory-to-implement DHT algorithm. The purpose of this work is a distributed SIP registrar: RELOAD works with SIP to enable the distributed SIP solution. RELOAD can also be used by protocols other than SIP (e.g. a Constrained Application Protocol (CoAP) usage for RELOAD: Internet-Draft draft-jimenez-p2psip-coap-reload-10).

But why a P2P SIP mode? Because in P2P:

  • There is no single point of failure.
  • Lower cost: there is no service provider to pay, and nodes/peers can be deleted on low demand.
  • Capacity: new nodes/peers can be added on high demand.

The peer-to-peer SIP overlay is very suitable to be built on an OpenStack cloud, where the automated creation and deletion of nodes/peers is based on predefined policies. The peers are defined in an autoscaling group where they are scaled up and down based on the autoscaling policies. When a node is determined to leave the overlay based on the cloud policies, it starts the leave process, with the configuration delivered to it from the cloud (DELETE lifecycle hook). The triggers to add a server (i.e. join the overlay) or delete a server (i.e. leave the overlay) are controlled by the cloud orchestration service (the scaling policies). The policies themselves are based on CPU utilization, load, etc.
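As a rough, hypothetical sketch, such a setup could be expressed as an OpenStack Heat template along these lines (resource names, image and flavor are placeholders, and the alarms that actually trigger the policy are omitted):

heat_template_version: 2016-04-08

resources:
  sip_peer_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2
      max_size: 10
      resource:
        type: OS::Nova::Server
        properties:
          image: sip-peer-image      # placeholder image with the SIP/DHT peer preinstalled
          flavor: m1.small

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: sip_peer_group }
      scaling_adjustment: 1
      cooldown: 300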





Monitoring Agent for Rackspace’s Auto-Scaled Servers

Introduction

Ceilometer is used to collect measurements of different utilizations (memory, CPU, disk, etc.) from OpenStack components. It was originally designed for billing. It is not a complete monitoring solution because it does not allow service/application-level monitoring and it ignores the detailed metrics of the guest system.

Rackspace’s cloud, which is an OpenStack-based cloud solution, has its own monitoring service that allows tenants to keep their measured data, whether standard (e.g. CPU, memory, etc.) or custom (application/service-specific metrics), on the cloud and to create the notification plans they want.

In this article, I will show you how to automate the setup of the Rackspace monitoring agent on the virtual machine, so that when your auto scaling policy is triggered, you will have a new server with the monitoring agent installed and connected to the cloud. I use CentOS 7 for my virtual machine, which I will later use to create the image. The image will be used by the auto scaling service to create new servers. You need to have an account with the Rackspace cloud provider.

Rackspace Monitoring Agent Installation on CentOS 7

Install the package signing key

# curl https://monitoring.api.rackspacecloud.com/pki/agent/centos-7.asc > /tmp/signing-key.asc
# rpm --import /tmp/signing-key.asc

Add the agent repository to yum

  • Create and edit the file  “/etc/yum.repos.d/rackspace-cloud-monitoring.repo”

# vi /etc/yum.repos.d/rackspace-cloud-monitoring.repo

  • Add the configuration of the repository. In my case I have CentOS 7:

[rackspace]
name=Rackspace Monitoring
baseurl=http://stable.packages.cloudmonitoring.rackspace.com/centos-7-x86_64
enabled=1

Install the agent

# yum install rackspace-monitoring-agent

Now we have the agent installed on the current virtual machine.

Create a oneshot systemd or init service for the agent setup

The setup process configures the monitoring agent for the specific server, verifies the connectivity with the cloud, and associates the agent with the monitoring entity of the server. The script that you will write sets up the agent as follows:

     # rackspace-monitoring-agent --setup -U username -K apikey

Replace username and apikey with your own. You can get the API key from your account settings in the web control panel.

The script also needs to start the agent if it is not already running:

      # systemctl start rackspace-monitoring-agent

As this service will be executed on boot, you need to make sure the setup runs only when the server is first created (only once). So you need to write a check that examines whether the rackspace-monitoring-agent service is already running; if it is running, do NOT set it up again.
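As an illustration, such a oneshot service could look like the following (the unit name, script path and file locations are assumptions; adapt them to your environment). First the unit file, e.g. /etc/systemd/system/rackspace-agent-setup.service:

[Unit]
Description=One-time setup of the Rackspace monitoring agent
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/rackspace-agent-setup.sh

[Install]
WantedBy=multi-user.target

And the script /usr/local/bin/rackspace-agent-setup.sh (replace username and apikey as above):

#!/bin/bash
# Run the setup only if the agent is not already running, so that reboots of an
# already configured server do not register it again.
if ! systemctl is-active --quiet rackspace-monitoring-agent; then
    rackspace-monitoring-agent --setup -U username -K apikey
    systemctl start rackspace-monitoring-agent
fi

Enable the unit with "systemctl enable rackspace-agent-setup.service" so it runs on boot.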

Clean up after preparation

If you test the setup on the current virtual machine, you need to clean it up so that new servers created from the image will not inherit the old configuration of the server used to create the image. Simply stop the service and uninstall the agent, then install the agent again without running the setup. If you want your image to be independent of the account information, you need to perform the installation and the setup of the monitoring agent as a YAML template executed by the cloud. See the last link in the section “More Information” further down.
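For example, a possible cleanup sequence (the path of the server-specific configuration file generated by the setup is an assumption and may differ per agent version):

# systemctl stop rackspace-monitoring-agent
# yum remove rackspace-monitoring-agent
# rm -f /etc/rackspace-monitoring-agent.cfg
# yum install rackspace-monitoring-agent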

Server-Side Agent Configuration YAML File

For example, here is a YAML configuration file that creates a CPU check with alarms and binds the check to the notification plans of the auto scaling policies. Create the file “cpu.yaml” in the folder “/etc/rackspace-monitoring-agent.conf.d” with this content:

type: agent.cpu
label: CPU
period: 60
timeout: 10
alarms:
  cpu-usage-up:
    label: CPU Usage Up
    notification_plan_id: scale-up-notification-plan-id-here
    criteria: |
      if (metric['usage_average'] > 80) {
        return new AlarmStatus(CRITICAL);
      }
  cpu-usage-down:
    label: CPU Usage Down
    notification_plan_id: scale-down-notification-plan-id-here
    criteria: |
      if (metric['usage_average'] < 50) {
        return new AlarmStatus(WARNING);
      }

To get the IDs of your created notifications, execute this:

# curl -s -X GET https://monitoring.api.rackspacecloud.com/v1.0/$tenantID/notifications -H "X-Auth-Token: $token" -H "Accept: application/json" | python -m json.tool

Create a new image

Now you can go to the web control panel and create a new image that will be used in the auto scaling process.


Next

The next article will be about how to send custom measured data (custom metrics) to your cloud using the monitoring agent. This is called creating a custom plugin. I will show you how to create a custom check.

More Information