Tuning The Linux Connection Tracking System


Introduction

The connection tracking entry (conntrack entry) looks like this:

udp 17 170 src=192.168.1.2 dst=192.168.1.5 sport=137 dport=1025 src=192.168.1.5 dst=192.168.1.2 sport=1025 dport=137 [ASSURED] use=1

It contains two tuples: the original direction (the first src/dst/sport/dport group) and the reply direction (the second group).

To display all of the table’s entries, read “/proc/net/nf_conntrack”.

The conntrack entry is stored as two separate nodes (one for each direction) in different linked lists. Each linked list is called a bucket, and each bucket is an element of a hash table. A hash value is calculated from the received packet and used as an index into the hash table; the linked list of nodes in that bucket is then iterated to find the wanted node.

A long list is not recommended (iteration cost): the lookup cost depends on the length of the list and on the position of the wanted conntrack node in it.

A large hash table is recommended (near-constant time): the cost is essentially the hash calculation.

Linked List Size (Bucket Size) = Maximum Number of Nodes / Hash Table Size (Number of Buckets)

The hash table is stored in kernel memory. We can tune the hash table size (the number of buckets) and the maximum number of nodes. The required memory = conntrack node size * 2 * the number of simultaneous connections your system aims to handle. Example: at 304 bytes per conntrack entry, 1M connections require 304 * 2 MB (about 608 MB).

Such large values are not recommended if you have less than 1 GB of RAM.

Tuning the Values

If your server has to handle many connections and the conntrack table becomes full, the kernel logs the error “nf_conntrack: table full, dropping packet” and drops packets, which limits the number of simultaneous connections your system can handle.
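
To see how close you are to the limit, compare the current number of tracked connections with the maximum:

# cat /proc/sys/net/netfilter/nf_conntrack_count
# cat /proc/sys/net/netfilter/nf_conntrack_max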

To get the maximum number of nodes:

# /sbin/sysctl -a|grep -i nf_conntrack_max
net.nf_conntrack_max = 65536

To get the hash table size (number of buckets):

# /sbin/sysctl -a|grep -i nf_conntrack_buckets
net.netfilter.nf_conntrack_buckets = 16384

The bucket size (linked list length) = 65536 / 16384 = 4.

To temporarily change the hash table size to 2 * 16384 = 32768:

# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize

To permanently change the value:

# echo "net.netfilter.nf_conntrack_buckets = 32768" >> /etc/sysctl.conf
# /sbin/sysctl -p
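
Note: on some kernels “nf_conntrack_buckets” is read-only through sysctl. In that case the hash table size can instead be made persistent as a module option (the file name below is just a convention):

# echo "options nf_conntrack hashsize=32768" > /etc/modprobe.d/nf_conntrack.conf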

The same way for “nf_conntrack_max”:

Temporarily: # echo 131072 > /proc/sys/net/nf_conntrack_max

Permanently:

# echo "net.netfilter.nf_conntrack_max = 131072" >> /etc/sysctl.conf
# /sbin/sysctl -p

This requires about 38 MB of memory (131072 * 304 bytes).
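
To verify that both values took effect:

# /sbin/sysctl net.netfilter.nf_conntrack_max net.netfilter.nf_conntrack_buckets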

Other “nf_conntrack” Values

You can also change the other values in the same way:

# /sbin/sysctl -a|grep -i nf_conntrack
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_buckets = 16384
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_count = 817
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_expect_max = 256
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_helper = 1
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_timestamp = 0
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.nf_conntrack_max = 65536

Last word: disable “nf_conntrack” if you do not need it.
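
If you cannot disable it globally, you can at least exempt selected traffic from tracking in the raw table so it never enters the conntrack machinery (a sketch; UDP port 53 is only an illustrative choice):

# iptables -t raw -A PREROUTING -p udp --dport 53 -j NOTRACK
# iptables -t raw -A OUTPUT -p udp --sport 53 -j NOTRACK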


 


Running An Instance In OpenStack


Introduction

An instance is a virtual machine that runs inside the cloud. So when we say “run an instance”, we mean run an instance of a specific virtual machine image. The virtual machine image (or simply the image) is a single file that contains a bootable operating system with cloud support. The “cloud-init” package is installed in the virtual machine image to enable instance activation and initialization. You can NOT use a classic image; you must use a cloud-aware image.

The image format must be supported by the hypervisor, so check the compatibility between your hypervisor and the image format. The OpenStack documentation describes the different image formats.

Upload A Virtual Machine Image

In the dashboard, go to the “Project” tab on the left-side navigation menu and click on “Images”. Then click on “Create Image” to upload the image. You will get this dialog box:

(Screenshot: the “Create Image” dialog box)

The fields of this dialog box are explained in the OpenStack dashboard documentation. Currently only images available via an HTTP URL are supported, so I chose “Image Location” as the “Image Source”, and the “Image Location” is: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2

The selected image format is “qcow2”, which is commonly used with KVM (the default hypervisor). The image file is nearly 169 MB, so it takes some time to upload. Most distributions publish sets of cloud-ready images; copy the link location of the one you want and paste it into the “Image Location” field of the dialog box above.

After the image has been uploaded successfully, it appears in the image list:

(Screenshot: the uploaded image listed under “Images”)

Enable SSH On Your Default Security Group

Click the “Project” tab on the left-side navigation menu, then click on “Access & Security”. Under the “Security Groups” tab, select the “default” security group and click on the “Manage Rules” button; you will get this table of rules:

(Screenshot: the “default” security group rules)

Click on “Add Rule” and enter “22” in the “Port” field:

(Screenshot: the “Add Rule” dialog box)

Then click the “Add” button. The rule will be added and SSH can then be used.

(Screenshot: the rule list showing the new SSH rule)

If you create a new security group and want its rules applied to an instance, remember to select that security group when you launch the instance, so that its rules are applied to the new instance (i.e., launch the instance in a specific security group).
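
If you prefer the command line, the same SSH rule can be added with the nova client of that era (a sketch; it assumes the nova CLI is installed and your credentials are sourced):

# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0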

Create OR Import Key Pair

To access the instance through SSH, we need to create or import a key pair. Click on “Access & Security” on the left-side navigation menu, and then click on the “Key Pairs” tab.

To create a key pair, click on “Create Key Pair”. The key pair (private and public keys) will be generated. The public key will be registered, and the private key must be downloaded and kept private.

The public key is injected into the instance by the “cloud-init” package on boot.
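
A key pair can also be created from the command line with the nova client (a sketch; the key-pair name matches the one used later in this post, and the private key must be readable only by you for SSH to accept it):

# nova keypair-add binankeypair > binankeypair.pem
# chmod 600 binankeypair.pem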

(Screenshot: the “Key Pairs” tab)

(Screenshot: the “Import Key Pair” dialog box)

The “Import Key Pair” option will prompt you to provide a name and a public key.

Launch The Virtual Machine

Go to the uploaded image and click on the “Launch” button. You will get this dialog box:

(Screenshot: the “Launch Instance” dialog box)

In the resulting dialog box, under the “Details” tab, you can specify a number of parameters such as the “Instance Name” and the “Flavor” (which defines the hardware resources of the instance). Check also the configuration under the other tabs (Access & Security, Networking, Post-Creation, and Advanced Options). Then click the “Launch” button to ask the Compute service (compute node) to launch an instance with the specified parameters.

Now the instance is created. Click “Instances” on the left-side navigation menu to see the instantiated instances.
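
Alternatively, the same launch can be done from the command line with the nova client (a sketch; the flavor, image, and key-pair names are assumptions based on this post):

# nova boot --flavor m1.small --image Fedora-Cloud-Base-20141203-21 --key-name binankeypair --security-groups default fedora-instance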

Connecting To The Instance Console

On the “Instances” page, click on “Associate Floating IP” on the right side to associate a floating IP with your instance. Both the floating and the internal IP addresses will be listed in the “IP Address” column for your instance. The internal IP address is a private address used to reach other OpenStack instances. The floating IP address is used to access the instance from other machines in your network (private IP address) or from the internet (public IP address). It is called floating because you can associate it with an instance and disassociate it again, so it is movable (it can move to another instance).

# ssh -i /home/binan/Downloads/binankeypair.pem  fedora@Floating-IP-Address

“fedora” is the default user for fedora instances.

You can also access the instance console via the dashboard, or SSH to it from the network namespace where your instance’s router resides:

# ip netns exec qrouter-f96c719a-56e9-4b52-b2cb-da326fc1a429 ssh -i /home/binan/Downloads/binankeypair.pem  fedora@Local-IP-Address

where the “qrouter-f96c719a-56e9-4b52-b2cb-da326fc1a429” is the namespace name.

Congratulations! You have successfully created an instance of a specific virtual machine image and you can SSH into it.




RTPEngine Manual Compilation and Installation In Fedora RedHat


RTPEngine Main Features

  • Open source and free
  • Media traffic running over either IPv4 or IPv6
  • Bridging between IPv4 and IPv6 user agents
  • TOS/QoS field setting
  • Customizable port range
  • Multi-threaded
  • Advertising different addresses for operation behind NAT
  • In-kernel packet forwarding for low-latency and low-CPU performance
  • Automatic fallback to normal userspace operation if kernel module is unavailable
  • OpenSIPS/Kamailio Support.

Installation and Compilation

1- Install The Packages Required To Compile The Daemon:

# yum install glib glib-devel gcc zlib zlib-devel openssl openssl-devel pcre pcre-devel libcurl libcurl-devel xmlrpc-c xmlrpc-c-devel

Note: the glib version must be 2.0 or higher. Note this line in the daemon’s Makefile: CFLAGS += `pkg-config --cflags glib-2.0`
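
You can check which glib-2.0 version is installed with:

# pkg-config --modversion glib-2.0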

2- Get The Latest Release Of RTPEngine Source From RTPEngine’s GitHub Repository:

Go to the RTPEngine repository on “GitHub.com” and copy the repository’s HTTPS address “https://github.com/sipwise/rtpengine.git”. Then clone the repo:

# git clone https://github.com/sipwise/rtpengine.git

Install the Redis client library and server (if you want to run Redis on the same server):

# yum install  hiredis hiredis-devel redis

Be sure the server is running: # systemctl status redis

3- Compile The Daemon:

# cd /usr/local/src/rtpengine/daemon
# make

Now the daemon is compiled and we have the executable file rtpengine. Copy this file to /usr/sbin:

 # cp rtpengine /usr/sbin/rtpengine

4- Compile The iptables Extension “libxt_RTPENGINE”, The User-Space Module Used To Add The In-Kernel Packet Forwarding Rules:

  • Install the iptables development headers: # yum install iptables-devel
  • Compile the iptables extension:

# cd /usr/local/src/rtpengine/iptables-extension

# make

After compilation we get the plugin “libxt_RTPENGINE.so” (the user-space module). Copy this file to “/lib/xtables/” or “/lib64/xtables/” (depending on your system):

# cp libxt_RTPENGINE.so /lib64/xtables

5- Compile The Kernel Module “xt_RTPENGINE” Required For In-Kernel Packet Forwarding:

To compile a kernel module, the kernel development headers must be installed in “/lib/modules/$VERSION/build/”, where VERSION is the kernel version. To find the kernel version, execute “uname -r”. Here I have 3.11.10-301.fc20.x86_64, so the kernel headers must be in “/lib/modules/3.11.10-301.fc20.x86_64/build/”, where 3.11.10-301.fc20.x86_64 is the kernel we compile the module against. To install the header files for the running kernel: # yum install kernel-devel-$(uname -r)

Then:

# cd /usr/local/src/rtpengine/kernel-module

# make

After compilation we get this file “xt_RTPENGINE.ko” which is a kernel module. Copy this module into “/lib/modules/3.11.10-301.fc20.x86_64/extra”.

# cp xt_RTPENGINE.ko /lib/modules/3.11.10-301.fc20.x86_64/extra

After this, create the list of module dependencies and add it to “modules.dep”:

# depmod  /lib/modules/3.11.10-301.fc20.x86_64/extra/xt_RTPENGINE.ko
or for all modules:
# depmod -a

Note: instead of manually copying the module to the “extra” sub-folder and creating the dependency list by calling “depmod”, you can add a modules_install rule to the Makefile. It will copy the file to “extra” and create the dependency list:

modules_install:
	make -C $(KSRC) M=$(PWD) modules_install

and execute: # make modules_install

Load the module:

You need to boot the kernel that you compiled the module against. Then you can load the module as follows:

# modprobe xt_RTPENGINE

To check if the module is loaded: # lsmod |grep xt_RTPENGINE

Output: xt_RTPENGINE           27145  0

After the module has been loaded, a new directory called /proc/rtpengine will appear. In this directory you find two files: control (write-only, used for creating and deleting forwarding tables) and list (read-only, lists the current forwarding tables, which may be empty):

# ls -l /proc/rtpengine/control
--w--w----. 1 root root 0 Oct 16 11:54 /proc/rtpengine/control

# ls -l /proc/rtpengine/list
-r--r--r--. 1 root root 0 Oct 16 11:55 /proc/rtpengine/list

To add a forwarding table with ID=$TableID (a number between 0 and 63; normally the daemon does this itself):

# echo "add $TableID" > /proc/rtpengine/control

The folder  /proc/rtpengine/$TableID will be created. It will contain these files: blist (read-only), control (write-only, used by the daemon to update the forwarding rules), list (read-only), and status (read-only).

To delete a table: # echo "del $TableID" > /proc/rtpengine/control

You can delete a table only if it is not in use by rtpengine (i.e., rtpengine is not running).

To display the list of the current tables: # cat /proc/rtpengine/list

You can unload the kernel module “xt_RTPENGINE” when there are no related iptables rules or forwarding tables left:

# rmmod xt_RTPENGINE

Add the iptables rules that forward incoming packets to the xt_RTPENGINE module:

# iptables -I INPUT -p udp -j RTPENGINE --id $TableID

# ip6tables -I INPUT -p udp -j RTPENGINE --id $TableID

Start The Daemon

Recommendation: start RTPEngine as a systemd service, with a configuration file to specify the options (see the sketch after the example below). Otherwise, start the daemon directly:

# ./rtpengine ….Options…..

Example:

# rtpengine --table=$TableID --interface=$IPAddress --listen-ng=127.0.0.1:2223 --pidfile=/var/run/rtpengine.pid --no-fallback
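
If you go the systemd route recommended above, a minimal unit file might look like the following (the unit name, paths, and option values are assumptions, not the packaged service file; --foreground keeps the daemon in the foreground so systemd can supervise it):

File: /etc/systemd/system/rtpengine.service

[Unit]
Description=RTPEngine media proxy
After=network.target

[Service]
ExecStart=/usr/sbin/rtpengine --foreground --table=0 --interface=127.0.0.1 --listen-ng=127.0.0.1:2223 --pidfile=/var/run/rtpengine.pid --no-fallback
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then: # systemctl daemon-reload && systemctl start rtpengine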

For logging to file:

  • When you start rtpengine, add the options: log-level and log-facility:

# rtpengine --table=$TableID --interface=127.0.0.1 --listen-ng=127.0.0.1:2223 --pidfile=/var/run/rtpengine.pid --no-fallback --log-level=3 --log-facility=local1

The maximum value of log-level is 7 (debug).

  • Assuming /var/log/rtpengine.log will be the log file:

Edit /etc/rsyslog.conf (# vim /etc/rsyslog.conf) and add this entry:

local1.*                        /var/log/rtpengine.log
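
Then restart rsyslog so the new entry takes effect:

# systemctl restart rsyslog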

For the connection with Redis database:

  • Check the IP and the port of the Redis server: #  netstat -lpn |grep redis
  • Add the following options when you start RTPEngine: --redis=IP:PORT and --redis-db=INT

See the list of command-line options on the project’s GitHub repository page, where you can also find explanations of all RTPEngine features, with examples.




OpenSIPS Event Interface – Part 1


 

External applications can interact with OpenSIPS through the Management Interface (MI), which is a pull-based mechanism, and through the Event Interface, which is a push-based mechanism. The Event Interface acts as a mediator between OpenSIPS regular modules, the configuration script, and the transport modules. Events can be published by the modules and by the configuration file/script. An external application subscribes to a certain event, and the OpenSIPS Event Interface notifies it when that event is raised. OpenSIPS has some hardcoded events, such as E_PIKE_BLOCKED (raised by the “pike” module) and E_CORE_PKG_THRESHOLD (raised when private memory usage exceeds a threshold). Several transport protocols can be used to send the event notifications to the subscribed external applications; each protocol is provided by a separate OpenSIPS module.

Events Raised from OpenSIPS Modules

If a module wants to export events, it must register them in its initialization function (the “init_function” member of the module interface “struct module_exports”). The function “evi_publish_event”, defined in “evi/evi_modules.h”, is used to register an event. The function “evi_raise_event” is used to raise an event. The function “evi_probe_event” checks whether there is any subscriber for the event; it is called before building the parameter list because, if no subscriber is found, the cost of building the parameter list (and other work) can be skipped.

event_id_t evi_publish_event(str event_name);

int evi_raise_event(event_id_t id, evi_params_t* params);

int evi_probe_event(event_id_t id);

Events Raised from OpenSIPS Script

To raise an event from the script, call the core function “raise_event”. Its first parameter is the event name; it must be a constant string (a pseudo-variable cannot be used) because the event must be registered at startup, when a pseudo-variable would still have an undefined value. The second and third parameters together form the parameter list (names and values) and are optional. This is how to call the function:

$avp(attr-name) = "param1";
$avp(attr-name) = "param2";
$avp(attr-val) = 1;
$avp(attr-val) = "2";

raise_event("E_Example", $avp(attr-name), $avp(attr-val));

The full explanation of “raise_event” can be found in the OpenSIPS core functions documentation.

Both AVPs must hold the same number of values, so that each parameter name is paired with a parameter value.

Transport Module Interface

This interface is implemented by all transport modules that are used to send event notifications to external applications. It is defined in the file “evi/evi_transport.h” as follows:

typedef struct evi_export_ {
	str proto;          /* protocol name / transport module name */
	raise_f *raise;     /* raise function */
	parse_f *parse;     /* parse function */
	match_f *match;     /* sockets match function */
	free_f *free;       /* free a socket */
	print_f *print;     /* prints a socket */
	unsigned int flags;
} evi_export_t;

The first field is the name of the transport protocol. It is used by the MI subscription command, so that event notifications are sent over the transport named in the subscription request. The second field is the raise function, which is called by the OpenSIPS Event Interface for each subscriber to send the notification over this transport; it serializes the event parameters and sends them to the subscriber. The event parameters are passed to the transport module by the module that raised the event.

The third field is the parse function, which parses the subscriber’s socket. The match function compares sockets and is used, for example, to find an existing subscription when its expire period is updated or when the application unsubscribes. The remaining fields are self-explanatory.

A transport module that implements this interface must be registered with the OpenSIPS Event Interface. This is done by calling the following function with an instance of the above structure as a parameter:

int register_event_mod(evi_export_t *ev);

The Availability of the Events

All exported events become available at startup, after the parsing of the configuration file is completed. An external application can therefore only subscribe to an event after OpenSIPS has started.

Event Subscription

To subscribe to events, the external application sends this MI subscription command:

event_subscribe Event_Name Transport:Socket Expire

Example: # opensipsctl fifo event_subscribe E_PIKE_BLOCKED udp:127.0.0.1:8888 1200

If the subscription succeeds, the application gets a 200 OK reply and starts receiving notifications whenever the subscribed event is raised. The external application is supposed to do something upon receiving a notification, for example update the firewall, do some processing, print the event, or send an informational mail to the administrator.
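
As a quick test, assuming the event_datagram transport module is loaded, you can watch the raw notifications arrive on the subscribed socket with a plain UDP listener (ncat is provided by the nmap-ncat package; the address and port are only the illustrative values used above):

# ncat -u -l 127.0.0.1 8888

Start the listener first, then issue the subscription command from the example above and trigger the event.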

