RE-TURN In Few Words

 

Words taken from RE-TURN RTCWEB draft

  • TURN [RFC 5766] is a protocol used to provide connectivity between users behind NAT, or to obscure the identity of the participants by concealing their IP addresses.
  • The TURN server typically sits on the public internet.
  • The problem is that in many enterprises, direct UDP transmission is not permitted between clients on the internal network and external IP addresses, and using TURN-TCP or TURN-TLS for media is not ideal because of latency.
  • In current WebRTC implementations, TURN can only be used on a single-hop basis:
    • Using only the enterprise’s TURN server reveals the user information. Less security here.
    • Using only the application’s TURN server may be blocked by the network administrator, or may require using TURN-TCP or TURN-TLS. Less connectivity here.
  • For both security and connectivity, Recursively Encapsulated TURN (Re-TURN) is introduced: multiple TURN servers are used to route the traffic.
  • The browser allocates a port on the border TURN server (the TURN proxy) and runs STUN and TURN over this allocation, so TURN is recursively encapsulated.
  • Only the browser needs to implement Re-TURN, not the TURN proxy or the application TURN server.

Reference

draft-ietf-rtcweb-return-02


Datagram Transport Layer Security (DTLS)


Introduction

The DTLS protocol ([RFC 6347]) is based on the TLS protocol and provides similar security for network traffic carried over datagram transport protocols (e.g. UDP). Real-time applications like media streaming and internet telephony are delay sensitive, so they use a datagram transport to carry their data. DTLS runs on top of UDP and secures the data in a transparent way (it is inserted between the application layer and the transport layer). DTLS runs in application space without any kernel modifications. Note that DTLS does not add reliable or in-order delivery; it preserves the datagram semantics of the underlying transport. The current version of DTLS is 1.2.

Why DTLS and NOT TLS for Datagram Transport

The answer is simply that using a datagram transport like UDP means packets can be lost or reordered, and TLS cannot handle this (when TLS is used, TCP handles it underneath). So we take TLS, add minimal changes to cope with the unreliability, and call the result DTLS.

More specifically, these are the problems with TLS if a datagram transport is used:

  • In TLS there is an integrity check which depends on the record sequence number. For example, if record N is lost, the integrity check on record N+1 will fail because of the wrong sequence number (the sequence numbers are implicit in the records). A record could also arrive out of order, for example record N+1 arrives before record N.
  • A record could arrive multiple times (be replayed).
  • The TLS handshake breaks if handshake messages are lost.
  • Handshake messages are big (many kilobytes), while UDP datagrams are typically limited to about 1500 bytes by the path MTU.

So the goal is to change TLS to solve the above problems, and the result is DTLS. Briefly, DTLS solves the problems by:

  • Banning stream ciphers, so that records are cryptographically independent (they do not share inter-record cipher state).
  • Adding explicit sequence numbers to the records.
  • Using a retransmission timer to handle packet loss (for the handshake messages).
  • Fragmenting handshake messages: each DTLS handshake message carries a fragment offset and a fragment length (see the handshake header sketch below).
  • Maintaining a bitmap window of received records, so a record that was already received is discarded (a small sketch of such a window follows).
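
For illustration, this is roughly what the DTLS handshake header looks like (RFC 6347, section 4.2.2): compared to TLS, the extra message_seq, fragment_offset, and fragment_length fields are what make fragmentation and retransmission possible. The struct below is only a wire-format sketch; the 24-bit fields are shown as 3-byte arrays and it is not meant to be cast onto packet data.

#include <stdint.h>

/* Sketch of the DTLS handshake header layout (RFC 6347, section 4.2.2). */
struct dtls_handshake_header {
        uint8_t  msg_type;             /* e.g. client_hello(1) */
        uint8_t  length[3];            /* total length of the handshake message */
        uint16_t message_seq;          /* handshake message sequence number */
        uint8_t  fragment_offset[3];   /* where this fragment starts in the message */
        uint8_t  fragment_length[3];   /* how many bytes this fragment carries */
        /* followed by fragment_length bytes of the message body */
};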

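And here is a minimal sketch of the sliding-bitmap replay check mentioned in the last point (the same mechanism IPsec uses, RFC 4303). The 64-record window size and the names are illustrative, not taken from any particular library.

#include <stdint.h>
#include <stdbool.h>

struct replay_window {
        uint64_t max_seq;  /* highest sequence number seen so far */
        uint64_t bitmap;   /* bit i set => (max_seq - i) was already received */
};

/* Returns true if the record is new (and marks it as seen);
 * false if it is a replay or too old to track. */
bool replay_check_and_update(struct replay_window *w, uint64_t seq)
{
        if (seq > w->max_seq) {                   /* newer than anything seen: slide the window */
                uint64_t shift = seq - w->max_seq;
                w->bitmap = (shift >= 64) ? 0 : (w->bitmap << shift);
                w->bitmap |= 1;                   /* bit 0 = the new maximum */
                w->max_seq = seq;
                return true;
        }
        uint64_t offset = w->max_seq - seq;
        if (offset >= 64)                         /* older than the window: reject */
                return false;
        if (w->bitmap & ((uint64_t)1 << offset))  /* already received: replay */
                return false;
        w->bitmap |= ((uint64_t)1 << offset);     /* mark as received */
        return true;
}
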
The client automatically generates a self-signed certificate for each peer. This means there is no certificate chain verification, and the certificates themselves cannot be used to authenticate the peer because they are self-signed. So DTLS provides encryption and integrity, but leaves authentication to the application.

Library Support For DTLS 1.2

Botan, GnuTLS, MatrixSSL, OpenSSL, wolfSSL
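
As a quick taste of one of these, here is a minimal sketch of creating a DTLS 1.2 client context with OpenSSL (assuming the 1.1.x API; error reporting and the UDP socket/BIO setup are left out):

#include <openssl/ssl.h>

/* Minimal sketch: a DTLS 1.2 client context with OpenSSL 1.1.x. */
SSL_CTX *create_dtls_client_ctx(void)
{
        /* DTLS_client_method() negotiates the highest DTLS version supported */
        SSL_CTX *ctx = SSL_CTX_new(DTLS_client_method());
        if (ctx == NULL)
                return NULL;

        /* restrict the context to DTLS 1.2 */
        SSL_CTX_set_min_proto_version(ctx, DTLS1_2_VERSION);

        /* as noted above, DTLS itself only gives encryption and integrity;
         * with self-signed certificates, peer authentication (e.g. fingerprint
         * checking in WebRTC) is left to the application */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_NONE, NULL);

        return ctx;
}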


OpenSIPS Management Interface (C Development)


Introduction

OpenSIPS Management Interface (MI) is a mechanism which enables external applications (e.g. the command line tool opensipsctl, or the web application OpenSIPS-CP) to send commands (MI commands) to OpenSIPS. It is a pull-based mechanism, which means that when you need information you have to query OpenSIPS (i.e. you ask for it at a certain time). An MI command allows you to fetch/push data from/to OpenSIPS or trigger some action there. The core exports its own MI functions and each module exports its own as well. Here you can find some examples of sending MI commands from the external applications “opensipsctl” and “OpenSIPS-CP”.

Several transport protocols can carry MI commands and their replies between the external application and OpenSIPS. Each protocol is provided by a separate OpenSIPS module. The current protocols are mi_fifo, mi_datagram, mi_xmlrpc, mi_http, mi_json, and mi_xmlrpc_ng. These modules require extra processes so that MI handling does not disturb the main OpenSIPS processes that handle SIP.

A module must be loaded (i.e. configured to be loaded in the routing script) so that its exported MI functions (exported through the module interface “exports”) are populated and can be called from the external application; otherwise you will get an error message like “500 command ‘Command_Name’ not available”. To be able to send an MI command from an external application, the transport module must also be configured in the routing script to be loaded. For example, if you want to connect to the MI interface via a FIFO file stream, the module MI_FIFO must be correctly configured and loaded. The same holds for the other transport protocols.

So two modules are needed to be able to call a specific MI function:

  • The module which exports the MI function.
  • The transport module which will carry the command to OpenSIPS.

The extra processes required for transport will listen on different ports than the SIP ports. OpenSIPS can use multiple transport protocols at the same time (configure them to be loaded in the routing script).

MI Command Syntax

If you are willing to write an external management application, you have to implement the transport you want to use. Your application will behave like a client which sends MI command in a specific format to OpenSIPS. These are the current formats:

MI Internal Structure

The following is the module_exports structure (Module Interface) defined in the file “sr_module.h” with the parts related to MI.

struct module_exports {
        ...                        /* many fields omitted for simplicity */
        mi_export_t *mi_cmds;      /* array of the exported MI functions */
        ...
        proc_export_t *procs;      /* array of the additional processes required by the module */
        ...
};

procs

Sometimes a module needs extra processes, for example for an MI transport protocol. So it exports:

  • The number of processes to be forked (the “no” field).
  • The helper functions (pre_fork_function and post_fork_function) which help the attendant process create the extra processes.
  • The function which will be executed by the extra processes. Whatever these extra processes do will not interfere with the rest of the processes that handle SIP.

The structure is defined in the file “sr_module.h” in OpenSIPS source directory:

typedef struct proc_export_ proc_export_t;

struct proc_export_ {
        char *name;
        mod_proc_wrapper pre_fork_function;
        mod_proc_wrapper post_fork_function;
        mod_proc function;
        unsigned int no;
        unsigned int flags;
};

typedef void (*mod_proc)(int no);

typedef int (*mod_proc_wrapper)();

The flags can be 0 or PROC_FLAG_INITCHILD. If PROC_FLAG_INITCHILD is provided, the function “child_init” from all modules will be run by the new extra processes.

Example of procs NULL terminated array:

static proc_export_t mi_procs[] = {
        {"MI Datagram", pre_datagram_process, post_datagram_process,
         datagram_process, MI_CHILD_NO, PROC_FLAG_INITCHILD},
        {0, 0, 0, 0, 0, 0}
};

static param_export_t mi_params[] = {
        {"children_count", INT_PARAM, &mi_procs[0].no},
        ...
};
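
To make the shape of these callbacks concrete, here is an illustrative sketch matching the typedefs above (not the real mi_datagram code; the real implementations live in modules/mi_datagram). The pre/post fork wrappers return int, and the process function receives its rank and typically loops forever serving requests:

/* Illustrative sketch only: the shape of the functions referenced in mi_procs[] above. */
static int pre_datagram_process(void)
{
        /* runs in the attendant process before forking,
         * e.g. create the sockets the workers will inherit */
        return 0;
}

static int post_datagram_process(void)
{
        /* runs in the attendant process after the workers were forked */
        return 0;
}

static void datagram_process(int rank)
{
        /* main loop of each extra process: receive an MI request,
         * run it, write the reply back */
        for (;;) {
                /* ... */
        }
}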

mi_cmds

mi_cmds is an array of the exported MI functions (type: mi_export_t*). The definition of the type “mi_export_t” is in the file “mi/mi.h”, as follows:

typedef struct mi_export_ {
        char *name;
        char *help;
        mi_cmd_f *cmd;
        unsigned int flags;
        void *param;
        mi_child_init_f *init_f;
} mi_export_t;

name is the actual name of the MI function that will be called. help is a description of what the function does. cmd is a pointer to the actual MI function:

typedef struct mi_root* (mi_cmd_f)(struct mi_root*, void *param);

The return value is a struct mi_root * (a pointer to the root of the MI reply tree). The first parameter is the root of the MI request tree (type: struct mi_root *) and the second parameter is the actual command parameter (void *); param is the actual parameter of the MI function.

init_f is the child init function for the exported MI function. In some cases, extra setup has to be done by the MI workers (the processes that handle MI requests, like the MI_FIFO process, the MI_XMLRPC process, etc.). This function is called at startup after OpenSIPS has forked (it is called once by each of these processes individually).

Example of MI functions NULL terminated array:

static mi_export_t mi_cmds[] = {
        {"mi_get_function", "This function is doing bla bla",
         mi_get_function, MI_NO_INPUT_FLAG, 0, 0},
        {0, 0, 0, 0, 0, 0}
};

The string “mi_get_function” is the actual name of the MI command; it is the name used to look up the command when an MI request is received and needs to be verified (lookup_mi_cmd(MethodName, ...)). The actual function will look like this:

struct mi_root* mi_get_function(struct mi_root *root, void *param)
{
        ...
}

MI Function Reply Tree

The return value of an MI function is a pointer to a tree (the reply tree). The root of this tree is a pointer to struct mi_root. This structure is defined in the file “mi/tree.h”:

struct mi_root {
        unsigned int code;
        str reason;
        struct mi_handler *async_hdl;
        struct mi_node node;
};

code is the root code of the response (200 for success, 500 for error, ...) and node is the starting node (type: struct mi_node, defined in the same file “mi/tree.h”):

struct mi_node {
        str value;
        str name;
        unsigned int flags;
        struct mi_node *kids;
        struct mi_node *next;
        struct mi_node *last;
        struct mi_attr *attributes;
};

To initialize the MI reply tree in the previous MI function (mi_get_function), call the function “init_mi_tree”. To add a child node, call “add_mi_node_child”. To add an attribute, call “add_mi_attr”, and so on. Here is an example where a reply tree is built when an MI function is called.

struct mi_root* mi_get_function(struct mi_root *root, void *param)
{
        ...
        struct mi_root *rpl_tree = init_mi_tree(200, MI_SSTR(MI_OK));   /* 200 OK reply */
        ...
        return rpl_tree;
}
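
For illustration, here is a slightly fuller (hypothetical) version of the same function that also adds one child node to the reply tree. The node name "Status" and value "active" are made up for this example:

struct mi_root* mi_get_function(struct mi_root *root, void *param)
{
        struct mi_root *rpl_tree;
        struct mi_node *node;

        rpl_tree = init_mi_tree(200, MI_SSTR(MI_OK));          /* 200 OK reply */
        if (rpl_tree == NULL)
                return NULL;

        /* attach a child node "Status"="active" under the root node */
        node = add_mi_node_child(&rpl_tree->node, MI_DUP_VALUE,
                                 MI_SSTR("Status"), MI_SSTR("active"));
        if (node == NULL) {
                free_mi_tree(rpl_tree);
                return NULL;                                    /* signals an error */
        }

        return rpl_tree;
}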

A transport process (datagram_process, xmlrpc_process, etc.) behaves as a server which accepts connections (UDP in the case of datagram transport, TCP in the case of xmlrpc, and so on). To serve the MI request, the transport process calls functions from the OpenSIPS core (the management interface: “mi/mi.h”) in addition to some module-defined functions.

When a message is received, the transport process checks whether the requested MI function is available (it looks up the MI command):

struct mi_cmd *fu = lookup_mi_cmd((char*)methodName, strlen(methodName));

The function “lookup_mi_cmd” is defined in “mi/mi.h”. If “fu == 0”, the MI command is not available. Otherwise, the transport process parses the parameters of the requested MI function into an MI tree. For example in xmlrpc, the function “xr_parse_tree” is defined in the file “modules/mi_xmlrpc/xr_parser.h” and called in “modules/mi_xmlrpc/xr_server.c”. It returns a pointer to the request tree (struct mi_root *t).

After this, the actual MI function will be called through the function “run_mi_cmd”. As explained above, the return value of the MI function (the reply tree) has the type struct mi_root *. This is how it is called in the xmlrpc transport module:

struct mi_root *mi_rpl = run_mi_cmd(fu, t, (mi_flush_f*)xr_flush_response, env);

The function “run_mi_cmd” is defined in the file “mi/mi.h”, and in its body it calls the actual MI function as follows:

static inline struct mi_root* run_mi_cmd(struct mi_cmd *cmd, struct mi_root *t,
                                         mi_flush_f *f, void *param)
{
        struct mi_root *ret;
        ...
        ret = cmd->f(t, cmd->param);
        ...
        return ret;
}

struct mi_cmd {
        int id;
        str module;
        str name;
        str help;
        mi_child_init_f *init_f;
        mi_cmd_f *f;
        unsigned int flags;
        void *param;
};

Then the reply tree mi_rpl (type: struct mi_root *) is formatted according to the transport protocol. For example, if the protocol is xmlrpc, the reply tree is formatted as XML, either as a string that contains the name and attributes of each node, or as an array where each element contains one node’s information (node name and its attributes). Each attribute has a name and a value.

Finally, the response is written to the FIFO file, the UDP socket (datagram transport), or the TCP socket (xmlrpc transport), so it is sent back to the external application.
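
Putting the pieces together, here is a simplified, illustrative sketch of what an MI transport worker does per request. The helper send_formatted_reply() stands in for the transport-specific formatting/writing step and is not a real OpenSIPS function; the request tree is assumed to be parsed already:

#include <string.h>
#include "../../mi/mi.h"        /* lookup_mi_cmd(), run_mi_cmd(), struct mi_root */

/* hypothetical helper: format the reply tree for this transport and send it */
extern void send_formatted_reply(struct mi_root *reply);

void serve_mi_request(char *methodName, struct mi_root *request_tree)
{
        struct mi_cmd  *fu;
        struct mi_root *mi_rpl;

        /* 1. look up the MI command; fails if the exporting module is not loaded */
        fu = lookup_mi_cmd(methodName, strlen(methodName));
        if (fu == NULL)
                return;   /* this is where "500 command ... not available" is reported */

        /* 2. run the actual MI function exported by the module */
        mi_rpl = run_mi_cmd(fu, request_tree, NULL, NULL);

        /* 3. format the reply tree for this transport and write it back */
        if (mi_rpl != NULL)
                send_formatted_reply(mi_rpl);
}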

Test MI Function of a New Module

You can test the MI command by using opensipsctl as following:

# scripts/opensipsctl fifo mi_get_function


 Note

  • The OpenSIPS core MI exported functions (mi_core_cmds) are defined in the file “mi_core.c”.
  • An OpenSIPS module’s MI exported functions (mi_cmds) are defined in the file “modules/Module_Name/Module_Name.c”.
  • OpenSIPS has many transport modules; see “modules/mi_Transport”, where Transport can be datagram, xmlrpc, json, etc.

 More Information


Linux Tuning For SIP Routers – Part 4 (Networking)


Introduction

This is Part 4 of the “Linux Tuning For SIP Routers” topic. In Part 1 I talked about interrupts and IRQ tuning. In Part 2 I talked about file system tuning, journaling file systems, and swappiness. In Part 3 I talked about the OOM killer, private memory, and shared memory tuning. In this part I will talk about network tuning.

Network Adapter Settings

To check the settings of your network card, type “ethtool X”, where X is the interface name. The tool “ethtool” is used to read and write the settings of the network card. To update the settings:

  • Find the name of the network interface: # ifconfig
  • Get the current settings: # ethtool X
  • Do your change using ethtool (see the man page: # man ethtool). For example, to change the speed to 1000 Mb/s and the mode to full duplex: # ethtool -s p4p2 speed 1000 duplex full autoneg off (where p4p2 is my interface name).

Note: Changes must be supported by the network adapter, otherwise it will give an error message “Cannot set new settings: Invalid argument”

To make the changes permanent for p4p2, set the environment variable  ETHTOOL_OPTS:

  • Edit the interface configuration file (Fedora/Red Hat): # vi /etc/sysconfig/network-scripts/ifcfg-p4p1  (note “ifcfg-p4p1” will differ for your interface)
  • Add/update this line: ETHTOOL_OPTS="speed 1000 duplex full"
  • Restart the network service: # systemctl restart network.service

“txqueuelen” Parameter

The kernel parameter “txqueuelen” is the size of the transmission queue of the interface. The default value is 1000 frames. The kernel stores departing frames in this queue (frames that have not been loaded onto the NIC’s buffer yet). Tuning this value is important to avoid losing frames due to lack of space in the transmission queue. Use a high value (a long queue) for high speed interfaces (e.g. 10 Gb and faster).

To get the current value of txqueuelen: # ifconfig p4p2 where p4p2 is the interface name. The output will look like:

p4p2: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 0c:54:a5:08:45:a7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 19

You can also execute “# ip link” to get the txqueuelen value (qlen 1000).

To temporarily change the value to 3000 (e.g. for a 10 Gb card): # ifconfig p4p2 txqueuelen 3000

To change it permanently, add/change the command “/sbin/ifconfig p4p2 txqueuelen 3000” in the file “/etc/rc.d/rc.local”. If this file does not exist, do the following on Fedora/Red Hat:

  • # vi /etc/rc.d/rc.local
  • # chmod a+x /etc/rc.d/rc.local
  • At the top of the file, add the interpreter line: #!/bin/sh
  • At the end of the file, add: exit 0
  • Between those two lines, add the command: /sbin/ifconfig p4p2 txqueuelen 3000
  • Save the file and reboot (The command will be executed at system startup).

“netdev_max_backlog” Parameter

The kernel parameter “netdev_max_backlog” is the maximum size of the receive queue. Received frames are stored in this queue after being taken from the ring buffer on the NIC. Use a high value for high speed cards to avoid losing packets. For a real-time application like a SIP router, a long queue must be paired with a fast CPU, otherwise the data in the queue will become stale (old).

To get the current value of netdev_max_backlog: # cat /proc/sys/net/core/netdev_max_backlog. The default value is 1000 frames.

To temporarily change the size to 300000 frames: # echo 300000 > /proc/sys/net/core/netdev_max_backlog

To permanently change the value:

  • Edit the file “/etc/sysctl.conf”
  • Add the line: net.core.netdev_max_backlog=300000
  • Load the change which you did:  # sysctl -p

Note

  • Do a test before going to the deployment.
  • To check the ring buffer settings on the NIC: # ethtool -g p4p2 where p4p2 is the interface name; to change them, use # ethtool -G with the desired parameters. This operation must be supported by the NIC.

Kernel Parameters For the Socket Buffer

Usually SIP uses TCP or UDP to carry the SIP signaling messages over the internet (<=> TCP/UDP sockets). The receive buffer (socket receive buffer) holds the received data until it is read by the application. The send buffer (socket send buffer) holds the data until it is transmitted by the underlying protocol in the network stack.

The default (initial) and maximum size (in bytes) of the receive socket buffer:

# cat /proc/sys/net/core/rmem_default

# cat /proc/sys/net/core/rmem_max

You can manipulate this value in the application by calling the function “setsockopt” with the option name “SO_RCVBUF” and the socket level SOL_SOCKET. When you specify a value for this option, the kernel doubles it (to allow space for bookkeeping overhead), and the doubled value is capped at the maximum (rmem_max).
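
A minimal sketch of doing this from C (assuming an already-created UDP socket descriptor; the doubled value can be observed by reading the option back):

#include <stdio.h>
#include <sys/socket.h>

int set_rcvbuf(int sock, int bytes)
{
        int actual = 0;
        socklen_t len = sizeof(actual);

        /* ask the kernel for `bytes`; it will double the value internally
         * and cap the result at net.core.rmem_max */
        if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) != 0)
                return -1;

        /* read it back: getsockopt reports the (doubled, capped) value */
        if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len) != 0)
                return -1;

        printf("requested %d bytes, kernel set %d bytes\n", bytes, actual);
        return 0;
}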

The default (initial) and maximum size (in bytes) of the send socket buffer:

# cat /proc/sys/net/core/wmem_default

# cat /proc/sys/net/core/wmem_max

You can manipulate this value in the application by calling the function “setsockopt” with the option name “SO_SNDBUF”. Choose a value which suits the speed of the network card. For example, to set the maximum to 10 MB (10 * 1024 * 1024 bytes = 10,485,760) for the receive buffer and the send buffer:

Temporarily:

# echo 10485760 > /proc/sys/net/core/rmem_max

# echo 10485760 > /proc/sys/net/core/wmem_max

Permanently:

  • Edit the file “/etc/sysctl.conf”
  • Add the lines:

net.core.rmem_max=10485760

net.core.wmem_max=10485760

  • Save the file and reload the changes:  # sysctl -p

To change the defaults:

Temporarily:

# echo 10485760 > /proc/sys/net/core/rmem_default

# echo 10485760 > /proc/sys/net/core/wmem_default

Permanently:

  • Edit the file “/etc/sysctl.conf”
  • Add the lines:

net.core.rmem_default=10485760

net.core.wmem_default=10485760

  • Save the file and reload the changes:  # sysctl -p

You also need to set the minimum size, initial size, and maximum size for the protocols. To read the current values (assuming IPv4 and TCP):

# cat /proc/sys/net/ipv4/tcp_rmem

The output looks like this: 4096    87380   6291456 (min, initial, max). To read the write buffer settings: # cat /proc/sys/net/ipv4/tcp_wmem

You can do similar tuning for UDP (see udp_mem, udp_rmem_min, and udp_wmem_min). To change the TCP values:

Temporarily:

# echo 10240 87380 10485760 > /proc/sys/net/ipv4/tcp_rmem

# echo 10240 87380 10485760 > /proc/sys/net/ipv4/tcp_wmem

Permanently:

  • Edit the file “/etc/sysctl.conf”
  • Add the lines:

net.ipv4.tcp_rmem= 10240 87380 10485760

net.ipv4.tcp_wmem= 10240 87380 10485760

  • Save the file and reload the changes:  # sysctl -p

TCP Kernel Parameters

  • Disable TCP timestamps (RFC 1323): the TCP timestamp feature allows round-trip time measurement, at the cost of adding the timestamp option (about 12 bytes) to each TCP header. To avoid this overhead we disable the feature: net.ipv4.tcp_timestamps = 0
  • Enable window scaling: net.ipv4.tcp_window_scaling = 1
  • Enable selective acknowledgements (SACK): net.ipv4.tcp_sack = 1
  • Disable caching of TCP metrics, so the parameters of closed connections are not saved and reused for future connections:

net.ipv4.tcp_no_metrics_save = 1

‘1’ means disable caching.

  • Tune the value of the SYN backlog (the maximum queue length of pending connections that are still waiting for the final acknowledgment of the handshake):

net.ipv4.tcp_max_syn_backlog = 300000

  • Set the value of somaxconn. This is the maximum value of the backlog. The default value is 128. If the backlog is greater than somaxconn, it will be silently truncated to it.

Temporarily: # echo 300000 > /proc/sys/net/core/somaxconn

Permanently: Add the line: net.core.somaxconn= 300000 in the file /etc/sysctl.conf. Reload the change (# sysctl -p).

  • “TIME_WAIT” is the TCP socket state where the socket is closed but still waiting to handle packets which may remain in the network. The parameter tcp_max_tw_buckets is the maximum number of sockets in the “TIME_WAIT” state. After reaching this number, the system starts destroying sockets in this state. To get the default value:

# cat /proc/sys/net/ipv4/tcp_max_tw_buckets.

Increase the value to 2000000 (2 million) if needed. Increasing this value requires more memory.

Temporarily:

# echo 2000000 > /proc/sys/net/ipv4/tcp_max_tw_buckets

Permanently: Add the line: net.ipv4.tcp_max_tw_buckets= 2000000 in the file /etc/sysctl.conf. Reload the change (# sysctl -p).

  • Other TCP parameters that you can change as above:

net.ipv4.tcp_keepalive_time (how often the keepalive packets will be sent to keep the connection alive).

net.ipv4.tcp_keepalive_intvl (time to wait for a reply on each keepalive probe).

net.ipv4.tcp_retries2 (how many times to retry before killing an alive TCP connection).

net.ipv4.tcp_syn_retries (how many times to retransmit the initial SYN packet).


 Number of Open Files

When the SIP router is heavily loaded, it has a lot of open sockets (each socket is a file). To allow more sockets we increase the maximum number of open files (<=> number of file handles). To see the maximum number of file handles for the entire system:

# cat /proc/sys/fs/file-max. The output will be a number, e.g. 391884

To get the current usage of file handles: # cat /proc/sys/fs/file-nr. The output will look like: 10112   0       391884. The first number (10112) is the total number of allocated file handles. The second number (0) is the number of currently used file handles (2.4 kernel) or currently unused but allocated file handles (2.6 kernel). The third number (391884) is the maximum number of file handles.

To change the maximum number of open files temporarily:  # echo 65536 > /proc/sys/fs/file-max

To change the maximum number of open files permanently: Add the line “fs.file-max=65536”  in the file “/etc/sysctl.conf”.

SHELL Limits

The maximum number of open files “/proc/sys/fs/file-max” is for the whole system. This number is for all users, so if you want to allow a specific user (let’s say opensips) to open X files, where X < file-max, you can use “ulimit” for that:

  • As root, edit the file: # vi /etc/security/limits.conf
  • Add/Change the following lines:

opensips              soft               nofile                   4096

opensips              hard               nofile                  40000

At start time, opensips can open up to 4096 files (the soft limit). If opensips gets an error message about running out of file handles, the opensips user can execute “ulimit -n New_Number” to increase the number of file handles to New_Number. New_Number must not exceed the hard limit (the second line), and the hard limit must be less than file-max.

To avoid getting the “out of file handles” error at all, have the opensips start script execute “ulimit -n New_Number”. Choosing New_Number depends on your own tests of how much your opensips needs. As root, you can also permanently set “soft limit = hard limit = New_Number” in the file “/etc/security/limits.conf”.
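
For illustration, this is a minimal sketch of what “ulimit -n” does from inside a program: raising the soft open-files limit up to the hard limit at startup.

#include <stdio.h>
#include <sys/resource.h>

int raise_nofile_limit(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
                return -1;

        rl.rlim_cur = rl.rlim_max;      /* raise the soft limit up to the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                return -1;

        printf("open files: soft=%lu, hard=%lu\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
        return 0;
}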

Do not set the hard limit to a value equal to /proc/sys/fs/file-max, otherwise the entire system might run out of file handles. Remember, file-max is for the whole system.


 Note

  • Don’t use the numbers in this article as-is. Do your own tests on your system.
  • Use network performance testing tools like netperf, pktgen, and mpstat.
  • Always check how to optimize your hardware:
    • Check whether the network interface card supports flow control (e.g. the e1000 network interface card).
    • Some NIC drivers support interrupt coalescing (multiple interrupts can be coalesced into one kernel interrupt). If your NIC supports this, you can view the current settings with # ethtool -c p4p2 and configure them with # ethtool -C p4p2 followed by the coalescing parameters, where p4p2 is the interface.

 More Information