OpenSIPS Management Interface (C Development)


Introduction

The OpenSIPS Management Interface (MI) is a mechanism which enables external applications (e.g. the command line tool opensipsctl, or the web interface OpenSIPS-CP) to send commands (MI commands) to OpenSIPS. It is a pull-based mechanism: when you need information, you query OpenSIPS (i.e. you ask for something at a certain time). An MI command allows you to fetch/push data from/to OpenSIPS or to trigger some action there. The core exports its own MI functions and each module exports its own as well. Here you can find some examples of MI commands being sent by the external applications opensipsctl and OpenSIPS-CP.

Several transport protocols can carry MI commands and their replies between the external application and OpenSIPS. Each protocol is provided by a separate OpenSIPS module. The current transport modules are mi_fifo, mi_datagram, mi_xmlrpc, mi_http, mi_json, and mi_xmlrpc_ng. These modules require extra processes, so that handling MI traffic does not disturb the main OpenSIPS processes that are working with SIP.

The module must be loaded (i.e. configured to be loaded in the routing script) so that the MI functions it exports (through its module interface "exports") are populated and can be called from the external application; otherwise you will get an error like "500 command 'name' not available". To be able to send MI commands from an external application, the transport module must also be configured in the routing script to be loaded. For example, if you want to reach the MI interface through a FIFO file stream, the mi_fifo module must be properly configured and loaded (e.g. loadmodule "mi_fifo.so" plus its fifo_name parameter). The same holds for the rest of the transport protocols.

So two modules are needed to be able to call a specific MI function:

  • The module which exports the MI function.
  • The transport protocol which will transport the command to OpenSIPS.

The extra processes required for MI transport listen on different ports (or files/sockets) than the SIP ports. OpenSIPS can use multiple MI transport protocols at the same time (configure them all to be loaded in the routing script).

MI Command Syntax

If you want to write an external management application, you have to implement the transport you want to use. Your application behaves like a client which sends MI commands in a specific format to OpenSIPS; the exact format depends on the transport module used (FIFO, datagram, XML-RPC, ...).
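As an illustration, here is a minimal, hedged sketch of an external client using the FIFO transport. It assumes mi_fifo listens on /tmp/opensips_fifo, that a reply FIFO named opensips_reply has already been created (e.g. with mkfifo) in /tmp, and it uses the core "ps" command purely as an example:

/* Minimal MI-over-FIFO client sketch. The paths and the "ps" command are
   assumptions for illustration only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* FIFO request format: ":<command>:<reply_fifo>\n<param1>\n...\n\n" */
    const char *request = ":ps:opensips_reply\n\n";
    char reply[4096];
    ssize_t n;

    int fd = open("/tmp/opensips_fifo", O_WRONLY);     /* command FIFO of mi_fifo */
    if (fd < 0) { perror("open command FIFO"); return 1; }
    write(fd, request, strlen(request));
    close(fd);

    int rfd = open("/tmp/opensips_reply", O_RDONLY);   /* reply FIFO (assumed path) */
    if (rfd < 0) { perror("open reply FIFO"); return 1; }
    n = read(rfd, reply, sizeof(reply) - 1);
    if (n > 0) { reply[n] = '\0'; printf("%s", reply); }
    close(rfd);
    return 0;
}

Conceptually the other transports work the same way: an mi_datagram client sends the command over a UDP socket, an mi_xmlrpc client sends an XML-RPC request over TCP, and so on.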

MI Internal Structure

The following is the module_exports structure (Module Interface) defined in the file “sr_module.h” with the parts related to MI.

struct module_exports {
    ...                         /* many fields omitted for simplicity */
    mi_export_t*   mi_cmds;     /* array of the exported MI functions */
    ...
    proc_export_t* procs;       /* array of the additional processes required by the module */
    ...
};

procs

Sometimes a module needs extra processes, as the transport modules do. So it exports:

  • The number of processes to be forked (the "no" field).
  • The helper functions (pre_fork_function and post_fork_function) which help the attendant process create the extra processes.
  • The function which will be executed by the extra processes. Whatever these extra processes do will not interfere with the rest of the processes that handle SIP.

The structure is defined in the file “sr_module.h” in OpenSIPS source directory:

typedef struct proc_export_ proc_export_t;

struct proc_export_ {
    char *name;
    mod_proc_wrapper pre_fork_function;
    mod_proc_wrapper post_fork_function;
    mod_proc function;
    unsigned int no;
    unsigned int flags;
};

typedef void (*mod_proc)(int no);

typedef int (*mod_proc_wrapper)();

The flags can be 0 or PROC_FLAG_INITCHILD. If PROC_FLAG_INITCHILD is provided, the function “child_init” from all modules will be run by the new extra processes.

Example of a NULL-terminated procs array:

static proc_export_t mi_procs[] = {
    {"MI Datagram", pre_datagram_process, post_datagram_process,
     datagram_process, MI_CHILD_NO, PROC_FLAG_INITCHILD},
    {0, 0, 0, 0, 0, 0}
};

static param_export_t mi_params[] = {
    {"children_count", INT_PARAM, &mi_procs[0].no},
    ...
};
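For orientation, here is a hedged sketch of the three functions such an entry points to (the my_* names are hypothetical; the real pre_datagram_process, post_datagram_process and datagram_process in mi_datagram do the actual work):

static int my_pre_fork(void)        /* mod_proc_wrapper: runs before the fork */
{
    /* e.g. create the socket/FIFO the extra processes will share */
    return 0;
}

static int my_post_fork(void)       /* mod_proc_wrapper: runs after the fork */
{
    /* e.g. release resources the attendant process no longer needs */
    return 0;
}

static void my_process(int rank)    /* mod_proc: the body of each extra process */
{
    for (;;) {
        /* receive an MI request, run the MI command, send back the reply */
    }
}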

mi_cmds

mi_cmds is an array of exported MI functions (type: mi_export_t*). The definition of the type mi_export_t is included in the file "mi/mi.h" as follows:

typedef struct mi_export_ {
    char *name;
    char *help;
    mi_cmd_f *cmd;
    unsigned int flags;
    void *param;
    mi_child_init_f *init_f;
} mi_export_t;

name is the actual name of the MI function which will be called, help is a description of what the function does, and cmd is a pointer to the actual MI function:

typedef struct mi_root* (mi_cmd_f)(struct mi_root*, void *param);

The return value is an mi_root* (a pointer to the root of the MI reply tree). The first parameter is the MI request tree (type: mi_root*) and the second parameter is the command's parameter (the void *param from the export structure).

init_f is the per-child init function for the exported MI function. In some cases extra setup has to be done by the MI workers (the processes that handle the MI requests, such as the MI_FIFO process, the MI_XMLRPC process, etc.). This function is called at startup, after OpenSIPS has forked (it is called once by each such process individually).
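A hedged sketch of such a per-worker init function (the body is hypothetical; the point is only the shape of the callback):

static int my_mi_child_init(void)
{
    /* e.g. open a database connection that only the MI worker processes need */
    return 0;   /* a non-zero/negative return typically signals a startup failure */
}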

Example of a NULL-terminated mi_cmds array:

static mi_export_t mi_cmds[] = {
    {"mi_get_function", "This function is doing bla bla",
     mi_get_function, MI_NO_INPUT_FLAG, 0, 0},
    {0, 0, 0, 0, 0, 0}
};

The string "mi_get_function" is the name by which the MI command is looked up and verified when a request is received (lookup_mi_cmd(MethodName, ...)). The actual function looks like this:

struct mi_root* mi_get_function(struct mi_root *root, void *param)
{
    ...
}

MI Function Reply Tree

The return value of an MI function is a pointer to a tree (the reply tree). The root of this tree is a pointer to struct mi_root. This structure is defined in the file "mi/tree.h":

struct mi_root {
    unsigned int       code;
    str                reason;
    struct mi_handler *async_hdl;
    struct mi_node     node;
};

The code is the root code of the reply (200 for success, 500 for error, ...) and node is the starting node (type: struct mi_node, defined in the same file "mi/tree.h"):

struct mi_node {
    str value;
    str name;
    unsigned int flags;
    struct mi_node *kids;
    struct mi_node *next;
    struct mi_node *last;
    struct mi_attr *attributes;
};

To initialize the MI reply tree in the previous MI function (mi_get_function), the function "init_mi_tree" can be called. To add a child node, call "add_mi_node_child"; to add an attribute, call "add_mi_attr"; and so on. Here you can find an example where a reply tree is built when an MI function is called.

struct mi_root* mi_get_function(struct mi_root *root, void *param)
{
    ...
    struct mi_root *rpl_tree = init_mi_tree(200, MI_SSTR(MI_OK));   /* 200 OK reply */
    ...
}
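A slightly fuller, hedged sketch that also uses add_mi_node_child and add_mi_attr (the node name "status", its value and the "since" attribute are made up for illustration; MI_DUP_NAME/MI_DUP_VALUE ask the core to copy the strings into the tree):

struct mi_root* mi_get_function(struct mi_root *root, void *param)
{
    struct mi_root *rpl_tree;
    struct mi_node *node;
    struct mi_attr *attr;

    rpl_tree = init_mi_tree(200, MI_SSTR(MI_OK));        /* 200 OK root */
    if (rpl_tree == NULL)
        return NULL;

    /* child node "status" with value "active" (illustrative names) */
    node = add_mi_node_child(&rpl_tree->node, MI_DUP_NAME | MI_DUP_VALUE,
                             MI_SSTR("status"), MI_SSTR("active"));
    if (node == NULL)
        goto error;

    /* attribute "since" attached to that node */
    attr = add_mi_attr(node, MI_DUP_NAME | MI_DUP_VALUE,
                       MI_SSTR("since"), MI_SSTR("2014-01-01"));
    if (attr == NULL)
        goto error;

    return rpl_tree;

error:
    free_mi_tree(rpl_tree);
    return NULL;
}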

A transport process such as datagram_process, xmlrpc_process, etc. behaves as a server which accepts connections (UDP in the case of the datagram transport, TCP in the case of xmlrpc, and so on). To serve an MI request, the transport process calls functions from the OpenSIPS core (management interface: "mi/mi.h") in addition to some module-defined functions.

When the message is received, the transport process checks if the requested MI function is available (looks up the MI command):

struct mi_cmd *fu = lookup_mi_cmd((char*)methodName, strlen(methodName));

The function "lookup_mi_cmd" is defined in "mi/mi.h". If "fu == 0", the MI command is not available. Otherwise the transport process parses the parameters of the requested MI function into an MI tree. For example in xmlrpc, the function "xr_parse_tree" is defined in the file "modules/mi_xmlrpc/xr_parser.h" and called in "modules/mi_xmlrpc/xr_server.c". It returns a pointer to the request tree (mi_root *t).

After this, the actual MI function is called through the function "run_mi_cmd". As explained above, the return value of the MI function (the reply tree) has the type mi_root*. This is how it is called in the xmlrpc transport module:

struct mi_root *mi_rpl = run_mi_cmd(fu, t, (mi_flush_f*)xr_flush_response, env);

The function "run_mi_cmd" is defined in the file "mi/mi.h" and in its body the actual MI function is called as follows:

static inline struct mi_root* run_mi_cmd(struct mi_cmd *cmd, struct mi_root *t,
        mi_flush_f *f, void *param)
{
    struct mi_root *ret;
    ...
    ret = cmd->f(t, cmd->param);
    ...
}

struct mi_cmd {
    int id;
    str module;
    str name;
    str help;
    mi_child_init_f *init_f;
    mi_cmd_f *f;
    unsigned int flags;
    void *param;
};

Then the reply tree mi_rpl (type: mi_root*) is formatted according to the transport protocol. For example, if the protocol is xmlrpc, the reply tree is formatted in XML, either as a string that contains the name and attributes of each node, or as an array where each element contains the information of one node (its name and its attributes). Each attribute has a name and a value.

Finally, the response is written to the FIFO file, the UDP socket (datagram transport), or the TCP socket (xmlrpc transport), and is thus sent back to the external application.

Test MI Function of a New Module

You can test the MI command by using opensipsctl as follows:

# scripts/opensipsctl fifo mi_get_function


 Note

  • The OpenSIPS core MI exported functions (mi_core_cmds) are defined in the file "mi_core.c".
  • A module's MI exported functions (mi_cmds) are defined in the file modules/Module_Name/Module_Name.c.
  • OpenSIPS has many transport modules; see "modules/mi_Transport", where Transport can be datagram, xmlrpc, json, etc.




Linux Tuning For SIP Routers – Part 1 (Interrupts and IRQ Tuning)


Introduction

Caring about the performance of any SIP router (a real-time application) means caring about the performance of the operating system and of the routing application which runs on it. Hardware performance is also important. I will mention some tips on how you can tune Linux to serve a real-time application like SIP routing. This article will have several parts; in this part I talk about interrupts and IRQ tuning for real-time processing.


Interrupts and IRQ Tuning

Usually network devices have 1 to 5 interrupt lines to communicate with the CPU. On a multi-CPU system, a round-robin scheduling algorithm (fair interrupt distribution) is used to choose the CPU core that will handle an interrupt. The file "/proc/interrupts" records the number of interrupts per CPU core per I/O device. Execute "cat /proc/interrupts" to get the list of interrupts on your system. To get the line associated with the Ethernet interface, execute "grep p4p1 /proc/interrupts", where p4p1 is the name of my Ethernet interface. The output will look like this:

30:      36650      16711      62175      18490   PCI-MSI-edge      p4p1

Here 30 is the IRQ number; 36650 of those interrupts were handled by CPU core 0, 16711 by CPU core 1, 62175 by CPU core 2, and 18490 by CPU core 3. PCI-MSI-edge is the interrupt type and p4p1 is the device that raised the interrupts (this could be a comma-delimited list of devices).

It is recommended that all interrupts generated by a specific device be handled by the same CPU core(s). Fair IRQ distribution between all CPU cores is not recommended, because when an interrupt goes to another, fresh CPU core, that core has to load the interrupt handler from main memory into its cache (time overhead).

IRQs have a property called interrupt affinity, or smp_affinity. This property defines the CPU cores (the CPU set) that will handle the interrupts of a specific device. It can be used to improve the performance of a SIP router (as of any real-time application) by assigning the same CPU set to the process/thread and to the interrupt. This minimizes delays by allowing cache sharing between the process/thread and the interrupt handler.

Affinity Value for a Specific IRQ

The affinity value (CPU cores) for a specific IRQ is stored in the file "/proc/irq/IRQ_NUMBER/smp_affinity". The value in this file is a hexadecimal bit mask (the lowest-order bit corresponds to the first logical CPU). For example, to set the affinity value of IRQ 30 (the example above), do this as root:

  • Display the current affinity value: "cat /proc/irq/30/smp_affinity".
  • Change the affinity of IRQ 30 to the first two cores (binary 0011). The corresponding hex value is 3, so we write 3 into the file "/proc/irq/30/smp_affinity": "echo 3 > /proc/irq/30/smp_affinity"
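The same change can also be done programmatically; a minimal sketch, assuming IRQ 30 and mask 3 as in the example above (must run as root):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/irq/30/smp_affinity", "w");   /* IRQ number from the example */
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "%x\n", 0x3);   /* 0x3 = binary 0011 -> CPU cores 0 and 1 */
    fclose(f);
    return 0;
}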

Setting the smp_affinity can be done at the hardware level (no intervention from the kernel) on systems which support interrupt steering.

“irqbalance” Daemon Configuration

Most current distributions come with the "irqbalance" daemon, which distributes interrupts fairly between CPU cores. So disable irqbalance for those CPU cores that you want to bind to specific IRQs. This can be done by editing the file "/etc/sysconfig/irqbalance" and changing the parameter IRQBALANCE_BANNED_CPUS, a 64-bit mask which indicates which CPUs should be skipped when balancing IRQs (for example, a mask of 3 would exclude the first two cores).

Affinity Value for a Specific Process/Thread

You can use the command "taskset" to set/get the affinity value of a running process, or to run a new command with a given affinity. For a new process (command) this is done as follows: "taskset 3 /…./my_process". This binds my_process to CPU core 0 and CPU core 1; scheduling will happen between these two cores.

To bind a running process to specific cores, do this: "taskset -p 3 PID", where 3 is the CPU-set hex mask and PID is the process ID.

To check the allowed CPU cores of a process, execute "cat /proc/PID/status" and look at the Cpus_allowed line.

I will apply this to OpenSIPS (SIP router):

OpenSIPS 1.X is a multi-process SIP router, so when you execute "ps aux | grep opensips" you will get many lines. You can change the affinity value of all or some of these processes as follows:

  • # taskset -p 3 19904, where 19904 is the PID of one of these processes.
  • Check the status of process 19904: # cat /proc/19904/status

Cpus_allowed:   3

OpenSIPS 2.X is a multi-threaded SIP router (under development).

The affinity can also be set by the process itself in its code. For example:

#define _GNU_SOURCE                 /* needed for sched_setaffinity() */
#include <sched.h>

cpu_set_t mask;
unsigned int len = sizeof(mask);

CPU_ZERO(&mask);                    /* clear the set: no CPUs */
CPU_SET(cpu_nr, &mask);             /* add CPU core number cpu_nr to the set */
sched_setaffinity(0, len, &mask);   /* 0 = the calling process/thread */

 Description

  • cpu_set_t is a data structure (set of bits) which represents the set of CPU cores.
  • CPU_ZERO(…) clears the set (no CPUs).
  • CPU_SET(…) adds CPU core number cpu_nr to the CPU set.
  • sched_setaffinity(…) sets the affinity mask. The first parameter is the PID (here it is zero, which means the calling thread/process).
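To verify the result from within the process, sched_getaffinity() can be used; a small hedged sketch:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0) {   /* 0 = calling process */
        perror("sched_getaffinity");
        return 1;
    }
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf("allowed on CPU core %d\n", cpu);
    return 0;
}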

Note: To scale performance further, choose the best CPU cores for interrupt affinity and process affinity.


Last Word

If there is a lot of reading/writing from/to the hard disk (e.g. a stateful SIP router), attaching the same CPU set to the processes/threads and to the disk interrupts will also be useful.

  • Get the driver name of your hard disk:
    • Execute "udevadm info -a -n /dev/sda" and parse the output. Search for something like DRIVERS=="Something". I got DRIVERS=="ahci".
  • Execute "# grep ahci /proc/interrupts" and get the IRQ number.
  • Set the affinity value for the processes/threads and for the interrupts as discussed above.

Next

The next part will be about tuning disk access to optimize the performance of SIP routers.