The Virtual Ethernet Bridge: docker0


Docker (see the Docker Installation section below) creates the virtual interface "docker0", which is a virtual Ethernet bridge. This bridge forwards packets between the containers themselves and between the host and the containers.

You can see it when you execute "ip addr list docker0" or "ifconfig docker0".

Output of "ifconfig docker0":

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
ether 02:42:63:8f:85:df  txqueuelen 0  (Ethernet)
RX packets 0  bytes 0 (0.0 B)
RX errors 0  dropped 0  overruns 0  frame 0
TX packets 0  bytes 0 (0.0 B)
TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The IP address of this interface is a private IP address that is not already in use on the host. Here it is 172.17.0.1 with a 16-bit netmask (255.255.0.0) and the MAC address 02:42:63:8f:85:df.

To display the virtual networks created by Docker, execute:

# docker network ls

Output:

NETWORK ID          NAME                DRIVER
dfc34cd75029        bridge              bridge 
3453b5653383        none                null
e9a375f4a8a2        host                host

The figure below shows how the virtual interfaces are connected to the bridge docker0. All veth* interfaces are attached to this bridge, and each veth* interface has a peer (virtual interface) inside a specific container.

[Figure: containers' eth0 interfaces paired with veth* interfaces attached to the docker0 bridge]

Some Notes:

  • The eth0 interface in the container and the veth* interface on the host are a pair, connected like a pipe: packets that enter one end come out of the other, and vice versa (you can verify the pairing as shown below).
  • The container's eth0 has a private IP address from the same range as the bridge docker0's IP address, and docker0 acts as the gateway. The MAC address is generated from the IP address.
  • The MAC address of the veth* interface is set by Docker.
  • The bridge docker0 connects the veth* interfaces while exposing a single IP address.
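A quick way to verify which host-side veth* interface belongs to which container (the container name "c1" is just an example, and the numbers and interface names below are illustrative): the file /sys/class/net/eth0/iflink inside the container holds the interface index of its host-side peer.

# docker exec c1 cat /sys/class/net/eth0/iflink
7
# ip -o link | grep ^7:
7: veth3f1b4a7@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...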

To inspect the bridge network, execute:

# docker network inspect bridge

Output:

[
    {
        "Name": "bridge",
        "Id": "dfc34cd75029786a53356e554b34a14c290ef4969e38ecb1e4ae9c34598e93d7",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        }
    }
]

Because no container has been created yet, the "Containers" object is empty ("Containers": {}). Bridge networks can also be created manually.

To run a container in a specific network, execute:

# docker run --net=<NETWORK> ...

Note: Instead of using a bridge, we can use iptables NAT configuration.
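As a concrete example of the --net option, you can create a user-defined bridge network and attach a container to it (the network name "my_net" and the nginx image are just examples):

# docker network create --driver bridge my_net
# docker run -d --name web --net=my_net nginx
# docker network inspect my_net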



Docker Installation


Follow these steps to install Docker. I have Fedora 23:

  • Add the Docker repo. I will use the baseurl for Fedora 22 because Fedora 23 is new and there is no Docker repo for it yet:

cat >/etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/fedora/22
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF

  • Use dnf to install Docker (yum is deprecated): # sudo dnf -y -q install docker-engine
  • Start the daemon: # sudo systemctl start docker
  • Enable the daemon so it will start on boot: # sudo systemctl enable docker
  • By default, Docker listens on a UNIX socket (more secure): "unix:///var/run/docker.sock". This allows only local connections. To list the permissions of the Docker socket file:

ls -l  /var/run/docker.sock
srw-rw—-. 1 root docker 0 Nov 19 09:36 /var/run/docker.sock

As we see, this file is readable and writable by the user (root) and the group (docker). So to send commands to the Docker daemon from the Docker client, you either need to be the root user (directly or via sudo) or add your user to the group "docker" (NOT secure). To add a user to the group "docker", execute: # sudo usermod -aG docker your_username

To allow remote connections from Docker clients (NOT secure), we can use the option "-H". Open the unit file of the Docker service "/usr/lib/systemd/system/docker.service" and extend the line "ExecStart=/usr/bin/docker daemon -H fd://" with an additional "-H host:port". If you update the unit file, you need to reload systemd and restart Docker.
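For example, a sketch of the updated line (TCP port 2375 is only a common convention here, and exposing the daemon without TLS is insecure):

ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375

Then reload systemd and restart the daemon:

# sudo systemctl daemon-reload
# sudo systemctl restart docker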

  • Log out and in or even reboot the system.
  • Verify the installation: # docker run hello-world

 Output:

Hello from Docker.
This message shows that your installation appears to be working correctly.

…..


First Steps In Securing Your Linux Server


Some notes to keep in mind when you manage your Linux server's security:

  • Keep your system clean: Install ONLY the packages you need.
  • Check the enabled services and disable the services that you don't need. Disabled services will NOT run automatically when the system boots. To display the enabled services, execute:

# systemctl list-unit-files --type=service | grep enabled

The unit files of the enabled services are in "/usr/lib/systemd/system/". To stop a running service, execute: # systemctl stop $serviceName. To disable a service: # systemctl disable $serviceName

  • Change the port of the sshd:
    • Edit the configuration file of the SSH daemon: # vim /etc/ssh/sshd_config
    • For example, change the port to 1866 by setting "Port 1866" in the configuration file.
    • If SELinux is enabled, you have to tell SELinux about this change:
      # semanage port -a -t ssh_port_t -p tcp 1866
    • Restart sshd: # systemctl restart sshd
    • After doing that, you need to use the option "-p 1866" when you log in.
  • Disable sshing as root:
    • Disable root login:
      • Edit the configuration file of the SSH daemon: # vim /etc/ssh/sshd_config
      • Uncomment the PermitRootLogin line and change it to "PermitRootLogin no"
      • Restart the SSH daemon: # systemctl restart sshd
    • Create an admin user so you can ssh into the server as an administrator. As root, create a new user and set its password using useradd and passwd. Then, to make the new user (let's say it is binan) an administrator, execute visudo and do one of the following:
      • Search for the line "%wheel  ALL=(ALL)       ALL" and uncomment it (delete the # at the beginning of the line). This allows people in the group wheel to run all commands. Add the user binan to the group wheel so that binan can execute all commands using sudo: # usermod -aG wheel binan.
      • OR, instead of adding the user binan to the group wheel, add the line "binan ALL=(ALL) ALL".

Everything sudo users do is logged, so "who did what" can be determined (that is why we disable logging in as root).

  • Generate an SSH key pair on your machine: # ssh-keygen -t rsa. The generated private and public keys will be stored in the .ssh directory in your home directory (e.g. private key: ~/.ssh/id_rsa and public key: ~/.ssh/id_rsa.pub).
  • You must have the public key installed on the server (appended to ~/.ssh/authorized_keys). To copy the public key to the server, you can temporarily enable sshing with passwords ("PasswordAuthentication yes" in /etc/ssh/sshd_config, then restart the sshd daemon). Now copy the public key to the server as follows: # cat ~/.ssh/id_rsa.pub | ssh -p 1866 binan@Hostname/IP-Address "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
  • Disable sshing with passwords: "PasswordAuthentication no" in /etc/ssh/sshd_config. Restart the daemon. Now the login will use public key authentication.
  • Check that you can log in from another terminal, assuming you have the key pair (the public and private keys) on your local machine:

# ssh -p 1866 binan@Hostname/IP-Address
Last login: Wed Nov  4 09:40:00 2015
-bash-4.2$
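For reference, after the steps above the relevant lines in /etc/ssh/sshd_config are the following, and an optional entry in ~/.ssh/config on your local machine (the host alias "myserver" is just an example) saves typing the port and user every time:

/etc/ssh/sshd_config:

Port 1866
PermitRootLogin no
PasswordAuthentication no

~/.ssh/config:

Host myserver
    HostName Hostname/IP-Address
    Port 1866
    User binan
    IdentityFile ~/.ssh/id_rsa

With that entry, logging in becomes: # ssh myserver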


Running RTPEngine Under Systemd Control


To compile and install RTPEngine go here.

To run the RTPEngine under systemd control, follow these steps:

# git clone https://github.com/Binan/rtpengine-systemd.git

# cd rtpengine-systemd

Edit the configuration file "rtpengine-conf" to reflect your configuration. Then install the files on your system:

# cp rtpengine-conf /etc/default/rtpengine-conf

# cp rtpengine.service /usr/lib/systemd/system/rtpengine.service

# cp rtpengine-start /usr/bin/rtpengine/rtpengine-start
# cp rtpengine-stop-post /usr/bin/rtpengine/rtpengine-stop-post

# chmod +x /usr/bin/rtpengine/rtpengine-start

# chmod +x /usr/bin/rtpengine/rtpengine-stop-post

In the systemd unit file, the option "ExecStopPost" is used to clean up the system after the RTPEngine daemon is stopped. This involves deleting the forwarding table, removing the related iptables rules, and unloading the kernel module (xt_RTPENGINE).
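As a rough sketch, such a unit file can look like the following (the actual rtpengine.service in the repository may differ; Type=forking is an assumption here):

[Unit]
Description=RTPEngine media proxy
After=network.target

[Service]
Type=forking
EnvironmentFile=/etc/default/rtpengine-conf
ExecStart=/usr/bin/rtpengine/rtpengine-start
ExecStopPost=/usr/bin/rtpengine/rtpengine-stop-post

[Install]
WantedBy=multi-user.target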

Now you can enable/start/stop/check the status of the rtpengine service as follows:

# systemctl enable rtpengine.service

# systemctl start rtpengine.service

# systemctl status rtpengine.service

# systemctl stop rtpengine.service

If you enable the service, rtpengine will be started automatically by systemd after boot.

This work is a translation of the Sipwise ngcp-rtpengine-daemon.init script into systemd terms.

Building a Custom Linux kernel for your VOIP System – Part1


Introduction

Is your VOIP system based on userspace applications? What about building a kernel-based VOIP system? This is not new, but we are going to go through it here. I am assuming Linux as the operating system. The Linux kernel consists of a core plus loadable modules. You might not touch the core, but you might need to build a module in the kernel that supports your VOIP system for performance purposes. The new module is supposed to do VOIP jobs (filtering, QoS, protection, media relay). There are already some useful kernel modules that can support your system, such as NAT, IPv6, and so on. Use them!

In this article I will just introduce what you need to know to follow this series of articles: how to compile and install a new, fresh, and clean Linux kernel, and what you need to know to compile and install a new kernel module.

We will not touch the currently running kernel; instead we will build a new one, install it, boot from it, write a new module, add it to the kernel, and test it. You need to be very careful when you work in the kernel.

Compilation and Installation Steps

This is the traditional way, common to all Linux distributions, to compile and install a Linux kernel:

Download the source:

I will work on the kernel 3.19.5 as an example. For other versions, go to https://kernel.org/pub/linux/kernel and see the list there.

# wget -c https://kernel.org/pub/linux/kernel/v3.x/linux-3.19.5.tar.gz

  Decompress the gz file and go to the directory “linux-3.19.5”:

# tar -xzvf linux-3.19.5.tar.gz

# cd linux-3.19.5

Configure the kernel:

Fresh configuration: the traditional way is # make menuconfig, or the modern way: # make nconfig

Note: To detect memory leaks in kernel modules, you can use the tool 'kmemleak'. To use it, you need to compile the kernel with the 'CONFIG_DEBUG_KMEMLEAK' option enabled. Enable it as follows: make menuconfig -> 'Kernel Hacking' -> 'Memory Debugging' -> 'Kernel Memory Leak Detector'. Enable this option and save. You can check the file "/sys/kernel/debug/kmemleak" while testing.
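For example, once you have booted into a kernel built with this option, the kmemleak debugfs interface can be used like this (debugfs is usually already mounted under /sys/kernel/debug):

# mount -t debugfs nodev /sys/kernel/debug    (only if it is not already mounted)
# echo scan > /sys/kernel/debug/kmemleak      (trigger an immediate scan)
# cat /sys/kernel/debug/kmemleak              (list the suspected memory leaks)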

Alternatively, you can start from the config file of the currently running kernel. In Fedora this file exists as /boot/config-$(uname -r), for example /boot/config-3.19.5-100.fc20.x86_64. Copy it to .config in the kernel source tree, then update it with 'make menuconfig' and save.

Compilation:

Clean the source tree if needed: # make mrproper (note that this deletes .config, so run it before the configuration step, not after)

Compile the kernel: # make

Install the compiled modules:

# make modules_install

The modules will be installed in /lib/modules/3.19.5/

Install the kernel: Now we want to copy and rename the image bzImage (arch/x86_64/boot/bzImage) to /boot, create the initial RAM disk, create the map of kernel symbols (System.map), and update the GRUB config file. This is done by: # make install

After compiling and installing the kernel, we can boot from it (it appears in the GRUB startup list).

Header Files needed to compile a new module:

If you want to build your own kernel module, you need to have the header files.  Check the folder /lib/modules/3.19.5/build/ where 3.19.5 is the new kernel version.

A module consists of header and source files plus a Makefile; when it is built, we get a .ko file. In the Makefile you need to specify the kernel source tree and the obj-m target, as sketched below.
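A minimal sketch of such a Makefile (the module name "myvoip" is hypothetical; it assumes the module source is myvoip.c in the current directory and the kernel headers are under /lib/modules/3.19.5/build; the indented command lines must start with a TAB):

obj-m += myvoip.o

KDIR := /lib/modules/3.19.5/build

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean

Running 'make' in the module directory then produces myvoip.ko.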

After building the module, it is ready to be installed. You need to boot from the new kernel and run 'modprobe $ModuleName' (or load the .ko file directly with insmod).

Again this article is just an introduction to this series. More information is coming next.


Copying an ISO Image to USB Drive Using GNOME Disks GUI


  • Insert the USB drive.
  • Open the GNOME Disks GUI.

  • Click on your USB disk in the left-side menu. You will see what the disk looks like on the right side.
  • Under the blue partition box, you will find a square icon; click on it to unmount the partition. Note: here you can also format/delete/add partitions.
  • Click on the top-right corner icon, select "Restore Disk Image", and then browse to your ISO image. Click the button "Start Restoring". The image will be copied to the USB disk.
  • If the ISO image is bootable, you can now boot from the USB disk after changing the boot settings in the BIOS.

 

Installation of MySQL Database Server


These simple steps install and start MySQL database server on Fedora 20:

  • Install the package “community-mysql-server”: # yum -y install community-mysql-server
  • Start the daemon service: # systemctl start mysqld.service
  • Enable the daemon service: # systemctl enable mysqld.service
  • Connect to MySQL: # mysql -u root and press Enter.
  • Set the root password: mysql> set password for root@localhost=password('password');

You can replace the localhost with your domain or IP address.

  • Delete anonymous users: mysql> delete from mysql.user where user='';
  • mysql> exit

If you forgot the root password, do these:

  • # mysqld_safe --skip-grant-tables &
  • # mysql -u root
    mysql> use mysql;
    mysql> update user set password=PASSWORD('New-Password') where User='root';
    mysql> flush privileges;
    mysql> quit
  • # systemctl stop mysqld
  • # systemctl start mysqld
  • # mysql -u root -p

Building Linux Images For Use With Openstack Heat

.

This article is published on opensource.com with my permission.

Introduction


An OpenStack image is a cloud-init enabled image. Instances created from the image run cloud-init early in the boot process to do some configuration. The directory "/var/lib/cloud" contains the cloud-init specific subdirectories.

# ls /var/lib/cloud/
data handlers instance instances scripts seed sem


In these subdirectories you can find the current instance id, the previous instance id (it could be different), the current and previous datasources, start and end times, errors if any, custom scripts (scripts/) downloaded/created by the corresponding handlers (handlers/ subdirectory), user data downloaded from the metadata service (instance/scripts/user-data.txt as raw data and user-data.txt.i as post-processed data), etc.

But what if we want some configuration/actions at different life-cycle stages and not only on early boot (cloud-init)? For example, if you are using the OpenStack autoscaling service, your servers will be added and deleted automatically without human intervention. When one of these servers is about to be killed, it needs some time to clean up and some configuration to be applied at this stage (e.g. draining its active sessions and its active connections to other services, if any). Here we come to what is called Heat Software Deployment: a communication channel between the OpenStack Heat unit and the instance, used to configure the instance at different life-cycle stages and not only on boot. This feature has been available since the OpenStack Icehouse release. For this we need some agents to be installed in the image and some definitions in the Heat template (mainly based on the "OS::Heat::SoftwareDeployment" and "OS::Heat::SoftwareConfig" resource types).

The agents can be installed on boot (a script that installs the agents on boot, or definitions in the Heat template to install them on boot). Here I will talk about how to install the agents in the image itself. The agents are: heat-config, os-collect-config, os-refresh-config, os-apply-config, heat-config-cfn-init, heat-config-puppet, heat-config-salt and heat-config-script. A sketch of how the two resource types mentioned above fit into a template follows.
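A minimal sketch of a template fragment using these resource types (the resource names, the server reference, and the script body are hypothetical illustrations, not a complete template):

resources:
  drain_config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # Illustrative only: clean up before the server is removed.
        echo "draining" >> /var/log/drain.log

  drain_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: drain_config }
      server: { get_resource: my_server }
      actions: [ DELETE ]

The DELETE action makes Heat run the configuration when the server is about to be removed, which matches the scale-down cleanup case described above.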

Preparations For Image Building

Here I am assuming that you have your own OpenStack image and you want to add the Heat agents to it. I will use Diskimage-Builder to build the image. Diskimage-Builder (DIB) is driven by HP, RedHat, and Cisco and licensed under the Apache License, Version 2.0. My operating system is Fedora 21 and I will build a CentOS 7 custom image with Heat support.

If your image is already on the cloud, you can download it using glance client after you source your credentials (keystonerc).

To install the glance client on Fedora:

# yum install python-devel libffi-devel

# pip install python-glanceclient

To download the image, execute:

# glance image-download --file outputImageFilename imageID

Ex: # glance image-download --file my_image.qcow2 155ae0f9-56ee-4dad-a3a3-cc15a835c035

By doing this, you will get the file my_image.qcow2 on the local disk, which in my case is a CentOS 7 image.

Diskimage-Builder needs enough RAM for caching. If your image is big and your RAM is not enough, you can use /tmp for caching on disk (see the section "Using /tmp For Caching On Disk" further below).

Now we need to clone the repositories: diskimage-builder, image elements, and heat-templates:

# git clone https://git.openstack.org/openstack/diskimage-builder.git

# git clone https://git.openstack.org/openstack/tripleo-image-elements.git

# git clone https://git.openstack.org/openstack/heat-templates.git

Some environment variables need to be exported:

DIB_LOCAL_IMAGE variable: This variable contains the path to the local image (in my case my_image.qcow2). If this variable is not set, diskimage-builder will go to the internet and fetch a generic image (e.g. for centos7 it will go to http://cloud.centos.org/centos/7/images). If you don't have any image, just don't export this variable to keep the default. An example export is shown below.
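For example (assuming the downloaded my_image.qcow2 is in the current directory):

# export DIB_LOCAL_IMAGE=$(pwd)/my_image.qcow2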

DIB_CLOUD_IMAGES variable (HTTP address): If your image is publicly available on the internet, you can export this variable instead.

ARCH variable: This variable contains the architecture. In my case "amd64": # export ARCH="amd64". This will be converted to x86_64.

My image is CentOS 7, so the source file which uses this variable is ".../diskimage-builder/elements/centos7/root.d/10-centos7-cloud-image". I recommend checking the corresponding file for your choice (go to the root.d directory under your chosen OS directory (centos7 in my case) and check which values of ARCH are supported). Check this file also for the DIB_CLOUD_IMAGES variable and others.

Then we need to install some packages:

# yum install qemu or # yum install qemu-img

# yum install python-pip git

# pip install git+git://git.openstack.org/openstack/dib-utils.git

Image Building

Now I want to add the Heat agents to my image using diskimage-builder, so that the instances created from my image can interact with Heat at different life-cycle stages. Put the following in a bash script and execute it, or run it line by line:

Note: You need to be in the directory where you have cloned the repositories or use the absolute paths.

export ELEMENTS_PATH=tripleo-image-elements/elements:heat-templates/hot/software-config/elements

# Customize this line. Possible values: fedora, centos7, debian, opensuse, rhel, rhel7, or ubuntu

export BASE_ELEMENTS="centos7 selinux-permissive"

export AGENT_ELEMENTS="os-collect-config os-refresh-config os-apply-config"

export DEPLOYMENT_BASE_ELEMENTS="heat-config heat-config-script"

# For any other chosen configuration tool(s), e.g. heat-config-cfn-init, heat-config-puppet, or heat-config-salt. NOT IN MY CASE.

export DEPLOYMENT_TOOL=""

export IMAGE_NAME=software-deployment-image-gold

diskimage-builder/bin/disk-image-create vm $BASE_ELEMENTS $AGENT_ELEMENTS $DEPLOYMENT_BASE_ELEMENTS $DEPLOYMENT_TOOL -o $IMAGE_NAME.qcow2

Note the variable IMAGE_NAME determines the image name.

Image Uploading

After building the image, you will get the file software-deployment-image-gold.qcow2. If your image is big, you can compress it before uploading it to OpenStack (the Glance unit):

# tar -zcvf gw-software-deployment-gold.qcow2.tar.gz gw-software-deployment-gold.qcow2

You can also pass the option "-t tar" to the command diskimage-builder/bin/disk-image-create to have the image compressed by diskimage-builder. Also use the option "-a i386|amd64|armhf" for the ARCH mentioned above. See the full documentation here.

To upload the compressed image, execute:

# glance image-create --name "gw-software-deployment-gold" --is-public false --disk-format qcow2 --file gw-software-deployment-gold.qcow2.tar.gz

As I found, Firefox cannot be used to upload large files like images, whereas Chrome can. So you can upload your image from the OpenStack control panel (Horizon) using the Chrome browser.

You may also need to tune the performance parameters of the operating system (update system files) and of the application itself, so that the servers created from the image are already tuned. You can do the tuning on the live server before taking a snapshot, or you can update the system files of the local image using guestfish before uploading.

Now you can create an instance from the newly uploaded image and start working with software deployment at different life-cycle stages in your Heat template.


Using “/tmp” For Caching On Disk

If your RAM is not enough for the image-building process, you can enlarge /tmp by backing it with a file on disk. For example, 15 GB:

# dd if=/dev/zero of=/usr/tmp-dir bs=1024M count=15

# mke2fs -j /usr/tmp-dir; chmod 777 /usr/tmp-dir

# mount -o loop /usr/tmp-dir /tmp

Notes

  • During image building, if you run into problems related to package updates, you can temporarily disable the repositories that cause the problems by using yum-config-manager --disable repositoryName on the live server before creating the image (the image that you will download from the cloud to work on).

  • Commenting out the line "yum -y update" in the file ".../heat-templates/hot/software-config/heat-container-agent/scripts/configure_container_agent.sh" could solve the problems that arise on update (conflicts between different versions of packages).

  • You may have to repeat the process several times with different updates until the image creation completes successfully. If you are using the defaults, you are not supposed to have any problems.

  • The package dependencies file, in my case (CentOS 7), is ".../diskimage-builder/elements/centos7/element-deps": if you like, you can add more packages here so they will be installed.



Monitoring Agent For Rackspace’s Auto scaled Servers

Introduction

Ceilometer is used to collect measurements of different utilizations (memory, CPU, disk, etc.) from OpenStack components. It was originally designed for billing. It is not a complete monitoring solution because it does not allow service/application-level monitoring and it ignores the detailed metrics of the guest system.

Rackspace's cloud, which is an OpenStack-based cloud solution, has its own monitoring service which allows tenants to keep their measured data on the cloud, whether standard (e.g. CPU, memory, ...) or custom (application/service-specific metrics), and to create the notification plans they want.

In this article, I will show you how to automate the setup of the Rackspace monitoring agent on the virtual machine. So when your autoscale policy is triggered, you will have a new server with the monitoring agent installed and connected to the cloud. I have CentOS 7 on the virtual machine which I will use later to create the image. The image will be used by the autoscaling service to create new servers. You need to have an account with the Rackspace cloud provider.

Rackspace Monitoring Agent Installation on Centos 7

Install the package signing key

# curl https://monitoring.api.rackspacecloud.com/pki/agent/centos-7.asc > /tmp/signing-key.asc
# rpm --import /tmp/signing-key.asc

Add the agent repository to yum

  • Create and edit the file  “/etc/yum.repos.d/rackspace-cloud-monitoring.repo”

# vi /etc/yum.repos.d/rackspace-cloud-monitoring.repo

  • Add the configuration of the repository. In my case I have CentOS 7:

[rackspace]
name=Rackspace Monitoring
baseurl=http://stable.packages.cloudmonitoring.rackspace.com/centos-7-x86_64
enabled=1

Install the agent

# yum install rackspace-monitoring-agent

Now we have the agent installed on the current virtual machine.

Create oneshot systemd or init service for the agent setup

The setup process configures the monitoring agent for the specific server, verifies the connectivity with the cloud, and associates the agent with the monitoring entity of the server. The script that you will write performs the setup of the agent as follows:

     # rackspace-monitoring-agent --setup -U username -K apikey

Replace the username and apikey with yours. You can take the API key from your account settings when you access the web control panel.

The script also needs to start the agent if it is not already running:

      # systemctl start rackspace-monitoring-agent

As this service will be executed on boot, you need to make sure the setup runs only when the server is first created (only once). So you need to write a check that examines whether the rackspace-monitoring-agent has already been set up and started; if it has, do NOT set it up again. A rough sketch of such a oneshot unit and script follows.
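A rough sketch of what this can look like (the unit name, script path, marker file, and placeholder credentials are all assumptions; adapt them to your setup):

/usr/lib/systemd/system/monitoring-agent-setup.service:

[Unit]
Description=One-time Rackspace monitoring agent setup
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/monitoring-agent-setup.sh

[Install]
WantedBy=multi-user.target

/usr/local/bin/monitoring-agent-setup.sh:

#!/bin/bash
# Run the agent setup only once per server (the marker file is an arbitrary choice).
MARKER=/var/lib/monitoring-agent-setup-done
if [ ! -f "$MARKER" ]; then
    rackspace-monitoring-agent --setup -U username -K apikey && touch "$MARKER"
fi
# Start the agent if it is not already running.
systemctl is-active --quiet rackspace-monitoring-agent || systemctl start rackspace-monitoring-agent

Do not forget to make the script executable and to enable the unit (systemctl enable monitoring-agent-setup.service).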

Clean after preparation

If you test the setup on the current virtual machine, you need to clean it up so the new servers created from the image do not inherit the old configuration of the server used to create the image. Simply stop the service and uninstall the agent, then install the agent again without running the setup. If you want your image to be independent of the account information, you need to perform the installation and setup of the monitoring agent via a YAML template executed by the cloud. See the last link in the section "More Information" further down.

Server-Side Agent Configuration YAML File

As an example, here is a YAML configuration file that creates a CPU check with alarms and binds the check to the autoscaling policy's notification plans. Create the file "cpu.yaml" in the folder "/etc/rackspace-monitoring-agent.conf.d" with this content:

type: agent.cpu
label: CPU
period: 60
timeout: 10
alarms:
  cpu-usage-up:
    label: CPU Usage Up
    notification_plan_id: scale-up-notification-plan-id-here
    criteria: |
      if (metric['usage_average'] > 80) {
        return new AlarmStatus(CRITICAL);
      }
  cpu-usage-down:
    label: CPU Usage Down
    notification_plan_id: scale-down-notification-plan-id-here
    criteria: |
      if (metric['usage_average'] < 50) {
        return new AlarmStatus(WARNING);
      }

To get the ids of your created notifications, execute this:

# curl -s -X GET https://monitoring.api.rackspacecloud.com/v1.0/$tenantID/notifications -H "X-Auth-Token: $token" -H "Accept: application/json" | python -m json.tool

Create a new image

Now you can go to the web control panel and create a new image that will be used in auto scaling process.


Next

The next article will be about how to send custom measured data (custom metrics) to your cloud using the monitoring agent. This is called creating a custom plugin. I will show you how to create a custom check.

More Information


Root Password’s Hash Injection Into Linux Image File


Here I will show you how to set the root password permanently in the image. As an example I will use the following image: http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/i386/Fedora-Cloud-Base-20141203-21.i386.qcow2. This is a cloud-aware image in qcow2 format. You need to install the following: "guestfish" and "libguestfs-tools":

# yum install guestfish libguestfs-tools

To generate an encrypted password:  # openssl passwd -1 Your-Password

I will set the root password to "binan", but you should choose a strong password:

# openssl passwd -1 binan
$1$PNq4EoLe$EFwgE1BVdVG3uXqv05Pb5/

Now I will set the generated hash value in the file "/etc/shadow" inside the image file. This is done by running guestfish (guestfish --rw -a <image-name>):

# guestfish --rw -a /home/binan/Downloads/Fedora-Cloud-Base-20141203-21.i386.qcow2

><fs> run

><fs> list-filesystems

/dev/sda1: ext4

><fs> mount /dev/sda1 /

><fs> vi /etc/shadow

Now I will write the hash value of the password ($1$PNq4EoLe$EFwgE1BVdVG3uXqv05Pb5/) in its place:

root:$1$PNq4EoLe$EFwgE1BVdVG3uXqv05Pb5/::0:99999:7:::

If the root password in the image file is locked, replace the word "locked" with the generated hash. Now each instance created from this image will have "binan" as the root password.

Note: After mounting the file system you can do whatever you want. This is not restricted to the “/etc/shadow” file.
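Alternatively (assuming a reasonably recent libguestfs-tools, which ships virt-customize), the root password can be set in one step without an interactive guestfish session:

# virt-customize -a /home/binan/Downloads/Fedora-Cloud-Base-20141203-21.i386.qcow2 --root-password password:binan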

To set different root passwords for different instances, use “cloud-init”.