RHEL7: Configure a system as either an iSCSI target or initiator that persistently mounts an iSCSI target.


Note: This is an RHCE 7 exam objective.

Presentation

In the iSCSI world, you’ve got two types of agents:

  • an iSCSI target provides some storage (here called server),
  • an iSCSI initiator uses this available storage (here called client).

As you already guessed, we are going to use two virtual machines, respectively called server and client. If necessary, the server and client can be one and the same machine.

iSCSI Target Configuration

Most of the target configuration is done interactively through the targetcli command. This command uses a directory tree to access the different objects.

To create an iSCSI target, you need to follow several steps on the server virtual machine.

Install the following packages:

# yum install -y targetcli

Activate the target service at boot:

# systemctl enable target

Note: This is mandatory, otherwise your configuration won’t be read after a reboot!

Execute the targetcli command:

# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb34
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/>

You’ve got two options:

  • You can create a fileio backstore called shareddata of 100MB in the /opt directory (don’t hesitate to use tab completion):
    /> backstores/fileio/ create shareddata /opt/shareddata.img 100M
    Created fileio shareddata with size 104857600

    Note: If you don’t specify write_back=false at the end of the previous command, it is assumed write_back=true. The write_back option set to true enables the local file system cache. This improves performance but increases the risk of data loss. In production environments, it is recommended to use write_back=false.

  • You can create a block backstore that usually provides the best performance. You can use a block device like /dev/sdb or a logical volume previously created (# lvcreate --name lv_iscsi --size 100M vg):
    /> backstores/block/ create block1 /dev/vg/lv_iscsi
    Created block storage object block1 using /dev/vg/lv_iscsi.

Then, create a target with an IQN (iSCSI Qualified Name) of iqn.2014-08.com.example and an identifier of t1, and get an associated TPG (Target Portal Group):

/> iscsi/ create iqn.2014-08.com.example:t1
Created target iqn.2014-08.com.example:t1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

Note: The IQN follows the convention of RFC 3720 (see http://en.wikipedia.org/wiki/ISCSI to get more details).
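As a quick sanity check, an IQN of the form iqn.YYYY-MM.reversed.domain[:identifier] can be matched with a simple pattern (a rough sketch, much looser than the full RFC 3720 grammar):

```shell
# Rough IQN shape check (sketch; much looser than the real RFC 3720 grammar).
iqn='iqn.2014-08.com.example:t1'
if echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:.+)?$'; then
    echo "looks like an IQN"
fi
```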

Now, we can go to the newly created directory:

/> cd iscsi/iqn.2014-08.com.example:t1/tpg1
/iscsi/iqn.20...ample:t1/tpg1> ls
o- tpg1 ................................................. [no-gen-acls, no-auth]
  o- acls ............................................................ [ACLs: 0]
  o- luns ............................................................ [LUNs: 0]
  o- portals ...................................................... [Portals: 1]
    o- 0.0.0.0:3260 ....................................................... [OK]

Below tpg1, three objects have been defined:

  • acls (access control lists: restrict access to resources),
  • luns (logical unit number: define exported resources),
  • portals (define ways to reach the exported resources; consist of pairs of IP addresses and ports).

If you use a version prior to RHEL 7.1 (this step is now automatically done by the iscsi/ create command), you need to create a portal (a pair of IP address and port through which the target can be contacted by initiators):

/iscsi/iqn.20...ple:t1/tpg1> portals/ create
Using default IP port 3260
Binding to INADDR_ANY (0.0.0.0)
Created network portal 0.0.0.0:3260.

Whatever the version, create a LUN matching the kind of backstore you previously chose:

  • Fileio backstore:
    /iscsi/iqn.20...ample:t1/tpg1> luns/ create /backstores/fileio/shareddata
     Created LUN 0.
  • Block backstore:
    /iscsi/iqn.20...ample:t1/tpg1> luns/ create /backstores/block/block1
     Created LUN 0.

Create an ACL combining the previously created IQN (here iqn.2014-08.com.example) with an identifier of your choice (here client); together they form the future initiator name:

/iscsi/iqn.20...ample:t1/tpg1> acls/ create iqn.2014-08.com.example:client
Created Node ACL for iqn.2014-08.com.example:client
Created mapped LUN 0

Optionally, set a userid and a password:

/iscsi/iqn.20...ample:t1/tpg1> cd acls/iqn.2014-08.com.example:client/
/iscsi/iqn.20...xample:client> set auth userid=usr
Parameter userid is now 'usr'.
/iscsi/iqn.20...xample:client> set auth password=pwd
Parameter password is now 'pwd'.

Now, to check the configuration, type:

/iscsi/iqn.20...xample:client> cd ../..
/iscsi/iqn.20...ample:t1/tpg1> ls
o- tpg1 ................................................. [no-gen-acls, no-auth]
  o- acls ............................................................ [ACLs: 1]
  | o- iqn.2014-08.com.example:client ......................... [Mapped LUNs: 1]
  |   o- mapped_lun0 ............................. [lun0 fileio/shareddata (rw)]
  o- luns ............................................................ [LUNs: 1]
  | o- lun0 .......................... [fileio/shareddata (/opt/shareddata.img)]
  o- portals ...................................................... [Portals: 1]
    o- 0.0.0.0:3260 ....................................................... [OK]

Finally, you can quit the targetcli command:

/iscsi/iqn.20...ample:t1/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

Note: The configuration is automatically saved to the /etc/target/saveconfig.json file.
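The interactive steps above can also be scripted by passing each command to targetcli on the command line. Here is a non-interactive sketch using the fileio example (the names and paths are the ones chosen above; adapt them, and run as root):

```shell
# Non-interactive sketch of the target setup above (assumes root and the
# names/paths used in this article; a sketch, not a drop-in script).
targetcli /backstores/fileio create shareddata /opt/shareddata.img 100M write_back=false
targetcli /iscsi create iqn.2014-08.com.example:t1
targetcli /iscsi/iqn.2014-08.com.example:t1/tpg1/luns create /backstores/fileio/shareddata
targetcli /iscsi/iqn.2014-08.com.example:t1/tpg1/acls create iqn.2014-08.com.example:client
# Persist the configuration, as the interactive exit would do.
targetcli saveconfig
```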

Also, it can be useful to check the ports currently used:

# netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:3260            0.0.0.0:*               LISTEN
tcp        0      0 192.168.1.81:22         192.168.1.81:33584      ESTABLISHED
tcp6       0      0 :::22                   :::*                    LISTEN
tcp6       0      0 ::1:25                  :::*                    LISTEN

Then, open the tcp port 3260 in the firewall configuration:

# firewall-cmd --permanent --add-port=3260/tcp
Success

Note1: With RHEL 7.2 (RHBZ#1150656), there is now a firewalld configuration file for the iscsi-target service. So you can type: # firewall-cmd --permanent --add-service iscsi-target
Note2: In the new /usr/lib/firewalld/services/iscsi-target.xml configuration file, two lines are specified for the ports: TCP 3260 and UDP 3260. As everything was working fine until now with the TCP 3260 argument alone, I suppose you can run iSCSI on top of UDP, but it’s not the default option (I didn’t find any details on this point in RFC 7143).

Reload the firewall configuration:

# firewall-cmd --reload
Success

iSCSI Initiator Configuration

To create an iSCSI initiator, you need to follow several steps on the client virtual machine.

Install the following package:

# yum install -y iscsi-initiator-utils

Edit the /etc/iscsi/initiatorname.iscsi file and replace its content with the initiator name that you previously configured as an acl on the target side:

InitiatorName=iqn.2014-08.com.example:client

If you previously set up a userid and a password on the server, edit the /etc/iscsi/iscsid.conf file and add the following lines:

node.session.auth.authmethod = CHAP
node.session.auth.username = usr
node.session.auth.password = pwd
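Instead of pasting the lines, you can flip the existing commented-out settings with sed. The sketch below works on a scratch copy so it can be tried safely (the real file is /etc/iscsi/iscsid.conf and its stock comment lines may differ slightly from this sample):

```shell
# Demonstrate the edit on a scratch copy (sketch; the real file is
# /etc/iscsi/iscsid.conf and ships these lines commented out).
cat > /tmp/iscsid.conf.sample <<'EOF'
#node.session.auth.authmethod = CHAP
#node.session.auth.username = username
#node.session.auth.password = password
EOF
sed -i -e 's/^#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/' \
       -e 's/^#node.session.auth.username = .*/node.session.auth.username = usr/' \
       -e 's/^#node.session.auth.password = .*/node.session.auth.password = pwd/' \
       /tmp/iscsid.conf.sample
grep '^node.session.auth' /tmp/iscsid.conf.sample
```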

Start the iscsi service:

# systemctl start iscsi

Caution: This action is mandatory to be able to unmount the remote resource when rebooting. Don’t confuse iscsid and iscsi services!

Execute the iscsiadm command in discovery mode with the server ip address (here 192.168.1.81):

# iscsiadm --mode discovery --type sendtargets --portal 192.168.1.81
192.168.1.81:3260,1 iqn.2014-08.com.example:t1

Note1: If you don’t specify any port, the default port is 3260.
Note2: Don’t use a DNS name as your portal address; stick to the IP address (here 192.168.1.81), or you will cause yourself a lot of trouble.

Execute the iscsiadm command in node mode with the server ip address (here 192.168.1.81):

# iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 --portal 192.168.1.81 --login
Logging in to [iface: default, target: iqn.2014-08.com.example:t1, portal: 192.168.1.81,3260] (multiple)
Login to [iface: default, target: iqn.2014-08.com.example:t1, portal: 192.168.1.81,3260] successful.

Note: As before, if you don’t specify any port, the default port is 3260. Using a DNS name as the portal address only brings problems.
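The discovery and login steps can be wrapped in a short script (a sketch using the example IP and IQN from above; run as root on the initiator):

```shell
# Sketch: discover and log in to the example target (assumes the server at
# 192.168.1.81 exports iqn.2014-08.com.example:t1; adapt before running).
PORTAL=192.168.1.81
TARGET=iqn.2014-08.com.example:t1
iscsiadm --mode discovery --type sendtargets --portal "$PORTAL"
iscsiadm --mode node --targetname "$TARGET" --portal "$PORTAL" --login
# The node record is created with node.startup=automatic by default,
# so the iscsi service logs back in to the target at boot.
```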

To check the configuration, type:

# lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  2:0:0:0    disk LIO-ORG  shareddata       4.0  iscsi

To be sure that your resource is not in read-only mode (RO column: 1=read-only), type:

# lsblk | egrep "NAME|sda"
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0  100M  0 disk

Now, you can create a file system:

# mkfs.ext4 /dev/sda
mke2fs 1.42.9 (28-Dec-2013)
/dev/sda is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=4096 blocks
25688 inodes, 102400 blocks
5120 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33685504
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

Retrieve the UUID of this disk:

# blkid | grep "/dev/sda"
/dev/sda: UUID="4a184c70-20ad-4d91-a0b1-c2cf0eb1986f" TYPE="ext4"

Add the disk UUID to the /etc/fstab file:

# echo "UUID=..." >> /etc/fstab

Note: Be very careful to type >> and not >, otherwise this will destroy all your configuration!
Make a copy of the /etc/fstab file before doing this operation if you don’t want to take any risk.

Edit the /etc/fstab file and add the mount point (here /mnt), the file system type (here ext4) and the mount options (_netdev):

UUID=... /mnt ext4 _netdev 0 0

Note: The _netdev mount option is mandatory to postpone the mount operation until after network initialization. Without it, the initiator boot process will hang on a timeout and drop into maintenance mode.
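The whole fstab step can be scripted. The sketch below writes to a scratch file so it can be tried safely; on a real system you would grab the UUID with blkid -s UUID -o value and append to /etc/fstab with >> (the UUID here is the one from the example above):

```shell
# Build the fstab line from a UUID (sketch; on a real system use
#   UUID=$(blkid -s UUID -o value /dev/sda)
# and append to /etc/fstab, never a scratch file).
UUID="4a184c70-20ad-4d91-a0b1-c2cf0eb1986f"
echo "UUID=$UUID /mnt ext4 _netdev 0 0" >> /tmp/fstab.test
grep _netdev /tmp/fstab.test
```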

To check your configuration, type:

# mount /mnt
# touch /mnt/testFile

Optionally, you can dump all the initiator configuration (3=max output, 0=min output):

# iscsiadm -m session -P 3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-28
Target: iqn.2014-08.com.example:t1 (non-flash)
	Current Portal: 192.168.1.81:3260,1
	Persistent Portal: 192.168.1.81:3260,1
		**********
		Interface:
		**********
		Iface Name: default
		Iface Transport: tcp
		Iface Initiatorname: iqn.2014-08.com.example:client
		Iface IPaddress: 192.168.1.10
		Iface HWaddress: 
		Iface Netdev: 
		SID: 1
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE
		*********
		Timeouts:
		*********
		Recovery Timeout: 120
		Target Reset Timeout: 30
		LUN Reset Timeout: 30
		Abort Timeout: 15
		*****
		CHAP:
		*****
		username: usr
		password: ********
		username_in: 
		password_in: ********
		************************
		Negotiated iSCSI params:
		************************
		HeaderDigest: None
		DataDigest: None
		MaxRecvDataSegmentLength: 262144
		MaxXmitDataSegmentLength: 262144
		FirstBurstLength: 65536
		MaxBurstLength: 262144
		ImmediateData: Yes
		InitialR2T: Yes
		MaxOutstandingR2T: 1
		************************
		Attached SCSI devices:
		************************
		Host Number: 2	State: running
		scsi2 Channel 00 Id 0 Lun: 0
			Attached scsi disk sda		State: running

Source: targetcli man page and Linux-iSCSI wiki.

Useful Tips

Before rebooting, set up a virtual console, this can be helpful!

If you need to shut down target and initiator, shut down the initiator first. If you shut down the target first, the initiator won’t be able to unmount the remote resource and will be stuck in the shutdown process.

During the exam, as an extra precaution, unmount the remote resource before rebooting the initiator, you will avoid any bad surprise.
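The safe teardown sequence on the initiator can be sketched as follows (assumes the example mount point, target and portal from above; run as root):

```shell
# Sketch: cleanly detach the remote disk before rebooting the initiator.
umount /mnt
iscsiadm --mode node --targetname iqn.2014-08.com.example:t1 \
         --portal 192.168.1.81 --logout
```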

Additional Resources

In addition, you can watch CalPOP’s video Creating iSCSI SAN Storage on Linux (CentOS 7.0) (10min/2015), Venkat Nagappan’s video Setting up iSCSI Target & Initiator (19min/2015) or follow this IBM iSCSI tutorial.

There is also a wiki about Targetcli.

Dell offers some interesting information about iSCSI, MPIO and performance tips in its RHEL Configuration Guide for Dell Storage PS Series Arrays.

Check Your Knowledge

Test yourself!


34 Comments on "RHEL7: Configure a system as either an iSCSI target or initiator that persistently mounts an iSCSI target."
g.cardone

I found it was also necessary to issue:

systemctl start target

otherwise the initiator could not create the filesystem with the above mentioned:

mkfs.ext4 /dev/sda

rodolfocasas

Hi,

why don’t you use

# firewall-cmd --permanent --add-port=3260/tcp

Thank you
RODOLFO

jerky_rs
You may need to set ACL configuration against the target or limit the target to a given IP address or IQN. If you are required to do this you will have 2 options 1) Protect it via ACL using IQN of client = “cat /etc/iscsi/initiatorname.iscsi” on client and add on server in targetcli (quite easy really) “../acls> create iqn.1994-05.com.redhat:a51085a87171” 2) Protect it via firewall, using standard --add-port will not protect it unless you have specific source address in your zone. If this is the case you will need to use rich rules. The easiest is to use firewall-config as remembering… Read more »
Shikaz

for portals/
i found the following , it’s better to delete the default 0.0.0.0 and add the targetcli server ip.

# portals/ delete 0.0.0.0 3260

and add the targetcli server ip

# portals/ create 192.168.10.20 3260

bubson01

Dear CertDepot,
Thanks for your tutorials, they are so informative. But i’ve been struggling with the iscsi-initiator on the client side for a while now. I’ve followed your tutorial from start to finish but anytime i come to the login part on the client side I keep on getting this error
“iscsiadm: initiator reported error (24 – iSCSI login failed due to authorization failure).”
” iscsiadm: Could not log into all portals”

Do you have any ideas what I might be doing wrong so that i can correct it? it is driving me crazy!

tron

This may be because you added authentication at the target after performing the initial discovery. Discovery “caches” the target configuration and will not be updated if you update iscsid.conf. You should “rediscover” by using iscsiadm -m node -o delete and then redo discovery.
You can check “cached” configurations under /var/lib/iscsi/nodes.
Also, iscsiadm -m session -P 2 -S should show CHAP user/password to check if current values are what you expect.

keshara dorakumbura

Looking for difference between iscsi and iscsid services.

Can anybody provide some brief detail?

linuxfan

All you need to remember is that both need to be running in order for your initiator to work properly.

Another piece of advice: NEVER and I mean NEVER use the DNS entry as your portal address. ALWAYS use the server IP. That is if you don’t want to spend hours troubleshooting why your initiator isn’t finding its target. Some of us learned the hard way.

tron

While testing this I fell into a booby trap: had a server exporting LUNs made from a LVM managed disk and also had LVM merging those LUNs into an iSCSI PV.
After a server (target) reboot, the server LVM claimed the LUNS and the target was unable to “export” them. Ouch. Ended up learning about LVM filtering, which can be used to prevent LVM from managing anything it sees that looks like a PV.

Jaz
Hello everyone, If you are experiencing an issue during your RHEL training such as: “iscsiadm: initiator reported error (24 – iSCSI login failed due to authorization failure).” ” iscsiadm: Could not log into all portals” It appears to be a bug even in RHEL7 as far as I understand, i am not sure about the upgraded version like 7.2 or software updates may fix it only if you are subscribed to redhat. Now here is the solution: If you are running tests on VMs and your domain is example.com and you have named your machines after the domain for e.g… Read more »
drainuzzo

I’m on a Centos 7, I run systemctl restart iscsid.service && systemctl restart iscsi.service on the initiator as said by linuxfan user and it worked without restarting anything on the target

abhiquiet
Hello all, The article here is really informative and helpful for the beginners. Thanks for writing in the complete step by step guide. I am new to the environment, and have tried creating the iscsi target on centos 7 based on the inputs given. I am connecting the ISCSI target from ubuntu on client side. I am able to connect to the target, but the drive connected is in the read only mode. I am not able to trace the error I did. Can you please guide me, where I may be going wrong while making the connection / or… Read more »
yarilc

On CentOS 7.1.1503 it looks like the python-six library is too old and not aligned with targetcli requirements. It doesn’t work with targetcli. If you’re working on CentOS 1503, update that package (yum update -y python-six).

twostep

Very useful,
To verify node settings (authentication, automatic startup, discovery address etc.) on the client side:
iscsiadm -m node -T target_name -o show

popo

Hi! After writing an entry in /etc/fstab, I type reboot and it gets stuck. It says “connection1:0: ping timeout of 5 seconds expired, last rx 4302580612, last ping 4302585612, now 4302590625”

Any ideas? I only found one elegant solution, i.e. to issue the iscsiadm … --logout command after typing mount -a, and then it reboots instantly; otherwise it takes approx. 5 min.

Any more tips for rebooting after doing persistent mount?

