User:Tom/RHCE EX300
RHCSA & RHCE Red Hat Enterprise Linux 7: Training and Exam Preparation Guide (EX200 and EX300), Third Edition, March 2015, by Asghar Ghori
RHCE
Chapter 14 Writing Shell Scripts
Indicate the shell in which the script will run
#!/bin/bash
Add a new path to the existing PATH setting.
#export PATH=$PATH:/usr/local/bin
Debug a shell script.
#bash -x /usr/local/bin/sysinfo.sh
nl Number lines of files
Command line arguments $0, $1, $#, $*, $$
- $0 scriptname
- $1 first argument
- $# # of arguments
- $* all arguments
- $$ PID of the script
- ${10} for arguments above 9.
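The parameters above can be exercised with a small throwaway script (the name /tmp/args.sh is just an example):

```shell
#!/bin/bash
# Write a tiny script that prints the special parameters, then run it.
cat > /tmp/args.sh <<'EOF'
#!/bin/bash
echo "script : $0"
echo "first  : $1"
echo "count  : $#"
echo "all    : $*"
echo "pid    : $$"
EOF
bash /tmp/args.sh apple banana cherry
```

With three arguments, the count line prints 3 and the all line prints apple banana cherry; the pid differs on every run.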
shift Move command arguments one position to the left; the value of the first argument is lost during this move.
echo -e Enable interpretation of backslash escapes. See man echo for the escape sequences.
read Var Read user input from the keyboard
$? Exit code of the last command
test Test conditions (man test), e.g. int1 -eq int2
if condition; then action; else action; fi
if condition; then action; elif condition; then action; else action; fi
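A sketch that combines test with if/elif/else and shows $?; the function name compare is made up for this illustration:

```shell
#!/bin/bash
# compare: classify two integers with test(1) inside if/elif/else.
compare() {
    if [ "$1" -eq "$2" ]; then
        echo equal
    elif [ "$1" -gt "$2" ]; then
        echo greater
    else
        echo smaller
    fi
}
compare 7 3            # prints: greater
echo "exit code: $?"   # $? holds the exit status of the last command: 0
```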
Looping Statements
for do done while do done until do done
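The three looping constructs can be sketched side by side, each counting from 1 to 3:

```shell
#!/bin/bash
# for iterates over a word list.
for i in 1 2 3; do
    echo "for:   $i"
done

# while repeats as long as the condition succeeds.
n=1
while [ "$n" -le 3 ]; do
    echo "while: $n"
    n=$((n + 1))
done

# until repeats as long as the condition fails.
m=1
until [ "$m" -gt 3 ]; do
    echo "until: $m"
    m=$((m + 1))
done
```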
Test Conditions
case $var in
val1)
;;
val2)
;;
*)
;;
esac
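A filled-in version of the skeleton above; the svc wrapper function is hypothetical:

```shell
#!/bin/bash
# Map an action word to a message with case/esac.
svc() {
    case $1 in
        start)
            echo "starting" ;;
        stop)
            echo "stopping" ;;
        *)
            echo "usage: svc {start|stop}" ;;
    esac
}
svc start   # prints: starting
svc reload  # matches *: usage: svc {start|stop}
```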
See man bash for more details.
Chapter 15 Configuring Bonding, Teaming, IPv6 and Routing
Link aggregation is a technique by which two or more network interfaces are logically configured to provide higher performance using their combined bandwidth, and fault tolerance should all but one of them fail. Two common methods for link aggregation are bonding and teaming, and both are supported natively in RHEL7.
Link Aggregation
Link aggregation is a term for combining the capabilities of two or more physical or virtual Ethernet network interfaces so that they function as a single network pipe. RHEL7 supports two link aggregation methods, referred to as bonding and teaming.
Bonding and teaming can be configured using tools such as the Network Manager CLI or TUI or the GNOME Network Connections GUI.
Interface Bonding
Interface bonding provides the ability to bind two or more network interfaces together into a single logical bonded channel that acts as the master for all slave interfaces that are added to it. The support for bonding is integrated entirely into the kernel as a loadable module. This module is called bonding.
Configure Interface bonding by Editing Files
In this exercise you will add two new interfaces on the 192.168.1.0/24 network to server1 and call them eth2 and eth3. Form a bond by creating configuration files and executing the appropriate commands to activate it. Reboot to verify bond activation. Assign hostname server1bond.example.org with alias serv1bond. Add the IP and hostname to /etc/hosts.
Add two network devices to server1 using the virtual console for server1 on host1. Log on to server1 and run the ip command to check the new interfaces.
[root@server1 ~]# ip addr
...
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
link/ether 52:54:00:1f:65:38 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe1f:6538/64 scope link
valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP qlen 1000
link/ether 52:54:00:6a:f7:a4 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe6a:f7a4/64 scope link
valid_lft forever preferred_lft forever
...
[root@server1 ~]#
The output indicates the presence of two new interfaces named eth2 and eth3.
Load the bonding driver called bonding in the kernel with the modprobe command if it is not already loaded. Verify with the modinfo command.
[root@server1 ~]# modprobe bonding
[root@server1 ~]# modinfo bonding
filename:       /lib/modules/3.10.0-327.22.2.el7.x86_64/kernel/drivers/net/bonding/bonding.ko
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
rhelversion:    7.2
srcversion:     49765A3F5CDFF2C3DCFD8E6
depends:
intree:         Y
vermagic:       3.10.0-327.22.2.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        A9:80:1A:61:B3:68:60:1C:40:EB:DB:D5:DF:D1:F3:A7:70:07:BF:A4
sig_hashalgo:   sha256
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3, 3 for encap layer 2+3, 4 for encap layer 3+4 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm:           packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm:           lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)
[root@server1 ~]#
Generate UUIDs for both new interfaces using the uuidgen command.
[root@server1 ~]# uuidgen eth2
90454e94-3c7f-4e5f-8d04-5367fe8aaf96
[root@server1 ~]# uuidgen eth3
bcf28a1e-808e-4d0a-9e1e-0ab2fc01986e
[root@server1 ~]#
Create file /etc/sysconfig/network-scripts/ifcfg-bond0 for bond0 with the following settings:
[root@server1 network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-rr"
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.122.111
NETMASK=255.255.255.0
GATEWAY=192.168.122.1
IPV4_FAILURE_FATAL=no
IPV6INIT=no
[root@server1 network-scripts]#
Create the files ifcfg-eth2 and ifcfg-eth3 in /etc/sysconfig/network-scripts.
[root@server1 network-scripts]# cat ifcfg-eth2
DEVICE=eth2
NAME=eth2
UUID=2e63ec5c-a82e-43ba-bdbe-5d43a18cc3c6
TYPE=Ethernet
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@server1 network-scripts]# cat ifcfg-eth3
DEVICE=eth3
NAME=eth3
UUID=bbf8d1ab-7557-482f-b344-21e17fdb5eff
TYPE=Ethernet
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@server1 network-scripts]#
Deactivate and activate bond0 with the ifdown and ifup commands, verify with the ip addr command, and reboot to ensure the configuration survives a reboot.
[root@server1 ~]# ifdown bond0
[root@server1 ~]# ifup bond0
[root@server1 ~]# ip addr
[root@server1 ~]# reboot
Open /etc/hosts and append the following entry.
[root@server1 ~]# vi /etc/hosts
192.168.1.110   server1bond.example.org server1bond
Configure Interface Bonding with NetworkManager CLI
The nmcli command is a NetworkManager tool that allows you to add, show, alter, delete, start and stop bonding and teaming interfaces and control and report their status.
The exercise will be done on server2; the interface allocation will be done on host1. Two new interfaces, eth2 and eth3, will be added to server2. Configure a bond and activate it using NetworkManager commands. Reboot to verify bond activation.
Check the operational status of the NetworkManager service.
[root@server2 ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2016-07-03 23:21:46 CEST; 3 days ago
Main PID: 703 (NetworkManager)
CGroup: /system.slice/NetworkManager.service
├─703 /usr/sbin/NetworkManager --no-daemon
└─895 /usr/bin/teamd -o -n -U -D -N -t team0
Jul 03 23:21:51 server2 NetworkManager[703]: <info> (team0): IPv6 config waiting until carrie... on
Jul 03 23:21:51 server2 NetworkManager[703]: <info> (team0): device state change: ip-config -... 0]
Jul 03 23:21:51 server2 NetworkManager[703]: <info> (team0): device state change: ip-check ->... 0]
Jul 03 23:21:51 server2 NetworkManager[703]: <info> (team0): device state change: secondaries... 0]
Jul 03 23:21:51 server2 NetworkManager[703]: <info> NetworkManager state is now CONNECTED_GLOBAL
Jul 03 23:21:51 server2 NetworkManager[703]: <info> NetworkManager state is now CONNECTED_SITE
Jul 03 23:21:51 server2 NetworkManager[703]: <info> NetworkManager state is now CONNECTED_GLOBAL
Jul 03 23:21:51 server2 NetworkManager[703]: <info> (team0): Activation: successful, device a...ed.
Jul 03 23:21:53 server2 NetworkManager[703]: <info> Policy set 'eth0' (eth0) as default for I...NS.
Jul 03 23:21:55 server2 NetworkManager[703]: <info> startup complete
Hint: Some lines were ellipsized, use -l to show in full.
[root@server2 ~]#
List available network interfaces including the ones just added.
[root@server2 ~]# nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
bond0   bond      connected  bond0
eth0    ethernet  connected  eth0
eth1    ethernet  connected  bond-slave-eth1
eth2    ethernet  connected  bond-slave-eth2
eth3    ethernet  connected  eth3
eth4    ethernet  connected  eth4
team0   team      connected  team0
lo      loopback  unmanaged  --
[root@server2 ~]#
Load the bonding driver in the kernel with the modprobe command if it is not already loaded and verify with the modinfo command.
[root@server2 ~]# modprobe bonding
[root@server2 ~]# modinfo bonding
filename:       /lib/modules/3.10.0-327.18.2.el7.x86_64/kernel/drivers/net/bonding/bonding.ko
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
alias:          rtnl-link-bond
rhelversion:    7.2
srcversion:     49765A3F5CDFF2C3DCFD8E6
depends:
intree:         Y
vermagic:       3.10.0-327.18.2.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        EB:27:91:DE:1A:BE:A5:F9:5A:A5:BC:B8:91:E1:33:2B:ED:29:8E:5E
sig_hashalgo:   sha256
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3, 3 for encap layer 2+3, 4 for encap layer 3+4 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           arp_all_targets:fail on any/all arp targets timeout; 0 for any (default), 1 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interface by setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)
parm:           packets_per_slave:Packets to send per slave in balance-rr mode; 0 for a random slave, 1 packet per slave (default), >1 packets per slave. (int)
parm:           lp_interval:The number of seconds between instances where the bonding driver sends learning packets to each slaves peer switch. The default is 1. (uint)
[root@server2 ~]#
Add a logical interface called bond0 of type bond with connection name bond0, load balancing policy round-robin, IP address 192.168.122.112/24 and gateway 192.168.122.1
[root@server2 ~]# nmcli dev status
DEVICE  TYPE      STATE         CONNECTION
eth0    ethernet  connected     eth0
eth3    ethernet  connected     eth3
eth4    ethernet  connected     eth4
team0   team      connected     team0
eth1    ethernet  disconnected  --
eth2    ethernet  disconnected  --
lo      loopback  unmanaged     --
[root@server2 ~]# nmcli con add type bond con-name bond0 ifname bond0 mode balance-rr ip4 192.168.122.112/24 gw4 192.168.122.1
Connection 'bond0' (3a3657d8-189d-462f-bbfb-d76167dcf890) successfully added.
[root@server2 ~]#
This command has added a bond device and created /etc/sysconfig/network-scripts/ifcfg-bond0.
[root@server2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS=mode=balance-rr
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
IPADDR=192.168.122.112
PREFIX=24
GATEWAY=192.168.122.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=bond0
UUID=3a3657d8-189d-462f-bbfb-d76167dcf890
ONBOOT=yes
[root@server2 ~]#
Now add slave interfaces eth1 and eth2 to the master bond device bond0.
[root@server2 ~]# nmcli con add type bond-slave ifname eth1 master bond0
Connection 'bond-slave-eth1' (bfa37034-b685-409f-9e9e-23a7b13a4939) successfully added.
[root@server2 ~]# nmcli con add type bond-slave ifname eth2 master bond0
Connection 'bond-slave-eth2' (f5152103-78b3-49eb-baab-fe890305d85d) successfully added.
[root@server2 ~]#
These commands have added the eth1 and eth2 interfaces as slaves to bond0 and created the files ifcfg-bond-slave-eth1 and ifcfg-bond-slave-eth2 in the directory /etc/sysconfig/network-scripts.
[root@server2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond-slave-eth1
TYPE=Ethernet
NAME=bond-slave-eth1
UUID=bfa37034-b685-409f-9e9e-23a7b13a4939
DEVICE=eth1
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@server2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond-slave-eth2
TYPE=Ethernet
NAME=bond-slave-eth2
UUID=f5152103-78b3-49eb-baab-fe890305d85d
DEVICE=eth2
ONBOOT=yes
MASTER=bond0
SLAVE=yes
[root@server2 ~]#
Activate bond0.
[root@server2 ~]# nmcli con down bond0
Connection 'bond0' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)
[root@server2 ~]# nmcli con up bond0
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/11)
[root@server2 ~]#
And check the new connection and IP assignments.
[root@server2 ~]# ip addr|grep bond0
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
inet 192.168.122.112/24 brd 192.168.122.255 scope global bond0
[root@server2 ~]# nmcli con show
NAME UUID TYPE DEVICE
eth3 0e87dd30-b785-4b68-ae9f-565478e3f136 802-3-ethernet --
bond-slave-eth2 f5152103-78b3-49eb-baab-fe890305d85d 802-3-ethernet eth2
eth4 4d91a659-3606-44c5-9291-ed5cff38776a 802-3-ethernet --
eth3 8a11e510-aa77-496f-8acb-0adb7aef9a12 802-3-ethernet eth3
team0 c8f6bad2-2700-40d2-8286-acc38e87f74e team team0
eth4 804f1ee0-0bbe-4fc5-b77b-d072462b9d51 802-3-ethernet eth4
bond0 3a3657d8-189d-462f-bbfb-d76167dcf890 bond bond0
bond-slave-eth1 bfa37034-b685-409f-9e9e-23a7b13a4939 802-3-ethernet eth1
eth0 9086b45c-33a0-4f59-b402-2a63c37086f9 802-3-ethernet eth0
[root@server2 ~]# nmcli con show --active
NAME UUID TYPE DEVICE
bond-slave-eth2 f5152103-78b3-49eb-baab-fe890305d85d 802-3-ethernet eth2
eth3 8a11e510-aa77-496f-8acb-0adb7aef9a12 802-3-ethernet eth3
team0 c8f6bad2-2700-40d2-8286-acc38e87f74e team team0
eth4 804f1ee0-0bbe-4fc5-b77b-d072462b9d51 802-3-ethernet eth4
bond0 3a3657d8-189d-462f-bbfb-d76167dcf890 bond bond0
bond-slave-eth1 bfa37034-b685-409f-9e9e-23a7b13a4939 802-3-ethernet eth1
eth0 9086b45c-33a0-4f59-b402-2a63c37086f9 802-3-ethernet eth0
[root@server2 ~]#
Reboot and verify the connections again.
Interface Teaming
Interface teaming is introduced in RHEL7 as an additional choice for implementing enhanced throughput and fault tolerance at the network interface level. Teaming is a new implementation that handles the flow of network packets faster than bonding does. And, unlike bonding, which is accomplished purely in kernel space and provides no user control over its operation, teaming integrates only the essential code into the kernel; the rest is implemented by the teamd daemon, which gives users the ability to control it with the teamdctl command.
Like bonding, teaming can be configured by either editing the files directly or using the NetworkManager CLI, TUI or Gnome Network GUI.
Configure Interface Teaming with Network Manager CLI
Add two new interfaces to server2 and call them eth3 and eth4. Configure a team using the NetworkManager CLI and reboot to verify team activation. Assign a hostname alias to the team IP address and run a ping from another server to confirm connectivity.
Add two virtual network devices to server2.
Check the status of the NetworkManager service.
[root@server2 ~]# systemctl status NetworkManager
● NetworkManager.service - Network Manager
Loaded: loaded (/usr/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2016-07-07 11:01:23 CEST; 3h 18min ago
Main PID: 690 (NetworkManager)
CGroup: /system.slice/NetworkManager.service
├─690 /usr/sbin/NetworkManager --no-daemon
└─772 /usr/bin/teamd -o -n -U -D -N -t team0
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (eth1): device state change: ip-config ->... 0]
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (eth1): device state change: secondaries ... 0]
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (eth1): Activation: successful, device ac...ed.
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (bond0): link connected
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (eth2): device state change: config -> ip... 0]
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (bond0): enslaved bond slave eth2
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (eth2): Activation: connection 'bond-slav...ion
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (eth2): device state change: ip-config ->... 0]
Jul 07 11:35:46 server2 NetworkManager[690]: <info> (eth2): device state change: secondaries ... 0]
Jul 07 11:35:47 server2 NetworkManager[690]: <info> (eth2): Activation: successful, device ac...ed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server2 ~]#
List all available network interfaces including the ones just added.
[root@server2 ~]# nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
bond0   bond      connected  bond0
eth0    ethernet  connected  eth0
eth1    ethernet  connected  bond-slave-eth1
eth2    ethernet  connected  bond-slave-eth2
eth3    ethernet  connected  Wired connection 2
eth4    ethernet  connected  Wired connection 1
lo      loopback  unmanaged  --
[root@server2 ~]#
Load the team driver in the kernel and verify.
[root@server2 ~]# modprobe team
[root@server2 ~]# modinfo team
filename:       /lib/modules/3.10.0-327.18.2.el7.x86_64/kernel/drivers/net/team/team.ko
alias:          rtnl-link-team
description:    Ethernet team device driver
author:         Jiri Pirko <jpirko@redhat.com>
license:        GPL v2
rhelversion:    7.2
srcversion:     C59FD6905408120CA7C83CD
depends:
intree:         Y
vermagic:       3.10.0-327.18.2.el7.x86_64 SMP mod_unload modversions
signer:         CentOS Linux kernel signing key
sig_key:        EB:27:91:DE:1A:BE:A5:F9:5A:A5:BC:B8:91:E1:33:2B:ED:29:8E:5E
sig_hashalgo:   sha256
[root@server2 ~]#
Add a logical interface called team0 of type team with connection name team0, IP address 192.168.122.122/24 and gateway 192.168.122.1.
[root@server2 ~]# nmcli con add type team con-name team0 ifname team0 ip4 192.168.122.122/24 gw4 192.168.122.1
Connection 'team0' (7e4157b1-b416-4d59-a70a-d263c86d6419) successfully added.
[root@server2 ~]#
This command has added a team device and created the file /etc/sysconfig/network-scripts/ifcfg-team0.
[root@server2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-team0
DEVICE=team0
DEVICETYPE=Team
BOOTPROTO=none
IPADDR=192.168.122.122
PREFIX=24
GATEWAY=192.168.122.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=team0
UUID=7e4157b1-b416-4d59-a70a-d263c86d6419
ONBOOT=yes
[root@server2 ~]#
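The file above does not name a runner, so teamd falls back to its default. A specific load-balancing policy can be requested through a TEAM_CONFIG JSON string in the same file; this is a sketch, and the activebackup runner here is only an example:

```
# Optional addition to ifcfg-team0: select the teamd runner explicitly
TEAM_CONFIG='{"runner": {"name": "activebackup"}}'
```

With nmcli the same JSON can likely be set through the team.config property (nmcli con mod team0 team.config '...'); check nmcli(1) and teamd.conf(5) if unsure.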
Add eth3 and eth4 interfaces as slaves to the team.
[root@server2 ~]# nmcli con add type team-slave con-name eth3 ifname eth3 master team0
Connection 'eth3' (b2dfeb9c-750f-4340-8b1c-11d1d8495380) successfully added.
[root@server2 ~]# nmcli con add type team-slave con-name eth4 ifname eth4 master team0
Connection 'eth4' (4e40798d-1530-4e12-8c8c-8a93994a2983) successfully added.
[root@server2 ~]#
These commands have added interfaces eth3 and eth4 as slaves to team0 and created the files ifcfg-eth3 and ifcfg-eth4 in the directory /etc/sysconfig/network-scripts.
[root@server2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
NAME=eth3
UUID=b2dfeb9c-750f-4340-8b1c-11d1d8495380
DEVICE=eth3
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
[root@server2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth4
NAME=eth4
UUID=4e40798d-1530-4e12-8c8c-8a93994a2983
DEVICE=eth4
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
[root@server2 ~]#
Activate team0.
[root@server2 ~]# nmcli con up team0
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/27)
[root@server2 ~]#
Check the new connection and IP assignments.
[root@server2 ~]# ip addr|grep team
14: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
inet 192.168.
Show the connection for the team and slaves.
[root@server2 ~]# nmcli con show|egrep 'team0|eth3|eth4'
Wired connection 1  717124f9-86bf-4d4b-9883-4c3d54f4a2c5  802-3-ethernet  eth4
eth4                4e40798d-1530-4e12-8c8c-8a93994a2983  802-3-ethernet  --
eth3                b2dfeb9c-750f-4340-8b1c-11d1d8495380  802-3-ethernet  --
team0               7e4157b1-b416-4d59-a70a-d263c86d6419  team            team0
Wired connection 2  27b05974-258d-4ae1-9c0b-81d04ff85ca0  802-3-ethernet  eth3
[root@server2 ~]#
Get the details of the team devices.
Apparently something is going wrong here. Fine.
[root@server2 ~]# teamnl team0 ports
6: eth4: up 0Mbit HD
5: eth3: up 0Mbit HD
[root@server2 ~]#
[root@server2 ~]# teamdctl team0 state
setup:
runner: roundrobin
ports:
eth3
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
eth4
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
[root@server2 ~]#
Add the following entry to the /etc/hosts file.
192.168.122.122 server2t server2t.roggeware.nl
And reboot the system to verify persistence across reboots.
IPv6
IPv6 is a 128-bit software address, providing an address space of 2^128 addresses. IPv6 addresses contain eight colon-separated groups of four hexadecimal digits.
The ip addr command output below shows the IPv6 addresses for the configured interfaces.
[root@server2 ~]# ip addr|grep inet6
inet6 ::1/128 scope host
inet6 2602:306:cc2d:f591::b/64 scope global
inet6 fe80::5054:ff:fe7b:595a/64 scope link
inet6 fe80::5054:ff:feea:a5e0/64 scope link tentative dadfailed
inet6 fe80::5054:ff:febe:9f27/64 scope link
[root@server2 ~]#
Managing IPv6
IPv6 addresses can be assigned to interfaces using any of the network management tools available to us. Entries added with the ip command do not survive system reboots.
Configure and Test IPv6 Addresses
How to handle the double colons (::).
IPv6 addresses go in the adapter configuration and the hosts file; configuration with nmtui.
Routing
Routing is the process of choosing paths on the network along which to send network traffic. This process is implemented with the deployment of specialized hardware devices called routers.
When systems on two distinct networks communicate with each other, proper routes must be in place for them to be able to talk.
One of three rules is applied in the routing mechanism to determine the correct route.
- If the source and destination systems are on the same network, the packet is sent directly to the destination system.
- If the source and destination systems are on two different networks, all defined (static or dynamic) routes are tried one after the other. If a proper route is determined, the packet is forwarded to it, which then forwards the packet to the correct destination.
- If the source and destination systems are on two different networks but no routes are defined between them, the packet is forwarded to the default router (or the default gateway), which attempts to search for an appropriate route to the destination. If found, the packet is delivered to the destination system.
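The first rule, deciding whether two hosts share a network, is just a bitwise AND of each address with the netmask; a bash sketch of the check (ip_to_int and same_net are made-up helper names):

```shell
#!/bin/bash
# ip_to_int: convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# same_net A B PREFIX: succeed if A and B fall in the same /PREFIX network.
same_net() {
    local mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

same_net 192.168.1.10 192.168.1.200 24 && echo "same network"
same_net 192.168.1.10 192.168.2.10  24 || echo "different networks"
```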
Routing Table
A routing table preserves information about available routes and their status. It may be built and updated dynamically, or manually by adding or removing routes. The ip command can be used to view the entries in the routing table on our RHEL7 system.
[root@atlas ~]# ip route
default via 192.168.1.254 dev enp3s0
169.254.0.0/16 dev enp3s0 scope link metric 1002
169.254.0.0/16 dev enp4s1 scope link metric 1004
192.168.1.0/24 dev enp3s0 proto kernel scope link src 192.168.1.100
192.168.1.0/24 dev enp4s1 proto kernel scope link src 192.168.1.101
192.168.2.0/24 dev virbr1 proto kernel scope link src 192.168.2.1
192.168.3.0/24 dev virbr2 proto kernel scope link src 192.168.3.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
[root@atlas ~]#
Other commands, such as route, display additional columns of information that include flags, references, use and iface. Common flags are U (route is up), H (destination is a host) and G (route is a gateway).
[root@atlas ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         router.home     0.0.0.0         UG    0      0        0 enp3s0
link-local      0.0.0.0         255.255.0.0     U     1002   0        0 enp3s0
link-local      0.0.0.0         255.255.0.0     U     1004   0        0 enp4s1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp3s0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp4s1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 virbr1
192.168.3.0     0.0.0.0         255.255.255.0   U     0      0        0 virbr2
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
[root@atlas ~]#
Managing Routes
Managing routes involves adding, modifying and deleting routes and setting the default route. The ip command, the NetworkManager TUI, the Network Settings GUI or the Network Connections GUI can be used for route administration. Entries added with the ip command do not survive system reboots. Those added with the other tools stay persistent, as they are saved in specific route-* files in the /etc/sysconfig/network-scripts directory.
Add Static Routes Manually
Temporarily add a static route to network 192.168.3.0/24 via eth0 and another to network 192.168.4.0/24 via team0, both with gateway 192.168.122.1, using the ip command.
Add a static route.
[root@server1 ~]# ip route add 192.168.3.0/24 via 192.168.122.1 dev eth0
Add a static route to 192.168.4.0/24 via team0 with gateway 192.168.122.1.
[root@server1 ~]# ip route add 192.168.4.0/24 via 192.168.122.1 dev team0
Show the routing table to validate the addition of the new routes.
[root@server1 ~]# ip route
default via 192.168.122.1 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev team0 scope link metric 1009
192.168.3.0/24 via 192.168.122.1 dev eth0
192.168.4.0/24 via 192.168.122.1 dev team0
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.101
192.168.122.0/24 dev team0 proto kernel scope link src 192.168.122.121
[root@server1 ~]#
Reboot the system and run ip route again to confirm the removal of the new routes.
Create files route-eth0 and route-team0 in /etc/sysconfig/network-scripts and insert the following entries.
[root@server1 ~]# cat /etc/sysconfig/network-scripts/route-eth0
ADDRESS0=192.168.3.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.122.1
[root@server1 ~]# cat /etc/sysconfig/network-scripts/route-team0
ADDRESS0=192.168.4.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.122.1
[root@server1 ~]#
Restart eth0 and team0 for the routes to take effect.
[root@server1 ~]# ifdown eth0; ifup eth0
[root@server1 ~]# ifdown team0; ifup team0
[root@server1 ~]#
Run the ip route command again to validate the presence of the new routes.
Delete both routes by removing their entries from the routing table and deleting the configuration files.
[root@server1 ~]# ip route del 192.168.3.0/24
[root@server1 ~]# ip route del 192.168.4.0/24
[root@server1 ~]# rm -f /etc/sysconfig/network-scripts/route-eth0 /etc/sysconfig/network-scripts/route-team0
[root@server1 ~]#
Confirm the deletion with ip route. You should no longer see the routes.
Commands
modprobe bonding    Add and remove modules from the kernel
modinfo bonding     Show information about a kernel module
uuidgen eth2        Generate UUIDs
systemctl status NetworkManager
nmcli dev status
nmcli con add type bond con-name bond0 ifname bond0 mode balance-rr ip4 192.168.122.112 gw4 192.168.122.1
nmcli con add type bond-slave ifname eth1 master bond0
nmcli con up bond0
nmcli con show
nmtui    If NetworkManager is active
yum install teamd    Install the teaming software
modprobe team
modinfo team
nmcli con add type team con-name team0 ifname team0 ip4 192.168.122.122/24 gw4 192.168.122.1
nmcli con add type team-slave con-name eth4 ifname eth4 master team0
nmtui
nm-connection-editor    Graphical network administration tool, including bonding and teaming
teamd teamdctl teamnl
IPv6    128-bit, e.g. 1204:bab1:21d1:bb43:23a1:9bde:87df:bac9. Both IPv4 and IPv6 addresses can be set on an adapter.
ping6
RIP     Routing Information Protocol
OSPF    Open Shortest Path First
ip route, route, netstat -rn
ip route add
ip route del
Files
/etc/sysconfig/network-scripts/ifcfg-bond0
/etc/sysconfig/network-scripts/ifcfg-eth2
/etc/sysconfig/network-scripts/ifcfg-eth3
Chapter 16 Synchronizing Time with NTP
The Network Time Protocol service maintains the clock on the system and keeps it synchronized with a more accurate and reliable source of time.
Understanding Network Time Protocol
Network Time Protocol (NTP) is a networking protocol for synchronizing the system clock with timeservers that are physically closer and redundant, for high accuracy and reliability. NTP supports both client-server and peer-to-peer configurations, with an option to use either public-key or symmetric-key cryptography for authentication.
The NTP daemon, called ntpd, uses the UDP protocol over well-known port 123, and it runs on all participating servers, peers and clients.
Time Source
A time source is any device that acts as a provider of time to other devices. The most accurate source of time is provided by atomic clocks that are deployed around the globe. Atomic clocks use Coordinated Universal Time (UTC) for time accuracy. When choosing a time source for a network, preference should be given to one that is physically close and takes the least amount of time to send and receive NTP packets.
Local System Clock
You can arrange for one of the RHEL systems to function as a provider of time using its own clock. This requires the maintenance of correct time on this server, either manually or automatically via the cron daemon. Such a server has no way of synchronizing itself with a more reliable and precise external time source. Using a local clock as a timeserver, with reliance on its own clock, is the least recommended option.
Internet-Based Public Timeserver
Several public timeservers (visit www.ntp.org for a list) are available via the internet. To use such a time source, you may need to open a port in the firewall to allow NTP traffic to pass through. Internet-based timeservers are spread around the world and are typically operated by government agencies, research organizations and universities.
Radio/Atomic Clock
A radio clock is regarded as the most accurate provider of time. A radio clock receives time updates from one or more atomic clocks. Global Positioning System (GPS), the National Institute of Standards and Technology (NIST) radio station WWVB broadcasting in the Americas, and the DCF77 radio broadcasts in Europe are some popular radio clock methods.
NTP Roles
A role is a function that a system performs from an NTP standpoint. A system can be configured to assume one or more of the following roles.
Primary NTP Server
A primary NTP server gets time from one of the time sources mentioned above and provides time to one or more secondary servers or clients, or both. It can also be configured to broadcast time to secondary servers and clients.
Secondary NTP Server
A secondary NTP server receives time from a primary server or directly from one of the time sources mentioned above. It can be used to provide time to a set of clients to offload the primary, or for redundancy.
NTP Peer
An NTP peer provides time to an NTP server and receives time from it. All peers work at the same stratum level and all of them are considered equally reliable. Both primary and secondary servers can be peers of each other.
NTP Client
An NTP client receives time from either a primary or a secondary server. A client can be configured in one of the following ways:
- As a polling client that contacts a defined NTP server directly for time synchronization.
- As a broadcast client that listens to time broadcasts by an NTP server. The NTP server must be configured in broadcast mode in order for a broadcast client to be able to bind to it. A broadcast NTP configuration cannot span beyond the local subnet.
- As a multicast client, which operates in a similar fashion to a broadcast client; however, it is able to span beyond the local subnet. The NTP server must be configured in multicast mode in order for a client to work with it.
- As a manycast client that automatically discovers manycast NTP servers and uses the ones with the best performance. The NTP server must be configured in manycast mode in order for a manycast client to work with it.
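These client modes map to directives in /etc/ntp.conf. A minimal sketch, one mode at a time (the multicast and manycast group addresses are illustrative):

```
# /etc/ntp.conf fragments -- use one mode at a time

# Polling client: query a defined server directly
server 0.centos.pool.ntp.org iburst

# Broadcast client: listen for broadcasts on the local subnet
broadcastclient

# Multicast client: listen on a multicast group address
multicastclient 224.0.1.1

# Manycast client: discover servers via a manycast group address
manycastclient 239.192.0.1
```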
Stratum Levels
There are different types of time sources available to synchronize the system time. These time sources are categorized hierarchically into multiple levels, which are referred to as stratum levels based on their distance from the reference clock.
The reference clocks operate at stratum level 0. Besides stratum 0, there are fifteen additional stratum levels that range between 1 and 15. A stratum 0 device cannot be used on the network directly. It is attached to one of the computers via an RS-232 connection, and that computer is then configured to operate at stratum 1. Servers functioning at stratum 1 are called timeservers (or primary timeservers) and they can be set up to provide time to stratum 2 servers over a network via NTP packets. Similarly, a stratum 3 server can be configured to synchronize its time with a stratum 2 server, and so on. Servers sharing the same stratum can be configured as peers to exchange time updates with each other.
Managing Network Time Protocol
This section discusses the management tasks, including installing the NTP software; configuring an NTP server, peer and client; configuring a broadcast NTP server and client using a combination of manual file editing and commands; and testing the configurations.
NTP Packages and Utilities
There is only one required software package that needs to be installed on the system for NTP. This package is called "ntp" and includes all the necessary support to configure the system as an NTP server, peer or client. Additionally, a package called "ntpdate" may also be installed to get access to a command that is used to update the system clock from an NTP server without the involvement of the ntpd daemon.
[root@server1 ~]# yum list installed |grep ^ntp
ntp.x86_64 4.2.6p5-22.el7.centos.2 @updates
ntpdate.x86_64 4.2.6p5-22.el7.centos.2 @updates
[root@server1 ~]#
These packages bring several administration commands, some of which are described below.
- ntpdate Updates the system date and time immediately. Deprecated; use ntpd -q instead.
- ntpq Queries the NTP daemon.
- ntpd NTP daemon program that must run on a system to use it as a server, peer or client.
- ntpstat Shows time synchronization status.
NTP Configuration File
The key configuration file is /etc/ntp.conf. It can be modified by hand and is the only file that needs to be edited for an NTP server, peer or client configuration.
Use Pre-Defined NTP Polling Client
By default, the NTP software comes pre-configured for use as an NTP client. The configuration file /etc/ntp.conf already has four public NTP server entries. You will activate the NTP service and check to ensure that it is functional.
Install the NTP software.
[root@server2 ~]# yum install ntp
Package ntp-4.2.6p5-22.el7.centos.2.x86_64 already installed and latest version
Nothing to do
[root@server2 ~]#
Ensure that the public NTP entries are in /etc/ntp.conf.
[root@server2 ~]# grep ^server /etc/ntp.conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
[root@server2 ~]#
Enable the ntpd daemon to autostart at reboots.
[root@server2 ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@server2 ~]#
Start the ntp service and check its status.
[root@server2 ~]# systemctl start ntpd
[root@server2 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2016-07-08 17:40:49 CEST; 6s ago
Process: 3821 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 3822 (ntpd)
CGroup: /system.slice/ntpd.service
└─3822 /usr/sbin/ntpd -u ntp:ntp -g
Jul 08 17:40:49 server2 ntpd[3822]: Listen normally on 7 lo ::1 UDP 123
Jul 08 17:40:49 server2 ntpd[3822]: Listen normally on 8 eth0 2602:306:cc2d:f591::b UDP 123
Jul 08 17:40:49 server2 ntpd[3822]: Listen normally on 9 eth0 fe80::5054:ff:fe7b:595a UDP 123
Jul 08 17:40:49 server2 ntpd[3822]: Listening on routing socket on fd #26 for interface updates
Jul 08 17:40:49 server2 systemd[1]: Started Network Time Service.
Jul 08 17:40:49 server2 ntpd[3822]: 0.0.0.0 c016 06 restart
Jul 08 17:40:49 server2 ntpd[3822]: 0.0.0.0 c012 02 freq_set kernel -0.061 PPM
Jul 08 17:40:50 server2 ntpd[3822]: 0.0.0.0 c61c 0c clock_step +1.408934 s
Jul 08 17:40:52 server2 ntpd[3822]: 0.0.0.0 c614 04 freq_mode
Jul 08 17:40:53 server2 ntpd[3822]: 0.0.0.0 c618 08 no_sys_peer
[root@server2 ~]#
Check whether the system is bound to the NTP servers.
[root@server2 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
+ntp2.edutel.nl 80.94.65.10 2 u 6 64 3 9.663 1.286 1.046
+db.communibase. 193.79.237.14 2 u 7 64 3 10.360 3.902 0.940
ntp.newfx.nl .STEP. 16 u - 64 0 0.000 0.000 0.000
*37.97.195.195 193.79.237.14 2 u 10 64 3 11.719 4.271 1.141
[root@server2 ~]#
The above output indicates that the ntpd daemon on server2 is currently bound to an NTP server 37.97.195.195.
Configure NTP Server and Polling Client
Exercise for server1 (NTP server) and server2 (NTP client). Server1 will be set up as an NTP server and sync time to its local clock and provide time to clients on the network. Server2 will be configured as a polling client to obtain time from server1.
Install the NTP software on server1.
[root@server1 ~]# yum install ntp Package ntp-4.2.6p5-22.el7.centos.2.x86_64 already installed and latest version Nothing to do [root@server1 ~]#
Comment out all server entries from /etc/ntp.conf and add a new one with 127.127.1.0.
[root@server1 ~]# grep server /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0
[root@server1 ~]#
Enable the NTP service to start at reboots, open port 123 in the firewall and start the ntpd daemon.
[root@server1 ~]# systemctl enable ntpd
[root@server1 ~]# firewall-cmd --permanent --add-service ntp
success
[root@server1 ~]# firewall-cmd --reload
success
[root@server1 ~]# systemctl stop ntpd
[root@server1 ~]# systemctl start ntpd
[root@server1 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2016-07-09 11:33:35 CEST; 6s ago
Process: 27745 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 27746 (ntpd)
CGroup: /system.slice/ntpd.service
└─27746 /usr/sbin/ntpd -u ntp:ntp -g
Jul 09 11:33:34 server1 ntpd[27746]: Listen normally on 7 eth0 2602:306:cc2d:f591::a UDP 123
Jul 09 11:33:34 server1 ntpd[27746]: Listen normally on 8 team0 fe80::5054:ff:fe6a:f7a4 UDP 123
Jul 09 11:33:34 server1 ntpd[27746]: Listen normally on 9 eth3 fe80::5054:ff:fe6a:f7a4 UDP 123
Jul 09 11:33:34 server1 ntpd[27746]: Listen normally on 10 eth1 fe80::5054:ff:fe1f:6538 UDP 123
Jul 09 11:33:34 server1 ntpd[27746]: Listen normally on 11 eth0 fe80::5054:ff:fe18:5661 UDP 123
Jul 09 11:33:34 server1 ntpd[27746]: Listening on routing socket on fd #28 for interface updates
Jul 09 11:33:34 server1 ntpd[27746]: 0.0.0.0 c016 06 restart
Jul 09 11:33:34 server1 ntpd[27746]: 0.0.0.0 c012 02 freq_set kernel 14.150 PPM
Jul 09 11:33:35 server1 systemd[1]: Started Network Time Service.
Jul 09 11:33:35 server1 ntpd[27746]: 0.0.0.0 c515 05 clock_sync
[root@server1 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 5 l 9 64 1 0.000 0.000 0.000
[root@server1 ~]#
The above output shows that the ntpd daemon on server1 is using its own clock as the timeserver.
Disable the server directives in the /etc/ntp.conf file on server2 and add the following to use server1 as a timeserver.
[root@server2 ~]# grep server /etc/ntp.conf
# Use public servers from the pool.ntp.org project.
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server server1.roggeware.nl
[root@server2 ~]#
Restart ntpd and check the status of binding with ntpq.
[root@server2 ~]# systemctl restart ntpd
[root@server2 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*server1 LOCAL(0) 6 u 3 64 1 0.524 -2.423 0.000
[root@server2 ~]# ntpstat
synchronised to NTP server (192.168.122.101) at stratum 7
time correct to within 8389 ms
polling server every 64 s
[root@server2 ~]#
Configure an NTP Peer
Configure host1 as a peer of NTP server server1 and test the configuration.
Install the NTP software on host1.
[root@atlas ~]# yum install ntp
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-22.el7.centos.1 will be updated
---> Package ntp.x86_64 0:4.2.6p5-22.el7.centos.2 will be an update
--> Processing Dependency: ntpdate = 4.2.6p5-22.el7.centos.2 for package: ntp-4.2.6p5-22.el7.centos.2.x86_64
--> Running transaction check
---> Package ntpdate.x86_64 0:4.2.6p5-22.el7.centos.1 will be updated
---> Package ntpdate.x86_64 0:4.2.6p5-22.el7.centos.2 will be an update
--> Finished Dependency Resolution

Dependencies Resolved
================================================================================
 Package      Arch       Version                    Repository    Size
================================================================================
Updating:
 ntp          x86_64     4.2.6p5-22.el7.centos.2    updates       544 k
Updating for dependencies:
 ntpdate      x86_64     4.2.6p5-22.el7.centos.2    updates        84 k

Transaction Summary
================================================================================
Upgrade 1 Package (+1 Dependent package)
[root@atlas ~]#
Comment out all server directives from /etc/ntp.conf and add the peer directive with hostname server1.
[root@atlas ~]# egrep "peer|server" /etc/ntp.conf
restrict default nomodify notrap nopeer noquery
# Use public servers from the pool.ntp.org project.
#server 0.nl.pool.ntp.org iburst
#server 1.nl.pool.ntp.org iburst
#server 2.nl.pool.ntp.org iburst
#server 3.nl.pool.ntp.org iburst
peer server1.roggeware.nl
[root@atlas ~]#
Enable the NTP service and open UDP port 123 in the firewall.
[root@atlas ~]# systemctl enable ntpd
[root@atlas ~]# firewall-cmd --permanent --add-service ntp
success
[root@atlas ~]# firewall-cmd --reload
success
[root@atlas ~]#
Restart the ntpd daemon and check its status.
[root@atlas ~]# systemctl restart ntpd
[root@atlas ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*server1 LOCAL(0) 6 u 6 64 1 0.544 0.140 0.000
[root@atlas ~]#
Configure a Broadcast Server and Client
server2 will be set up as an NTP client to obtain time from the original four NTP servers and broadcast time to devices on the local network. Open UDP port 123 in the firewall to allow NTP traffic to pass through. Configure host1 as a broadcast client to get time from the broadcasts. It is assumed that the NTP software is already installed.
Ensure that the server directives as defined in /etc/ntp.conf on server2 are as shown below:
server 0.nl.pool.ntp.org iburst
server 1.nl.pool.ntp.org iburst
server 2.nl.pool.ntp.org iburst
server 3.nl.pool.ntp.org iburst
broadcast 192.168.1.255
Enable the NTP service, add UDP port 123 to the firewall configuration, restart ntpd and check its status.
[root@server2 ~]# systemctl enable ntpd
[root@server2 ~]# firewall-cmd --permanent --add-service ntp
success
[root@server2 ~]# firewall-cmd --reload
success
[root@server2 ~]# systemctl restart ntpd
[root@server2 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
pomo.komputilo. 195.13.23.5 3 u 1 64 1 15.685 0.132 0.000
+ntp1.monshouwer 193.79.237.14 2 u 1 64 1 13.622 0.610 0.215
services.freshd .STEP. 16 u - 64 0 0.000 0.000 0.000
*146.185.139.19 193.67.79.202 2 u 1 64 1 11.285 1.905 0.000
192.168.1.255 .BCST. 16 u - 64 0 0.000 0.000 0.000
[root@server2 ~]#
The above output shows that the ntpd daemon on server2 is using the public NTP servers as providers of time. It also shows that this server is broadcasting time to devices on the 192.168.1.0 network.
Disable the server directives in the /etc/ntp.conf file on host1 and add the broadcastclient directive as shown.
#server 0.nl.pool.ntp.org iburst
#server 1.nl.pool.ntp.org iburst
#server 2.nl.pool.ntp.org iburst
#server 3.nl.pool.ntp.org iburst
#peer server1.roggeware.nl
broadcastclient # broadcast client
[root@atlas ~]# systemctl restart ntpd
[root@atlas ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*server2b 195.191.113.251 3 u 52 64 16 0.058 -0.064 0.086
[root@atlas ~]#
Overview of System-Config-Date Tool
The NTP client service can also be set up using the graphical system-config-date tool. This tool is not installed by default.
Run the following to install it.
[root@atlas ~]# yum install system-config-date
Package system-config-date-1.10.6-2.el7.centos.noarch already installed and latest version
Nothing to do
[root@atlas ~]#
To run this tool, execute system-config-date in an X terminal window. A graphical window appears where you can configure NTP servers and have the ntpdate command run immediately.
Update System Clock Manually
You can run the ntpdate command at any time to bring the system clock close to the time on an NTP server. The NTP service must not be running in order for this command to work. Run ntpdate manually and specify either the hostname or the IP address of the remote timeserver.
For example, to bring the clock on server1 on par with the clock on server2, run the following on server1.
[root@server1 ~]# systemctl stop ntpd
[root@server1 ~]# ntpdate server2
11 Jul 16:43:26 ntpdate[7284]: adjust time server 192.168.122.102 offset -0.255794 sec
[root@server1 ~]# systemctl start ntpd
[root@server1 ~]#
Querying NTP Servers
The ntpq command is used for querying NTP servers. Option -p prints a list of NTP servers known to the system, along with a summary of their status.
[root@server2 ~]# ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
xwww.elandsgrach 193.67.79.202 2 u 82 256 377 6.703 -104.78 7.887
+x.ns.gin.ntt.ne 249.224.99.213 2 u 23 256 377 10.932 -0.341 3.268
tt52.ripe.net .INIT. 16 u - 1024 0 0.000 0.000 0.000
*ntp1.monshouwer 193.79.237.14 2 u 101 256 377 12.794 9.196 2.620
192.168.122.255 .BCST. 16 u - 64 0 0.000 0.000 0.000
[root@server2 ~]#
This command produces ten columns of output.
- remote Shows IP addresses or hostnames of NTP servers and peers. Each IP/hostname may be preceded by one of the following characters:
- * Indicates the current source of synchronisation.
- # Indicates the server selected for synchronisation, but the distance exceeds the maximum.
- o Displays the server selected for synchronisation.
- + Indicates a system considered for synchronisation.
- x Designates a false ticker by the intersection algorithm.
- . Indicates systems picked up from the end of the candidate list.
- - Indicates systems not considered for synchronisation.
- blank Indicates the server is rejected because of a high stratum level or failed sanity checks.
- refid Shows a reference ID for each time server.
- st Displays stratum level. 16 indicates an invalid level.
- t Shows available types; l=local, u=unicast, m=multicast, b=broadcast and -=netaddr.
- when Displays the time, in seconds, since a response was last received from the server.
- poll Shows a polling interval. Default is 64 seconds.
- reach Expresses, as an octal number, the success of the last eight attempts to reach the server.
- 001 Only the most recent probe was answered.
- 357 One probe was unanswered.
- 377 All eight recent probes were answered.
- delay Indicates the length of time, in milliseconds, it took for the reply packet to return in response to a query sent to the server.
- offset Shows a time difference, in milliseconds, between server and client clocks.
- jitter Displays a variation of offset measurement between samples. This is an error-bound estimate.
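As a sketch of how the reach field works, the following hypothetical helper (not part of the ntp package) expands a reach value into its eight-poll bitmap:

```shell
#!/bin/bash
# reach_bits: expand an ntpq "reach" value (an octal bitmap of the last
# eight polls) into binary. 1 = probe answered, 0 = probe missed.
reach_bits() {
  local dec=$((8#$1))   # interpret the argument as octal
  local bits=""
  for i in 7 6 5 4 3 2 1 0; do
    bits+=$(( (dec >> i) & 1 ))
  done
  echo "$bits"
}

reach_bits 377   # all eight recent probes answered
reach_bits 357   # one probe missed
reach_bits 001   # only the most recent probe answered
```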
man ntp.conf.
ntpd uses UDP over well-known port 123.
NTP roles: Primary NTP server, Secondary NTP server, NTP peer, NTP client.
NTP Client can be configured as a polling, broadcast, multicast or manycast client.
Packages: ntp and ntpdate. Commands: ntpdate, ntpq, ntpd and ntpstat.
NTP configuration file: /etc/ntp.conf. Key directives: driftfile, logfile, restrict, server, peer, broadcast, crypto, includefile, keys. See man ntp.conf. firewall-cmd --permanent --add-service ntp; firewall-cmd --reload
Package system-config-date provides the system-config-date tool. The output of ntpq -p is important. Commands:
ntpdc ntpdate ntpstat
Hoofdstuk 17 Working with Firewalld and Kerberos
Firewalld is a new way of interacting with iptables rules. It allows the administrator to enter new security rules and activate them during runtime without disconnecting existing connections.
Network Address Translation is a feature that enables a system on the internal network to access the Internet via an intermediary device. IP masquerading, in contrast, enables more than one system on the internal network to access the Internet via an intermediary device. In either case, the systems' IP addresses on the internal network are concealed from the outside world and only one IP address is seen: that of the intermediary device.
Kerberos is a client/server authentication protocol that works on the basis of digital tickets to allow systems to communicate over non-secure networks. Kerberos uses a combination of Kerberos services and encrypted keys for the implementation of a secure authentication mechanism on the network.
Understanding Firewalld
RHEL7 has introduced an improved mechanism for security rules management called firewalld (dynamic firewall). One of the primary reasons for adding support for firewalld is its ability to activate changes dynamically without disconnecting established connections.
Firewalld supports the D-BUS implementation and brings the concept of network zones to manage security rules. Everything in firewalld is related to one or more zones. Iptables does not have a daemon process, as it is implemented purely in kernel space. Only one of the two can be active at a time.
Firewalld configuration is stored in the /etc/firewalld directory and can be customized as desired. The userland management tools are the firewall-cmd command and the graphical tool called firewall-config. In addition, firewalld allows us to create and modify zone and service definitions by hand and activate them as desired.
Network Zones
Firewalld zones classify incoming network traffic for simplified firewall management. Zones define the level of trust for network connections based on criteria such as the source IP or the network interface of incoming traffic. Inbound traffic is checked against the zone settings and handled appropriately as per the rules configured in the zone. Each zone can have its own list of services and ports that are opened or closed.
Firewalld provides nine zones by default. These system-defined zone files are XML-formatted and are located in the /usr/lib/firewalld/zones directory. The public zone is the default zone.
[root@atlas zones]# ls -l /etc/firewalld/zones/
total 32
-rw-r--r--. 1 root root 424 Oct 16 2015 home.xml
-rw-r--r--. 1 root root 424 Oct 6 2015 home.xml.old
-rw-r--r--. 1 root root 415 Oct 16 2015 internal.xml
-rw-r--r--. 1 root root 415 Oct 6 2015 internal.xml.old
-rw-r--r--. 1 root root 590 Jul 11 22:35 public.xml
-rw-r--r--. 1 root root 562 Jul 11 14:23 public.xml.old
-rw-r--r--. 1 root root 342 Oct 16 2015 work.xml
-rw-r--r--. 1 root root 342 Oct 6 2015 work.xml.old
[root@atlas zones]#
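A zone file is a short XML document. A minimal sketch of a hypothetical user-defined zone (file name, description and source network are illustrative; the referenced service names must exist as service files):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- /etc/firewalld/zones/myzone.xml -- hypothetical user-defined zone -->
<zone>
  <short>MyZone</short>
  <description>Traffic from the lab network.</description>
  <source address="192.168.3.0/24"/>
  <service name="ssh"/>
  <service name="ntp"/>
</zone>
```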
Each zone on the system may have one or more interfaces assigned to it. When a service request arrives, firewalld checks whether it matches a zone by the IP it originated from (the source network) or the network interface it is coming through. If so, it binds the request to that zone; otherwise it binds the request to the default zone.
Services
Services are an essential component of firewalld zones. In fact, using services in zones is the preferred method for firewalld configuration and management. Service configuration is stored in separate XML files located in the /usr/lib/firewalld/services and /etc/firewalld/services directories for system- and user-defined services respectively. The configuration files in the user-defined service directory take precedence over the ones located in the other directory.
A service typically contains a port number, protocol, and an IP address.
Ports can also be defined directly without using the service configuration technique. In essence, defining network ports does not require the presence of a service or a service configuration file.
[root@atlas services]# ls -l /usr/lib/firewalld/services
total 216
-rw-r-----. 1 root root 412 Nov 20 2015 amanda-client.xml
-rw-r-----. 1 root root 320 Nov 20 2015 bacula-client.xml
-rw-r-----. 1 root root 346 Nov 20 2015 bacula.xml
-rw-r-----. 1 root root 305 Nov 20 2015 dhcpv6-client.xml
-rw-r-----. 1 root root 234 Nov 20 2015 dhcpv6.xml
-rw-r-----. 1 root root 227 Nov 20 2015 dhcp.xml
-rw-r-----. 1 root root 346 Nov 20 2015 dns.xml
...
-rw-r-----. 1 root root 211 Nov 20 2015 transmission-client.xml
-rw-r-----. 1 root root 593 Nov 20 2015 vdsm.xml
-rw-r-----. 1 root root 475 Nov 20 2015 vnc-server.xml
-rw-r-----. 1 root root 310 Nov 20 2015 wbem-https.xml
[root@atlas services]#
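A service file follows the same pattern. A minimal sketch of a hypothetical user-defined service (name and port are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- /etc/firewalld/services/myapp.xml -- hypothetical user-defined service -->
<service>
  <short>MyApp</short>
  <description>Internal web application on TCP port 8080.</description>
  <port protocol="tcp" port="8080"/>
</service>
```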
Direct Interface and Rich Language
Firewalld offers the possibility to pass security rules directly to iptables using the direct interface mode, but these rules are not persistent. To address this problem, firewalld provides support for a high-level language, called the rich language, that allows us to build complex rules without knowledge of the iptables syntax.
The rich language uses several elements to set rules and name them. These elements include a source address or range with an appropriate netmask, destination address or range, service name, port number or range, protocol, masquerade (enable or disable), forward-port (destination port or range to divert traffic to), log and log level, and an action (accept, reject, drop).
Network Address Translation and IP Masquerading
Network Address Translation (NAT) refers to the process of altering the IP address of a source or destination network that is enclosed in a datagram packet header while it passes through a device that supports this type of modification. In other words, NAT allows a system on the internal network (home or corporate network) to access an external network (the internet) using a single registered IP address configured on an intermediary device (a router or firewall).
IP Masquerading is a variant of NAT and it allows several systems on the internal network (192.168.0.0) to access the Internet using that single IP of the intermediary device.
Port Forwarding
We may have to redirect inbound traffic on a port to an application listening on that port on an internal system. This port is defined on the intermediary device (a router, or the netfilter module on RHEL). For example, to allow external access to the HTTP service listening on port 8080 on an internal system, both the internal system's IP and the port number are defined on the intermediary device to ensure inbound requests are forwarded to the desired destination. This feature is referred to as port forwarding or port mapping.
Managing Firewalld
Firewalld Commands
firewall-cmd --state Check if firewalld is running. firewall-cmd --reload Reload the permanent rules.
systemctl status firewalld Check if firewalld is running. systemctl restart firewalld Restart the service.
Firewall Command Options for Zone Management
firewall-cmd --get-default-zone or --set-default-zone
--get-active-zones or --get-zones
--list-all or --list-all-zones
--new-zone or --delete-zone
--permanent Used to make a permanent change. Creates or updates the appropriate zone files.
--zone Used for operations on a non-default zone.
Firewall Command Options for Service Management
firewall-cmd --get-services Displays available services.
--list-services List services for a zone.
--query-service Tells whether a service is added.
--add-service Adds a service to the zone.
--remove-service Removes a service from a zone.
--new-service Adds a new service.
--delete-service Deletes an existing service.
--zone Used for operations on a non-default zone.
Firewall Command Options for Port Management
firewall-cmd --list-ports Lists ports added to a zone.
--add-port Adds a port to a zone.
--remove-port Removes a port from a zone.
--query-port Checks whether a port is added to a zone.
--permanent Used with the add and remove options for persistence.
--zone Used for operations on a non-default zone.
Firewall Command Options for Using Rich Language Rules
firewall-cmd --list-rich-rules
--add-rich-rule
--remove-rich-rule
--query-rich-rule
--permanent
--zone
Add a persistent rich rule to the default zone to allow inbound HTTP access from network 192.168.3.0/24. This rule should log messages with prefix "HTTP Allow rule" at the info level.
firewall-cmd --permanent --add-rich-rule \
'rule family="ipv4" source address="192.168.3.0/24" service name="http" log prefix="HTTP Allow Rule" level="info" accept'
Firewalld Command Options for Masquerade Management
firewall-cmd --add-masquerade Adds a masquerade to a zone.
--remove-masquerade
--query-masquerade
--permanent
--zone
Add masquerading support to the external zone:
firewall-cmd --add-masquerade --zone external
Firewalld Command Options for Port Forwarding
firewall-cmd --list-forward-ports
--add-forward-port
--remove-forward-port
--query-forward-port
--permanent
--zone
Forward inbound telnet traffic to port 1000 on the same system:
firewall-cmd --zone external --add-forward-port port=23:proto=tcp:toport=1000 --permanent
Forward inbound ftp traffic to port range 1001 to 1005 on the same system:
firewall-cmd --zone external --permanent --add-forward-port port=21:proto=tcp:toport=1001-1005
Forward inbound smtp traffic to the same port number but to IP 192.168.0.121:
firewall-cmd --zone external --permanent --add-forward-port port=25:proto=tcp:toaddr=192.168.0.121
Forward inbound tftp traffic to 192.168.0.121:1010
firewall-cmd --zone external --permanent --add-forward-port port=69:proto=tcp:toport=1010:toaddr=192.168.0.121
Firewalld Command Summary
firewall-config Firewalld GUI configuration tool.
firewall-cmd --state, --get-default-zone, --get-active-zones, --get-zones, --list-all, --list-all-zones
--list-all --zone public
--new-zone testzone --permanent, --delete-zone testzone --permanent
--get-services
--list-services
--query-service
--list-ports --add-port --remove-port --query-port --permanent --zone
--list-rich-rules --add-rich-rule --remove-rich-rule --query-rich-rule --permanent --zone
--add-masquerade --remove-masquerade --query-masquerade --permanent --zone
--list-forward-ports --add-forward-port --remove-forward-port --query-forward-port --permanent --zone
Firewalld Files
/etc/firewalld Firewalld configuration files.
/etc/firewalld/zones User-defined zones.
/etc/firewalld/services User-defined services.
/usr/lib/firewalld/zones System-defined zones.
/usr/lib/firewalld/services System-defined service configuration.
/var/log/messages /var/log/secure
Understanding and Managing Kerberos
Kerberos uses port 88 for general communication and port 749 for the administration of the Kerberos database via commands such as kadmin and kpasswd. The Kerberos ticketing system relies heavily on resolving hostnames and on accurate timestamps to issue and expire tickets. Therefore, it requires adequate clock synchronisation and a working DNS (or an accurate /etc/hosts) to function correctly.
Terminology
- Authentication: The process of verifying the identity of a user or service.
- Authentication Service (AS): A service that runs on the Key Distribution Center (KDC) server to authenticate clients and issue initial tickets.
- Client: A user or service (such as NFS or Samba) that requests the issuance of tickets to use network services.
- Credentials: A ticket along with relevant encryption keys.
- Principal: A verified client (user or service) that is recorded in the KDC database and to which the KDC can assign tickets.
- Realm: The administrative territory of a KDC, with one or more KDCs and several principals.
- Service Host: A system that runs a kerberized service that clients can use.
- Session key: An encrypted key that is used to secure communication among clients, KDCs and service hosts.
- Service Ticket: An encrypted digital certificate used to authenticate a user to a specific network service. It is issued by the TGS after validating a user's TGT, and it contains a session key, the principal name, an expiration time, and more.
- Ticket Granting Service (TGS): A service that runs on the KDC to generate and issue service tickets to clients.
- Ticket Granting Ticket (TGT): An initial encrypted digital certificate that is used to identify the client to the TGS at the time of requesting service tickets. It is issued by the AS after validating the client's presence in the KDC database.
How Kerberos Authenticates clients
The Kerberos authentication process can be separated into three stages: an initial stage of getting a TGT (the passport), a service stage to obtain a service ticket (the visa), and access to the service (travel to the visa-issuing country).
A user contacts the AS for initial authentication via the kinit command. The AS asks for the user's password, validates it, and generates a TGT for the user. The AS also produces a session key using the user's password. The AS returns the credentials (TGT plus session key) to the user, and the credentials are saved in the client's credential cache.
Later, when the user needs to access a service running on a remote service host, they send the TGT and the session key to the TGS, asking it to grant the desired access. The TGS verifies the user's credentials by decrypting the TGT, assembles a service ticket for the desired service, and encrypts it with the host's secret key. It transmits the service ticket to the user along with a session key. The user stores the service ticket in their credential cache. The user then presents these credentials to the service host, which decrypts the service ticket with its secret key and validates the user's identity and authorization to access the service. The user is then allowed to access the service.
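The three stages map onto a handful of client-side commands. A minimal walk-through (the realm and user names are illustrative):

```shell
# Stage 1: obtain a TGT from the AS (prompts for the user's password)
kinit user1@EXAMPLE.COM

# Inspect the credential cache; the TGT appears as
# krbtgt/EXAMPLE.COM@EXAMPLE.COM
klist

# Stages 2 and 3 happen implicitly when a kerberized service is used
# (e.g. ssh with GSSAPI); the service ticket then also shows up in klist

# Destroy all cached tickets when done
kdestroy
```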
Kerberos Packages and Utilities
Packages krb5-server krb5-workstation
kinit Obtains and caches a TGT.
kdestroy Destroys tickets stored in the credential cache.
klist Lists cached tickets.
kpasswd Changes a principal's password.
kadmin Administers the Kerberos database via the kadmind daemon.
kadmin.local Same as kadmin, but performs operations directly on the KDC database.
Configure a Client to Authenticate Using Kerberos
Install the required package with #yum install krb5-workstation and ensure that /etc/krb5.conf has the following directives set:
[libdefaults]
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
default_realm = EXAMPLE.COM
default_ccache_name = KEYRING:persistent:%{uid}
[realms]
EXAMPLE.COM = {
kdc = kerberos.example.com
admin_server = kerberos.example.com
}
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
Login as the root principal (assumed to be added as part of the Kerberos server setup) and add server1 as a host principal to the KDC database:
#kadmin -p root/admin
kadmin: addprinc -randkey host/server1.example.com
While logged in, extract the principal's key and store it locally in a keytab file called krb5.keytab in the /etc directory.
kadmin: ktadd -k /etc/krb5.keytab host/server1.example.com
Activate the use of Kerberos for authentication:
#authconfig --enablekrb5 --update
Edit the /etc/ssh/ssh_config client configuration file and ensure the following two lines are set as shown:
GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes
Login as user1 and execute the kinit command to obtain a TGT from the KDC. Enter the password for user1 when prompted:
$kinit
Password for user1@EXAMPLE.COM:
List the TGT details received in the previous step:
$klist
Hoofdstuk 18 Tuning Kernel Parameters, Reporting System Usage and Logging Remotely
Understanding and Tuning Kernel Parameters
Run-Time Parameters
Run-time parameters control the kernel behaviour while the system is in operation. The current list of active run-time parameters may be viewed with the command sysctl -a.
Runtime values for these parameters are stored in various files located under sub-directories of the /proc/sys directory and can be altered on the fly by changing the associated files. Temporary changes can be accomplished with the sysctl or echo command. To make a change survive across system reboots, the value must be defined in the /etc/sysctl.conf file or in a file under the /etc/sysctl.d directory.
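As a sketch, the three ways of working with a run-time parameter (using net.ipv4.ip_forward as the example; requires root) look like this:

```shell
# Read the current value: two equivalent views
sysctl net.ipv4.ip_forward
cat /proc/sys/net/ipv4/ip_forward

# Change it on the fly; this is lost at the next reboot
sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/ipv4/ip_forward

# Persist the change across reboots via a drop-in file, then load it
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-ipforward.conf
sysctl -p /etc/sysctl.d/90-ipforward.conf
```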
Boot-Time Parameters
Boot-time parameters, also referred to as command-line options, affect the boot behaviour of the kernel. Their purpose is to pass any hardware specific information that the kernel would not be able to determine automatically. Boot-time parameters are supplied to the kernel via the GRUB2 interface. The entire boot string along with the command-line options can be viewed after boot with cat /proc/cmdline. Defaults are stored in /boot/grub2/grub.cfg.
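For example, inspecting the current options and persistently adding one (the ipv6.disable option is just an illustration):

```shell
# Show the options the running kernel was booted with
cat /proc/cmdline

# Append a boot-time option to all kernel menu entries with grubby;
# it takes effect at the next reboot
grubby --update-kernel=ALL --args="ipv6.disable=1"
```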
Generating System Usage Reports
The sysstat Toolset
The sysstat toolset includes several additional monitoring and performance reporting commands such as cifsiostat, iostat, mpstat, nfsiostat, pidstat, sadf and sar. The sysstat service references two configuration files, sysstat and sysstat.ioconf, located in the /etc/sysconfig directory.
In addition to the two configuration files, a cron job file, /etc/cron.d/sysstat, is available.
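Once the sysstat service and its cron job are active, sar can report both live and historical data. Two typical invocations (the daily file name sa24 is an example; the number matches the day of the month):

```shell
# Live CPU utilization: 3 samples at 2-second intervals
sar -u 2 3

# Historical memory usage from a daily data file collected by the cron job
sar -r -f /var/log/sa/sa24
```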
The dstat Tool
The dstat package includes a single monitoring and reporting tool, which is called dstat.
Logging System Messages Remotely
Local and remote logging is supported by the rsyslogd service. Configuration files are /etc/rsyslog.conf and the /etc/rsyslog.d directory.
Configure a System as a Loghost
Open /etc/rsyslog.conf and uncomment the following two directives:
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
Add TCP port 514 to the default firewalld zone, and load the new rule:
#firewall-cmd --permanent --add-port 514/tcp
#firewall-cmd --reload
Set the correct SELinux port type on TCP port 514:
#semanage port -a -t syslogd_port_t -p tcp 514
Enable and restart the rsyslog service:
#systemctl enable rsyslog
#systemctl restart rsyslog
Configure a System as a Loghost Client
Open /etc/rsyslog.conf file and add the following to the bottom of the file:
*.* @@192.168.0.120:514
Set the rsyslog service to autostart at each system reboot, restart rsyslog, and check its operating state:
#systemctl enable rsyslog
#systemctl restart rsyslog
#systemctl status rsyslog
Generate a custom log message:
#logger -i "This is a test message from root on server 1"
Log on to the loghost and tail the /var/log/messages file:
#tail /var/log/messages
...
sysctl -a, sysctl -p
/proc/sys
echo 18 >/proc/sys/...
/etc/sysctl.conf /etc/sysctl.d /usr/lib/sysctl.d/00-system.conf
/boot/grub2/grub.cfg /proc/cmdline Boot-Time parameters aka command-line options
df, vmstat, top
Package sysstat: cifsiostat, iostat, mpstat, nfsiostat, pidstat, sa1, sa2, sadc, sadf, sar /etc/sysconfig/sysstat /etc/sysconfig/sysstat.ioconf /etc/cron.d/sysstat
Package dstat, dstat
Chapter 19 Sharing Block Storage with iSCSI
iSCSI is a storage networking protocol used to share a computer's local storage with remote clients using the SCSI command set over an existing IP network infrastructure. The client sees the shared storage as a locally attached hard disk and can use any available tool to manage it.
Understanding the iSCSI Protocol
The Internet Small Computer System Interface (iSCSI) is a storage networking transport protocol that carries SCSI commands over IP networks, including the internet.
Unlike the NFS and CIFS protocols that are used for network file sharing, iSCSI presents the network storage to clients as a local raw block disk drive. In iSCSI nomenclature, a storage server is referred to as a target and a client is referred to as an initiator.
Terminology
The iSCSI technology has several terms. The most important terms are described below.
- ACL: An ACL (Access Control List) controls an iSCSI client's access to target LUNs.
- Addressing: iSCSI assigns a unique address to each target server. It supports multiple addressing formats. The IQN (iSCSI Qualified Name) is most common.
- Alias: An alias is an optional string of up to 255 characters that may be defined to give a description to an iSCSI LUN.
- Authentication: Authentication allows initiators and targets to prove their identity at the time of discovery and normal access. CHAP-based (Challenge-Handshake Authentication Protocol) methods use usernames and passwords but avoid transmitting passwords over the network; they are referred to as CHAP initiator authentication and mutual CHAP authentication. The third option, demo mode, is the default and disables the authentication feature.
- Backstore: A backstore is a local storage resource that serves as the backend for the LUN presented to the initiator. A backstore can be any physical or virtual disk (block) or a plain file (fileio) or a ramdisk image.
- Initiator: An initiator is a client system that accesses LUNs presented by a target server. Initiators are either software- or hardware-driven. A software initiator is a kernel module that uses the iSCSI protocol to emulate a discovered LUN as a block SCSI disk. A hardware initiator uses a dedicated piece of hardware called an HBA. An HBA offloads system processors, resulting in improved system performance.
- iSNS: An iSNS (Internet Storage Name Service) is a protocol that is used by an initiator to discover shared LUNs.
- LUN: A LUN (Logical Unit Number) represents a single addressable logical SCSI disk that is exported on the target server.
- Node: A node is a single discoverable object on the iSCSI SAN. It may represent a target server or an initiator. A node is identified by its IP address or a unique iSCSI address.
- Portal: A portal is a combination of an IP address and TCP port that a target server listens on and initiators connect to. iSCSI uses TCP port 3260 by default.
- Target: A target is a server that emulates a backstore as a LUN for use by an initiator over an iSCSI SAN. A target may be a dedicated hardware RAID array or a RHEL server with appropriate software support loaded.
- TPG: A TPG (Target Portal Group) represents one or more network portals assigned to a target LUN for running iSCSI sessions for that LUN.
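The IQN addressing format mentioned above follows the pattern iqn.<yyyy-mm>.<reversed-domain>:<identifier>. A small sketch that checks this shape (the regular expression is a rough approximation, not the full RFC 3720 grammar):

```shell
# Example IQN from this chapter
iqn="iqn.2016-01.com.example.server2:iscsidisk2"

# Rough structural check: iqn. + yyyy-mm + . + reversed domain + : + identifier
if echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:[A-Za-z0-9._-]+$'; then
    echo "well-formed"    # prints "well-formed" for the example above
else
    echo "malformed"
fi
```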
Packages
A single package, targetcli, needs to be installed on the target server to provide the iSCSI target functionality. On the client side, the iscsi-initiator-utils package is installed; it brings the iscsiadm management command and the /etc/iscsi/iscsid.conf file.
Managing iSCSI Target Server and Initiator
Managing iSCSI on the target servers involves setting up a backstore, building an iSCSI target on the backstore, assigning a network portal, creating a LUN, exporting the LUN, establishing an ACL, and saving the configuration.
Managing iSCSI on the initiator involves discovering a target server for LUNs, logging on to discovered target LUNs, and using disk management tools to partition, format and mount the LUNs.
Understanding the targetcli Command for Target Administration
The targetcli command is an administration shell that allows you to display, create, modify and delete target LUNs. Several kernel modules are loaded in memory to support the setup and operation. You can view the modules that are currently loaded by running the lsmod command:
[root@server2 ~]# lsmod | grep target
target_core_pscsi      19318  0
target_core_file       27472  2
target_core_iblock     27510  2
iscsi_target_mod      295398  9
target_core_mod       371914  19 target_core_iblock,target_core_pscsi,iscsi_target_mod,target_core_file
crc_t10dif             12714  1 target_core_mod
[root@server2 ~]#
The targetcli command invokes a shell interface. Available subcommands can be viewed with the help subcommand.
ls Shows the downward view of the tree from the current location.
pwd Displays the current location in the tree.
cd Navigates in the tree.
exit Quits the interface.
saveconfig Saves the modifications.
get/set Gets (or sets) configuration attributes.
sessions Displays details for open sessions.
Use the ls, pwd and cd commands to navigate in the object hierarchy.
[root@server2 ~]# targetcli
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> pwd
/
/> ls
o- / .......................................................................... [...]
  o- backstores ............................................................... [...]
  | o- block ................................................... [Storage Objects: 1]
  | | o- iscsidisk1 ........................ [/dev/vdb (2.0GiB) write-thru activated]
  | o- fileio .................................................. [Storage Objects: 1]
  | | o- iscsifile1 ............ [/usr/iscsifile1.img (50.0MiB) write-back activated]
  | o- pscsi ................................................... [Storage Objects: 0]
  | o- ramdisk ................................................. [Storage Objects: 0]
  o- iscsi ............................................................ [Targets: 2]
  | o- iqn.2016-01.com.example.server2:iscsifile1 ....................... [TPGs: 1]
  | | o- tpg1 ................................................. [gen-acls, no-auth]
  | |   o- acls ......................................................... [ACLs: 0]
  | |   o- luns ......................................................... [LUNs: 1]
  | |   | o- lun0 ....................... [fileio/iscsifile1 (/usr/iscsifile1.img)]
  | |   o- portals ................................................... [Portals: 1]
  | |     o- 192.168.122.102:3260 ............................................ [OK]
  | o- iqn.2016-01.roggeware.nl.server2:iscsidisk1 ...................... [TPGs: 1]
  |   o- tpg1 ................................................. [gen-acls, no-auth]
  |     o- acls ......................................................... [ACLs: 0]
  |     o- luns ......................................................... [LUNs: 1]
  |     | o- lun0 ................................. [block/iscsidisk1 (/dev/vdb)]
  |     o- portals ................................................... [Portals: 1]
  |       o- 192.168.122.102:3260 ............................................ [OK]
  o- loopback ......................................................... [Targets: 0]
/>
Adding 1x3GB Virtual Disk to Target Server
Create a 3GB virtual disk for the iSCSI exercises on host1 and attach it to server2.
#cd /var/lib/libvirt/images
#qemu-img create -f raw server2.iscsi.2.img 3G
Formatting 'server2.iscsi.2.img', fmt=raw size=3221225472
[root@atlas images]# ls -l
-rw-------. 1 qemu qemu 10739318784 Jun 24 14:30 rhel7.0.qcow2
-rw-------. 1 qemu qemu 10737418240 Jun 24 14:37 rocrail.img
-rw-r--r--. 1 root root  3221225472 Jun 24 14:37 server2.iscsi.2.img
[root@atlas images]#
Now attach it to server2 using the virsh command.
[root@atlas images]# virsh domblklist server2 --details
Type       Device     Target     Source
------------------------------------------------
file       disk       vda        /var/lib/libvirt/rhpol_virsh/rgvol_virsh.img
file       disk       vdb        /var/lib/libvirt/images/server2.iscsi.img
[root@atlas images]# virsh attach-disk server2 --source /var/lib/libvirt/images/server2.iscsi.2.img --target vdc --persistent
Disk attached successfully
[root@atlas images]# virsh domblklist server2 --details
Type       Device     Target     Source
------------------------------------------------
file       disk       vda        /var/lib/libvirt/rhpol_virsh/rgvol_virsh.img
file       disk       vdb        /var/lib/libvirt/images/server2.iscsi.img
file       disk       vdc        /var/lib/libvirt/images/server2.iscsi.2.img
[root@server2 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda             252:0    0   10G  0 disk
├─vda1          252:1    0  500M  0 part /boot
└─vda2          252:2    0  9.5G  0 part
  ├─centos-root 253:0    0  8.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
vdb             252:16   0    2G  0 disk
└─vdb1          252:17   0    2G  0 part
vdc             252:32   0    3G  0 disk
[root@server2 ~]#
Disk vdc will be configured on server2 as an iSCSI target LUN and accessed as a block disk by server1 (iSCSI initiator).
Configure a Disk-Based iSCSI Target LUN
You will install the targetcli software on server2, set the target service to autostart at system reboots, define disk vdc as a backstore, build a target using this backstore, assign a network portal to the target, create a LUN in the target, disable authentication and create and activate a firewalld service for iSCSI port 3260.
#yum install targetcli
#systemctl enable target
Add the disk to the backstore
#targetcli
/> cd /backstores/block
/backstores/block> ls
o- block ...................................................... [Storage Objects: 1]
  o- iscsidisk1 ........................... [/dev/vdb (2.0GiB) write-thru activated]
/backstores/block> create iscsidisk2 dev=/dev/vdc
Created block storage object iscsidisk2 using /dev/vdc.
/backstores/block> ls
o- block ...................................................... [Storage Objects: 2]
  o- iscsidisk1 ........................... [/dev/vdb (2.0GiB) write-thru activated]
  o- iscsidisk2 ......................... [/dev/vdc (3.0GiB) write-thru deactivated]
/backstores/block>
Build an iSCSI target with address iqn.2016-01.roggeware.nl.server2:iscsidisk2:
/iscsi>create iqn.2016-01.roggeware.nl.server2:iscsidisk2
Created target iqn.2016-01.roggeware.nl.server2:iscsidisk2.
Created TPG 1.
Default portal not created, TPGs within a target cannot share ip:port.
/iscsi> ls
o- iscsi .............................................................................. [Targets: 3]
o- iqn.2016-01.com.example.server2:iscsifile1 .......................................... [TPGs: 1]
| o- tpg1 .................................................................... [gen-acls, no-auth]
| o- acls ............................................................................ [ACLs: 0]
| o- luns ............................................................................ [LUNs: 1]
| | o- lun0 .......................................... [fileio/iscsifile1 (/usr/iscsifile1.img)]
| o- portals ...................................................................... [Portals: 1]
| o- 192.168.122.102:3260 ............................................................... [OK]
o- iqn.2016-01.roggeware.nl.server2:iscsidisk1 ......................................... [TPGs: 1]
| o- tpg1 .................................................................... [gen-acls, no-auth]
| o- acls ............................................................................ [ACLs: 0]
| o- luns ............................................................................ [LUNs: 1]
| | o- lun0 ...................................................... [block/iscsidisk1 (/dev/vdb)]
| o- portals ...................................................................... [Portals: 1]
| o- 192.168.122.102:3260 ............................................................... [OK]
o- iqn.2016-01.roggeware.nl.server2:iscsidisk2 ......................................... [TPGs: 1]
o- tpg1 ................................................................. [no-gen-acls, no-auth]
o- acls ............................................................................ [ACLs: 0]
o- luns ............................................................................ [LUNs: 0]
o- portals ...................................................................... [Portals: 0]
Create a network portal for the target using IP address 192.168.122.102:
/iscsi> cd iqn.2016-01.roggeware.nl.server2:iscsidisk2/tpg1/
/iscsi/iqn.20...csidisk2/tpg1> ls
o- tpg1 ..................................................................... [no-gen-acls, no-auth]
o- acls ................................................................................ [ACLs: 0]
o- luns ................................................................................ [LUNs: 0]
o- portals .......................................................................... [Portals: 0]
/iscsi/iqn.20...csidisk2/tpg1> portals/ create 192.168.122.102
Using default IP port 3260
Created network portal 192.168.122.102:3260.
/iscsi/iqn.20...csidisk2/tpg1> ls
o- tpg1 ..................................................................... [no-gen-acls, no-auth]
o- acls ................................................................................ [ACLs: 0]
o- luns ................................................................................ [LUNs: 0]
o- portals .......................................................................... [Portals: 1]
o- 192.168.122.102:3260 ................................................................... [OK]
Create a LUN called lun0 in the target and export it to the network
/iscsi/iqn.20...csidisk2/tpg1> luns/ create /backstores/block/iscsidisk2
Created LUN 0.
/iscsi/iqn.20...csidisk2/tpg1> ls
o- tpg1 ..................................................................... [no-gen-acls, no-auth]
o- acls ................................................................................ [ACLs: 0]
o- luns ................................................................................ [LUNs: 1]
| o- lun0 .......................................................... [block/iscsidisk2 (/dev/vdc)]
o- portals .......................................................................... [Portals: 1]
o- 192.168.122.102:3260 ................................................................... [OK]
Disable authentication so that any initiator can access this LUN.
/iscsi/iqn.20...csidisk2/tpg1> set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1
Parameter authentication is now '0'.
Parameter demo_mode_write_protect is now '0'.
Parameter generate_node_acls is now '1'.
/iscsi/iqn.20...csidisk2/tpg1>
Return to the root of the tree and display the entire configuration:
/iscsi/iqn.20...csidisk2/tpg1> cd /
/> ls
o- / .......................................................................... [...]
  o- backstores ............................................................... [...]
  | o- block ................................................... [Storage Objects: 2]
  | | o- iscsidisk1 ........................ [/dev/vdb (2.0GiB) write-thru activated]
  | | o- iscsidisk2 ........................ [/dev/vdc (3.0GiB) write-thru activated]
  | o- fileio .................................................. [Storage Objects: 1]
  | | o- iscsifile1 ............ [/usr/iscsifile1.img (50.0MiB) write-back activated]
  | o- pscsi ................................................... [Storage Objects: 0]
  | o- ramdisk ................................................. [Storage Objects: 0]
  o- iscsi ............................................................ [Targets: 3]
  | o- iqn.2016-01.com.example.server2:iscsifile1 ....................... [TPGs: 1]
  | | o- tpg1 ................................................. [gen-acls, no-auth]
  | |   o- acls ......................................................... [ACLs: 0]
  | |   o- luns ......................................................... [LUNs: 1]
  | |   | o- lun0 ....................... [fileio/iscsifile1 (/usr/iscsifile1.img)]
  | |   o- portals ................................................... [Portals: 1]
  | |     o- 192.168.122.102:3260 ............................................ [OK]
  | o- iqn.2016-01.roggeware.nl.server2:iscsidisk1 ...................... [TPGs: 1]
  | | o- tpg1 ................................................. [gen-acls, no-auth]
  | |   o- acls ......................................................... [ACLs: 0]
  | |   o- luns ......................................................... [LUNs: 1]
  | |   | o- lun0 ................................. [block/iscsidisk1 (/dev/vdb)]
  | |   o- portals ................................................... [Portals: 1]
  | |     o- 192.168.122.102:3260 ............................................ [OK]
  | o- iqn.2016-01.roggeware.nl.server2:iscsidisk2 ...................... [TPGs: 1]
  |   o- tpg1 ................................................. [gen-acls, no-auth]
  |     o- acls ......................................................... [ACLs: 0]
  |     o- luns ......................................................... [LUNs: 1]
  |     | o- lun0 ................................. [block/iscsidisk2 (/dev/vdc)]
  |     o- portals ................................................... [Portals: 1]
  |       o- 192.168.122.102:3260 ............................................ [OK]
  o- loopback ......................................................... [Targets: 0]
/>
Exit out of the shell interface. By default, the auto_save_on_exit directive is set to true.
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@server2 ~]#
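The saved JSON is what the target service replays at boot. If the configuration ever needs to be saved or restored by hand, a sketch (the targetctl utility ships alongside the targetcli tooling):

```shell
# Restore the target configuration from the last saved state
targetctl restore /etc/target/saveconfig.json

# Or save the running configuration non-interactively
targetcli saveconfig
```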
Add a service called iscsitarget by creating a file called iscsitarget.xml in the /etc/firewalld/services directory to permit iSCSI traffic on port 3260.
[root@server2 services]# pwd
/etc/firewalld/services
[root@server2 services]# cat iscsitarget.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>iSCSI</short>
  <description>This is to permit the iSCSI traffic to pass through the firewall</description>
  <port protocol="tcp" port="3260"/>
</service>
[root@server2 services]#
Add the new service to firewalld and activate it:
[root@server2 services]# firewall-cmd --permanent --add-service iscsitarget;firewall-cmd --reload success success [root@server2 services]#
Understanding the iscsiadm Command for Initiator Administration
The primary tool to discover iSCSI targets, to log in to them and to manage the iSCSI discovery database is the iscsiadm command. This command interacts with the iscsid daemon and reads the /etc/iscsi/iscsid.conf file for configuration directives at the time of discovering and logging in to new targets. The iscsiadm command has four modes of operation.
- Discovery - Queries the specified portal for available targets based on the configuration defined in the /etc/iscsi/iscsid.conf file. Records found are stored in discovery database files in the /var/lib/iscsi directory.
- Node - Establishes a session with the target and creates a corresponding device file for each discovered LUN in the target.
- Session - Displays current session information.
- Iface - Defines network portals
There are several options available with the iscsiadm command. Some of them are
-D (--discover) Discovers targets using discovery records. If no matching record is found, a new record is created based on settings in /etc/iscsi/iscsid.conf.
-l (--login) Logs in to the specified target.
-L (--loginall) Logs in to all discovered targets.
-m (--mode) Specifies one of the supported modes of operation.
-p (--portal) Specifies a target server portal.
-o (--op) Specifies one of the supported database operators: new, delete, update, show or non-persistent.
-T (--targetname) Specifies a target name.
-t (--type) Specifies a type of discovery. Sendtargets (st) is usually used.
-u (--logout) Logs out from a target.
-U (--logoutall) Logs out from all targets.
The /etc/iscsi/iscsid.conf File
The /etc/iscsi/iscsid.conf file is the iSCSI initiator configuration file that defines several options for the iscsid daemon that dictate how to handle an iSCSI initiator via the iscsiadm command. During an iSCSI target discovery, the iscsiadm command references this file, creates discovery and node records, and stores them in the send_targets (or other supported discovery type) and nodes subdirectories under the /var/lib/iscsi/ directory. The records saved in send_targets are used when you attempt to perform discovery on the same target server again, and the records saved in nodes are used when you attempt to log in to the discovered targets.
The /etc/iscsi/initiatorname.iscsi File
The /etc/iscsi/initiatorname.iscsi file stores the discovered node names along with optional aliases using the InitiatorName and InitiatorAlias directives, respectively. This file is read by the iscsid daemon on startup and it is used by the iscsiadm command to determine node names and aliases.
[root@server1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-01.roggeware.nl.server2:iscsidisk1
InitiatorName=iqn.2016-01.com.example.server2:iscsifile1
[root@server1 ~]#
Mount the iSCSI Target on Initiator
You will install the iscsi-initiator-utils software package on server1, set the iscsid service to autostart at system reboots, discover available targets, log in to a discovered target, and create a filesystem using LVM. You will then add an entry to the /etc/fstab file, mount the filesystem manually, and reboot the system to verify that it is remounted at boot.
Run yum to install the required package and set the iscsid service to autostart at system reboots
#yum install iscsi-initiator-utils
#systemctl enable iscsid
Execute the iscsiadm command with the sendtargets discovery type (-t st) in discovery mode (-m) to locate available iSCSI targets from the specified portal (-p):
[root@server1 ~]# iscsiadm -m discovery -t st -p 192.168.122.102
192.168.122.102:3260,1 iqn.2016-01.com.example.server2:iscsifile1
192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk1
192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk2
[root@server1 ~]#
The above command also adds the new records to the appropriate discovery database files located in the /var/lib/iscsi directory and starts the iscsid daemon. Log in (-l) to the target (-T) in node mode (-m) at the specified portal (-p) to establish a target/initiator session.
[root@server1 ~]# iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsidisk2 -p 192.168.122.102 -l
Logging in to [iface: default, target: iqn.2016-01.roggeware.nl.server2:iscsidisk2, portal: 192.168.122.102,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.roggeware.nl.server2:iscsidisk2, portal: 192.168.122.102,3260] successful.
[root@server1 ~]#
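The discovered node records can also be inspected and tuned with iscsiadm in node mode; for instance, to confirm that the login is re-established at boot, or to end the session later (target and portal as above):

```shell
# Show the node record; node.startup = automatic means iscsid
# logs back in to this target at boot
iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsidisk2 \
    -p 192.168.122.102 -o show | grep node.startup

# Log out of the target when the session is no longer needed
iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsidisk2 \
    -p 192.168.122.102 -u
```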
View the information for the established iSCSI sessions (-m session) and specify print level (-P) 1 for verbosity:
[root@server1 ~]# iscsiadm -m session -P1
Target: iqn.2016-01.roggeware.nl.server2:iscsidisk1 (non-flash)
Current Portal: 192.168.122.102:3260,1
Persistent Portal: 192.168.122.102:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2016-01.roggeware.nl.server2:iscsidisk1
Iface IPaddress: 192.168.122.111
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2016-01.com.example.server2:iscsifile1 (non-flash)
Current Portal: 192.168.122.102:3260,1
Persistent Portal: 192.168.122.102:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2016-01.roggeware.nl.server2:iscsidisk1
Iface IPaddress: 192.168.122.111
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
Target: iqn.2016-01.roggeware.nl.server2:iscsidisk2 (non-flash)
Current Portal: 192.168.122.102:3260,1
Persistent Portal: 192.168.122.102:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2016-01.roggeware.nl.server2:iscsidisk1
Iface IPaddress: 192.168.122.111
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
[root@server1 ~]#
The output shows details for each target and its established session.
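A session that is no longer needed can be closed with the same iscsiadm syntax, using -u instead of -l. This is a sketch, reusing the target name and portal from the example above:

```shell
# Log out of (-u) the target (-T) in node mode (-m) at the portal (-p)
iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsidisk2 -p 192.168.122.102 -u

# Optionally delete (-o delete) the node record so the login is not
# re-established automatically at the next reboot
iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsidisk2 -p 192.168.122.102 -o delete
```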
Edit the /etc/iscsi/initiatorname.iscsi file and add the target information:
[root@server1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-01.roggeware.nl.server2:iscsidisk1
InitiatorName=iqn.2016-01.com.example.server2:iscsifile1
InitiatorName=iqn.2016-01.roggeware.nl.server2:iscsidisk2
[root@server1 ~]#
Execute the lsblk and fdisk commands to see the new LUN.
[root@server1 ~]# lsblk|grep sdc
sdc 8:32 0 3G 0 disk
[root@server1 ~]# fdisk -l|grep sdc
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdc: 3221 MB, 3221225472 bytes, 6291456 sectors
[root@server1 ~]#
The /var/log/messages file has captured several messages for the new LUN.
[root@server1 ~]# grep sdc /var/log/messages
Jun 25 12:28:02 server1 kernel: sd 4:0:0:0: [sdc] 6291456 512-byte logical blocks: (3.22 GB/3.00 GiB)
Jun 25 12:28:02 server1 kernel: sd 4:0:0:0: [sdc] Write Protect is off
Jun 25 12:28:02 server1 kernel: sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun 25 12:28:02 server1 kernel: sdc: unknown partition table
Jun 25 12:28:02 server1 kernel: sd 4:0:0:0: [sdc] Attached SCSI disk
Jun 25 12:50:25 server1 kernel: sd 4:0:0:0: [sdc] 6291456 512-byte logical blocks: (3.22 GB/3.00 GiB)
Jun 25 12:50:25 server1 kernel: sd 4:0:0:0: [sdc] Write Protect is off
Jun 25 12:50:25 server1 kernel: sd 4:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun 25 12:50:25 server1 kernel: sdc: unknown partition table
Jun 25 12:50:25 server1 kernel: sd 4:0:0:0: [sdc] Attached SCSI disk
[root@server1 ~]#
Use LVM to initialize this LUN, create a volume group and add the physical volume to it. Create a logical volume of size 1GB, format the logical volume with xfs structures, create a mountpoint, add an entry to /etc/fstab (make sure to use the _netdev option), mount the new filesystem and confirm the mount.
[root@server1 ~]# pvcreate /dev/sdc
/dev/sdc: Data alignment must not exceed device size.
Format-specific initialisation of physical volume /dev/sdc failed.
Failed to setup physical volume "/dev/sdc"
[root@server1 ~]# pvcreate --dataalignment 8m --dataalignmentoffset 4m /dev/sdc
Physical volume "/dev/sdc" successfully created
[root@server1 ~]# vgcreate iscsi /dev/sdc
Volume group "iscsi" successfully created
[root@server1 ~]# lvcreate -L 1G iscsi -n lviscsi
Logical volume "lviscsi" created.
[root@server1 ~]#mkfs.xfs /dev/iscsi/lviscsi
meta-data=/dev/iscsi/lviscsi isize=256 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@server1 ~]#cat /etc/fstab
/dev/mapper/centos-root / xfs defaults 0 0
UUID=3d0dd9cb-d7d1-49b6-a6f7-f71acfbb49d4 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults,pri=1 0 0
UUID=95ddc2a2-49c8-425b-a9b8-aad7d171542c swap swap defaults,pri=1 0 0
UUID=768fe142-803b-4bfe-a269-1f246a49fd84 swap swap defaults,pri=1 0 0
/dev/iscsi/lviscsi /iscsi xfs _netdev 0 0
[root@server1 ~]#mount /iscsi
[root@server1 ~]# df -h|grep scsi
/dev/mapper/iscsi-lviscsi 1014M 33M 982M 4% /iscsi
[root@server1 ~]#
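Before rebooting, the new fstab entry can be sanity-checked by unmounting the filesystem and remounting everything listed in /etc/fstab. A quick sketch:

```shell
umount /iscsi     # unmount the new filesystem
mount -a          # remount everything listed in /etc/fstab
df -h /iscsi      # confirm /iscsi is mounted again
```

If mount -a reports an error here, the fstab entry is broken and would also fail at boot, so fix it before rebooting.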
Configure a File-Based iSCSI Target and Mount It on the Initiator
In this exercise you will configure a 300MB plain file as a backstore, build a target using this backstore, assign a network portal to the target, create a LUN in the target, export the LUN, disable authentication, and create and activate a firewalld service for iSCSI port 3260. You will discover this target on the initiator, log in to it, partition the LUN with parted and create a filesystem on it. You will add an entry to /etc/fstab using the filesystem's UUID, mount the filesystem manually, and reboot to ensure the filesystem is mounted automatically.
Configure iSCSI Target Server
Create a file iscsifile2.img of 300MB in the /usr directory as a fileio type backstore called iscsifile2 and display the configuration.
[root@server2 ~]# targetcli /backstores/fileio create iscsifile2 /usr/iscsifile2.img 300M
fileio iscsifile2 with size 314572800
[root@server2 ~]# targetcli ls /backstores/fileio
o- fileio ..................................................................... [Storage Objects: 2]
  o- iscsifile1 ............................... [/usr/iscsifile1.img (50.0MiB) write-back activated]
  o- iscsifile2 ............................ [/usr/iscsifile2.img (300.0MiB) write-back deactivated]
[root@server2 ~]#
Build an iSCSI target with address iqn.2016-01.roggeware.nl.server2:iscsifile2 on the iscsifile2 backstore in the default TPG and display the configuration.
[root@server2 ~]# targetcli /iscsi create iqn.2016-01.roggeware.nl.server2:iscsifile2
Created target iqn.2016-01.roggeware.nl.server2:iscsifile2.
Created TPG 1.
Default portal not created, TPGs within a target cannot share ip:port.
[root@server2 ~]# targetcli ls /iscsi
o- iscsi .............................................................................. [Targets: 4]
o- iqn.2016-01.com.example.server2:iscsifile1 .......................................... [TPGs: 1]
| o- tpg1 .................................................................... [gen-acls, no-auth]
| o- acls ............................................................................ [ACLs: 0]
| o- luns ............................................................................ [LUNs: 1]
| | o- lun0 .......................................... [fileio/iscsifile1 (/usr/iscsifile1.img)]
| o- portals ...................................................................... [Portals: 1]
| o- 192.168.122.102:3260 ............................................................... [OK]
...
o- iqn.2016-01.roggeware.nl.server2:iscsifile2 ......................................... [TPGs: 1]
o- tpg1 ................................................................. [no-gen-acls, no-auth]
o- acls ............................................................................ [ACLs: 0]
o- luns ............................................................................ [LUNs: 0]
o- portals ...................................................................... [Portals: 0]
[root@server2 ~]#
Create a network portal for the target using IP 192.168.122.102 for iSCSI traffic and the default port. This will make the target discoverable and accessible on the network. Display the configuration.
[root@server2 ~]# targetcli /iscsi/iqn.2016-01.roggeware.nl.server2:iscsifile2/tpg1/portals create 192.168.122.102
Using default IP port 3260
Created network portal 192.168.122.102:3260.
[root@server2 ~]# targetcli ls /iscsi/iqn.2016-01.roggeware.nl.server2:iscsifile2/tpg1
o- tpg1 ..................................................................... [no-gen-acls, no-auth]
o- acls ................................................................................ [ACLs: 0]
o- luns ................................................................................ [LUNs: 0]
o- portals .......................................................................... [Portals: 1]
o- 192.168.122.102:3260 ................................................................... [OK]
[root@server2 ~]#
Create a LUN called lun0 in the target, export it to the network, and display the LUN configuration.
[root@server2 ~]# targetcli /iscsi/iqn.2016-01.roggeware.nl.server2:iscsifile2/tpg1/luns create /backstores/fileio/iscsifile2
Created LUN 0.
[root@server2 ~]# targetcli ls /iscsi/iqn.2016-01.roggeware.nl.server2:iscsifile2/tpg1
o- tpg1 ..................................................................... [no-gen-acls, no-auth]
o- acls ................................................................................ [ACLs: 0]
o- luns ................................................................................ [LUNs: 1]
| o- lun0 .............................................. [fileio/iscsifile2 (/usr/iscsifile2.img)]
o- portals .......................................................................... [Portals: 1]
o- 192.168.122.102:3260 ................................................................... [OK]
[root@server2 ~]#
Disable authentication so that any initiator can access this LUN and display the configuration.
demo_mode_write_protect=0 makes the LUN write-enabled, and the generate_node_acls=1 attribute enables the use of TPG-wide authentication settings (this disables any user-defined ACLs).
[root@server2 ~]# targetcli /iscsi/iqn.2016-01.roggeware.nl.server2:iscsifile2/tpg1 set attribute authentication=0 demo_mode_write_protect=0 generate_node_acls=1
Parameter authentication is now '0'.
Parameter demo_mode_write_protect is now '0'.
Parameter generate_node_acls is now '1'.
[root@server2 ~]# targetcli ls
o- / ......................................................................................... [...]
  o- backstores .............................................................................. [...]
  | o- block .................................................................. [Storage Objects: 2]
  | | o- iscsidisk1 ....................................... [/dev/vdb (2.0GiB) write-thru activated]
  | | o- iscsidisk2 ....................................... [/dev/vdc (3.0GiB) write-thru activated]
  | o- fileio ................................................................. [Storage Objects: 2]
  | | o- iscsifile1 ........................... [/usr/iscsifile1.img (50.0MiB) write-back activated]
  | | o- iscsifile2 .......................... [/usr/iscsifile2.img (300.0MiB) write-back activated]
  | o- pscsi .................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................ [Targets: 4]
  | o- iqn.2016-01.com.example.server2:iscsifile1 ........................................ [TPGs: 1]
  | | o- tpg1 .................................................................. [gen-acls, no-auth]
  | |   o- acls .......................................................................... [ACLs: 0]
  | |   o- luns .......................................................................... [LUNs: 1]
  | |   | o- lun0 ........................................ [fileio/iscsifile1 (/usr/iscsifile1.img)]
  | |   o- portals .................................................................... [Portals: 1]
  | |     o- 192.168.122.102:3260 ............................................................. [OK]
  ...
  | o- iqn.2016-01.roggeware.nl.server2:iscsifile2 ....................................... [TPGs: 1]
  |   o- tpg1 .................................................................. [gen-acls, no-auth]
  |     o- acls .......................................................................... [ACLs: 0]
  |     o- luns .......................................................................... [LUNs: 1]
  |     | o- lun0 ........................................ [fileio/iscsifile2 (/usr/iscsifile2.img)]
  |     o- portals .................................................................... [Portals: 1]
  |       o- 192.168.122.102:3260 ............................................................. [OK]
  o- loopback ......................................................................... [Targets: 0]
[root@server2 ~]#
Save the configuration to /etc/target/saveconfig.json
[root@server2 ~]# targetcli saveconfig
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json
[root@server2 ~]#
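The saved JSON configuration is restored at boot by the target service, so that service must be enabled on the target server (a sketch, assuming it is not already enabled):

```shell
systemctl enable target   # restores /etc/target/saveconfig.json at boot
systemctl status target   # verify the service state
```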
Add a service called iscsitarget by creating a file called iscsitarget.xml in the /etc/firewalld/services directory to permit iSCSI traffic on port 3260. Create this file and add the service permanently to the firewall configuration.
[root@server2 ~]# cat /etc/firewalld/services/iscsitarget.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
<short>iSCSI</short>
<description>This is to permit the iSCSI traffic to pass through the firewall</description>
<port protocol="tcp" port="3260"/>
</service>
[root@server2 ~]# firewall-cmd --permanent --add-service iscsitarget; firewall-cmd --reload
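After the reload, the new service can be checked from the command line. A sketch:

```shell
firewall-cmd --list-services                           # iscsitarget should appear in the list
firewall-cmd --permanent --query-service iscsitarget   # confirm the permanent rule
```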
Configure iSCSI Initiator Server
Set the iscsid service to autostart at system reboots.
#systemctl enable iscsid
Execute the iscsiadm command in sendtargets (-t st) discovery mode (-m) to locate available iSCSI targets from the specified portal (-p).
[root@server1 ~]# iscsiadm -m session
tcp: [1] 192.168.122.102:3260,1 iqn.2016-01.com.example.server2:iscsifile1 (non-flash)
tcp: [2] 192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk1 (non-flash)
tcp: [3] 192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk2 (non-flash)
[root@server1 ~]# iscsiadm -m discovery -t st -p 192.168.122.102
192.168.122.102:3260,1 iqn.2016-01.com.example.server2:iscsifile1
192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk1
192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk2
192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsifile2
[root@server1 ~]# iscsiadm -m session
tcp: [1] 192.168.122.102:3260,1 iqn.2016-01.com.example.server2:iscsifile1 (non-flash)
tcp: [2] 192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk1 (non-flash)
tcp: [3] 192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk2 (non-flash)
[root@server1 ~]#
The above command also adds the new record to the appropriate discovery database files located in the /var/lib/iscsi directory and starts the iscsid daemon. This information persists until you delete it.
Log in (-l) to the target (-T) in node mode (-m) at the specified portal (-p) to establish a target/initiator session.
[root@server1]# iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsifile2 -p 192.168.122.102 -l
Logging in to [iface: default, target: iqn.2016-01.roggeware.nl.server2:iscsifile2, portal: 192.168.122.102,3260] (multiple)
Login to [iface: default, target: iqn.2016-01.roggeware.nl.server2:iscsifile2, portal: 192.168.122.102,3260] successful.
[root@server1]# iscsiadm -m session
tcp: [1] 192.168.122.102:3260,1 iqn.2016-01.com.example.server2:iscsifile1 (non-flash)
tcp: [2] 192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk1 (non-flash)
tcp: [3] 192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsidisk2 (non-flash)
tcp: [4] 192.168.122.102:3260,1 iqn.2016-01.roggeware.nl.server2:iscsifile2 (non-flash)
[root@server1]#
View information for the established iSCSI sessions (-m session) and specify print level 3 (-P 3) for verbosity.
[root@server1 ~]# iscsiadm -m session -P3
iSCSI Transport Class version 2.0-870
version 6.2.0.873-30
Target: iqn.2016-01.com.example.server2:iscsifile1 (non-flash)
Current Portal: 192.168.122.102:3260,1
Persistent Portal: 192.168.122.102:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2016-01.roggeware.nl.server2:iscsidisk1
Iface IPaddress: 192.168.122.111
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 1
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: <empty>
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 2 State: running
scsi2 Channel 00 Id 0 Lun: 0
Attached scsi disk sdc State: running
Target: iqn.2016-01.roggeware.nl.server2:iscsidisk1 (non-flash)
Current Portal: 192.168.122.102:3260,1
Persistent Portal: 192.168.122.102:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2016-01.roggeware.nl.server2:iscsidisk1
Iface IPaddress: 192.168.122.111
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 2
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: <empty>
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 3 State: running
scsi3 Channel 00 Id 0 Lun: 0
Attached scsi disk sdb State: running
Target: iqn.2016-01.roggeware.nl.server2:iscsidisk2 (non-flash)
Current Portal: 192.168.122.102:3260,1
Persistent Portal: 192.168.122.102:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2016-01.roggeware.nl.server2:iscsidisk1
Iface IPaddress: 192.168.122.111
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 3
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: <empty>
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 4 State: running
scsi4 Channel 00 Id 0 Lun: 0
Attached scsi disk sda State: running
Target: iqn.2016-01.roggeware.nl.server2:iscsifile2 (non-flash)
Current Portal: 192.168.122.102:3260,1
Persistent Portal: 192.168.122.102:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.2016-01.roggeware.nl.server2:iscsidisk1
Iface IPaddress: 192.168.122.111
Iface HWaddress: <empty>
Iface Netdev: <empty>
SID: 4
iSCSI Connection State: LOGGED IN
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: NO CHANGE
*********
Timeouts:
*********
Recovery Timeout: 120
Target Reset Timeout: 30
LUN Reset Timeout: 30
Abort Timeout: 15
*****
CHAP:
*****
username: <empty>
password: ********
username_in: <empty>
password_in: ********
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 65536
MaxBurstLength: 262144
ImmediateData: Yes
InitialR2T: Yes
MaxOutstandingR2T: 1
************************
Attached SCSI devices:
************************
Host Number: 5 State: running
scsi5 Channel 00 Id 0 Lun: 0
Attached scsi disk sdd State: running
[root@server1 ~]#
The output shows details for each target and its established session. It also shows the device name (sdd) assigned to the new LUN on the initiator at the very bottom of the output.
Edit the /etc/iscsi/initiatorname.iscsi file and add the target information.
[root@server1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-01.roggeware.nl.server2:iscsidisk1
InitiatorName=iqn.2016-01.com.example.server2:iscsifile1
InitiatorName=iqn.2016-01.roggeware.nl.server2:iscsidisk2
InitiatorName=iqn.2016-01.roggeware.nl.server2:iscsifile2
[root@server1 ~]#
Execute the lsblk and fdisk commands and grep for sdd to see the new LUN.
[root@server1 ~]# lsblk|grep sdd
sdd 8:48 0 300M 0 disk
[root@server1 ~]# fdisk -l|grep sdd
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sdd: 314 MB, 314572800 bytes, 614400 sectors
[root@server1 ~]#
The /var/log/messages file has captured several messages for the new LUN.
[root@server1 ~]# grep sdd /var/log/messages
Jun 27 17:21:28 server1 kernel: sd 5:0:0:0: [sdd] 614400 512-byte logical blocks: (314 MB/300 MiB)
Jun 27 17:21:28 server1 kernel: sd 5:0:0:0: [sdd] Write Protect is off
Jun 27 17:21:28 server1 kernel: sd 5:0:0:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jun 27 17:21:28 server1 kernel: sdd: unknown partition table
Jun 27 17:21:28 server1 kernel: sd 5:0:0:0: [sdd] Attached SCSI disk
[root@server1 ~]#
Use parted to label disk /dev/sdd, create a 200MB primary partition, and display the disk's partition table. Format the partition with ext4 structures, create mountpoint /iscsifile2, determine the filesystem's UUID, and add an entry to /etc/fstab using the UUID; make sure to use the _netdev option.
[root@server1 ~]# parted /dev/sdd mklabel msdos
Information: You may need to update /etc/fstab.
[root@server1 ~]# parted /dev/sdd mkpart primary 1 200m
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? I
Information: You may need to update /etc/fstab.
[root@server1 ~]# parted /dev/sdb print
Model: LIO-ORG iscsidisk1 (scsi)
Disk /dev/sdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1000kB 200MB 199MB primary
[root@server1 ~]#
[root@server1 ~]# mkfs.ext4 /dev/sdd1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=8192 blocks
48768 inodes, 194336 blocks
9716 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
24 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
[root@server1 ~]# mkdir /iscsifile2
[root@server1 ~]# blkid |grep sdd
/dev/sdd1: UUID="4d679483-d0bc-42e5-a8bf-28826d0ce8bf" TYPE="ext4"
[root@server1 ~]# vi /etc/fstab
[root@server1 ~]# cat /etc/fstab
/dev/mapper/centos-root / xfs defaults 0 0
UUID=3d0dd9cb-d7d1-49b6-a6f7-f71acfbb49d4 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults,pri=1 0 0
UUID=95ddc2a2-49c8-425b-a9b8-aad7d171542c swap swap defaults,pri=1 0 0
UUID=768fe142-803b-4bfe-a269-1f246a49fd84 swap swap defaults,pri=1 0 0
UUID=fd6dc73b-24f4-4c14-a91e-25b4cdafec93 /aap ext4 defaults 0 0
/dev/iscsi/lviscsi /iscsi xfs _netdev 0 0
UUID="4d679483-d0bc-42e5-a8bf-28826d0ce8bf" /iscsifile2 ext4 _netdev 0 0
[root@server1 ~]# mount /iscsifile2
[root@server1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 8.5G 2.3G 6.3G 27% /
devtmpfs 487M 0 487M 0% /dev
tmpfs 497M 0 497M 0% /dev/shm
tmpfs 497M 20M 478M 4% /run
tmpfs 497M 0 497M 0% /sys/fs/cgroup
/dev/mapper/aap-aaplv 93M 26M 61M 30% /aap
/dev/vda1 497M 295M 203M 60% /boot
/dev/mapper/iscsi-lviscsi 1014M 33M 982M 4% /iscsi
tmpfs 100M 0 100M 0% /run/user/99
tmpfs 100M 0 100M 0% /run/user/0
/dev/sdd1 180M 1.6M 165M 1% /iscsifile2
[root@server1 ~]# df -h |grep file2
/dev/sdd1 180M 1.6M 165M 1% /iscsifile2
[root@server1 ~]#
Reboot the server and ensure that the client configuration survives a reboot.
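The automatic login at reboot relies on the node.startup setting in the node record, which defaults to automatic for targets logged in after discovery. It can be checked before rebooting; a sketch using the target from this exercise:

```shell
# Show the node record (-o show) and check the startup mode;
# node.startup = automatic means the session is re-established at boot
iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsifile2 -p 192.168.122.102 -o show | grep node.startup
```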
TOM
targetcli for target administration, package targetcli
targetcli saveconfig
iscsiadm for initiator administration, package iscsi-initiator-utils
iscsiadm -m discovery -t st -p 192.168.122.102 Locate available targets
iscsiadm -m node -T iqn.2016-01.roggeware.nl.server2:iscsidisk1 -p 192.168.122.102 -l Log in (-l) to target (-T) in node mode (-m) at portal (-p)
iscsiadm -m session List established sessions
systemctl enable iscsid
Files
/etc/iscsi/iscsid.conf Used during target discovery.
/etc/iscsi/initiatorname.iscsi Stores node names.
/var/lib/iscsi/ Discovery database files.
Chapter 20 Sharing File Storage with NFS
Understanding Network File System
Network File System (NFS) is a networking protocol that allows file sharing over the network. The remote system that makes its shares available for network access is referred to as an NFS server, and the process of making the shares available is referred to as exporting. The systems that access the shares are called NFS clients, and the process of attaching the shares is referred to as mounting. A system can provide both server and client functionality concurrently.
A sub-directory or the parent directory of an exported share cannot be re-exported if it resides in the same filesystem. Similarly, a mounted share cannot be exported further.
NFS uses the Remote Procedure Call (RPC) and eXternal Data Representation (XDR) mechanisms that allow a server and a client to communicate with each other.
NFS Versions
RHEL7 provides support for NFS versions 3, 4.0 and 4.1, with NFSv4 being the default. NFSv3 supports both TCP and UDP transport protocols, asynchronous writes and 64-bit file sizes (supports files larger than 2GB). NFSv4 and NFSv4.1 are Internet Engineering Task Force (IETF) standard protocols that provide all of the features of the NFSv3 protocol plus the ability to transit firewalls and work on the Internet, enhanced security, encrypted transfer, support for ACLs, greater scalability, better cross-platform interoperability and better handling of system crashes.
This chapter will focus on the NFSv4 protocol, which is the default protocol in RHEL7.
NFS Security
NFSv4 guarantees secure operation on WANs. When an NFS client attempts to access a remote share, an exchange of information takes place with the server to identify the client and the user, authenticate them to the server, and authorize their access to the share. In-transit data between the two entities is encrypted to prevent eavesdropping and unauthorized access. NFS may be configured to use an existing Kerberos server for authentication, integrity and data encryption. The NFS protocol uses TCP port 2049 for all communication between server and client.
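Because NFSv4 needs only TCP port 2049, opening the server firewall is a single-service change. A sketch using firewalld's predefined nfs service:

```shell
firewall-cmd --permanent --add-service nfs   # predefined service covering the NFS port
firewall-cmd --reload                        # activate the permanent rule
firewall-cmd --list-services                 # nfs should now appear in the list
```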
NFS Daemons
NFS is a client/server protocol that employs several daemon programs that work together to export and mount shares and manage I/O between them. One daemon runs only on the server; the rest run on both the server and the client.
- nfsd NFS server process; responds to client requests on TCP port 2049 for file access and operations. Provides file locking and recovery mechanisms.
- rpcbind Runs on both server and client; converts RPC program numbers into universal addresses to facilitate communication for other RPC-based processes.
- rpc.rquotad Runs on both server and client; displays user quota information for a remotely mounted share on the server and allows the setup of user quotas on a mounted share on the client.
- rpc.idmapd Runs on both the server and the client to control the mappings of UIDs and GIDs with their corresponding usernames and groupnames, based on the configuration defined in /etc/idmapd.conf.
NFS Commands
There are numerous commands available to establish and manage NFS shares and to monitor their I/O. A proper understanding of these commands is necessary for smooth administration of NFS.
- exportfs Server command that exports shares listed in the /etc/exports file and in files with the .exports extension in the /etc/exports.d directory.
- mount Client command that mounts a share specified at the command line or listed in the /etc/fstab file, and adds an entry to the /etc/mtab file.
- nfsiostat Client command that provides NFS I/O statistics on mounted shares by consulting the /proc/self/mountstats file.
- nfsstat Displays NFS and RPC statistics by consulting the /proc/net/rpc/nfsd (server) and /proc/net/rpc/nfs (client) files.
- mountstats Client command that displays per-mount statistics by consulting the /proc/self/mountstats file.
Commands such as rpcinfo and showmount are also available; however they are not needed in an NFSv4 environment.
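As an illustration of the exportfs command described above, a typical cycle on the server looks like this (the client name and share path are hypothetical):

```shell
exportfs -av                     # export (-a) all shares in /etc/exports, verbosely (-v)
exportfs -r                      # re-export after editing /etc/exports
exportfs -u client1:/exports1    # unexport a single share (hypothetical client and path)
exportfs                         # list what is currently exported
```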
NFS Configuration and Functional Files
NFS reads configuration data from various files at startup and during its operation.
- /etc/exports Server file that contains share definitions for export.
- /var/lib/nfs/etab Server file that records entries for exported shares whether or not they are remotely mounted. This file is updated each time a share is exported or unexported.
- /etc/nfsmount.conf Client file that defines settings used at mounting shares.
- /etc/fstab Client file system table that contains a list of shares to be mounted at system reboots or manually with the mount command.
- /etc/sysconfig/nfs A server- and client-side NFS startup configuration file.
Of these, the exports and fstab files are updated manually; the nfsmount.conf and /etc/sysconfig/nfs files need no modification if NFSv4 is used with default settings. The etab and mtab files are updated automatically when the exportfs and mount/umount commands are executed.
The /etc/exports File and NFS Server Options
The /etc/exports file defines the configuration for NFS shares. It contains one entry per line for each share to be exported. For each share, a pathname, client information and options are included. Options must be enclosed within parentheses, and there must not be any space between the client name and the opening parenthesis. Some of the options are described below with their defaults in brackets.
- * Represents all possible matches for hostnames, IP addresses, domain names or network addresses.
- all_squash(no_all_squash)[no_all_squash] treats all users, including the root user on the client as anonymous users.
- anongid=GID[65534] Assigns this GID explicitly to anonymous groups on the client.
- anonuid=UID [65534] Assigns this uid explicitly to anonymous users on the client.
- async(sync)[sync] Replies to client requests before changs made by previous requests are written to disk.
- fsid Identifies the type of share being exported. Options are device number, root or UUID/ This option applies to filesystem shares only.
- mp Exports only if the specified share is a filesystem.
- root_squash(no_root_squash)[root_squash] Prevents the root user on the client from gaiing superuser access on mounted shares by mapping root to an unprivilidged user account called nfsnobody with UID 65534.
- rw(ro)[ro] Allows file modifications on the client.
- sec[sec=sys] Limits the share export to clinets using one of these security methods: sys, krb5, krb5i or krb5p. The sys option uses local UIDs and GIDs and the rest use Kerberos for user authentication.
- secure/(insecure)[secure]] Allows access only on clinets using ports lower than 1024.
- subtree_check(no_subtree_check)[no_subtree_check] Enalbes permission checks on higher-level direcotries of a share.
- wdelay(no_wdelay)[wdelay] Delays data writes to a share if it expects the arrival of another write request to the same share soon, thereby reducing the number of actual writes to the share.
The following shows a few sample entries to understand the syntax of the exports file.
/exports1 client1 client2 client3.example.com(rw,insecure) /exports2 client4.example.com(rw) 192.168.1.20(no_root_squash) 192.168.0.0/24
The first example will export /exports1 to client1 and client2 using all the defaults, and to client3.example.com with the read/write and insecure options. The second example will export /exports2 to client4.example.com with the read/write option, to the client with IP 192.168.1.20 with the no_root_squash option, and to the 192.168.0.0/24 network with all the default options.
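A stray space before the parentheses silently changes the meaning of an entry (the options would then apply to all hosts), so it can help to assemble entries with a small helper. This is only an illustration: the make_export function and the hostnames are made up, not part of the exports syntax itself.

```shell
# make_export: hypothetical helper that assembles one /etc/exports line
# usage: make_export SHARE 'client1(opts)' 'client2(opts)' ...
make_export() {
    share=$1; shift
    # clients are joined by spaces; note there is NO space before each "(opts)"
    printf '%s %s\n' "$share" "$*"
}

make_export /exports1 client1 client2 'client3.example.com(rw,insecure)'
make_export /exports2 'client4.example.com(rw)' '192.168.1.20(no_root_squash)' 192.168.0.0/24
```

Appending the generated lines to /etc/exports and running exportfs -avr would then publish the shares.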
Configuring NFS Server and Client
This section presents several exercises on how to set up the NFS service and export a share, mount the share on the client and start the NFS client processes, export and mount another share for group collaboration, and export a different share with Kerberos authentication.
SELinux Requirements for NFS Operation
By default the SELinux policy allows NFS to export shares on the network without any changes to file contexts or booleans. All NFS daemons are confined by default and are labeled with appropriate domain types. For instance, the nfsd process is labeled with the kernel_t type and rpcbind with the rpcbind_t type. This can be verified with the following.
[root@server1 ~]# ps -efZ|grep 'nfs|rpc' unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 root 19949 19931 0 11:51 pts/0 00:00:00 grep --color=auto nfs|rpc [root@server1 ~]#
Similarly, NFS configuration and functional files already have proper SELinux contexts in place and need no modifications. For instance, the context on /etc/exports is:
[root@server1 ~]# ls -lZ /etc/exports -rw-r--r--. root root system_u:object_r:exports_t:s0 /etc/exports [root@server1 ~]#
However, any directory or filesystem that you want to export on the network for sharing purposes needs either the public_content_ro_t or public_content_rw_t SELinux type applied. This is only required if more than one file-sharing service, such as a combination of NFS and CIFS, NFS and FTP, or CIFS and FTP, is used.
The SELinux policy includes numerous booleans that may be of interest from an NFS operation standpoint. Most of these booleans relate to services such as HTTP, KVM and FTP that want to use mounted NFS shares to store their files. To list the NFS-specific booleans, run the getsebool command.
[root@server1 ~]# getsebool -a|egrep '^nfs|^use_nfs' nfs_export_all_ro --> on nfs_export_all_rw --> on nfsd_anon_write --> off use_nfs_home_dirs --> off [root@server1 ~]#
The output lists four booleans.
- nfs_export_all_ro Allows/disallows share exports in read-only mode.
- nfs_export_all_rw Allows/disallows share exports in read/write mode.
- nfsd_anon_write Allows/disallows the nfsd daemon to write anonymously to public directories on clients.
- use_nfs_home_dirs Allows/disallows NFS clients to mount user home directories.
Create a directory called /common and export it with the NFSv4 protocol to server2 in read/write mode with root squash disabled. Create another directory called /nfsrhcsa and export it with the NFSv4 protocol to server2 in read-only mode. Ensure that appropriate SELinux controls are enabled for the NFS service and that it is allowed through the firewall. Confirm the exports using a command and a file.
Install the NFS package called nfs-utils and create the directories to be exported.
[root@server1 ~]# yum install nfs-utils Package 1:nfs-utils-1.3.0-0.21.el7_2.1.x86_64 already installed and latest version Nothing to do [root@server1 ~]# mkdir /common /nfsrhcsa [root@server1 ~]#
Activate the SELinux booleans persistently to allow NFS exports in both read-only and read/write modes and verify the activation.
[root@server1 ~]# setsebool -P nfs_export_all_ro=1 nfs_export_all_rw=1 [root@server1 ~]# getsebool -a|grep nfs_exp nfs_export_all_ro --> on nfs_export_all_rw --> on [root@server1 ~]#
Add the NFS service persistently to the firewalld configuration to allow the NFS traffic on TCP port 2049 and load the rule.
[root@server1 ~]# firewall-cmd --add-service nfs --permanent success [root@server1 ~]# firewall-cmd --list-services dhcpv6-client dns http https mysql nfs ntp ssh [root@server1 ~]# firewall-cmd --reload success [root@server1 ~]# firewall-cmd --list-services dhcpv6-client dns http https mysql nfs ntp ssh [root@server1 ~]#
Set the rpcbind and NFS services to autostart at system reboots and start these services.
[root@server1 ~]# systemctl enable rpcbind nfs-server Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service. Created symlink from /etc/systemd/system/sockets.target.wants/rpcbind.socket to /usr/lib/systemd/system/rpcbind.socket. [root@server1 ~]# systemctl start rpcbind nfs [root@server1 ~]#
Open the /etc/exports file and add an entry for /common to export it to server2 with the read/write and no_root_squash options, and an entry for /nfsrhcsa to export it to server2 using the sync option. Then export the entries defined in /etc/exports.
[root@server1 ~]# cat /etc/exports /common server2.roggeware.nl(rw,no_root_squash) /nfsrhcsa server2.roggeware.nl(sync) [root@server1 ~]# exportfs -avr exporting server2:/nfsdata exporting server2:/nfsrhcsa [root@server1 ~]#
Show the contents of /var/lib/nfs/etab.
[root@server1 ~]# cat /var/lib/nfs/etab /nfsrhcsa server2(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_ squash) /common server2(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash) [root@server1 ~]#
The NFS service is now set up on server1. If you want to unexport one of these shares, you can do so with the exportfs command by specifying the -u option.
[root@server1 ~]# exportfs -u server2:/common [root@server1 ~]# exportfs -v /nfsrhcsa server2(ro,wdelay,root_squash,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash) [root@server1 ~]#
NFS Client Options
You have just shared a directory as an NFS share on the network. On the client, the mount command is used to connect the NFS share to the filesystem hierarchy. This command supports several options.
- ac(noac)[ac] Specifies to cache file attributes for better performance.
- async(sync)[sync] Causes the I/O to happen asynchronously.
- defaults Selects the following default options automatically: rw,suid,dev,exec,auto,nouser and async.
- fg/bg[fg] Use fg for shares that must be available. If a foreground mount attempt fails, it is retried for retry minutes. With bg, mount attempts are retried for retry minutes in the background without hampering the system boot process or hanging the client.
- hard/soft[hard] With hard, the client tries repeatedly to mount a share until it either succeeds or times out. With soft, if a mount attempt fails retrans times, an error message is displayed.
- _netdev Mounts a share only after networking has been started.
- remount Attempts to remount an already mounted share with, perhaps, different options.
- rw/ro[rw] rw allows file modifications and ro prevents them.
- sec=mode[sys] Specifies the type of security. The default uses local UIDs and GIDs. Additional choices are krb5, krb5i and krb5p.
- suid/nosuid[suid] Allows users to run setuid and setgid programs.
See man mount and man exports for all options.
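As an illustration of combining these options, here is a hedged /etc/fstab sketch (the hostname and mount point are examples, not from a real setup): bg keeps a slow server from blocking the boot, soft keeps the client from hanging forever if the server disappears, and _netdev delays the mount until networking is up.

```shell
# /etc/fstab fragment (example names only):
# server1.example.com:/common   /nfsrhcemnt   nfs   _netdev,bg,soft,rw   0 0
```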
Access and mount the /common share on server2. Create mount point /nfsrhcemnt and add an entry to the filesystem table to mount it during boot. Confirm and test the mount.
Install the NFS package and create the mount point.
[root@server2 ~]# yum install nfs-utils Installed Packages nfs-utils.x86_64 1:1.3.0-0.21.el7_2.1 @updates [root@server2 ~]# mkdir /nfsrhcemnt [root@server2 ~]#
Set the rpcbind service to autostart at system reboots and start the service.
[root@server2 ~]# systemctl enable rpcbind
Created symlink from /etc/systemd/system/sockets.target.wants/rpcbind.socket to /usr/lib/systemd/system/rpcbind.socket.
[root@server2 ~]# systemctl start rpcbind
[root@server2 ~]# systemctl status rpcbind
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; indirect; vendor preset: enabled)
Active: active (running) since Fri 2016-07-01 11:59:53 CEST; 7s ago
Process: 26411 ExecStart=/sbin/rpcbind -w ${RPCBIND_ARGS} (code=exited, status=0/SUCCESS)
Main PID: 26412 (rpcbind)
CGroup: /system.slice/rpcbind.service
└─26412 /sbin/rpcbind -w
Jul 01 11:59:52 server2 systemd[1]: Starting RPC bind service...
Jul 01 11:59:53 server2 systemd[1]: Started RPC bind service.
[root@server2 ~]#
Open /etc/fstab and add the following entry.
[root@server2 ~]# cat /etc/fstab /dev/mapper/centos-root / xfs defaults 0 0 UUID=16ad26a9-2cf6-44ac-bc0d-832be1ef8911 /boot xfs defaults 0 0 /dev/mapper/centos-swap swap swap defaults 0 0 server1.roggeware.nl:/common /nfsrhcemnt nfs _netdev,rw 0 0 <=== [root@server2 ~]#
[root@server2 ~]# mount /nfsrhcemnt [root@server2 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 8.5G 1.8G 6.8G 21% / ... /dev/vda1 497M 277M 221M 56% /boot server1.roggeware.nl:/common 8.5G 2.3G 6.3G 27% /nfsrhcemnt [root@server2 ~]#
Create file /nfsrhcemnt/nfsrhcetest and confirm the creation.
[root@server2 /]# echo aap>/nfsrhcemnt/nfsrhcetest [root@server2 /]# ls -l /nfsrhcemnt/nfsrhcetest -rw-r--r--. 1 root root 4 Jul 1 2016 /nfsrhcemnt/nfsrhcetest [root@server2 /]#
On server1, create a group, add members, create a directory, enable setgid and export the directory to server2. On server2, create the users and group, create a mountpoint, add an entry to /etc/fstab and mount the share. Confirm the mount and permissions.
Add the group, users and directory, set ownership, enable setgid and verify the configuration.
[root@server1 ~]# groupadd -g 7777 nfssdatagrp [root@server1 ~]# usermod -G nfssdatagrp user3 [root@server1 ~]# usermod -G nfssdatagrp user4 [root@server1 ~]# mkdir /nfssdata [root@server1 ~]# chown nfsnobody:nfssdatagrp /nfssdata [root@server1 ~]# chmod 2770 /nfssdata [root@server1 ~]# ll -d /nfssdata drwxrws---. 2 nfsnobody nfssdatagrp 36 Feb 12 11:15 /nfssdata [root@server1 ~]#
Add the following line to /etc/exports and export the entry.
[root@server1 ~]# cat /etc/exports /common server2.roggeware.nl(rw,no_root_squash) /nfssdata server2.roggeware.nl(rw,no_root_squash) [root@server1 ~]# exportfs -avr exporting server2:/nfssdata exporting server2:/common [root@server1 ~]#
Show contents of /var/lib/nfs/etab.
[ root@server1 ~]# cat /var/lib/nfs/etab /nfsdata server2(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash) /nfsrhcsa server2(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash) /common server2(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash) [root@server1 ~]#
On the NFS client, server2.
[root@server2 /]# groupadd -g 7777 nfssdatagrp [root@server2 /]# useradd user3;useradd user4 [root@server2 /]# echo user123|passwd --stdin user3 Changing password for user user3. passwd: all authentication tokens updated successfully. [root@server2 /]# echo user123|passwd --stdin user4 Changing password for user user4. passwd: all authentication tokens updated successfully. [root@server2 /]# usermod -G nfssdatagrp user3 [root@server2 /]# usermod -G nfssdatagrp user4
Open /etc/fstab and add the following entry.
[root@server2 /]# cat /etc/fstab /dev/mapper/centos-root / xfs defaults 0 0 UUID=16ad26a9-2cf6-44ac-bc0d-832be1ef8911 /boot xfs defaults 0 0 /dev/mapper/centos-swap swap swap defaults 0 0 server1.roggeware.nl:/common /nfsrhcemnt nfs _netdev,rw 0 0 server1.roggeware.nl:/nfssdata /nfssdatamnt nfs _netdev,rw 0 0 <=== [root@server2 /]#
Create the mountpoint and mount the share and confirm the mount.
[root@server2 /]# mkdir /nfssdatamnt/ [root@server2 /]# mount /nfssdatamnt [root@server2 /]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 8.5G 1.8G 6.8G 21% / devtmpfs 487M 0 487M 0% /dev tmpfs 497M 0 497M 0% /dev/shm tmpfs 497M 57M 441M 12% /run tmpfs 497M 0 497M 0% /sys/fs/cgroup /dev/vda1 497M 277M 221M 56% /boot tmpfs 100M 0 100M 0% /run/user/0 server1.roggeware.nl:/common 8.5G 2.3G 6.3G 27% /nfsrhcemnt server1.roggeware.nl:/nfssdata 8.5G 2.3G 6.3G 27% /nfssdatamnt [root@server2 /]#
Confirm that /nfssdatamnt has proper permissions and owning group.
[root@server2 /]# ls -ld /nfssdatamnt/ drwxrws---. 2 nfsnobody nfssdatagrp 36 Feb 12 11:15 /nfssdatamnt/ [root@server2 /]#
Log in as user3 and create a file, then log in as user4 and create another file. Verify the correct creation of the files.
[root@server2 nfssdatamnt]# su - user3 Last login: Fri Feb 12 11:14:40 CET 2016 on pts/0 [user3@server2 ~]$ cd /nfssdatamnt [user3@server2 nfssdatamnt]$ echo Aapje>filecreatedbyuser3 [user3@server2 nfssdatamnt]$ ls -l total 4 -rw-rw-r--. 1 user3 nfssdatagrp 0 Feb 12 11:15 aapuser3 -rw-rw-r--. 1 user4 nfssdatagrp 0 Feb 12 11:15 aapuser4 -rw-rw-r--. 1 user3 nfssdatagrp 6 Jul 1 2016 filecreatedbyuser3 [user3@server2 nfssdatamnt]$ exit logout [root@server2 nfssdatamnt]# su - user4 Last login: Fri Feb 12 11:15:14 CET 2016 on pts/0 [user4@server2 ~]$ cd /nfssdatamnt/ [user4@server2 nfssdatamnt]$ echo Aapje>filecreatedbyuser4 [user4@server2 nfssdatamnt]$ ls -l total 8 -rw-rw-r--. 1 user3 nfssdatagrp 0 Feb 12 11:15 aapuser3 -rw-rw-r--. 1 user4 nfssdatagrp 0 Feb 12 11:15 aapuser4 -rw-rw-r--. 1 user3 nfssdatagrp 6 Jul 1 2016 filecreatedbyuser3 -rw-rw-r--. 1 user4 nfssdatagrp 6 Jul 1 2016 filecreatedbyuser4 [user4@server2 nfssdatamnt]$ id uid=1002(user4) gid=1002(user4) groups=1002(user4),7777(nfssdatagrp) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 [user4@server2 nfssdatamnt]$
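The group-collaboration behaviour above rests on the setgid bit set with chmod 2770: files created inside such a directory inherit the directory's owning group instead of the creator's primary group. This can be demonstrated locally without NFS, in a throwaway directory with the current user's group:

```shell
# Demonstrate setgid group inheritance in a temporary directory
d=$(mktemp -d)
chmod 2770 "$d"                      # rwxrws--- : the leading 2 sets the setgid bit
touch "$d/newfile"
dir_grp=$(stat -c %G "$d")           # owning group of the directory
file_grp=$(stat -c %G "$d/newfile")  # group of the file just created inside it
if [ "$dir_grp" = "$file_grp" ]; then
    echo "setgid inheritance OK: both owned by group $dir_grp"
fi
rm -rf "$d"
```

On the real share this is why files written by user3 and user4 both end up owned by the nfssdatagrp group.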
server1 is the NFS server; server2 is the Kerberos server and NFS client.
server2 runs the Kerberos services (both KDC and admin services) for realm EXAMPLE.COM. The root user is added as an admin principal, DNS is disabled, the hosts file is updated with mappings for server1 and server2, and these servers are added as host principals (host/server1 and host/server2) to the KDC database, with their keytab files stored in the /etc directory under the name krb5.keytab.
Exam tips:
- You may have to copy an existing keytab file from a specified location to the /etc directory.
- You do not have to worry about updating the /etc/hosts file. DNS will be in place.
On the NFS server server1.
Create and export a /nfskrb5 directory with the following entry in /etc/exports.
[root@server1 ~]# cat /etc/exports /common server2.roggeware.nl(rw,no_root_squash) /nfsrhcsa server2.roggeware.nl(sync) /nfsdata server2.roggeware.nl(rw,no_root_squash) /nfskrb5 server2.roggeware.nl(sec=krb5p,rw,no_root_squash) <=== [root@server1 ~]#
Activate nfs-secure-server service at system reboot, start and verify the service.
[root@server1 ~]# systemctl enable nfs-secure-server <=== Does not work on my CentOS 7; appears to be RHEL7-only
On the NFS client server2.
Activate the nfs-secure service at system reboots, then start and verify it.
[root@server2 nfsdatamnt]# systemctl enable nfs-secure
Failed to execute operation: No such file or directory
[root@server2 nfsdatamnt]# systemctl start nfs-secure
[root@server2 nfsdatamnt]# systemctl status nfs-secure
● rpc-gssd.service - RPC security service for NFS client and server
Loaded: loaded (/usr/lib/systemd/system/rpc-gssd.service; static; vendor preset: disabled)
Active: inactive (dead)
Condition: start condition failed at Fri 2016-07-01 15:45:46 CEST; 10s ago
ConditionPathExists=/etc/krb5.keytab was not met
Mar 21 23:16:09 server2 systemd[1]: Started RPC security service for NFS client and server.
Jul 01 12:09:08 server2 systemd[1]: Started RPC security service for NFS client and server.
Jul 01 15:45:46 server2 systemd[1]: Started RPC security service for NFS client and server.
[root@server2 nfsdatamnt]#
Open /etc/fstab and add the following entry.
/dev/mapper/centos-root / xfs defaults 0 0 UUID=16ad26a9-2cf6-44ac-bc0d-832be1ef8911 /boot xfs defaults 0 0 /dev/mapper/centos-swap swap swap defaults 0 0 server1.roggeware.nl:/common /nfsrhcemnt nfs _netdev,rw 0 0 server1.roggeware.nl:/nfsdata /nfsdatamnt nfs _netdev,rw 0 0 server1.roggeware.nl:/nfskrb5 /nfskrb5mnt nfs sec=krb5p 0 0 <=== [root@server2 nfsdatamnt]#
Create the mountpoint and mount the new share.
[root@server2 /]# mkdir /nfskrb5mnt [root@server2 /]# mount /nfskrb5mnt mount.nfs: an incorrect mount option was specified [root@server2 /]
Monitoring NFS Activities
Monitoring NFS activities involves capturing and displaying read and write statistics on the NFS server and client. Tools such as nfsstat, nfsiostat and mountstats are available.
The nfsstat command can be run on both the NFS server and client to produce NFS and RPC I/O statistics.
The nfsiostat command is an NFS client utility that produces read and write statistics for each mounted share.
The mountstats command displays per-mount NFS read and write statistics on the client.
yum install nfs-utils
getsebool -a|grep nfs_export
setsebool -P nfs_export_all_ro=1 nfs_export_all_rw=1
firewall-cmd --add-service nfs
firewall-cmd --reload
firewall-cmd --list-services
systemctl enable rpcbind nfs-server
/etc/exports /common server2.example.com(rw,no_root_squash)
/etc/fstab server1.example.com:/common /nfsrhcemnt nfs _netdev,rw 0 0
/var/lib/nfs/etab
/etc/sysconfig/nfs
exportfs -avr
exportfs -u server2.example.com:/common
man exports
nfsstat nfsiostat mountstats
Chapter 21 Sharing File Storage with Samba
Samba is a networking protocol that allows Linux and UNIX systems to share file and print resources with Windows and other Linux and UNIX systems. RHEL7 includes support for Samba v4.1, which uses the SMB3 protocol that allows encrypted transport connections. The Samba service is configured with the help of a single configuration file and a few commands.
Understanding Samba
Server Message Block (SMB) is now widely known as the Common Internet File System (CIFS).
The system that shares its file and print resources is referred to as a Samba server, and the system that accesses those shared resources is referred to as a Samba client. A single system can be configured to provide both server and client functionality concurrently.
A Samba server can:
- Act as a print server for Windows systems.
- Be configured as a Primary Domain Controller (PDC) or as a Backup Domain Controller for a Samba-based PDC.
- Be set up as an Active Directory member server on a Windows network.
- Provide Windows Internet Name Service (WINS) name resolution.
Samba Daemon
Samba and CIFS are client/server protocols that employ the smbd daemon on the server to share and manage directories and filesystems. This daemon process uses TCP port 445 and is also responsible for share locking and user authentication.
Samba Commands
There are numerous commands available to establish and manage Samba. A proper understanding of the usage of these commands is essential for smooth operation.
- mount Mounts a Samba share specified at the command line or listed in the /etc/fstab file. Adds an entry to /etc/mtab.
- mount.cifs Mounts a Samba share on the client.
- pdbedit Maintains the local user database in /var/lib/samba/private/smbpasswd on the server.
- smbclient Connects to a Samba share to perform FTP-like operations.
- smbpasswd Changes Samba user passwords.
- testparm Tests the syntax of the smb.conf file.
- umount Unmounts a Samba share.
Samba Configuration and Functional Files
Samba references several files at startup and during its operation.
- /etc/samba/smb.conf Samba server configuration file.
- /etc/samba/smbusers Maintains Samba and Linux user mappings.
- /etc/sysconfig/samba Contains directives used at Samba startup.
- /var/lib/samba/private/smbpasswd Maintains Samba user passwords.
- /var/log/samba Directory location for Samba logs.
Understanding Samba Configuration File
The /etc/samba/smb.conf file is the primary configuration file for setting up a Samba server. This file has two major sections: Global Settings and Share Definitions. An excerpt from this file:
[root@server1 ~]# cat /etc/samba/smb.conf
[global]
workgroup = EXAMPLE
server string = server1 is the Samba Server Sharing /common and /smbrhcsa
interfaces = lo eth0 192.168.122.
hosts allow = 127. 192.168.122. .roggeware.nl
log file = /var/log/samba/log.%m
max log size = 5000
security = user
passdb backend = smbpasswd
[common]
comment = /common directory available to user10
hosts deny = 192.168.22.0/24
browsable = yes
path = /common
public = yes
valid users = user10
write list = user10
writeable = yes
[smbrhcsa]
comment = /smbrhcsa directory available to user1
browsable = yes
path = /smbrhcsa
public = yes
valid users = user1
write list = user1
writable = yes
[root@server1 ~]#
Check the man pages for smb.conf for details.
Samba Software Packages
There are several packages that need to be installed.
[root@server1 ~]# yum list installed|grep samba samba.x86_64 4.2.10-6.2.el7_2 @updates samba-client.x86_64 4.2.10-6.2.el7_2 @updates samba-client-libs.x86_64 4.2.10-6.2.el7_2 @updates samba-common.noarch 4.2.10-6.2.el7_2 @updates samba-common-libs.x86_64 4.2.10-6.2.el7_2 @updates samba-common-tools.x86_64 4.2.10-6.2.el7_2 @updates samba-libs.x86_64 4.2.10-6.2.el7_2 @updates [root@server1 ~]#
- samba Provides Samba server support.
- samba-client Includes utilities for operations on server and client.
- samba-common Provides Samba man pages, commands and configuration files.
- samba-libs Contains library routines used by Samba server and client.
- cifs-utils Client-side utilities for managing CIFS shares.
A Samba server needs all of these packages except for cifs-utils. On the client side, only the cifs-utils and samba-client packages are needed.
Configuring Samba Server and Client
This section presents several exercises to set up the Samba service and share a directory or file system.
SELinux Requirements for Samba Operation
Let's look at the Samba-specific SELinux contexts on processes and files, and also at the booleans that may need to be modified for Samba to function properly. The Samba daemon is confined by default and is labeled appropriately with the smbd_t domain type. This can be verified with the following.
[root@server1 ~]# ps -efZ|grep smbd system_u:system_r:smbd_t:s0 root 2301 1 0 Jun30 ? 00:00:04 /usr/sbin/smbd system_u:system_r:smbd_t:s0 root 4790 2301 0 Jun30 ? 00:00:00 /usr/sbin/smbd [root@server1 ~]#
Similarly, Samba configuration and functional files already have proper SELinux contexts in place; therefore they need no modifications. For instance, the context on the /etc/samba/smb.conf file is.
[root@server1 ~]# ls -lZ /etc/samba/smb.conf -rw-r--r--. root root system_u:object_r:samba_etc_t:s0 /etc/samba/smb.conf [root@server1 ~]#
However, any directory or filesystem that you want to share on the network with Samba alone needs the samba_share_t type applied to it. In the case of multiple file-sharing services, such as a combination of CIFS and NFS sharing the same directory or filesystem, you will need to use either the public_content_ro_t or public_content_rw_t type instead.
There is one boolean, samba_share_nfs, which must be enabled if the same directory or filesystem is shared via both NFS and CIFS. To list Samba-related booleans, run the getsebool command as follows.
[root@server1 ~]# getsebool -a|egrep 'samba|smb|cifs' cobbler_use_cifs --> off ftpd_use_cifs --> off git_cgi_use_cifs --> off git_system_use_cifs --> off httpd_use_cifs --> off ksmtuned_use_cifs --> off mpd_use_cifs --> off polipo_use_cifs --> off samba_create_home_dirs --> off samba_domain_controller --> off samba_enable_home_dirs --> off samba_export_all_ro --> on <=== Allows/disallows Samba to share in read-only mode samba_export_all_rw --> on <=== Allows/disallows Samba to share in read-write mode samba_load_libgfapi --> off samba_portmapper --> off samba_run_unconfined --> off samba_share_fusefs --> off samba_share_nfs --> on sanlock_use_samba --> off smbd_anon_write --> off <=== Allows/disallows Samba to write to public directories with public_content_rw_t type tmpreaper_use_samba --> off use_samba_home_dirs --> off virt_sandbox_use_samba --> off virt_use_samba --> off [root@server1 ~]#
Some of the booleans will be used in the exercises.
Exercise done on server1.
part1: Share the /common directory (path), which you also shared via NFS in the previous chapter. Make this share browsable with login (valid users) and write access (writeable) given only to user10 (write list) from systems in the example.com domain. This share should have read-only access (public) given to user3, and it should not be accessible (hosts deny) from the 192.168.2.0/24 network.
part2: Create a directory /smbrhcsa (path) in browsable mode (browsable) with login (valid users) and write (writable) access allocated only to user1 (write list) and read-only (public) access to user3.
Arrange proper SELinux controls and allow it through the firewall.
Install the samba and samba-client packages and create the directory /smbrhcsa.
[root@server1 ~]# yum install samba samba-client Package samba-4.2.10-6.2.el7_2.x86_64 already installed and latest version Package samba-client-4.2.10-6.2.el7_2.x86_64 already installed and latest version Nothing to do [root@server1 ~]# mkdir /smbrhcsa [root@server1 ~]#
Activate the SELinux booleans persistently to allow Samba shares in both read-only and read-write modes, and verify the activation.
[root@server1 ~]# setsebool -P samba_export_all_ro=1 samba_export_all_rw=1 samba_share_nfs=1 [root@server1 ~]# getsebool samba_export_all_ro samba_export_all_rw samba_share_nfs samba_export_all_ro --> on samba_export_all_rw --> on samba_share_nfs --> on [root@server1 ~]#
Add the SELinux file types public_content_rw_t (for /common) and samba_share_t (for /smbrhcsa) to the SELinux policy, apply the new contexts on both directories and confirm.
[root@server1 ~]# semanage fcontext -at public_content_rw_t "/common(/.*)?" [root@server1 ~]# semanage fcontext -at samba_share_t "/smbrhcsa(/.*)?" [root@server1 ~]# ls -lZd /common drwxr-xr-x. root root unconfined_u:object_r:samba_share_t:s0 /common [root@server1 ~]# restorecon /common /smbrhcsa [root@server1 ~]# ls -lZd /common /smbrhcsa drwxr-xr-x. root root unconfined_u:object_r:public_content_rw_t:s0 /common drwxr-xr-x. root root unconfined_u:object_r:samba_share_t:s0 /smbrhcsa [root@server1 ~]#
Add the Samba service persistently to the firewalld configuration to allow Samba traffic on TCP port 445.
[root@server1 ~]# firewall-cmd --permanent --add-service samba success [root@server1 ~]# firewall-cmd --reload success [root@server1 ~]# firewall-cmd --list-service dhcpv6-client dns http https mysql nfs ntp samba ssh [root@server1 ~]#
Rename /etc/samba/smb.conf to smb.conf.original and create a new smb.conf.
[root@server1 samba]# cat smb.conf
[global]
workgroup = EXAMPLE
server string = server1 is the Samba Server Sharing /common and /smbrhcsa
interfaces = lo eth0 192.168.122.
hosts allow = 127. 192.168.122. .roggeware.nl
log file = /var/log/samba/log.%m
max log size = 5000
security = user
passdb backend = smbpasswd
[common]
comment = /common directory available to user10
hosts deny = 192.168.22.0/24
browsable = yes
path = /common
public = yes
valid users = user10
write list = user10
writeable = yes
[smbrhcsa]
comment = /smbrhcsa directory available to user1
browsable = yes
path = /smbrhcsa
public = yes
valid users = user1
write list = user1
writable = yes
[root@server1 samba]#
Execute the testparm command to check for syntax errors. Use the -v switch to display other default values that are not defined in the file.
[root@server1 samba]# testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[common]"
Processing section "[smbrhcsa]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions
# Global parameters
[global]
workgroup = EXAMPLE
...
[smbrhcsa]
comment = /smbrhcsa directory available to user1
path = /smbrhcsa
valid users = user1
write list = user1
read only = No
guest ok = Yes
[root@server1 samba]#
Create Linux user user10 with password user123 and add user10 to Samba user database /var/lib/samba/private/smbpasswd and assign password user123. Show the contents of the smbpasswd file.
[root@server1 samba]# useradd user10 [root@server1 samba]# echo user123|passwd --stdin user10 Changing password for user user10. passwd: all authentication tokens updated successfully. [root@server1 samba]# smbpasswd -a user10 New SMB password: Retype new SMB password: [root@server1 samba]# cat /var/lib/samba/private/smbpasswd user10:1005:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:EACB2C6A3AAA4ED476ED2741BE8C7A4E:[U ]:LCT-5776D0FA: [root@server1 samba]#
Display (-L) the user information verbosely using the pdbedit command.
[root@server1 samba]# pdbedit -Lv -------------- Unix username: user10 NT username: Account Flags: [U ] User SID: S-1-5-21-2626351804-4208986171-2860593508-3010 Primary Group SID: S-1-5-21-2626351804-4208986171-2860593508-513 Full Name: Home Directory: \\server1\user10 HomeDir Drive: Logon Script: Profile Path: \\server1\user10\profile Domain: SERVER1 Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: never Kickoff time: never Password last set: Fri, 01 Jul 2016 22:22:18 CEST Password can change: Fri, 01 Jul 2016 22:22:18 CEST Password must change: never Last bad password : 0 Bad password count : 0 Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [root@server1 samba]#
Set the Samba service smb to autostart at system reboot, start the service and confirm the status.
[root@server1 samba]# systemctl enable smb
[root@server1 samba]# systemctl start smb
[root@server1 samba]# systemctl status smb
● smb.service - Samba SMB Daemon
Loaded: loaded (/usr/lib/systemd/system/smb.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-06-30 13:03:55 CEST; 1 day 9h ago
Main PID: 2301 (smbd)
Status: "smbd: ready to serve connections..."
CGroup: /system.slice/smb.service
├─2301 /usr/sbin/smbd
└─4790 /usr/sbin/smbd
Jul 01 21:47:10 server1 smbd[4790]: failed to retrieve printer list: NT_STATUS_UNSUCCESSFUL
Jul 01 22:00:11 server1 smbd[22389]: [2016/07/01 22:00:11.319557, 0] ../source3/printing/print_cups...ect)
Jul 01 22:00:11 server1 smbd[22389]: Unable to connect to CUPS server localhost:631 - Transport en...cted
Jul 01 22:00:11 server1 smbd[4790]: [2016/07/01 22:00:11.323154, 0] ../source3/printing/print_cups...back)
Jul 01 22:00:11 server1 smbd[4790]: failed to retrieve printer list: NT_STATUS_UNSUCCESSFUL
Jul 01 22:13:11 server1 smbd[23176]: [2016/07/01 22:13:11.799842, 0] ../source3/printing/print_cups...ect)
Jul 01 22:13:11 server1 smbd[23176]: Unable to connect to CUPS server localhost:631 - Transport en...cted
Jul 01 22:13:11 server1 smbd[4790]: [2016/07/01 22:13:11.803140, 0] ../source3/printing/print_cups...back)
Jul 01 22:13:11 server1 smbd[4790]: failed to retrieve printer list: NT_STATUS_UNSUCCESSFUL
Jul 01 22:25:59 server1 systemd[1]: Started Samba SMB Daemon.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server1 samba]#
List (-L) the shares available on the server as user10 (-U) using the smbclient command.
[root@server1 samba]# smbclient -L //localhost -U user10
Enter user10's password:
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Sharename Type Comment
--------- ---- -------
common Disk /common directory available to user10
smbrhcsa Disk /smbrhcsa directory available to user1
IPC$ IPC IPC Service (server1 is the Samba Server Sharing /common and /smbrhcsa)
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Server Comment
--------- -------
Workgroup Master
--------- -------
[root@server1 samba]#
The Samba service is now set up on server1 with /common and /smbrhcsa shared over the network, available for access and mounting on the client.
On server2, access and mount the /common share exported in the previous exercise. Create user10 with the same UID as used on server1. Create mount point /smbrhcemnt and add an entry to the filesystem table to enable mounting at boot. Confirm share access and mount using commands, test access by creating a file in the mount point and viewing it on the Samba server. Store the username and password for user10 in a file owned by root with 0400 permissions.
Install the Samba client package samba-client and cifs-utils.
[root@server2 ~]# yum install samba-client cifs-utils
Package samba-client-4.2.10-6.2.el7_2.x86_64 already installed and latest version
Package cifs-utils-6.2-7.el7.x86_64 already installed and latest version
Nothing to do
[root@server2 ~]#
Create Linux user user10 with password user123
[root@server1 ~]# id user10
uid=1005(user10) gid=1005(user10) groups=1005(user10),7778(dba)
[root@server2 ~]# useradd user10
[root@server2 ~]# echo user123|passwd --stdin user10
Changing password for user user10.
passwd: all authentication tokens updated successfully.
[root@server2 ~]# id user10
uid=1005(user10) gid=1005(user10) groups=1005(user10)
[root@server2 ~]#
List (-L) what shares are available from server1 using the smbclient command.
[root@server2 ~]# smbclient -L //server1/common -U user10
Enter user10's password:
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Sharename Type Comment
--------- ---- -------
common Disk /common directory available to user10
smbrhcsa Disk /smbrhcsa directory available to user1
IPC$ IPC IPC Service (server1 is the Samba Server Sharing /common and /smbr hcsa)
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Server Comment
--------- -------
Workgroup Master
--------- -------
[root@server2 ~]#
Logon to the /common share as user10 using the smbclient command.
[root@server2 ~]# smbclient //server1/common -U user10
Enter user10's password:
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
smb: \>
The connection is successfully established with the /common share. You can run the help command, use ls to list files, use get/mget and put/mput to transfer one or more files. Issue exit when done.
Create mount point /smbrhcemnt and mount //server1/common on it as user10.
[root@server2 ~]# mkdir /smbrhcemnt
[root@server2 ~]# mount //server1/common /smbrhcemnt -o username=user10
Password for user10@//server1/common: *******
[root@server2 ~]#
Execute the df and mount commands to check the status of the share.
[root@server2 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/centos-root        8.5G  1.8G  6.8G  21% /
devtmpfs                       487M     0  487M   0% /dev
tmpfs                          497M     0  497M   0% /dev/shm
tmpfs                          497M   57M  441M  12% /run
tmpfs                          497M     0  497M   0% /sys/fs/cgroup
/dev/vda1                      497M  277M  221M  56% /boot
server1.roggeware.nl:/common   8.5G  2.3G  6.3G  27% /nfsrhcemnt
server1.roggeware.nl:/nfsdata  8.5G  2.3G  6.3G  27% /nfsdatamnt
tmpfs                          100M     0  100M   0% /run/user/0
//server1/common               8.5G  2.3G  6.3G  27% /smbrhcemnt  <===
[root@server2 smbrhcemnt]# mount|grep smbrhce
//server1/common on /smbrhcemnt type cifs (rw,relatime,vers=1.0,cache=strict,username=user10,domain=SERVER1,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.122.101,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1)
[root@server2 smbrhcemnt]#
Create a file called /etc/samba/smbrhcecred and add the credentials for user10 to it.
[root@server2 smbrhcemnt]# cat /etc/samba/smbrhcecred
username=user10
password=user123
[root@server2 smbrhcemnt]# ls -l /etc/samba/smbrhcecred
-rwxr-xr-x. 1 root root 33 Jul 2 19:30 /etc/samba/smbrhcecred
[root@server2 smbrhcemnt]# chown root /etc/samba/smbrhcecred
[root@server2 smbrhcemnt]# chmod 0400 /etc/samba/smbrhcecred
[root@server2 smbrhcemnt]# ls -l /etc/samba/smbrhcecred
-r--------. 1 root root 33 Jul 2 19:30 /etc/samba/smbrhcecred
[root@server2 smbrhcemnt]#
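The credentials-file pattern above can be sketched generically. A minimal sketch, reusing the exercise's username/password values but writing to a temporary path instead of /etc/samba/smbrhcecred so it can run anywhere:

```shell
# Create a CIFS credentials file and lock it down to owner-read-only (0400),
# mirroring the chown/chmod steps in the exercise. mktemp stands in for
# /etc/samba/smbrhcecred (illustrative path only).
cred=$(mktemp /tmp/smbcred.XXXXXX)
cat > "$cred" <<'EOF'
username=user10
password=user123
EOF
chmod 0400 "$cred"                 # root-only read in the real exercise
perms=$(stat -c %a "$cred")        # numeric mode, expected 400
first=$(head -1 "$cred")           # first line, expected username=user10
rm -f "$cred"
```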
Open /etc/fstab and add the following entry.
[root@server2 smbrhcemnt]# cat /etc/fstab
/dev/mapper/centos-root   /      xfs    defaults    0 0
UUID=16ad26a9-2cf6-44ac-bc0d-832be1ef8911 /boot xfs defaults 0 0
/dev/mapper/centos-swap   swap   swap   defaults    0 0
server1.roggeware.nl:/common   /nfsrhcemnt   nfs   _netdev,rw   0 0
server1.roggeware.nl:/nfsdata  /nfsdatamnt   nfs   _netdev,rw   0 0
server1.roggeware.nl:/nfskrb5  /nfskrb5mnt   nfs   sec=krb5p    0 0
//server1/common  /smbrhcemnt  cifs  _netdev,rw,credentials=/etc/samba/smbrhcecred  0 0  <===
[root@server2 smbrhcemnt]#
Add the _netdev option to instruct the system to wait for networking to establish before attempting to mount this filesystem.
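As a sanity check, an fstab line carries exactly six whitespace-separated fields (device, mount point, type, options, dump, fsck order). A small sketch verifying the shape of the cifs entry from this exercise:

```shell
# The cifs fstab entry from the exercise, checked for the six required fields
# and for the _netdev option (wait for networking before mounting).
line='//server1/common /smbrhcemnt cifs _netdev,rw,credentials=/etc/samba/smbrhcecred 0 0'
nfields=$(echo "$line" | awk '{print NF}')   # expected: 6
case "$line" in
  *_netdev*) has_netdev=yes ;;
  *)         has_netdev=no  ;;
esac
```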
Create a file called smbrhcetest as user10 under /smbrhcemnt and confirm its creation by running ll on the Samba server.
ERROR: writing a file as user10 does not work here.
In this exercise you will create on server1 users user11 and user12 and a group called smbgrp. Add the users to this group, create directory /smbsdata, set the owning group to smbgrp and permissions to 0770, and share /smbsdata for group collaboration. On server2, create users user11 and user12 and group smbgrp, and add both users to this group as members. Create the /smbsdatamnt mount point for this share and add an entry to /etc/fstab. Mount the share on /smbsdatamnt and confirm the mount. Log in as user11 and user12 and create files for group collaboration.
On server1, create user11 and user12 with password user123, add group smbgrp with GID 8888, add the users as members to group smbgrp, and create the /smbsdata directory.
[root@server1 ~]# useradd user11;useradd user12
[root@server1 ~]# echo user123|passwd --stdin user11
Changing password for user user11.
passwd: all authentication tokens updated successfully.
[root@server1 ~]# echo user123|passwd --stdin user12
Changing password for user user12.
passwd: all authentication tokens updated successfully.
[root@server1 ~]# groupadd -g 8888 smbgrp
[root@server1 ~]# usermod -G smbgrp user11
[root@server1 ~]# usermod -G smbgrp user12
[root@server1 ~]# mkdir /smbsdata
[root@server1 ~]#
Set owning group on /smbsdata to smbgrp and set permissions to 0770.
[root@server1 ~]# chgrp smbgrp /smbsdata/
[root@server1 ~]# chmod 0770 /smbsdata/
[root@server1 ~]# ls -ld /smbsdata/
drwxrwx---. 2 root smbgrp 6 Jul 3 14:20 /smbsdata/
[root@server1 ~]#
Activate the SELinux booleans persistently to allow the share in both read-only and read/write modes and verify the activation.
[root@server1 ~]# setsebool -P samba_export_all_ro=1 samba_export_all_rw=1
[root@server1 ~]# getsebool samba_export_all_ro samba_export_all_rw
samba_export_all_ro --> on
samba_export_all_rw --> on
[root@server1 ~]#
Add an SELinux file context with type samba_share_t on /smbsdata to the SELinux policy rules, apply the new context on the directory and confirm. Use the command seinfo -t to list all available types.
[root@server1 ~]# semanage fcontext -at samba_share_t "/smbsdata(/.*)?"
[root@server1 ~]# restorecon -v /smbsdata
restorecon reset /smbsdata context unconfined_u:object_r:default_t:s0->unconfined_u:object_r:samba_share_t:s0
[root@server1 ~]# ls -ldZ /smbsdata
drwxrwx---. root smbgrp unconfined_u:object_r:samba_share_t:s0 /smbsdata
[root@server1 ~]#
Add the Samba service persistently to the firewalld configuration to allow Samba traffic on TCP port 445, and load the rule.
[root@server1 ~]# firewall-cmd --permanent --add-service samba
success
[root@server1 ~]# firewall-cmd --reload
success
[root@server1 ~]# firewall-cmd --list-services
dhcpv6-client dns http https mysql nfs ntp samba ssh
[root@server1 ~]#
Append the following to /etc/samba/smb.conf and verify the configuration.
[smbsdata]
comment = /smbsdata directory for group collaboration
browsable = yes
path = /smbsdata
public = no
valid users = @smbgrp
write list = @smbgrp
writeable = yes
force group = +smbgrp
create mask = 0770
[root@server1 ~]# testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[common]"
Processing section "[smbrhcsa]"
Processing section "[smbsdata]"
Loaded services file OK.
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions
# Global parameters
[global]
workgroup = EXAMPLE
...
[smbsdata]
comment = /smbsdata directory for group collaboration
path = /smbsdata
valid users = @smbgrp
write list = @smbgrp
force group = +smbgrp
read only = No
create mask = 0770
[root@server1 ~]#
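The share stanza can also be staged and inspected before touching the live configuration. A sketch writing the [smbsdata] stanza to a temporary fragment (on the real server you would append this text to /etc/samba/smb.conf and run testparm; here we only verify the fragment's shape):

```shell
# Write the [smbsdata] stanza to a temporary fragment and count its settings.
frag=$(mktemp /tmp/smbconf.XXXXXX)
cat > "$frag" <<'EOF'
[smbsdata]
    comment = /smbsdata directory for group collaboration
    path = /smbsdata
    valid users = @smbgrp
    write list = @smbgrp
    force group = +smbgrp
    writeable = yes
    create mask = 0770
EOF
section=$(head -1 "$frag")        # section header, expected [smbsdata]
nkeys=$(grep -c ' = ' "$frag")    # "key = value" lines, expected 7
rm -f "$frag"
```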
Add user11 and user12 to the Samba user database /var/lib/samba/private/smbpasswd and assign them password user123.
[root@server1 ~]# smbpasswd -a user11
New SMB password:
Retype new SMB password:
Added user user11.
[root@server1 ~]# smbpasswd -a user12
New SMB password:
Retype new SMB password:
Added user user12.
[root@server1 ~]# cat /var/lib/samba/private/smbpasswd
user10:1005:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:EACB2C6A3AAA4ED476ED2741BE8C7A4E:[U ]:LCT-5776D0FA:
user11:1006:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:EACB2C6A3AAA4ED476ED2741BE8C7A4E:[U ]:LCT-577908F8:
user12:1008:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:EACB2C6A3AAA4ED476ED2741BE8C7A4E:[U ]:LCT-577908FE:
[root@server1 ~]#
Display the user information using the pdbedit command.
[root@server1 ~]# pdbedit -Lv --------------- Unix username: user10 NT username: Account Flags: [U ] User SID: S-1-5-21-2626351804-4208986171-2860593508-3010 Primary Group SID: S-1-5-21-2626351804-4208986171-2860593508-513 Full Name: Home Directory: \\server1\user10 HomeDir Drive: Logon Script: Profile Path: \\server1\user10\profile Domain: SERVER1 Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: never Kickoff time: never Password last set: Fri, 01 Jul 2016 22:22:18 CEST Password can change: Fri, 01 Jul 2016 22:22:18 CEST Password must change: never Last bad password : 0 Bad password count : 0 Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF --------------- Unix username: user11 NT username: Account Flags: [U ] User SID: S-1-5-21-2626351804-4208986171-2860593508-3012 Primary Group SID: S-1-5-21-2626351804-4208986171-2860593508-513 Full Name: Home Directory: \\server1\user11 HomeDir Drive: Logon Script: Profile Path: \\server1\user11\profile Domain: SERVER1 Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: never Kickoff time: never Password last set: Sun, 03 Jul 2016 14:45:44 CEST Password can change: Sun, 03 Jul 2016 14:45:44 CEST Password must change: never Last bad password : 0 Bad password count : 0 Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF --------------- Unix username: user12 NT username: Account Flags: [U ] User SID: S-1-5-21-2626351804-4208986171-2860593508-3016 Primary Group SID: S-1-5-21-2626351804-4208986171-2860593508-513 Full Name: Home Directory: \\server1\user12 HomeDir Drive: Logon Script: Profile Path: \\server1\user12\profile Domain: SERVER1 Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: never Kickoff time: never Password last set: Sun, 03 Jul 2016 14:45:50 CEST Password can change: Sun, 03 Jul 2016 14:45:50 CEST Password must change: never Last bad password : 0 Bad password count : 0 Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [root@server1 ~]#
Set the Samba service to autostart at boot, start the service and verify the status.
[root@server1 ~]# systemctl enable smb
[root@server1 ~]# systemctl start smb
[root@server1 ~]# systemctl status smb
● smb.service - Samba SMB Daemon
Loaded: loaded (/usr/lib/systemd/system/smb.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2016-07-02 19:53:37 CEST; 18h ago
Main PID: 26729 (smbd)
Status: "smbd: ready to serve connections..."
CGroup: /system.slice/smb.service
├─26729 /usr/sbin/smbd
├─26731 /usr/sbin/smbd
└─26748 /usr/sbin/smbd
Jul 03 14:20:18 server1 smbd[29822]: Unable to connect to CUPS server localhost:631 - Trans...cted
Jul 03 14:20:18 server1 smbd[26731]: [2016/07/03 14:20:18.311007, 0] ../source3/printing/pri...ack)
Jul 03 14:20:18 server1 smbd[26731]: failed to retrieve printer list: NT_STATUS_UNSUCCESSFUL
Jul 03 14:33:18 server1 smbd[26731]: [2016/07/03 14:33:18.790280, 0] ../source3/printing/pri...ack)
Jul 03 14:33:18 server1 smbd[26731]: failed to retrieve printer list: NT_STATUS_UNSUCCESSFUL
Jul 03 14:46:19 server1 smbd[30289]: [2016/07/03 14:46:19.128314, 0] ../source3/printing/pri...ect)
Jul 03 14:46:19 server1 smbd[30289]: Unable to connect to CUPS server localhost:631 - Trans...cted
Jul 03 14:46:19 server1 smbd[26731]: [2016/07/03 14:46:19.130632, 0] ../source3/printing/pri...ack)
Jul 03 14:46:19 server1 smbd[26731]: failed to retrieve printer list: NT_STATUS_UNSUCCESSFUL
Jul 03 14:50:40 server1 systemd[1]: Started Samba SMB Daemon.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server1 ~]#
List the shares available on the server as user11 using the smbclient command:
[root@server1 ~]# smbclient -L //server1 -U user11
Enter user11's password:
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Sharename Type Comment
--------- ---- -------
common Disk /common directory available to user10
smbrhcsa Disk /smbrhcsa directory available to user1
smbsdata Disk /smbsdata directory for group collaboration
IPC$ IPC IPC Service (server1 is the Samba Server Sharing /common and /smbrhcsa)
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Server Comment
--------- -------
Workgroup Master
--------- -------
[root@server1 ~]#
On server2, the Samba client, create users user11 and user12 with password user123 (with UIDs/GIDs matching those on server1). Create group smbgrp with GID 8888 and add users user11 and user12 as members to this group.
[root@server2 ~]# useradd user11;useradd user12
[root@server2 ~]# echo user123|passwd --stdin user11
Changing password for user user11.
passwd: all authentication tokens updated successfully.
[root@server2 ~]# echo user123|passwd --stdin user12
Changing password for user user12.
passwd: all authentication tokens updated successfully.
[root@server2 ~]# groupadd -g 8888 smbgrp
[root@server2 ~]# usermod -G smbgrp user11
[root@server2 ~]# usermod -G smbgrp user12
[root@server2 ~]#
List (-L) what shares are available from server1 using the smbclient command.
[root@server2 ~]# smbclient -L //server1 -U user11
Enter user11's password:
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Sharename Type Comment
--------- ---- -------
common Disk /common directory available to user10
smbrhcsa Disk /smbrhcsa directory available to user1
smbsdata Disk /smbsdata directory for group collaboration
IPC$ IPC IPC Service (server1 is the Samba Server Sharing /common and /smbrhcsa)
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
Server Comment
--------- -------
Workgroup Master
--------- -------
[root@server2 ~]#
Logon to the /smbsdata share as user11 using the smbclient command.
[root@server2 ~]# smbclient //server1/smbsdata -U user11
Enter user11's password:
Domain=[EXAMPLE] OS=[Windows 6.1] Server=[Samba 4.2.10]
smb: \>
Create the /smbsdatamnt mount point and mount /smbsdata on it as user11.
[root@server2 ~]# mkdir /smbsdatamnt
[root@server2 ~]# mount //server1/smbsdata /smbsdatamnt -o username=user11
Password for user11@//server1/smbsdata: *******
[root@server2 ~]#
Execute the df and mount commands to check the status of the share.
[root@server2 ~]# df
Filesystem              1K-blocks    Used Available Use% Mounted on
/dev/mapper/centos-root   8869888 1822608   7047280  21% /
//server1/common          8869888 2331940   6537948  27% /smbrhcemnt
tmpfs                      101692       0    101692   0% /run/user/0
//server1/smbsdata        8869888 2331940   6537948  27% /smbsdatamnt  <===
[root@server2 ~]# mount
...
/dev/vda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
server1.roggeware.nl:/common on /nfsrhcemnt type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.102,local_lock=none,addr=192.168.122.101,_netdev)
server1.roggeware.nl:/nfsdata on /nfsdatamnt type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.102,local_lock=none,addr=192.168.122.101,_netdev)
//server1/common on /smbrhcemnt type cifs (rw,relatime,vers=1.0,cache=strict,username=user10,domain=SERVER1,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.122.101,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1)
//server1/smbsdata on /smbsdatamnt type cifs (rw,relatime,vers=1.0,cache=strict,username=user11,domain=SERVER1,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.122.101,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1)
[root@server2 ~]#
Create /etc/samba/smbsdatacred file and add the credentials for user11 to it so that this user is able to mount this share. Set ownership to root and file permissions to 0400.
[root@server2 ~]# cat /etc/samba/smbsdatacred
username=user11
password=user123
[root@server2 ~]# chown root /etc/samba/smbsdatacred
[root@server2 ~]# chmod 0400 /etc/samba/smbsdatacred
[root@server2 ~]# ls -l /etc/samba/smbsdatacred
-r--------. 1 root root 33 Jul 3 22:18 /etc/samba/smbsdatacred
[root@server2 ~]#
Add the following entry to /etc/fstab to mount the share at reboot. Perform umount and mount to test the new fstab entry.
[root@server2 ~]# cat /etc/fstab
...
server1.roggeware.nl:/common   /nfsrhcemnt   nfs   _netdev,rw   0 0
server1.roggeware.nl:/nfsdata  /nfsdatamnt   nfs   _netdev,rw   0 0
server1.roggeware.nl:/nfskrb5  /nfskrb5mnt   nfs   sec=krb5p    0 0
//server1/common    /smbrhcemnt   cifs   rw,credentials=/etc/samba/smbrhcecred            0 0
//server1/smbsdata  /smbsdatamnt  cifs   _netdev,rw,credentials=/etc/samba/smbsdatacred   0 0
[root@server2 ~]# umount /smbsdatamnt
[root@server2 ~]# mount /smbsdatamnt
[root@server2 ~]# mount|grep smbsdatamnt
//server1/smbsdata on /smbsdatamnt type cifs (rw,relatime,vers=1.0,cache=strict,username=user11,domain=SERVER1,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.122.101,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1)
[root@server2 ~]#
Create a file called smbsdatatest11 as user11 and another file called smbsdatatest12 as user12 under /smbsdatamnt. List the directory contents to ensure both files have owning group smbgrp.
[root@server2 ~]# ls -l /smbsdatamnt/
total 0
[root@server2 ~]# su - user11
Last login: Sun Jul 3 22:29:07 CEST 2016 on pts/0
[user11@server2 ~]$ touch /smbsdatamnt/smbsdatatest11;exit
logout
[root@server2 ~]# su - user12
[user12@server2 ~]$ touch /smbsdatamnt/smbsdatatest12;exit
logout
[root@server2 ~]# ls -l /smbsdatamnt/
total 0
-rw-rw----. 1 user11 smbgrp 0 Jul 3 22:31 smbsdatatest11
-rw-rw----. 1 user11 smbgrp 0 Jul 3 22:32 smbsdatatest12
[root@server2 ~]#
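Server-side, Samba's "force group = +smbgrp" is what stamps smbgrp on the new files. On a plain local filesystem the setgid bit on a directory gives a similar effect, which can be demonstrated without Samba (a sketch in /tmp; directory names are illustrative):

```shell
# With setgid (mode 2770) set on a directory, new files created inside it
# inherit the directory's owning group rather than the creator's primary group.
dir=$(mktemp -d /tmp/collab.XXXXXX)
chmod 2770 "$dir"                  # rwxrwx--- plus setgid on the directory
touch "$dir/demo"
dperm=$(stat -c %a "$dir")         # numeric mode, expected 2770
dgrp=$(stat -c %G "$dir")          # directory's owning group
fgrp=$(stat -c %G "$dir/demo")     # file's group, matches the directory's
rm -rf "$dir"
```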
This exercise assumes that server2 is running Kerberos services (both KDC and admin services) for realm example.com, the root user is added as an admin principal, DNS is disabled, and the hosts file is updated with appropriate mappings for both server1 and server2. Samba services run on server1.
In this exercise you will add the Samba server as a cifs principal, produce a keytab for it and store it locally. Add appropriate entries to the Samba server for a share and test access on the client.
On the Kerberos server server2.
Login as the root principal and add server1 as a cifs principal to the KDC database.
# kadmin -p root/admin
Authenticating as principal root/admin with password.
Password for root/admin@EXAMPLE.COM:
kadmin: addprinc -randkey cifs/server1.example.com
WARNING: no policy specified for cifs/server1.example.com@EXAMPLE.COM: defaulting to no policy
Principal "cifs/server1.example.com@EXAMPLE.COM" created.
Generate a keytab for the new principal and store it in the /etc/krb5.keytab file.
kadmin: ktadd -k /etc/krb5.keytab cifs/server1.example.com
Ensure that the file has the ownership and owning group set to root and permissions to 0600.
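The permission requirement can be sketched without Kerberos present. A dummy file stands in for /etc/krb5.keytab so the sketch runs unprivileged:

```shell
# Restrict the keytab to root-only read/write (0600), as required above.
kt=$(mktemp /tmp/krb5kt.XXXXXX)
# chown root:root "$kt"            # run as root in the real exercise
chmod 0600 "$kt"
ktperm=$(stat -c %a "$kt")         # numeric mode, expected 600
rm -f "$kt"
```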
Copy the keytab file to the Samba server server1.
#scp -pr /etc/krb5.keytab server1:/etc
On server1.
Follow the steps provided in exercise "Provide Network Shares to Samba Client" to create and share the /smbkrb5 directory for user7 access, with security set to ADS and the Kerberos realm set to EXAMPLE.COM.
On server2.
Confirm access to the share by logging in to it using Kerberos (-k) credentials.
#smbclient //server1/smbkrb5 -k -U user7
Create /smbkrb5mnt mount point.
#mkdir /smbkrb5mnt
Mount /smbkrb5 on to the /smbkrb5mnt mount point as user7.
#mount //server1/smbkrb5 /smbkrb5mnt -o username=user7,sec=krb5,rw
Verify the mount with the df and mount commands. Open the /etc/fstab file and add the following entry.
//server1/smbkrb5 /smbkrb5mnt cifs username=user7,rw,sec=krb5 0 0
Create a file called smbkrb5test as user7 under /smbkrb5mnt and check its existence on the Samba server.
Chapter 22 Hosting Websites with Apache
Apache Commands
apachectl Starts, stops and checks the status of the httpd process. systemctl may also be used.
htpasswd Creates and updates files to store usernames and passwords for basic authentication of Apache users.
httpd Server program for the Apache web service.
-t verify configuration file
-D DUMP_VHOSTS show the parsed virtual host configuration
Apache Configuration Files
/etc/httpd Default directory for all configuration files.
/run/httpd Runtime information.
/usr/lib64/httpd/modules Additional Apache modules.
/var/log/httpd Apache logfiles.
/usr/share/doc/httpd-2.4.6
Apache Software Packages
httpd
httpd-manual HTML pages in /usr/share/httpd/manual, accessible with links or elinks.
httpd-tools
Configuring Apache Web Servers
system-config-selinux SELinux Configuration Tool
getsebool -a|grep httpd
firewall-cmd --permanent --add-service=http
firewall-cmd --reload
firewall-cmd --permanent --add-port=8900/tcp
semanage fcontext -at httpd_sys_content_t "/var/vhost2(/.*)?"
restorecon -Rv /var/vhost2
elinks http://localhost
Understanding and Configuring Apache Web Servers over SSL/TLS
CA Certificate Authority. CSR Certificate Signing Request.
OpenSSL logfiles are in /etc/httpd/logs which is a symbolic link to /var/log/httpd/
Software Packages
mod_ssl openssl
Commands
openssl list-standard-commands
openssl genpkey -algorithm rsa -pkeyopt rsa_keygen_bits:2048 -out server1.example.com.key Generate private key.
openssl req -new -key server1.example.com.key -out server1.example.com.csr Generate certificate signing request.
openssl x509 -req -days 120 -signkey server1.example.com.key -in server1.example.com.csr -out server1.example.com.crt Create self-signed certificate.
openssl s_client -connect localhost:443 -state
httpd -D DUMP_VHOSTS
restorecon -Rv /var/www/html
firewall-cmd --permanent --add-service https
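The key, CSR and self-signed certificate steps above can be run end to end as a sketch against a temporary directory (assumes the openssl CLI is installed; -subj avoids the interactive CSR prompts):

```shell
# Key -> CSR -> self-signed certificate, in a throwaway directory.
d=$(mktemp -d /tmp/tls.XXXXXX)
# 1. Generate a 2048-bit RSA private key.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
    -out "$d/server1.example.com.key" 2>/dev/null
# 2. Create a certificate signing request for that key.
openssl req -new -key "$d/server1.example.com.key" \
    -subj "/CN=server1.example.com" -out "$d/server1.example.com.csr"
# 3. Self-sign the CSR for 120 days.
openssl x509 -req -days 120 -signkey "$d/server1.example.com.key" \
    -in "$d/server1.example.com.csr" -out "$d/server1.example.com.crt" 2>/dev/null
# Inspect the resulting certificate's subject.
subject=$(openssl x509 -noout -subject -in "$d/server1.example.com.crt")
rm -rf "$d"
```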
Files
/etc/httpd/conf.d/ssl.conf Installed by package mod_ssl. /etc/pki/tls/certs Default location for certificates.
Chapter 23 Sending and Receiving Electronic Mail
MUA Mail User Agent. MSA Mail Submission Agent. MTA Mail Transport Agent. MDA Mail Delivery Agent.
POP Post Office Protocol. IMAP Internet Message Access Protocol.
Postfix daemons
master, nqmgr, pickup, smtpd
Postfix Commands
alternatives Displays and sets the default MTA.
--set mta
--display mta
mail/mailx Sends and receives email.
postalias/newaliases Processes the alias database (/etc/aliases by default).
postconf Displays and modifies the Postfix configuration stored in the main.cf file.
-d Display default settings.
-n Display settings defined in main.cf.
postfix Controls operation of Postfix services, including start, stop, health check, and configuration reload.
check Check main.cf for syntax errors.
postmap Processes and converts some configuration files into Postfix-compatible databases.
postqueue/mailq Lists and controls the Postfix queue.
Files
/etc/postfix Postfix directory with configuration files.
/etc/postfix/access Establishes access control based on email address, host, domain or network address. man 5 access.
/etc/postfix/access.db Run postmap /etc/postfix/access to update this database.
/etc/postfix/canonical Run postmap /etc/postfix/canonical to update this database. man 5 canonical.
/etc/postfix/generic Establishes mappings for local and non-local mail addresses. Syntax identical to canonical.
/etc/postfix/main.cf
/etc/postfix/master.cf
/etc/postfix/relocated
/etc/postfix/transport
/etc/postfix/virtual
/etc/aliases
/var/lib/postfix /var/log/maillog
/var/spool/postfix /var/spool/mail
Managing Postfix
SELinux requirements for Postfix operation
ps -eZ|grep postfix
ls -lZd /etc/postfix /var/lib/postfix /var/spool/postfix
semanage port -l|grep smtp
getsebool -a|grep postfix
Packages
postfix
Configuring DNS
Determining the IP address of a hostname is referred to as forward name resolution, or simply name resolution, and determining the hostname associated with an IP address is referred to as reverse name resolution.
DNS Name Space and Domains
The DNS name space is a hierarchical organization of all the domains on the internet. The root of the name space is represented by a dot. The hierarchy right below the root represents top-level domains (TLDs) that are either generic, such as .com, .net, .org and .gov (referred to as gTLDs), or specific to a two-letter country code, such as .ca and .uk (referred to as ccTLDs). Sub-domains fall under domains and are separated by a dot.
BIND Software Packages and Service Daemon
bind Provides software to configure a DNS server.
bind-libs Contains library files for the bind and bind-utils packages.
bind-utils Comprises resolver tools, such as dig, host and nslookup.
The named daemon listens on well-known port 53 and supports both the TCP and UDP protocols. See /usr/share/doc/bind for example named configuration files.
DNS Commands
systemctl enable named systemctl start named named-checkconf
/etc/named.conf
/usr/share/doc/bind
/var/log/messages
/etc/named.rfc1912.zones
/var/named/ Zone files
SELinux requirements
ps -eZ|grep named Shows domain type named_t.
semanage port -l|grep dns
getsebool -a|grep ^named
Chapter 25 Managing MariaDB
Packages
mariadb Provides MariaDB client programs and a configuration file.
mariadb-server Contains the MariaDB server, tools, and configuration and logfiles.
mariadb-libs Comprises essential library files for MariaDB client programs.
The MariaDB server package also installs the mysqld daemon binary. This daemon process listens on port 3306 and supports both the TCP and UDP protocols. It must run on the system to allow client access.
MariaDB Commands
mysql Command line shell interface for administration and queries.
mysql_secure_installation Improves the security of a MariaDB installation.
mysqldump Backs up or restores one or more tables or databases.
MariaDB Configuration Files
/etc/my.cnf Global defaults. Primary configuration file.
/etc/my.cnf.d/ Directory for configuration files.
/etc/my.cnf.d/client.cnf
/etc/my.cnf.d/mysql-clients.cnf
/etc/my.cnf.d/server.cnf
/var/log/mariadb/mariadb.log
SELinux Requirements for MariaDB Operation
By default the mysqld daemon runs confined in its own domain with domain-type mysqld_t.
ps -eZ|grep mysqld
system_u:system_r:mysqld_t:s0 5245 ? 00:06:00 mysqld
The SELinux file type associated with the mysqld daemon file is mysqld_exec_t, with the configuration files in the /etc/my.cnf.d directory mysqld_etc_t, with the database files in the /var/lib/mysql directory mysqld_db_t, and with the logfiles in /var/log/mariadb mysqld_log_t.
ll -dZ /usr/libexec/mysqld /etc/my.cnf.d /var/lib/mysql /var/log/mariadb
drwxr-xr-x. root root system_u:object_r:mysqld_etc_t:s0 /etc/my.cnf.d
-rwxr-xr-x. root root system_u:object_r:mysqld_exec_t:s0 /usr/libexec/mysqld
drwxr-xr-x. mysql mysql system_u:object_r:mysqld_db_t:s0 /var/lib/mysql
drwxr-x---. mysql mysql system_u:object_r:mysqld_log_t:s0 /var/log/mariadb
semanage port -l|grep mysql
mysqld_port_t tcp 1186, 3306, 63132-63164
getsebool -a|grep mysql
mysql_connect_any --> off
selinuxuser_mysql_connect_enabled --> off
Install MariaDB
yum install mariadb-server
systemctl enable mariadb
systemctl start mariadb
mysql_secure_installation
firewall-cmd --permanent --add-service mysql;firewall-cmd --reload
Start the MariaDB Shell and Understand its Usage
mysql -u root -p Start the MariaDB shell.
help
status
Subcommands for Database and Table Operations
create, drop, show, delete, describe, insert, rename, select, update
show databases;
create database database;
use database;
create table scientists(Sno int,FirstName varchar(20), LastName varchar(20), City varchar(20),Country varchar(20),Age int);
describe scientists;
insert into scientists values('1','Albert','Einstein','Ulm','Germany','76');
select * from scientists where FirstName='Albert';
select * from scientists where Age>77;
select * from scientists where Country='Poland' or Country='Germany';
select * from scientists order by FirstName;
select * from scientists order by LastName desc;
select * from scientists where Country like 'U%';
select * from scientists where Age like '7%';
rename table scientists to science;
update science set FirstName='Benjamin',LastName='Franklin' where Sno='1';
delete from science where Sno='1' or Sno='7';
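The same create/insert/select/rename flow can be rehearsed without a running MariaDB server. A hypothetical stand-in using the sqlite3 CLI (assumes sqlite3 is installed; the SQL dialects differ slightly, e.g. sqlite3 has no "use <db>;" and renames tables with "alter table ... rename to ..." instead of MariaDB's "rename table"):

```shell
# Table create/insert/select/rename, against a throwaway sqlite3 database file.
db=$(mktemp /tmp/rhcedb.XXXXXX)
sqlite3 "$db" "create table scientists(Sno int, FirstName varchar(20),
  LastName varchar(20), City varchar(20), Country varchar(20), Age int);"
sqlite3 "$db" "insert into scientists values(1,'Albert','Einstein','Ulm','Germany',76);"
found=$(sqlite3 "$db" "select FirstName from scientists where Country='Germany';")
sqlite3 "$db" "alter table scientists rename to science;"   # MariaDB: rename table scientists to science;
after=$(sqlite3 "$db" "select LastName from science where Sno=1;")
rm -f "$db"
```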
Backing Up and Restoring a Database or Table
mysqldump -u root -p --all-databases >db.all.sql Backup all databases.
mysqldump -u root -p rhce1 >db.rhce1.sql Backup a specific database.
mysql: create database rhce1 Create the database to be restored (if it does not exist).
mysql -u root -p rhce1 <db.rhce1.sql Restore a specific database.
mysqldump -u root -p DB1 tbl1 tbl2 >db.tbl12.sql Backup specific tables.
mysql -u root -p DB1 tbl1 <db.tbl12.sql Restore a specific table.