Operating Systems > Linux: If I don't have RAID disks, can I shut down the dmraid device-mapper? Post 302512202 by Corona688 on Friday 8th of April 2011 04:23:33 PM
Is an extra line on bootup really that important to you? These things are loaded at boot to make the system more flexible, and they aren't a drain on resources; you could do more harm than good digging deep enough to remove them completely.
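If you do want to confirm that dmraid has nothing to manage on a box before deciding to leave it alone, a minimal check from a root shell is sketched below (assuming the usual dmraid and dmsetup tools are installed; the exact output strings vary between versions):

    dmraid -r    # list block devices carrying fakeRAID metadata;
                 # typically reports "no raid disks" when there are none
    dmraid -s    # summarize any discovered RAID sets

    dmsetup ls   # list active device-mapper devices; an empty list means
                 # nothing is mapped, so there is nothing to shut down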
 

10 More Discussions You Might Find Interesting

1. Red Hat

IBM RAID disks

We have a Red Hat Linux server running on IBM x445 hardware. There are external disks in an IBM EXP300 disk enclosure. The system is running RAID 5. One of the four IBM disks (73.4 GB 10k FRU 06P5760) has become faulty. The system is still up and running OK because of the RAID. In that same EXP300... (3 Replies)
Discussion started by: pdudley

2. Solaris

Move disks to different StorEdge, keeping RAID

Hi. I need to move a 5 disk RAID5 array from a SE3310 box to a different SE3310 array. After installing the disks in the "new" StorEdge device, I would like to be able to have access to the data which is on the RAID. Essentially, the question is: how can this be done? I checked... (5 Replies)
Discussion started by: alexs77

3. Red Hat

device-mapper-multipath path [undef]

I have an HP blade with QLogic HBAs connected to an EVA8000. I have downloaded the latest multipath.conf from HP's website. The drive presented to the server appears to be configured and working, except the output of "multipath -l" shows [undef] for all paths. What is causing this output? mpath0... (2 Replies)
Discussion started by: manzier
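One detail worth knowing about the multipath CLI in that discussion: plain "multipath -l" reports the topology from sysfs and device-mapper without running the path checkers, which is why path state can show as [undef]; "multipath -ll" also polls the checkers. A quick comparison (assuming the stock device-mapper-multipath tools):

    multipath -l     # topology only; checker state is not refreshed,
                     # so paths may be reported as [undef]
    multipath -ll    # runs the path checkers too, showing real
                     # per-path state such as [active][ready]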

4. Red Hat

Device Mapper Notations and LVM

Hi, I have a question regarding device-mapper notations and their corresponding LVM volumes. I have configured a volume group with two logical volumes in it, as root and swap. The entries in the /etc/fstab file show the dm notations, namely /dev/mapper/VolGroup00-LogVol01... (2 Replies)
Discussion started by: kanna_geekworkz
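For reference, the /dev/mapper names in fstab, the LVM volume paths, and the kernel's dm-N nodes are three names for the same device. A quick way to line them up (standard lvm2/dmsetup commands; VolGroup00/LogVol01 taken from the post above):

    ls -l /dev/mapper/                    # dm nodes, e.g. VolGroup00-LogVol01
    ls -l /dev/VolGroup00/                # LVM symlinks pointing at the same nodes
    dmsetup info -c -o name,major,minor   # minor N means the kernel node is /dev/dm-N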

5. Solaris

Solaris not recognizing RAID 5 disks

I've just installed Sol 10 Update 9 on a Sun 4140 server and have a RAID 1 configuration (2 x 136 GB drives) for the OS, and have created a RAID 5 array (6 x 136 GB drives). When I log into the system I am unable to see the RAID 5 disks at all. I've tried using the devfsadm command but no luck, and... (9 Replies)
Discussion started by: goose25
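The usual first steps in that situation are sketched below (standard Solaris 10 commands; whether the volume then shows up depends on the controller):

    devfsadm -Cv    # prune stale entries and rebuild /dev and /devices
    cfgadm -al      # confirm the controller actually sees the targets
    format          # a fresh hardware RAID volume often just needs a label here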

6. AIX

SCSI PCI-X RAID controller card: RAID 5 AIX disks disappeared

Hello, I have a SCSI PCI-X RAID controller card on which I had created a disk array of 3 disks. When I typed lspv, I used to see 3 physical disks (two local disks and one RAID 5 disk). Suddenly the RAID 5 disk array disappeared, so the hardware engineer thought the problem was with the SCSI... (0 Replies)
Discussion started by: filosophizer
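A sketch of the standard AIX triage for a vanished array (all stock AIX commands):

    cfgmgr              # rescan for newly attached or recovered devices
    lsdev -Cc disk      # list disk devices and their Available/Defined state
    lspv                # list the physical volumes LVM currently knows about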

7. Red Hat

Device-mapper behaviour booting with init=/bin/bash

Good morning. Recently we needed to change the password on a Red Hat 6.5 system for which no one knew the root password. Starting the system with the init=/bin/bash method took us to the following scenario: system_vg active with only root_lv and tmpfs mounted. Our entries in fstab are like... (1 Reply)
Discussion started by: Ikaro0
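For reference, the usual sequence once you are in the init=/bin/bash shell is sketched below. Only root_lv shows up because nothing has activated the remaining logical volumes yet; the initramfs activates just what it needs to mount /. (Assumes a standard RHEL 6 layout; the relabel step only matters with SELinux enabled.)

    vgchange -ay             # activate any remaining logical volumes, if needed
    mount -o remount,rw /    # root is mounted read-only at this point
    passwd root              # set the new password
    touch /.autorelabel      # have SELinux relabel the filesystem on next boot
    sync                     # flush to disk before forcing a reboot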

8. Solaris

Hardware RAID using three disks

Dear All, please find the output of the command below:
# raidctl -l
Controller: 1
        Volume:c1t0d0
        Disk: 0.0.0
        Disk: 0.1.0
        Disk: 0.3.0
# raidctl -l c1t0d0
Volume  Size  Stripe  Status  Cache  RAID
        Sub   Size ... (10 Replies)
Discussion started by: jegaraman

9. UNIX for Advanced & Expert Users

Command to see the logical volume path, device mapper path and its corresponding dm device path

Currently I am using this laborious command: lvdisplay | awk '/LV Path/ {p=$3} /LV Name/ {n=$3} /VG Name/ {v=$3} /Block device/ {d=$3; sub(".*:", "/dev/dm-", d); printf "%s\t%s\t%s\n", p, "/dev/mapper/"v"-"n, d}' Would like to know if there is any shorter method to get this mapping of... (2 Replies)
Discussion started by: royalibrahim
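A shorter alternative to the lvdisplay/awk one-liner above, assuming the installed lvm2 is recent enough to know the lv_dm_path and lv_kernel_minor report fields (check with lvs -o help):

    lvs --noheadings -o lv_path,lv_dm_path,lv_kernel_minor \
        | awk '{printf "%s\t%s\t/dev/dm-%s\n", $1, $2, $3}'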

10. Ubuntu

md0 RAID doesn't see my folders

I suddenly don't see my folders in /mnt/md0. What can be the reason?
mdadm --detail /dev/md*
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jan 18 09:54:27 2019
     Raid Level : raid1
     Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953383488 (1862.89 GiB... (8 Replies)
Discussion started by: tomislav91
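Since mdadm --detail reports the array itself as healthy, the first thing to rule out is that /mnt/md0 is simply an empty mount point, i.e. the filesystem on /dev/md0 was never mounted after a reboot. A quick check with standard util-linux tools:

    findmnt /mnt/md0          # no output means nothing is mounted there
    mount /dev/md0 /mnt/md0   # mount it and look for the folders again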
BOOTUP(7)							      bootup								 BOOTUP(7)

NAME
       bootup - System bootup process

DESCRIPTION
       A number of different components are involved in the system boot. Immediately after power-up, the
       system BIOS will do minimal hardware initialization, and hand control over to a boot loader stored
       on a persistent storage device. This boot loader will then invoke an OS kernel from disk (or the
       network). In the Linux case, this kernel (optionally) extracts and executes an initial RAM disk
       image (initrd), such as generated by dracut(8), which looks for the root file system (possibly
       using systemd(1) for this). After the root file system is found and mounted, the initrd hands over
       control to the host's system manager (such as systemd(1)) stored on the OS image, which is then
       responsible for probing all remaining hardware, mounting all necessary file systems and spawning
       all configured services.

       On shutdown, the system manager stops all services, unmounts all file systems (detaching the
       storage technologies backing them), and then (optionally) jumps back into the initrd code which
       unmounts/detaches the root file system and the storage it resides on. As a last step, the system
       is powered down.

       Additional information about the system boot process may be found in boot(7).

SYSTEM MANAGER BOOTUP
       At boot, the system manager on the OS image is responsible for initializing the required file
       systems, services and drivers that are necessary for operation of the system. On systemd(1)
       systems, this process is split up in various discrete steps which are exposed as target units.
       (See systemd.target(5) for detailed information about target units.) The boot-up process is
       highly parallelized so that the order in which specific target units are reached is not
       deterministic, but still adheres to a limited amount of ordering structure.

       When systemd starts up the system, it will activate all units that are dependencies of
       default.target (as well as recursively all dependencies of these dependencies). Usually,
       default.target is simply an alias of graphical.target or multi-user.target, depending on whether
       the system is configured for a graphical UI or only for a text console. To enforce minimal
       ordering between the units pulled in, a number of well-known target units are available, as
       listed on systemd.special(7).

       The following chart is a structural overview of these well-known units and their position in the
       boot-up logic. The arrows describe which units are pulled in and ordered before which other
       units. Units near the top are started before units nearer to the bottom of the chart.

           local-fs-pre.target
                    |
                    v
           (various mounts and   (various swap   (various cryptsetup
            fsck services...)     devices...)       devices...)      (various low-level   (various low-level
                    |                  |                 |            services: udevd,     API VFS mounts:
                    v                  v                 v            tmpfiles, random     mqueue, configfs,
             local-fs.target      swap.target   cryptsetup.target     seed, sysctl, ...)     debugfs, ...)
                    |                  |                 |                    |                    |
                    \__________________|_________________|____________________|____________________/
                                                         |
                                                         v
                                                  sysinit.target
                                                         |
                         ________________________________|________________________________
                        /                |               |               |                \
                        v                v               |               v                 v
                    (various         (various            |           (various        rescue.service
                     timers...)       paths...)          |            sockets...)          |
                        |                |               |               |                 v
                        v                v               |               v           rescue.target
                  timers.target    paths.target          |         sockets.target
                        |                |               |               |
                        \________________|_______________|_______________/
                                                         |
                                                         v
                                                   basic.target
                                                         |
                         ________________________________|                    emergency.service
                        /                |               |                            |
                        v                v               v                            v
                    display-      (various system  (various system            emergency.target
                manager.service       services         services)
                        |           required for         |
                        |          graphical UIs)        v
                        |                |        multi-user.target
                        |                |               |
                        \________________|_______________/
                                         |
                                         v
                                 graphical.target

       Target units that are commonly used as boot targets are emphasized. These units are good choices
       as goal targets, for example by passing them to the systemd.unit= kernel command line option (see
       systemd(1)) or by symlinking default.target to them.

       timers.target is pulled-in by basic.target asynchronously. This allows timer units to depend on
       services which become only available later in boot.

BOOTUP IN THE INITIAL RAM DISK (INITRD)
       The initial RAM disk implementation (initrd) can be set up using systemd as well. In this case,
       boot up inside the initrd follows the following structure.

       The default target in the initrd is initrd.target. The bootup process begins identical to the
       system manager bootup (see above) until it reaches basic.target. From there, systemd approaches
       the special target initrd.target. Before any file systems are mounted, it must be determined
       whether the system will resume from hibernation or proceed with normal boot. This is accomplished
       by systemd-hibernate-resume@.service which must be finished before local-fs-pre.target, so no
       filesystems can be mounted before the check is complete.

       When the root device becomes available, initrd-root-device.target is reached. If the root device
       can be mounted at /sysroot, the sysroot.mount unit becomes active and initrd-root-fs.target is
       reached. The service initrd-parse-etc.service scans /sysroot/etc/fstab for a possible /usr mount
       point and additional entries marked with the x-initrd.mount option. All entries found are mounted
       below /sysroot, and initrd-fs.target is reached. The service initrd-cleanup.service isolates to
       the initrd-switch-root.target, where cleanup services can run. As the very last step, the
       initrd-switch-root.service is activated, which will cause the system to switch its root to
       /sysroot.

                      : (beginning identical to above)
                      :
                      v
                basic.target
                      |                                     emergency.service
               _______/\________________                           |
              /                         \                          v
              |                          v                  emergency.target
              |              initrd-root-device.target
              |                          |
              |                          v
              |                    sysroot.mount
              |                          |
              |                          v
              |               initrd-root-fs.target
              |                          |
              |                          v
              v              initrd-parse-etc.service
      (custom initrd                     |
       services...)                      v
              |              (sysroot-usr.mount and
              |               various mounts marked
              |               with fstab option
              |               x-initrd.mount...)
              |                          |
              |                          v
              |                  initrd-fs.target
              \__________________________|
                                         |
                                         v
                                  initrd.target
                                         |
                                         v
                             initrd-cleanup.service
                                  isolates to
                            initrd-switch-root.target
                                         |
                                         v
               __________________________/\______________
              /                                          \
              |                                           v
              |                       initrd-udevadm-cleanup-db.service
              v                                           |
      (custom initrd                                      |
       services...)                                       |
              \___________________________________________|
                                         |
                                         v
                            initrd-switch-root.target
                                         |
                                         v
                            initrd-switch-root.service
                                         |
                                         v
                              Transition to Host OS

SYSTEM MANAGER SHUTDOWN
       System shutdown with systemd also consists of various target units with some minimal ordering
       structure applied:

                         (conflicts with        (conflicts with
                           all system            all file system
                            services)           mounts, swaps,
                               |                  cryptsetup
                               |                 devices, ...)
                               |                      |
                               v                      v
                        shutdown.target         umount.target
                               |                      |
                               \____________   ______/
                                            \ /
                                             v
                                   (various low-level
                                        services)
                                             |
                                             v
                                       final.target
                                             |
             ________________________________|_________________________________
            /                  |                          |                     \
            v                  v                          v                      v
 systemd-reboot.service  systemd-poweroff.service  systemd-halt.service  systemd-kexec.service
            |                  |                          |                      |
            v                  v                          v                      v
     reboot.target       poweroff.target             halt.target           kexec.target

       Commonly used system shutdown targets are emphasized.

SEE ALSO
       systemd(1), boot(7), systemd.special(7), systemd.target(5), dracut(8)

systemd 237                                                                              BOOTUP(7)
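To relate the charts above to a running system, the target graph and the default boot goal can be inspected and changed with systemctl:

    systemctl get-default                      # typically graphical.target or multi-user.target
    systemctl list-dependencies basic.target   # show which units a target pulls in
    systemctl set-default multi-user.target    # re-symlink default.target for future boots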