Full Discussion: Patching on ZFS file-system
Operating Systems Solaris Patching on ZFS file-system Post 302564069 by solaris_1977 on Wednesday 12th of October 2011 06:31:01 PM
I know we can create a BE in Solaris 10 as well, but I am not sure about the commands/steps. Still searching the net for them.
Vishal, do you have any idea how to go ahead with patching?
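For Solaris 10 on a ZFS root, Live Upgrade is the usual way to create and patch an alternate BE. A rough command sketch (the BE name and patch directory are placeholders, not from this thread):

```shell
# Create a new boot environment (a ZFS clone of the current root).
lucreate -n patchBE

# Apply patches to the inactive BE (assumes unpacked patches in /var/tmp/patches).
luupgrade -t -n patchBE -s /var/tmp/patches

# Activate the patched BE and reboot into it.
luactivate patchBE
init 6

# If the new BE misbehaves, boot the old BE again and remove the bad
# one with ludelete.
```

The point of the clone is that patching never touches the running root, so rollback is just booting the previous BE.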
 

8 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

zfs file system

Hi, I am trying to add a new file system: #zfs create dsk1/mqm It came back with: cannot create 'dsk1/mqm': no such pool 'dsk1' What do I have to do? Kind regards Mehrdad (2 Replies)
Discussion started by: mehrdad68
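For reference, `zfs create` needs an existing pool; the pool itself is made first with `zpool create`. A minimal sketch (the device name c1t0d0 is a placeholder):

```shell
# Create the pool first; 'dsk1' must exist before datasets can be made in it.
zpool create dsk1 c1t0d0

# Now the dataset can be created; it is mounted at /dsk1/mqm by default.
zfs create dsk1/mqm
```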

2. Shell Programming and Scripting

ZFS file system - memory monitoring

I am working on a server where root is on a ZFS filesystem. Now when I run top it says only 750M free, but when I count the actual memory utilized it comes to only 12 GB, and the total size of the server is 32 GB. I think the rest of the space is held by the ZFS file system. Is there a... (5 Replies)
Discussion started by: prasperl

3. Solaris

increase SWAP on ZFS file system

Hi All, I am using these commands to dynamically increase ZFS swap space on Solaris. My questions are: 1. After I run these commands, is the change permanent, or will it be removed after a restart? 2. How do I make it permanent? # swap -l swapfile dev swaplo blocks free /dev/zvol/dsk/rpool/swap... (4 Replies)
Discussion started by: osmanux
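On a ZFS root, swap is a zvol, so resizing it is a matter of growing the volume and re-adding it. A sketch (the 4G size is a placeholder):

```shell
# Delete the swap device, grow the backing zvol, and add it back.
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=4G rpool/swap
swap -a /dev/zvol/dsk/rpool/swap

# volsize is a dataset property stored in the pool, so the new size
# survives a reboot; make sure /etc/vfstab has a line for the zvol so
# it is re-added at boot:
#   /dev/zvol/dsk/rpool/swap  -  -  swap  -  no  -
```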

4. Emergency UNIX and Linux Support

Not able to extend ZFS file system

Hi All, I have Solaris-10 configured with two non-global zones. All file-systems are mounted on the global zone, and data file-systems are mounted on the non-global zones as lofs. I have added 4 LUNs of 100 GB each and am still not able to extend a file-system. This is a production server, so I can not... (5 Replies)
Discussion started by: solaris_1977

5. Solaris

How to take backup of ZFS file system on a tape drive?

Hi Guys, I want to take a backup of a ZFS file system on a tape drive. Can anybody help me with this? Thanks, Pras (0 Replies)
Discussion started by: prashant2507198

6. Solaris

Not able to increase ZFS file system on NGZ

I have a Solaris-10 server running ZFS file-systems. ctdp04_vs03-pttmsp01 is one of the non-global zones. I wanted to increase the /ttms/prod file-system of the zone, which is actually /zone/ctdp04_vs03-pttmsp01/ttms/prod on the global server. I have added a new disk of 9 GB, which is emcpower56a, and now I can... (16 Replies)
Discussion started by: solaris_1977

7. Solaris

How to grow a zfs file system?

Hi, I have the following file system that needs to be expanded by another 500 GB, to a total of 1 TB: df -h /oradata1 Filesystem Size Used Available Capacity Mounted on oradata1 587G 517G 69G 89% /oradata1 I am not familiar with zfs, I am more... (17 Replies)
Discussion started by: fretagi
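Growing a ZFS file system is really a matter of growing its pool; datasets expand with the pool automatically unless a quota is set. A sketch of the two common routes (device names are placeholders):

```shell
# Option 1: add more storage (a mirror pair here) to the pool.
zpool add oradata1 mirror c2t0d0 c2t1d0

# Option 2: if the underlying LUN itself was grown, let the pool
# expand into the new space.
zpool set autoexpand=on oradata1
zpool online -e oradata1 c1t0d0

# Datasets grow with the pool unless capped; check for a quota.
zfs get quota oradata1
```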

8. Solaris

Need help in patching with lu on SVM+ZFS FS with zones

Hello, I need help understanding how lu can work on Solaris-10 on this server. I can detach mirror metadevices of SVM, but the zpool looks confusing; I am not sure which mirror I should break. server-app01 # : |format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0t0d0 <SUN300G cyl... (0 Replies)
Discussion started by: solaris_1977
SD_READAHEAD(3) 						   sd_readahead 						   SD_READAHEAD(3)

NAME
       sd_readahead - Control ongoing disk boot-time read-ahead operations

SYNOPSIS
       #include "sd-readahead.h"

       int sd_readahead(const char *action);

DESCRIPTION
       sd_readahead() may be called by programs involved with early boot-up to control ongoing boot-time disk read-ahead operations. It may
       be used to terminate read-ahead operations in case an uncommon disk access pattern is to be expected, and hence read-ahead replay or
       collection is unlikely to have the desired speed-up effect on the current or future boot-ups.

       The action should be one of the following strings:

       cancel
           Terminates read-ahead data collection, and drops all read-ahead data collected during this boot-up.

       done
           Terminates read-ahead data collection, but keeps all read-ahead data collected during this boot-up around for use during
           subsequent boot-ups.

       noreplay
           Terminates read-ahead replay.

RETURN VALUE
       On failure, these calls return a negative errno-style error code. It is generally recommended to ignore the return value of this
       call.

NOTES
       This function is provided by the reference implementation of APIs for controlling boot-time read-ahead and distributed with the
       systemd package. The algorithm it implements is simple, and can easily be reimplemented in daemons if it is important to support this
       interface without using the reference implementation.

       Internally, this function creates a file in /run/systemd/readahead/ which is then used as a flag file to notify the read-ahead
       subsystem. For details about the algorithm check the liberally licensed reference implementation sources:
       http://cgit.freedesktop.org/systemd/systemd/plain/src/readahead/sd-readahead.c and
       http://cgit.freedesktop.org/systemd/systemd/plain/src/systemd/sd-readahead.h

       sd_readahead() is implemented in the reference implementation's drop-in sd-readahead.c and sd-readahead.h files. It is recommended
       that applications consuming this API copy the implementation into their source tree. For more details about the reference
       implementation see sd-readahead(7).

       If -DDISABLE_SYSTEMD is set during compilation, this function becomes a NOP and always returns 0.

EXAMPLES
       Example 1. Cancelling all read-ahead operations

       During boots where SELinux has to relabel the file system hierarchy, it will create a large amount of disk accesses that are not
       necessary during normal boots. Hence it is a good idea to disable both read-ahead replay and read-ahead collection.

           sd_readahead("cancel");
           sd_readahead("noreplay");

SEE ALSO
       systemd(1), sd-readahead(7), daemon(7)

AUTHOR
       Lennart Poettering <lennart@poettering.net>
           Developer

systemd                                                          10/07/2013                                                 SD_READAHEAD(3)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.