Operating Systems > Solaris > Sharing a physical disk with an LDOM
Post 303042253 by hicksd8 on Thursday 19th of December 2019 04:16:14 PM
One thing is for sure: only one of the nodes (Solaris 11 Global or Solaris 10 LDOM) can have control of the volume. In any situation, having two operating systems writing to a volume simultaneously is a recipe for instant filesystem corruption. One operating system must control file opening, locking, etc. Even in a cluster scenario using dual-tailed storage, a major function of the cluster suite is to control which node has exclusive control of the volume and to effect a disciplined failover when necessary.

Therefore, as with any two nodes, one option is to mount the volume on one node, configure an NFS share on that node, and mount that share from the second node as an NFS client. The first node then controls ALL activity on the volume.
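Something along these lines should do it (the dataset name dpool/shared, the share path /export/shared, and the hostnames global11 and ldom10 are only placeholders for illustration; adjust to suit your environment):

On the Solaris 11 Global (the node that owns the disk and its ZFS pool):

  # zfs create -o mountpoint=/export/shared dpool/shared   <- only if the dataset does not exist yet
  # svcadm enable -r svc:/network/nfs/server:default
  # zfs set sharenfs=on dpool/shared

On the Solaris 10 LDOM (NFS client only):

  # mkdir -p /export/shared
  # mount -F nfs global11:/export/shared /export/shared

To make the client mount persistent, add a line like this to /etc/vfstab on the LDOM:

  global11:/export/shared  -  /export/shared  nfs  -  yes  rw,bg,hard

That way the Solaris 11 Global owns the filesystem and handles all file opening and locking; the LDOM only ever touches the data through the NFS protocol.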
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

physical volume and physical disk.

Hello, I need an explanation of physical disks and physical volumes. What is the difference between these two things? In fact, I am trying to understand what the AIX lspv command does. Thank you in advance. (2 Replies)
Discussion started by: VeroL
2 Replies

2. HP-UX

determine the physical size of the hard disk

Hi, is there a command in HP-UX 11 to determine the physical size of the hard disk (not the bdf command)? I have searched the other threads here but can't find an answer. Thank you, guys. (4 Replies)
Discussion started by: hoffies
4 Replies

3. Solaris

List all resources on physical host LDOM server

Hello, I have a SUN T5240 running Solaris 10 with Logical Domain Manager (v 1.0.3). You can use the "ldm" command to display current resources on the box. Is there a way to display all the "physical resources" on the box (i.e., used and unused)? For example, "ldm ls" will tell me what the... (5 Replies)
Discussion started by: stephanpitts
5 Replies

4. Solaris

Sharing a local disk between two Solaris machines

Hi, I recently added a disk on a Solaris 9 machine and I wanted to make it accessible to another machine under the same name. Here is what I did: on the machine holding the internal disk, I added this line to vfstab: /dev/dsk/c1t1d0s4 /dev/rdsk/c1t1d0s4 /SHARED2 ufs 2 yes ... (2 Replies)
Discussion started by: zionassedo
2 Replies

5. Solaris

Installing Solaris OS on LDOM SAN Disk

I have viewed a few previous posts regarding this, but none of them quite described or worked for my issue. I am out of local disk space on my LDOM manager but still have plenty of SAN, vCPU and memory available, so I am trying to install a new LDOM OS on SAN. I have exposed the SAN to the... (0 Replies)
Discussion started by: MobileGSP
0 Replies

6. Red Hat

Sharing a SAN disk with multiple servers

Hi, I have a requirement to share a SAN disk between two RHEL servers. I am planning to discover the same disk on the two RHEL nodes and mount it. Is that a feasible solution, and what kind of issues might we encounter mounting the same disk on two OSs in parallel? (2 Replies)
Discussion started by: nanduri
2 Replies

7. Solaris

Disk expansion on LDOM Guest

Hi, there is an LDOM guest on which I need to expand the /u02 file system. It resides on a Solaris 11 hypervisor (primary domain). The storage has been expanded on the vdisk presented to the hypervisor. I need the steps to expand /u02 on the LDOM guest. (2 Replies)
Discussion started by: vidya_sagar2003
2 Replies

8. Solaris

Exporting physical disk to ldom or ZFS volume

Generally, this is what we do: on the primary, export 2 LUNs (add-vdsdev); on the primary, assign these disks to the ldom in question (add-vdisk); on the ldom, create a mirrored zpool from these two disks. On one server (which is older) we have: on the primary, create a mirrored zpool from the two LUNs.... (4 Replies)
Discussion started by: psychocandy
4 Replies

9. Solaris

Disk alignment inside of an LDOM

Hi! Quick background for the question... I have a Solaris 11.4 control/primary domain with some LDOMs on top of it. I have some raw iSCSI LUNs presented to the control/primary domain from a NetApp, which I then pass up to the LDOMs via the VDS/vdisk. So basically the LDOM VMs see the disk as... (1 Reply)
Discussion started by: rtmg
1 Replies
pstat_getlocality(2)						System Calls Manual					      pstat_getlocality(2)

NAME
pstat_getlocality(), pstat_getproclocality() - return system-wide or per-process information of a ccNUMA system

SYNOPSIS
DESCRIPTION
pstat_getlocality() and pstat_getproclocality() are part of the general functionality provided to obtain information about various system contexts. These calls return information on different parts of a Cache Coherent Non-Uniform Memory Architecture (ccNUMA) system. pstat_getlocality() returns system-wide information, while pstat_getproclocality() returns per-process information.

A locality is one "building block" of a ccNUMA system. If a machine has only one locality, it is considered to be a UMA (Uniform Memory Architecture) machine. UMA is also a synonym for Symmetric Multiprocessor (SMP). These locality building blocks are nearly identical to the concept of the locality domain (or LDOM) as described in the mpctl(2) manual page. From that manual page:

     A locality domain consists of a related collection of processors, memory, and peripheral resources that comprise a fundamental building block of the system. All processors and peripheral devices in a given locality domain have equal latency to the memory contained within that locality domain.

There is only one difference between a locality and an LDOM, and that is the concept of interleaved memory. Interleaved memory is a hardware-constructed region of physical memory that is created from the memory of several locality domains. This memory is striped together with a very fine granularity.

As an example, consider a system with four locality domains 0, 1, 2, and 3, all contributing the same amount of memory to the interleave. The interleaved memory may look like this (assuming a 64-byte striping):

     Memory Address     Comes From
     --------------     ----------
     0 - 63 (bytes)     LDOM 0
     64 - 127           LDOM 1
     128 - 191          LDOM 2
     192 - 255          LDOM 3
     256 - 319          LDOM 0
     etc., etc.

Interleaved memory is a good place to put shared objects, the kernel, and objects that could be accessed from any part of the system. There will be at most one interleaved locality. Some systems may not have interleaved memory.

Given the four-LDOM example above, these calls would return five localities - one for each LDOM, and one for interleaved memory. The reason that mpctl(2) does not count interleaved memory as an LDOM is because mpctl(2) is used for scheduling purposes, and interleaved memory contains no processors.

Function Descriptions

pstat_getlocality()
     Returns system-wide information specific to each locality. There is one instance of this context for each locality on the system.

     For each locality requested, data, up to a maximum of elemsize bytes, are returned in the struct pst_locality pointed to by buf. The elemcount parameter specifies the number of structures that are available at buf to be filled in. The index parameter specifies the starting index within the context of localities.

     The types and field members of the struct pst_locality are as follows:

     pst_locality_flags_t psl_flags
               Contains information about the given locality. See the description of pst_locality_flags_t below for details.

     int64_t psl_ldom_id
               The LDOM id used by mpctl(2) to identify this locality. For the interleaved locality, this field will be -1.

     int64_t psl_physical_id
               A hardware-based number that ties the locality to some recognizable physically indexable entity. An example of this is a cell id number.

     uint64_t psl_total_pages
               The total number of physical pages in this locality.

     uint64_t psl_free_pages
               The number of free physical pages in this locality at this moment.

     uint64_t psl_cpus
               The number of enabled cpus in this locality, irrespective of any processor sets that may be in effect for those cpus.

     psl_flags is a bitfield described by the enumerated type pst_locality_flags_t. This field describes some of the properties of the locality.
     Valid values for pst_locality_flags_t are the following:

     -    This locality is the interleaved locality.

     -    This locality is not an interleaved locality. It will map to exactly one locality domain returned by the mpctl(2) system call. These first two flags are mutually exclusive.

     -    This locality does not contribute any physical memory to the interleave. This flag can only be set if the not-interleaved flag is also set.

     On a UMA system, there will be one locality, with the corresponding flag set in psl_flags.

pstat_getproclocality()
     Returns information specific to a particular process' locality behavior. There is one instance of this context for each locality for each process on the system.

     For each instance requested, data, up to a maximum of elemsize bytes, are returned in the struct pst_proc_locality pointed to by buf. At most one instance (locality) is returned for each call to pstat_getproclocality().

     The pid parameter specifies the process id of the process for which locality information is to be returned. A pid of zero indicates that locality information for the currently executing process should be returned. The index parameter specifies the starting index within the context of localities.

     The types and field members of the struct pst_proc_locality are as follows:

     int64_t ppl_ldom_id
               The LDOM id used by mpctl(2) to identify this locality. For the interleaved locality, this field will be -1.

     uint64_t ppl_rss_total
               The total number of resident pages for this process in this locality.

     uint64_t ppl_rss_shared
               The number of shared resident pages for this process in this locality.

     uint64_t ppl_rss_private
               The number of private resident pages for this process in this locality.

     uint64_t ppl_rss_weighted
               The number of resident pages for this process in this locality, weighted by the number of processes sharing each page. Private pages count as one page, and shared pages count as the page divided by the number of processes sharing that page.

Notes
     These functions only return the wide (64-bit) versions of their associated structures. In order for narrow (32-bit) applications to use these interfaces, the _PSTAT64 flag must be used at compile time. These interfaces are available for narrow applications written in standard C and extended ANSI, and for all wide applications.

RETURN VALUE
pstat_getlocality() and pstat_getproclocality() return the following values:

     n    Successful completion. n is the number of instances returned in buf.

     -1   Failure. errno is set to indicate the error.

ERRORS
Upon failure, errno is set to one of the following values.

     [EFAULT]   buf points to an invalid address.

     [EINVAL]   elemsize is less than or equal to zero, or elemsize is larger than the size of the associated data structure.

     [EINVAL]   index is negative.

     [ESRCH]    For pstat_getproclocality(), the requested pid could not be found.

EXAMPLES
/*
 * This program returns system-wide and per-process memory
 * locality information.  To compile the 32-bit version,
 * use -D_PSTAT64.  The 64-bit version does not need any
 * special compiler flags.
 */
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>        /* for exit() and atoi() */
#include <string.h>        /* for strncmp() */
#include <sys/param.h>
#include <sys/pstat.h>
#include <sys/errno.h>

#define BURST ((size_t)3)
#define STRSZ 80

unsigned long pgsize;

void pid_locinfo ( pid_t pid );
void sys_locinfo ( void );
void pages_to_str ( uint64_t pages, char *str );

void
usage ( int argc, char **argv )
{
    fprintf ( stderr, "Usage: %s [-p pid]\n\n", argv[0] );
    fprintf ( stderr, "This program prints out per locality " );
    fprintf ( stderr, "memory usage.  If 'pid' is supplied, " );
    fprintf ( stderr, "information on that process is " );
    fprintf ( stderr, "returned in addition to system-wide " );
    fprintf ( stderr, "information.\n" );
    exit(1);
}

/*
 * Verify arguments, call sys_locinfo(), and call pid_locinfo()
 * if desired.
 */
int
main ( int argc, char **argv )
{
    pid_t pid = (pid_t) 0;

    if ( (argc == 2) || (argc > 3) ||
         ((argc == 3) && (strncmp(argv[1], "-p", 2))) ) {
        usage(argc, argv);
    }

    if ( argc == 3 ) {
        pid = atoi(argv[2]);
        if (pid < 0) {   /* note that pid 0 is "this process" */
            usage(argc, argv);
        }
    }

    /* Get the size of a page for later calculations */
    pgsize = sysconf ( _SC_PAGE_SIZE );

    sys_locinfo();

    if ( argc == 3 ) {
        pid_locinfo ( pid );
    }
    return 0;
}

/*
 * Display the system-wide memory usage per locality.
 */
void
sys_locinfo ( void )
{
    int i;          /* index within pstl[] */
    int count;      /* the actual number of pstl structures */
    int idx = 0;    /* index within the context of localities */
    struct pst_locality pstl[BURST];
    char total_str[STRSZ], free_str[STRSZ], used_str[STRSZ];
    uint64_t total=0, free=0;

    printf ( "\n --- System wide locality info: ---\n\n" );
    printf ( "%6s%6s%7s%6s%10s%10s%10s\n",
             "index", "ldom", "physid", "type",
             "total", "free", "used" );

    /* Get a maximum of BURST pst_locality structures */
    count = pstat_getlocality ( pstl, sizeof(struct pst_locality),
                                BURST, idx );
    while ( count > 0 ) {
        for ( i=0 ; i<count ; i++ ) {
            /* Keep running totals for later */
            total += pstl[i].psl_total_pages;
            free += pstl[i].psl_free_pages;

            /* Convert integers into strings */
            pages_to_str ( pstl[i].psl_total_pages, total_str );
            pages_to_str ( pstl[i].psl_free_pages, free_str );
            pages_to_str ( (pstl[i].psl_total_pages -
                            pstl[i].psl_free_pages), used_str );

            printf ( "%6d%6lld%7lld%6s%10s%10s%10s\n", (idx+i),
                     pstl[i].psl_ldom_id, pstl[i].psl_physical_id,
                     ((pstl[i].psl_flags & PSL_INTERLEAVED) ?
                      "ILV":"CLM"),
                     total_str, free_str, used_str );
        }
        idx += count;

        /*
         * Get (at most) the next BURST pst_locality
         * structures, starting at idx
         */
        count = pstat_getlocality ( pstl, sizeof(struct pst_locality),
                                    BURST, idx );
    }

    if ( count < 0 ) {
        perror ( "pstat_getlocality" );
        exit(1);
    }

    if ( idx == 1 ) {  /* Don't print totals if there's one locality */
        printf ( "\n" );
        return;
    }

    /* Convert integer totals into strings */
    pages_to_str ( total, total_str );
    pages_to_str ( free, free_str );
    pages_to_str ( total-free, used_str );

    /* Print totals */
    printf ( "%6s%6s%7s%6s%10s%10s%10s\n", "", "", "", "",
             "-----", "-----", "-----" );
    printf ( "%6s%6s%7s%6s%10s%10s%10s\n", "", "", "", "",
             total_str, free_str, used_str );
}

/*
 * Given a pid, display its per-locality physical memory usage.
 */
void
pid_locinfo ( pid_t pid )
{
    int count, i=0;
    struct pst_proc_locality ppl;
    char total_str[STRSZ], shared_str[STRSZ];
    char private_str[STRSZ], weighted_str[STRSZ];
    uint64_t total=0, shared=0, private=0, weighted=0;

    /*
     * With this interface, information on only one locality
     * can be returned at a time.  This will get the first:
     */
    count = pstat_getproclocality ( &ppl,
                sizeof(struct pst_proc_locality), pid, i );

    printf ( "\n --- Per-process locality info for pid %d: ---\n\n", pid );
    printf ( "%6s%10s%10s%10s%10s\n",
             "idx", "total", "shared", "private", "weighted" );

    while ( count == 1 ) {
        total += ppl.ppl_rss_total;
        shared += ppl.ppl_rss_shared;
        private += ppl.ppl_rss_private;
        weighted += ppl.ppl_rss_weighted;

        pages_to_str ( ppl.ppl_rss_total, total_str );
        pages_to_str ( ppl.ppl_rss_shared, shared_str );
        pages_to_str ( ppl.ppl_rss_private, private_str );
        pages_to_str ( ppl.ppl_rss_weighted, weighted_str );

        printf ( "%6d%10s%10s%10s%10s\n", i, total_str, shared_str,
                 private_str, weighted_str );
        i++;
        count = pstat_getproclocality ( &ppl,
                    sizeof(struct pst_proc_locality), pid, i );
    }

    if ( count < 0 ) {
        if ( errno == ESRCH ) {
            fprintf ( stderr, "Process %d not found\n", pid );
            exit(1);
        }
        perror ( "pstat_getproclocality" );
        exit(1);
    }

    if ( i == 1 ) {  /* Don't print totals if there's one locality */
        printf ( "\n" );
        return;
    }

    pages_to_str ( total, total_str );
    pages_to_str ( shared, shared_str );
    pages_to_str ( private, private_str );
    pages_to_str ( weighted, weighted_str );

    printf ( "%6s%10s%10s%10s%10s\n", "",
             "-----", "-----", "-----", "-----" );
    printf ( "%6s%10s%10s%10s%10s\n", "",
             total_str, shared_str, private_str, weighted_str );
}

/*
 * Given a quantity of memory in pages, fill str with a
 * human-readable string representing that amount.
 */
void
pages_to_str ( uint64_t pages, char *str )
{
    uint64_t kpg = pages*(pgsize/1024L);
    uint64_t mpg = kpg/1024L;
    uint64_t gpg = mpg/1024L;

    if ( gpg > 10 ) {
        sprintf ( str, "%lluG", gpg );
    } else if ( mpg > 10 ) {
        sprintf ( str, "%lluM", mpg );
    } else if ( kpg > 1 ) {
        sprintf ( str, "%lluK", kpg );
    } else {
        sprintf ( str, "%llu", pages );
    }
}

AUTHOR
The routines were developed by Hewlett-Packard Company.

SEE ALSO
pstat(2), mpctl(2).

                                                          pstat_getlocality(2)