Full Discussion: HP9000 Container - NFS Issue
Operating Systems > HP-UX | Post 302661785 by dardan on Monday 25th of June 2012, 06:51:47 PM
I think you should first correct the permissions on HP-UX for /var/adm/crash.
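For reference, a minimal sketch of checking and correcting those permissions on HP-UX; the ownership and mode shown are common defaults, not values taken from this thread:

# ls -ld /var/adm/crash              # inspect current owner, group and mode
# chown root:root /var/adm/crash     # savecrash runs as root (ownership assumed)
# chmod 0755 /var/adm/crash          # typical default mode; adjust to site policy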
 

10 More Discussions You Might Find Interesting

1. Red Hat

NFS issue

Hello, I am running RHEL4 AS. /net/hostname is not always accessible. To solve the problem, I restart nfs, nfslock and portmap. In /var/log/messages, I keep receiving: nfs_statfs: statfs error = 116 1] What does the above error mean? 2] Any idea why portmap dies? Thanks. (1 Reply)
Discussion started by: melanie_pfefer
1 Reply
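For context, errno 116 on Linux is ESTALE, a stale NFS file handle. A minimal diagnostic sketch for a RHEL4-era host (the automount path is the one from the post):

# rpcinfo -p localhost        # confirm portmap is registered and answering
# service portmap status      # RHEL4 init-script health check
# umount -l /net/hostname     # lazily detach the stale automount entry
# service autofs reload       # let the automounter remount it fresh (assumes autofs)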

2. Solaris

Swap issue on Container

Dear all, kindly advise on a swap issue on my container running SunOS 5.10. We have allocated 20 GB of swap and 8 GB of memory, and have some applications installed on the container. A few days ago we experienced very slow system response... even for commands like ls, df -k, etc. I tried to... (8 Replies)
Discussion started by: sprout009
8 Replies
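A few standard Solaris 10 commands for narrowing down this kind of slowdown inside a zone (a sketch, not taken from the thread):

# swap -s       # summary of reserved, allocated and available swap
# swap -l       # list swap devices and free blocks
# vmstat 5 5    # a nonzero sr (scan rate) column indicates memory pressure
# prstat -Z 5   # per-zone CPU and memory consumers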

3. Red Hat

NFS Access Issue

Hi, I am facing an issue with NFS. I have shared the /data file system on server 192.192.192.1 by adding the line below to /etc/exports: /data 192.192.192.2(rw,no_root_squash,sync) The owner of the /data directory was test (uid 500), and I have mounted it on another server, 192.192.192.2, where the... (3 Replies)
Discussion started by: manoj.solaris
3 Replies
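Since NFSv3 carries only numeric uids over the wire, the usual check is that uid 500 maps to the same account on both hosts; a hedged sketch using the details from the post:

# run on both 192.192.192.1 and 192.192.192.2:
# id test              # should report uid=500 on both hosts
# getent passwd 500    # confirm which account owns uid 500 locally
# on the server, after editing /etc/exports:
# exportfs -ra         # re-publish the export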

4. Solaris

New Member with NFS Issue

Hi guys. Maybe someone can assist here. Solaris 9 with an NFS mount point to a NetApp filer, NFS v3. Changes in the environment: moved from one building to a new one. (All the equipment and network stayed the same.) Errors: nfs: NFS read failed for server netapp: error 11 (RPC: Server... (2 Replies)
Discussion started by: Boesman
2 Replies
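Error 11 here is an RPC-level timeout rather than a permissions problem, which fits a network change; typical first checks on Solaris 9 (server name from the post):

# rpcinfo -p netapp       # does the filer still answer portmap queries?
# nfsstat -m              # show per-mount options (timeo, retrans, proto)
# ping -s netapp 1472 5   # large packets expose MTU/fragmentation trouble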

5. UNIX for Dummies Questions & Answers

NFS issue

Hi, whenever I execute ls -ltr I get the following entry, but there is no such user as 501. What is the issue? # ls -ltr drwxrwxrwx 2 501 501 3896 Aug 19 10:46 File1 # cat /etc/passwd | grep 501 The problem is that this folder is mounted on the client machine using NFS. But after mounting, the client... (4 Replies)
Discussion started by: pinga123
4 Replies
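The bare 501 simply means the client has no passwd entry for that uid; a minimal client-side sketch (the username is hypothetical):

# getent passwd 501        # empty output confirms the uid is unknown here
# useradd -u 501 appuser   # create a matching account; the name is an assumption
# ls -ltr                  # the listing should now show appuser instead of 501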

6. Solaris

NFS server on Legacy Container?

Hi all, I have a problem as follows. Historically, there was an Ultra 10 workstation running Solaris 8 that used automount to access NFS volumes on a Solaris 8 server. The Ultra 10 was retired and the Solaris 8 server has been migrated to a Legacy Container (Solaris 8 Branded, whole root,... (9 Replies)
Discussion started by: hicksd8
9 Replies
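One likely culprit worth noting: running an NFS server inside a non-global zone (branded or native) was not supported on Solaris 10, so the share may need to live in the global zone. For reference, the legacy Solaris 8 share syntax (the path is assumed):

share -F nfs -o rw /export/home    # example line in /etc/dfs/dfstab; path assumed
# shareall                         # (re)share everything listed in dfstab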

7. Red Hat

NFS Performance Issue

Hi, I have a dilemma. I have two servers: the first creates application logs and the second is an empty server. I want to export the directory that contains the logs (read-only) from the first server to the second over NFS. Now, the question is: if on the second server I'm doing a lot of... (3 Replies)
Discussion started by: moshesa
3 Replies
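For reference, a read-only export of the kind described might look like this (the directory and client address are placeholders, not from the thread):

/var/log/app  192.168.1.2(ro,sync)      # line in /etc/exports on the log server
# exportfs -ra                          # publish the change
# showmount -e localhost                # verify the export list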

8. Red Hat

NFS mount issue

Hi friends, my source server is HP-UX and my destination is Linux. I have to mount a directory via NFS from the source onto the destination. I did the same thing on eight servers and it works fine, but on the ninth server I am unable to mount. I have done everything for the NFS configuration; I can even ping... (1 Reply)
Discussion started by: Mohamed Thamim
1 Reply
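Typical first checks when a single client out of nine cannot mount (the server name and path are placeholders):

# showmount -e hpserver                        # can this client read the export list?
# rpcinfo -p hpserver                          # are mountd and nfsd registered?
# mount -t nfs -o vers=3 hpserver:/dir /mnt    # force NFSv3 against the HP-UX server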

9. Red Hat

NFS mounting issue

The server IP is 10.2.2.24. I have installed the nfs-utils package and edited /etc/exports, adding the following line: /home 10.2.2.0/24(rw,sync,no_root_squash,no_all_squash) I saved the file and started the nfs service, then tried to mount the NFS share from the client machine using... (5 Replies)
Discussion started by: ainstin
5 Replies
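After editing /etc/exports the export still has to be published and verified before the client mount can succeed; a minimal sketch using the addresses from the post:

# on the server 10.2.2.24:
# exportfs -ra              # re-read /etc/exports
# showmount -e localhost    # confirm /home appears in the export list
# on the client:
# mount -t nfs 10.2.2.24:/home /mnt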

10. Red Hat

NFS mount issue

We are facing a weird NFS mount issue on one of the Linux hosts: an NFS volume of 2.4 TB is mounted on the host, but df only reports 131 GB, which exactly matches the root filesystem size. NFS mount: filer_filer1:/vol/bug_test/q0 131G 116G 8.5G 94% /nas/bug_test root... (2 Replies)
Discussion started by: skamal4u
2 Replies
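Two things worth separating here: df on an NFS path reports whatever the server returns, and on a NetApp qtree (the /q0 in the mount path) a tree quota can cap that figure, so the match with the root filesystem size may be coincidence. A quick sketch, with the filer command an assumption (ONTAP 7-mode syntax):

# mount | grep /nas/bug_test     # confirm the NFS mount is actually attached
# stat -f /nas/bug_test          # Type should report nfs, not a local fs
# and on the filer (ONTAP 7-mode, assumed):
#   quota report                 # check for a tree quota on qtree q0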
vzmigrate(8)							    Containers							      vzmigrate(8)

NAME
       vzmigrate - migrate a container between two OpenVZ servers

SYNOPSIS
       vzmigrate [-r|--remove-area yes|no] [--ssh=ssh_options] [--rsync=rsync_options] [--keep-dst] [--online] [-v] destination_address CTID

DESCRIPTION
       This utility is used to migrate a container from one (source) Hardware Node (HN) to another (destination) HN. The utility can migrate either a stopped or a running container. For a stopped container, a simple CT private area transfer is performed (rsync(1) is used for file transfer). For running containers, migration may be offline (default) or online. This program uses ssh as a transport layer. You will need to put an ssh public key on the destination node and be able to connect to the node without entering a password.

OPTIONS
       -r, --remove-area yes | no
              Whether to remove the container area on the source HN for a successfully migrated container. Default is yes.

       --ssh=options
              Additional options that will be passed to ssh while establishing the connection to the destination HN.

       --rsync=options
              Additional options that will be passed to rsync(8). You may add options like -z to enable data compression if you are migrating over a slow link.

       --keep-dst
              Do not clean the synced destination container private area in case of an error. It makes sense to use this option for big container migrations, to avoid syncing the container private area again if an error (on container stop, for example) occurs during the first migration attempt.

       --online
              Perform online (zero-downtime) migration: during the migration the container hangs for a while, and after the migration it continues working as though nothing has happened.

       -v     Verbose mode. Causes vzmigrate to print debugging messages about its progress. Multiple -v options increase the verbosity. The maximum is 3.

EXAMPLES
       Migration of CT 101 to 192.168.1.130 with downtime:
              vzmigrate 192.168.1.130 101

       Online migration of CT 102 to 192.168.1.130:
              vzmigrate --online 192.168.1.130 102

EXIT STATUS
       0   EXIT_OK          Command completed successfully.
       1   EXIT_USAGE       Bad command line options.
       2   EXIT_VE_STOPPED  Container is stopped.
       4   EXIT_CONNECT     Can't connect to destination (source) HN.
       6   EXIT_COPY        Container private area copying/moving failed.
       7   EXIT_VE_START    Can't start or restore destination CT.
       8   EXIT_VE_STOP     Can't stop or checkpoint source CT.
       9   EXIT_EXISTS      Container already exists on destination HN.
       10  EXIT_NOTEXIST    Container does not exist on source HN.
       12  EXIT_IP_INUSE    You attempt to migrate a CT whose IP address(es) are already in use on the destination node.
       13  EXIT_QUOTA       Operation with CT quota failed.

SEE ALSO
       rsync(1).

COPYRIGHT
       Copyright (C) 2001-2010, Parallels, Inc. Licensed under GNU GPL.

OpenVZ                            28 Jun 2011                      vzmigrate(8)
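The documented exit codes above make vzmigrate easy to script around; a minimal sketch (the address and CTID are the man page's own examples):

#!/bin/sh
# run an online migration and translate the documented exit codes
vzmigrate --online 192.168.1.130 102
rc=$?
case $rc in
    0)  echo "migration completed" ;;
    9)  echo "container already exists on destination HN" ;;
    12) echo "container IP address(es) already in use on destination" ;;
    *)  echo "vzmigrate failed with exit status $rc" >&2 ;;
esac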
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.