Solution: The issue was resolved by using the slibclean command. According to the manual,
Quote:
The slibclean command unloads all object files with load and use counts of 0. It can also be used to remove object files that are no longer used from both the shared library region and the shared library and kernel text regions.
The error occurs because an instance of /xlt/thefile
is still running; you should be able to see it with ps aux. Since the
operating system uses demand paging, not all of a program's pages
are necessarily mapped into memory when it starts. This means that
the process may request more pages of that program to be mapped into
memory at any time, so the operating system won't let you remove
the file while it is in use.
You may also test the tar file by issuing:
and see whether it works or produces the same error. You might also compare its output to files already on your system. If you tarred the files with absolute paths, they will be untarred to those absolute paths as well, not into whatever directory happens to be your $PWD.
TIP: if you want the output paginated, pipe it to either "more" or "pg".
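A minimal way to exercise an archive without extracting anything is to list it; the file names below are invented for the sketch:

```shell
# Build a tiny archive, then list its contents with -t.
# A read error while listing usually means the archive itself is damaged.
mkdir -p demo && echo hi > demo/a.txt
tar -cf demo.tar demo
tar -tvf demo.tar
```

On a real backup, `tar -tvf /path/to/backup.tar` alone is enough: it reads every header in the archive without writing any files to disk.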
I'm pulling a 1 MB file from tape using tar. It's a 300 GB DLT tape, and it does have a lot of files on it because it's got the entire OS, Oracle RMAN files, and 3000 table exports, but it's taking 2-3 hours to pull this one file off of it. Is this the type of performance I should expect?
The... (0 Replies)
I have this tar file which contains .ksh, .ini, and .sql files along with their hard and soft links.
Later when the original files and their directories are deleted (or rather lost as in a system crash), I have this tar file as the only source to restore all of them.
In such a case when I do,
tar... (4 Replies)
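As a quick sanity check (paths invented for the sketch): tar records both kinds of link and recreates them on extraction, though a hard link is only restored as a link if its target is in the same archive:

```shell
mkdir -p src
echo data > src/file.ksh
ln -s file.ksh src/soft.ksh     # symbolic link, stored as a link entry
ln src/file.ksh src/hard.ksh    # hard link, stored as a link to file.ksh
tar -cf links.tar src
rm -rf src                      # simulate the crash / lost originals
tar -xf links.tar               # both links come back on extraction
```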
Hi, is there a way to use compression with the tar command on AIX 4.2?
I did a "man tar" but saw no mention of compression, nor of how to find out the tar version.
I want to look into ways of reducing the amount of time to do backups. One backup is dumping database... (9 Replies)
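AIX's native tar has no built-in compression flag, but you can write the archive to stdout and compress the stream with compress or gzip (if installed). A sketch with made-up file names:

```shell
# Sample data standing in for the real backup source
mkdir -p dbdump && echo dump > dbdump/t1.sql
# Native tar: write to stdout (-f -) and compress the stream
tar -cvf - dbdump | gzip -c > backup.tar.gz
# Restore is the reverse pipeline
gunzip -c backup.tar.gz | tar -xvf -
```

With compress instead of gzip the output is conventionally named `.tar.Z` and restored with `uncompress -c`.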
Not sure if this is really in the right forum but here goes....
Looking for a way to extract individual compressed files from a compressed tarball WITHOUT tar -zxvf and then recompressing. Basically we need to be able to chunk out an individual compressed file while it still remains... (6 Replies)
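You can't pull one member out of a .tar.gz without decompressing the stream (gzip has no random access), but you can stream-extract a single member without unpacking or recompressing everything else. A sketch with invented names, assuming GNU tar for the -O (to-stdout) flag:

```shell
# Build a sample compressed tarball
mkdir -p data && echo a > data/keep.txt && echo b > data/other.txt
tar -czf bundle.tar.gz data
# Stream the archive and extract only the one member to disk...
gunzip -c bundle.tar.gz | tar -xf - data/keep.txt
# ...or write just that member's contents to a chosen file via stdout
gunzip -c bundle.tar.gz | tar -xOf - data/keep.txt > keep.copy
```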
I need to restore everything in a certain directory and below. I have a tgz archive of all of the files, and I need to restore everything in /user/home/xxxx/ and below. This is a user's home directory. This is a dumb question and I know when I see the answer I am going to say DUH, but I am... (2 Replies)
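Assuming the archive was created with paths relative to /, one common pattern (the path comes from the question; the demo below recreates it locally) is to cd to the root the archive is relative to and name the subtree you want:

```shell
# Demo: archive a "home" tree with relative paths, then restore just it
mkdir -p user/home/xxxx && echo rc > user/home/xxxx/.profile
tar -czf home.tgz user/home/xxxx
rm -rf user
# Extract only that subtree; for a real system backup you would first
# cd to / (or wherever the archive's paths are relative to)
tar -xzf home.tgz user/home/xxxx
```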
I'm working on a project that requires me to compress and then relocate directories to a different location based on their last date of modification. After running the script I check to see if it worked, and upon unzipping the tar.gz I created, everything that should be there is. I then performed... (4 Replies)
Hi
I want to tar a directory. I have tried a few commands but it is not working.
Please let me know how to tar and untar a directory.
Below is the error I am getting:
tar -zxvf tl11cp01_042414_071123.tar.gz
tar: Not a recognized flag: z
tar -zxvf... (3 Replies)
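The stock AIX tar doesn't understand -z (hence "Not a recognized flag: z"); decompress with gunzip and pipe the result into tar instead. Demonstrated here on a throwaway archive standing in for the real tl11cp01_042414_071123.tar.gz:

```shell
# Make a sample .tar.gz to stand in for the real file
mkdir -p payload && echo ok > payload/f
tar -cf - payload | gzip -c > sample.tar.gz
rm -rf payload
# The portable way to untar it where tar lacks -z:
gunzip -c sample.tar.gz | tar -xvf -
```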
Hi,
We have a requirement to extract *.arj (archive) files on the IBM AIX platform.
Can anyone guide me on how to extract these files using ARJ or any other unzip technique?
Regards,
Deepak. (3 Replies)
Hello,
Getting a very strange error. I made the tar/gzip with GNU tar:
GNU tar (the tar and gzip completed without any errors):
/opt/freeware/bin/tar cvf - /oraapp| gzip > /backup/bkp_15_6_16_oraapp.tgz
GNU unTar error
root@test8:/>gunzip < /config1/bkp_15_6_16_oraapp.tgz |... (5 Replies)
The bash below will untar each tar.bz2 archive in the directory, then remove the tar.bz2.
Each of the tar.bz2 archives ranges from 40-75 GB and currently takes ~2 hours to extract. Is there a way to speed up the extraction process?
I am using a Xeon processor with 12 cores. Thank you :).
... (7 Replies)
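bzip2 decompression is single-threaded, so most of those 2 hours run on one core; the usual fix is a parallel drop-in replacement such as pbzip2 or lbzip2, if one is installed. The sketch below uses plain bzip2 on a tiny stand-in archive; on a 12-core box you would substitute the parallel tool:

```shell
# Build a small tar.bz2 standing in for the 40-75 GB archives
mkdir -p big && echo x > big/f
tar -cjf big.tar.bz2 big && rm -rf big
# Decompress and untar as one pipeline; swap "bzip2 -dc" for
# "pbzip2 -dc" or "lbzip2 -dc" to use all cores during decompression
bzip2 -dc big.tar.bz2 | tar -xf -
```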
Discussion started by: cmccabe