Operating Systems > Linux > Red Hat > Disk is Full but really does not contain huge data
Post 302662203 by binlib, 06-26-2012 at 10:06 AM
Code:
# open-but-deleted files still occupying more than ~1 MB ($7 is lsof's SIZE/OFF column)
lsof | awk '/deleted/ && $7 > 1e6'
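That lists files which have been deleted but are still held open by a running process and still occupy more than about 1 MB; their blocks are not released until the last process closes them, which is why df reports a full disk while du finds nothing large. As a hedged follow-up (not part of the original post): on Linux the space can usually be reclaimed without restarting the process by truncating the deleted file through /proc. The PID (1234) and file descriptor (5) below are placeholders read from lsof's PID and FD columns.

Code:
ls -l /proc/1234/fd/5    # placeholder PID/FD from the lsof output; the link should end in '(deleted)'
: > /proc/1234/fd/5      # truncate the still-open file in place; df usage drops immediately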

 

9 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

disk full

Please help with the following: NOTICE: HTFS: No space on dev hd(1/42) (2 Replies)
Discussion started by: msuheel
2 Replies

2. Shell Programming and Scripting

How to extract data from a huge file?

Hi, I have a huge file of bibliographic records in some standard format. I need a script to do some repeatable tasks as follows: 1. Create folders named after the strings starting with "item_*" in the input file 2. Create a file "contents" in each folder containing "license.txt(tab... (5 Replies)
Discussion started by: srsahu75
5 Replies

3. Linux

Disk full 100%

The / filesystem on one of my servers was 100% full. I cleared some space, and now I have enough space on the / partition, but df still shows disk usage as 100% and I am not able to create even a single txt file. Why is that? (3 Replies)
Discussion started by: bryanabhay
3 Replies

4. AIX

Huge difference in reported Disk usage between ls,df and du

IBM RS6000 F50, AIX 4.3.2. I am having trouble calculating the actual size of a set of directories and reconciling the results with the actual hard disk space used. I have a 33GB disk which is showing 7.8GB used, while a byte count of the files in the directory/sub-dirs I'm interested in is 48GB,... (4 Replies)
Discussion started by: cooperuf
4 Replies
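Not from that thread, but one common cause of a byte count (ls) far exceeding what df and du report is sparse files: ls shows the apparent size, while du counts the blocks actually allocated. A minimal illustration, assuming GNU/Linux tools rather than AIX 4.3.2:

Code:
dd if=/dev/zero of=sparse.img bs=1 count=1 seek=1G 2>/dev/null   # one real byte written at a 1 GiB offset
ls -l sparse.img    # apparent size: roughly 1 GB
du -k sparse.img    # allocated space: only a few KB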

5. Shell Programming and Scripting

Split a huge data into few different files?!

Input file data contents: >seq_1 MSNQSPPQSQRPGHSHSHSHSHAGLASSTSSHSNPSANASYNLNGPRTGGDQRYRASVDA >seq_2 AGAAGRGWGRDVTAAASPNPRNGGGRPASDLLSVGNAGGQASFASPETIDRWFEDLQHYE >seq_3 ATLEEMAAASLDANFKEELSAIEQWFRVLSEAERTAALYSLLQSSTQVQMRFFVTVLQQM ARADPITALLSPANPGQASMEAQMDAKLAAMGLKSPASPAVRQYARQSLSGDTYLSPHSA... (7 Replies)
Discussion started by: patrick87
7 Replies
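A minimal sketch of that kind of split (assuming the records really are FASTA-style, each starting at a '>seq_N' header line; input.fasta is a placeholder filename): write every record into its own file named after its header.

Code:
awk '/^>/ {if (out) close(out); out = substr($1, 2) ".fa"} out {print > out}' input.fasta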

6. UNIX for Advanced & Expert Users

Disk Space full

I was trying to copy a large file under /tmp. I guess the disk space got full and I got a fork error. Then I tried removing some files, but the shell did not let me do anything: bash> rm apache22.tar bash: fork: Not enough space bash> pwd /tmp bash> vmstat 1 bash: fork: Not... (3 Replies)
Discussion started by: mohtashims
3 Replies

7. Shell Programming and Scripting

Aggregation of huge data

Hi Friends, I have a file with sample amount data as follows: -89990.3456 8788798.990000128 55109787.20 -12455558989.90876 I need to exclude the '-' symbol in order to treat all values as absolute, and then I need to sum them up. The record count is around 1 million. How... (8 Replies)
Discussion started by: Ravichander
8 Replies
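A minimal awk sketch of that aggregation, not necessarily the thread's accepted answer (amounts.txt is a placeholder name, and the data is assumed to be one value per line); summing a million floating-point values this way is of course subject to ordinary floating-point rounding.

Code:
awk '{s += ($1 < 0 ? -$1 : $1)} END {printf "%.6f\n", s}' amounts.txt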

8. Solaris

The Fastest for copy huge data

Dear Experts, I would like to know the best method for copying around 3 million files (spread across a hundred folders, each file around 1kb in size) between 2 servers. I have already tried the rsync and tar commands, but they take too long. Please advise. Thanks, Edy (11 Replies)
Discussion started by: edydsuranta
11 Replies
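One commonly suggested approach for millions of tiny files, offered here only as a hedged sketch (host names and paths are placeholders), is to stream a single tar archive over ssh instead of copying file by file, which avoids a per-file round trip:

Code:
tar -cf - -C /data/source . | ssh user@remotehost 'tar -xf - -C /data/target'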

9. Shell Programming and Scripting

Disk full alerts

I want to create a script to monitor one particular filesystem out of the different filesystems. If disk usage of that filesystem goes above 80%, it should send an alert mail to an email id. ... no. I am... (1 Reply)
Discussion started by: rakeshhhhhhhh
1 Reply
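A minimal sketch of such a check, typically run from cron (the mount point, threshold, and address are placeholders, and a mailx-style mail command is assumed to be available):

Code:
#!/bin/sh
# warn when one particular filesystem crosses the threshold
FS=/data                      # placeholder: the filesystem to watch
THRESHOLD=80                  # alert at or above this percentage
MAILTO=admin@example.com      # placeholder address
USED=$(df -P "$FS" | awk 'NR == 2 {sub(/%/, "", $5); print $5}')
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "$FS is ${USED}% full on $(hostname)" | mail -s "Disk space alert: $FS" "$MAILTO"
fi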
scrounge-ntfs(8)                    BSD System Manager's Manual                    scrounge-ntfs(8)

NAME
     scrounge-ntfs -- helps retrieve data from corrupted NTFS partitions

SYNOPSIS
     scrounge-ntfs -l disk
     scrounge-ntfs -s disk
     scrounge-ntfs [-m mftoffset] [-c clustersize] [-o outdir] disk start end

DESCRIPTION
     scrounge-ntfs is a utility that can rescue data from corrupted NTFS partitions. It writes
     the files retrieved to another working file system. Certain information about the partition
     needs to be known in advance. The -l mode is meant to be run in advance of the data
     corruption, with the output stored away in a file. This allows scrounge-ntfs to recover data
     reliably. See the NOTES section below for recovery information when this isn't the case.

OPTIONS
     The options are as follows:

     -c      The cluster size (in sectors). When not specified, a default of 8 is used.

     -l      List partition information for a drive. This will only work when the partition
             table for the given drive is intact.

     -m      When recovering data, this specifies the location of the MFT from the beginning of
             the partition (in sectors). If not specified, then no directory information can be
             used; that is, all rescued files will be written to the same directory.

     -o      Directory to put rescued files in. If not specified, files will be placed in the
             current directory.

     -s      Search the disk for partition information. (Not implemented yet.)

     disk    The raw device used to access the disk which contains the NTFS partition to rescue
             files from, e.g. '/dev/hdc'.

     start   The beginning of the NTFS partition (in sectors).

     end     The end of the NTFS partition (in sectors).

NOTES
     If you plan on using this program successfully, you should prepare in advance by storing a
     copy of the partition information; use the -l option to do this. Eventually, searching for
     disk partition information will be implemented, which will solve this problem. When only one
     partition exists on a disk, or you want to rescue the first partition, there are ways to
     guess at the sector sizes and MFT location. See the scrounge-ntfs web page for more info:
     http://memberwebs.com/swalter/software/scrounge/

AUTHOR
     Stef Walter <stef@memberwebs.com>

scrounge-ntfs                              June 1, 2019                              scrounge-ntfs
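A usage sketch based only on the synopsis and options above (the device name, MFT offset, output directory, and start/end sectors are placeholder values): record the partition layout while the disk is still healthy, then rescue files into a working directory after corruption strikes.

Code:
scrounge-ntfs -l /dev/hdc > ntfs-layout.txt                      # save partition info while the table is intact
scrounge-ntfs -m 6291456 -c 8 -o rescued /dev/hdc 63 58605119    # placeholder MFT offset, cluster size 8, sector range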