UNIX for Advanced & Expert Users: Delete first 100 lines from a BIG File
Post 302657787 by methyl, Monday 18th of June 2012, 11:14:55 AM
@unohu
Please explain why you do not want to use a workfile, and the context in which this log file is created.

If the log file is held open by a process, there is no way to shorten it without using a workfile (and even that method risks losing new log entries written during the copy). However, it is possible to retain the same inode (is that your issue?).

Check whether the file is open with the fuser command.
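As a minimal sketch of the workfile approach (the name logfile is illustrative, and this assumes that losing any entries written while the copy runs is acceptable), copying the trimmed data back with cat rather than mv truncates and rewrites the existing file, so the inode is preserved:

Code:
  # See which processes, if any, still hold the log open.
  fuser logfile

  # Keep everything from line 101 onwards in a temporary workfile.
  tail -n +101 logfile > logfile.tmp

  # Copy the trimmed content back over the original file.
  # cat + redirection rewrites the existing file in place, so the inode
  # (and any open file descriptors pointing at it) stay the same.
  cat logfile.tmp > logfile
  rm logfile.tmp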
 

10 More Discussions You Might Find Interesting

1. Solaris

Delete first 100 lines rather than zero out the file

Hi experts, on my Solaris 9 system the file /var/adm/messages is growing too fast - about 40MB every 24 hours - and it keeps logging messages like these:
bash-2.05# tail -f messages
Nov 9 16:35:38 ME1 last message repeated 1 time
Nov 9 16:35:38 ME1 ftpd: wtmpx /var/adm/wtmpx No such file or directory
Nov 9... (7 Replies)
Discussion started by: thepurple
7 Replies
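For that situation, a hedged sketch (not taken from the thread itself) of dropping the first 100 lines of /var/adm/messages while keeping the file in place, so syslogd keeps writing to the same inode:

Code:
  # Strip the first 100 lines into a workfile.
  sed '1,100d' /var/adm/messages > /tmp/messages.trimmed

  # Copy the result back over the original instead of replacing it, so the
  # daemon's open file descriptor still points at the same file.
  cat /tmp/messages.trimmed > /var/adm/messages
  rm /tmp/messages.trimmed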

2. Solaris

delete first 100 lines from a file

I have a file with 2,800,000 rows, and the first 80 lines are a chunk that I want to delete. I tried tail -f2799920 filename. Is there a more efficient way to do this? Thanks in advance. (7 Replies)
Discussion started by: salaathi
7 Replies
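For skipping a fixed number of leading lines, two standard one-pass approaches (file name as in the question):

Code:
  # Print everything from line 81 onwards.
  tail -n +81 filename > filename.new

  # Equivalent with sed: delete lines 1 through 80.
  sed '1,80d' filename > filename.new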

3. Shell Programming and Scripting

How to delete lines in a file that have duplicates, or derive the lines that appear once

Input (one item per line): a b b c d d. I need: a c. I know how to get the lines that have duplicates (b, d): sort file | uniq -d. But I need the opposite of this. I have searched the forum and other places as well, but have found solutions for everything except this variant of the problem. (3 Replies)
Discussion started by: necroman08
3 Replies
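The standard tool for this is uniq -u, which prints only the lines that occur exactly once in sorted input; a short sketch:

Code:
  # Sort so duplicates become adjacent, then keep only the unique lines.
  sort file | uniq -u

  # awk alternative that preserves the original line order: count every
  # line in a first pass over the file, print the singletons in a second.
  awk 'NR==FNR { seen[$0]++; next } seen[$0] == 1' file file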

4. Shell Programming and Scripting

Print N lines after a search string in a big file

I have a command which prints a number of lines after and before the search string in a huge file:
nawk 'c-->0;$0~s{if(b)for(c=b+1;c>1;c--)print r;print;c=a}b{r=$0}' b=0 a=10 s="STRING1" FILE
The file is 5 GB. It works great and prints 10 lines after each line which contains the search string, in... (8 Replies)
Discussion started by: prash184u
8 Replies
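Where GNU grep is available, the same effect is usually simpler to get with its context options (pattern and file name as in the question):

Code:
  # Print each matching line plus the 10 lines after it.
  grep -A 10 'STRING1' FILE

  # Print 3 lines before and 10 lines after each match.
  grep -B 3 -A 10 'STRING1' FILE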

5. Shell Programming and Scripting

Re: Deleting lines from a big file

Hi, I have a big (2.7 GB) text file. Each line uses '|' as a separator between columns. I want to delete those lines which contain text like '|0|0|0|0|0'. I tried: sed '/|0|0|0|0|0/d' test.txt. Unfortunately, it scans the file but appears to do nothing. File content sample:... (4 Replies)
Discussion started by: dipeshvshah
4 Replies
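The most likely explanation is that sed writes its output to stdout rather than editing the file; a hedged sketch of the usual fixes (file name as in the question):

Code:
  # Write the filtered output to a new file, then replace the original.
  sed '/|0|0|0|0|0/d' test.txt > test.clean && mv test.clean test.txt

  # With GNU sed, edit the file in place instead.
  sed -i '/|0|0|0|0|0/d' test.txt

  # grep -v does the same filtering and is often faster on large files.
  grep -v '|0|0|0|0|0' test.txt > test.clean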

6. UNIX for Advanced & Expert Users

In a huge file, delete duplicate lines, leaving unique lines

Hi All, I have a very huge file (4GB) which has duplicate lines. I want to delete the duplicate lines, leaving only unique lines. sort, uniq and awk '!x++' are not working, as they run out of buffer space. I don't know if this would work: I want to read each line of the file in a for loop, and want to... (16 Replies)
Discussion started by: krishnix
16 Replies
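A common workaround (a sketch, assuming a sort that supports -T and -S, such as GNU sort, and enough free disk space under /var/tmp) is to let sort spill to temporary files on disk instead of running out of memory:

Code:
  # -u removes duplicate lines, -T redirects the temporary spill files to a
  # filesystem with free space, -S caps the in-memory buffer (GNU sort).
  sort -u -T /var/tmp -S 512M bigfile > bigfile.uniq

Unlike awk '!x[$0]++', this does not preserve the original line order.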

7. Shell Programming and Scripting

Delete rows from a big file

Hi all, I have a big file (about 6 million rows) and I have to delete the occurrences stored in a small file (about 9000 rows). I have tried this:
while read line
do
  grep -v $line big_file > ok_file.tmp
  mv ok_file.tmp big_file
done < small_file
It works, but is very slow. How... (2 Replies)
Discussion started by: Tibbeche
2 Replies
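Rather than one grep pass per pattern, grep can take all the patterns at once with -f, so the big file is scanned only a single time; a sketch using the file names from the question (-F treats the patterns as fixed strings; drop it if they are regular expressions):

Code:
  # Remove every line of big_file that matches any line of small_file.
  grep -v -F -f small_file big_file > ok_file.tmp && mv ok_file.tmp big_file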

8. UNIX for Dummies Questions & Answers

Delete records from a big file based on some condition

Hi, to load a big file into a table, I have to make sure that all rows in the file have the same number of columns. So if any rows in my file do not have exactly 6 columns, I need to delete them. The delimiter is a space, and columns are optionally enclosed by "". This can be ... (1 Reply)
Discussion started by: hemantraijain
1 Replies
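A hedged awk sketch that keeps only the rows with exactly 6 whitespace-separated fields; it does not account for spaces inside the quoted fields, so it is only a starting point (file names illustrative):

Code:
  # Print only the records that split into exactly 6 fields.
  awk 'NF == 6' input_file > clean_file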

9. Shell Programming and Scripting

Want to extract certain lines from a big file

Hi All, I am trying to get some lines from a file. I did it with a while-do loop, but since the files are huge it is taking a long time, so now I want to make it faster. The file will have about 1 million lines, in a format like the one below: ##transaction, , , ,blah, blah... (38 Replies)
Discussion started by: mad man
38 Replies
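Without the full requirement it is hard to be precise, but the usual speed-up is to replace the per-line shell loop with a single awk or grep pass over the file; a sketch assuming the wanted lines start with "##transaction" (file names illustrative):

Code:
  # One pass over the whole file, no subprocess started per line.
  awk '/^##transaction/' bigfile > wanted_lines
  # or, equivalently:
  grep '^##transaction' bigfile > wanted_lines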

10. UNIX for Beginners Questions & Answers

How to copy only some lines from a very big file?

Dear all, I have been stuck on this problem for some days. I have a very big file that cannot be opened with vi. There are 200 loops in this file, and each loop has one line like this: GWA quasiparticle energy with Z factor (eV). And I need the 98 lines after each such line. Is... (6 Replies)
Discussion started by: phamnu
6 Replies
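A sketch of pulling out each marker line together with the 98 lines that follow it in a single pass (the input and output file names are illustrative):

Code:
  # With GNU grep: print every match plus 98 lines of trailing context.
  grep -F -A 98 'GWA quasiparticle energy with Z factor (eV)' bigfile > extracted.txt

  # Portable awk alternative: on a match, print that line and the next 98.
  awk '/GWA quasiparticle energy with Z factor/ { c = 99 } c && c--' bigfile > extracted.txt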
FUSER(1)                BSD General Commands Manual                FUSER(1)

NAME
     fuser -- list IDs of all processes that have one or more files open

SYNOPSIS
     fuser [-cfkmu] [-M core] [-N system] [-s signal] file ...

DESCRIPTION
     The fuser utility writes to stdout the PIDs of processes that have one
     or more named files open. For block and character special devices, all
     processes using files on that device are listed. A file is considered
     open by a process if it was explicitly opened, is the working directory,
     root directory, jail root directory, active executable text, kernel
     trace file or the controlling terminal of the process. If the -m option
     is specified, the fuser utility will also look through mmapped files.

     The following options are available:

     -c      Treat the files as mount points and report on any files open in
             the file system.
     -f      Report only on the named files.
     -k      Send a signal to the reported processes (SIGKILL by default).
     -m      Search through mmapped files too.
     -u      Write the user name associated with each process to stderr.
     -M      Extract values associated with the name list from the specified
             core instead of the default /dev/kmem.
     -N      Extract the name list from the specified system instead of the
             default, which is the kernel image the system has booted from.
     -s      Use the given signal name instead of the default SIGKILL.

     The following symbols, written to stderr, indicate how each file is
     used:

     r       The file is the root directory of the process.
     c       The file is the current working directory of the process.
     j       The file is the jail root of the process.
     t       The file is the kernel tracing file for the process.
     x       The file is executable text of the process.
     y       The process uses this file as its controlling tty.
     m       The file is mmapped.
     w       The file is open for writing.
     a       The file is open as append only (O_APPEND was specified).
     d       The process bypasses the fs cache while writing to this file
             (O_DIRECT was specified).
     s       A shared lock is held.
     e       An exclusive lock is held.

EXIT STATUS
     The fuser utility returns 0 on successful completion and >0 otherwise.

EXAMPLES
     The command ``fuser -fu .'' writes to standard output the process IDs of
     the processes that are using the current directory, and writes to stderr
     an indication of how each of those processes is using the directory
     together with the user name associated with each process.

SEE ALSO
     fstat(1), ps(1), systat(1), iostat(8), pstat(8), vmstat(8)

STANDARDS
     The fuser utility is expected to conform to IEEE Std 1003.1-2004
     (``POSIX.1'').

HISTORY
     The fuser utility appeared in FreeBSD 9.0.

AUTHORS
     The fuser utility and this manual page were written by Stanislav Sedov
     <stas@FreeBSD.org>.

BUGS
     Since fuser takes a snapshot of the system, it is only correct for a
     very short period of time. When working via the kvm(3) interface, the
     report is limited to the filesystems the fuser utility knows about
     (currently only cd9660, devfs, nfs, ntfs, nwfs, udf, ufs and zfs).

BSD                             May 13, 2011                             BSD