awk help to make my work faster
Post 302271357 by matrixmadhan, Thursday 25th of December 2008, 02:48:04 AM
0.6 M doesn't seem like a big number (I guess).

Load everything into memory with a map of

line_no => line_no_contents

then do a lookup with line_no, and you're done. :)
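A minimal sketch of that idea in awk, assuming the big file is data.txt and the line numbers to look up sit one per line in lookups.txt (both file names are hypothetical; the thread doesn't show the real ones):

# lookup.awk -- load data.txt into an array keyed by line number,
# then answer each request from lookups.txt in constant time
# instead of rescanning the data once per lookup.
# Usage: awk -f lookup.awk data.txt lookups.txt
NR == FNR {                       # first file: the data
    line[FNR] = $0                # map line_no => line_no_contents
    next
}
$1 in line { print line[$1] }     # second file: one line number per line

One pass over each file, so the cost grows with data + lookups rather than data x lookups.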
 

10 More Discussions You Might Find Interesting

1. Solaris

Looking for a different debugger for Solaris, or a way to make Sun Studio faster

I'm using Sun Studio but it is very slow. Is there any other GUI debugger for Solaris, or at least some way to make it faster? I'm debugging over a telnet connection to a remote server. Thanks. (0 Replies)
Discussion started by: umen

2. Shell Programming and Scripting

Can anyone make this script run faster?

One of our servers runs Solaris 8 and does not have "ls -lh" as a valid command. I wrote the following script to make the ls output easier to read and emulate "ls -lh" functionality. The script works, but it is slow when executed on a directory that contains a large number of files. Can anyone make... (10 Replies) (A rough sketch of the idea follows this entry.)
Discussion started by: shew01
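For readers after the same thing: a hedged sketch of one way to fake "ls -lh" where it's missing, post-processing ls -l with awk. The field positions assume the usual eight-column ls -l layout; this is untested against Solaris 8's exact output.

# humanls.sh -- rough "ls -lh" emulation: rewrite the size column
# (field 5 of `ls -l`) into B/K/M/G/T units.
# Caveats: rebuilding $5 collapses whitespace, and filenames with
# spaces shift the columns; a sketch, not a drop-in replacement.
ls -l "$@" | awk '
$5 ~ /^[0-9]+$/ {
    split("B K M G T", u, " ")
    s = $5; i = 1
    while (s >= 1024 && i < 5) { s /= 1024; i++ }
    $5 = sprintf("%.1f%s", s, u[i])
}
{ print }'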

3. Red Hat

Re: How to make the Linux PC faster

Hi, can anyone help me solve this problem? I have a Linux database server that is so slow I can't even open a terminal. Is there any solution to this? How can I make the server faster? Thanks & Regards, Venky (0 Replies)
Discussion started by: venky_vemuri

4. Shell Programming and Scripting

How to make copy work faster

I am trying to copy a folder that contains a set of C executables. The copy takes 2 minutes to complete, whereas the rest of the script takes only about 3 minutes for all its other work. Is there a way to copy the folder faster so that the overall performance of the script improves? (2 Replies)
Discussion started by: prasperl

5. Shell Programming and Scripting

Running rename command on large files and make it faster

Hi All, I have some 80,000 files in a directory which I need to rename. Below is the command I am currently running, and it seems to be taking forever. Is there any way to speed it up? I have GNU Parallel installed on my... (6 Replies) (A generic sketch of the usual fix follows this entry.)
Discussion started by: shoaibjameel123
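The thread's actual rename pattern is cut off above, so the following is purely hypothetical: the usual fix is to stop spawning one external process per file and let the shell do the substitution, here stripping a made-up .bak suffix.

# Hypothetical example: drop a ".bak" suffix from 80,000 files.
# ${f%.bak} is expanded by the shell itself, so mv is the only
# external command per file, much cheaper than launching a
# regex-rename process for each one.
for f in *.bak; do
    mv -- "$f" "${f%.bak}"
done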

6. Shell Programming and Scripting

Make script faster

Hi all, in bash scripting I used to read files with: cat $file | while read line; do ... done However, this is a very slow way to read a file line by line. E.g., in a file that has 3 columns and fewer than 400 rows, like this: I run the next script: cat $line | while read line; do ## Reads each... (10 Replies) (A sketch of the standard alternatives follows this entry.)
Discussion started by: AlbertGM
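For anyone landing here from a search, the slowness usually comes from the needless cat and from doing per-line work in the shell at all; a minimal sketch of the standard alternatives, assuming a generic $file:

# Redirect into the loop instead of piping through cat; IFS= and -r
# keep whitespace and backslashes intact, and no subshell is created
# (a cat | while pipeline runs the loop in a subshell in many shells).
while IFS= read -r line; do
    printf '%s\n' "$line"    # whatever per-line work is needed
done < "$file"

# For column-oriented work on a 3-column file, one awk process
# beats any shell read loop:
awk '{ print $1, $2, $3 }' "$file"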

7. Shell Programming and Scripting

awk changes to make it faster

I have a script like the one below, which picks a number from one file, searches for it in another file, and prints the output. But it is very slow when run on a huge file. Can we rewrite it with awk? #! /bin/ksh while read line1 do echo "$line1" a=`echo $line1` if then echo "$num" cat file1|nawk... (6 Replies)
Discussion started by: mirwasim

8. Shell Programming and Scripting

How to make awk command faster?

I have the below command, which reads a large file and takes 3 hours to run. Can something be done to make it faster? awk -F ',' '{OFS=","}{ if ($13 == "9999") print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12 }' ${NLAP_TEMP}/hist1.out|sort -T ${NLAP_TEMP} |uniq>... (13 Replies) (One common rewrite is sketched after this entry.)
Discussion started by: Peu Mukherjee
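One common rewrite, sketched under the assumption that only de-duplication (not sorted output) is actually required: do the filter and the dedup in a single awk pass, so nothing is piped through sort.

# Keep the first 12 fields of rows whose 13th field is 9999,
# de-duplicating with an in-memory hash instead of sort | uniq.
# Output keeps input order rather than sort order.
awk -F ',' -v OFS=',' '
$13 == "9999" {
    key = $1 OFS $2 OFS $3 OFS $4 OFS $5 OFS $6 OFS $7 OFS $8 \
          OFS $9 OFS $10 OFS $11 OFS $12
    if (!seen[key]++) print key
}' "${NLAP_TEMP}/hist1.out"

If sorted output really is needed, keep the sort but feed it this already-filtered, already-deduplicated stream; the uniq pass disappears either way.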

9. Shell Programming and Scripting

How to make an awk command faster for large amounts of data?

I have nginx web server logs of all requests that were made, and I'm filtering them by date and time. Each line has the following structure: 127.0.0.1 - xyz.com GET 123.ts HTTP/1.1 (200) 0.000 s 3182 CoreMedia/1.0.0.15F79 (iPhone; U; CPU OS 11_4 like Mac OS X; pt_br) These text files are... (21 Replies)
Discussion started by: brenoasrm

10. Shell Programming and Scripting

How to make a loop over multiple directories faster?

Hello, I am on Ubuntu 18.04 Bionic. I have one shell script, run.sh (which is outside the scope of this question), that runs files under multiple directories, and one script that controls all processes running under those directories (control.sh). I set a cronjob to check each of them at two-minute intervals... (3 Replies)
Discussion started by: baris35
CSPLIT(1)						    BSD General Commands Manual 						 CSPLIT(1)

NAME
     csplit -- split files based on context

SYNOPSIS
     csplit [-ks] [-f prefix] [-n number] file args ...

DESCRIPTION
     The csplit utility splits file into pieces using the patterns args.  If
     file is a dash ('-'), csplit reads from standard input.

     Files are created with a prefix of ``xx'' and two decimal digits.  The
     size of each file is written to standard output as it is created.  If an
     error occurs whilst files are being created, or a HUP, INT, or TERM
     signal is received, all files previously written are removed.

     The options are as follows:

     -f prefix  Create file names beginning with prefix, instead of ``xx''.

     -k         Do not remove previously created files if an error occurs or
                a HUP, INT, or TERM signal is received.

     -n number  Create file names beginning with number of decimal digits
                after the prefix, instead of 2.

     -s         Do not write the size of each output file to standard output
                as it is created.

     The args operands may be a combination of the following patterns:

     /regexp/[[+|-]offset]
             Create a file containing the input from the current line to (but
             not including) the next line matching the given basic regular
             expression.  An optional offset from the line that matched may
             be specified.

     %regexp%[[+|-]offset]
             Same as above, but a file is not created for the output.

     line_no
             Create a file containing the input from the current line to (but
             not including) the specified line number.

     {num}   Repeat the previous pattern the specified number of times.  If
             it follows a line number pattern, a new file will be created for
             each line_no lines, num times.

     The first line of the file is line number 1 for historic reasons.  After
     all the patterns have been processed, the remaining input data (if there
     is any) will be written to a new file.  Requesting to split at a line
     before the current line number or past the end of the file will result
     in an error.

ENVIRONMENT
     The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect
     the execution of csplit as described in environ(7).

EXIT STATUS
     The csplit utility exits 0 on success, and >0 if an error occurs.

EXAMPLES
     Split the mdoc(7) file foo.1 into one file for each section (up to 21
     plus one for the rest, if any):

           csplit -k foo.1 '%^\.Sh%' '/^\.Sh/' '{20}'

     Split standard input after the first 99 lines and every 100 lines
     thereafter:

           csplit -k - 100 '{19}'

SEE ALSO
     sed(1), split(1), re_format(7)

STANDARDS
     The csplit utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'').

HISTORY
     A csplit command appeared in PWB UNIX.

BUGS
     Input lines are limited to LINE_MAX (2048) bytes in length.

BSD                             February 6, 2014                             BSD