Splitting large file into multiple files in unix based on pattern
Post 302535863 by jayan_jay on Saturday 2nd of July 2011 09:25:43 AM
Try this: it takes the first field of each line, strips the first 10 characters to get the key, keeps each distinct key (uniq, so the input should already be grouped by key), and builds one grep per key that writes the matching lines into <key>.txt.
Code:
 % awk ' { print $1 } ' input_file  | cut -c 11- | uniq | awk ' { print "grep \"[0-9]"$0"\" input_file > "$0".txt" } ' | sh


Last edited by jayan_jay; 07-04-2011 at 07:48 AM..
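If the key really is everything after the first 10 characters of the first field, a single awk pass can write each line straight into its own <key>.txt instead of re-reading input_file once per key with grep. A minimal sketch along those lines (not tested against the original data; remove any old <key>.txt files before rerunning, since it appends):
Code:
awk '{
    key = substr($1, 11)      # key = first field minus its leading 10 characters
    out = key ".txt"
    print >> out              # append the whole line to <key>.txt
    close(out)                # close after each write so many distinct keys cannot exhaust file descriptors
}' input_file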
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

awk - splitting 1 large file into multiple based on same key records

Hello gurus, I am new to "awk" and trying to break a large file of 4 million records into several output files of half a million records each, but I want to keep records with the same key in the same output file rather than spread across files. e.g. my data is like: Row_Num,... (6 Replies)
Discussion started by: kam66
6 Replies

2. Shell Programming and Scripting

Problem with splitting large file based on pattern

Hi Experts, I have to split a huge file based on a pattern to create smaller files. The pattern expected in the file is: Master..... First... second.... second... third.. third... Master... First.. second... third... Master... First... second.. second.. second..... (2 Replies)
Discussion started by: saisanthi
2 Replies

3. Shell Programming and Scripting

Splitting large file and renaming based on field

I am trying to update an older program on a small cluster. It uses individual files to send jobs to each node. However, the newer database comes as one large file containing over 10,000 records. I therefore need to split this file. It looks like this: HMMER3/b NAME 1-cysPrx_C ACC ... (2 Replies)
Discussion started by: fozrun
2 Replies

4. Shell Programming and Scripting

Help me pls: splitting a single file in unix into different files based on data

I have a file in unix with sample data as follows: -------------------------------------------------------------- -------------------------------------------------------------- {30001002|XXparameter|Layout|$ I want this file to be split into different files corresponding to the sample... (54 Replies)
Discussion started by: Ravindra Swan
54 Replies

5. Shell Programming and Scripting

Help needed - Split large file into smaller files based on pattern match

Help needed urgently please. I have a large file - a few hundred thousand lines. Sample: CP START ACCOUNT 1234556 name 1 CP END ACCOUNT CP START ACCOUNT 2224444 name 1 CP END ACCOUNT CP START ACCOUNT 333344444 name 1 CP END ACCOUNT I need to split this file each time "CP START... (7 Replies)
Discussion started by: frustrated1
7 Replies

6. Shell Programming and Scripting

Sed: Splitting A large File into smaller files based on recursive Regular Expression match

I will simplify the explanation a bit: I need to parse through an 87m file - I have a single text file in the form of: <NAME>house........ SOMETEXT SOMETEXT SOMETEXT . . . . </script> MORETEXT MORETEXT . . . (6 Replies)
Discussion started by: sumguy
6 Replies

7. Shell Programming and Scripting

Splitting textfile based on pattern and name new file after pattern

Hi there, I am pretty new to these things, so I couldn't figure out how to solve this or whether it is actually that easy; I just found that awk could help. So I have a textfile with strings and numbers (originally copy-pasted from Word, therefore some empty cells) in the following structure: SC... (9 Replies)
Discussion started by: luja
9 Replies

8. Shell Programming and Scripting

Help with Splitting a Large XML file based on size AND tags

Hi All, This is my first post here. Hoping to share and gain knowledge from this great forum! I've scanned this forum before posting my problem here, but I'm afraid I couldn't find any thread that addresses this exact problem. I'm trying to split a large XML file (with multiple tag... (7 Replies)
Discussion started by: Aviktheory11
7 Replies

9. UNIX for Advanced & Expert Users

Concatenation of multiple files based on file pattern

Hi, I have the following reports that get generated every 1 hour and this is my requirement: 1. 5 reports get generated every hour with the names "Report.Dddmmyy.Thhmiss.CTLR" "Report.Dddmmyy.Thhmiss.ACCD" "Report.Dddmmyy.Thhmiss.BCCD" "Report.Dddmmyy.Thhmiss.CCCD"... (1 Reply)
Discussion started by: Jesshelle David
1 Replies

10. UNIX for Beginners Questions & Answers

UNIX script to append multiple text files into one file based on pattern present in filename

Hi All, I am new to Unix and I need to write a script. Can someone help me with a requirement where I have a list of files in a directory and I want to merge the files if a string pattern matches in the filenames? AAAL_555A_ORANGE1_F190404.TXT AAAL_555A_ORANGE2_F190404.TXT AAAL_555A_ORANGE3_F190404.TXT... (6 Replies)
Discussion started by: Shankar455
6 Replies
UNIQ(1)                     BSD General Commands Manual                    UNIQ(1)

NAME
     uniq -- report or filter out repeated lines in a file

SYNOPSIS
     uniq [-c | -d | -u] [-i] [-f num] [-s chars] [input_file [output_file]]

DESCRIPTION
     The uniq utility reads the specified input_file comparing adjacent lines, and
     writes a copy of each unique input line to the output_file.  If input_file is
     a single dash ('-') or absent, the standard input is read.  If output_file is
     absent, standard output is used for output.  The second and succeeding copies
     of identical adjacent input lines are not written.  Repeated lines in the
     input will not be detected if they are not adjacent, so it may be necessary
     to sort the files first.

     The following options are available:

     -c        Precede each output line with the count of the number of times the
               line occurred in the input, followed by a single space.

     -d        Only output lines that are repeated in the input.

     -f num    Ignore the first num fields in each input line when doing
               comparisons.  A field is a string of non-blank characters separated
               from adjacent fields by blanks.  Field numbers are one based, i.e.,
               the first field is field one.

     -s chars  Ignore the first chars characters in each input line when doing
               comparisons.  If specified in conjunction with the -f option, the
               first chars characters after the first num fields will be ignored.
               Character numbers are one based, i.e., the first character is
               character one.

     -u        Only output lines that are not repeated in the input.

     -i        Case insensitive comparison of lines.

ENVIRONMENT
     The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the
     execution of uniq as described in environ(7).

EXIT STATUS
     The uniq utility exits 0 on success, and >0 if an error occurs.

COMPATIBILITY
     The historic +number and -number options have been deprecated but are still
     supported in this implementation.

SEE ALSO
     sort(1)

STANDARDS
     The uniq utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'') as amended by
     Cor. 1-2002.

HISTORY
     A uniq command appeared in Version 3 AT&T UNIX.

BSD                              December 17, 2009                             BSD
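As the DESCRIPTION notes, uniq only collapses adjacent duplicates, which is why the one-liner above can rely on uniq alone when the keys arrive already grouped; unsorted input needs a sort first. A quick illustration (shell session, any POSIX uniq):
Code:
$ printf 'a\nb\na\n' | uniq              # duplicates are not adjacent, so all three lines survive
a
b
a
$ printf 'a\nb\na\n' | sort | uniq       # sort first, then uniq collapses the repeated 'a'
a
b
$ printf 'a\nb\na\n' | sort | uniq -c    # -c prefixes each distinct line with its count
   2 a
   1 b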