09-29-2011
Is there any easy way to do this? I have more than 200 files, and I don't want to write a cat statement for each of them to append the data into one big file.
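No per-file cat statement is needed: the shell expands a glob to every matching file in a single command. A minimal sketch, assuming the 200 files share a naming pattern (part*.dat here is a hypothetical name):

```shell
# Demo setup with hypothetical filenames: three small stand-ins
# for the 200 real files.
mkdir -p /tmp/catdemo
cd /tmp/catdemo
printf 'alpha\n' > part001.dat
printf 'beta\n'  > part002.dat
printf 'gamma\n' > part003.dat

# One cat call handles every file the glob matches, in sorted name order:
cat part*.dat > combined.dat

# If the expanded list might exceed ARG_MAX, let find and xargs batch it:
find . -maxdepth 1 -name 'part*.dat' -print0 | sort -z | xargs -0 cat > combined2.dat
```

Both commands produce the same result here; the find variant scales to arbitrarily many files because xargs splits the list across several cat invocations while all output still goes to the one redirect.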
9 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hello,
My apologies if this has been posted elsewhere; I have had a look at several threads, but I am still confused about how to use these functions. I have two files, each with 5 columns:
File A: (tab-delimited)
PDB CHAIN Start End Fragment
1avq A 171 176 awyfan
1avq A 172 177 wyfany
1c7k A 2 7... (3 Replies)
Discussion started by: InfoSeeker
2. Shell Programming and Scripting
I have a file and need to select only the lines where the user's shell is "/bin/bash", using awk or sed. Please help. (4 Replies)
Discussion started by: boyboy1212
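For a passwd-style file (an assumption here; the poster's format isn't shown), either tool works. A hedged sketch:

```shell
# Hypothetical input: colon-delimited, shell in the last field.
cat > /tmp/users.txt <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
alice:x:1000:1000:Alice:/home/alice:/bin/bash
EOF

# awk: print lines whose last field is exactly /bin/bash
awk -F: '$NF == "/bin/bash"' /tmp/users.txt

# sed: print lines ending in /bin/bash (# as delimiter avoids escaping /)
sed -n '\#/bin/bash$#p' /tmp/users.txt
```

The awk version matches the field exactly, so it won't be fooled by a shell like /bin/bashful; the sed version anchors on end-of-line instead.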
3. UNIX for Dummies Questions & Answers
file1:
Toronto:12439755:1076359:July 1, 1867:6
Quebec City:7560592:1542056:July 1, 1867:5
Halifax:938134:55284:July 1, 1867:4
Fredericton:751400:72908:July 1, 1867:3
Winnipeg:1170300:647797:July 15, 1870:7
Victoria:4168123:944735:July 20, 1871:10
Charlottetown:137900:5660:July 1, 1873:2... (2 Replies)
Discussion started by: mindfreak
4. UNIX for Dummies Questions & Answers
Hi,
I have 20 tab delimited text files that have a common column (column 1). The files are named GSM1.txt through GSM20.txt. Each file has 3 columns (2 other columns in addition to the first common column).
I want to write a script to join the files by the first common column so that in the... (5 Replies)
Discussion started by: evelibertine
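One common approach, sketched here with two invented stand-in files (the real GSM*.txt data isn't shown), is to fold join(1) over the files one at a time, provided each is sorted on the key column:

```shell
# Hypothetical stand-ins for GSM1.txt .. GSM20.txt; field 1 is the
# shared key, the other two fields are per-file values.
printf 'g1\t1\t2\ng2\t3\t4\n' > /tmp/GSM1.txt
printf 'g1\t5\t6\ng2\t7\t8\n' > /tmp/GSM2.txt

tab="$(printf '\t')"
out=/tmp/joined.txt
cp /tmp/GSM1.txt "$out"
# Join each further file onto the accumulated result, key = field 1.
for f in /tmp/GSM2.txt; do      # extend the list to GSM3.txt .. GSM20.txt
    join -t "$tab" "$out" "$f" > "$out.tmp" && mv "$out.tmp" "$out"
done
```

Each pass widens /tmp/joined.txt by the non-key columns of the next file; keys missing from any one file drop out unless join's -a option is added.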
5. Shell Programming and Scripting
Is it possible to join all the values after sorting them on the 1st-column key, replacing empty rows with 0, like below?
input
a1 0 a1 1 a1 1 a3 1 b2 1
a2 1 a4 1 a2 1 a4 1 c4 1
a3 1 d1 1 a3 1 b1 1 d1 1
a4 1 c4 1 b2 1
b1 1
b2 1
c4 1
d1 1
output... (8 Replies)
Discussion started by: quincyjones
6. Shell Programming and Scripting
I have a file with two fields in it delimited by a comma. Some of the first fields are duplicates. I am trying to eliminate any duplicate records in the first field, and combine the second fields in the output file.
For example, if the input is:
Jane,group=A
Bob,group=A
Bob,group=D... (3 Replies)
Discussion started by: DJR
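An awk associative array handles this in one pass; a sketch using the sample rows from the post:

```shell
# Input from the post: comma-delimited, duplicate first fields.
cat > /tmp/groups.csv <<'EOF'
Jane,group=A
Bob,group=A
Bob,group=D
EOF

# Accumulate every second field under its key, then emit one line per key.
awk -F, '
    { a[$1] = ($1 in a) ? a[$1] "," $2 : $2 }
    END { for (k in a) print k "," a[k] }
' /tmp/groups.csv
```

Note that awk's `for (k in a)` order is unspecified; pipe the output through sort if a stable order matters.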
7. Shell Programming and Scripting
Hi,
I am trying to join 2 csv files, to create a 3rd output file with the joined data.
Below is an example of my Input Data:
Input File 1
NAME, FAV_FOOD, FAV_DRINK, ID, GENDER
Bob, Fish, Coke, 1, M
Lisa, Rice, Water, 2, F
Jenny, Noodle, Tea, 3, F
Ken, Pizza, Coffee, 4, M
Lisa,... (7 Replies)
Discussion started by: RichZR
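join(1) can produce the third file directly once both inputs are sorted on the key. A hedged sketch with invented data (the post's second input isn't shown), with the ID moved to field 1 for convenience:

```shell
# Hypothetical inputs, both sorted on field 1 (the ID).
cat > /tmp/f1.csv <<'EOF'
1,Bob,Fish,Coke,M
2,Lisa,Rice,Water,F
EOF
cat > /tmp/f2.csv <<'EOF'
1,Engineering
2,Sales
EOF

# -t, makes the comma the field separator; join matches on field 1
# by default (-1 and -2 select other key fields per input).
join -t, /tmp/f1.csv /tmp/f2.csv > /tmp/out.csv
```

To keep the ID in its original fourth position instead, `join -t, -1 4 -2 4 ...` would be used, with both files sorted on that field first.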
8. Shell Programming and Scripting
Hi,
I'm an absolute beginner at shell programming.
I need a script for a NAS that sorts the csv files by date and creates one zip per month.
The current month's data should be merged into this folder, so that only monthly files remain.
Thanks for your help (1 Reply)
Discussion started by: Pipo
9. Shell Programming and Scripting
Hello,
This was posted here already, but I want to do it another way:
merge multiple files with multiple duplicate keys, filling the empty columns with "NULL" for the other joined files.
file1.csv:
1|abc
1|def
2|ghi
2|jkl
3|mno
3|pqr
file2.csv:
1|123|jojo
1|NULL|bibi... (2 Replies)
Discussion started by: yjacknewton
LEARN ABOUT DEBIAN
flow-cat
flow-cat(1) General Commands Manual flow-cat(1)
NAME
flow-cat -- Concatenate flow files
SYNOPSIS
flow-cat [-aghmp] [-b big|little] [-C comment] [-d debug_level] [-o filename] [-t start_time] [-T start_time] [-z z_level]
[file|directory ...]
DESCRIPTION
The flow-cat utility processes files and/or directories of files in the flow-tools format. The resulting concatenated data set is written
to the standard output or file specified by -o. If file is a single dash (`-') or absent, flow-cat will read from the standard input.
OPTIONS
-a Do not ignore filenames that begin with tmp.
-b big|little
Byte order of output.
-C comment
Add a comment.
-d debug_level
Enable debugging.
-g Sort file list by capture start time before processing.
-h Display help.
-m Disable the use of mmap().
-p Preload headers. Use to preserve meta information such as lost flows.
-o file Write to file instead of the standard output.
-t start_time
Select flow files after start_time. If used with -T, select files between start_time and end_time.
-T end_time
Select flow files up to end_time. If used with -t, select files between start_time and end_time.
-z z_level
Configure compression level to z_level. 0 is disabled (no compression), 9 is highest compression.
file|directory...
Process the named files and/or directories.
TIME/DATE parsing
start_time and end_time parsing is implemented with getdate.y, a commonly used function to process free-form time date specifications.
Example usage borrowed from cvs:
1 month ago
2 hours ago
400000 seconds ago
last year
last Monday
yesterday
a fortnight ago
3/31/92 10:00:07 PST
January 23, 1987 10:05pm
22:00 GMT
EXAMPLES
Concatenate all flow files beginning with ft-v05.2001-05-01 and use flow-print to display the results.
flow-cat ft-v05.2001-05-01.* | flow-print
Concatenate flow files in /flows/krc4 and store the output in compressed.flows at compression level 9 (best). The headers are preloaded
so various metadata such as the flow count is correct in the result. Filenames beginning with tmp, which are typically in-progress flow
files from flow-capture, are not processed.
flow-cat -p -z9 /flows/krc4 > compressed.flows
BUGS
None known.
AUTHOR
Mark Fullmer maf@splintered.net
SEE ALSO
flow-tools(1)