Hi bharathbangalor,
The sample input file you showed us and the output you said you wanted have tabs as the field separators. But you told Subbeh "More over the fields are coma (sic) separated".
Are you saying your input file has commas instead of tabs as field separators?
Are you saying you want the output to use commas instead of tabs as field separators?
The sample input file you showed us is sorted by key, city, and account. The awk script I provided assumes that all entries in your input file with the same key are on contiguous lines and prints output that is in the same order as the input. The awk script Subbeh provided will work no matter what order the input is in, but (other than the header) prints output lines in random order.
To be sure we're coming up with code that will work for you:
Do all of the input lines for a given id in your real data appear on adjacent lines?
Do you care about the order of the output lines?
What operating system and version are you using? (I.e., what is the output from uname -a?)
What is the output from the command getconf LINE_MAX?
Will the length in bytes of any input line (including field separators and the trailing newline character) in your real data exceed the number printed by getconf?
Will the length in bytes of the longest output line you want to produce from your real data exceed the number printed by getconf? If it will, will the number of bytes in the longest output field you want to produce from your real data exceed the number printed by getconf?
And, does every line in your input file have the same number of fields?
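The contiguous-key assumption mentioned above can be sketched with a small awk script. This is only an illustration, not the script from the thread: it assumes tab-separated input with the key in field 1 and a value in field 2, and that all lines sharing a key are adjacent. It collects each key's values onto one output line.

```shell
# Hypothetical sample: key in field 1, value in field 2, keys contiguous.
printf 'k1\ta\nk1\tb\nk2\tc\n' |
awk -F'\t' '
    $1 != key { if (NR > 1) print out; key = $1; out = $1 }  # new key: flush previous group
    { out = out "\t" $2 }                                    # append this value to the group
    END { if (NR) print out }                                # flush the last group
'
```

Because each group is flushed as soon as the key changes, the output preserves the input order, which is exactly why the approach only works when same-key lines are adjacent.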
Hi Don,
PFA my actual data and required output sample.
PFB the answers to your questions:
1. Yes
2. No
3. 2.6.32-279.5.1.el6.x86_64 #1 SMP Tue Jul 24 13:57:35 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
4. 2048
5. No
6. No.
I need a little help as I am a complete novice at scripting in Unix. However, I am posed with an issue: I have two CSV files in the following format:
FILE1.CSV:
HEADER
HEADER
Header
, , HEADER
001X ,,200
002X ,,300
003X ... (6 Replies)
I am trying to place all my data in a single row (order doesn't matter). Note I am a Unix novice, so please go easy on me.
Here is an example
Raw data:
row#
(1) 45 64 23
(2) 32 1 6 56
(3) 32 45
Needs to be like this:
row#
(1) 45
(2) 32
(3) 32 ... (2 Replies)
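Going by the sample above, the "single row" output keeps only the first value of each data row while leaving the header alone. A minimal awk sketch (the sample numbers below are from the post; the header test is an assumption):

```shell
printf 'row#\n(1) 45 64 23\n(2) 32 1 6 56\n(3) 32\n' |
awk '
    NF > 1 { print $1, $2; next }  # data row: keep label and first value
    { print }                      # header (single field): pass through
'
```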
INPUT
I have a file with 2 columns. Every set in a column ends with the symbol //.
The first one contains entries like chr, chr no, chromosome name, cell no., cell no., etc., and the second column has values belonging to the first column, like chr Xy, 22, 345, 22222, etc. Some columns have repeated but not... (4 Replies)
Hi guys, sorry for posting a simple query. I have tried the methods posted previously on this site, but I'm unable to join the similar values in different columns of different files.
I used sort -u file1 and join, but to no avail.
I'm attaching my input files. Please check them.
I have two files.
1st file... (10 Replies)
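Since the attached files are not shown, here is only a generic sketch of the sort-then-join approach the poster was attempting, with made-up two-column files joined on their first field. join(1) requires both inputs to be sorted on the join field, which is the usual reason a bare join "doesn't work":

```shell
# Hypothetical inputs; real files and join fields will differ.
printf 'b 2\na 1\nc 3\n' | sort -k1,1 > /tmp/f1.$$
printf 'c y\na x\n'      | sort -k1,1 > /tmp/f2.$$
join /tmp/f1.$$ /tmp/f2.$$   # keeps only keys present in both files
rm -f /tmp/f1.$$ /tmp/f2.$$
```

Use join -a1 (or -a2) instead if unmatched lines from one of the files should be kept as well.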
Hi Experts,
I need your timely help. I have a problem with merging two files. Here is my situation:
Here I have to compare first three fields from FILE1 with FILE2. If they are equal, I have to append the remaining values from FILE2 with FILE1 to create the output.
FILE1:
Class ... (3 Replies)
I have 48 csv files in my directory that all have this form:
Storm Speed (mph),43.0410781151
Storm motion direction (degrees),261.580774982
MLCAPE,2450.54098661
MLCIN,-9.85040520279
MLLCL,230
MLLFC,1070.39871
MLEL,207.194689294
MLCT,Not enough data
Sbcape,2203.97617778... (3 Replies)
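The goal of the thread is cut off, but with 48 key,value files like this, one common task is pulling a single parameter out of every file. A hedged sketch (directory name and chosen parameter are invented; the MLCAPE values echo the sample above):

```shell
# Hypothetical directory of key,value CSVs; extract MLCAPE from each.
mkdir -p /tmp/storms.$$
printf 'MLCAPE,2450.5\nMLCIN,-9.85\n' > /tmp/storms.$$/s1.csv
printf 'MLCAPE,1800.0\nMLCIN,-5.00\n' > /tmp/storms.$$/s2.csv
awk -F, '$1 == "MLCAPE" { print FILENAME "," $2 }' /tmp/storms.$$/*.csv
rm -rf /tmp/storms.$$
```

FILENAME is awk's built-in name of the file currently being read, which is what lets one awk invocation label output from all 48 files.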
Hi List,
I have two files. File1 contains all of the data I require to be processed, and I need to add another field to this data by matching a common field in File2 and appending a corresponding field to the data in File1 based on the match... So:
File 1:... (1 Reply)
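File 1 is truncated above, so this is only a sketch of the standard awk lookup for "append a field from File2 by matching a common field": load File2 into an array keyed on the shared field, then print each File1 line with the looked-up value appended. The sample data and the assumption that the key is field 1 of a comma-separated file are mine, not from the post.

```shell
printf 'id1,alpha\nid2,beta\n'    > /tmp/File1.$$
printf 'id1,extra1\nid2,extra2\n' > /tmp/File2.$$
awk -F, -v OFS=, '
    NR == FNR { map[$1] = $2; next }  # File2: key -> field to append
    { print $0, map[$1] }             # File1: append match (empty if none)
' /tmp/File2.$$ /tmp/File1.$$
rm -f /tmp/File1.$$ /tmp/File2.$$
```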
Dear List,
I have a file of csv data which has a different line per compliance check per host. I do not want any omissions from this csv data file which looks like this:
date,hostname,status,color,check
02-03-2012,COMP1,FAIL,Yellow,auth_pass_change... (3 Replies)
Hi,
I have following 2 CSV files
file1.txt
A1,B1,C1,D1,E1
A2,B2,C2,D2,E2
A3,B3,C3,D3,E3
....
file2.txt
A1,B1,P1,Q1,R1,S1,T1,U1
A1,B1,P2,Q2,R2,S2,T2,U2
A1,B1,P3,Q3,R3,S3,T3,U3
A2,B2,X1,Y1,Z1,I1,J1,K1
A2,B2,X2,Y2,Z2,I2,J2,K2
A2,B2,X3,Y3,Z3,I3,J3,K3
A2,B2,X4,Y4,Z4,I4,J4,K4... (2 Replies)
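The desired output is not shown, so the sketch below assumes one plausible reading: for every file2.txt row, emit the matching file1.txt line (same first two fields) with file2's remaining fields appended. The sample rows echo the post; everything else is an assumption.

```shell
printf 'A1,B1,C1,D1,E1\n'                      > /tmp/j1.$$
printf 'A1,B1,P1,Q1,R1\nA1,B1,P2,Q2,R2\n'      > /tmp/j2.$$
awk -F, -v OFS=, '
    NR == FNR { f1[$1 FS $2] = $0; next }       # index file1 by first two fields
    ($1 FS $2) in f1 {
        rest = $3
        for (i = 4; i <= NF; i++) rest = rest OFS $i
        print f1[$1 FS $2], rest                # file1 line + file2 tail
    }
' /tmp/j1.$$ /tmp/j2.$$
rm -f /tmp/j1.$$ /tmp/j2.$$
```

Because file2 can repeat a key, each repeated row produces its own output line, unlike join of two keyed-once files.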
Discussion started by: learnoutmore99
LEARN ABOUT DEBIAN
mongoimport
MONGOIMPORT(1) Mongo Database MONGOIMPORT(1)
NAME
mongoimport - the Mongo import tool
SYNOPSIS
mongoimport [OPTIONS]
DESCRIPTION
mongoimport is a tool to import a MongoDB collection from JSON, CSV, or TSV. The query can be filtered or a list of fields to input can be
given.
OPTIONS
--help show usage information
-h, --host HOST
server to connect to (default HOST=localhost)
-d, --db DATABASE
database to use
-c, --collection COLLECTION
collection to use (some commands)
--dbpath PATH
directly access mongod data files in this path, instead of connecting to a mongod instance
-v, --verbose
be more verbose (include multiple times for more verbosity e.g. -vvvvv)
-f, --fields NAMES
comma separated list of field names e.g. -f name,age
--fieldFile FILE
file with field names - 1 per line
--jsonArray
load a json array, not one item per line. Currently limited to 4MB.
--ignoreBlanks
if given, empty fields in csv and tsv will be ignored
--type TYPE
type of file to import. default: json (json,csv,tsv)
--file FILE
file to import from; if not specified stdin is used
--drop drop collection first
--headerline
CSV,TSV only - use first line as headers
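Putting the CSV-related options together, a typical invocation might look like the sketch below. The database and collection names are arbitrary examples, and the mongoimport command itself is shown commented out because it needs a running mongod to connect to:

```shell
# Build a small CSV whose first line is a header row.
printf 'name,age\nalice,30\nbob,25\n' > /tmp/people.$$.csv
cat /tmp/people.$$.csv
# With a mongod on localhost, the import would be (names are examples):
#   mongoimport -d test -c people --type csv --headerline \
#       --file /tmp/people.$$.csv
rm -f /tmp/people.$$.csv
```

--headerline tells mongoimport to take the field names from the first CSV line instead of requiring -f or --fieldFile.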
COPYRIGHT
Copyright 2007-2009 10gen
SEE ALSO
For more information, please refer to the MongoDB wiki, available at http://www.mongodb.org.
AUTHOR
Kristina Chodorow
10gen January 2010 MONGOIMPORT(1)