Shell Programming and Scripting: Convert column to quote and comma separated row
Post 302923120 by Scrutinizer on Thursday 30th of October 2014 01:54:03 PM
Try:
Code:
awk '{printf fmt,$1}' fmt="'%s'\n" file | paste -sd, -
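Here awk wraps the first field of each line in single quotes (escape sequences such as \n are interpreted in command-line variable assignments), and paste -s then joins the lines with commas. For example, with a hypothetical input file holding one value per line:
Code:
$ cat file
abc
def
ghi
$ awk '{printf fmt,$1}' fmt="'%s'\n" file | paste -sd, -
'abc','def','ghi'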

 

10 More Discussions You Might Find Interesting

1. UNIX Desktop Questions & Answers

Unix Comma Separated to Excel Column

I would like to copy 2 parts of a csv file from Unix to an XL sheet. However, to save time I do not want to format the column every time I cut and paste into XL (Text to Columns). I've used awk -F, '{print $1, $2....}'. Is there a script or code that can automatically format the csv for XL columns? ... (3 Replies)
Discussion started by: ravzter
3 Replies
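One way to skip the Text to Columns step is to emit tab-separated output, which Excel splits into columns on paste. A minimal sketch, assuming a hypothetical input.csv, that only the first two fields are wanted, and that the fields themselves contain no embedded commas:
Code:
# print the first two comma-separated fields as tab-separated output for pasting into Excel
awk -F, -v OFS='\t' '{ print $1, $2 }' input.csv > forexcel.txt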

2. Shell Programming and Scripting

Converting column values to a comma-delimited single row

I have a requirement in which I have to read a file that has multiple columns separated by a pipe "|". From this I have to read each column's values separately, create a comma-separated row for each column, and write it to another file. e.g. Input file: ColA ColB 1 2 2 x 3 y... (5 Replies)
Discussion started by: nvuradi
5 Replies
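A minimal sketch for the question above, assuming a hypothetical pipe-delimited input.txt with the same number of columns on every line, writing one comma-separated row per column to output.txt:
Code:
# collect each column's values, then print one comma-separated row per column
awk -F'|' '{ for (i = 1; i <= NF; i++) row[i] = (NR == 1 ? $i : row[i] "," $i) }
END       { for (i = 1; i <= NF; i++) print row[i] }' input.txt > output.txt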

3. Shell Programming and Scripting

Deleting a column in multiple files that are comma separated

Hi, I have a directory that contains, say, 100 files named sequentially like input_1.25_50_C1.txt input_1.25_50_C2.txt input_1.25_50_C3.txt input_1.25_50_C4.txt .. .. .. input_1.25_50_C100.txt. An example of the content in each of the files is: "NAME" "MEM.SHIP" "cgd1_10" "cgd1_10"... (9 Replies)
Discussion started by: Lucky Ali
9 Replies
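A minimal sketch for the question above, assuming the second column is the one to drop and that the file names match input_*.txt; the result goes to *.new so the originals stay untouched:
Code:
for f in input_*.txt; do
    # print every comma-separated field except the one numbered in "drop"
    awk -F, -v drop=2 '{
        out = ""
        for (i = 1; i <= NF; i++)
            if (i != drop) out = (out == "" ? $i : out "," $i)
        print out
    }' "$f" > "$f.new"
done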

4. Shell Programming and Scripting

Need help - comma inside double quotes in comma-separated csv

Hello there, I have a comma-separated csv, and all the text fields are wrapped in double quotes. The issue is that some text fields also contain commas inside the double quotes, so it is difficult to process. The input in the csv file is: 1,234,"abc,12,gh","GH234TY",34 I need output like below,... (8 Replies)
Discussion started by: Uttam Maji
8 Replies
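GNU awk (gawk 4.0+) can treat a quoted field, embedded commas and all, as a single field via FPAT. A minimal sketch, assuming a hypothetical input.csv containing the sample line from the question:
Code:
# FPAT defines what a field looks like: either a run of non-commas or a double-quoted string
gawk -v FPAT='([^,]*)|("[^"]*")' '{ print $3 }' input.csv
# for the line 1,234,"abc,12,gh","GH234TY",34 this prints "abc,12,gh" (quotes included)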

5. Shell Programming and Scripting

Convert row to column with respect to the first column

Input file A.txt: C2062 -117.6 -118.5 -117.5 C5145 0 0 0 C5696 0 0 0 Output file B.txt: C2062 X -117.6 C2062 Y -118.5 C2062 Z -117.5... (4 Replies)
Discussion started by: asavaliya
4 Replies
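A minimal sketch for the question above, assuming exactly three value columns per row and that the X/Y/Z labels simply follow column order:
Code:
awk 'BEGIN { split("X Y Z", lab, " ") }
     { for (i = 2; i <= NF; i++) print $1, lab[i-1], $i }' A.txt > B.txt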

6. UNIX for Dummies Questions & Answers

[solved] Comma separated values to space separated

Hi, I have a large number of files which are written as csv (comma-separated values). Does anyone know of a simple sed/awk command to achieve this? Thanks! ---------- Post updated at 10:59 AM ---------- Previous update was at 10:54 AM ---------- Guess I asked this too soon. Found the... (0 Replies)
Discussion started by: lost.identity
0 Replies
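For a plain character swap (no quoted fields containing commas), tr or sed is enough; the file names here are hypothetical:
Code:
tr ',' ' ' < input.csv > output.txt
# or, equivalently
sed 's/,/ /g' input.csv > output.txt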

7. Shell Programming and Scripting

Convert comma separated file to fixed length

Hi, I am converting a comma separated file to fixed field length and I am using this: COLUMNS="25 24 67 26 39 63 20 34 35 14 397" ( cat $indir/input_file.dat | \ $AWK -v columns="$COLUMNS" ' BEGIN { FS=","; OFS=""; split(columns, arr, " "); } { for(i=1; i<=NF;... (5 Replies)
Discussion started by: apenkov
5 Replies
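A minimal sketch of the same idea in plain awk, using the widths quoted in the post; note that %-Ns pads short fields but does not truncate fields longer than their width:
Code:
awk -v columns="25 24 67 26 39 63 20 34 35 14 397" '
BEGIN { FS = ","; n = split(columns, w, " ") }
{
    line = ""
    for (i = 1; i <= NF; i++)
        line = line sprintf("%-" w[i] "s", $i)
    print line
}' input_file.dat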

8. Shell Programming and Scripting

Insert single quote on every word separated by comma

Hello, I have a text file as: ABC BCD CDF DEF EFG. I need to convert it to 'ABC', 'BCD', 'CDF', 'DEF', 'EFG' using a unix command. Can anybody help me out on this? Regards, Jas Please wrap all code, files, input & output/errors in CODE tags. It makes them easier to read and preserves... (12 Replies)
Discussion started by: jassi10781
12 Replies
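This is essentially the same transformation as the answer at the top of the page; a minimal sketch, assuming one word per line in a hypothetical file:
Code:
# wrap each line in single quotes, join with commas, then add a space after each comma
sed "s/.*/'&'/" file | paste -sd, - | sed 's/,/, /g'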

9. Shell Programming and Scripting

awk to parse comma separated fields and remove commas between numbers inside double quotes

Hi Experts, Please support. I have the below comma-separated data in a file, but the 4th column contains commas in between the numbers, because of which, when I tried to parse the file, the 6th column value (5049641141) is removed from the file and the value (222.82) in column 5 becomes the value of column 6. ... (3 Replies)
Discussion started by: as7951
3 Replies
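One gawk-specific way is to strip the embedded commas inside quoted fields so every record becomes plainly comma-separated before further parsing; a minimal sketch, assuming gawk 4.0+ and a hypothetical input.csv:
Code:
gawk 'BEGIN { FPAT = "([^,]*)|(\"[^\"]*\")"; OFS = "," }
{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^"/) gsub(/,/, "", $i)   # remove thousands separators inside quoted fields
    print
}' input.csv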

10. Shell Programming and Scripting

Convert fixed value fields to comma separated values

Hi All, Hope you are doing great! Today I have come up with a problem; to be exact, it is about performance improvement. I have written code in Perl as a solution for this, to cut specific ranges, but it is taking time to run for files with thousands of lines, so I am expecting a sed... (9 Replies)
Discussion started by: mad man
9 Replies
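Going the other way (fixed-width in, CSV out) with awk instead of Perl; a minimal sketch, assuming hypothetical column widths of 10, 8 and 12 characters and a hypothetical fixed.txt:
Code:
awk -v widths="10 8 12" 'BEGIN { n = split(widths, w, " ") }
{
    out = ""; pos = 1
    for (i = 1; i <= n; i++) {
        f = substr($0, pos, w[i])
        gsub(/^ +| +$/, "", f)       # trim the fixed-width padding
        out = (i == 1 ? f : out "," f)
        pos += w[i]
    }
    print out
}' fixed.txt > output.csv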