Shell Programming and Scripting: How to separate rows of data into another column?
Post 302957285 by Corona688, 10-08-2015
Code:
# -F, means "split fields on comma", so awk knows that 22222 is field 2, or $2.
# $ means "field" in awk and has absolutely nothing to do with shell variables.
#
# The single quote begins a single block of text which the shell feeds into awk unchanged:
awk -F, '
# A[$2] = A[$2] ";" $0 appends the entire current line to the entry for
# its key (here A[22222]), with ";" as the separator.
# Note that whenever A[$2] is still empty, this leaves a stray ; in front.
# It gets removed later with substr().
{ A[$2] = A[$2] ";" $0 }

END {   # Run this when the input runs out of lines
        for (X in A)                    # For each index X in array A
                print substr(A[X], 2)   # Print characters 2-n of A[X]
}' input-filename > output-filename
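As a quick illustration of what the script does (this sample data is made up for the example, not taken from the original thread), suppose input-filename holds comma-separated rows keyed on the second field:

Code:
11111,22222,aaa
33333,44444,bbb
55555,22222,ccc

Every row sharing a value in $2 gets appended to the same array entry, so the output joins those rows with semicolons onto a single line:

Code:
11111,22222,aaa;55555,22222,ccc
33333,44444,bbb

Note that for (X in A) visits array indexes in an unspecified order, so the relative order of the output lines can differ between awk implementations.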
