awk - split data based on the count
Post 302977445 by chill3chee on Monday, 18 July 2016, 10:27:51 AM
Solved

That was absolutely brilliant. Thank you, RudiC! :)
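(RudiC's accepted code is not quoted in this reply. As a purely hypothetical sketch of the technique the thread title describes, splitting input into files of a fixed record count with awk might look like the following; the 1000-record chunk size and the chunk_ file names are assumptions.)

# Hypothetical sketch only -- not the accepted answer from this thread.
# Write every 1000 input records to a new file chunk_1, chunk_2, ...
awk '{
    if (NR % 1000 == 1) { if (out) close(out); out = "chunk_" ++n }
    print > out
}' infile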
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

awk script to split a file based on the condition

I have a file with records like: 4234234 US phone 3244234 US cup 2342342 CA phone 8947234 US phone 2389472 CA cup 2348972 US maps 3894234 CA phone. I want the (US, phone) records in one file, the (US, cup) records in another, and the (CA, cup) records in another; I mean all... (12 Replies) A sketch follows below.
Discussion started by: superprogrammer
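A minimal awk sketch for this request, assuming the country is in $2, the item is in $3, and there are only a handful of distinct pairs (so leaving the output files open is fine); the input.txt name and .txt suffix are assumptions:

# Route each record to a file named after its $2/$3 pair, e.g. US_phone.txt, CA_cup.txt
awk '{ print > ($2 "_" $3 ".txt") }' input.txt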

2. Shell Programming and Scripting

Split a file based on pattern in awk, grep, sed or perl

Hi all, can someone please help me write a script for the following requirement in awk, grep, sed or perl? Buuuu xxx bbb Kmmmm rrr ssss uuuu Kwwww zzzz ccc Roooowwww eeee Bxxxx jjjj dddd Kuuuu eeeee nnnn Rpppp cccc vvvv cccc Rhhhhhhyyyy tttt Lhhhh rrrrrssssss Bffff mmmm iiiii Ktttt... (5 Replies) A sketch follows below.
Discussion started by: kumarn
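The requirement is truncated above, so this is only a guess at the intent: assuming each line beginning with "B" starts a new block (and that the data begins with such a line), a sketch could be:

# Assumption: a line starting with "B" begins a new block; block_ names are illustrative.
awk '/^B/ { if (out) close(out); out = "block_" ++n ".txt" }
     out  { print > out }' infile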

3. Shell Programming and Scripting

using awk to count no of records based on conditions

Hi, I have files in folders named with a date and time stamp, like 200906051400, 200906051500, 200906051600, ... so 24 files are generated every day, and I need to do certain things with these 24 files daily. Each file contains data like 200906050016370 0 1244141195225298lessrv3 ... (13 Replies) A sketch follows below.
Discussion started by: aemunathan
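The post is truncated, so the condition below is purely illustrative: count, per input file, the records whose second field is 0 (the field, the value, and the 200906*/data.log path are all assumptions):

# Hypothetical condition: second field equal to 0; prints one count per hourly file.
awk '$2 == 0 { cnt[FILENAME]++ } END { for (f in cnt) print f, cnt[f] }' 200906*/data.log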

4. Shell Programming and Scripting

split file based on group count

Hi, can someone please help me split the file based on groups? In the scenario below, x indicates the beginning of a group, and the file should be split with 2 groups per output file; below there are 10 groups, so it should create 5 files. Could you please help? (4 Replies) A sketch follows below.
Discussion started by: hitmansilentass
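A sketch of the stated requirement, assuming every group begins with a line starting with "x" and the file starts with one; the part_ names are illustrative:

# Start a new output file on every odd-numbered group, i.e. two groups per file.
awk '/^x/ { if (++g % 2 == 1) { if (out) close(out); out = "part_" ++n ".txt" } }
     out  { print > out }' infile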

5. Shell Programming and Scripting

Split File data using awk

Hi guys, I need to split the file into a number of files. The file contains FILEHEADER and EOF markers, and I have to split it n times, forming one output file from each message between FILEHEADER and EOF, using awk BEGIN and END. How do I implement this? Please suggest. (2 Replies) A sketch follows below.
Discussion started by: manish8484
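A sketch along the lines asked for, writing one file per FILEHEADER ... EOF message; whether the marker lines themselves should be kept is an assumption (they are included here), and the msg_ names are illustrative:

# One output file per FILEHEADER ... EOF block, markers included.
awk '/FILEHEADER/  { if (out) close(out); out = "msg_" ++n ".txt" }
     out           { print > out }
     out && /EOF/  { close(out); out = "" }' infile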

6. Shell Programming and Scripting

KSH: Split String into smaller substrings based on count

KSH on HP-UX/Solaris/Linux; cannot use xAWK. I have several strings that are quite long and I want to break them down into smaller substrings. What I have: String="word1 word2 word3 word4 ..... wordx" What I want: String1="word1 word2" String2="word3 word4" String3="word5 word6" Stringx="wordx... (5 Replies) A sketch follows below.
Discussion started by: nitrobass24
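A plain-ksh sketch (no awk), assuming the words contain no quote characters and splitting on whitespace; with an odd word count the last variable ends up holding a single word:

# Break $String into pairs of words: String1="word1 word2", String2="word3 word4", ...
set -A words $String          # split on whitespace into an array
i=0 n=0
while (( i < ${#words[@]} )); do
    (( n += 1 ))
    eval "String$n=\"${words[i]} ${words[i+1]}\""
    (( i += 2 ))
done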

7. Shell Programming and Scripting

awk script to split file into multiple files based on many columns

I have a space-delimited file that I'd like to split into multiple files based on multiple column values. This is what my data looks like: 1bc9A02 1 10 1000 FTDLNLVQALRQFLWSFRLPGEAQKIDRMMEAFAQRYCQCNNGVFQSTDTCYVLSFAIIMLNTSLHNPNVKDKPTVERFIAMNRGINDGGDLPEELLRNLYESIKNEPFKIPELEHHHHHH 1ku1A02 1 10... (9 Replies) A sketch follows below.
Discussion started by: viored
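The post doesn't say which columns drive the split, so columns 2 and 3 are assumed here for illustration; each file is closed after writing so many distinct combinations don't exhaust file descriptors (remove any old out_* files first, since this appends):

# Append each line to a file keyed on columns 2 and 3, e.g. out_1_10.txt
awk '{ f = "out_" $2 "_" $3 ".txt"; print >> f; close(f) }' input.txt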

8. Shell Programming and Scripting

awk to split and parse unpredictable data

data.txt: CRITICAL: iLash: 97.00%, SqlPlus: 99.00%. Warning/critical thresholds: 95/98% I need to pull only the disk names iLash and SqlPlus. The following command will only pull iLash: echo "CRITICAL: iLash: 97.00%, SqlPlus: 99.00%. Warning/critical thresholds: 95/98%" | awk -F":"... (7 Replies) A sketch follows below.
Discussion started by: SkySmart
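One portable way to pull just the disk names from this line is to print every field that precedes a "NN.NN%" value, stripping its trailing colon; the thresholds field (95/98%) has no decimal point, so it is not matched:

echo "CRITICAL: iLash: 97.00%, SqlPlus: 99.00%. Warning/critical thresholds: 95/98%" |
awk '{ for (i = 1; i < NF; i++)
           if ($(i+1) ~ /^[0-9]+\.[0-9]+%/) { name = $i; sub(/:$/, "", name); print name } }'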

9. Shell Programming and Scripting

awk to count and rename based on fields

In the awk below, using the tab-delimited input, I am trying to count the - symbol in $5 and output the count as well as the renamed condition ins. I also count the - symbol in $6 and output the count as well as the renamed condition del. I also count the times that in $5 and $6 there are... (6 Replies) A sketch follows below.
Discussion started by: cmccabe
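A sketch under assumptions the truncated post doesn't fully spell out: tab-delimited input, a field that is exactly "-" in $5 counted as ins and in $6 counted as del, with the totals printed at the end:

# Count lines whose 5th field is "-" (ins) and whose 6th field is "-" (del).
awk -F'\t' '$5 == "-" { ins++ } $6 == "-" { del++ }
            END { print "ins", ins + 0; print "del", del + 0 }' file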

10. Shell Programming and Scripting

Split files based on row delimiter count

I have a huge file (around 4-5 GB, containing 20 million rows) which has text like: <EOFD>11<EOFD>22<EORD>2<EOFD>2222<EOFD>3333<EORD>3<EOFD>44<EOFD>55<EORD>66<EOFD>888<EOFD>9999<EORD> The above is actually a file extracted from SQL Server, with each field delimited by <EOFD> and each row ends... (8 Replies) A sketch follows below.
Discussion started by: amvip
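With gawk (a multi-character record separator is a gawk extension), <EORD> can be treated as the record separator; the 1,000,000 rows-per-file figure and the part_ names are assumptions:

# Split into files of 1,000,000 rows each, preserving the <EORD> row terminator.
gawk -v RS='<EORD>' '
    $0 ~ /[^[:space:]]/ {                 # skip the empty record after the final <EORD>
        if (++r % 1000000 == 1) { if (out) close(out); out = sprintf("part_%04d", ++n) }
        printf "%s<EORD>", $0 > out
    }' bigfile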
Data::Dumper::Concise(3pm)          User Contributed Perl Documentation          Data::Dumper::Concise(3pm)

NAME
    Data::Dumper::Concise - Less indentation and newlines plus sub deparsing

SYNOPSIS
        use Data::Dumper::Concise;

        warn Dumper($var);

    is equivalent to:

        use Data::Dumper;
        {
            local $Data::Dumper::Terse     = 1;
            local $Data::Dumper::Indent    = 1;
            local $Data::Dumper::Useqq     = 1;
            local $Data::Dumper::Deparse   = 1;
            local $Data::Dumper::Quotekeys = 0;
            local $Data::Dumper::Sortkeys  = 1;
            warn Dumper($var);
        }

    So for the structure:

        { foo => "bar baz", quux => sub { "fleem" } };

    Data::Dumper::Concise will give you:

        {
            foo => "bar baz",
            quux => sub {
                use warnings;
                use strict 'refs';
                'fleem';
            }
        }

    instead of the default Data::Dumper output:

        $VAR1 = {
            'quux' => sub { "DUMMY" },
            'foo' => 'bar baz'
        };

    (note the tab indentation, oh joy ...)

    If you need to get the underlying Dumper object just call "DumperObject".

    Also try out "DumperF" which takes a "CodeRef" as the first argument to format the output. For example:

        use Data::Dumper::Concise;

        warn DumperF { "result: $_[0] result2: $_[1]" } $foo, $bar;

    Which is the same as:

        warn 'result: ' . Dumper($foo) . ' result2: ' . Dumper($bar);

DESCRIPTION
    This module always exports a single function, Dumper, which can be called with an array of values to dump those values. It exists, fundamentally, as a convenient way to reproduce a set of Dumper options that we've found ourselves using across large numbers of applications, primarily for debugging output.

    The principle guiding theme is "all the concision you can get while still having a useful dump and not doing anything cleverer than setting Data::Dumper options" - it's been pointed out to us that Data::Dump::Streamer can produce shorter output with less lines of code. We know. This is simpler and we've never seen it segfault. But for complex/weird structures, it generally rocks. You should use it as well, when Concise is underkill. We do.

    Why is deparsing on when the aim is concision? Because you often want to know what subroutine refs you have when debugging and because if you were planning to eval this back in you probably wanted to remove subrefs first and add them back in a custom way anyway. Note that this -does- force using the pure perl Dumper rather than the XS one, but I've never in my life seen Data::Dumper show up in a profile so "who cares?".

BUT BUT BUT ...
    Yes, we know. Consider this module in the ::Tiny spirit and feel free to write a Data::Dumper::Concise::ButWithExtraTwiddlyBits if it makes you happy. Then tell us so we can add it to the see also section.

SUGARY SYNTAX
    This package also provides:

    Data::Dumper::Concise::Sugar - provides Dwarn and DwarnS convenience functions

    Devel::Dwarn - shorter form for Data::Dumper::Concise::Sugar

SEE ALSO
    We use for some purposes, and dearly love, the following alternatives:

    Data::Dump - prettiness oriented but not amazingly configurable

    Data::Dump::Streamer - brilliant. beautiful. insane. extensive. excessive. try it.

    JSON::XS - no, really. If it's just plain data, JSON is a great option.

AUTHOR
    mst - Matt S. Trout <mst@shadowcat.co.uk>

CONTRIBUTORS
    frew - Arthur Axel "fREW" Schmidt <frioux@gmail.com>

COPYRIGHT
    Copyright (c) 2010 the Data::Dumper::Concise "AUTHOR" and "CONTRIBUTORS" as listed above.

LICENSE
    This library is free software and may be distributed under the same terms as perl itself.

perl v5.10.1                              2011-01-20                              Data::Dumper::Concise(3pm)