I am trying to split the following output into two columns, so that each row pairs a Source: entry with its Destination: entry.
OUTPUT TO FILTER
I am attempting to use the condition ORS=NR%2?FS:RS (if NR%2 is true, ORS becomes FS, the default space; otherwise it becomes RS, the default \n), but it does not work.
DESIRED OUTPUT
Can I please get some guidance on what is incorrect?
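As a hedged sketch (assuming the output strictly alternates one Source: line then one Destination: line), an explicit printf with a conditional separator sidesteps any surprises from assigning ORS mid-stream:

```shell
# Pair alternating lines into two columns: odd-numbered lines are followed
# by a space, even-numbered lines by a newline. Sample input is illustrative.
printf 'Source: A\nDestination: B\nSource: C\nDestination: D\n' |
awk '{ printf "%s%s", $0, (NR % 2 ? OFS : ORS) }'
```

Common reasons the ORS one-liner appears not to work are CRLF line endings in the input or an odd trailing line; checking for those may be worthwhile.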
I have a column "address" which is a combination of city, region and postal code.
The format is: city<comma><space>region<space>postal code
abc, xyz 123456
None of the three (city, region, postal code) is mandatory; any one of them may appear on its own. In that case a null... (2 Replies)
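A minimal sketch for the fully-populated case (city<comma><space>region<space>postal code); handling missing pieces would need extra guards, and the field labels here are my own:

```shell
# Split "city, region postal" into three parts; assumes all three are present.
echo 'abc, xyz 123456' |
awk -F', ' '{
  split($2, rest, " ")              # rest[1] = region, rest[2] = postal code
  print "city=" $1, "region=" rest[1], "postal=" rest[2]
}'
```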
Hey guys,
I'm looking to do the following:
1
2
3
4
5
6
7
8
9
Change to:
1 4 7
2 5 8
3 6 9
I used | perl -lpe'$\=$.%3?$":"\n"', but it doesn't give me the matrix I want. (3 Replies)
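The perl one-liner fills the matrix row by row, but the desired output is filled column by column, so the whole input has to be buffered first and then indexed by rows. A sketch, assuming the item count divides evenly into the column count:

```shell
printf '%s\n' 1 2 3 4 5 6 7 8 9 |
awk -v cols=3 '
  { a[NR] = $0 }                    # buffer every input line
  END {
    rows = NR / cols                # assumes NR divides evenly by cols
    for (i = 1; i <= rows; i++) {
      line = a[i]
      for (j = 1; j < cols; j++) line = line OFS a[i + j * rows]
      print line
    }
  }'
```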
Hey everyone,
I have an issue with a client that is passing me a list of values in one column, and occasionally the combination of all the values results in a string of more than 255 characters. My DB has a 255-character limit, so I am looking to take the column (comma-delimited file), and if it... (1 Reply)
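One hedged approach (the limit is a variable so it can be tested with smaller values): greedily pack comma-separated values onto a line until adding the next one would exceed the limit, then start a new line:

```shell
# Repack a comma-delimited line into chunks of at most `limit` characters.
# A single value longer than the limit is still emitted on its own line.
echo 'alpha,beta,gamma,delta' |
awk -F',' -v limit=255 '{
  out = ""
  for (i = 1; i <= NF; i++) {
    candidate = (out == "") ? $i : out "," $i
    if (length(candidate) > limit && out != "") { print out; out = $i }
    else out = candidate
  }
  if (out != "") print out
}'
```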
Hi,
I need help splitting long text in a column, separated by ';', and printing it out in multiple columns. My input file is tab-delimited and has 11 columns as below:
aRg02004 21452 asdfwf 21452 21452 4.6e-29 5e-29 -1 3 50 ffg|GGD|9009 14101.10 High class -node. ; ffg|GGD|969... (3 Replies)
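A sketch that keeps the leading tab-separated columns intact and emits one output row per ';'-separated piece of the last column (whether extra rows or extra columns are wanted isn't fully clear from the truncated post; the three-column demo input stands in for the real 11-column file):

```shell
# One output row per ";"-separated piece of the final column.
printf 'c1\tc2\tpartA ; partB ; partC\n' |
awk -F'\t' -v OFS='\t' '{
  n = split($NF, parts, / *; */)    # split the last column on ";"
  for (i = 1; i <= n; i++) { $NF = parts[i]; print }
}'
```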
Hi,
I have a similar input format-
A_1 2
B_0 4
A_1 1
B_2 5
A_4 1
and I am looking to print it in this output format with headers. Can you suggest an awk solution? awk, because I am already doing some pattern matching from a parent file to print column 1 of my input using awk. Thanks!
letter number_of_letters... (5 Replies)
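The desired header row is truncated in the post, so the following is a guess at the grouping: count the occurrences of each letter prefix and sum its numbers, using associative arrays:

```shell
# Group by the letter before the underscore; count rows and sum column 2.
# Output order of "for (l in count)" is unspecified in awk.
printf 'A_1 2\nB_0 4\nA_1 1\nB_2 5\nA_4 1\n' |
awk '{
  letter = substr($1, 1, 1)
  count[letter]++
  sum[letter] += $2
}
END {
  print "letter", "count", "sum"
  for (l in count) print l, count[l], sum[l]
}'
```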
Hi,
I have a text file 'Item_List.txt' containing only 1 column. This column lists different products, each separated by the same generic string header "NEW PRODUCT, VERSION 1.1". After this the name of the product is given, then a delimiter string "PRODUCT FIELD", and then the name of the... (11 Replies)
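A hedged sketch that treats the generic header as a section delimiter: start a new numbered output file each time it appears (the product_N.txt names are my own invention, and the heredoc stands in for the real Item_List.txt):

```shell
# Demo input; the real Item_List.txt replaces this heredoc.
cat > Item_List.txt <<'EOF'
NEW PRODUCT, VERSION 1.1
Widget
PRODUCT FIELD
Widget description
NEW PRODUCT, VERSION 1.1
Gadget
EOF

# Start a new numbered file at each generic header; the header itself
# is dropped from the output files.
awk '
  /^NEW PRODUCT, VERSION 1\.1$/ { n++; next }
  n { print > ("product_" n ".txt") }
' Item_List.txt
```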
Hi Experts,
Please bear with me, I need help.
I am learning awk and am stuck on one issue.
First point: I want to sum up the values in columns 7, 9, 11, 13 and 15 when rows have duplicate values in column 5. No action is to be taken for rows where the value in column 5 is unique.
Second point : For... (1 Reply)
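For the first point, a two-pass sketch (the file is read twice): pass 1 counts the rows per column-5 key; pass 2 prints unique rows untouched and merges duplicates by summing columns 7, 9, 11, 13 and 15 into the last occurrence. Merging into the last row, and the whitespace-separated demo data, are my assumptions:

```shell
cat > data.txt <<'EOF'
x x x x K1 x 1 x 2 x 3 x 4 x 5
x x x x K2 x 9 x 9 x 9 x 9 x 9
x x x x K1 x 10 x 20 x 30 x 40 x 50
EOF

awk '
  NR == FNR { cnt[$5]++; next }       # pass 1: count rows per col-5 key
  cnt[$5] == 1 { print; next }        # unique key: no action, pass through
  {                                   # duplicate key: accumulate the sums
    for (i = 7; i <= 15; i += 2) sum[$5, i] += $i
    if (++seen[$5] == cnt[$5]) {      # last occurrence: emit the merged row
      for (i = 7; i <= 15; i += 2) $i = sum[$5, i]
      print
    }
  }
' data.txt data.txt
```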
I want to split this every 5 (or 50, depending on how much data the file has) and remove the comma at the end.
The source file will have:
001,0002,0003,004,005,0006,0007,007A,007B,007C,007E,007F,008A,008C
I need the output tab-separated in groups of 5, with the comma removed from the end of each row.
... (4 Replies)
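A sketch with the group size as a variable (per=5 here; switch to 50 the same way). Trailing commas never appear in the output because separators are only printed after a field, and a trailing comma on the input line is stripped up front:

```shell
echo '001,0002,0003,004,005,0006,0007,007A,007B,007C,007E,007F,008A,008C' |
awk -F',' -v per=5 '{
  sub(/,$/, "")                     # drop a trailing comma, if any
  for (i = 1; i <= NF; i++)
    printf "%s%s", $i, ((i % per == 0 || i == NF) ? "\n" : "\t")
}'
```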
Discussion started by: ranjancom2000
LEARN ABOUT DEBIAN
flow-import
flow-import(1) General Commands Manual flow-import(1)
NAME
flow-import -- Import flows into flow-tools from other NetFlow packages.
SYNOPSIS
flow-import [-h] [-b big|little] [-d debug_level] [-f format] [-m mask_fields] [-V pdu_version] [-z z_level]
DESCRIPTION
The flow-import utility will convert data from cflowd and ASCII CSV files into flow-tools format.
OPTIONS
-b big|little
Byte order of output.
-d debug_level
Enable debugging.
-f format
Export format. Supported formats are:
0 cflowd
2 ASCII CSV
3 Cisco NFCollector
-h Display help.
-m mask_fields
Select fields for cflowd and ASCII formats. The mask_fields is built from a bitwise OR of the following:
UNIX_SECS 0x0000000000000001LL
UNIX_NSECS 0x0000000000000002LL
SYSUPTIME 0x0000000000000004LL
EXADDR 0x0000000000000008LL
DFLOWS 0x0000000000000010LL
DPKTS 0x0000000000000020LL
DOCTETS 0x0000000000000040LL
FIRST 0x0000000000000080LL
LAST 0x0000000000000100LL
ENGINE_TYPE 0x0000000000000200LL
ENGINE_ID 0x0000000000000400LL
SRCADDR 0x0000000000001000LL
DSTADDR 0x0000000000002000LL
SRC_PREFIX 0x0000000000004000LL
DST_PREFIX 0x0000000000008000LL
NEXTHOP 0x0000000000010000LL
INPUT 0x0000000000020000LL
OUTPUT 0x0000000000040000LL
SRCPORT 0x0000000000080000LL
DSTPORT 0x0000000000100000LL
PROT 0x0000000000200000LL
TOS 0x0000000000400000LL
TCP_FLAGS 0x0000000000800000LL
SRC_MASK 0x0000000001000000LL
DST_MASK 0x0000000002000000LL
SRC_AS 0x0000000004000000LL
DST_AS 0x0000000008000000LL
IN_ENCAPS 0x0000000010000000LL
OUT_ENCAPS 0x0000000020000000LL
PEER_NEXTHOP 0x0000000040000000LL
ROUTER_SC 0x0000000080000000LL
EXTRA_PKTS 0x0000000100000000LL
MARKED_TOS 0x0000000200000000LL
The default value is all fields applicable to the pdu_version.
-V pdu_version
Use pdu_version format output.
1 NetFlow version 1 (No sequence numbers, AS, or mask)
5 NetFlow version 5
6 NetFlow version 6 (5+ Encapsulation size)
7 NetFlow version 7 (Catalyst switches)
8.1 NetFlow AS Aggregation
8.2 NetFlow Proto Port Aggregation
8.3 NetFlow Source Prefix Aggregation
8.4 NetFlow Destination Prefix Aggregation
8.5 NetFlow Prefix Aggregation
8.6 NetFlow Destination (Catalyst switches)
8.7 NetFlow Source Destination (Catalyst switches)
8.8 NetFlow Full Flow (Catalyst switches)
8.9 NetFlow ToS AS Aggregation
8.10 NetFlow ToS Proto Port Aggregation
8.11 NetFlow ToS Source Prefix Aggregation
8.12 NetFlow ToS Destination Prefix Aggregation
8.13 NetFlow ToS Prefix Aggregation
8.14 NetFlow ToS Prefix Port Aggregation
1005 Flow-Tools tagged version 5
-z z_level
Configure compression level to z_level. 0 is disabled (no compression), 9 is highest compression.
EXAMPLES
Convert the cflowd file flows.cflowd to the flow-tools file flows. Store as Version 5 with compression level 5.
flow-import -V5 -z5 -f0 < flows.cflowd > flows
Convert the ASCII CSV data in flows.ascii to flow-tools format. The ASCII data must include all fields represented by 0xFF31EF in the order listed above. Store as Version 5 with no compression.
flow-import -z0 -f2 -m0xFF31EF < flows.ascii > flows
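As a worked check of the mask arithmetic, 0xFF31EF is the bitwise OR of the eighteen field constants from UNIX_SECS through TCP_FLAGS (excluding DFLOWS, ENGINE_TYPE/ID, the prefix, mask, AS, and encapsulation fields):

```shell
# OR together the field constants listed above; the result is the -m value.
mask=0
for bit in 0x01 0x02 0x04 0x08 0x20 0x40 0x80 0x100 \
           0x1000 0x2000 0x10000 0x20000 0x40000 0x80000 \
           0x100000 0x200000 0x400000 0x800000; do
  mask=$(( mask | bit ))
done
printf '0x%X\n' "$mask"             # prints 0xFF31EF
```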
BUGS
The pcap format is a hack.
AUTHOR
Mark Fullmer maf@splintered.net
SEE ALSO
flow-tools(1)