Line count and change context of huge data


 
# 8  
10-14-2009
Thanks a lot for your code, Franklin52
Code:
awk '/^\+A/{c++} END{print c}' data_file

This code works faster than the grep command.
Hopefully there are still a few more ways to improve the line-count speed.
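If even more speed is needed, one option worth trying (my own suggestion, not from this thread, and untested on the original data file) is to force the C locale so the match is done on plain bytes; on large ASCII files this often makes both grep and awk noticeably faster:

Code:
# count lines starting with "+A"; LC_ALL=C avoids slow multi-byte locale handling
LC_ALL=C grep -c '^+A' data_file

# awk alternative: compare the first two characters instead of using a regex
LC_ALL=C awk 'substr($0, 1, 2) == "+A" { c++ } END { print c + 0 }' data_file

The grep -c form avoids running the whole file through awk at all, so on most systems it should be at least as fast as the awk one-liner above.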