Line count and change context of huge data


 
# 8  
10-14-2009
Thanks a lot for your code, Franklin52
Code:
awk '/^\+A/{c++}END{print c}' data_file

This code works faster than grep.
Hopefully there are still a few more ways to improve the line-count speed.
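For what it's worth, here are a couple of alternatives worth timing against the same data_file (these are my suggestions, not from the thread; actual speed depends on the grep/awk implementations and the locale):
Code:
# count lines beginning with "+A"; forcing the C locale avoids
# multibyte-locale overhead and often speeds grep up on large files
LC_ALL=C grep -c '^+A' data_file

# the thread's awk approach under the C locale; c+0 prints 0 when
# there are no matches instead of an empty line
LC_ALL=C awk '/^\+A/{c++} END{print c+0}' data_file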
LIBLINEAR-TRAIN(1)					      General Commands Manual						LIBLINEAR-TRAIN(1)

NAME
       liblinear-train - train a linear classifier and produce a model

SYNOPSIS
       liblinear-train [options] training_set_file [model_file]

DESCRIPTION
       liblinear-train trains a linear classifier using liblinear and produces a model suitable for use with
       liblinear-predict(1). training_set_file is the file containing the data used for training. model_file is
       the file to which the model will be saved. If model_file is not provided, it defaults to
       training_set_file.model. To obtain good performance, sometimes one needs to scale the data. This can be
       done with svm-scale(1).

OPTIONS
       A summary of options is included below.

       -s type
              Set the type of the solver:
              0      L2-regularized logistic regression
              1      L2-regularized L2-loss support vector classification (dual) (default)
              2      L2-regularized L2-loss support vector classification (primal)
              3      L2-regularized L1-loss support vector classification (dual)
              4      multi-class support vector classification
              5      L1-regularized L2-loss support vector classification
              6      L1-regularized logistic regression
              7      L2-regularized logistic regression (dual)

       -c cost
              Set the parameter C (default: 1).

       -e epsilon
              Set the tolerance of the termination criterion.
              For -s 0 and 2: |f'(w)|_2 <= epsilon*min(pos,neg)/l*|f'(w0)|_2, where f is the primal function and
              pos/neg are the numbers of positive/negative data (default: 0.01).
              For -s 1, 3, 4 and 7: dual maximal violation <= epsilon; similar to libsvm (default: 0.1).
              For -s 5 and 6: |f'(w)|_inf <= epsilon*min(pos,neg)/l*|f'(w0)|_inf, where f is the primal function
              (default: 0.01).

       -B bias
              If bias >= 0, instance x becomes [x; bias]; if bias < 0, no bias term is added (default: -1).

       -wi weight
              Adjust the parameter C of class i by the value weight.

       -v n   n-fold cross validation mode.

       -q     Quiet mode (no outputs).
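       As an illustration (this example is not part of the original manual page), training_set_file is expected
       in the sparse LIBSVM text format: one instance per line, a label followed by index:value pairs with
       indices in ascending order. A minimal sketch, including the optional svm-scale(1) step mentioned above:

              +1 1:0.583 3:1 7:0.25
              -1 2:0.917 3:1

              # svm-scale writes the scaled data to standard output
              svm-scale training_set_file > training_set_file.scaled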
EXAMPLES
       Train a linear SVM using the L2-loss function:

              liblinear-train data_file

       Train a logistic regression model:

              liblinear-train -s 0 data_file

       Do five-fold cross-validation using an L2-loss SVM, with a smaller stopping tolerance of 0.001 instead
       of the default 0.1 for more accurate solutions:

              liblinear-train -v 5 -e 0.001 data_file

       Train four classifiers:

              positive     negative        Cp      Cn
              class 1      class 2,3,4     20      10
              class 2      class 1,3,4     50      10
              class 3      class 1,2,4     20      10
              class 4      class 1,2,3     10      10

              liblinear-train -c 10 -w1 2 -w2 5 -w3 2 four_class_data_file

       If there are only two classes, we train ONE model. The C values for the two classes are 10 and 50:

              liblinear-train -c 10 -w3 1 -w2 5 two_class_data_file

       Output probability estimates (for logistic regression only) using liblinear-predict(1):

              liblinear-predict -b 1 test_file data_file.model output_file

SEE ALSO
       liblinear-predict(1), svm-predict(1), svm-train(1)

AUTHORS
       liblinear-train was written by the LIBLINEAR authors at National Taiwan University for the LIBLINEAR
       Project. This manual page was written by Christian Kastner <debian@kvr.at>, for the Debian project (and
       may be used by others).

                                                   March 08, 2011                           LIBLINEAR-TRAIN(1)