Identifying new fields of data


 
# 1  01-10-2005

I have hundreds of lines of formatted data with 10 different fields per line. The data is refreshed every few minutes, and some fields in some lines may reflect new data. I'm looking for a sample of code that will help me identify those new fields so that I can write them to a file to indicate that new data has been noted.
Thanks in advance...
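A minimal sketch of one approach, assuming the fields are whitespace-separated, field 1 uniquely identifies each line, and the two snapshots are kept as previous.txt and current.txt (all of these names are placeholders, not details from the post). It remembers every field of the older snapshot, then compares the newer snapshot field by field and appends each change to a log file. On Solaris, use nawk rather than the old /usr/bin/awk:

#!/bin/sh
# Compare the current snapshot of the data against the previous one, field by
# field, and append any changed field to a log file.  The names previous.txt,
# current.txt and changed_fields.log are placeholders.

nawk '
NR == FNR {                                # first file: remember every old field
    for (i = 1; i <= NF; i++) old[$1, i] = $i
    next
}
{                                          # second file: compare field by field
    for (i = 1; i <= NF; i++)
        if ((($1, i) in old) && old[$1, i] != $i)
            printf "key %s, field %d: %s -> %s\n", $1, i, old[$1, i], $i
}' previous.txt current.txt >> changed_fields.log

# keep the current snapshot around for the next comparison
cp current.txt previous.txt

The comparison is keyed on field 1; if no single field is both stable and unique, build the key from whichever fields never change between refreshes.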


9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Join command: how to keep all fields in one data

Dear all, I'd like to ask a question. I have two datasets: a.txt (which has only one field, called 'SNP') and b.txt (which has thousands of fields, the 1st field called 'SNP'). a.txt: rs9527 rs318567 rs12376 ... b.txt: rs167893 1 2 0 2 1 2 ... rs318567 2 0 2 1 2 0 ... rs12376 0 2 0 2 1 2 ... I... (see the first sketch after this list) (2 Replies)
Discussion started by: forevertl
2 Replies

2. Shell Programming and Scripting

Awk - Script assistance on identifying non matching fields

Hoping for some assistance. My source file consists of: os, ip, username win7, 123.56.78, john win7, 123.56.78, paul win7, 10.1.1.1, john win7, 10.2.2.3, joe I've been trying to run a script that will only return the ip and username where the IP address is the same and the username is... (see the duplicate-IP sketch after this list) (3 Replies)
Discussion started by: tekvaio
3 Replies

3. Shell Programming and Scripting

Identifying entries based on 2 fields in a string.

Hi guys, I'm struggling to use two fields to produce a duplicate/unique output. I want to look at IP addresses assigned to more than one account during a given period in the logs, so where a duplicate IP has more than one account, print all the logs for that IP. I have been using AWK (just as it's installed... (the duplicate-IP sketch after this list applies here too) (3 Replies)
Discussion started by: wabbit02
3 Replies

4. Shell Programming and Scripting

Identifying specific fields in a Row

Hi, I am new to UNIX. Can someone help me solve the below? I have a requirement to identify specific fields in a row and also some part of a field. In my file I have a record such as sundra;10.44.48.65;10thstreet TCP packet out of state: First packet isn't SYN;telno:... (3 Replies)
Discussion started by: suneel.mekala
3 Replies

5. Shell Programming and Scripting

Match data based on two fields, and append to a line

I need to write a program to do something like a 'vlookup' in Excel. I want to match data from file2 based on two fields (where both match) in file1, and for matching lines, add the data from two of the fields from file2 to file1. If anyone knows something in perl or awk that can do this, I'd be... (see the two-key lookup sketch after this list) (20 Replies)
Discussion started by: jamessmith01
20 Replies

6. Shell Programming and Scripting

parsing data file picking out certain fields

I have a large file that is broken up into groups of data. I want to take certain fields and display them differently to make it easier to read. Given the input file below: 2008 fl01 LAC 2589 polk doal xx 2008q1 mx sect 25698541 Sales 08 Dept group lead1 ... (see the field-selection sketch after this list) (8 Replies)
Discussion started by: timj123
8 Replies

7. UNIX for Dummies Questions & Answers

Listing out three fields of data

How would I find three different fields in a data file, such as first name, last name and credit card number, in a particular file? Thanks in advance for your help (the field-selection sketch after this list shows one way) (3 Replies)
Discussion started by: damion
3 Replies

8. UNIX for Dummies Questions & Answers

Remove Data from Fields

I would like some suggestions on how to solve the following problem with removing selected data from fields. Each day I receive a file containing 22,000 records, and I use a combination of awk and the cut command to remove unwanted fields. This is a work in progress as I learn more about awk, sed... (the field-selection sketch after this list applies here as well) (4 Replies)
Discussion started by: greengrass
4 Replies

9. Shell Programming and Scripting

How to change raw data to column data fields

Dear all, I have a data file, see below. --------------ALARM CLEARING FROM SubNetwork=ONRM_RootMo,SubNetwork=AXE,ManagedElement=CGSN-------------- Alarm Record ID: 25196304 Event Time: 2006-08-28 13:41:35 Event Type: ... (1 Reply)
Discussion started by: Nayanajith
1 Replies
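For discussion 1 above, a minimal sketch of a common awk approach, assuming whitespace-separated files named a.txt (one SNP id per line) and b.txt (SNP id in field 1); it keeps every field of the b.txt lines whose key appears in a.txt:

# Keep only the b.txt rows whose first field is listed in a.txt.
awk 'NR == FNR { keep[$1] = 1; next } $1 in keep' a.txt b.txt > matched.txt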
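For discussions 2 and 3 above, the same grouping idea covers both: count the distinct users (or accounts) seen per IP and report any IP that has more than one. The file name access.log and the comma-separated "os, ip, user" layout are assumptions, not details from those threads:

awk -F', *' '
{ if (!seen[$2, $3]++) users[$2]++ }                  # count each distinct (ip, user) pair once
END { for (ip in users) if (users[ip] > 1) print ip } # IPs seen with more than one user
' access.log

Once you have the list of offending IPs, a second pass (for example grep -f, or another awk run) can print every log line for those IPs, which is what discussion 3 asks for.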
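For discussion 5 above (the Excel-style vlookup), a minimal two-key version, assuming whitespace-separated data where fields 1 and 2 are the shared keys and field 3 of file2 is the value to append; the file names and field numbers are placeholders:

awk '
NR == FNR { val[$1, $2] = $3; next }     # index file2 by its two key fields
{
    if (($1, $2) in val)                 # both key fields match a file2 line?
        $0 = $0 " " val[$1, $2]          # append the looked-up value
    print
}
' file2 file1 > merged.txt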
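Discussions 6, 7 and 8 above all come down to selecting columns (to "remove" unwanted fields, list only the ones you want to keep). Two equivalent sketches, with the delimiter, the column numbers and the file name datafile as placeholders:

cut -d',' -f1,3,7 datafile                 # keep comma-separated fields 1, 3 and 7
awk -F',' '{ print $1, $3, $7 }' datafile  # the same selection with awk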
uniq(1) 						      General Commands Manual							   uniq(1)

NAME
     uniq - Removes or lists repeated lines in a file

SYNOPSIS
     Current Syntax

          uniq [-cdu] [-f fields] [-s chars] [input-file [output-file]]

     Obsolescent Syntax

          uniq [-cdu] [-fields] [+chars] [input-file [output-file]]

     The uniq command reads from the specified input-file, compares adjacent lines, removes the second and
     succeeding occurrences of a line, and writes to standard output.

STANDARDS
     Interfaces documented on this reference page conform to industry standards as follows: uniq: XCU5.0

     Refer to the standards(5) reference page for more information about industry standards and associated tags.

OPTIONS
     -c          Precedes each output line with a count of the number of times the line appears in the file.
                 This option supersedes the -d and -u options.

     -d          Displays repeated lines only.

     -f fields   Ignores the first fields fields on each input line when doing comparisons, where fields is a
                 positive decimal integer. A field is the maximal string matched by the basic regular
                 expression [[:blank:]]*[^[:blank:]]*. If the fields argument specifies more fields than appear
                 on an input line, a null string is used for comparison.

     -s chars    Ignores the specified number of characters when doing comparisons. The chars argument is a
                 positive decimal integer. If specified with the -f option, the first chars characters after
                 the first fields fields are ignored. If the chars argument specifies more characters than
                 remain on an input line, uniq uses a null string for comparison.

     -u          Displays unique lines only.

     -fields     Equivalent to -f fields. (Obsolescent)

     +chars      Equivalent to -s chars. (Obsolescent)

OPERANDS
     input-file    A pathname for the input file. If this operand is omitted or specified as -, standard input
                   is read.

     output-file   A pathname for the output file. If this operand is omitted, standard output is written.

DESCRIPTION
     The input-file and output-file arguments must be different files. If the input-file operand is not
     specified, or if it is -, uniq uses standard input. Repeated lines must be on consecutive lines to be
     found; you can arrange them with the sort command before processing (a short illustration follows this
     page).

EXAMPLES
     To delete repeated lines in the following file called fruit and save the result to a file named newfruit,
     enter:

          uniq fruit newfruit

     The file fruit contains the following lines:

          apples
          apples
          bananas
          cherries
          cherries
          peaches
          pears

     The file newfruit contains the following lines:

          apples
          bananas
          cherries
          peaches
          pears

EXIT STATUS
     The following exit values are returned:

     0    Successful completion.

     >0   An error occurred.

ENVIRONMENT VARIABLES
     The following environment variables affect the execution of uniq:

     LANG          Provides a default value for the internationalization variables that are unset or null. If
                   LANG is unset or null, the corresponding value from the default locale is used. If any of
                   the internationalization variables contain an invalid setting, the utility behaves as if
                   none of the variables had been defined.

     LC_ALL        If set to a non-empty string value, overrides the values of all the other
                   internationalization variables.

     LC_CTYPE      Determines the locale for the interpretation of sequences of bytes of text data as
                   characters (for example, single-byte as opposed to multibyte characters in arguments).

     LC_MESSAGES   Determines the locale for the format and contents of diagnostic messages written to
                   standard error.

     NLSPATH       Determines the location of message catalogues for the processing of LC_MESSAGES.

SEE ALSO
     Commands: comm(1), sort(1)

     Standards: standards(5)

                                                                                                        uniq(1)
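As the DESCRIPTION above notes, uniq only detects repeated lines when they are adjacent, so unsorted data is normally piped through sort first. A quick illustration (the file name fruit is just a placeholder):

sort fruit | uniq -c    # count how many times each distinct line occurs
sort -u fruit           # sort and drop duplicate lines in one step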