03-26-2014
Thanks for all the suggestions. My workaround so far has been to skip the duplicate-line removal and handle it in the queries I run against the database instead. This has allowed me to proceed with my analysis.
I will go back and fix the database once the current deadline passes, however.
I will have to check closely which fields alone can indicate a duplicate record. Since the addition of a column (and therefore the length of $0) is what broke it, taking something out may help. I'm not sure, though, as I am already taking only the important columns from two types of database files, and the "housekeeping columns" are not included.
Is there a way to do a checksum or fairly robust hash in awk? That might be the best way to shorten the array keys, which appears to be what is killing awk.
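awk has no built-in checksum, but one workable sketch is to hash only the identifying fields with a small function such as DJB2 and use that 32-bit number as the array key instead of the full record. The comma separator and field numbers ($1..$3) below are placeholders for whatever columns actually define a duplicate in the data:

```shell
# Sketch: deduplicate on a short DJB2 hash of the key fields, so awk's
# associative-array subscripts stay small instead of holding whole lines.
awk -F',' '
BEGIN {
    # Build a character -> code lookup, since awk has no ord() built-in.
    for (i = 1; i < 256; i++) ord[sprintf("%c", i)] = i
}
function djb2(s,    i, h) {
    h = 5381
    for (i = 1; i <= length(s); i++)
        h = (h * 33 + ord[substr(s, i, 1)]) % 4294967296  # keep h in 32 bits
    return h
}
{
    key = djb2($1 FS $2 FS $3)          # hash only the identifying fields
    if (!(key in seen)) { seen[key] = 1; print }
}' file.csv
```

DJB2 is not cryptographic, so a hash collision would silently drop a non-duplicate line; for a one-off cleanup that risk is usually acceptable, but it is worth knowing. An alternative would be piping each key through the external `cksum` command via `getline`, but spawning a process per line is far slower than an in-awk function.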
Mike
PS. I would love to be using real Linux instead of Cygwin (as I do at home). Unfortunately that is a boundary condition.
Last edited by Michael Stora; 03-26-2014 at 08:03 PM.
LEARN ABOUT OSX
db_upgrade
db_upgrade(1) BSD General Commands Manual db_upgrade(1)
NAME
db_upgrade
SYNOPSIS
db_upgrade [-NsV] [-h home] [-P password] file ...
DESCRIPTION
The db_upgrade utility upgrades the Berkeley DB version of one or more files and the databases they contain to the current release version.
The options are as follows:
-h
Specify a home directory for the database environment; by default, the current working directory is used.
-N
Do not acquire shared region mutexes while running. Other problems, such as potentially fatal errors in Berkeley DB, will be ignored as
well. This option is intended only for debugging errors, and should not be used under any other circumstances.
-P
Specify an environment password. Although Berkeley DB utilities overwrite password strings as soon as possible, be aware there may be a
window of vulnerability on systems where unprivileged users can see command-line arguments or where utilities are not able to overwrite
the memory containing the command-line arguments.
-s
This flag is only meaningful when upgrading databases from releases before the Berkeley DB 3.1 release.
As part of the upgrade from the Berkeley DB 3.0 release to the 3.1 release, the on-disk format of duplicate data items changed. To correctly
upgrade the format requires that applications specify whether duplicate data items in the database are sorted or not. Specifying the -s flag
means that the duplicates are sorted; otherwise, they are assumed to be unsorted. Incorrectly specifying the value of this flag may lead to
database corruption.
Because the db_upgrade utility upgrades a physical file (including all the databases it contains), it is not possible to use db_upgrade to
upgrade files where some of the databases it includes have sorted duplicate data items, and some of the databases it includes have
unsorted duplicate data items. If the file does not have more than a single database, if the databases do not support duplicate data
items, or if all the databases that support duplicate data items support the same style of duplicates (either sorted or unsorted),
db_upgrade will work correctly as long as the -s flag is correctly specified. Otherwise, the file cannot be upgraded using db_upgrade, and
must be upgraded manually using the db_dump and db_load utilities.
-V
Write the library version number to the standard output, and exit.
It is important to realize that Berkeley DB database upgrades are done in place, and so are potentially destructive. This means that if the
system crashes during the upgrade procedure, or if the upgrade procedure runs out of disk space, the databases may be left in an inconsistent
and unrecoverable state. See Upgrading databases for more information.
The db_upgrade utility may be used with a Berkeley DB environment (as described for the -h option, the environment variable DB_HOME, or
because the utility was run in a directory containing a Berkeley DB environment). In order to avoid environment corruption when using a
Berkeley DB environment, db_upgrade should always be given the chance to detach from the environment and exit gracefully. To cause db_upgrade
to release all environment resources and exit cleanly, send it an interrupt signal (SIGINT).
The db_upgrade utility exits 0 on success, and >0 if an error occurs.
ENVIRONMENT
DB_HOME If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home, as
described in DB_ENV->open.
SEE ALSO
db_archive(1), db_checkpoint(1), db_deadlock(1), db_dump(1), db_load(1), db_printlog(1), db_recover(1), db_stat(1), db_verify(1)
Darwin December 3, 2003 Darwin