Output Multiple Fields from a Database File
Posted by Dennz in UNIX for Dummies Questions & Answers, 08-31-2003

I am fairly new to UNIX, and I was wondering if anybody can help me out with this:

I am trying to output the following fields to a file:

Field1
Field2
Field4

from a database file called dataBase1.
This is how the file looks:

dataBase1 TABLE DATA Example
==================


Table
------------------------------------
Field1 : XXXXXXXXXX
Field2 : XXXXXXXXXX
Field3 : XXXXXXXXXX
Field4 : XXXXXXXXX

However, I am only able to output the fields I want one at a time, using:

*dataBase1_dump -a | grep "Field1 : XXXXX" > filename.out

but I would like the output file to be in this format:

Field1 : XXXXXXXXXX
Field2 : XXXXXXXXXX
Field4 : XXXXXXXXX

Thank you so much for taking the time to look into this, guys.
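
One possible approach, sketched under the assumption that the dump prints each record as the "FieldN : value" lines shown above, is to match all three fields in a single pass with an extended regular expression, or to select them by name with awk. The dump command and output file name below are the ones from the post:

    # Single pass with an extended regular expression (grep -E, or egrep
    # on very old systems): keeps any line for Field1, Field2 or Field4.
    *dataBase1_dump -a | grep -E "Field(1|2|4) :" > filename.out

    # The same selection with awk, splitting on " : " and testing the
    # field name to the left of the colon.
    *dataBase1_dump -a | awk -F" : " '$1 == "Field1" || $1 == "Field2" || $1 == "Field4"' > filename.out

Either command preserves the order in which the lines appear in the dump, so the output file ends up in the Field1 / Field2 / Field4 format shown above.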
 

TABLIFY(1p)						User Contributed Perl Documentation					       TABLIFY(1p)

NAME
       tablify - turn a delimited text file into a text table

SYNOPSIS
       tablify [options] file

       Options:

         -h|--help            Show help
         --no-headers         Assume first line is data, not headers
         --no-pager           Do not use $ENV{'PAGER'} even if defined
         --strip-quotes       Strip " or ' around fields
         -l|--list            List the fields in the file (for use with -f)
         -f|--fields=f1[,f2]  Show only fields in comma-separated list;
                              when used in conjunction with "no-headers"
                              the list should be field numbers (starting
                              at 1); otherwise, should be field names
         -w|--where=f<cmp>v   Apply the "cmp" Perl operator to restrict
                              output where field "f" matches the value "v";
                              acceptable operators include ==, eq, >, >=,
                              <=, and =~
         -v|--vertical        Show records vertically
         --limit=n            Limit to first "n" records
         --fs=x               Use "x" as the field separator
                              (default is tab "\t")
         --rs=x               Use "x" as the record separator
                              (default is newline "\n")
         --as-html            Create an HTML table instead of plain text

DESCRIPTION
       This script is essentially a quick way to parse a delimited text file
       and view it as a nice ASCII table.  By selecting only certain fields,
       employing a "where" clause to only select records where a field
       matches some value, and using the "limit" to only see some of the
       output, you almost have a mini-database front-end for a simple text
       file.

EXAMPLES
       Given a data file like this:

         name,rank,serial_no,is_living,age
         George,General,190293,0,64
         Dwight,General,908348,0,75
         Attila,Hun,,0,56
         Tojo,Emporor,,0,87
         Tommy,General,998110,1,54

       To find the fields you can reference, use the list option:

         $ tablify --fs ',' -l people.dat
         +-----------+-----------+
         | Field No. | Field     |
         +-----------+-----------+
         | 1         | name      |
         | 2         | rank      |
         | 3         | serial_no |
         | 4         | is_living |
         | 5         | age       |
         +-----------+-----------+

       To extract just the name and serial numbers, use the fields option:

         $ tablify --fs ',' -f name,serial_no people.dat
         +--------+-----------+
         | name   | serial_no |
         +--------+-----------+
         | George | 190293    |
         | Dwight | 908348    |
         | Attila |           |
         | Tojo   |           |
         | Tommy  | 998110    |
         +--------+-----------+
         5 records returned

       To extract the first through third fields and the fifth field (where
       field numbers start at "1" -- tip: use the list option to quickly
       determine field numbers), use this syntax for fields:

         $ tablify --fs ',' -f 1-3,5 people.dat
         +--------+---------+-----------+------+
         | name   | rank    | serial_no | age  |
         +--------+---------+-----------+------+
         | George | General | 190293    | 64   |
         | Dwight | General | 908348    | 75   |
         | Attila | Hun     |           | 56   |
         | Tojo   | Emporor |           | 87   |
         | Tommy  | General | 998110    | 54   |
         +--------+---------+-----------+------+
         5 records returned

       To select only the ones with six-digit serial numbers, use a where
       clause:

         $ tablify --fs ',' -w 'serial_no=~/^\d{6}$/' people.dat
         +--------+---------+-----------+-----------+------+
         | name   | rank    | serial_no | is_living | age  |
         +--------+---------+-----------+-----------+------+
         | George | General | 190293    | 0         | 64   |
         | Dwight | General | 908348    | 0         | 75   |
         | Tommy  | General | 998110    | 1         | 54   |
         +--------+---------+-----------+-----------+------+
         3 records returned

       To find Dwight's record, you would do this:

         $ tablify --fs ',' -w 'name eq "Dwight"' people.dat
         +--------+---------+-----------+-----------+------+
         | name   | rank    | serial_no | is_living | age  |
         +--------+---------+-----------+-----------+------+
         | Dwight | General | 908348    | 0         | 75   |
         +--------+---------+-----------+-----------+------+
         1 record returned

       To find the name of all the people with a serial number who are
       living:

         $ tablify --fs ',' -f name -w 'is_living==1' -w 'serial_no>0' people.dat
         +-------+
         | name  |
         +-------+
         | Tommy |
         +-------+
         1 record returned

       To filter outside of the program and simply format the results, use
       "-" as the last argument to force reading of STDIN (and probably
       assume no headers):

         $ grep General people.dat | tablify --fs ',' -f 1-3 --no-headers -
         +---------+--------+--------+
         | Field1  | Field2 | Field3 |
         +---------+--------+--------+
         | General | 190293 | 0      |
         | General | 908348 | 0      |
         | General | 998110 | 1      |
         +---------+--------+--------+
         3 records returned

       When dealing with data lacking field names, you can specify
       "no-headers" and then refer to fields by number (starting at one),
       e.g.:

         $ tail -5 people.dat | tablify --fs ',' --no-headers -w '3 eq "General"' -
         +--------+---------+--------+--------+--------+
         | Field1 | Field2  | Field3 | Field4 | Field5 |
         +--------+---------+--------+--------+--------+
         | George | General | 190293 | 0      | 64     |
         | Dwight | General | 908348 | 0      | 75     |
         | Tommy  | General | 998110 | 1      | 54     |
         +--------+---------+--------+--------+--------+
         3 records returned

       If your file has many fields which are hard to see across the screen,
       consider using the vertical display with "-v" or "--vertical", e.g.:

         $ tablify --fs ',' -v --limit 1 people.dat
         ************ Record 1 ************
              name: George
              rank: General
         serial_no: 190293
         is_living: 0
               age: 64
         1 record returned

SEE ALSO
       o   Text::RecordParser

       o   Text::TabularDisplay

       o   DBD::CSV

       Although I don't use DBD::CSV in this module, the idea was much the
       inspiration for this.  I just didn't want to have to install DBI and
       DBD::CSV to get this kind of functionality.  I think my interface is
       simpler.

AUTHOR
       Ken Youens-Clark <kclark@cpan.org>.

LICENSE AND COPYRIGHT
       Copyright (C) 2006-10 Ken Youens-Clark.  All rights reserved.

       This program is free software; you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation; version 2.

       This program is distributed in the hope that it will be useful, but
       WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
       General Public License for more details.

perl v5.10.1                      2010-07-26                      TABLIFY(1p)