CSV file
Post 302491986 by GRS64 on Saturday 29th of January 2011, 12:07:59 AM

I am new to script writing. With the help of others I now have a script that reads the file names in a folder and records each file name in a comma-delimited file, "import.txt".

I can now import this into OpenOffice Calc as a Text CSV file, but it places each file name at the head of a separate column.

What I really want is to import the file so that each file name appears in a separate row, all in one column.

It seems to me I need somehow to put a newline (a line feed or carriage return) where the comma is in the script.
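For illustration (these file names are invented, not taken from the post), the idea is to turn a single comma-separated line such as

    song1.mp3,song2.mp3,song3.mp3,

into one name per line:

    song1.mp3
    song2.mp3
    song3.mp3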

How do I do this?


I know one can do an Edit > Cut and then a Paste Special that transposes the data from a single long row into a single column, but this is messy for the number of files I have to do. So if it could be done when the CSV file is generated, it would be a lot better and easier.

I am using OpenOffice 3.3, with Fedora 14 64-bit as the OS.

The part of the current script where the comma is added is:

    "$LINE," >> import.txt; done

This works fine for what I want, but as I say, the end result when importing as comma-delimited is one file name per column.
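A minimal sketch of one way to get that result: have the loop write a newline after each name instead of a comma. The full loop is not shown in the post, so the "for LINE in *" structure and the use of printf below are assumptions, and rows.txt is just an example output name.

    # Write one file name per line (loop shape assumed; only the
    # redirection fragment appears in the post):
    for LINE in *; do
        printf '%s\n' "$LINE" >> import.txt
    done

    # Or convert the already-generated comma-separated file in one step:
    tr ',' '\n' < import.txt > rows.txt

With one name per line and no commas left in the file, OpenOffice Calc imports each line as a new row, all in column A.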

Thanks
 

DBI::SQL::Nano(3)					User Contributed Perl Documentation					 DBI::SQL::Nano(3)

NAME
       DBI::SQL::Nano - a very tiny SQL engine

SYNOPSIS
       BEGIN { $ENV{DBI_SQL_NANO}=1 }   # forces use of Nano rather than SQL::Statement
       use DBI::SQL::Nano;
       use Data::Dumper;
       my $stmt = DBI::SQL::Nano::Statement->new(
           "SELECT bar,baz FROM foo WHERE qux = 1"
       ) or die "Couldn't parse";
       print Dumper $stmt;

DESCRIPTION
       DBI::SQL::Nano is meant as a *very* minimal SQL engine for use in situations where SQL::Statement is not available. In most
       situations you are better off installing SQL::Statement, although DBI::SQL::Nano may be faster for some very simple tasks.

       DBI::SQL::Nano, like SQL::Statement, is primarily intended to provide a SQL engine for use with some pure perl DBDs including
       DBD::DBM, DBD::CSV, DBD::AnyData, and DBD::Excel. It isn't of much use in and of itself. You can dump out the structure of a
       parsed SQL statement, but that's about it.

USAGE
   Setting the DBI_SQL_NANO flag
       By default, when a DBD uses DBI::SQL::Nano, the module will look to see if SQL::Statement is installed. If it is,
       SQL::Statement objects are used. If SQL::Statement is not available, DBI::SQL::Nano objects are used.

       In some cases, you may wish to use DBI::SQL::Nano objects even if SQL::Statement is available. To force usage of
       DBI::SQL::Nano objects regardless of the availability of SQL::Statement, set the environment variable DBI_SQL_NANO to 1.

       You can set the environment variable in your shell prior to running your script (with SET or EXPORT or whatever), or else you
       can set it in your script by putting this at the top of the script:

           BEGIN { $ENV{DBI_SQL_NANO} = 1 }

   Supported SQL syntax
       Here's a pseudo-BNF. Square brackets [] indicate optional items; angle brackets <> indicate items defined elsewhere in the
       BNF.

       statement ::=
             DROP TABLE [IF EXISTS] <table_name>
           | CREATE TABLE <table_name> <col_def_list>
           | INSERT INTO <table_name> [<insert_col_list>] VALUES <val_list>
           | DELETE FROM <table_name> [<where_clause>]
           | UPDATE <table_name> SET <set_clause> <where_clause>
           | SELECT <select_col_list> FROM <table_name> [<where_clause>] [<order_clause>]

       the optional IF EXISTS clause ::=
           * similar to MySQL - prevents errors when trying to drop a table that doesn't exist

       identifiers ::=
           * table and column names should be valid SQL identifiers
           * especially avoid using spaces and commas in identifiers
           * note: there is no error checking for invalid names; some will be accepted, others will cause parse failures

       table_name ::=
           * only one table (no multiple table operations)
           * see identifiers for valid table names

       col_def_list ::=
           * a parens-delimited, comma-separated list of column names
           * see identifiers for valid column names
           * column types and column constraints may be included but are ignored, e.g. these are all the same:
                 (id,phrase)
                 (id INT, phrase VARCHAR(40))
                 (id INT PRIMARY KEY, phrase VARCHAR(40) NOT NULL)
           * you are *strongly* advised to put in column types even though they are ignored ... it increases portability

       insert_col_list ::=
           * a parens-delimited, comma-separated list of column names
           * as in standard SQL, this is optional

       select_col_list ::=
           * a comma-separated list of column names
           * or an asterisk denoting all columns

       val_list ::=
           * a parens-delimited, comma-separated list of values, which can be:
                 * placeholders (an unquoted question mark)
                 * numbers (unquoted numbers)
                 * column names (unquoted strings)
                 * nulls (the unquoted word NULL)
                 * strings (delimited with single quote marks)
           * note: leading and trailing percent mark (%) and underscore (_) can be used as wildcards in quoted strings for use with
             the LIKE and CLIKE operators
           * note: escaped single quote marks within strings are not supported, neither are embedded commas; use placeholders instead

       set_clause ::=
           * a comma-separated list of column = value pairs
           * see val_list for acceptable value formats

       where_clause ::=
           * a single "column/value <op> column/value" predicate, optionally preceded by "NOT"
           * note: multiple predicates combined with ORs or ANDs are not supported
           * see val_list for acceptable value formats
           * op may be one of: < > >= <= = <> LIKE CLIKE IS
           * CLIKE is a case-insensitive LIKE

       order_clause ::= column_name [ASC|DESC]
           * a single-column optional ORDER BY clause is supported
           * as in standard SQL, if neither ASC (ascending) nor DESC (descending) is specified, ASC becomes the default

ACKNOWLEDGEMENTS
       Tim Bunce provided the original idea for this module, helped me out of the tangled trap of namespace, and provided help and
       advice all along the way. Although I wrote it from the ground up, it is based on Jochen Wiedmann's original design of
       SQL::Statement, so much of the credit for the API goes to him.

AUTHOR AND COPYRIGHT
       This module is written and maintained by Jeff Zucker < jzucker AT cpan.org >

       Copyright (C) 2004 by Jeff Zucker, all rights reserved. You may freely distribute and/or modify this module under the terms of
       either the GNU General Public License (GPL) or the Artistic License, as specified in the Perl README file.

perl v5.12.1                              2007-07-16                              DBI::SQL::Nano(3)