Hi,
I have a file as follows:
jonathan:bonus1,bonus2
gerald:bonus1
patrick:bonus1,bonus2
My desired output is
jonathan:bonus1
jonathan:bonus2
gerald:bonus1
patrick:bonus1
patrick:bonus2
My current code is
cat "$F" | awk -F':'
How should I continue the code? Can I do something... (5 Replies)
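One way to continue from there (a sketch, assuming $F holds the filename): split on ':' to get the name, then split the second field on ',' and print one line per bonus.

```shell
# Split each line on ':', then split the bonus list on ',' and
# print one "name:bonus" line per bonus.
awk -F':' '{
    n = split($2, bonuses, ",")
    for (i = 1; i <= n; i++)
        print $1 ":" bonuses[i]
}' "$F"
```

With the sample input above, this prints each person once per bonus, exactly as in the desired output.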
Hi, I have encountered a problem and have tried many different things, but my brain just has some limitations, lol. Anyway, I was trying to make the program below work so I can process multiple commands just by separating them with ;. I would appreciate it if someone could just make it work because... (2 Replies)
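The original program isn't shown, but a minimal sketch of the ';'-splitting part looks like this: read one line of input, break it on ';', and run each piece as its own command.

```shell
# Read one line, split it on ';', and run each piece in turn.
read -r line
oldIFS=$IFS
IFS=';'
set -- $line        # split the line into ';'-separated pieces
IFS=$oldIFS
for cmd in "$@"; do
    eval "$cmd"     # run each command
done
```

Note that eval re-parses each piece, so the usual caveats about untrusted input apply.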
Hi (warning: newbie question),
I am writing a script to run a series of tests on a program, which involves a line:
for file in test_suite/*.args
but later I want to send the output to file.out, and I need to separate the filename and extension somehow... Also, $file contains... (2 Replies)
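Shell parameter expansion can split the directory, base name, and extension apart without any external command. A sketch (the program name and the results/ directory here are made up):

```shell
for file in test_suite/*.args; do
    base=${file##*/}     # strip the directory  -> foo.args
    name=${base%.*}      # strip the extension  -> foo
    ext=${base##*.}      # keep the extension   -> args
    # hypothetical test run, writing per-test output files:
    ./myprog "$file" > "results/$name.out"
done
```

`${var%pattern}` trims the shortest match from the end and `${var##pattern}` the longest match from the start, which is why the same `##*.` idiom yields the extension.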
Hello guys...
I am a bit new to shell scripting and was looking for help!
I have syslog data on a Linux server recording log messages from a device.
I need to separate the data from the log into a file so that I can push it into Excel and get a report from it.
Log is in the format below
"... (2 Replies)
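The exact log format is cut off above, so this is only a generic sketch for standard syslog lines ("Mon DD HH:MM:SS host message..."); the field positions are assumptions. It rewrites each line as CSV, which Excel opens directly.

```shell
# Turn "Mon DD HH:MM:SS host rest-of-message" into
# timestamp,host,"message" CSV rows.
awk '{
    ts   = $1 " " $2 " " $3     # timestamp: "Mon DD HH:MM:SS"
    host = $4                   # logging host
    msg  = ""                   # everything after the host
    for (i = 5; i <= NF; i++) msg = msg (i > 5 ? " " : "") $i
    printf "%s,%s,\"%s\"\n", ts, host, msg
}' /var/log/syslog > report.csv
```

Adjust the field numbers once you know which columns your device actually emits.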
Hi,
I have a text file in following format:
2.45
5.67
6.43
I have to cut the values before the decimal point and store them in a file.
So the output file should look like:
2
5
6
.
.
and so on...
Can someone suggest me a sed/awk command for doing this? (2 Replies)
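Either tool handles this; both one-liners below drop everything from the decimal point onward (input.txt and output.txt are placeholder names):

```shell
# awk: treat '.' as the field separator and keep the first field
awk -F'.' '{ print $1 }' input.txt > output.txt

# sed: delete the literal '.' and everything after it
sed 's/\..*//' input.txt > output.txt
```

Note this truncates rather than rounds, so 5.67 becomes 5, not 6.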
I have a folder named main. Inside the main folder there are subfolders and files: main1, main2, main3, file1, file2, file3.
I want the folders main1 and main2 and the files file1 and file2 from the main folder, copied into a new folder.
Please suggest how to do it.
I am new to shell programming. (2 Replies)
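A sketch, assuming main sits in the current directory and "newfolder" is the destination name:

```shell
mkdir -p newfolder                        # create the destination
cp -r main/main1 main/main2 newfolder/   # copy the two subfolders
cp main/file1 main/file2 newfolder/      # copy the two files
```

The -r flag is what lets cp descend into the subfolders; plain files don't need it.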
Hi all,
I have a single text file, Contig3.fasta, that looks like this:
>NAME1
ACCTGGTA
>NAME2
GGTTGGACA
>NAME3
ATTTTGGGCCA
And it has about 100 items like this in it. What I would like to do is copy each item into its own text file (100 files in all), and have them named a certain way.
Output... (4 Replies)
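One common approach: let awk open a new output file each time a header line (one starting with '>') comes along. The NAME.fasta naming scheme below is just one option; adjust to taste.

```shell
# Start a new output file at every '>' header, named after the
# header text; write every line (header included) to the current file.
awk '/^>/ { if (out) close(out); out = substr($1, 2) ".fasta" }
     { print > out }' Contig3.fasta
```

With the sample above this produces NAME1.fasta, NAME2.fasta, and NAME3.fasta, each holding its header line plus sequence. The close() call matters with ~100 items, since awk otherwise keeps every file open at once.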
Hello,
I have a text file of around 100,000+ lines which has the following rigid structure:
Each field is separated by a comma.
Some examples are given below:
23,Chinttaman Pagare,चिंतमण पगारे
24, Chinttaman Pateel,चिंतामण पाटल
25, Chinttaman Rout,चिंतामण राऊत
26,... (3 Replies)
Hi
I have the following file
# cat red
it.d-state = 50988 41498 45 0 0 0
it.age_buffer= 1500
it.dir = 1
I need to grab the values that appear after the "="
But when I do so, some lines have more than one value after it and some have only one.
How do I parse the lines?
I have used the following... (6 Replies)
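One way that handles both the single-value and multi-value lines identically is to delete everything up to and including the first '=' (plus any spaces after it), printing only lines that matched:

```shell
# Print whatever follows the first '=' on each line, whether that is
# one value or several; -n + p suppresses lines without an '='.
sed -n 's/^[^=]*= *//p' red
```

For the sample file this yields "50988 41498 45 0 0 0", "1500", and "1", one per line.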
Discussion started by: Priya Amaresh
LEARN ABOUT OSX
cpanplus::shell::default::plugins::source (perl v5.16)
CPANPLUS::Shell::Default::Plugins::Source(3pm) Perl Programmers Reference Guide CPANPLUS::Shell::Default::Plugins::Source(3pm)NAME
CPANPLUS::Shell::Default::Plugins::Source - read in CPANPLUS commands
SYNOPSIS
CPAN Terminal> /source /tmp/list_of_commands /tmp/more_commands
DESCRIPTION
This is a "CPANPLUS::Shell::Default" plugin that works just like your unix shell's source(1) command; it reads in a file that has commands
in it to execute, and then executes them.
A sample file might look like this:
# first, update all the source files
x --update_source
# find all of my modules that are on the CPAN
# test them, and store the error log
a ^KANE$
t *
p /home/kane/cpan-autotest/log
# and inform us we're good to go
! print "Autotest complete, log stored; please enter your commands!"
Note how empty lines, and lines starting with a '#' are being skipped in the execution.
BUG REPORTS
Please report bugs or other issues to <bug-cpanplus@rt.cpan.org>.
AUTHOR
This module by Jos Boumans <kane@cpan.org>.
COPYRIGHT
The CPAN++ interface (of which this module is a part) is copyright (c) 2001 - 2007, Jos Boumans <kane@cpan.org>. All rights reserved.
This library is free software; you may redistribute and/or modify it under the same terms as Perl itself.
SEE ALSO
CPANPLUS::Shell::Default, CPANPLUS::Shell, cpanp
perl v5.16.2 2012-10-11 CPANPLUS::Shell::Default::Plugins::Source(3pm)