Ok, I'm stumped and can't seem to find relevant info.
(I'm not even sure; I might have asked something similar before.)
I'm trying to use shell scripting/UNIX commands to extract URLs from a fairly large web page, with a view to ultimately wrapping this in PHP with exec() and including the... (2 Replies)
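A starting point for this kind of extraction (a minimal sketch, untested against the poster's page; assumes the page is saved locally as the hypothetical page.html and that the URLs are absolute http/https links):
# pull out unique absolute URLs; the character class ends a match at quotes, spaces, and angle brackets
grep -Eo 'https?://[^"'\'' <>]+' page.html | sort -u
The same command string could later be handed to PHP's exec().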
I have a very big file, 5 GB in size, with about 50 million records in it. I have to delete records based on record numbers that I know from outside, without opening the file in an editor. The record numbers are quite random, like 5000678, 7890005, etc.
Can somebody let me know how I can... (5 Replies)
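One way to do this without an editor is to stream the file and drop the unwanted line numbers (a sketch, assuming one record per line; 5000678 and 7890005 are the example numbers from the post, and bigfile is a placeholder name):
# print every line except the listed record numbers; writes the result to a new file
awk 'NR != 5000678 && NR != 7890005' bigfile > bigfile.new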
Hi,
I have one huge record and know that each logical record in the file is 550 bytes long. How do I parse the individual records out of that single huge record?
Thanks, (4 Replies)
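Because the records are a fixed 550 bytes, a byte-count split is enough (a sketch; assumes the data contains no newlines of its own, and hugefile is a placeholder name):
# re-wrap the stream into 550-byte lines, one logical record per line
fold -b -w 550 hugefile > records.txt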
Hi everyone.
I am a newbie to Linux. I have a problem I couldn't solve on my own. I have a text file with records separated by empty lines, like this:
ID: 20
Name: X
Age: 19
ID: 21
Name: Z
ID: 22
Email: xxx@yahoo.com
Name: Y
Age: 19
I want to grep records that... (4 Replies)
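awk's paragraph mode treats each blank-line-separated block as one record, which fits this layout. A sketch (the real search condition is truncated above, so 'Age: 19' stands in for it, and file.txt is a placeholder):
# RS="" splits input on blank lines; matching blocks print with a blank line between them
awk 'BEGIN {RS=""; ORS="\n\n"} /Age: 19/' file.txt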
I have 2 files; one file (say, details.txt) contains the details of employees and another file (say, emp.txt) has some selected employee names. I am extracting employee details from details.txt by using emp.txt and the corresponding code is:
while read -r line
do
    emp_name=$(echo "$line")
    grep -e... (7 Replies)
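Reading details.txt once per name is the slow part; grep can take the whole name list in a single pass (a sketch, assuming emp.txt holds one name per line):
# -F treats the names as fixed strings, -f reads them all from emp.txt
grep -F -f emp.txt details.txt
Adding -w would avoid matching names that are substrings of longer ones.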
Hi,
I have a file with tab-separated values. I need to identify duplicate entries based on columns 1 and 6 only.
For example:
I tried using uniq, but the output keeps only one of the duplicate entries instead of both. I need both of the entries above.
uniq -f5... (2 Replies)
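uniq only compares adjacent lines (optionally skipping leading fields), so it cannot key on exactly columns 1 and 6. A two-pass awk sketch that keeps both copies of each duplicate (assumes tab-separated input, per the post; file is a placeholder name):
# pass 1 counts each col1+col6 key; pass 2 prints every line whose key occurs more than once
awk -F'\t' 'NR==FNR {cnt[$1,$6]++; next} cnt[$1,$6] > 1' file file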
Hi All,
I am new to this forum and this is my first post.
My requirement is to optimize the time taken to grep a file with 40000 lines.
There are two files, FILEA (40000 lines) and FILEB (40000 lines).
The requirement is this: both files will be in the format below... (11 Replies)
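Running grep over FILEB once per FILEA line is quadratic; loading one file into an awk hash reduces the job to a single pass over each. A sketch (it assumes whole-line matching, since the actual format is truncated above):
# hold FILEA's lines in memory, then stream FILEB through the lookup once
awk 'NR==FNR {seen[$0]; next} $0 in seen' FILEA FILEB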
Hi All,
I have a requirement to extract paragraphs from an XML file based on another list file that holds specific parameters.
I will extract these paragraphs from the XML and import them into a scheduler tool.
file2
<FOLDER DATACENTER="ControlMserver" VERSION="800" PLATFORM="UNIX" FOLDER_NAME="SH_AP_INT_B01"... (3 Replies)
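For line-oriented XML like this Control-M export, one awk sketch prints whole <FOLDER>...</FOLDER> blocks whose opening tag mentions a name from the list file (list.txt is a placeholder; a real XML tool such as xmllint is more robust than pattern matching):
# pass 1 loads the wanted names; pass 2 prints a block when its <FOLDER ...> line contains one of them
awk 'NR==FNR {want[$0]; next}
     /<FOLDER / {inblk = 0; for (name in want) if (index($0, name)) inblk = 1}
     inblk
     /<\/FOLDER>/ {inblk = 0}' list.txt file2.xml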
awk 'NR==FNR {arr[$0]; next} $0 in arr' /tmp/Data_mismatch.sh /prd/HK/ACCTCARD_20160115.txt
Edit by bakunin: it seems that one CODE tag got lost somewhere. I corrected that, but please check your posts more carefully. Thank you. (5 Replies)
LEARN ABOUT OSX
SVK::Log::Filter::Grep(3) - User Contributed Perl Documentation
SYNOPSIS
SVK::Log::Filter::Grep - search log messages for a given pattern
DESCRIPTION
The Grep filter requires a single Perl pattern (regular expression) as its argument. The pattern is then applied to the svn:log property
of each revision it receives. If the pattern matches, the revision is allowed to continue down the pipeline. If the pattern fails to
match, the pipeline immediately skips to the next revision.
The pattern is applied with the /i modifier (case insensitivity). If you want case-sensitivity or other modifications to the behavior of
your pattern, you must use the "(?imsx-imsx)" extended pattern (see "perldoc perlre" for details). For example, to search for log messages
that match exactly the characters "foo", you might use
svk log --filter "grep (?-i)foo"
However, to search for "foo" without regard for case, one might try
svk log --filter "grep foo"
The results of any capturing parentheses inside the pattern are not available. If demand dictates, the Grep filter could be modified to
place the captured value somewhere in the stash for other filters to access.
If the pattern contains a pipe character ('|'), it must be escaped by preceding it with a '\' character. Otherwise, the portion of the
pattern after the pipe character is interpreted as the name of a log filter.
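For example, to match log messages containing either "foo" or "bar" (placeholder terms), the alternation pipe has to be escaped so it is not read as a pipeline separator:
svk log --filter "grep foo\|bar"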
STASH/PROPERTY MODIFICATIONS
Grep leaves all properties and the stash intact.