hello... I have a file with a list of filepaths in it, like so:
delMe.txt (files to be deleted)
Now here's what I'm trying to do. I want any lines in the above file (delMe.txt) to be deleted out of the original file they came from (below, orig.txt):
orig.txt
I started to do something like this (but I think I'm overcomplicating it)
thanks a bajillion in advance!!
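This can indeed be done in a single command with grep, assuming each line of delMe.txt is a complete line to remove (the file names below are just the ones from the post; the sample contents are made up for illustration):

```shell
# Sample stand-ins for the two files from the post:
printf 'keep this line\ndelete me\nkeep this too\n' > orig.txt
printf 'delete me\n' > delMe.txt

# Keep only the lines of orig.txt that do NOT appear, as whole lines,
# in delMe.txt:
#   -v  invert the match        -x  whole-line matches only
#   -F  fixed strings (no regex)  -f  read patterns from a file
grep -vxFf delMe.txt orig.txt > orig.cleaned.txt
cat orig.cleaned.txt
```

The -x flag matters: without it, a short entry in delMe.txt would also delete any longer line that merely contains it.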
Wait a second - I just realized all of my work is really pretty pointless and might be done in a single line.
What would anybody say is the easiest way to:
given a directory of files and subdirectories with files, recursively remove an entire line from any file which contains the string "this is a string to be removed" somewhere inside it?
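One common way to do the recursive version is grep to find the files and sed to edit them in place. This is a sketch assuming GNU sed (-i with no argument; BSD sed needs `-i ''`), with a throwaway demo tree since no real paths were given:

```shell
# Build a tiny directory tree to demonstrate on:
mkdir -p demo/sub
printf 'good line\nthis is a string to be removed\n' > demo/a.txt
printf 'this is a string to be removed here too\nfine\n' > demo/sub/b.txt

# grep -r searches recursively, -l prints only the names of matching
# files; sed then deletes every matching line in place (GNU sed -i):
grep -rl 'this is a string to be removed' demo | while IFS= read -r f; do
  sed -i '/this is a string to be removed/d' "$f"
done
```

Piping through grep -l first means sed only rewrites files that actually contain the string, instead of touching every file in the tree.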
Thanks, but I just can't seem to figure out how to use fgrep to do what I want.
I need to remove all the malicious lines inserted by Gumblar. My site doesn't use any iframes, and Gumblar injects everything as invisible iframes, so I figured I'd grep for all iframes and then see what else the hits have in common. I found they had these two URLs in common:
Here's how I did the search for the files infected with those URLs:
(FYI, it's not actually root that's being parsed, it's a local copy of the entire public_html file tree)
I redirected that ^ script's output to a file "infected.txt"
it looks like this: (cut off of course)
I then realised that for the majority of the files found above, the last line is where the infection lies. So after saving a copy of the full infected.txt and then removing the file paths listed that didn't have an infection on the last line, I wrote this script.
Now I'm realising that was silly and inefficient. I should have done something like ^ that, but instead of deleting the last line (in hopes that it's nowhere else in the file), the script should delete ANY lines found (in each file it looks at) that contain either "maliciousString1" OR "maliciousString2".
I'm tearing my hair out (with my lack of programming experience) trying to figure out how to do this. I feel like I'm on the right track ^ but I've just been stuck for a day and a half now.
ANY and all help would be greatly appreciated! Thanks so much in advance.
-----Post Update-----
Okay, this is what I came up with, if ANYone can try and help fill in my blanks/replace my semi-pseudocode, that would be amazing:
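Filling in the blanks, a sketch along these lines should do it. The strings maliciousString1/maliciousString2 are the placeholders from above standing in for the two real URLs, the demo tree stands in for the public_html copy, and GNU sed's in-place -i is assumed:

```shell
# Demo tree standing in for the local public_html copy:
mkdir -p site/js
printf '<html>\nmaliciousString1 payload\nok\n' > site/index.html
printf 'maliciousString2 payload\nvar x=1;\n' > site/js/app.js

# Delete any line containing either string, in every file that has one.
# grep -rlE finds the infected files (-E for the | alternation); sed
# then runs one d command per pattern, editing in place:
grep -rlE 'maliciousString1|maliciousString2' site | while IFS= read -r f; do
  sed -i '/maliciousString1/d; /maliciousString2/d' "$f"
done
```

Since the real patterns are URLs, remember to escape any characters that are special in sed regexes (dots, slashes) or use a different delimiter.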
Hello,
I have a file of more than 10,000 lines.
I want to delete 40 lines after every 20 lines.
E.g. from a huge file, I want to delete lines 34 - 74, then 94 - 134, and so on.
Please let me know how i can do it.
Best regards, (11 Replies)
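Taking the stated rule literally - keep 20 lines, drop the next 40, repeat every 60 - awk can do it in one pass. (Note the example range 34 - 74 actually spans 41 lines and doesn't start at line 21, so the offsets below are a sketch of the keep-20/drop-40 cycle and would need adjusting to match the real pattern.)

```shell
# Stand-in for the huge file: 120 numbered lines.
seq 120 > huge.txt

# Keep the first 20 lines of every 60-line block, drop the other 40.
# (NR - 1) % 60 is the line's position within its block, 0-59.
awk '(NR - 1) % 60 < 20' huge.txt > trimmed.txt
```

From 120 input lines this keeps lines 1-20 and 61-80; shifting the cycle start is just a matter of changing the `- 1` offset.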
Hi
This is a sample of my data file.
##field PH01000000 1 4869017
#PH01000000G0240
WWW278545G0240 P.he_model_v1.0 erine 119238 121805 . - . ID=PH01000000G0240;Description="zinc finger, C3HC4 type domain containing protein, expressed"... (7 Replies)
Hi All,
I have a very huge file (4GB) which has duplicate lines. I want to delete the duplicate lines, leaving only unique lines. sort, uniq, and awk '!x++' are not working, as they run out of buffer space.
I don't know if this works: I want to read each line of the file in a for loop, and want to... (16 Replies)
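Running out of buffer space with sort is often a temp-directory problem rather than a RAM problem: GNU sort spills to disk, so pointing -T at a partition with room (and capping the in-memory buffer with -S) may get it through. A sketch, with a tiny file standing in for the 4GB one:

```shell
# Stand-in for the 4GB file:
printf 'b\na\nb\nc\na\n' > big.txt

# GNU sort: -u emits each distinct line once, -T chooses where the
# on-disk spill files go (pick a partition with free space), -S caps
# the in-memory buffer before spilling. Note the output is sorted;
# the original line order is not preserved.
sort -u -S 512M -T /tmp big.txt > dedup.txt
```

If the original order matters, awk '!x++' is the usual tool, but it must hold every distinct line in memory, which is presumably why it failed here.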
Input:
a
b
b
c
d
d
I need:
a
c
I know how to get this (the lines that have duplicates):
b
d
sort file | uniq -d
But I need the opposite of this. I have searched the forum and other places as well, but have found a solution for everything except this variant of the problem. (3 Replies)
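The opposite of `uniq -d` is `uniq -u`, which prints only the lines that occur exactly once in the sorted input:

```shell
# The input from the post:
printf 'a\nb\nb\nc\nd\nd\n' > file

# -u: print only lines that are NOT repeated in the sorted input.
sort file | uniq -u
# prints:
# a
# c
```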
I have a file which has about 500K records and I need to delete about 50 records from the file. I know line numbers and am using
sed '13456,13457,......d' filename > new file.
It does not seem to be working.
Any help will be greatly appreciated. (5 Replies)
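The problem is that sed takes at most a *pair* of addresses: `sed '13456,13457d'` means the RANGE of lines 13456 through 13457, and a longer comma-separated list is a syntax error. Each line number needs its own d command. A sketch on a small file (line numbers made up for illustration):

```shell
seq 10 > filename
printf '3\n7\n9\n' > linenums.txt

# Each line number gets its own d command, separated by semicolons:
sed '3d;7d;9d' filename > newfile1

# For ~50 numbers kept one per line in a file, generate the sed
# script instead of typing it (append "d" to every number):
sed 's/$/d/' linenums.txt > del.sed
sed -f del.sed filename > newfile2
```

The numbers always refer to the original input lines, so they don't shift as earlier lines get deleted.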