How do I separate a portion of a file name to use grep on?
I'm trying to write a script that takes a file name in the form Name_Num1_Num2.Extension, and I want to separate out the name portion and then use grep to see if that part contains any illegal characters.
I already have my grep command written and it works; I'm just not sure how to separate the Name part from the _Num1_Num2.Extension part. Is it possible to look at just one part of the file name and run my command on it?
I also want to do the same with the _Num1_Num2 and .Extension parts, checking whether they contain illegal characters, but I'm not sure how to approach that either.
The reason I want to check each part separately is that each part of the file name allows different characters: the name cannot contain anything other than letters and digits, and the number portion should contain only digits.
So in short my question is: is there any way I can check portions of filenames? Or is there a better way to approach this? Thanks.
---------- Post updated at 04:22 PM ---------- Previous update was at 04:00 PM ----------
Never mind, I got a solution by doing the following:
General shell syntax, using unix pattern matching of the case statement to check the filename format, without separating the filename in different parts:
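A minimal sketch of that idea (the exact pattern is an assumption, and since glob patterns cannot express "one or more", this check is approximate rather than strict):

```shell
#!/bin/sh
# Approximate shape check: reject characters outside the allowed set,
# then require the Name_Num1_Num2.Extension layout.
check_format() {
  case $1 in
    *[!A-Za-z0-9_.]*)                        return 1 ;;  # illegal character anywhere
    [A-Za-z0-9]*_[0-9]*_[0-9]*.[A-Za-z0-9]*) return 0 ;;  # rough shape match
    *)                                       return 1 ;;
  esac
}

check_format "report_12_34.txt" && echo "ok"        # matches the expected shape
check_format "bad file_1_2.txt" || echo "rejected"  # space is an illegal character
```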
Or bash/ksh93/zsh, same thing, but this time using extended regular expression matching:
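With bash's `[[ =~ ]]` operator a strict check is straightforward (the regex is my assumption about the intended rules):

```shell
#!/bin/bash
# Strict check with an extended regular expression:
# name = letters/digits, two underscore-separated digit runs, then an extension.
check_format() {
  [[ $1 =~ ^[A-Za-z0-9]+_[0-9]+_[0-9]+\.[A-Za-z0-9]+$ ]]
}

check_format "report_12_34.txt" && echo "ok"
check_format "report_1x_34.txt" || echo "rejected"   # 'x' is not a digit
```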
A quick general (POSIX) way to split the filename in different variables, using a here-document:
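For example (variable names are mine; the IFS assignment applies to the read command only):

```shell
#!/bin/sh
# Split Name_Num1_Num2.Extension into four variables with one read.
# IFS='_.' makes read split on underscores and dots.
file="report_12_34.txt"
IFS='_.' read -r name num1 num2 ext <<EOF
$file
EOF
echo "name=$name num1=$num1 num2=$num2 ext=$ext"   # name=report num1=12 num2=34 ext=txt
```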
Or bash/ksh93/zsh, using a here-string:
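The same split with a here-string instead of a here-document (again, variable names are mine):

```shell
#!/bin/bash
# Split the filename on '_' and '.' using a here-string.
file="report_12_34.txt"
IFS='_.' read -r name num1 num2 ext <<< "$file"
printf '%s\n' "$name" "$num1" "$num2" "$ext"
```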
---
Quote:
Originally Posted by RudiC
How about - provided you are using bash -
Note: this will change IFS globally, not local to the array assignment. Also, the order in which these two assignments are executed is not defined, so it is best to separate them with a semicolon or newline, make sure the IFS assignment happens before the array assignment, and save IFS beforehand so it can be restored afterwards,
or use (bash):
In the latter case IFS does get set local to the read command.
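The quoted advice, sketched with a hypothetical filename (array and variable names are mine):

```shell
#!/bin/bash
file="report_12_34.txt"

# Risky form: 'IFS=_ parts=(...)' on one line would change IFS globally and the
# order of the two assignments is unspecified. Save and restore IFS instead:
oldIFS=$IFS
IFS=_
parts=($file)           # unquoted on purpose so word splitting applies
IFS=$oldIFS
echo "${parts[0]}"      # report

# Preferred form: here IFS is local to the read command itself.
IFS=_ read -ra parts <<< "$file"
echo "${parts[2]}"      # 34.txt
```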
Last edited by Scrutinizer; 12-04-2016 at 11:58 AM..
I need to grep multiple strings from a particular file.
I found the use of egrep "String1|String2|String3" file.txt | wc -l
Now what I'm really after is a separate count for each string found, while still invoking grep only once.
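I don't know the actual strings, but one common way to get per-string counts from a single grep pass is to print each match on its own line with -o and tally with uniq -c (the sample file and strings are placeholders):

```shell
#!/bin/sh
# -o prints every match on its own line; sort | uniq -c counts each string.
printf 'foo bar\nbar baz\nfoo foo\n' > sample.txt
grep -oE 'foo|bar|baz' sample.txt | sort | uniq -c
rm -f sample.txt
```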
Can you guys help ?
... (9 Replies)
Hello,
I want to grep a log ("server.log") for words in a separate file ("white-list.txt") and generate a separate log file containing each line that uses a word from the "white-list.txt" file.
Putting that in bullet points:
Search through "server.log" for lines that contain any word... (15 Replies)
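Assuming one word per line in white-list.txt, grep can read its patterns from a file directly; this is a standard approach, not necessarily the thread's accepted answer (the demo data below is mine):

```shell
#!/bin/sh
# Demo data standing in for server.log and white-list.txt.
printf 'user alice logged in\nuser bob failed\nsystem idle\n' > server.log
printf 'alice\nbob\n' > white-list.txt

# -F fixed strings, -w whole words only, -f read patterns from a file.
grep -Fwf white-list.txt server.log > filtered.log
cat filtered.log
rm -f server.log white-list.txt filtered.log
```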
Hi ladies and gentlemen. I have two text files. I need to replace content in one file with content from the other when both files contain a matching pattern.
Example:
text1.txt:
ABCD 1234567,HELLO_WORLDA,HELLO_WORLDB
DCBA 3456789,HELLO_WORLDE,HELLO_WORLDF
text2.txt:
XXXX,ABCD... (25 Replies)
Need to sort a portion of a file in a Alphabetical Order.
Example: the user adam is not in sorted order and should be. I don't want the complete file to get sorted.
Currently All_users.txt contains the following lines.
##############
# ARS USERS
##############
mike, Mike... (6 Replies)
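The post is truncated, but assuming the header is exactly the three lines shown, one way is to keep the header as-is, sort only the user lines that follow, and reassemble (file contents below are invented to match the excerpt):

```shell
#!/bin/sh
# Keep the 3-line header untouched and sort only the user lines after it.
printf '##############\n# ARS USERS\n##############\nmike, Mike\nadam, Adam\nzoe, Zoe\n' > All_users.txt
head -n 3 All_users.txt > sorted.txt        # copy the header verbatim
tail -n +4 All_users.txt | sort >> sorted.txt
cat sorted.txt
rm -f All_users.txt sorted.txt
```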
Alright, here's the deal. I'm running the following ruby script (output follows):
>> /Users/name/bin/acweather.rb -z 54321 -o /Users/name/bin -c
Clouds AND Sun 57/33 - Mostly sunny and cool
I want to just grab the "57/33" portion, but that's it. I don't want any other portion of the line. I... (5 Replies)
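One way to grab just that field from the script's output is grep -o with a regex for the digits/digits pattern (the line below is copied from the post; the regex is my assumption about the field's shape):

```shell
#!/bin/sh
# Extract only the temperature pair, e.g. 57/33, from the output line.
line='Clouds AND Sun 57/33 - Mostly sunny and cool'
echo "$line" | grep -oE '[0-9]+/[0-9]+'   # prints 57/33
```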
I want to grep a portion of the log file: match a particular pattern and include the 10 lines before and the 10 lines after each occurrence.
grep -n "SomeString Pattern" filename
I need the 10 lines before each occurrence and the 10 lines after it.
Please help. I need a simple solution, not in awk or sed. (9 Replies)
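Since awk and sed are ruled out: GNU and BSD grep (though not strict POSIX) support context options directly, -B for lines before, -A for lines after, -C for both. A small demo with invented data:

```shell
#!/bin/sh
# -C N prints each matching line plus N lines of context on each side;
# with the real log you would use: grep -B 10 -A 10 "SomeString Pattern" filename
seq 1 30 > demo.txt
grep -C 2 '^15$' demo.txt     # prints lines 13 through 17
rm -f demo.txt
```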
Hi,
I have my input as follows :
I have given two entries-
From system Mon Aug 1 23:52:47 2005
Source !100000006!:
Impact !100000005!: High
Status ! 7!: New
Last Name+!100000001!:
First Name+ !100000003!:
... (4 Replies)