OK, thanks, I think I get what you need. Try something like this:
Here's a sample session of it being run against directories identical to those in your example.
So as long as you provide the month and year as MMM-YYYY and the full path to the directory the report is to be generated for, you'll be fine. If that directory never changes you could hard-code these as variables and do away with providing them, but as it stands this keeps the script as interactive as possible.
Hi,
I have a large file with a repeating pattern in it. Now I want the file split into blocks of the pattern, with a specified number of lines in each output file.
i.e. the file looks like this:
1...
2...
2...
3...
1...
2...
3...
1...
2...
2...
2...
2...
2...
3...
where 1 is the start of a block... (5 Replies)
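A minimal awk sketch of the block split, assuming (as in the sample) the file begins with a block-start line and every block starts with a line beginning with 1; the output names are placeholders. It writes one block per file; capping each file at N blocks would follow the same shape:

  awk '/^1/ { close(out); out = "block" ++n ".txt" }   # a line starting with 1 opens the next output file
       { print > out }' input.txt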
I have a few txt files in a directory and I need to check their sizes one by one. If any of them is greater than 5 MB, I need to split that file in two.
Can someone help?
Thanks. (6 Replies)
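A hedged shell sketch: compare each file's byte count against 5 MB and split any oversized file into two roughly equal halves by line count (the directory path and output suffix are placeholders):

  for f in /some/dir/*.txt; do                    # /some/dir is a placeholder
      size=$(wc -c < "$f")                        # size in bytes
      if [ "$size" -gt $((5 * 1024 * 1024)) ]; then
          half=$(( ($(wc -l < "$f") + 1) / 2 ))   # half the line count, rounded up
          split -l "$half" "$f" "${f%.txt}_part_" # produces ..._part_aa and ..._part_ab
      fi
  done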
Hi Friends,
Below is my requirement. I have a file with the below structure.
0001A1....
0001B1..
....
0001L1
0002A1
0002B1
......
0002L1
..
The first 4 characters are the sequence number for a record; a record starts with A1 and ends with L1 with the same sequence number. Now the... (2 Replies)
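One possible awk sketch, assuming the lines of each sequence number are contiguous and each record should land in its own file (the output naming is an assumption):

  awk '{
      seq = substr($0, 1, 4)                     # first four characters = sequence number
      if (seq != prev) { close(out); out = "rec_" seq ".txt"; prev = seq }
      print > out
  }' input.txt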
Hello, I have a large file (2GB) that I would like to split based on pattern and size.
I've used the following command to split the file (token is "HELLO")
awk '/HELLO/{i++}{print > "file"i}' input.txt
and the output is similar to the following (I included the file size in KB):
10 ... (2 Replies)
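To honour both the pattern and a size limit, one hedged variation of the same awk command starts a new file at a HELLO line only once the current piece has grown past a threshold (the 500 MB figure is an assumption):

  awk 'BEGIN { i = 1 }
       /HELLO/ && size > 500 * 1024 * 1024 { close("file" i); i++; size = 0 }
       { print > ("file" i); size += length($0) + 1 }' input.txt   # +1 counts the newline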
I need to split a file if it is over 2GB in size (or any size), preferably on line boundaries. I have figured out how to get the file size using awk, and I can split the file based on the number of lines (which I got with wc -l), but I can't figure out how to connect the two in the script.
... (6 Replies)
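A sketch of one way to connect the two pieces in shell, with the file name as a placeholder: work out how many pieces the byte size requires, then divide the line count evenly across them (pieces can still exceed the limit slightly if line lengths vary wildly):

  f=input.txt                                  # placeholder file name
  max=$((2 * 1024 * 1024 * 1024))              # 2 GB in bytes
  size=$(wc -c < "$f")
  if [ "$size" -gt "$max" ]; then
      lines=$(wc -l < "$f")
      parts=$(( (size + max - 1) / max ))      # pieces needed, rounded up
      split -l $(( (lines + parts - 1) / parts )) "$f" "${f}.part_"
  fi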
Hi,
I have a file sample_1.txt (300k rows) which has data like below:
* Also, each record is around 64 KB
11|1|abc|102553|125589|64k bytes of data
10|2|def|123452|123356|......
13|2|geh|144351|121123|...
25|4|fgh|165250|118890|..
14|1|abc|186149|116657|......... (6 Replies)
Hi ,
I have huge files, around 400 MB, which contain CLOB data and cover different scenarios:
I am trying to pass the scenario number as a parameter and get the required modified file based on that scenario number and its criteria.
Scenario 1:
file name : scenario_1.txt
... (2 Replies)
I have file1.txt
asdas|csada|130310|0423|A1|canberra
sdasd|sfdsf|130426|2328|A1|sydney
Expected output: in each row, split the third and fourth columns into two-character fields:
asdas|csada|13|03|10|04|23|A1|canberra
sdasd|sfdsf|13|04|26|23|28|A1|sydney (10 Replies)
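A hedged awk sketch that reproduces the expected output above, assuming the third column is always six digits and the fourth always four:

  awk 'BEGIN { FS = OFS = "|" } {
      $3 = substr($3, 1, 2) OFS substr($3, 3, 2) OFS substr($3, 5, 2)   # 130310 -> 13|03|10
      $4 = substr($4, 1, 2) OFS substr($4, 3, 2)                        # 0423   -> 04|23
      print
  }' file1.txt

(Assigning to $3 rebuilds the line with OFS but does not re-split it, so $4 still refers to the original fourth column.)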
I have a file that is about 7 GB in size. The requirement is to split the file equally in such a way that the size of each split file is less than 2 GB. If the file is less than 2 GB, nothing needs to be done. (This needs to be done using a shell script.)
Thanks, (4 Replies)
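If GNU coreutils is available, split -C handles this directly: it fills each piece up to a byte limit without breaking lines (-C and -d are GNU extensions; the file name is a placeholder):

  f=bigfile.dat                              # placeholder name
  if [ "$(wc -c < "$f")" -gt $((2 * 1024 * 1024 * 1024)) ]; then
      split -d -C 1900m "$f" "${f}_part_"    # at most ~1900 MB per piece, lines kept whole
  fi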
Hi Team,
I have a requirement to split the file into two based on the column in which a particular value appears. Please find my sample file below.
Let's consider the delimiter of this file to be either a comma or a double colon (:: and ,). So I need to split the file in such a way that all... (2 Replies)
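The requirement is cut off above, but as a rough sketch of the general shape: awk accepts a regular-expression field separator covering both delimiters, and output can be routed by the column in which a value is found (TARGET and the column cut-off are hypothetical):

  awk -F'::|,' '{
      col = 0
      for (i = 1; i <= NF; i++) if ($i == "TARGET") { col = i; break }  # TARGET is a hypothetical value
      print > (col > 0 && col <= 3 ? "first.txt" : "second.txt")        # cut-off at column 3 is an assumption
  }' input.txt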
UNBURDEN-HOME-DIR(1)                          User Commands                         UNBURDEN-HOME-DIR(1)

NAME
unburden-home-dir - unburdens home directories from caches and trashes
SYNOPSIS
unburden-home-dir [ -n | -u | -f filter ]
unburden-home-dir ( -h | --help | --version )
DESCRIPTION
unburden-home-dir unburdens the home directory of files and directories which cause high I/O or disk usage but are not
important if they are lost, e.g. caches or trash directories.
When being run it moves the files and directories given in the configuration file to a location outside the home directory, e.g. /tmp or
/scratch, and puts appropriate symbolic links in the home directory instead.
OPTIONS
-f just unburden those directories matched by the given filter (a Perl regular expression); matches the already unburdened
directories if used together with -u.
-F Do not check for files in use with lsof before (re)moving files.
-n dry run (show what would be done)
-u undo (reverse the functionality and put stuff back into the home directory)
-h, --help
show this help
--version
show the program's version
EXAMPLES
Example configuration files can be found at /usr/share/doc/unburden-home-dir/examples on Debian-based systems and in the etc/ directory of
the source tarball.
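A few sample invocations using only the options documented above (the -f pattern is illustrative):

  unburden-home-dir -n             # dry run: show what would be moved
  unburden-home-dir -f 'mozilla'   # only unburden entries matching this Perl regex
  unburden-home-dir -u             # undo: move everything back into the home directory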
FILES
/etc/unburden-home-dir, /etc/unburden-home-dir.list, ~/.unburden-home-dir, ~/.unburden-home-dir.list, /etc/default/unburden-home-dir,
/etc/X11/Xsession.d/95unburden-home-dir
Read /usr/share/doc/unburden-home-dir/README on Debian-based installations, or README in the source tarball, for an explanation of these files.
SEE ALSO
corekeeper (http://openvswitch.org/cgi-bin/gitweb.cgi?p=corekeeper), autotrash(1), agedu(1), bleachbit(1).
For du(1)-like but more comfortable tools, see ncdu(1) (text-mode), baobab(1) (GNOME), filelight(1) (KDE), xdiskusage(1) (X tool calling
du(1) itself), or xdu(1) (X tool reading du(1) output from STDIN).
AUTHOR
Unburden Home Dir is written and maintained by Axel Beckert <beckert@phys.ethz.ch>
LICENSE
Unburden Home Dir is available under the terms of the GNU General Public License (GPL) version 2 or any later version at your option.
Unburden Home Directory                         May 2012                            UNBURDEN-HOME-DIR(1)