03-27-2009
find a file and print its size
I have a directory, /local/test/
Under this directory are many subdirectories; each subdir has about 70 files, and the 70 files always have the same names. I want to print to the screen the size of fileabc.txt in each of the subdirectories. I cannot seem to make it work with pipes and splats (*) because there are about 12,000 subdirectories in /local/test/.
So each of these 12,000 subdirs contains a fileabc.txt and I want to know its size in each subdir.
I tried this command and it didn't work:
>cd /local/test/
>find . -name fileabc.txt | ls -lt
It obviously didn't work!
any ideas?
Thank you!!
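The pipe fails because ls never reads file names from standard input, so `find ... | ls -lt` just lists the current directory. A sketch of two working variants (the `-printf` form assumes GNU find; a throwaway demo tree stands in for /local/test here):

```shell
root=$(mktemp -d)              # stand-in for /local/test
mkdir -p "$root/sub1" "$root/sub2"
printf 'hello' > "$root/sub1/fileabc.txt"
printf 'hi'    > "$root/sub2/fileabc.txt"

# Portable: have find run ls on each match itself
find "$root" -type f -name fileabc.txt -exec ls -l {} \;

# GNU find: print size (bytes) and path, no ls needed
find "$root" -type f -name fileabc.txt -printf '%s %p\n'
```

With 12,000 subdirectories this stays workable because find never builds one giant argument list the way `ls -lt */fileabc.txt` would.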
10 More Discussions You Might Find Interesting
1. UNIX for Dummies Questions & Answers
Hi,
Can somebody PLEASE help me.
Suppose I want to find the file with the largest number of bytes in a particular directory. How do I do that?
ls -s will give the size in blocks,
but I want the largest-sized file, in bytes, KB, or MB.
Thanks in advance.
Bye
Rooh
:( (1 Reply)
Discussion started by: rooh
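For what it's worth, one sketch of an answer: `ls -l` already reports sizes in bytes, so sorting on that column surfaces the largest file (the demo files below are invented):

```shell
dir=$(mktemp -d)
printf '123'        > "$dir/small.txt"
printf '1234567890' > "$dir/big.txt"

# Column 5 of ls -l is the size in bytes; sort on it, keep the last line
ls -l "$dir" | sort -k5,5n | tail -1
```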
2. HP-UX
Hi,
Could anyone please let me know what option
is available in UNIX to print by specifying the paper size?
We are using HP-UX 11i. I couldn't see any option specified in the 'lp' command to print the report by specifying the size of the paper. It would be of great help to me, if... (1 Reply)
Discussion started by: ukarthik
3. Solaris
hi all,
In my server there are some specific application files which are spread throughout the server... these are spread in folders, sub-folders, child folders...
Please help me: how can I find the total size of these specific files on the server... (3 Replies)
Discussion started by: abhinov
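A sketch of one way to total them up, assuming GNU find; the file name app.log is invented for illustration:

```shell
root=$(mktemp -d)
mkdir -p "$root/a/b"
printf '1234' > "$root/app.log"
printf '12'   > "$root/a/b/app.log"

# Print each matching file's size in bytes, then sum with awk
find "$root" -type f -name 'app.log' -printf '%s\n' |
    awk '{total += $1} END {print total, "bytes"}'
```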
4. Shell Programming and Scripting
My Question is
-----------------
Assume you have a directory (i.e. /home/test/) which contains n number of files.
Rename all the files which have a byte count of more than zero (0) with a .bak extension.
Write a shell script to achieve this output,
and execute it without using "./" in front of... (6 Replies)
Discussion started by: hgriva1
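A sketch of the renaming part (the no-"./" execution detail is a PATH question, not shown here): `-size +0c` selects files larger than zero bytes, and `-maxdepth` assumes GNU or BSD find.

```shell
dir=$(mktemp -d)            # stand-in for /home/test/
printf 'data' > "$dir/full.txt"
: > "$dir/empty.txt"        # zero bytes; must be left untouched

# Rename every non-empty regular file to name.bak
find "$dir" -maxdepth 1 -type f -size +0c \
    -exec sh -c 'mv "$1" "$1.bak"' _ {} \;

ls "$dir"                   # full.txt.bak and empty.txt
```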
5. UNIX for Advanced & Expert Users
Hi, in my shell script I have to do this:
1. There is a file called testing.txt in the /home/report directory.
If the file size is 0 (zero) and the date is today's date, then I have to print
"Successful", else "Failed".
2. There is a file called number.txt which will have only one line of text, like this... (10 Replies)
Discussion started by: gsusarla
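A sketch for step 1, assuming GNU find for the `-newermt` timestamp test (a temp file stands in for /home/report/testing.txt):

```shell
f=$(mktemp)    # stand-in for /home/report/testing.txt; just created, so "today"
: > "$f"       # truncate to zero bytes

# Empty AND modified since midnight today -> find prints the path
if [ -n "$(find "$f" -size 0 -newermt "$(date +%F)")" ]; then
    echo "Successful"
else
    echo "Failed"
fi
```

Here it prints Successful; a non-empty file, or one last modified before today, would print Failed.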
6. Shell Programming and Scripting
Hi,
I have a directory
/usr/inbound
-------------
10900.txt
10889.txt
109290202.txt
I need to create the inbound directory,
and I need to know the size of these files one by one.
If a file's size is zero, I need to print a message like "empty file".
Please help me with how to solve this.
thanks
krish. (4 Replies)
Discussion started by: kittusri9
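A sketch of the per-file check: `[ -s file ]` is true only when the file exists and is larger than zero bytes (the demo directory stands in for /usr/inbound).

```shell
dir=$(mktemp -d)                    # stand-in for /usr/inbound
printf 'x' > "$dir/10900.txt"
: > "$dir/10889.txt"                # zero bytes

for f in "$dir"/*.txt; do
    if [ -s "$f" ]; then
        echo "$f: $(wc -c < "$f") bytes"
    else
        echo "$f: empty file"
    fi
done
```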
7. UNIX for Advanced & Expert Users
Can anybody help?
How do I find the file size in UNIX? (5 Replies)
Discussion started by: lmraochodisetti
8. Shell Programming and Scripting
Hi All...
Can the below command be modified in such a way that I can get the file size along with the name and path of the file?
The below command only gives me the locations of files which are more than 100000k... but I want the exact size of the file also.
find / -name "*.*" -size +100000k
... (3 Replies)
Discussion started by: rpraharaj84
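One sketch: keep the same `-size` filter and let find run `ls -l` (or `du -h`) on every match. The demo below uses a 1k threshold and made-up files so it can actually run; on the real tree the threshold would stay `+100000k`.

```shell
root=$(mktemp -d)
dd if=/dev/zero of="$root/huge.bin" bs=1024 count=2 2>/dev/null
: > "$root/tiny.bin"

# Same idea as: find / -name "*.*" -size +100000k -exec ls -l {} \;
find "$root" -type f -size +1k -exec ls -l {} \;

# Or human-readable sizes
find "$root" -type f -size +1k -exec du -h {} \;
```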
9. UNIX for Dummies Questions & Answers
Hello dear unix command line friends !
I'm looking for a simple combination of ls & awk (maybe grep) to print:
list of folders of a directory
|_ ordered by size
like what I have with
$ du -sk ./* | sort -rn
printing that result:
8651520 ./New Virtual Machine_1
8389120 ./Redhat
... (1 Reply)
Discussion started by: holister
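The `du | sort` pipeline quoted above is already the usual answer; awk can then reshape the columns if the raw output is the objection (a sketch with invented directory names):

```shell
base=$(mktemp -d)
mkdir -p "$base/Redhat" "$base/New Virtual Machine_1"
dd if=/dev/zero of="$base/Redhat/disk.img" bs=1024 count=64 2>/dev/null
printf 'x' > "$base/New Virtual Machine_1/note.txt"

# Directories only (trailing slash), largest first;
# awk relabels du's tab-separated "size<TAB>path" output
du -sk "$base"/*/ | sort -rn | awk -F'\t' '{print $1 " KB  " $2}'
```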
10. UNIX for Dummies Questions & Answers
Is there a way to use the find command to recursively scan directories for files greater than 1Gb in size and print out the directory path and file name only?
Thanks in advance. (6 Replies)
Discussion started by: jimbojames
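A sketch with GNU tools: `truncate` makes a sparse file so the demo doesn't need a real gigabyte, and `-size` tests the apparent size, so the sparse file still matches.

```shell
root=$(mktemp -d)
truncate -s 2G "$root/big.img"      # sparse: 2 GB apparent size, ~0 on disk
: > "$root/small.txt"

# Recursively print only the paths of files larger than 1 GiB
find "$root" -type f -size +1G -print
```

On the real tree this becomes `find /some/dir -type f -size +1G -print`; POSIX find lacks the G suffix, so `-size +2097152` (512-byte blocks) is the portable spelling.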
LEARN ABOUT DEBIAN
bp_bulk_load_gff
BP_BULK_LOAD_GFF(1p) User Contributed Perl Documentation BP_BULK_LOAD_GFF(1p)
NAME
bulk_load_gff.pl - Bulk-load a Bio::DB::GFF database from GFF files.
SYNOPSIS
% bulk_load_gff.pl -d testdb dna1.fa dna2.fa features1.gff features2.gff ...
DESCRIPTION
This script loads a Bio::DB::GFF database with the features contained in a list of GFF files and/or FASTA sequence files. You must use the
exact variant of GFF described in Bio::DB::GFF. Various command-line options allow you to control which database to load and whether to
allow an existing database to be overwritten.
This script differs from bp_load_gff.pl in that it is hard-coded to use MySQL and cannot perform incremental loads. See bp_load_gff.pl for
an incremental loader that works with all databases supported by Bio::DB::GFF, and bp_fast_load_gff.pl for a MySQL loader that supports
fast incremental loads.
NOTES
If the filename is given as "-" then the input is taken from standard input. Compressed files (.gz, .Z, .bz2) are automatically
uncompressed.
FASTA format files are distinguished from GFF files by their filename extensions. Files ending in .fa, .fasta, .fast, .seq, .dna and their
uppercase variants are treated as FASTA files. Everything else is treated as a GFF file. If you wish to load FASTA files from STDIN,
then use the -f command-line switch with an argument of '-', as in
gunzip -c my_data.fa.gz | bp_fast_load_gff.pl -d test -f -
The nature of the bulk load requires that the database be on the local machine and that the indicated user have the "file" privilege to
load the tables and have enough room in /usr/tmp (or whatever is specified by the $TMPDIR environment variable) to hold the tables
transiently.
Local data may now be uploaded to a remote server via the --local option with the database host specified in the dsn, e.g.
dbi:mysql:test:db_host
The adaptor used is dbi::mysqlopt. There is currently no way to change this.
About maxfeature: the default value is 100,000,000 bases. If you have features that are close to or greater than 100 Mb in length, then the
value of maxfeature should be increased to 1,000,000,000. This value must be a power of 10.
Note that Windows users must use the --create option.
If the list of GFF or fasta files exceeds the kernel limit for the maximum number of command-line arguments, use the --long_list
/path/to/files option.
COMMAND-LINE OPTIONS
Command-line options can be abbreviated to single-letter options. e.g. -d instead of --database.
--database <dsn> Database name (default dbi:mysql:test)
--adaptor Adaptor name (default mysql)
--create Reinitialize/create data tables without asking
--user Username to log in as
--fasta File or directory containing fasta files to load
--long_list Directory containing a very large number of
GFF and/or FASTA files
--password Password to use for authentication
(Does not work with Postgres, password must be
supplied interactively or be left empty for
ident authentication)
--maxbin Set the value of the maximum bin size
--local Flag to indicate that the data source is local
--maxfeature Set the value of the maximum feature size (power of 10)
--group A list of one or more tag names (comma or space separated)
to be used for grouping in the 9th column.
--gff3_munge Activate GFF3 name munging (see Bio::DB::GFF)
--summary Generate summary statistics for drawing coverage histograms.
This can be run on a previously loaded database or during
the load.
--Temporary Location of a writable scratch directory
SEE ALSO
Bio::DB::GFF, fast_load_gff.pl, load_gff.pl
AUTHOR
Lincoln Stein, lstein@cshl.org
Copyright (c) 2002 Cold Spring Harbor Laboratory
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See DISCLAIMER.txt for
disclaimers of warranty.
perl v5.14.2 2012-03-02 BP_BULK_LOAD_GFF(1p)