Given the positions of the numbers in your ls output, the first number would be the link count, and the second the directory size (in blocks). What type of filesystem is mounted on that directory? A network one, a USB one, or some other that your OS doesn't fully understand, but is happy to read?
I've never really paid much attention, if I'm honest, to what NetApp, or whatever network storage, tells me in terms of the sizes of directories (files, and especially ownership, would be different; for example, the number 4294967295, which is 2^32 - 1, might indicate that ownership is not correctly mapped). I know that if I were to look on the filer itself, everything is probably OK. Whatever it chooses to report to your (client) OS as the correct information is being misinterpreted by the OS. It's possible there are some options when mounting the filesystem, or some software tightly coupled with the storage technology that can be installed, to get you the right numbers. But, generally, it's really not something to worry about.
By Size in Blocks - would this give an idea of how much size it occupies in GB ?
In principle: yes. Block size depends on the filesystem type, though. Originally in UNIX this was 512 bytes, and this is still true for many filesystems. The XENIX filesystem had a block size of 2048 bytes, and that too has been adopted by some others (although considerably fewer). Block sizes of 4096 bytes are used too, and there are probably others I just don't remember off the top of my head. Issue the mount command; most systems will show which type of FS they (think they) have mounted. Take it from there.
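For instance, on a GNU/Linux box you can ask both the kernel and the filesystem itself (`df -T` and the `stat -f` format sequences are GNU-specific; on other systems plain `mount` is the portable starting point):

```shell
# Show which filesystem type the kernel believes is mounted here.
df -T .

# Ask the filesystem for its type and fundamental block size (GNU stat).
stat -f -c 'type: %T, fundamental block size: %S bytes' .
```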
Quote:
Originally Posted by infernalhell
But i see the parent directory of this dir has a lower block size.. So i am a little confused.
There are several possible explanations for this: the parent directory could be on another FS or your system doesn't fully understand the networked FS it mounted (enough to handle the file access but can't grok the FS statistics).
Quote:
Originally Posted by infernalhell
By Size in Blocks - would this give an idea of how much size it occupies in GB ?
But i see the parent directory of this dir has a lower block size.. So i am a little confused.
The size of the directory is not the size of the blocks, but rather the product of block size and the number of blocks used. Consider this on my system:
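Something similar can be reproduced with a fresh file and directory (names are hypothetical; this assumes a filesystem with 4096-byte blocks):

```shell
# Create an empty file and an empty directory, then list both;
# -s prefixes each line with the allocated size in blocks.
mkdir -p demo demo/dir
touch demo/file
ls -lds demo/file demo/dir

# POSIX counts st_blocks in 512-byte units, so blocks * block size
# gives the real disk usage in bytes (GNU stat shown here).
stat -c '%n: size %s bytes, %b blocks of %B bytes' demo/file demo/dir
```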
Both file and dir occupy a single 4096-byte block. ls shows the size of the file regardless of the amount of disk space it occupies. The directory, on the other hand, appears to fill the block it occupies. As you add more to the file it grows until it occupies two, three, or even twenty blocks. Think of the directory as a look-up table containing two fields per record: the name of the file it "contains", and that file's inode number. As you add files to the directory the table fills up until there is no more room, at which point it grows into a second block. Now look at the number of links of your two directories. Evidently job contains more files than logs, requiring it to use more disk space; hence the discrepancy.
I should point out that I don't know the full workings of file systems; I don't really need to. The above assumptions just give me that warm, fuzzy feeling that I need to use them.
Basically I would want to know what 4294967295 and 2147549184 mean.
Something you can try is to determine the size of a dir entry on your system. It's a simple C program to write, and it would give you how much storage each dir entry takes up, although "no. of entries x size of each dir entry" may still not equal the 2147549184 figure. At least that would be the rationale behind it; the kernel has complete control over the inner workings of a dir, so one cannot be sure of what it's actually doing.
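The same rationale can be eyeballed from the shell, with the caveat that on most modern filesystems entries are variable-length (they include the name), so size divided by entry count is only an average, not a fixed sizeof():

```shell
# Rough check: directory's reported size divided by its entry count.
dir=.
entries=$(ls -a "$dir" | wc -l)
size=$(stat -c %s "$dir")
echo "$entries entries, $size bytes, ~$(( size / entries )) bytes/entry"
```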
I need to print a field and the next one if the field matches 'patternA', and also print 'patternB' fields.
echo "some output" | awk '{for(i=1;i<=NF;i++){if($i ~ /patternA/){print $i, $(i+1)}elif($i ~ /patternB/){print $i}}}'
This code returns a 'syntax error' (awk has no elif keyword; it has to be spelled else if). Please advise how to do this properly. (2 Replies)
In our environment we see a lot of events for NTP issues. I am unable to find what needs to be considered here. :(
ntpq -p fields.
remote refid st t when poll reach delay offset jitter
(1 Reply)
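A sketch of pulling the interesting columns out of that output with awk: the peer marked with a leading `*` is the one the daemon has selected for synchronisation, and offset and jitter are the last two columns (milliseconds). The sample line below is invented; a real `ntpq -p` can be piped into the same awk.

```shell
# Made-up ntpq -p data line; '*' marks the selected sync peer.
sample='*time.example.com .GPS.           1 u   33   64  377    0.521    0.113    0.020'

printf '%s\n' "$sample" |
awk '/^\*/ { print "synced to " $1 ", offset " $(NF-1) ", jitter " $NF }'
```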
Hi All,
I am using the following command in Linux:
sar -r 30 3
Linux 2.6.18-194.3.1.7.3.el5xen 02/07/2013
02:55:47 PM kbmemfree kbmemused %memused kbbuffers kbcached kbswpfree kbswpused %swpused kbswpcad
02:56:17 PM 128646024 22348920 14.80 230232 15575860 75497464 ... (4 Replies)
Attached is a file called diff.txt
It is the output from this command:
diff -y --suppress-common-lines --width=5000 1.txt 2.txt > diff.txt
I have also attached 1.txt and 2.txt for your convenience.
Both 1.txt and 2.txt contain one very long CSV string.
File 1.txt is a CSV dump of... (0 Replies)
Diff output as follows:
< AAA BBB CCC DDD EEE 123
> PPP QQQ RRR SSS TTT 111
> VVV WWW XXX YYY ZZZ 333
> AAA BBB CCC DDD EEE 124
How can I use awk to compare the last field to determine if the counter has increased, and need to ensure that the first 4 fields must have the same... (15 Replies)
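One way to approach it, sketched with the sample lines above, assuming the counter is the last field and everything between the diff marker and the counter is the match key:

```shell
# Remember the counter from each '<' line keyed by the leading fields,
# then flag '>' lines with the same key whose counter increased.
printf '%s\n' \
  '< AAA BBB CCC DDD EEE 123' \
  '> PPP QQQ RRR SSS TTT 111' \
  '> VVV WWW XXX YYY ZZZ 333' \
  '> AAA BBB CCC DDD EEE 124' |
awk '{
    key = ""
    for (i = 2; i < NF; i++) key = key OFS $i
    if ($1 == "<") old[key] = $NF
    else if ($1 == ">" && (key in old) && $NF > old[key])
        print "counter increased:" key, old[key] "->" $NF
}'
# -> counter increased: AAA BBB CCC DDD EEE 123->124
```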
Hi All,
Looking for a quick AWK script to output some differences between two files.
FILE1
device1 1.1.1.1 PINGS
device1 2.2.2.2 PINGS
FILE2
2862 SITE1 device1-prod 1.1.1.1 icmp - 0 ... (4 Replies)
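A sketch of one way to report the differences, assuming the IP address is field 2 of FILE1 and field 4 of FILE2; the FILE2 sample line here is partly invented, since the original was truncated:

```shell
# Recreate the sample data from the post.
printf '%s\n' 'device1 1.1.1.1 PINGS' 'device1 2.2.2.2 PINGS' > FILE1
printf '%s\n' '2862 SITE1 device1-prod 1.1.1.1 icmp - 0' > FILE2

# First pass stores FILE2 IPs (field 4); second pass prints FILE1
# lines (IP in field 2) that have no match in FILE2.
awk 'NR == FNR { seen[$4] = 1; next } !($2 in seen)' FILE2 FILE1
# -> device1 2.2.2.2 PINGS
```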
Hi,
I am writing code where the file is pipe-delimited and I would need to extract the 2nd part of field 2 if it is "ATTN", "C/O" or "%" and check to see if field 9 is populated or not. If field 9 is already populated then leave it as is, but if field 9 is not populated then take the 2nd part of... (3 Replies)
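Since the requirement is truncated, here is a sketch under one reading of it: when the second word of field 2 is ATTN, C/O or %, and field 9 is empty, copy the remainder of field 2 into field 9. The sample record is invented.

```shell
printf '%s\n' 'a|JOHN ATTN MARY|c|d|e|f|g|h||j' |
awk -F'|' 'BEGIN { OFS = FS }
{
    split($2, w, " ")
    if ((w[2] == "ATTN" || w[2] == "C/O" || w[2] == "%") && $9 == "") {
        rest = $2
        sub(/^[^ ]+ +/, "", rest)   # drop the first word of field 2
        $9 = rest                   # assigning $9 rebuilds the record with OFS
    }
    print
}'
# -> a|JOHN ATTN MARY|c|d|e|f|g|h|ATTN MARY|j
```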
Hi guys,
I couldn't find solution to this problem. If anyone knows please help me out.
Your guidance is highly appreciated.
I have two files -
FILE1 has the following 7 columns ( - has been added to make columns visible enough else columns are separated by single space)
155.34 - leg - 1... (8 Replies)
I am getting a variable as x=2006/01/18
now I have to extract each field from it.
Like x1=2006, x2=01 and x3=18.
Any idea how?
Thanks a lot for help.
Thanks
CSaha (6 Replies)
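The split can be done entirely in the shell by setting IFS to the slash, with no external commands:

```shell
x=2006/01/18
oldIFS=$IFS
IFS=/
set -- $x        # unquoted on purpose: splitting on IFS is the point
IFS=$oldIFS
x1=$1 x2=$2 x3=$3
echo "$x1 $x2 $x3"    # -> 2006 01 18
```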