Top Forums > Shell Programming and Scripting > Can anyone make this script run faster?
Post 302220273 by shew01 on Thursday 31st of July 2008 08:39:48 AM
Quote:
Originally Posted by Annihilannic
This should do it.

Code:
ls -l | awk 'NF>=5 {tot+=$5}END {printf "%.f\n",tot}'

The NF>=5 is to skip the first line, "total nnnn", which reports the number of 512-byte blocks used in the directory.
Yup. This works. Thanks!
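For later readers: a variant that avoids the "total" line altogether is to let find emit the sizes itself. This is only a sketch and assumes GNU find (for -printf); on other systems the ls-based one-liner above is the portable choice.

Code:
# Sum the sizes (in bytes) of regular files in the current directory.
# -maxdepth 1 keeps it to this directory; %s prints each size in bytes (GNU find).
find . -maxdepth 1 -type f -printf '%s\n' | awk '{tot+=$1} END {printf "%.f\n", tot}'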
 

10 More Discussions You Might Find Interesting

1. Solaris

Looking for a different debugger for Solaris, or a way to make Sun Studio faster

I'm using Sun Studio, but it is very slow. Is there any other GUI debugger for Sun Solaris, or at least some way to make it faster? I'm debugging through a telnet connection to a remote server. Thanks. (0 Replies)
Discussion started by: umen
0 Replies

2. Shell Programming and Scripting

awk help to make my work faster

Hi everyone, I have a file that contains line numbers. The file name is file1.txt: aa bb cc "12" qw / xx yy zz "23" we / bb qw we "123249" jh. Here 12, 23, and 123249 are the line numbers. Now, according to these line numbers, we have to print lines from another file named... (11 Replies)
Discussion started by: kumar_amit
11 Replies
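For requests like the one above, the usual pattern is a single awk pass that reads the quoted line numbers first and then prints only those lines from the second file. A sketch, assuming hypothetical file names file1.txt and file2.txt and assuming the wanted number is always the quoted field:

Code:
# Pass 1 (NR==FNR): remember each quoted number from file1.txt.
# Pass 2: print the lines of file2.txt whose line number (FNR) was remembered.
awk -F'"' 'NR==FNR {want[$2]=1; next} FNR in want' file1.txt file2.txt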

3. Red Hat

Re: How to make the Linux PC faster

Hi, can anyone help me out in solving this problem? I have a Linux database server that is so slow I am unable even to open a terminal. Is there any solution to get rid of this problem and make this server faster? Thanks & Regards, Venky (0 Replies)
Discussion started by: venky_vemuri
0 Replies

4. Shell Programming and Scripting

How to make copy work faster

I am trying to copy a folder which contains a list of C executables. It takes 2 minutes to complete, whereas the entire script takes only 3 more minutes for its other processing. Is there a way to copy the folder faster so that the performance of the script will improve? (2 Replies)
Discussion started by: prasperl
2 Replies
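A common answer to questions like this is that the copy is I/O-bound, so the biggest win is usually overlapping it with the rest of the script rather than trying to speed up cp itself. A minimal sketch, with hypothetical source and destination paths:

Code:
#!/bin/sh
# Start the copy in the background (hypothetical paths).
cp -r /path/to/binaries /path/to/destination &
copy_pid=$!

# ... do the other work that does not depend on the copied files ...

# Wait for the copy before anything touches the destination.
wait "$copy_pid"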

5. Shell Programming and Scripting

Make script faster

Hi all, in bash scripting I used to read files like this: cat $file | while read line; do ... done. However, it's a very slow way to read a file line by line. E.g., on a file that has 3 columns and fewer than 400 rows, like this, I run the following script: cat $line | while read line; do ## Reads each... (10 Replies)
Discussion started by: AlbertGM
10 Replies
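The usual first fix for that pattern is to drop the cat and redirect the file into the loop (one process fewer, and the loop stays in the current shell); for pure column work a single awk is faster still. A sketch, assuming a hypothetical three-column file named data.txt:

Code:
# Redirect the file into the loop instead of piping through cat;
# read splits the three columns directly.
while read -r col1 col2 col3; do
    printf '%s %s %s\n' "$col1" "$col2" "$col3"
done < data.txt

# For simple per-column processing, one awk process beats a shell loop.
awk '{print $1, $2, $3}' data.txt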

6. Shell Programming and Scripting

Making script run faster

Can someone help me edit the below script to make it run faster? Shell: bash. OS: Red Hat Linux. The point of the script is to grab the entire chunks of information concerning the service "MEMORY_CHECK". Each chunk begins with "service {" and ends with "}". I should... (15 Replies)
Discussion started by: SkySmart
15 Replies
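Without the full script, the usual awk approach for this kind of extraction is to buffer each "service {" ... "}" block and print it only if it mentions the wanted service. A sketch, with a hypothetical input file name status.dat:

Code:
# Buffer each "service {" ... "}" block and print it only when it
# contains MEMORY_CHECK (status.dat is a hypothetical file name).
awk '/^service [{]/ {inblk=1; buf=""}
     inblk           {buf = buf $0 "\n"}
     inblk && /^[}]/ {if (buf ~ /MEMORY_CHECK/) printf "%s", buf; inblk=0}' status.dat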

7. Shell Programming and Scripting

awk changes to make it faster

I have a script like the one below, which picks a number from one file, searches for it in another file, and prints the output. But it is very slow when run on a huge file. Can we modify it with awk? #! /bin/ksh while read line1 do echo "$line1" a=`echo $line1` if then echo "$num" cat file1|nawk... (6 Replies)
Discussion started by: mirwasim
6 Replies
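The loop quoted above starts one nawk process per input line, which is what makes it slow on a big file. The standard rewrite is a single awk/nawk that loads the numbers first and then scans the large file once. A sketch, with hypothetical file names numbers.txt and file1:

Code:
# Pass 1 (NR==FNR): load the search keys from numbers.txt.
# Pass 2: scan file1 once, printing lines whose first field is a key.
nawk 'NR==FNR {keys[$1]=1; next} $1 in keys' numbers.txt file1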

8. Shell Programming and Scripting

Optimize shell script to run faster

data.file: contact { contact_name=royce-rolls modified_attributes=0 modified_host_attributes=0 modified_service_attributes=0 host_notification_period=24x7 service_notification_period=24x7 last_host_notification=0 last_service_notification=0 host_notifications_enabled=1... (8 Replies)
Discussion started by: SkySmart
8 Replies
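For object files laid out like this data.file, awk's paragraph mode is usually much faster than a shell read loop, assuming the definition blocks are separated by blank lines. A sketch that pulls the block for the contact_name shown above:

Code:
# RS="" switches awk to paragraph mode: each blank-line-separated block
# becomes one record, so a whole definition can be matched in one test.
awk 'BEGIN {RS=""; ORS="\n\n"} /contact_name=royce-rolls/' data.file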

9. Shell Programming and Scripting

How to make awk command faster?

I have the command below, which reads a large file and takes 3 hours to run. Can something be done to make this command faster? awk -F ',' '{OFS=","}{ if ($13 == "9999") print $1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12 }' ${NLAP_TEMP}/hist1.out|sort -T ${NLAP_TEMP} |uniq>... (13 Replies)
Discussion started by: Peu Mukherjee
13 Replies
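Two common speedups for a pipeline like that one are replacing sort | uniq with sort -u (one fewer process and pass), and, when the output does not actually need to be ordered, de-duplicating inside awk so the external sort disappears entirely. A sketch that keeps the original field test; the output file name is a hypothetical stand-in for the truncated redirection:

Code:
# De-duplicate inside awk with an associative array; no external sort
# is needed if the result does not have to be ordered.
awk -F',' -v OFS=',' '$13 == "9999" {
    $0 = $1 OFS $2 OFS $3 OFS $4 OFS $5 OFS $6 OFS $7 OFS $8 OFS $9 OFS $10 OFS $11 OFS $12
    if (!seen[$0]++) print
}' "${NLAP_TEMP}/hist1.out" > deduped.out

Note that the array holds every distinct output line in memory; if that is a concern, keeping the sort but writing sort -u -T "${NLAP_TEMP}" is the safer middle ground.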

10. Shell Programming and Scripting

How to make faster loop in multiple directories?

Hello, I am on Ubuntu 18.04 Bionic. I have one shell script, run.sh (which is outside the scope of this question), that runs files under multiple directories, and one script that controls all the processes running under those directories (control.sh). I set a cron job to check each of them at two-minute intervals.... (3 Replies)
Discussion started by: baris35
3 Replies
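Without seeing run.sh, the usual way to speed up a per-directory loop is to launch the directories in parallel and then wait for all of them, rather than handling them one at a time. A minimal sketch, with a hypothetical parent directory:

Code:
#!/bin/bash
# Start run.sh in every sub-directory of a hypothetical parent in parallel,
# then block until all of the background jobs have finished.
for dir in /path/to/parent/*/; do
    ( cd "$dir" && ./run.sh ) &
done
wait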
df_hfs(1M)

NAME
     df_hfs: df - report number of free CDFS, HFS, or NFS file system disk blocks

SYNOPSIS
     df [-F FStype] [-o specific_options] [special|directory]...

DESCRIPTION
     The df command displays the number of free 512-byte blocks and free inodes available for file
     systems by examining the counts kept in the superblock or superblocks. If a special or a
     directory is not specified, the free space on all mounted file systems is displayed. If the
     arguments to df are path names, df reports on the file systems containing the named files. If
     the argument to df is a special of an unmounted file system, the free space in the unmounted
     file system is displayed.

   Options
     df recognizes the following options:

     o  Report only the number of kilobytes (KB) free.
     o  Report the total number of blocks allocated for swapping to the file system as well as the
        number of blocks free for swapping to the file system. This option is supported on HFS
        file systems only.
     o  Report the number of files free.
     o  Report only the actual count of the blocks in the free list (free inodes are not
        reported). When this option is specified, df reports on raw devices.
     o  Report only on the FStype file system type (see fstyp(1M)). For the purposes of this
        manual entry, FStype can be one of cdfs, hfs, and nfs for the CDFS, HFS, and NFS file
        systems, respectively.
     o  Report the entire structure described in statvfs(2).
     o  Report the total number of inodes, the number of free inodes, the number of used inodes,
        and the percentage of inodes in use.
     o  Report the allocation in kilobytes (KB).
     o  Report on local file systems only.
     o  Report the file system name. If used with no other options, display a list of mounted file
        system types.
     o  Specify options specific to the HFS file system type. specific_options is a comma-separated
        list of suboptions. The available suboption is: Report the number of used and free inodes.
     o  Report the total allocated block figures and the number of free blocks.
     o  Report the percentage of blocks used, the number of blocks used, and the number of blocks
        free. This option cannot be used with other options.
     o  Echo the completed command line, but perform no other action. The command line is
        generated by incorporating the user-specified options and other information derived from
        /etc/fstab. This option allows the user to verify the command line.

     When df is used on an HFS file system, the file space reported is the space available to the
     ordinary user, and does not include the reserved file space specified by minfree. Unreported
     reserved blocks are available only to users who have appropriate privileges. See tunefs(1M)
     for information about minfree.

     When df is used on NFS file systems, the number of inodes is displayed as -1. This is due to
     superuser access restrictions over NFS.

EXAMPLES
     Report the number of free disk blocks for all mounted file systems.
     Report the number of free disk blocks for all mounted HFS file systems.
     Report the number of free files for all mounted NFS file systems.
     Report the total allocated block figures and the number of free blocks, for all mounted file
     systems.
     Report the total allocated block figures and the number of free blocks, for the file system
     mounted as /usr.

WARNINGS
     df does not account for:
     o  Disk space reserved for swap space,
     o  Space used for the HFS boot block (8K bytes, 1 per file system),
     o  HFS superblocks (8K bytes each, 1 per disk cylinder),
     o  HFS cylinder group blocks (1K-8K bytes each, 1 per cylinder group),
     o  Inodes (currently 128 bytes reserved for each inode).

     Non-HFS file systems may have other items that this command does not account for.

     The option, from prior releases, has been replaced by the option.

FILES
     File system devices.
     Static information about the file systems.
     Mounted file system table.

SEE ALSO
     du(1), df(1M), fsck(1M), fstab(4), fstyp(1M), statvfs(2), mnttab(4).

STANDARDS CONFORMANCE
                                                                                        df_hfs(1M)
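As a hedged illustration only (these exact command lines are assumptions, not taken from the manual page above), typical df invocations of the kind described in EXAMPLES look like this:

Code:
df              # free blocks and inodes for all mounted file systems
df -F hfs       # restrict the report to one file system type (here HFS)
df -k /usr      # report allocation in kilobytes for the file system holding /usr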