this helps me out big time
Post 7172 by jerzey4life on Thursday 20th of September 2001 12:32:53 AM

Ever since I started playing with UNIX at work, I have found all kinds of helpful tools that my company has added to our /usr/bin/.

This is the one that helped me the most:


""""""ldr"""""""
Code:
#!/bin/sh
#
#  @(#) %filespec: ldr-2 %  %date_modified: Wed Sep  6 09:54:07 2000 %
#
#         Module:  ldr
#
#         List directories.
#         If directory name is not specified, list directories
#         under current directory.
#
#         Usage:  ldr [dir name]
#

if test $# -lt 1
then
   DIR="."
else
   DIR="$1"
fi

# Quote the expansions so directory names containing spaces work.
for arg in "$DIR"/*
do
   if [ -d "$arg" ]
   then
      basename "$arg"
   fi
done
exit 0
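
Here's what a session might look like (just a sketch; the directory names below are made up for the example and will depend on your system):

Code:
$ chmod +x ldr
$ ./ldr /usr
bin
include
lib
local
share
$ ./ldr        # no argument: lists directories under .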


It lists all the directories in the directory you're in (or under the directory you give as an argument).
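
If you just want this at the prompt without a script, a couple of standard commands come close (a sketch, not from the original post; note that -maxdepth is a GNU find extension, not POSIX):

Code:
ls -d */                     # subdirectories of ., shown with a trailing slash
find . -maxdepth 1 -type d   # also matches hidden directories and . itself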



added code tags for readability --oombera

Last edited by oombera; 02-18-2004 at 04:05 PM.
 
