08-11-2008
Ok, thanks.
I've made these changes. It still takes a little while to complete, but that's mostly down to the number of PDFs that aren't linked to.
After the first real run, once all of the unlinked PDFs have been archived, the process will be much quicker. I'll probably add it to a cron job or something so that it runs once a week automatically, and I won't have to worry about unused files wasting disk space.
Thanks for all your help!
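For the weekly cron idea, an entry along these lines would do it (the script path and log file here are hypothetical placeholders, not the actual script):

```shell
# Hypothetical crontab entry: run the unused-PDF archiver every Sunday at 03:00.
# Install with `crontab -e`; script path and log location are placeholders.
0 3 * * 0 /usr/local/bin/archive_unused_pdfs.sh >> /var/log/pdf-archive.log 2>&1
```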
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
I have a log file that is 20 - 80+ MB in size.
It logs one of our processes, and that process is multi-threaded, so the log file is kind of a mess. Here's an example:
The logfile looks like: "DATE TIME - THREAD ID - Details", and a new file is created... (4 Replies)
Discussion started by: elinenbe
2. Shell Programming and Scripting
I'm trying to make a simple search script but cannot get it right. The script should search for keywords inside files. Then return the file paths in a variable. (Each file path separated with \n).
#!/bin/bash
SEARCHQUERY="searchword1 searchword2 searchword3";
for WORD in $SEARCHQUERY
do
... (6 Replies)
Discussion started by: limmer
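A minimal working version of what this snippet seems to attempt (directory and search words are placeholders) can collect matching paths with grep -l, which prints only file names:

```shell
#!/bin/sh
# Sketch: find files under a directory containing any of several keywords,
# collecting the matching paths newline-separated in one variable.
# The directory and keywords are hypothetical placeholders.
dir=${1:-.}
results=""
for word in searchword1 searchword2 searchword3; do
    # -r searches recursively, -l prints only the names of matching files
    matches=$(grep -rl -- "$word" "$dir" 2>/dev/null)
    [ -n "$matches" ] && results="$results$matches
"
done
# De-duplicate paths that matched more than one keyword
printf '%s' "$results" | sort -u
```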
3. Shell Programming and Scripting
Hi,
I have a text file with data in that I wish to extract, assign to a variable and process through a loop.
Kind of the process that I am after:
1: Grep the text file for the values.
Currently using:
cat /root/test.txt | grep TESTING= | awk -F"=" '{ a = $2 } {print a}' | sort -u
... (0 Replies)
Discussion started by: Spoonless
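The cat | grep | awk pipeline above can be collapsed into a single awk call. A sketch, reusing the file path and TESTING key from the snippet (the loop body is a placeholder):

```shell
#!/bin/sh
# Sketch: pull the unique values of TESTING= from a file and loop over them.
# /root/test.txt and the TESTING key are taken from the snippet above;
# the pattern is anchored at line start, which grep alone did not guarantee.
file=${1:-/root/test.txt}
for value in $(awk -F'=' '/^TESTING=/ { print $2 }' "$file" | sort -u); do
    # Placeholder: process each extracted value here
    echo "processing: $value"
done
```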
4. UNIX for Dummies Questions & Answers
Hi all,
I have a problem searching hundreds of CSV files: the search takes too long (over 5 min).
The CSV files are "," delimited with 30 fields per line, but I always grep the same 4 fields - so is there a way to grep just those 4 fields to speed up the search?
Example:... (11 Replies)
Discussion started by: Whit3H0rse
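One way to restrict matching to specific columns is to let awk test only those fields instead of grepping whole lines. A sketch; the field numbers here are placeholders, since the teaser does not say which 4 of the 30 fields are meant:

```shell
#!/bin/sh
# Sketch: match a pattern against fields 1, 3, 5 and 7 of comma-delimited
# files only (field numbers and pattern are hypothetical placeholders).
pattern=$1; shift
awk -F',' -v pat="$pattern" '
    $1 ~ pat || $3 ~ pat || $5 ~ pat || $7 ~ pat
' "$@"
```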
5. Shell Programming and Scripting
Hello,
I am using sed in a for loop to replace text in a 100MB file. I have about 55,000 entries to convert in a csv file with two entries per line. The following script works to search file.txt for the first field from conversion.csv and then replace it with the second field. While it works fine,... (15 Replies)
Discussion started by: pbluescript
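A single-pass alternative to 55,000 separate sed runs is to load the conversion pairs into an awk array first. A sketch, assuming conversion.csv holds old,new pairs one per line as the teaser describes:

```shell
#!/bin/sh
# Sketch: replace every "old" value with its "new" value in one pass.
# Assumes conversion.csv lines look like: old,new
awk -F',' '
    NR == FNR { map[$1] = $2; next }   # first file: build the lookup table
    {
        for (old in map)
            gsub(old, map[old])        # replace all occurrences on the line
        print
    }
' conversion.csv file.txt
```

Caveat: gsub treats each old value as a regular expression, and the inner loop still visits every pair per line; for exact whole-token replacements, splitting the line into fields and doing a direct array lookup per field scales much better.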
6. Shell Programming and Scripting
This is my first experience writing a Unix script. I've created the following script. It does what I want it to do, but I need it to be a lot faster. Is there any way to speed it up?
cat 'Tax_Provision_Sample.dat' | sort | while read p; do fn=`echo $p|cut -d~ -f2,4,3,8,9`; echo $p >> "$fn.txt";... (20 Replies)
Discussion started by: JohnN6
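The per-line echo >> loop spawns a subshell and a cut process for every record; awk can do the whole split in one pass. A sketch using the same file name, ~ delimiter, and field list as the snippet:

```shell
#!/bin/sh
# Sketch: split a ~-delimited file into per-key files in a single awk pass.
# The field list 2,4,3,8,9 mirrors the cut -f2,4,3,8,9 in the snippet.
sort Tax_Provision_Sample.dat | awk -F'~' '
    {
        fn = $2 "~" $4 "~" $3 "~" $8 "~" $9 ".txt"
        print >> fn
        close(fn)   # avoid running out of open file descriptors
    }
'
```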
7. Shell Programming and Scripting
Dear all,
Please help with the following.
I have a file, let's call it data.txt, that has 3 columns and approx 700,000 lines, and looks like this:
rs1234 A C
rs1236 T G
rs2345 G T
Please use code tags as required by forum rules!
I have a second file, called reference.txt,... (1 Reply)
Discussion started by: aberg
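The teaser is cut off, but matching a ~700,000-line file against a reference is usually done with an awk hash join rather than a per-line grep. A sketch, assuming the first column (the rs identifier) is the join key in both files:

```shell
#!/bin/sh
# Sketch: print the lines of data.txt whose first column also appears
# in reference.txt. Assumes column 1 (the rs identifier) is the key.
awk '
    NR == FNR { ref[$1] = 1; next }   # first file: remember reference keys
    $1 in ref                          # second file: print matching lines
' reference.txt data.txt
```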
8. Shell Programming and Scripting
Hi guys, hoping someone can help.
I have two files, both containing UK phone numbers.
master is a file which has been collated over a few years and currently contains around 4 million numbers.
new is a file which also contains 4 million numbers. I need to split new into two separate files... (4 Replies)
Discussion started by: dunryc
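Splitting new against a 4-million-line master is another hash-join job; holding master's numbers in an awk array keeps it to one pass over each file. A sketch, assuming one phone number per line and hypothetical output file names:

```shell
#!/bin/sh
# Sketch: split "new" into numbers that appear in "master" and numbers
# that do not. Assumes one phone number per line in both files;
# the output file names are placeholders.
awk '
    NR == FNR  { seen[$1] = 1; next }           # load master numbers
    $1 in seen { print > "new_in_master.txt"; next }
               { print > "new_not_in_master.txt" }
' master new
```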
9. Shell Programming and Scripting
Hi,
I've written a ksh script that reads a file and parses/filters/formats each line. The script runs as expected, but it takes 24+ hours for a file that has 2 million lines. And sometimes the input file has 10 million lines, which means it can run for more than 2 days and still not finish.... (9 Replies)
Discussion started by: newbie_01
10. Shell Programming and Scripting
Hello experts,
we have input files with 700K lines each (one generated for every hour), and we need to convert them as below and then move them to another directory.
Sample INPUT:-
# cat test1
1559205600000,8474,NormalizedPortInfo,PctDiscards,0.0,Interface,BG-CTA-AX1.test.com,Vl111... (7 Replies)
Discussion started by: prvnrk
LEARN ABOUT REDHAT
popularity-contest
POPULARITY-CONTEST(8) POPULARITY-CONTEST(8)
NAME
popularity-contest - list the most popular Debian packages
SYNOPSIS
popularity-contest
DESCRIPTION
The popularity-contest command gathers information about Debian packages installed on the system, and prints the name of the most recently
used executable program in that package as well as its last-accessed time (atime) and last-attribute-changed time (ctime) to stdout.
When aggregated with the output of popularity-contest from many other systems, this information is valuable because it can be used to
determine which Debian packages are commonly installed, used, or installed and never used. This helps Debian maintainers make decisions
such as which packages should be installed by default on new systems.
The resulting statistic is available from the project home page https://popcon.debian.org/.
Normally, popularity-contest is run from a cron(8) job, /etc/cron.daily/popularity-contest, which automatically submits the results to
Debian package maintainers (only once a week) according to the settings in /etc/popularity-contest.conf and
/usr/share/popularity-contest/default.conf.
SEE ALSO
The popularity-contest FAQ at /usr/share/doc/popularity-contest/FAQ popcon-largest-unused(8), cron(8)
Additional documentation is in /usr/share/doc/popularity-contest/.
AUTHOR
Avery Pennarun <apenwarr@debian.org>.
Debian/GNU Linux November 2001 POPULARITY-CONTEST(8)