Full Discussion: Optimizing query
Post 302130202 by matrixmadhan on Friday 3rd of August 2007 02:00:44 PM
Shell_Life

Out of the four potential hazards you listed, and since the query is executed only on a table with about 0.25 million records, I run into just the fourth one: it takes a really long time.

When it initially took that long, I thought I might get a 'Long transaction aborted' error, but I didn't.

The alternative you suggest, deleting the rows programmatically, is a fine idea, since it avoids filling up the logs.
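For anyone landing on this thread later, the batched approach looks roughly like the sketch below. It is only a sketch: run_sql is a made-up wrapper for whatever command-line client your database ships (assumed to execute one statement, commit it, and print the number of rows affected), and the table name, predicate, and 10,000-row batch size are example values to adjust for the real schema.

    #!/bin/sh
    # Hypothetical batched delete: remove rows a chunk at a time so each
    # transaction stays small and the logs are never filled by one big DELETE.
    #
    # "run_sql" is a stand-in for your database's CLI client; assume it runs
    # the statement it is given, commits, and prints the number of rows affected.

    BATCH=10000

    while :
    do
        rows=$(run_sql "DELETE FROM big_table
                        WHERE  created < DATE '2007-01-01'
                        AND    ROWNUM <= $BATCH")   # ROWNUM is Oracle-style row limiting;
                                                    # use LIMIT/TOP/FETCH FIRST elsewhere

        # Stop once a pass deletes nothing.
        [ "${rows:-0}" -eq 0 ] && break
    done

Each iteration commits a small transaction, so the log space needed at any one time stays bounded by the batch size rather than by the total number of rows deleted.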
 

10 More Discussions You Might Find Interesting

1. Filesystems, Disks and Memory

Optimizing the system reliability

My product has around 10-15 programs/services running in the Sun box, which together complete a task sequentially. Several instances of each program/service are running in the UNIX box, to manage the load and for risk-management reasons. As of now, we don't follow a strict strategy in... (2 Replies)
Discussion started by: Deepa

2. Filesystems, Disks and Memory

optimizing disk performance

I have some questions regarding disk performance, and what I can do to make it just a little (or much :)) faster. From what I've heard, the first partitions will be faster than the later ones because tracks at the outer edges of a hard drive platter simply move faster. But I've also read in... (4 Replies)
Discussion started by: J.P

3. Shell Programming and Scripting

Optimizing for a Speed-up

How would one go about optimizing this current .sh program so it runs in less time? For instance, is there a better way to count what I need than what I have done, or a better way to match patterns in the file? (A one-pass awk sketch follows this entry.) Thanks, #declare variables to be used. help=-1 count=0 JanCount=0 FebCount=0... (3 Replies)
Discussion started by: switch
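As an aside for readers of that thread: one common alternative to keeping a counter variable per month is to let awk build the tally in a single pass. The sketch below assumes the input is a plain text file (here called datafile, a made-up name) whose records carry a three-letter month abbreviation somewhere on the line; adjust the match to the real format.

    # One-pass month tally: count lines per month instead of keeping
    # JanCount, FebCount, ... and scanning the file once per month.
    awk '{
            if (match($0, /Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec/))
                    count[substr($0, RSTART, 3)]++   # first month abbreviation on the line
         }
         END { for (m in count) print m, count[m] }' datafile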

4. OS X (Apple)

Optimizing OSX

Hi forum, I'm administering a workstation/server for my lab and I was wondering how to optimize OSX, in particular which unnecessary background tasks I could remove from the system to free up as much memory and CPU power as possible. Other optimization tips are also welcome (HD parameters, memory... (2 Replies)
Discussion started by: deiphon

5. Shell Programming and Scripting

Optimizing the code

Hi, I have two files in the format listed below. I need to find all values from field 12 to field 20 that are present in file 2 and list them in file3 (same format as file2). File1: FEIN,CHRISTA... (2 Replies)
Discussion started by: nua7

6. Shell Programming and Scripting

Optimizing awk script

Can this awk statement be optimized? I ask because log.txt is a giant file with several hundred thousand lines of records. myscript.sh: while read line do searchterm="${1}" datecurr=$(date +%s) file=$(awk 'BEGIN{split(ARGV,var,",");print var}' $line) ... (3 Replies)
Discussion started by: SkySmart

7. Shell Programming and Scripting

Optimizing search using grep

I have a huge log file close to 3GB in size. My task is to generate some reporting based on the number of times something is being logged. I need to find the number of times StringA, StringB, and StringC are logged, counted separately. What I am doing right now is: grep "StringA" server.log | wc -l... (a single-pass sketch follows this entry) (4 Replies)
Discussion started by: Junaid Subhani
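For completeness, the usual answer in that situation is to make one pass over the log instead of one grep per string. A minimal sketch, assuming the file name server.log and the three literal strings from the question:

    # Count lines containing each string in a single pass over the log,
    # instead of running grep ... | wc -l three times over a 3 GB file.
    awk '/StringA/ { a++ }
         /StringB/ { b++ }
         /StringC/ { c++ }
         END {
             print "StringA:", a + 0
             print "StringB:", b + 0
             print "StringC:", c + 0
         }' server.log

Like grep | wc -l, this counts matching lines; if a string can occur several times on one line and every occurrence matters, gsub() on each line would be the tool instead.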

8. Shell Programming and Scripting

Optimizing find with many replacements

Hello, I'm looking for advice on how to optimize this bash script. Currently I use the shotgun approach to avoid the file I/O and buffering problems of forks trying to write simultaneously to the same file. I'd like to keep this as a fairly portable bash script rather than writing a C routine. in a... (8 Replies)
Discussion started by: f77hack

9. Shell Programming and Scripting

Optimizing bash loop

Now, I have to search for a pattern within a particular time frame, which the user will provide in the following format: 19/Jun/2018:07:04,21/Jun/2018:21:30. It is easy to get tempted to attempt this search with a variation of the following awk command (a GNU awk sketch follows this entry): awk... (3 Replies)
Discussion started by: SkySmart
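For readers of that thread, a rough sketch of the timestamp comparison follows. It assumes GNU awk (for mktime()), an Apache-style timestamp such as [19/Jun/2018:07:04:23 +0000] in field 4 of a file called access.log, and a placeholder /PATTERN/ for whatever is being searched; all of those are assumptions to adapt to the real log.

    start='19/Jun/2018:07:04'
    end='21/Jun/2018:21:30'

    # Print matching lines whose timestamp falls inside the window.
    gawk -v s="$start" -v e="$end" '
    function to_epoch(t,    a, m) {
        # "19/Jun/2018:07:04[:23]" -> seconds since the epoch
        split(t, a, "[/:]")
        m = (index("JanFebMarAprMayJunJulAugSepOctNovDec", a[2]) + 2) / 3
        return mktime(a[3] " " m " " a[1] " " a[4] " " a[5] " " (a[6] ? a[6] : 0))
    }
    BEGIN { lo = to_epoch(s); hi = to_epoch(e) }
    {
        t = to_epoch(substr($4, 2))          # field 4 looks like "[19/Jun/2018:07:04:23"
        if (t >= lo && t <= hi && /PATTERN/) print
    }' access.log

Converting both the window boundaries and each log timestamp to epoch seconds avoids the trap of comparing dd/Mon/yyyy strings lexically, which does not sort chronologically.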

10. Web Development

Optimizing JS and CSS

Yes. Got a few suggestions: - How about minifying resources - mod_expires - Service workers setup https://www.unix.com/attachments/web-programming/7709d1550557731-sneak-preview-new-unix-com-usercp-vuejs-demo-screenshot-png (8 Replies)
Discussion started by: Akshay Hegde
vxtranslog(1M)

NAME
       vxtranslog - administer transaction logging

SYNOPSIS
       vxtranslog [-H] [-l] [-m {on|off}] [-n number] [-q {on|off}] [-s size]

DESCRIPTION
       The vxtranslog command is used to administer transaction logging in Veritas Volume Manager (VxVM). This feature can be used to record transactions that are carried out by the vxconfigd daemon at the request of VxVM commands, and can be used in conjunction with the command logging feature (see vxcmdlog(1M)).

       When the current log file reaches a maximum size, it is renamed as a historic log file, and a new current log file is created. A limited number of historic log files is preserved to avoid filling up the file system. In addition, logging of query requests is turned off by default to prevent the log files from being filled too quickly.

       Each log file contains a header that records the host name, host ID, and the date and time that the log was created. See the EXAMPLES section below for a description of the entries that are recorded in a log file.

OPTIONS
       -H           Displays detailed help about the usage of the command.

       -l           Lists current settings for transaction logging. This shows whether transaction and query logging are enabled, the maximum number of historic log files, and the maximum log file size.

       -m {on|off}  Turns transaction logging on or off. By default, transaction logging is turned on, but query logging is turned off. Query logging can be turned on by using the -q option.

       -n number    Sets the maximum number of historic log files to maintain. The default number is 5. If number is set to no_limit, there is no limit on the number of historic log files that are created.

       -q {on|off}  Turns query logging on or off. By default, query logging is turned off to prevent the log files filling too rapidly.

       -s size      Sets the maximum size to which a transaction log can grow. (Note that this setting has no effect on existing historic log files.) The suffix modifiers k, m, and g may be used to express sizes in kilobytes, megabytes, and gigabytes respectively. If no suffix is specified, the default units are kilobytes. If size is set to no_limit, there is no limit on the size of the log file. The size of the transaction log is checked after an entry has been written, so the actual size may be slightly larger than that specified. When the log reaches the specified size, the current transaction log file, translog, is renamed as the next available historic log file, translog.number, where number is an integer from 1 up to the maximum number of historic log files that is currently defined. If the maximum number of historic log files has been reached, the oldest historic log file is removed, and the current log file is renamed as that file. The default maximum size of the transaction log file is 1m (1 MB).

EXAMPLES
       Turn on query logging:
              vxtranslog -q on

       Set the maximum transaction log file size to 512KB:
              vxtranslog -s 512k

       Set the maximum number of historic transaction log files to 10:
              vxtranslog -n 10

       The following are sample entries from a transaction log file:
              Thu Feb 13 19:30:57 2003
              Clid = 26924, PID = 3001, Part = 0, Status = 0, Abort Reason = 0
                  DA_GET SENA0_1
                  DISK_GET_ATTRS SENA0_1
                  DISK_DISK_OP SENA0_1 8
                  DEVNO_GET SENA0_1
                  DANAME_GET 0x1d801d8 0x1d801a0
                  GET_ARRAYNAME SENA 50800200000e78b8
                  CTLR_PTOLNAME /pci@1f,4000/pci@5/SUNW,qlc@4/fp@0,0
                  GET_ARRAYNAME SENA 50800200000e78b8
                  CTLR_PTOLNAME /pci@1f,4000/pci@5/SUNW,qlc@5/fp@0,0
                  DISCONNECT <no request data>

       The first line of each log entry is the time stamp of the transaction. The Clid field corresponds to the client ID for the connection that the command opened to vxconfigd. The Status and Abort Reason fields contain error codes if the transaction does not complete normally. The remainder of the record shows the data that was used in processing the transaction.

       Note: The client ID is the same as that recorded for the corresponding command in the command log.

FILES
       /etc/vx/log                   Symbolic link to the log directory. This can be redefined if necessary.
       /etc/vx/log/translog          Current transaction log.
       /etc/vx/log/translog.number   Historic transaction logs.

SEE ALSO
       vxcmdlog(1M)

       Veritas Volume Manager Troubleshooting Guide

VxVM 5.0.31.1                      24 Mar 2008                   vxtranslog(1M)