01-07-2012
So you want to establish a full read on a file, while other threads may still be writing to it?
The tail -f command only starts from a fixed distance before the end of the file (the last 10 lines by default) and follows new output from there, skipping the earlier part of the file. You could instead use a pager like 'less' (or 'less -N' for line numbers): it understands vi-style movement commands, so G jumps to the end of the file, g jumps all the way back to the first line, and F follows new output as it is appended, much like tail -f...
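If the goal is simply to read the whole file from the first line and keep following new writes, GNU tail can do both at once with -n +1. A minimal sketch (/tmp/demo.log is a stand-in file, not from the original question):

```shell
# Stand-in for a log that another process is still appending to.
printf 'line1\nline2\nline3\n' > /tmp/demo.log

# -n +1 makes tail start at line 1 instead of the last 10 lines,
# so the entire file is printed. Adding -f would then keep following
# new lines as they are written (omitted here so the command exits).
tail -n +1 /tmp/demo.log
```

In practice you would run `tail -n +1 -f /tmp/demo.log` and interrupt it with Ctrl+C when done.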
SAVE_BINARY_LOGS(1p) User Contributed Perl Documentation SAVE_BINARY_LOGS(1p)
NAME
save_binary_logs - Concatenates binary or relay logs from the specified file/position to the end of the log. This command is automatically
executed by MHA Manager on failover; manual execution should not normally be needed.
SYNOPSIS
# Test
$ save_binary_logs --command=test --binlog_dir=/var/lib/mysql --start_file=mysqld-bin.000002
# Saving binary logs
$ save_binary_logs --command=save --binlog_dir=/var/lib/mysql --start_file=mysqld-bin.000002 --start_pos=312 \
    --output_file=/var/tmp/aggregate.binlog
# Saving relay logs
$ save_binary_logs --command=save --start_file=mysqld-relay-bin.000002 --start_pos=312 --relay_log_info=/var/lib/mysql/relay-log.info \
    --output_file=/var/tmp/aggregate.binlog
save_binary_logs concatenates binary or relay logs from the specified log file/position to the end of the log. This tool is intended to be
invoked from the master failover script (MHA Manager), and manual execution is normally not needed.
DESCRIPTION
Suppose that the master has crashed and the most up-to-date slave has received binary logs up to mysqld-bin.000002:312. It is likely that the
master has more binary logs. If they are not sent to a slave, the slaves will lose all binlog events from mysqld-bin.000002:312 onward. The
purpose of save_binary_logs is to save the binary logs that have not been replicated to slaves. If the master is reachable through SSH and its
binary logs are readable, saving them is possible.
Here is an example:
$ save_binary_logs --command=save --start_file=mysqld-bin.000002 --start_pos=312 --output_file=/var/tmp/aggregate.binlog
Then all binary log events starting from mysqld-bin.000002:312 are concatenated and stored in /var/tmp/aggregate.binlog. If you have binary
logs up to mysqld-bin.000004, the following mysqlbinlog outputs are written:
mysqld-bin.000002: Format Description Event (FDE), plus from 312 to the tail
mysqld-bin.000003: from 0 to the tail, excluding FDE
mysqld-bin.000004: from 0 to the tail, excluding FDE
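As an illustration of the copy-from-position behaviour described above, here is a sketch using a toy file (not part of the man page; the real tool also handles the binlog format headers and the FDE):

```shell
# Toy stand-in for a binary log: 10 bytes of "header", then event data.
printf 'HEADERxxxxEVENTS' > /tmp/binlog-demo.000002

# Copy from byte offset 10 to the end of the file, the way
# save_binary_logs copies from --start_pos to the tail of the
# first log it saves. tail -c +N starts output at byte N,
# so skipping START_POS bytes means starting at START_POS + 1.
START_POS=10
tail -c +$((START_POS + 1)) /tmp/binlog-demo.000002 > /tmp/aggregate-demo.binlog
cat /tmp/aggregate-demo.binlog
```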
perl v5.14.2 2012-01-08 SAVE_BINARY_LOGS(1p)