Parsing large files in Solaris 11 (Shell Programming and Scripting) - Post 302952516 by Corona688, Wednesday 19th of August 2015, 01:04:51 PM
He's got a point, though. The system can only soften the blow of 128 system calls versus 1 so much; try running dd with bs=1 and see how slow it gets. I only suggested a tiny block size because it seemed necessary, which was my own mistake from leaving sync in the conv options.
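
A quick way to see the difference for yourself (just a sketch; the scratch file path and sizes here are arbitrary):

  # Make a 1 MB scratch file to read back.
  dd if=/dev/zero of=/tmp/blocktest bs=1024k count=1

  # bs=1: one read() and one write() per byte, so roughly two million system calls and a painfully slow run.
  time dd if=/tmp/blocktest of=/dev/null bs=1

  # bs=128k: only a handful of system calls in total, finishes almost instantly.
  time dd if=/tmp/blocktest of=/dev/null bs=128k

On Solaris you can also put truss -c in front of either dd command to see the read/write call counts directly.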
 

tz(4)							     Kernel Interfaces Manual							     tz(4)

Name
       tz - SCSI magnetic tape interface

Syntax
       VAX NCR 5380:
	 adapter      uba0    at nexus?
	 controller   scsi0   at uba0	 csr 0x200c0080  vector szintr
	 tape	      tz0     at scsi0	 drive 0

       VAX DEC SII:
	 adapter      ibus0   at nexus?
	 controller   sii0    at ibus?	 vector sii_intr
	 tape	      tz0     at sii0	 drive 0

       RISC DEC SII:
	 adapter      ibus0   at nexus?
	 controller   sii0    at ibus?	 vector sii_intr
	 tape	      tz0     at sii0	 drive 0

       RISC DEC KZQ:
	 adapter      uba0    at nexus?
	 controller   kzq0    at ibus?	 csr 0761300	 vector sii_intr
	 tape	      tz0     at kzq0	 drive 0

       RISC NCR ASC:
	 adapter      ibus0   at nexus?
	 controller   asc0    at ibus?	 vector ascintr
	 tape	      tz0     at asc0	 drive 0

Description
       The SCSI tape driver provides a standard tape drive interface as described in mtio(4).  This is a driver for any Digital SCSI tape device.

       For the TZK10 QIC format tape drive, the supported densities are QIC-24 (read only, 512-byte blocks), QIC-120 and QIC-150 (read/write,
       512-byte blocks), and QIC-320 (read/write, 1024-byte blocks).  With QIC format tapes all reads and writes must be in multiples of the
       block size.  This is a requirement of fixed block tape drives because record boundaries are not preserved.  The QIC densities are
       selected using the following special device names:

	 QIC-24 Fixed block size.
	 QIC-120 Fixed block size.
	 QIC-150 Fixed block size.
	 QIC-320 Fixed block size.
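
       For example (a sketch; the rmt0h device name and the archive file name are assumptions for illustration), a read from a QIC-150
       cartridge must use a block size that is a multiple of 512 bytes:

	 # 10240 bytes = 20 * 512, so this read satisfies the fixed-block requirement.
	 dd if=/dev/rmt0h of=backup.tar bs=10240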

       With all fixed block tape devices a dd(1) copy of a file to the tape must be padded out to a multiple of the block size.  An example of
       this is a copy of /etc/gettytab, which has a size of approximately 3800 bytes:
       dd if=/etc/gettytab of=/dev/rmt0h bs=10k conv=sync
	 or
       dd if=/etc/gettytab of=/dev/rmt0l bs=512 conv=sync
       The conv=sync option of dd(1) pads the output to the block size.
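
       As a rough illustration (a sketch that needs no tape drive; the output file name is assumed), the same padding can be observed by
       writing to an ordinary file:

	 # /etc/gettytab is about 3800 bytes: 7 full 512-byte blocks plus one partial block.
	 # conv=sync zero-pads the partial block, so 8 * 512 = 4096 bytes are written.
	 dd if=/etc/gettytab of=/tmp/padded.out bs=512 conv=sync
	 ls -l /tmp/padded.out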

       This driver also supports n-buffered reads and writes to the raw tape interface (used with streaming tape drives).  See nbuf(4) for
       further details.

Tape Support
       TZ30, TZK50, TLZ04, TSZ05, TKZ08, TZK10

Diagnostics
       All diagnostic messages are sent to the error logger subsystem.

Files
See Also
       mtio(4), nbuf(4), SCSI(4), MAKEDEV(8), uerf(8), tapex(8)
       Guide to the Error Logger

																	     tz(4)