Operating Systems > SCO
Need advice: Copying large CSV report files off SCO system
Post 302539209 by jgt on Friday 15th of July 2011 04:50:31 PM
Step 1.
Sign on as root and run:
# custom
This will display the software installed (and allow you to add or delete software).
One of the items will be either OpenServer Enterprise or Host.
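If I remember right, the Enterprise edition bundles the TCP/IP networking products while Host does not, which is why the distinction matters for getting files off the box. If you would rather not go through the custom menus, something like the following should also tell you what is installed (swconfig and uname -X are standard on OpenServer 5, but verify on your release):

# swconfig              (non-interactive list of the installed software)
# uname -X              (extended OS and licence information)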

The fact that the terminals are attached through a multi-port serial card strongly suggests that you have Host.
The 'hw' command will display the installed hardware.
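To get that hardware listing from the console, either of these should do (hwconfig and its -h option are present on OpenServer 5 as far as I know; check your release):

# hw                    (summary of the installed hardware)
# hwconfig -h           (boot-time hardware configuration, with column headings)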
Use the "mkdev hd" command to add a hard disk.
If you add a SCSI disk, you will also need either a second computer with the same SCSI controller, or a way of booting this machine from diskette or CD.
Alternatively, add an IDE disk if the BIOS allows you to boot from SCSI ahead of IDE.
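Once the new drive is physically installed, the rough sequence below is what I would expect. Treat it as a sketch only: the mkdev prompts differ between releases, you may be asked to relink the kernel and reboot partway through, and the mount point and report directory are example names, not anything taken from your system.

# mkdev hd                                    (configure the new drive; answer the prompts)
# mkdev fs                                    (create and add a filesystem on the new drive)
# mount /newdisk                              (works if mkdev fs added it to /etc/default/filesys; otherwise mount the device explicitly)
# cd /usr/reports                             (example path - wherever the CSV report files live)
# tar cvf - . | (cd /newdisk && tar xvf -)    (preserves ownership and permissions)

A plain "cp *.csv /newdisk" would also do if you do not care about ownership.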
 
