ALOM won't work when KVM connected to Sun Fire V440 server
Posted by jimmy54321 in Operating Systems / Solaris on Thursday, 14 October 2010, 11:04 AM

Hi,

I was asked to connect a KVM screen to a Sun Fire V440 last night, so I connected it up, but no joy: nothing appeared on the KVM screen. I was told that a reboot might fix the problem, so I connected to the ALOM and rebooted. On the plus side, the KVM screen now works, but I have lost the ALOM console.

On the ALOM output (console -f) I can watch the server start to boot, and then it hangs halfway through. The last ALOM messages I see before it freezes include one about probing IDE devices, and then I get an error about a USB device.

I saw another post on here from someone who had the same problem, but it doesn't say how it was fixed:
console -f not working in ALOM

Is there a way of getting the ALOM and KVM screen up and running together? At the moment it's either one or the other.
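
In case it helps, here is what I have looked at so far. If I understand it right, the OpenBoot variables input-device and output-device decide whether the console goes to the serial line that the ALOM uses or to the local keyboard and screen, which would explain why it is one or the other. This is just what I checked from the running Solaris side; the ttya / keyboard / screen values below are the usual ones and I am not certain they are right for the V440, so I have not actually changed anything yet:

# where OpenBoot is currently sending the console
eeprom input-device
eeprom output-device

# what I think would send it back to the serial line the ALOM uses
# (not run yet - it would only take effect on the next reboot)
eeprom input-device=ttya
eeprom output-device=ttya

I held off on changing these in case it just moves the problem back the other way, which is why I am asking whether both can work at the same time.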

Thanks in advance
Jimmy

 

10 More Discussions You Might Find Interesting

1. Solaris

Sun Fire V440 and Patch 109147-39

Got a curious issue. I applied 109147-39 to, oh, 15 or so various systems, all running Jumpstarted Solaris 8. When I hit the first two V440s, they both failed with return code 139. All non-shell commands segfaulted from then on. The patch modified mainly the linker libraries and commands. ... (2 Replies)
Discussion started by: BOFH

2. Solaris

Sun Fire v440 keeps shutting down

Hello, I hope you can help me. I am new to Sun servers and we have a Sun Fire v440 server in which one power supply failed; we are waiting for a new one. But now our server is shutting down constantly. Is there any setting with which we can prevent this behaviour? (1 Reply)
Discussion started by: Tibor

3. Solaris

Sun Fire v440 hardware problem (can't get ok>)

First of all, it shuts down 60 seconds after power-on and writes on the console: SC Alert: Correct SCC not replaced - shutting managed system down! This is cured by removing the battery from the ALOM card. Now the server starts to loop during testing. This is on the console: >@(#) Sun Fire V440,Netra... (14 Replies)
Discussion started by: Alisher

4. Solaris

error messages in Sun Fire V440

Hello, I am seeing error messages on a V440 (OS = Solaris 8). I have copied them here: The system does not reboot constantly and it has been up for the last 67 days. One more interesting thing I found: I see errors start appearing at 4:52 AM, last until 6 AM, and start again at 16:52 on the same day. I... (5 Replies)
Discussion started by: upengan78

5. Solaris

Firmware password Solaris Sun Fire v440

Hi: I bought a used Sun Fire v440, and it has a firmware password. When I turn on the server, it asks for the firmware password. (I don't know what the correct password is.) I can access the SC, but when I want to access the OBP, the firmware password prompt appears again. I removed the battery for two hours,... (1 Reply)
Discussion started by: mguazzardo

6. Solaris

Sun Fire v440 Overheat Problem

Dear Team, I need some expert advice on my problem. We have a Sun Fire v440 at a customer site. The server is working fine and no hardware deviations are found, except for one problem: the processors are generating too much heat. I have verified and found that the room temperature was 26-27 degrees.... (5 Replies)
Discussion started by: sudhansu

7. Solaris

Connect using ALOM to Sun Fire V210

I have bought a second-hand Sun Fire V210 server from eBay and I'm really stumped at the lack of complete instructions on how to connect to it. I don't have a Windows machine; I've only got Ubuntu and OS X computers. None of them have an old RS-232 port on them either. That said, I have... (12 Replies)
Discussion started by: danijeljames

8. Solaris

Sun-Fire V440 boot disk issue

Hi, I have a Sun Fire V440. The boot disks are mirrored. The system crashed and it's not coming up. The error message is: Insufficient metadevice database replicas located. Use metadb to delete databases which are broken. The boot disks are mirrored and the other disks are in a ZFS configuration. Please... (2 Replies)
Discussion started by: samnyc

9. Solaris

Removing a disk from SUN Fire V440 running Solaris 8

Hi, I have a SUN Fire V440 server running Solaris 8. One of the 4 disks does not appear when the format command is issued. The "ready to remove" LED is not on either. The metastat command warns that this disk "Needs maintenance". Can I just shut down and power off the machine and then insert an... (5 Replies)
Discussion started by: Echo68

10. Solaris

Sun Fire v440 Hard disk or controller broken? WARNING: /pci@1f,700000/scsi@2/sd@0,0 (sd1)

Hi, I have a Sun Fire V440 server that fails to boot up correctly. A lot of services are not started and the system responds really slowly to commands. During boot I can see the following error: WARNING: /pci@1f,700000/scsi@2/sd@0,0 (sd1): SCSI transport failed: reason 'reset': retrying... (15 Replies)
Discussion started by: oliwei