Which basically means the LDOMs that were on there are not starting (not even showing).
If I do ldm list-config it shows live config as next reboot. But, of course, next reboot it reverts back to factory default again.
I must admit I'm wondering if it's doing this because (with the one faulty DIMM) there is now not enough memory to serve the LDOMs configured. Does this make sense?
Bit worrying if one DIMM failure can take out the entire host :-(
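(I suppose one way to check that theory, once ldm responds at all, would be to compare what the hypervisor has free against what the domains are configured with, roughly:

ldm list-devices -a memory        # memory blocks, bound and free
ldm list -o memory                # per-domain memory in the current config
prtconf | grep -i 'Memory size'   # total memory visible to the primary domain

If the configured totals now exceed the physical memory left after the DIMM fault, that would at least fit the theory.)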
What does fmadm faulty show? If replacement is needed, then do it. Once the system "thinks" a certain way about errors, it is really hard to operate it as if the problem does not exist. As you are seeing. And in fact, doing so may cover up even more serious issues.
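For the record, what I'd look at first on the control domain is something along these lines (just the stock Solaris fault-management and hardware tools, nothing exotic):

fmadm faulty       # current faults the system is acting on
fmadm faulty -a    # include faults that have already been addressed
prtdiag -v         # hardware view, including memory configuration

If a DIMM shows up there, the FRU string tells you which one to have replaced.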
I'm not sure what you are actually seeing. Your response reads to me, more than anything else, like you have no support contract. Which is understandable, but very hard to work around sometimes.
I don't recommend this, but if you are truly desperate, try using the
command. I only suggest it as a last-ditch approach to getting a box going for a short time. Once the error occurs again, you are back to square one.
EDIT: let me put this another way - I had a system which showed bad memory, but when the tech searched the fmadm information, he found it was a PCI-e problem. At first blush, though, the system "thought" it was a DIMM. Let Oracle look at it. Don't decide on your own. My first take was wrong.
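If you want to see what the tech would be looking at, the fault manager keeps its logs on the box; something like this (a sketch, from memory) digs below the one-line diagnosis:

fmdump               # list diagnosed fault events with their UUIDs
fmdump -v -u <uuid>  # verbose detail for one fault, including the suspect list
fmdump -eV           # raw error telemetry that led to the diagnosis

The suspect list is where a "bad memory" diagnosis can turn out to point at something else entirely, like it did for me.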
It is unlikely for a faulty DIMM to affect the SP in any way.
Configuration is saved to the SP by issuing ldm add-spconfig <unique_name>
So, after you configure the system as you want, or change the existing configuration, you run the above command to save the work to the SP.
No need to reboot anything, but since the work is saved, on the next reboot the latest saved configuration will be applied from the SP.
Did you save the config and then reboot?
I always save, since if the config is not saved the list option says "next poweron", which probably means a complete power cycle (not an init 6 or reboot).
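A minimal sequence, assuming the name "myconfig" is just an example, looks like:

ldm add-spconfig myconfig     # save the current active configuration to the SP
ldm list-spconfig             # verify it is listed and marked [current]
ldm remove-spconfig myconfig  # remove it again if you need to re-save under the same name

The saved configuration, not factory-default, is then what the SP applies on the next power cycle.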
You should inspect, as Jim mentioned, fmadm faulty.
A properly configured system will report memory errors using that facility.
T4-2 issues meant replacing the motherboard.
I thought the LDOM config was automatically saved? Got the Oracle document here.
At the moment, unable to get the LDOMs back - here is the output from a few commands:
ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME...