Clarification:
We have two boxes. Box 1 is the parent: it owns the filesystem and shares it via NFS or Samba (or whatever). Box 1 does not care who connects to the share and then remote-mounts it via NFS. In effect you have a proxy acting on box 1, in its very own kernel space, whenever a request comes in over the network. Box 1 controls the NFS-shared disk entirely, because the disk is physically attached to box 1, not box 2.
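To make that concrete, here is a minimal sketch of the server side on box 1, assuming a hypothetical local filesystem /export/data to share (the path, options, and description are examples, not your actual configuration):

    # /etc/dfs/dfstab on box 1 -- one share command per exported filesystem
    share -F nfs -o rw -d "data for box 2" /export/data

    # Solaris 9 is pre-SMF, so start the NFS server daemons via the init script
    /etc/init.d/nfs.server start

    # export everything listed in /etc/dfs/dfstab
    shareall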
Box 2 now runs an NFS/Samba client that connects to box 1 via, for example, the SMB protocol. Box 2 has a local mountpoint for the remote filesystem (the mountpoint lives on box 2), and a symlink on box 2 can point at that mountpoint. The mountpoint acts as a proxy for the real disk on box 1.
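The client side, as a sketch using the NFS route, reusing the hypothetical names box1 and /export/data from above, with /mnt/box1data as the local mountpoint on box 2:

    # create the local mountpoint and mount the remote filesystem
    mkdir -p /mnt/box1data
    mount -F nfs box1:/export/data /mnt/box1data

    # optional symlink so existing local paths keep working
    ln -s /mnt/box1data /data

To make the mount survive reboots, add a line like this to /etc/vfstab on box 2:

    box1:/export/data  -  /mnt/box1data  nfs  -  yes  rw,soft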
This works great. I do not know what Made In Germany saw in your post, but what I described is, I think, clear. Samba and NFS both work fine on Solaris 9. You will need to read a little on configuring your fileserver on box 1. Right now you do not seem to be running NFS in a way that makes box 1 a fileserver and box 2 a client of that fileserver.
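To check whether the pieces are actually in place, you can list what box 1 is exporting, both locally and from box 2 (the commands are standard Solaris; the hostname box1 is still a placeholder):

    # on box 1: list currently exported filesystems
    share

    # on box 2: ask box 1 what it is sharing
    dfshares box1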
The only weenie you need to know about:
As of 2013, the NFS implementation on Solaris 9 had a bug.
Before rebooting:
If you have a user who stays logged on in spite of policy (I did), then you must kill all of that user's processes on both systems, along with any other process that has the filesystem in question open. Any logged-on user, and possibly system maintenance processes, can have open files or directories there.
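One way to do this on Solaris is fuser, which can both list and kill the processes holding a mounted filesystem open. A sketch, assuming the hypothetical mountpoint /mnt/box1data on box 2:

    # -c treats the argument as a mountpoint and reports every process using it
    fuser -c /mnt/box1data

    # -k sends SIGKILL to each of those processes; run this on box 2 first,
    # then repeat for the underlying filesystem (e.g. /export/data) on box 1
    fuser -ck /mnt/box1data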
Why? If a file is held open on box 2 (i.e., some user has a process whose current directory sits "in" the NFS mount), then box 1 will hang on shutdown. Forever. If you force-kill box 1, it will not rebuild the NFS connection, so you lose the connection when you do reboot. Forever. Here, "forever" means you have to destroy and rebuild the connection on both sides, and even that fails sometimes. As of 2015 there was a patch for this on Solaris 10, Solaris 11 did not have the problem, and there was no patch for Solaris 9. Verify this with Oracle support, if you still have support for your Solaris 9 box.
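If you do end up having to destroy and rebuild the connection, the sequence looks roughly like this (same hypothetical names as above; umount -f forces the unmount on Solaris even when the server side is gone):

    # on box 2: force the stale mount off
    umount -f /mnt/box1data

    # on box 1: withdraw the export and re-export it
    unshare /export/data
    shareall

    # back on box 2: remount
    mount -F nfs box1:/export/data /mnt/box1data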
See:
Solaris Operating System - Releases