The trick with named pipes is that an open() on one end blocks until some process opens the other end: a reader's open() waits for a writer, and a writer's waits for a reader (easy to try in the shell). The mkfifo command makes named pipes on most systems; on some older ones it is /sbin/mknod with a 'p' argument. The serving process needs to open and serve enough times to satisfy all the clients that open.
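A minimal sketch of that blocking-open behavior, using a hypothetical temp path (the background writer's open() blocks until the foreground read opens the other end):

```shell
fifo=$(mktemp -u /tmp/demo.XXXXXX)   # hypothetical path for the FIFO
mkfifo "$fifo"

# The writer's open() for write blocks here until a reader opens the FIFO...
echo hello > "$fifo" &

# ...and this redirection's open() for read unblocks it; data then flows.
read line < "$fifo"
wait
rm "$fifo"

echo "$line"
```

Run the writer and reader in separate terminals instead and you can watch either side hang until the other arrives.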
I use named pipes only once in a blue moon, because ksh and bash can make a named pipe that evaporates on its own, using process substitution of the form "<(commands that write to a pipe being read)" or ">(commands that read from a pipe being written)". For instance, "comm -13 <(sort file1) <(sort file2)" feeds sorted versions of the files to comm on named pipes, to show the lines that are new (in "comm -13", the flags mean suppress (-) the deleted lines, column 1, and the common lines, column 3). (On systems without something like /dev/fd/0, bash makes the named pipes in /var/tmp and never cleans them up, unless the bash people acted on my bug report and you have that patch!)
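A concrete run of that comm idiom, assuming bash (or ksh) for the process substitution and using hypothetical sample files in /tmp:

```shell
# Two small unsorted sample files (hypothetical contents)
printf 'banana\napple\n'  > /tmp/file1
printf 'cherry\nbanana\n' > /tmp/file2

# comm needs sorted input; <(...) hands it each sort's output as a pipe
# named like /dev/fd/63. -13 drops columns 1 (only in file1) and 3 (common),
# leaving column 2: lines new in file2.
new_lines=$(comm -13 <(sort /tmp/file1) <(sort /tmp/file2))

echo "$new_lines"
rm /tmp/file1 /tmp/file2
```

No temporary sorted files to create or clean up; the pipes vanish when the command finishes.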
You can also, on UNIXes that make open file descriptors visible under /dev/fd/, use /dev/stdin with a leading pipe, as in the trivial example "cat file1 file2 | sort /dev/stdin", and /dev/stdout with a trailing pipe, as in "sort -o /dev/stdout file1 file2 | uniq -d". We once had a 32-bit file program and a file over 4 GB, and it ran fine as "old_prog -i /dev/stdin <big_file", because the shell opened the big file with open64() while old_prog opened /dev/stdin with plain open(). Technically this is not a named pipe but a cue to the kernel to dup() an inherited, already-open, flat-file fd when the open() happens. The /dev/fd/# (or /proc/self/fd/#) inodes are fake, honored by special kernel handling. That solution is simpler, and runs faster with less CPU, than "cat big_file | old_prog -i /dev/stdin", which is a real pipe. Of course, if a program reads stdin or writes stdout, either by default or when given a '-', you can use the unnamed pipe '|', as in the trivial example "gunzip <xxx.tgz | tar tf -".
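The trailing-pipe trick above, made runnable with hypothetical sample files: sort is told to write its "output file" to /dev/stdout, which is really the pipe into uniq -d, so only the duplicated lines come out.

```shell
# Hypothetical sample files with one line duplicated across them
printf 'pear\napple\npear\n' > /tmp/f1
printf 'apple\n'             > /tmp/f2

# sort -o names an output *file*, but /dev/stdout is the kernel's alias for
# fd 1, which here is the pipe feeding uniq -d (print only repeated lines).
dups=$(sort -o /dev/stdout /tmp/f1 /tmp/f2 | uniq -d)

echo "$dups"
rm /tmp/f1 /tmp/f2
```

The same program thus works unchanged whether -o points at a real file or at an inherited pipe.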
So, between these shell and kernel helps, one wonders if you really need a named pipe, or just like it because it feels more file-like! Admittedly, some of my pipe-processing shell trees did scare the kids! However: no file space used, no space limits hit, no disk write or read overhead, no pages loaded with cached file data, and pipeline parallelism -- what's not to like!