
MIDI(4) 			   BSD Kernel Interfaces Manual 			  MIDI(4)

NAME
     midi -- device-independent MIDI driver layer

SYNOPSIS
     midi* at midibus?
     midi* at pcppi?
     pseudo-device sequencer

     #include <sys/types.h>
     #include <sys/midiio.h>

DESCRIPTION
     The midi driver is the machine independent layer over anything that can source or sink a
     MIDI data stream, whether a physical MIDI IN or MIDI OUT jack on a soundcard, cabled to some
     external synthesizer or input controller, an on-board programmable tone generator, or a sin-
     gle jack, synthesizer, or controller component within a complex USB or IEEE1394 MIDI device
     that has several such components and appears as several MIDI streams.

   Concepts
     One MIDI data stream is a unidirectional stream of MIDI messages, as could be carried over
     one MIDI cable in the MIDI 1.0 specification.  Many MIDI messages carry a four-bit channel
     number, creating up to 16 MIDI channels within a single MIDI stream.  There may be multiple
     consumers of a MIDI stream, each configured to react only to messages on specific channels;
     the sets of channels different consumers react to need not be disjoint.  Many modern devices
     such as multitimbral keyboards and tone generators listen on all 16 channels, or may be
     viewed as collections of 16 independent consumers each listening on one channel.  MIDI
     defines some messages that take no channel number, and apply to all consumers of the stream
     on which they are sent.  For an inbound stream, midi is a promiscuous receiver, capturing
     all messages regardless of channel number.  For an outbound stream, the writer can specify a
     channel number per message; there is no notion of binding the stream to one destination
     channel in advance.
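
     The channel number of a channel message travels in the low four bits of its status byte,
     so a writer selects the destination channel simply by the status byte it writes.  A
     minimal sketch in C (the helper below is illustrative and not part of any driver
     interface):

           #include <stdint.h>

           /*
            * Build a three-byte Note On message for channel ch (0-15);
            * 0x90 is the Note On status nibble, and the channel rides
            * in the low four bits of the status byte.
            */
           static void
           note_on(uint8_t ch, uint8_t key, uint8_t vel, uint8_t msg[3])
           {
                   msg[0] = 0x90 | (ch & 0x0f);    /* status | channel */
                   msg[1] = key & 0x7f;            /* key number */
                   msg[2] = vel & 0x7f;            /* velocity */
           }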

     A single midi device instance is the endpoint of one outbound stream, one inbound stream, or
     one of each.  In the third case, the write and read sides are independent MIDI streams.  For
     example, a soundcard driver may map its MIDI OUT and MIDI IN jacks to the write and read
     sides of a single device instance, but those jacks can be cabled to completely different
     pieces of gear.  Information from dmesg(8), and a diagram of any external MIDI cabling, will
     help clarify the mapping.

   Underlying drivers and MIDI protocol
     Drivers to which midi can attach include soundcard drivers, many of which provide a UART
     resembling Roland's MPU401 and handled by mpu(4); USB MIDI devices via umidi(4); and
     on-board devices that can make sounds, whether a lowly PC speaker or a Yamaha OPL.  Serial
     port and IEEE1394 connections are currently science fiction.

     The MIDI protocol permits some forms of message compression such as running status and hid-
     den note-off.  Received messages on inbound streams are always canonicalized by midi before
     presentation to higher layers.  Messages for transmission are accepted by midi in any valid
     form.
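
     For illustration, under running status the two canonical Note On messages

           90 3c 64   90 3e 64

     may arrive on the wire as the five bytes 90 3c 64 3e 64, the repeated status byte being
     omitted; midi expands such input to the six canonical bytes before presenting it to higher
     layers, and accepts either form for transmission.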

   Device access
     Access to midi device instances can be through the raw device nodes, /dev/rmidiN, or through
     the sequencer, /dev/music.

   Raw MIDI access
     A /dev/rmidiN device supports read(2), write(2), ioctl(2), select(2)/poll(2) and the corre-
     sponding kevent(2) filters, and may be opened only when it is not already open.  It may be
     opened in O_RDONLY, O_WRONLY, or O_RDWR mode, but a later read(2) or write(2) will return -1
     if the device has no associated input or output stream, respectively.

     Bytes written are passed as quickly as possible to the underlying driver as complete MIDI
     messages; a maximum of two bytes at the end of a write(2) may remain buffered if they do not
     complete a message, until completed by a following write(2).
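
     As a sketch of raw output, assuming an instance /dev/rmidi0 whose write side is attached
     to an output stream (the device path and note values are illustrative only), the following
     program writes one complete Note On message and, a moment later, the matching Note Off:

           #include <err.h>
           #include <fcntl.h>
           #include <unistd.h>

           int
           main(void)
           {
                   /* Note On then Note Off, middle C, channel 0. */
                   static const unsigned char on[]  = { 0x90, 60, 100 };
                   static const unsigned char off[] = { 0x80, 60, 0 };
                   int fd;

                   if ((fd = open("/dev/rmidi0", O_WRONLY)) == -1)
                           err(1, "open");
                   if (write(fd, on, sizeof on) == -1)
                           err(1, "write");
                   sleep(1);               /* let the note sound */
                   if (write(fd, off, sizeof off) == -1)
                           err(1, "write");
                   close(fd);
                   return 0;
           }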

     A read(2) will not block or return EWOULDBLOCK when it could immediately return any nonzero
     count, and MIDI messages received are available to read(2) as soon as they are complete,
     with a maximum of two received bytes remaining buffered if they do not complete a message.

     As all MIDI messages are three bytes or fewer except for System Exclusive, which can have
     arbitrary length, these rules imply that System Exclusive messages are the only ones of
     which some bytes can be delivered before all are available.

     System Realtime messages are passed with minimum delay in either direction, ahead of any
     possible buffered incomplete message.  As a result, they will never interrupt any MIDI mes-
     sage except possibly System Exclusive.

     A read(2) with a buffer large enough to accommodate the first complete message available
     will be satisfied with as many complete messages as will fit.  A buffer too small for the
     first complete message will be filled to capacity.  Therefore, an application that reads
     from an rmidi device with buffers of three bytes or larger need never parse across read
     boundaries to assemble a received message, except possibly in the case of a System Exclusive
     message.  However, if the application reads through a buffering layer such as fread(3), this
     property will not be preserved.
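
     A sketch of an input loop that relies on this property by calling read(2) directly (the
     device path is illustrative); every read returns only whole messages, apart from System
     Exclusive data, so nothing need be reassembled across read boundaries:

           #include <err.h>
           #include <fcntl.h>
           #include <stdio.h>
           #include <unistd.h>

           int
           main(void)
           {
                   unsigned char buf[128];
                   ssize_t n, i;
                   int fd;

                   if ((fd = open("/dev/rmidi0", O_RDONLY)) == -1)
                           err(1, "open");
                   while ((n = read(fd, buf, sizeof buf)) > 0) {
                           /* dump the complete messages just received */
                           for (i = 0; i < n; i++)
                                   printf("%02x ", buf[i]);
                           putchar('\n');
                   }
                   return 0;
           }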

     The midi driver itself supports the ioctl(2) operations FIOASYNC, FIONBIO, and FIONREAD.
     Underlying devices may support others.  The value returned for FIONREAD reflects the size in
     bytes of complete messages (or System Exclusive chunks) ready to read.  If the ioctl(2)
     returns n and a read(2) of size n is issued, n bytes will be read, but if a read(2) of size
     m < n is issued, fewer than m bytes may be read if m does not fall on a message/chunk bound-
     ary.
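
     For illustration, FIONREAD can be used to size a read(2) so that it always ends on a
     message or chunk boundary.  The fragment below assumes <sys/ioctl.h> and <unistd.h>, and
     the helper name is hypothetical:

           /*
            * Read exactly the complete messages (or System Exclusive
            * chunks) the driver reports ready, so the read ends on a
            * message/chunk boundary whenever the buffer is big enough.
            */
           static ssize_t
           read_ready(int fd, unsigned char *buf, size_t bufsz)
           {
                   int n;

                   if (ioctl(fd, FIONREAD, &n) == -1)
                           return -1;
                   if (n <= 0)
                           return 0;
                   if ((size_t)n > bufsz)
                           n = (int)bufsz; /* shorter read stops at a boundary */
                   return read(fd, buf, n);
           }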

     Raw MIDI access can be used to receive bulk dumps from synthesizers, download bulk data to
     them, and so on.  Simple patching of one device to another can be done at the command line,
     as with
	   $ cat -u 0<>/dev/rmidi0 1>&0
     which will loop all messages received on the input stream of rmidi0 back to
     its output stream in real time.  However, an attempt to record and play back music with
	   $ cat /dev/rmidiN >foo; cat foo >/dev/rmidiN
     will be disappointing.  The file foo will contain all of the notes that were played, but
     because MIDI messages carry no explicit timing, the 'playback' will reproduce them all at
     once, as fast as they can be transmitted.	To preserve timing information, the sequencer
     device can be used.

   Active Sensing
     The MIDI protocol includes a keepalive function called Active Sensing.  In any receiver that
     has not received at least one Active Sense MIDI message, the feature is suppressed and no
     timeout applies.  If at least one such message has been received, the lapse of any subse-
     quent 300 ms interval without receipt of any message reflects loss of communication, and the
     receiver should silence any currently sounding notes and return to non-Active-Sensing behav-
     ior.  A sender using Active Sensing generally avoids 300 ms gaps in transmission by sending
     Active Sense messages (which have no other effect) as needed when there is no other traffic
     to send in the interval.  This feature can be important for MIDI, which relies on separate
     Note On and Note Off messages, to avoid leaving notes stuck on indefinitely if communication
     is interrupted before a Note Off message arrives.

     This protocol is supported in midi.  An outbound stream will be kept alive by sending Active
     Sense messages as needed, beginning after any real traffic is sent on the stream, and con-
     tinuing until the stream is closed.  On an inbound stream, if any Active Sense has been
     received, then a process reading an rmidi device will see an end-of-file indication if the
     input timeout elapses.  The stream remains open, the driver reverts to enforcing no timeout,
     and the process may continue to read for more input.  Subsequent receipt of an Active Sense
     message will re-arm the timeout.  As received Active Sense messages are handled by midi,
     they are not included among messages read from the /dev/rmidiN device.
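
     A reading process can therefore treat a zero return from read(2) as loss of the sender
     rather than as a reason to close the device.  In the fragment below, fd and buf are
     assumed already set up, and handle_messages() and all_notes_off() are hypothetical
     application routines:

           /*
            * Keep reading across Active Sense timeouts: read(2) returns
            * 0 when the timeout lapses, but the stream remains open.
            */
           for (;;) {
                   ssize_t n = read(fd, buf, sizeof buf);

                   if (n > 0)
                           handle_messages(buf, n);
                   else if (n == 0)
                           all_notes_off();        /* sender lost; keep reading */
                   else
                           break;                  /* real error */
           }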

     These rules support end-to-end Active Sensing behavior in simple cases without special
     action in an application.	For example, in
	   $ cat -u /dev/rmidi0 >/dev/rmidi1
     if the input stream to rmidi0 is lost, the cat(1) command exits; on the close(2) of rmidi1,
     midi ceases to send Active Sense messages, and the receiving device will detect the loss and
     silence any outstanding notes.

   Access through the sequencer
     To play music using the raw MIDI API would require an application to issue many small writes
     with very precise timing.	The sequencer device, /dev/music, can manage the timing of MIDI
     data in the kernel, to avoid such demanding real-time constraints on a user process.

     The /dev/music device can be opened only when it is not already open.  When opened, the
     sequencer internally opens all MIDI instances existing in the system that are not already
     open at their raw nodes; any attempts to open them at their raw nodes while the sequencer is
     open will fail.  All access to the corresponding MIDI streams will then be through the
     sequencer.

     Reads and writes of /dev/music pass eight-byte event structures defined in <sys/midiio.h>
     (which see for their documentation and examples of use).  Some events correspond to MIDI
     messages, and carry an integer device field to identify one of the MIDI devices opened by
     the sequencer.  Other events carry timing information interpreted or generated by the
     sequencer itself.
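
     As a sketch of the framing only, a reader can consume /dev/music in fixed eight-byte units
     and defer interpretation of the individual fields to the definitions in <sys/midiio.h>:

           #include <err.h>
           #include <fcntl.h>
           #include <stdio.h>
           #include <unistd.h>

           int
           main(void)
           {
                   unsigned char ev[8];    /* one sequencer event */
                   ssize_t n;
                   int fd, i;

                   if ((fd = open("/dev/music", O_RDONLY)) == -1)
                           err(1, "open");
                   /* events are fixed-size eight-byte records */
                   while ((n = read(fd, ev, sizeof ev)) == (ssize_t)sizeof ev) {
                           for (i = 0; i < 8; i++)
                                   printf("%02x ", ev[i]);
                           putchar('\n');
                   }
                   return 0;
           }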

     A message received on an input stream is wrapped in a sequencer event along with the device
     index of the stream it arrived on, and queued for the reader of /dev/music.  If a measurable
     time interval passed since the last preceding message, a timing event that represents a
     delay for that interval is queued ahead of the received event.  The sequencer handles output
     events by interpreting any timing event, and routing any MIDI message event at the proper
     time to an underlying output stream according to its device index.  Therefore
	   $ cat /dev/music >foo; cat foo >/dev/music
     can be expected to capture and reproduce an input performance including timing.

     The process of playing back a complex MIDI file is illustrated below.  The file may contain
     several tracks--four, in this example--of MIDI events, each marked with a device index and a
     time stamp, that may overlap in time.  In the example, a, b, and c are device indices of the
     three output MIDI streams; the left-hand digit in each input event represents a MIDI channel
     on the selected stream, and the right-hand digit represents a time for the event's occur-
     rence.  As illustrated, the input tracks are not firmly associated with output streams; any
     track may contain events for any stream.

	  |	 |     a2|4	|
	a0|3	 |     c1|3   c0|3
	  |    b0|2    b1|2	|
	  |    b1|1	 |    c0|1
	a0|0	 |     b0|0	|
	  v	 v	 v	v
       +---------------------------+
       | merge to 1 ordered stream |
       | user code, eg midiplay(1) |
       +---------------------------+
		   b1|2
		   b0|2
		   c0|1
		   b1|1
		   b0|0
		   a0|0
		     v
       _______+-------------+_______user
	      | /dev/music  |	  kernel
	      | (sequencer) |
	      +-------------+
		|    1	  0
	  +-----'    |	  '-----.
	  0	     0		|
	  v	     v		v
       +-------+ +--------+ +---------+
       |midi(4)| |midi(4) | |midi(4)  |
       |rmidia | |rmidib  | |rmidic   |
       +-------+ +--------+ +---------+
       | mpu(4)| |umidi(4)| |midisyn  |
       +-------+ +--------+ +---------+
       |  HW   |     |	    | opl(4)  |
       | MIDI  |     U	    +---------+
       | UART  |      S     | internal|
       +-------+       B    |	tone  |
	   |	       |    |generator|
	   v	       |    +---------+
	external       v
       MIDI device  external
		   MIDI device

     A user process must merge the tracks into a single stream of sequencer MIDI and timing
     events in order by desired timing.  The sequencer obeys the timing events and distributes
     the MIDI events to the three destinations, in this case two external devices connected to a
     sound card UART and a USB interface, and an OPL tone generator on a sound card.
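
     A sketch of the merge step under a simplifying assumption: all track events are available
     in memory before playback starts, so ordering them by desired time is enough.  The
     trk_event structure and function names are illustrative, not types from <sys/midiio.h>:

           #include <stdlib.h>

           /* one in-memory track event, before conversion to sequencer events */
           struct trk_event {
                   unsigned long   time;   /* desired time of the event */
                   int             dev;    /* sequencer device index */
                   unsigned char   msg[3]; /* MIDI message bytes */
           };

           static int
           by_time(const void *a, const void *b)
           {
                   const struct trk_event *x = a, *y = b;

                   return (x->time > y->time) - (x->time < y->time);
           }

           /*
            * Concatenate every track into ev[0..n-1], order the whole
            * set by time, then emit timing and MIDI events to
            * /dev/music in that order.
            */
           static void
           order_events(struct trk_event *ev, size_t n)
           {
                   qsort(ev, n, sizeof *ev, by_time);
           }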

NOTES
     Use of select(2)/poll(2) with the sequencer is supported; however, there is no guarantee
     that a write(2) will not block or return EWOULDBLOCK if it begins with a timer-wait event,
     even if select(2)/poll(2) reported the sequencer writable.

     The delivery of a realtime message ahead of buffered bytes of an incomplete message may
     cause the realtime message to seem, in a saved byte stream, to have arrived up to 640 us
     earlier than it really did at MIDI 1.0 data rates (two buffered bytes at 320 us per ten-bit
     byte on a 31250 baud link).  Higher data rates make the effect less significant.

     Another sequencer device, /dev/sequencer, is provided only for backward compatibility with
     an obsolete OSS interface in which some sequencer events were four-byte records.  It is not
     further documented here, and the /dev/music API should be used in new code.  The
     /dev/sequencer emulation is implemented only for writing, and that might not be complete.

IMPLEMENTATION NOTES
     Some hardware devices supporting midi lack transmit-ready interrupts, and some have the
     capability in hardware but currently lack driver support.	They can be recognized by the
     annotation (CPU-intensive output) in dmesg(8).  While suitable for music playback, they may
     have an objectionable impact on system responsiveness during bulk transmission such as patch
     downloads, and are best avoided for that purpose if other suitable devices are present.

     Buffer space in midi itself is adequate for about 200 ms of traffic at MIDI 1.0 data rates,
     per stream.

     Event counters record bytes and messages discarded because of protocol errors or buffer
     overruns, and can be viewed with vmstat -e.  They can be useful in diagnosing flaky cables
     and other communication problems.

     A raw sound generator uses the midisyn layer to present a MIDI message-driven interface
     attachable by midi.

     While midi accepts messages for transmission in any valid mixture of compressed or canonical
     form, they are always presented to an underlying driver in the form it prefers.  Drivers for
     simple UART-like devices register their preference for a compressed byte stream, while those
     like umidi(4), which uses a packet protocol, or midisyn, which interprets complete messages,
     register for intact canonical messages.  This design keeps compression and canonicalization
     logic out of all layers above and below midi itself.

FILES
     /dev/rmidiN
     /dev/music
     /dev/sequencer

ERRORS
     In addition to other errors documented for the write(2) family of system calls, EPROTO can
     be returned if the bytes to be written on a raw midi device would violate MIDI protocol.
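
     A writer streaming arbitrary data to a raw device may wish to distinguish this case from
     other failures, as in the fragment below (which assumes <errno.h> and <err.h>):

           if (write(fd, buf, len) == -1) {
                   if (errno == EPROTO)
                           errx(1, "data is not a valid MIDI byte stream");
                   else
                           err(1, "write");
           }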

SEE ALSO
     midiplay(1), ioctl(2), ossaudio(3), audio(4), mpu(4), opl(4), umidi(4)

     For ports using the ISA bus: cms(4), pcppi(4), sb(4)

     For ports using the PCI bus: autri(4), clcs(4), eap(4)

HISTORY
     The midi driver first appeared in NetBSD 1.4.  It was overhauled and this manual page
     rewritten for NetBSD 4.0.

BUGS
     Some OSS sequencer events and ioctl(2) operations are unimplemented, as <sys/midiio.h>
     notes.

     OSS source-compatible sequencer macros should be added to <sys/soundcard.h>, implemented
     with the NetBSD ones in <sys/midiio.h>, so sources written for OSS can be easily compiled.

     The sequencer blocks (or returns EWOULDBLOCK) only when its buffer physically fills, which
     can represent an arbitrary latency because of buffered timing events.  As a result, inter-
     rupting a process writing the sequencer may not interrupt music playback for a considerable
     time.  The sequencer could enforce a reasonable latency bound by examining timing events as
     they are enqueued and blocking appropriately.

     FIOASYNC enables signal delivery to the calling process only; FIOSETOWN is not supported.

     The sequencer can act only as a timing master, yet it does not send timing messages to
     synchronize any slave device; it cannot be slaved to timing messages received on any
     interface (which would presumably require a PLL algorithm similar to NTP's, and expertise
     in that area to implement it).  The sequencer ignores timing messages received on any
     interface and does not pass them along to the reading process, and the OSS operations to
     change that behavior are unimplemented.

     The SEQUENCER_TMR_TIMEBASE ioctl(2) will report successfully setting any timebase up to
     ridiculously high resolutions, though the actual resolution, and therefore jitter, is con-
     strained by hz(9).  Comparable sequencer implementations typically allow a selection from
     available sources of time interrupts that may be programmable.

     The device number in a sequencer event is treated on write(2) as an index into the array of
     MIDI devices the sequencer has opened, but on read(2) as the unit number of the source MIDI
     device; these are usually the same if the sequencer has opened all the MIDI devices (that
     is, none was already open at its raw node when the sequencer was opened), but might not be
     the same otherwise.

     There is at present no way to make reception nonpromiscuous, should anyone have a reason to
     want to.

     There should be ways to override default Active Sense behavior.  As one obvious case, if an
     application is seen to send Active Sense explicitly, midi should refrain from adding its
     own.  On receive, there should be an option to pass Active Sense through rather than inter-
     preting it, for apps that wish to handle or ignore it themselves and never see EOF.

     When a midi stream is open by the sequencer, Active Sense messages received on the stream
     are passed to the sequencer and not interpreted by midi.  The sequencer at present neither
     does anything itself with Active Sense messages received, nor supports the OSS API for mak-
     ing them available to the user process.

     System Exclusive messages can be received by reading a raw device, but not by reading the
     sequencer; they are discarded on receipt when the stream is open by the sequencer, rather
     than being presented as the OSS-defined sequencer events.

     midisyn is too rudimentary at present to get satisfactory results from any onboard synth.
     It lacks the required special interpretation of the General MIDI percussion channel in GM
     mode.  More devices should be supported; some sound cards with synthesis capability have
     NetBSD drivers that implement the audio(4) but not the midisyn interface.  The voice
     stealing algorithm does not follow the General MIDI Developer Guidelines.

     ALSA sequencer compatibility is lacking, but becoming important to applications.  It would
     require the function of merging multiple tracks into a single ordered stream to be moved
     from user space into the sequencer.  Assuming the sequencer is driven by periodic interrupts,
     timing wheels could be used as in hardclock(9) itself.  Similar functionality will be in
     OSS4; with the right infrastructure it should be possible to support both.  When merging
     MIDI streams, a notion of transaction is needed to group critical message sequences.  If
     neither ALSA nor OSS4 has such a notion, it should be provided as an upward-compatible
     extension.

     I would prefer open(2) itself to return an error (by the POSIX description, ENODEV looks
     most appropriate) if a read or write mode is requested that the instance does not support,
     rather than letting open(2) succeed and read(2) or write(2) return -1, but so help me, the
     latter seems the more common UNIX practice.

BSD					   May 6, 2006					      BSD