Hi.
When we are up to our knees in alligators -- or in learning a new language -- we don't usually look at all the details; for the latter, we're usually just interested in getting results. I'll risk bludgeoning the topic to death with my view.
I see the macro and inclusion facility as a piece separate from the compiler itself. One may place standard C code in a file named name.i and use gcc to compile it without running the separate preprocessor, cpp. One could, in fact, use a separate macro facility like m4 to do the work.
The function of cpp is quite limited, but like most symbol manipulators, it keeps track of symbols and their properties. You can test whether a symbol exists in its memory -- that's really what ifdef and ifndef do. One could argue that the name of the query should be if_in_symbol_table, but existence is often less important than the value, which is simply another property.
Often the history can illuminate the issues:
Quote:
BCPL, B, and C all fit firmly in the traditional procedural family typified by Fortran and Algol 60. They are particularly oriented towards system programming, are small and compactly described, and are amenable to translation by simple compilers. They are `close to the machine' in that the abstractions they introduce are readily grounded in the concrete data types and operations supplied by conventional computers, and they rely on library routines for input-output and other interactions with an operating system. With less success, they also use library procedures to specify interesting control constructs such as coroutines and procedure closures. At the same time, their abstractions lie at a sufficiently high level that, with care, portability between machines can be achieved.
...
Many other changes occurred around 1972-3, but the most important was the introduction of the preprocessor, partly at the urging of Alan Snyder [Snyder 74], but also in recognition of the utility of the file-inclusion mechanisms available in BCPL and PL/I. Its original version was exceedingly simple, and provided only included files and simple string replacements: #include and #define of parameterless macros. Soon thereafter, it was extended, mostly by Mike Lesk and then by John Reiser, to incorporate macros with arguments and conditional compilation. The preprocessor was originally considered an optional adjunct to the language itself. Indeed, for some years, it was not even invoked unless the source program contained a special signal at its beginning. This attitude persisted, and explains both the incomplete integration of the syntax of the preprocessor with the rest of the language and the imprecision of its description in early reference manuals.
...
C is quirky, flawed, and an enormous success. While accidents of history surely helped, it evidently satisfied a need for a system implementation language efficient enough to displace assembly language, yet sufficiently abstract and fluent to describe algorithms and interactions in a wide variety of environments.
Much more at:
http://cm.bell-labs.com/cm/cs/who/dmr/chist.html by Dennis M. Ritchie
At the same place:
Quote:
None of BCPL, B, or C supports character data strongly in the language; each treats strings much like vectors of integers and supplements general rules by a few conventions. In both BCPL and B a string literal denotes the address of a static area initialized with the characters of the string, packed into cells. In BCPL, the first packed byte contains the number of characters in the string; in B, there is no count and strings are terminated by a special character, which B spelled `*e'. This change was made partially to avoid the limitation on the length of a string caused by holding the count in an 8- or 9-bit slot, and partly because maintaining the count seemed, in our experience, less convenient than using a terminator.
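The two layouts Ritchie contrasts can be sketched in a few lines of C. This is only an illustration of the trade-off, not anyone's actual implementation; the function names are made up, and the one-byte count stands in for BCPL's 8- or 9-bit slot:

```c
#include <stddef.h>

/* BCPL-style: the first byte holds the character count, so lookup
   is O(1) -- but an 8-bit count caps the string at 255 characters. */
static size_t bcpl_len(const unsigned char *s)
{
    return s[0];
}

/* C-style: no count; scan for the terminator ('\0' in C; B spelled
   it `*e'). Lookup is O(n), but there is no built-in length limit. */
static size_t c_len(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}
```

So with the count-prefixed form, bcpl_len((unsigned char[]){5,'h','e','l','l','o'}) and c_len("hello") both report 5, but only the terminated form scales past the count's width -- the convenience-and-limit trade-off the quote describes.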
which bears on an issue involving C and Perl strings that Jim and I responded to at:
http://unix.com/showthread.php?p=302...#post302119024
cheers, drl