Quote: Originally Posted by sreyan32
- Was/Is UNIX ever an open source operating system?
- If UNIX was closed source (as Wikipedia states, "historically closed source"), then is LINUX a reverse-engineered version of UNIX?
OK, a few definitions up front:
AT&T had an experimental lab, where some determined guys (Ken Thompson, Brian Kernighan, Dennis Ritchie, ...) programmed an OS - the first UNIX. Over time there were several revisions, and at one point AT&T gave the code to a university (Berkeley) and let them play with it. They developed (today you'd say "forked") their own version. (This is called "BSD" - Berkeley Software Distribution; AT&T's main version is called "System V". You will surely find more history about this when you search for it.) Initially, this was "Unix".
All these systems were closed source, but it was possible to buy a license from AT&T. You got the sources and were allowed to rebuild them, even change them to some extent. Companies like IBM (but also Sun, HP, DEC, ...) did this and developed their own flavour of UNIX (in IBM's case "AIX"). These were "Unix" too.
But the success of Unix was not because of the (btw. very, very good) implementation of the OS, but because of the stunning simplicity and elegance of its design. Therefore, after some legal hassles, what "Unix" meant changed. Before, it was a code base, and the design principles were inherently inscribed in it. Not any longer. Today, "Unix" is a set of things an operating system must do under defined circumstances: "If system call X() is issued, the system must return 'A' in case of ..., 'B' in case of ...", and so forth. See it like this: before, "Unix" was a blueprint for a certain car. Now there is just a standard which says "if I turn the steering wheel clockwise the car is supposed to change direction to the right". How the connection between steering wheel and tires is made doesn't matter at all, as long as the system reacts in the expected, standardized manner. (Search for "POSIX" and "Single UNIX Specification" to get details about how this standard is designed. Or ask our revered local Master of Standards, Don Cragun, and be prepared for the highest-quality information on this issue you will ever get.)
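To make that concrete, here is a minimal sketch in C of what "standardized behaviour" means. (A sketch only - but the guarantee described in the comments is what POSIX really specifies for write().) Every conforming Unix must make this program behave the same way, no matter how its kernel implements the call internally:
Code:
/* posix_demo.c - minimal sketch: POSIX standardizes the BEHAVIOUR
 * of calls like write(), not their implementation. Any conforming
 * system (Linux, AIX, Solaris, ...) must react the same way. */
#include <unistd.h>     /* write() */
#include <errno.h>      /* errno */
#include <string.h>     /* strlen(), strerror() */
#include <stdio.h>      /* fprintf() */

int main(void)
{
    const char *msg = "hello, standardized world\n";
    ssize_t n = write(STDOUT_FILENO, msg, strlen(msg));

    if (n == -1) {
        /* POSIX: on failure write() returns -1 and sets errno */
        fprintf(stderr, "write: %s\n", strerror(errno));
        return 1;
    }
    /* POSIX: on success it returns the number of bytes written */
    return 0;
}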
To answer your question about Linux: Linus Torvalds did not "reverse engineer". He took a very small and basic implementation of a Unix kernel ("Minix"), which was written by an American/Dutch university professor for educational purposes: Mr. Andrew Tanenbaum. The purpose of this was to show students how to write operating systems (kernels), and as an example Mr. Tanenbaum used Unix, because it is so awfully well documented. Mr. Torvalds used Minix as a basis and developed his own Unix-like kernel. This, see above, means the kernel was programmed all anew, but wherever a Unix kernel reacts to some circumstance or does something in a certain way, Linux will react in the same way and do the same.
Linux has (probably because of political considerations) never sought the official certification of being a real Unix. It is - that much we know - as "Unix" as it gets and would in all likelihood pass the certification, but this has not happened yet.
Quote: Originally Posted by sreyan32
just like ReactOS is a reverse-engineered, binary-compatible version of Windows.
The difference is: Windows was built with a certain platform (the IBM-compatible PC) in mind. It runs there and nowhere else. The upside is that software is binary-compatible: once you get a clean compile, you can expect the compiled binary to run on every such system (well - that is the theory!). Unix is not binary-compatible but source-code-compatible. It was intended to make little or no assumptions about the system it runs on, but this in turn means that a binary will only run on the target machine it was compiled for.
Windows: once you get it compiled, the compiled binary will run on every system and do the same.
Unix: once you get the source to compile, you can compile that same source on every Unix system, and each produced binary will do the same on its system.
Therefore "reverse engineering" was not necessary. A software compiled for, say, SCO Unix on PC, will not run on Linux for PCs, but the same source can be compiled for any UNIX and each resulting binary is expected to do the same. (Again: this is the theory. I spare you the gory details so that your nights sleep is undisturbed.)
Quote: Originally Posted by sreyan32
- Now this seems a little odd to ask. Is LINUX actually an OS? Or is Linux just a kernel? I am asking this because the Debian OS which I use can work on Linux, FreeBSD and I think the GNU Hurd kernel (I have no idea what it is, by the way).
Yes, Linux is an OS. And yes, "Linux" is just the kernel. The kernel of an OS does not do everything itself, but it sets every design decision as a given. Therefore, in fact, the kernel IS the OS. "Kernel" here also means the driver layer, process accounting, process environment, resource scheduling, filesystems, system calls, libraries ...
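To sketch where that boundary runs (a minimal example, assuming any POSIX system): the program below merely asks; the actual work - creating the process, scheduling it, accounting for it - is done by the kernel, whatever userland sits on top.
Code:
/* kernel_boundary.c - sketch: process creation is a kernel service.
 * fork(), getpid() and wait() are thin wrappers around system
 * calls; the process table, scheduling and accounting all live
 * inside the kernel. */
#include <unistd.h>    /* fork(), getpid() */
#include <sys/wait.h>  /* wait() */
#include <stdio.h>

int main(void)
{
    pid_t pid = fork();        /* ask the kernel for a new process */

    if (pid == -1) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {            /* child: a process the kernel made */
        printf("child:  pid %ld\n", (long)getpid());
        return 0;
    }
    wait(NULL);                /* kernel reports the child's exit */
    printf("parent: pid %ld\n", (long)getpid());
    return 0;
}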
On top of the Linux kernel one usually uses the GNU toolset. GNU developed their own kernel ("The Hurd"), which is intended to surpass Unix design-wise, but (they might not like to hear that, but this is my opinion) this is doomed to fail the same way as Plan 9 (another try to obsolete Unix) because of the sheer simplicity and straightforwardness of Unix' design. This is similar to the programming language "Oberon", which Niklaus Wirth thinks is his best creation, but still: if someone uses any Wirth language at all, he uses the first fruit of Mr. Wirth's muse, Pascal. Like the quote often (but wrongly) attributed to Mr. Gorbachev:
He who comes too late is punished by life.
So why distributions? Because it is quite difficult to obtain all the sources, compile them, arrange them in bundles which make sense, etc. "Distributions" basically do exactly this, plus many have developed their own mechanism to bind software together into meaningful bundles. Fedora/Red Hat/CentOS/... have "rpm" for that, Debian/Ubuntu/... have "apt", SUSE has "zypper", etc. Basically they all use the same software and bundle it together, write some installation routines, such things. Debian will only incorporate what is thoroughly tested, while Fedora is more like "as soon as the developer hits 'save' I want the compiled version installed" - but these are just two sides of the same coin.
Basically it is always the same: the kernel, plus the set of utilities from GNU (like "GNU-ls", "GNU-mount", "GNU-<any-conceivable-unix-command>"), plus a set of additional software, like a desktop environment (KDE, GNOME, ...), a mail program, a web browser and so on. At this level there is little difference between the distros. Fedora may use version 4.8.1.3.5 of some software while Debian still installs 4.7 or 4.5; the one may install KDE while the other has GNOME, but these are details.
I hope this helps.
bakunin