Is UNIX an open source OS ?


 
# 1  
Old 07-22-2014

Hi everyone,
I know the following are noob questions, but I am asking them because I am confused about the basic history behind UNIX and Linux.

Ok onto business, my questions are-:

  1. Was/Is UNIX ever an open source operating system?
  2. If UNIX was closed source (as Wikipedia states, "historically closed source"), then is Linux a reverse-engineered version of UNIX?
    I mean, if UNIX was closed source, how did Linus Torvalds create a "UNIX-like" OS called Linux without having access to the source code of a closed source OS? The only possible explanation seems to be reverse engineering UNIX, just like ReactOS is a reverse-engineered, binary-compatible version of Windows.
  3. Now this seems a little odd to ask. Is Linux actually an OS, or is Linux just a kernel? I am asking this because the Debian OS which I use can run on the Linux kernel, the FreeBSD kernel, and I think the GNU Hurd kernel (I have no idea what that is, by the way). Wikipedia defines Linux as an OS, while I have never used an OS called "Linux" -- only distros like Fedora, Debian, etc.

Okay that's about it for now. Please don't flame me, I am really confused between the basics here.

Thanks in advance.
# 2  
Old 07-22-2014
Quote:
Originally Posted by sreyan32
Was/Is UNIX ever an open source operating system? If UNIX was closed source (as Wikipedia states, "historically closed source"), then is Linux a reverse-engineered version of UNIX?
OK, a few definitions up front:

AT&T had an experimental lab where some determined guys (Ken Thompson, Brian Kernighan, Dennis Ritchie, ...) programmed an OS - the first UNIX. Over time there were several revisions, and at one point AT&T gave the code to a university (Berkeley) and let them play with it. They developed (today you'd say "forked") their own version. (This is called "BSD" - Berkeley Software Distribution; AT&T's main version is called "System V". You will surely find more history about this if you search for it.) Initially, this was "Unix".

All these systems were closed source, but it was possible to buy a license from AT&T. You got the sources and were allowed to rebuild them, even change them to some extent. Companies like IBM (but also Sun, HP, DEC, ...) did this and developed their own flavour of UNIX (in IBM's case, "AIX"). These were "Unix" too.

But the success of Unix came not from the (by the way, very good) implementation of the OS, but from the stunning simplicity and elegance of its design. Therefore, after some legal hassles, what "Unix" meant changed. Before, it was a code base in which the design principles were inherently inscribed. Not any longer. Today, "Unix" is a set of things an operating system must do under defined circumstances: "If system call X() is issued, the system must return A in case of ..., B in case of ...", and so forth. See it like this: before, "Unix" was a blueprint for a certain car. Now there is just a standard which says "if I turn the steering wheel clockwise, the car is supposed to change direction to the right". How the connection between steering wheel and tires is made doesn't matter at all, as long as the system reacts in the expected, standardized manner. (Search for "POSIX" and "Single UNIX Specification" to get details about how this standard is designed. Or ask our revered local Master of Standards, Don Cragun, and be prepared for the highest-quality information on this issue you will ever get.)

To answer your question about Linux: Linus Torvalds did not "reverse engineer". He started from a very small and basic implementation of a Unix kernel ("Minix"), which was written for educational purposes by an American-born Dutch university professor, Andrew Tanenbaum. Its scope was to show students how to write operating systems (kernels), and as an example Mr. Tanenbaum used Unix, because it is so awfully well documented. Mr. Torvalds used a Minix system as his starting point and developed his own Unix-like kernel. This, see above, means the kernel is programmed all anew, but if a Unix kernel reacts to some circumstance/does something in a certain way, it will react in the same way/do the same.

The Linux project has (probably because of political considerations) never sought the official certification of being a real Unix. It is - this much we know - as "Unix" as it gets and would in all likelihood pass the certification, but this has not happened yet.

Quote:
Originally Posted by sreyan32
just like ReactOS is a reversed engineered binary compatible version of Windows.
The difference is: Windows was built with a certain platform (the IBM-compatible PC) in mind. It runs there and nowhere else. The upside is that software is binary compatible: once you get a clean compile, you can expect the compiled binary to run on every such system (well - that is the theory!). Unix is not binary but source-code compatible. It was intended to make few or no assumptions about the system it runs on, but this in turn means that a binary will only run on the target machine it was compiled for.

Windows: once you get it compiled, the compiled binary will run on every (Windows) system and do the same.

Unix: you can compile the same source on every Unix system, and each produced binary will do the same on its own system.

Therefore "reverse engineering" was not necessary. A software compiled for, say, SCO Unix on PC, will not run on Linux for PCs, but the same source can be compiled for any UNIX and each resulting binary is expected to do the same. (Again: this is the theory. I spare you the gory details so that your nights sleep is undisturbed.)

Quote:
Originally Posted by sreyan32
Now this seems a little odd to ask. Is LINUX actually an OS? Or is Linux just a kernel? I am asking this because the Debian OS which I use can work on Linux, FreeBSD and I think the GNU Hurd kernel (I have no idea what it is, by the way).
Yes, Linux is an OS. And yes, "Linux" is just the kernel. The kernel of an OS does not do everything itself, but it sets every design decision as a given. Therefore, in fact, the kernel IS the OS. "Kernel" here also means the driver layer, process accounting, the process environment, resource scheduling, filesystems, system calls, libraries, and so on.

On top of the Linux kernel one usually uses the GNU toolset. GNU developed their own OS kernel ("the Hurd"), which is intended to surpass Unix design-wise, but (they might not like to hear it, but this is my opinion) it is doomed to fail the same way as Plan 9 (another attempt to obsolete Unix) because of the sheer simplicity and straightforwardness of UNIX's design. This is similar to the programming language "Oberon", which Niklaus Wirth considers his best creation - and still, if someone uses any Wirth language at all, he uses the first fruit of Mr. Wirth's muse, Pascal. Like the quote often (but wrongly) attributed to Mr. Gorbachev:

He who comes too late is punished by life.

It is quite difficult to obtain all the sources, compile them, arrange them in bundles which make sense, and so on. "Distributions" do basically exactly this; in addition, many have developed their own mechanism to bind software together into meaningful bundles. Fedora/Red Hat/CentOS/... have "rpm" for that, Debian/Ubuntu/... have "apt", SuSE has "zypper", etc. Basically they all take the same software, bundle it together, write some installation routines, such things. Debian will only incorporate what is thoroughly tested, while Fedora is more like "as soon as the developer hits 'save' I want the compiled version installed" - but these are just two sides of the same coin.

Basically it is always the same: the kernel, plus the set of utilities from GNU (like "GNU-ls", "GNU-mount", "GNU-<any-conceivable-unix-command>"), plus a set of additional software, like a desktop manager (KDE, GNOME, ...), a mail program, a web browser and so on. At this level there is little difference between the distros. Fedora may use version 4.8.1.3.5 of some software while Debian still installs 4.7 or 4.5; one may install KDE while the other has GNOME; but these are details.

I hope this helps.

bakunin
These 5 Users Gave Thanks to bakunin For This Post:
# 3  
Old 07-22-2014
1) UNIX used to be a closed-source operating system, yes, made by Bell and AT&T.

UNIX is no longer an operating system, however. That particular kind is no longer sold, and the name is now controlled by a different group which maintains a paper standard -- defining what features and utilities a UNIX operating system is supposed to have without getting too specific about how it works internally. If you follow these papers, and certify with them, you can call your operating system UNIX.

These days there are open UNIXes (the many kinds of BSD, some kinds of Solaris), closed UNIXes (AIX), and everything in between.

2) Linux is not UNIX. Linux and the GNU utilities were actually made in a spirit of competition with UNIX. Same with HURD -- HURD was actually the "official" GNU kernel; Linux was an upstart project which appeared out of nowhere and overtook it.

It's nothing like ReactOS either, which can run Windows programs natively -- you couldn't run HPUX executables, AIX executables, or SunOS executables on Linux natively. It wouldn't make much sense to even try, these proprietary UNIXes are designed to run on their own proprietary machines. For that matter, Linux on ARM is not compatible with Linux on x86!

What different UNIXes have in common is source compatibility -- you can't expect to haul a program from an alien architecture and expect it to run, but you can hope to build it from source on some UNIX's own compiler and get the same effect. This is the sense in which different UNIX and UNIX-likes are supposed to be compatible. They have the same kernel features and programming language "construction kits". This is also how Linux has managed to spread to such a bewildering variety of architectures from supercomputers to set-top boxes.

3) Linux is a kernel. Linux plus the GNU utilities makes a complete UNIX-like operating system. HURD or MACH plus the GNU utilities also makes a complete UNIX-like operating system. MACH plus the BSD utilities could make a genuine UNIX-certified operating system. Different Linux distributions are the Linux kernel plus different userland utilities.

The thing is, Linux and GNU weren't made to be a UNIX -- they were made in direct competition with it. GNU even stands for "GNU's not UNIX". This stems right from the bad old days when a license for the UNIX source could set you back a cool hundred grand in 1980 dollars... It eschews the UNIX name for legal reasons but is very similar. It matters less these days, now that AT&T doesn't control the brand and there's many open alternatives. I hope GNU will forget the old feud and get things UNIX-certified someday.

These 5 Users Gave Thanks to Corona688 For This Post:
# 4  
Old 07-24-2014
Okay, first of all, a great many thanks for taking the time out to give me such a detailed explanation. I couldn't have asked for better. But I have a couple of questions.
Quote:
Originally Posted by bakunin
Windows: once you get it compiled, the compiled binary will run on every (Windows) system and do the same.

Unix: you can compile the same source on every Unix system, and each produced binary will do the same on its own system.

Therefore "reverse engineering" was not necessary. A program compiled for, say, SCO Unix on a PC will not run on Linux for PCs, but the same source can be compiled for any UNIX, and each resulting binary is expected to do the same. (Again: this is the theory. I spare you the gory details so that your night's sleep is undisturbed.)
Are you saying that for example I have a piece of source code like -:
Code:
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}

And I compile it on a SCO UNIX machine and take the executable to a Solaris machine, the executable won't run? Are you saying that I would need the source of the hello-world program and would then have to build it again on the Solaris machine?

Quote:
Originally Posted by bakunin
The kernel of an OS does not do everything itself, but it sets every design decision as a given. Therefore, in fact, the kernel IS the OS.
I am sorry, but could you elaborate on what you mean by this? The kernel is in the end responsible for how software interacts with hardware, so it kind of does everything.

One last question it may be off-topic. You said -:
Quote:
Originally Posted by bakunin
ask our revered local Master of Standards, Don Cragun and be prepared for the highest quality information on this issue you will ever get.
How do I contact someone like Don Cragun ? I am not saying that your answers were wrong or insufficient in any way but if I wanted to contact him then how would I do it ? By private message ?

Quote:
Originally Posted by bakunin
I hope this helps.
Yes it did immensely.
# 5  
Old 07-24-2014
Hi sreyan32...
Quote:
How do I contact someone like Don Cragun ? I am not saying that your answers were wrong or insufficient in any way but if I wanted to contact him then how would I do it ? By private message ?
Don't worry; do something amiss and he will contact you... ;)

These 2 Users Gave Thanks to wisecracker For This Post:
# 6  
Old 07-24-2014
Quote:
Originally Posted by sreyan32
Are you saying that for example I have a piece of source code like -:
Code:
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}

And I compile it on a SCO UNIX machine and take the executable to a Solaris machine, the executable won't run? Are you saying that I would need the source of the hello-world program and would then have to build it again on the Solaris machine?
Yes. Exactly.
Quote:
I am sorry, but could you elaborate on what you mean by this? The kernel is in the end responsible for how software interacts with hardware, so it kind of does everything.
The features of the kernel define and constrain what your programs can do. Compare any UNIX-like to the Windows kernel for example.

Do you get disk devices? Yes -- as drive letters, c:\ d:\ etc, not as direct files.

Do you get terminal devices? Not really, unless you use a com port, and the emulation is still limited.

Do you get partitions? Yes, each mounted on their own root, not (usually) nested.

Do you get folders? Yes -- separated by \, rather than /. (Oddly, some calls in Windows actually can separate by /, some can't.)

Do you get files? Yes -- with case-insensitive names.

This means that Cygwin, which does as much as it can to act like UNIX within the Windows framework, can't avoid these facts. Some things it can translate between -- the / vs. \ -- but some are simply unavoidable, like case-insensitive filenames. No matter what Cygwin does, if you create 'a' in the same folder as 'A', you are just overwriting 'A' again.

This has made some parts of Cygwin more difficult, slow, and complicated than they need to be, just because Windows really isn't meant to do what's being asked of it here. fork(), for example, is efficient and fundamental in UNIX, but slow and nightmarish to emulate in Windows, because Windows uses a very different process model. The same goes for terminal devices, which are a bit of a nightmare to build from scratch anywhere you go -- Linus' project began as a terminal emulator, and from there it wasn't too far to make it a complete kernel.

And in the end, Windows' kernel just isn't suited to running UNIX-like things. UNIX can run thousands of tiny, short-lived processes in a few moments without a hiccup; that is one of the things it's designed for. Try that on Windows and it lags, hiccups, and kills random processes here and there for no apparent reason other than "if you don't make so many processes, it might do that less". Windows prefers fewer, larger, longer-lived processes.

These 3 Users Gave Thanks to Corona688 For This Post:
# 7  
Old 07-24-2014
Quote:
Originally Posted by sreyan32
Are you saying that for example I have a piece of source code like -:
Code:
#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}

And I compile it on a SCO UNIX machine and take the executable to a Solaris machine, the executable won't run? Are you saying that I would need the source of the hello-world program and would then have to build it again on the Solaris machine?
Like Corona688 already said: yes, precisely. The compatibility of UNIX is defined as the guarantee that you can compile the same source code on different systems with the same outcome. For instance, this means that regardless of how your terminal is constructed (in terms of real hardware), you can expect printf() to behave the same (or analogously) on every one of them. You can look printf() up in the POSIX standard and will find a detailed "printf() is required to do X in case of Y, return A in case of B, ...".

Notice that this standard just describes what has to come out, not how that outcome is realized! This is why UNIX (and Unix-like systems) run on everything from small embedded systems in your washing machine, through cell phones (Android is just a customized Linux kernel), most WLAN routers, and NAS appliances, up to real big iron like the IBM p795. We have about a dozen p780s in our data centers, most with 4 TB of memory and 128 processors, each running dozens of LPARs. Compare this, along with some 50-60 PCI buses in each I/O subsystem (each system can have up to 4 of them), with the typical PC-compatible server Windows runs on. In addition we have some z/Linux systems running on the mainframe, Linux on all sorts of hardware, a few Sun servers running Solaris (another Unix), etc.

Quote:
Originally Posted by sreyan32
I am sorry but could you elaborate what you mean by this. The kernel is at the end responsible for how software interacts with hardware, so it kinda does everything.
Yes, exactly. It is not only the kernel but also the standard library which executes all the system calls in a standardized way. If you never have to execute interactive commands but only one fixed program, you do not need all the utilities that usually come with an OS and which do things like creating users, files, and so on. Most embedded systems are constructed this way: a Linux kernel, the standard library, the one program it is supposed to run, with everything not necessary to run that one program stripped from the kernel and the library. Take apart your home WLAN router, telephone or similar device and you will probably find exactly this, burned into an EPROM.


Quote:
Originally Posted by sreyan32
How do I contact someone like Don Cragun ? I am not saying that your answers were wrong or insufficient in any way but if I wanted to contact him then how would I do it ? By private message ?
For instance. He does not bite (well, not unless the moon is full, anyway), as he is a very friendly guy and by far the best expert on UNIX standards issues we have here; you can ask him if you need to know details we can't provide. He won't answer via PM (because that would not contribute to the knowledge base we are building here), but he might write something into this thread.

I hope this helps.

bakunin
These 2 Users Gave Thanks to bakunin For This Post: