Re: Part 31
Posted: Sat Dec 22, 2018 1:49 pm
#925 Re: [osFree] Digest Number 203
Frank Griffin
Dec 23 6:33 AM
criguada@... wrote:
> ??? What are you talking about?
> Did you notice this thread started from a message of mine, or what?
Actually, I didn't. Apologies.
> ??? "traditional unix design" is something that every student of an information
> technology course at every university knows.
and it no longer exists (effectively). The "traditional Unix design" that people learn in school is the AT&T kernel circa 1980. That's when most of the textbooks were written. To give you a frame of reference, shared memory was a really radical idea in the Unix community then.
> BTW, you're right about the "Linux doesn't deviate" argument. I'm not talking
> from personal experience, but from several statements by people whose statements
> I consider valuable: people that have made several contributions to OS/2 and the
> OS/2 community in a way that leaves everyone sure of their technical skills.
> I'm sorry but I can't say the same thing about you, though you may be the most
> skilled person in this list.
Taking these in reverse order, I very much doubt that I am the most skilled person on this list (if I am, we have problems). I'm not sure why you would want to base your opinion about Linux on the opinions of people who focus on and contribute to OS/2. Not that those aren't worthy endeavors, and they certainly don't preclude those people having valid knowledge of Linux, but they hardly seem like valid credentials in the Linux arena.
> > Since this is exactly what you said back in the FreeOS days, I have a
> > sneaking suspicion that your knowledge of how Linux deviates from
> > "traditional Unix" isn't based on any current source base. In fact
>
> I think you're mixing up things. I was mostly there in "lurking mode" at the
> time of FreeOS. I may have posted a few messages at the time of the litigation
> that led to the split, but I was not among the most active writers. You're
> probably thinking of the original founder of the FreeOS project, who was,
> IIRC, a Brazilian guy whose name I don't remember (but I can find it if you
> like).
No, I'm not confusing you with Daniel Caetano. But it is true that you expressed similar opinions back then about using Linux; see
http://groups.yahoo.com/group/freeos/message/1621
> > I'm sorry, but it is of extreme interest for this discussion.
>
> This is of NO interest. The fact that the Linux kernel is positively using
> recent Intel improvements doesn't shed any light on the difference between the
> two kernels or their compared performance.
> I'm still much more favourable to a tabular comparison among the different
> kernels which are available to settle the question.
I don't see how you can discount this. These days, Intel processor improvements are done for one reason only: performance. If exploiting the features requires changes in the OS, those changes can only be done by people who have access to the kernel source (one way or another). Almost nobody with official access to the OS/2 kernel source (meaning that they are in a position to get their changes incorporated) is doing anything with HT, or 64-bit, or anything else that may come along. On the other hand, the Linux community is falling over itself to beat Microsoft at exploiting these changes. And, since it's open source, you can see what they're doing. And, since it's POPULAR open source, you can also find a wealth of analysis giving other peoples' opinions of what they're doing.
> > Serenity has no access to kernel source code that I've ever seen them
> > post about. Nor have I ever read a post indicating that they are
> > allowed to modify the kernel.
>
> -- start of quote --
> Ok, among others there is mentioned an SMP fix, bsmp8603.zip, that someone at
> Intel has tested in their free time for Serenity. So I would like to get that
> fix if possible. The rest of the thread doesn't really say whether anyone
> outside Intel has managed to pull the same stunt off, i.e. getting OS2 to
> support HT.
> -- end of quote --
>
> The whole thread is available at the following address:
>
> http://www.os2world.com/cgi-bin/forum/U ... &P=1#ID429
OK, I read the thread. Most of it is single-sentence posts that give the poster's opinion without much to back it up ("I've heard this", "I think that").
As to the Intel patch, either it is based on "escaped" source that Serenity probably couldn't distribute legally, or else it is a binary hack on the distributed OS/2 kernel. I think it's great that somebody did it, and I hope it works, but it hardly seems like a viable ongoing way to incorporate kernel improvements.
If you'd like some more factual data about HT, here's a benchmark article done by the IBM Linux Technology Center (same folks who did the stress test):
http://www-106.ibm.com/developerworks/l ... ary/l-htl/
In the article, they compare the performance of a pre-HT SMP Linux kernel (2.4) with the HT-supporting 2.5.32 development kernel (the 2.6 line), under workloads which are single-user, multi-user, threaded, and non-threaded. Given all the honing done to the Linux 2.4 SMP kernel for the much-publicized Linux vs. WinServer tests a year or so ago, this should be a pretty fair indicator of what you would see with the existing OS/2 SMP kernel versus an OS/2 kernel enhanced to use HT.
For single-user stuff, HT actually ran a few percent slower in most cases (-1%, -3%). The real gains, as you might expect, come with heavy multiuser workloads. One such workload was a chat room simulation:
*******************************************(start quote)
To measure the effects of Hyper-Threading on Linux multithreaded applications, we use the chat benchmark, which is modeled after a chat room. The benchmark includes both a client and a server. The client side of the benchmark will report the number of messages sent per second; the number of chat rooms and messages will control the workload. The workload creates a lot of threads and TCP/IP connections, and sends and receives a lot of messages. It uses the following default parameters:
Number of chat rooms = 10
Number of messages = 100
Message size = 100 bytes
Number of users = 20
By default, each chat room has 20 users. A total of 10 chat rooms will have 20x10 = 200 users. For each user in the chat room, the client will make a connection to the server. So since we have 200 users, we will have 200 connections to the server. Now, for each user (or connection) in the chat room, a "send" thread and a "receive" thread are created. Thus, a 10-chat-room scenario will create 10x20x2 = 400 client threads and 400 server threads, for a total of 800 threads. But there's more.
Each client "send" thread will send the specified number of messages to the server. For 10 chat rooms and 100 messages, the client will send 10x20x100 = 20,000 messages. The server "receive" thread will receive the corresponding number of messages. The chat room server will echo each of the messages back to the other users in the chat room. Thus, for 10 chat rooms and 100 messages, the server "send" thread will send 10x20x100x19 or 380,000 messages. The client "receive" thread will receive the corresponding number of messages.
The test starts by starting the chat server in a command-line session and the client in another command-line session. The client simulates the workload and the results represent the number of messages sent by the client. When the client ends its test, the server loops and accepts another start message from the client. In our measurement, we ran the benchmark with 20, 30, 40, and 50 chat rooms. The corresponding number of connections and threads are shown in Table 3.
****************************************(end of quote)
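The arithmetic in the quoted description is easy to verify; the following sketch just restates the article's parameters (10 rooms, 20 users per room, 100 messages) and recomputes the derived counts:

```python
# Derived workload sizes for the chat benchmark, restating the
# arithmetic from the quoted article.
rooms, users_per_room, messages = 10, 20, 100

users = rooms * users_per_room                    # 200 users -> 200 connections
threads = users * 2 * 2                           # send+receive, client and server: 800
client_sent = rooms * users_per_room * messages   # 20,000 messages from the client
server_sent = client_sent * (users_per_room - 1)  # each echoed to 19 peers: 380,000

print(users, threads, client_sent, server_sent)
```

Scaling the room count (the article runs 20, 30, 40, and 50 rooms) scales all of these linearly, which is why the thread and connection counts climb so quickly.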
Here was the speedup table for the 2.4 SMP kernel:
Table 4. Effects of Hyper-Threading on chat throughput
Number of chat rooms 2419s-noht 2419s-ht Speed-up
20 164,071 202,809 24%
30 151,530 184,803 22%
40 140,301 171,187 22%
50 123,842 158,543 28%
Geometric Mean 144,167 178,589 24%
Note: Data is the number of messages sent by client: higher is better.
Here are the same results for the 2.5.32 development kernel (the 2.6 line), which has explicit support for HT:
Table 7. Effects of Hyper-Threading on Linux kernel 2.5.32
chat workload
Number of chat rooms 2532s-noht 2532s-ht Speed-up
20 137,792 207,788 51%
30 138,832 195,765 41%
40 144,454 231,509 47%
50 137,745 191,834 39%
Geometric Mean 139,678 202,034 45%
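The Speed-up column is just (HT − no-HT) / no-HT, and the summary row is a geometric mean; recomputing them from the Table 4 throughput numbers is a quick sanity check:

```python
import math

# Throughput pairs (no-HT, HT) from Table 4: messages sent per run.
table4 = [(164071, 202809), (151530, 184803), (140301, 171187), (123842, 158543)]

# Per-row speed-up, rounded to whole percent, matching the Speed-up column.
speedups = [round((ht / noht - 1) * 100) for noht, ht in table4]
print(speedups)  # [24, 22, 22, 28]

# Geometric mean of the no-HT column, close to the table's 144,167.
gm = math.exp(sum(math.log(noht) for noht, _ in table4) / len(table4))
print(round(gm))
```

Using the geometric rather than arithmetic mean keeps one unusually fast or slow configuration from dominating the summary, which is standard practice for throughput ratios.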
As you can see, being able to update the kernel source and staying on top of improvements can make quite a difference. Which is why, in selecting a (micro)kernel for osFree, I give a lot of weight to whether or not we have a reasonable expectation of seeing work like this done in a timely fashion. The size and quality of the Linux kernel team and their desire to quash MS suggests to me that they have far more resource and motivation to do this than most (if not all) other contenders.
By the way, I'm not saying that it is important for osFree to support HT (or not). HT is just an example of a hardware improvement that a closed source, or out-of-reach kernel, or one we don't have the resource to maintain, can't exploit. Others will come along which may mean more or less to osFree.
> This is obviously THE argument, and it would be for anybody who is concerned
> about OS/2 survival, unless you want to have yet another Linux distribution with
> some OS/2 flavor.
It's like the old story of the blind men and the elephant. What OS/2 is to you depends on what you do with it. You can write apps or drivers for it, in which case you see the APIs. You can use it as a server, in which case you see the reliability, performance, and scalability. Or, you can use it as a client, in which case you see the WPS and the existing apps. You don't see most of OS/2, and you never will. If a replacement shows you all the same features you expect (same APIs, runs the same apps), then it's a good replacement.
> Either you're very lucky, or you don't mess very much with Linux.
> I had to mess with the RH9 kernel just a month ago trying to install on an older
> system, and I see I'm not alone, judging from the messages that have been posted
> recently.
> With OS/2 you NEVER have to mess with the kernel. If a device is supported by
> the system you just install the driver and you're done.
Umm, yeah. And if OS/2 *doesn't* support the device, then you're just up the proverbial creek, which doesn't sound like a better solution to me. I suspect that if OS/2 offered the ability to support additional hardware by obtaining and recompiling the kernel sources (16-bit C, assembler, and all), you'd happily do it.
The fact is that, with module support, most distributions build every possible kernel option as a module, under the theory that it's worth the disk space since, if the hardware isn't present, the module just won't be loaded at runtime. And anyway, recompiling the Linux kernel is a matter of picking options from a graphical tool, pushing a button, and finding something else to do for an hour. I've been doing it since the mid-90s, although these days I only need to do it to debug problems where I need to modify kernel source.
The Mandrake distro has a total of about 9 CDs' worth of packages, all told. The base distro they put out, though, is 3 or 4 ISO images, so they are constantly picking and choosing what will "make it" to the base CDs. There was a small uproar on the Mandrake Cooker mailing list a while back because they chose to bump the 50MB package containing the kernel source off of the bare-bones distro. People said "but newbies won't be able to recompile the kernel if they need to". Mandrake's answer was "99.99% of them never do, and those that do know what they're doing and where to find it". Right or wrong, it's an indication that a commercial marketing team with a financial stake in the issue believes that kernel recompiles are pretty rare among users as opposed to developers.
> I think that the concept of multiple roots and single root is absolutely clear
> to anybody on this list, at least those that have some experience on Linux or
> other unices. It's not necessary to explain it.
> And how you can state that "there is no difference" between having separate
> partitions, each one with its own root, and having a single root where you mount
> partitions under subdirectories, well it really beats me.
I thought I explained that, but I'll try again. Other than a semi-religious stance of "drives are just BETTER", I see very little difference between, say,
xcopy C:\onefile.ext D:\twofile.ext
and
xcopy \C\onefile.ext \D\twofile.ext
As I said before, if the shell (CMD.EXE) parser wants to, it can accept the first form and translate it to the second. In a graphical app like "Drives", you would see no difference at all; it's still just a directory tree, whether you call the top-level node "C:", "C", "c-drive", or "/".
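That parser translation is mechanical; here is a hypothetical sketch (the function name and the exact mapping are mine, not from any real CMD.EXE implementation):

```python
import re

def drive_path_to_unix(path):
    """Translate an OS/2-style path like 'C:\\onefile.ext' to '/C/onefile.ext'.

    Hypothetical sketch of the shell translation described above; a real
    parser would also handle relative paths, UNC names, quoting, etc.
    """
    match = re.match(r"^([A-Za-z]):[\\/]?(.*)$", path)
    if not match:
        # No drive letter: just normalize the separators.
        return path.replace("\\", "/")
    drive, rest = match.groups()
    return "/" + drive.upper() + ("/" + rest.replace("\\", "/") if rest else "")

print(drive_path_to_unix(r"C:\onefile.ext"))      # /C/onefile.ext
print(drive_path_to_unix(r"D:\dir\twofile.ext"))  # /D/dir/twofile.ext
```

The point is that the drive letter becomes nothing more than a top-level directory name, which is exactly why the two xcopy forms above are interchangeable.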
Partitions are a completely different issue. Suppose I have two partitions, seen under OS/2 as C: and D:. In native Linux, each partition would have a root directory corresponding to C:\ or D:\. I can define a directory called "D" in the C:\ directory, and then mount the second partition there, e.g.
mount /dev/hda2 /D
at which point I can refer to all of the files on the second partition as /D/filename-or-filepath. If you want to treat C no differently than D, just define a symbolic link from /C to /, and all of the first partition files will answer to /C/filename as well as /filename. Again, CMD.EXE or graphical file-choosers can make this look identical to current OS/2.
In short, you can use as many or as few partitions as you want, with as many or as few virtual drives on them as you want.
> Sure, I'm correct about saying that it's not related to the kernel, but what you
> say is resembling more and more a Linux distro with the capability to run OS/2
> apps, not a new OS based on Linux kernel.
Correct. I am describing an OS/2 personality built on top of a captive and more-or-less hidden Linux base. That achieves the objective of an OS/2 clone with OS/2-style reliability and scalability on day one, and a large, committed team of people supporting the parts of it which are unrelated to the OS/2 personality.
Starting from there, if you then think that it's desirable to replace, say, X, then you have the leisure of doing it in parallel with the rest of the OS/2 community being able to run OS/2 stuff.
> And talking about the mess, obviously I'm not talking about window managers.
> What do you say about the lack of a global system clipboard, like in OS/2 and
> Win? Yes, I know that there is some software that tries to address the problem.
Old stuff. GNOME and KDE now share clipboard data. The only things that don't are old native X apps.
And, in point of fact, it's irrelevant to osFree, since you would base PM and WinOS2 on one or the other of GNOME or KDE, not both, in which case there would never have been a clipboard issue to begin with, since these have always had internal clipboard support.
> What do you say about the lack of global keyboard mappings?
I'm ignorant of this issue. I know that Mandrake has what they call global keyboard mappings, but I don't know if they address the problem to which you refer. If you're referring to shortcut keys which are common across apps, it's the same as the clipboard issue: old X apps rolled their own. Modern apps written to GTK+ or KDE or whatever use shortcuts which are common to all apps using the toolkit.
> What do you say about the lack of a system registry, instead of each application
> trying to solve the problem with it's own (often baroque) config files?
I'm not enough of a toolkit maven to swear to you that the newer ones don't have such a registry, but it doesn't really matter. The OS/2 API includes such a registry, so we'd provide one as part of implementing the API. If the toolkit has (or ever gets) one, we'd delegate to that. In any case, OS/2 apps would have one.
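To make that concrete, here is a hypothetical sketch of how OS/2-style profile calls (the Prf* family, e.g. PrfWriteProfileString / PrfQueryProfileString) could be backed by an ordinary config store; the class and method names are illustrative, not an actual osFree design:

```python
import configparser

class ProfileRegistry:
    """Hypothetical OS/2-style profile registry (app/key/value pairs)
    backed by a single INI-style store."""

    def __init__(self):
        self._store = configparser.ConfigParser()

    def write_string(self, app, key, value):
        # Rough analogue of PrfWriteProfileString: one section per application.
        if not self._store.has_section(app):
            self._store.add_section(app)
        self._store.set(app, key, value)

    def query_string(self, app, key, default=""):
        # Rough analogue of PrfQueryProfileString, with a caller-supplied default.
        return self._store.get(app, key, fallback=default)

reg = ProfileRegistry()
reg.write_string("MyApp", "WindowPos", "100,200")
print(reg.query_string("MyApp", "WindowPos"))  # 100,200
```

The delegation point is the `_store` attribute: if the underlying toolkit ever grows a registry of its own, only that backing store changes, and OS/2 apps calling the Prf-style API never notice.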
> You're obviously ignoring projects aimed at replacing X. Just do a google search
> for "xfree replacement" and you'll find a few, some quite advanced and some just
> "wannabe".
Well, yes, you're correct, I am ignoring them. Because no major Linux distro uses them. I'm sure there are people who dislike X enough to try writing a replacement, and I'm also sure that there are people who just want the experience of writing their own X. More power to them.
But the significant fact to me is that everybody who is actually putting their money on the line with Linux, e.g. IBM, Red Hat, Mandrake, etc., seems to be satisfied enough with X and its ongoing progress.
> > But nobody programs to the X API, which is considered very low-level.
>
> See UDE for an example (Unix Desktop Environment).
>
You're correct, my statement was overly broad. Obviously, some people choose to program to X. I should have said that most new graphical apps included in the main distros don't program to X. They use either GNOME or KDE, and each of these will run the other's apps. Of course, UDE isn't an application, it's a Window Manager, and a full-fledged WM (as opposed to a layer on top of another) doesn't have much choice but to program to X.
I find your choice of UDE as an example interesting. Their project description suggests that they reject GTK+ and Qt because of bloat, and find X (or the Xlibs) to be pure, untainted, and worthy of being a base for their new WM. Apparently they don't agree with the folks who are looking to replace X, which just goes to show that there are a lot of opinions out there.