#925 Re: [osFree] Digest Number 203
Frank Griffin
Dec 23 6:33 AM
criguada@... wrote:
> ??? What are you talking about?
> Did you notice this thread started from a message of mine, or what?
Actually, I didn't. Apologies.
> ??? "traditional unix design" is something that every student of an information
> technology course at every university knows.
and it no longer exists (effectively). The "traditional Unix design" that people learn in school is the AT&T kernel circa 1980. That's when most of the textbooks were written. To give you a frame of reference, shared memory was a really radical idea in the Unix community then.
> BTW, you're right about the "Linux doesn't deviate" argument. I'm not talking
> from personal experience, but from several statements by people whose statements
> I consider valuable: people that have made several contributions to OS/2 and the
> OS/2 community in a way that leaves everyone sure of their technical skills.
> I'm sorry but I can't say the same thing about you, though you may be the most
> skilled person in this list.
Taking these in reverse order, I very much doubt that I am the most skilled person on this list (if I am, we have problems). I'm not sure why you would want to base your opinion about Linux on the opinions of people who focus on and contribute to OS/2. Not that those aren't worthy endeavors, and they certainly don't preclude those people having valid knowledge of Linux, but they hardly seem like valid credentials in the Linux arena.
> > Since this is exactly what you said back in the FreeOS days, I have a
> > sneaking suspicion that your knowledge of how Linux deviates from
> > "traditional Unix" isn't based on any current source base. In fact
>
> I think you're mixing up things. I was mostly there in "lurking mode" at the
> time of FreeOS. I may have posted a few messages at the time of the litigation
> that led to the split, but I was not among the most active writers. You're
> probably thinking about the original founder of the FreeOS project, who was a
> Brazilian guy (IIRC) whose name I don't remember (but I can find it if
> you like).
No, I'm not confusing you with Daniel Caetano. But it is true that you expressed similar opinions back then about using Linux; see
http://groups.yahoo.com/group/freeos/message/1621
> > I'm sorry, but it is of extreme interest for this discussion.
>
> This is of NO interest. The fact that the Linux kernel is positively using
> recent Intel improvements doesn't shed any light on the difference between the two
> kernels or their comparative performance.
> I'm still much more in favour of a tabular comparison among the different
> kernels which are available, to settle the question.
I don't see how you can discount this. These days, Intel processor improvements are done for one reason only: performance. If exploiting the features requires changes in the OS, those changes can only be done by people who have access to the kernel source (one way or another). Almost nobody with official access to the OS/2 kernel source (meaning that they are in a position to get their changes incorporated) is doing anything with HT, or 64-bit, or anything else that may come along. On the other hand, the Linux community is falling over itself to beat Microsoft at exploiting these changes. And, since it's open source, you can see what they're doing. And, since it's POPULAR open source, you can also find a wealth of analysis giving other peoples' opinions of what they're doing.
> > Serenity has no access to kernel source code that I've ever seen them
> > post about. Nor have I ever read a post indicating that they are
> > allowed to modify the kernel.
>
> -- start of quote --
> Ok, among other there is mentioned a smp fix, bsmp8603.zip, that someone at
> Intel has tested on their free time for Serenity. So I would like to get that
> fix if possible. The rest of the thread doesn't really say anything if anypne
> outside Intel than has manage to pull the same stunt off, ie getting OS2 to
> support HT.
> -- end of quote --
>
> The whole thread is available at the following address:
>
> http://www.os2world.com/cgi-bin/forum/U ... &P=1#ID429
OK, I read the thread. Most of it is single-sentence posts that give the poster's opinion without much to back it up ("I've heard this", "I think that").
As to the Intel patch, either it is based on "escaped" source that Serenity probably couldn't distribute legally, or else it is a binary hack on the distributed OS/2 kernel. I think it's great that somebody did it, and I hope it works, but it hardly seems like a viable ongoing way to incorporate kernel improvements.
If you'd like some more factual data about HT, here's a benchmark article done by the IBM Linux Technology Center (same folks who did the stress test):
http://www-106.ibm.com/developerworks/l ... ary/l-htl/
In the article, they compare the performance of a pre-HT SMP Linux kernel (2.4) with the HT-supporting 2.6 kernel, under workloads which are single-user, multi-user, threaded, and non-threaded. Given all the honing done to the Linux 2.4 SMP kernel for the much-publicized Linux vs. WinServer tests a year or so ago, this should be a pretty fair indicator of what you would see with the existing OS/2 SMP kernel versus an OS/2 kernel enhanced to use HT.
For single-user stuff, HT actually ran a few percent slower in most cases (-1%, -3%). The real gains, as you might expect, come with heavy multiuser workloads. One such workload was a chat room simulation:
*******************************************(start quote)
To measure the effects of Hyper-Threading on Linux multithreaded applications, we use the chat benchmark, which is modeled after a chat room. The benchmark includes both a client and a server. The client side of the benchmark will report the number of messages sent per second; the number of chat rooms and messages will control the workload. The workload creates a lot of threads and TCP/IP connections, and sends and receives a lot of messages. It uses the following default parameters:
Number of chat rooms = 10
Number of messages = 100
Message size = 100 bytes
Number of users = 20
By default, each chat room has 20 users. A total of 10 chat rooms will have 20x10 = 200 users. For each user in the chat room, the client will make a connection to the server. So since we have 200 users, we will have 200 connections to the server. Now, for each user (or connection) in the chat room, a "send" thread and a "receive" thread are created. Thus, a 10-chat-room scenario will create 10x20x2 = 400 client threads and 400 server threads, for a total of 800 threads. But there's more.
Each client "send" thread will send the specified number of messages to the server. For 10 chat rooms and 100 messages, the client will send 10x20x100 = 20,000 messages. The server "receive" thread will receive the corresponding number of messages. The chat room server will echo each of the messages back to the other users in the chat room. Thus, for 10 chat rooms and 100 messages, the server "send" thread will send 10x20x100x19 or 380,000 messages. The client "receive" thread will receive the corresponding number of messages.
The test starts by starting the chat server in a command-line session and the client in another command-line session. The client simulates the workload and the results represent the number of messages sent by the client. When the client ends its test, the server loops and accepts another start message from the client. In our measurement, we ran the benchmark with 20, 30, 40, and 50 chat rooms. The corresponding number of connections and threads are shown in Table 3.
****************************************(end of quote)
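Just to make the arithmetic in the quoted description concrete, here's a tiny C sketch of my own (not from the IBM article) that derives the default workload numbers from the benchmark parameters:

/* Sketch (mine, not the benchmark's code): where the quoted numbers come from. */
#include <stdio.h>

int main(void)
{
    int rooms = 10, users_per_room = 20, messages = 100;

    int connections    = rooms * users_per_room;             /* 200 client connections        */
    int client_threads = connections * 2;                    /* one send + one receive = 400  */
    int server_threads = connections * 2;                    /* likewise on the server = 400  */
    int client_msgs    = connections * messages;             /* 10 x 20 x 100 = 20,000        */
    int server_msgs    = client_msgs * (users_per_room - 1); /* echoed to 19 room-mates = 380,000 */

    printf("connections=%d threads=%d client msgs=%d server msgs=%d\n",
           connections, client_threads + server_threads, client_msgs, server_msgs);
    return 0;
}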
Here was the speedup table for the 2.4 SMP kernel:
Table 4. Effects of Hyper-Threading on chat throughput

Number of chat rooms    2419s-noht    2419s-ht    Speed-up
20                         164,071     202,809         24%
30                         151,530     184,803         22%
40                         140,301     171,187         22%
50                         123,842     158,543         28%
Geometric mean             144,167     178,589         24%

Note: data is the number of messages sent by the client; higher is better.
Here are the same results for the 2.6 kernel, which has explicit support for HT:
Table 7. Effects of Hyper-Threading on Linux kernel 2.5.32, chat workload

Number of chat rooms    2532s-noht    2532s-ht    Speed-up
20                         137,792     207,788         51%
30                         138,832     195,765         41%
40                         144,454     231,509         47%
50                         137,745     191,834         39%
Geometric mean             139,678     202,034         45%
As you can see, being able to update the kernel source and stay on top of improvements can make quite a difference. Which is why, in selecting a (micro)kernel for osFree, I give a lot of weight to whether or not we have a reasonable expectation of seeing work like this done in a timely fashion. The size and quality of the Linux kernel team and their desire to quash MS suggest to me that they have far more resource and motivation to do this than most (if not all) other contenders.
By the way, I'm not saying that it is important for osFree to support HT (or not). HT is just an example of a hardware improvement that a closed source, or out-of-reach kernel, or one we don't have the resource to maintain, can't exploit. Others will come along which may mean more or less to osFree.
> This is obviously THE argument, and it would be for anybody who is concerned
> about OS/2 survival, unless you want to have yet another Linux distribution with
> some OS/2 flavor.
It's like the old story of the blind men and the elephant. What OS/2 is to you depends on what you do with it. You can write apps or drivers for it, in which case you see the APIs. You can use it as a server, in which case you see the reliability, performance, and scalability. Or, you can use it as a client, in which case you see the WPS and the existing apps. You don't see most of OS/2, and you never will. If a replacement shows you all the same features you expect (same APIs, runs the same apps), then it's a good replacement.
> Either you're very lucky, or you don't mess very much with Linux.
> I had to mess with the RH9 kernel just a month ago trying to install on an older
> system, and I see I'm not alone, judging from the messages that have been posted
> recently.
> With OS/2 you NEVER have to mess with the kernel. If a device is supported by
> the system you just install the driver and you're done.
Umm, yeah. And if OS/2 *doesn't* support the device, then you're just up the proverbial creek, which doesn't sound like a better solution to me. I suspect that if OS/2 offered the ability to support additional hardware if you obtained and recompiled the kernel sources (16-bit C, assembler, and all), you'd be happy to do it.
The fact is that, with module support, most distributions choose every possible kernel option as a module, on the theory that it's worth the disk space since, if the hardware isn't present, the module just won't be loaded at runtime. And anyway, recompiling the Linux kernel is a matter of picking options from a graphical tool, pushing a button, and finding something else to do for an hour. I've been doing it since the mid-90s, although these days I only need to do it to debug problems where I need to modify kernel source.
The Mandrake distro has a total of about 9 CDs' worth of packages, all told. The base distro they put out, though, is 3 or 4 ISO images, so they are constantly picking and choosing what will "make it" onto the base CDs. There was a small uproar on the Mandrake Cooker mailing list a while back because they chose to bump the 50MB package containing the kernel source off of the bare-bones distro. People said "but newbies won't be able to recompile the kernel if they need to". Mandrake's answer was "99.99% of them never do, and those that do know what they're doing and where to find it". Right or wrong, it's an indication that a commercial marketing team with a financial stake in the issue believes that kernel recompiles are pretty rare among users as opposed to developers.
> I think that the concept of multiple roots and single root is absolutely clear
> to anybody on this list, at least those that have some experience on Linux or
> other unices. It's not necessary to explain it.
> And how you can state that "there is no difference" between having separate
> partitions, each one with its own root, and having a single root where you mount
> partitions under subdirectories, well it really beats me.
I thought I explained that, but I'll try again. Other than a semi-religious stance of "drives are just BETTER", I see very little difference between, say,
xcopy C:\onefile.ext D:\twofile.ext
and
xcopy \C\onefile.ext \D\twofile.ext
As I said before, if the shell (CMD.EXE) parser wants to, it can accept the first form and translate it to the second. In a graphical app like "Drives", you would see no difference at all; it's still just a directory tree, whether you call the top-level node "C:", "C", "c-drive", or "/".
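To show how mechanical that translation is, here's a rough C sketch of my own (an illustration only, not actual CMD.EXE or osFree code) that rewrites a drive-letter path into the single-root form; UNC names, quoting, and relative paths are deliberately ignored:

/* Illustration only: "C:\onefile.ext" -> "/C/onefile.ext" */
#include <ctype.h>
#include <stdio.h>

static void drive_to_root(const char *dos, char *out, size_t outlen)
{
    size_t i = 0, o = 0;

    if (isalpha((unsigned char)dos[0]) && dos[1] == ':') {
        out[o++] = '/';                                 /* "C:" becomes "/C" */
        out[o++] = (char)toupper((unsigned char)dos[0]);
        i = 2;
    }
    while (dos[i] != '\0' && o + 1 < outlen) {
        out[o++] = (dos[i] == '\\') ? '/' : dos[i];     /* flip the slashes */
        i++;
    }
    out[o] = '\0';
}

int main(void)
{
    char buf[64];
    drive_to_root("C:\\onefile.ext", buf, sizeof buf);
    printf("%s\n", buf);                                /* prints /C/onefile.ext */
    return 0;
}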
Partitions are a completely different issue. Suppose I have two partitions, seen under OS/2 as C: and D:. In native Linux, each partition would have a root directory corresponding to C:\ or D:\. I can define a directory called "D" in the C:\ directory, and then mount the second partition there, e.g.
mount /dev/hda2 /D
at which point I can refer to all of the files on the second partition as /D/filename-or-filepath. If you want to treat C no differently than D, just define a symbolic link from /C to /, and all of the first partition files will answer to /C/filename as well as /filename. Again, CMD.EXE or graphical file-choosers can make this look identical to current OS/2.
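If you'd rather see that as code than as shell commands, here's a minimal sketch assuming Linux, root privileges, an ext2 second partition, and an existing /D directory (the device and directory names are just examples):

/* Minimal sketch: the mount and the symbolic link done via syscalls. */
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
    /* make the second partition appear under /D (the /D directory must already exist) */
    if (mount("/dev/hda2", "/D", "ext2", 0, NULL) != 0)
        perror("mount /dev/hda2 on /D");

    /* let the first partition answer to /C as well as / (the /C link must not exist yet) */
    if (symlink("/", "/C") != 0)
        perror("symlink / -> /C");

    return 0;
}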
In short, you can use as many or as few partitions as you want, with as many or as few virtual drives on them as you want.
> Sure, I'm correct about saying that it's not related to the kernel, but what you
> say is resembling more and more a Linux distro with the capability to run OS/2
> apps, not a new OS based on Linux kernel.
Correct. I am describing an OS/2 personality built on top of a captive and more-or-less hidden Linux base. That achieves the objective of an OS/2 clone with OS/2-style reliability and scalability on day one, and a large, committed team of people supporting the parts of it which are unrelated to the OS/2 personality.
Starting from there, if you then think that it's desirable to replace, say, X, then you have the leisure of doing it in parallel with the rest of the OS/2 community being able to run OS/2 stuff.
> And talking about the mess, obviously I'm not talking about window managers.
> What do you say about the lack of a global system clipboard, like in OS/2 and
> Win? Yes, I know that there is some software that tries to address the problem.
Old stuff. GNOME and KDE now share clipboard data. The only things that don't are old native X apps.
And, in point of fact, it's irrelevant to an osFree, since you would base PM and WinOS2 on one or the other of GNOME or KDE, not both, in which case there would never have been a clipboard issue to begin with, since these have always had internal clipboard support.
> What do you say about the lack of global keyboard mappings?
I'm ignorant of this issue. I know that Mandrake has what they call global keyboard mappings, but I don't know if they address the problem to which you refer. If you're referring to shortcut keys which are common across apps, it's the same as the clipboard issue: old X apps rolled their own. Modern apps written to Gtk or KDE or whatever use ones which are common to all apps which use the toolkit.
> What do you say about the lack of a system registry, instead of each application
> trying to solve the problem with its own (often baroque) config files?
I'm not enough of a toolkit maven to swear to you that the newer ones don't have such a registry, but it doesn't really matter. The OS/2 API includes such a registry, so we'd provide one as part of implementing the API. If the toolkit has (or ever gets) one, we'd delegate to that. In any case, OS/2 apps would have one.
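As a rough illustration of the delegation I have in mind (every name below is hypothetical; this is not an existing osFree, OS/2, or toolkit API, just a sketch of the idea):

/* Hypothetical sketch: use a toolkit registry backend if one is registered,
   otherwise fall back to our own INI-style store. */
#include <stdio.h>

typedef int (*reg_write_fn)(const char *app, const char *key, const char *val);

static reg_write_fn toolkit_backend = NULL;   /* set if GNOME/KDE ever provides one */

static int ini_fallback(const char *app, const char *key, const char *val)
{
    FILE *f = fopen("osfree-profile.ini", "a");   /* trivial stand-in store */
    if (f == NULL)
        return 0;
    fprintf(f, "[%s] %s=%s\n", app, key, val);
    fclose(f);
    return 1;
}

/* hypothetical stand-in for the OS/2-style profile-write entry point */
int osfree_profile_write(const char *app, const char *key, const char *val)
{
    reg_write_fn fn = toolkit_backend ? toolkit_backend : ini_fallback;
    return fn(app, key, val);
}

int main(void)
{
    return osfree_profile_write("SomeApp", "WindowPos", "10,10") ? 0 : 1;
}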
> You're obviously ignoring projects aimed at replacing X. Just do a google search
> for "xfree replacement" and you'll find a few, some quite advanced and some just
> "wannabe".
Well, yes, you're correct, I am ignoring them. Because no major Linux distro uses them. I'm sure there are people who dislike X enough to try writing a replacement, and I'm also sure that there are people who just want the experience of writing their own X. More power to them.
But the significant fact to me is that everybody who is actually putting their money on the line with Linux, e.g. IBM, RedHat, Mandrake, etc., seems to be satisfied enough with X and its ongoing progress.
> > But nobody programs to the X API, which is considered very low-level.
>
> See UDE for an example (Unix Desktop Environment).
>
You're correct, my statement was overly broad. Obviously, some people choose to program to X. I should have said that most new graphical apps included in the main distros don't program to X. They use either GNOME or KDE, and each of these will run the other's apps. Of course, UDE isn't an application, it's a Window Manager, and a full-fledged WM (as opposed to a layer on top of another) doesn't have much choice but to program to X.
I find your choice of UDE as an example interesting. Their project description suggests that they reject GTK+ and Qt because of bloat, and find X (or the Xlibs) to be pure, untainted, and worthy of being a base for their new WM. Apparently they don't agree with the folks who are looking to replace X, which just goes to show that there are a lot of opinions out there.
#926 Re: [osFree] Digest Number 203
criguada@libero.it
Dec 23 7:50 AM
Hi Frank,
> sure why you would want to base your opinion about Linux on the opinions
> of people who focus on and contribute to OS/2. Not that those aren't
And I'm absolutely NOT sure why I should want to base my opinion on you. BTW,
there are people in the OS/2 community that are implementing Linux binary
compatibility on top of the OS/2 kernel (the reverse of what you want to do). I
surely trust THEIR opinion, because it is based on real work I can see.
> I don't see how you can discount this. These days, Intel processor
> improvements are done for one reason only: performance. If
These days, Intel processor improvements are done for one reason only:
performance OF MICROSOFT OSes. Which are well known to scale badly. Have you
ever heard about something called "Wintel"?
> As to the Intel patch, either it is based on "escaped" source that
> Serenity probably couldn't distribute legally, or else it is a binary
> hack on the distributed OS/2 kernel. I think it's great that somebody
Oh, that's nice. You know everything, better than Serenity itself. Especially
since the quote you reported was from Kim Cheung.
> If you'd like some more factual data about HT, here's a benchmark
> article done by the IBM Linux Technology Center (same folks who did the
> stress test):
Same folks who are (now) interested in showing how Linux performs well.
> For single-user stuff, HT actually ran a few percent slower in most
> cases (-1%, -3%). The real gains, as you might expect, come with heavy
> multiuser workloads. One such workload was a chat room simulation:
I really don't know why HT should perform better on multiuser workloads than on
single-user. If the kernel scales well on multiple CPUs (which are simulated by
the HT technology) every correctly multithreaded application should gain very
much from an SMP platform. And even when using single-threaded applications,
multitasking with several apps should gain very much from an SMP system. One
doesn't need multiple users to effectively use multiple CPUs.
The fact that IBM is showing you the chat-room simulation and stressing its
gains over the poor gains (or losses) of the other simulations recalls a typical
use of benchmarks: show that my system is great by using the apps that behave
best on it. At one time, there were even some specific changes made to the
microcode of Intel CPUs to make them perform great on some widely used benchmark
simulations.
And BTW, YOU say that there is a huge amount of tinkering in the kernel between
2.4 and 2.6, involving (at least) the multithreading code. So why should I trust
that the results come ONLY from the explicit HT support?
> doing and where to find it". Right or wrong, it's an indication that a
> commercial marketing team with a financial stake in the issue believes
> that kernel recompiles are pretty rare among users as opposed to
I have never said that kernel recompiles are frequent. I only said that I don't
want to have to recompile the kernel, and I suspect most OS/2 users won't. It's
one of the things I don't like about Linux: having to be (or to sometimes
become) a programmer to use what is "marketed" as a desktop OS. OS/2 users want
a desktop OS. I have done it several times under Linux, but I knew I had to
expect it. In OS/2 this is not true, and I wouldn't be pleased if this should
become true.
Also remember that OS/2 users often aren't on the "bleeding edge" of hardware.
> As I said before, if the shell (CMD.EXE) parser wants to, it can accept
> the first form and translate it to the second. In a graphical app like
Oh my. When someone is trying to tell you why he dislikes Linux, you reply with
"it can be made to...", or "it will with the 2.xxx kernel...". It's starting to
resemble M$ a bit too much to me.
Sure, we can choose Linux just to rewrite it completely, wow.
> Correct. I am describing an OS/2 personality built on top of a captive
> and more-or-less hidden Linux base. That achieves the objective of
That's enough, thnx.
> > And talking about the mess, obviously I'm not talking about window
> > managers.
> > What do you say about the lack of a global system clipboard, like in
> > OS/2 and
> > Win? Yes, I know that there is some software that tries to address the
> > problem.
>
>
> Old stuff. GNOME and KDE now share clipboard data. The only things
> that don't are old native X apps.
Exactly. You can layer some kind of "order" on top of the messy system, but what
if I want to use (or need to use) an "old native X app"? And what if I want (or
have) to use a CLI app?
On OS/2 this is no problem because the system is _consistent_ from the bottom
up. If an app is using clipboard services it is using the *global* clipboard
services provided by the OS, not by the WPS or some other layer. It can share
with all the other apps no matter whether it is a CLI app, an old 1.x 16-bit app, or a
modern 32-bit app. Even Odin apps interface with the system clipboard, and Win3.x
too.
> And, in point of fact, it's irrelevant to an osFree, since you would
> base PM and WinOS2 on one or the other of GNOME or KDE, not both, in
> which case there would never have been a clipboard issue to begin with,
> since these have always had internal clipboard support.
I see you don't understand. You always talk about this or that GUI, while I'm
talking about an *OS*.
> > What do you say about the lack of global keyboard mappings?
>
> I'm ignorant of this issue. I know that Mandrake has what they call
> global keyboard mappings, but I don't know if they address the problem
> to which you refer. If you're referring to shortcut keys which are
> common across apps, it's the same as the clipboard issue: old X apps
> rolled their own. Modern apps written to Gtk or KDE or whatever use
> ones which are common to all apps which use the toolkit.
See above.
> But the significant fact to me is that everybody who is actually
> putting their money on the line with Linux, e.g. IBM, RedHat, Mandrake,
> etc., seems to be satisfied enough with X and its ongoing progress.
They actually have no choice but this.
> I find your choice of UDE as an example interesting. Their project
> description suggests that they reject GTK+ and Qt because of bloat, and
> find X (or the Xlibs) to be pure, untainted, and worthy of being a base
> for their new WM. Apparently they don't agree with the folks who are
> looking to replace X, which just goes to show that there are a lot of
> opinions out there.
Nah, you didn't even read their site. Their project description says clearly:
-- quote --
We just use the standard Xlibs (both to keep UDE fast and slim and to avoid
dependencies)
-- quote --
And they state that they want UDE to supplant the GUI (not just the WM) at one
point in time, which suggests that they're unsatisfied with at least a part of X.
Bye
Cris
#927 Re: [osFree] Digest Number 203
Frank Griffin
Dec 23 10:00 AM
Various snippets:
> Oh my. When someone is trying to tell you why he dislikes Linux, you reply with
> "it can be made to...", or "it will with the 2.xxx kernel...". It's starting to
> resemble M$ a bit too much to me.
> Sure, we can choose Linux just to rewrite it completely, wow.
> > Correct. I am describing an OS/2 personality built on top of a captive
> > and more-or-less hidden Linux base. That achieves the objective of
>
> That's enough, thnx.
> Exactly. You can layer some kind of "order" on top of the messy system, but what
> if I want to use (or need to use) an "old native X app"? And what if I want (or
> have) to use a CLI app?
> On OS/2 this is no problem because the system is _consistent_ from the bottom
> up. If an app is using clipboard services it is using the *global* clipboard
> services provided by the OS, not by the WPS or some other layer. It can share
> with all the other apps no matter whether it is a CLI app, an old 1.x 16-bit app, or a
> modern 32-bit app. Even Odin apps interface with the system clipboard, and Win3.x
> too.
> I see you don't understand. You always talk about this or that GUI, while I'm
> talking about an *OS*.
All of this is pretty off-topic. This is not an advocacy list for either Linux or OS/2, and I'm not trying to debate which is better, or to convince you to use Linux. The topic here is how suitable the Linux kernel and certain other parts of Linux distributions are as the base for osFree.
That's why I keep making the point that the shortcomings you mention either no longer exist, can easily be worked around, or don't affect the use of Linux by osFree. For the purposes of the subject at hand, it doesn't really matter which one of these categories applies, as long as one of them does.
> > As to the Intel patch, either it is based on "escaped" source that
> > Serenity probably couldn't distribute legally, or else it is a binary
> > hack on the distributed OS/2 kernel. I think it's great that somebody
>
> Oh, that's nice. You know everything, better than Serenity itself. Especially
> since the quote you reported was from Kim Cheung.
Well, if you actually read the thread, (one of) Kim's comment(s) was:
> I like the idea of having a competition regarding writing proper support for OS2 and HT. But, then again, how many would attend... But, of course everything has it's prize so if we could gather some money together to offer the best code writer for it.. .. if it's possible to add this kind of feature without having to make kernal changes, doubt it.
What he's saying is exactly what I said: if it involves kernel changes, Serenity can't support it.
As regards the whole HT thing, I don't care why IBM is running these tests. All I care about is whether the tests are carried out correctly and what the results are. Contrary to your assertion that IBM is trying to have Linux show up Windows, no Windows system was part of the test. The test simply measured the performance of an SMP Linux system without HT support versus a Linux system with HT support.
If you can find technical fault with the tests, or with my hypothesis that if HT made such a great difference for one well-tuned SMP OS (Linux) then it probably would do so for another (OS/2), please do. Beyond that, I don't really care who the testers are dating or whether their feet smell.
> And BTW, YOU say that there is a huge amount of tinkering in the kernel between
> 2.4 and 2.6, involving (at least) the multithreading code. So why should I trust
> that the results come ONLY from the explicit HT support?
For the purpose of the point I'm making, it doesn't matter. Either it's the result of HT modifications that Serenity can't (and IBM won't) support, or it's the result of other kernel modifications that Serenity can't (and IBM won't) support. Take your pick. I'm easy.
As for UDE, I think you're misreading the English:
> -- quote --
> We just use the standard Xlibs (both to keep UDE fast and slim and to avoid
> dependencies)
> -- quote --
To me, "just" here means that they are using only X APIs, and not GTk or any other toolkit, not that they are using some parts of X and not others. I checked the site again, and I can't find any statements about wanting to replace X in the future. The bit about them wanting their own Look and Feel means that they are not reusing another toolkit's widgets, not that they don't want to use X as the engine.
As I said before, I didn't engage this discussion to argue about OS preferences. I prefer the scientific method, where I give an opinion, and supply research, quotes, articles, or other sources that can be independently checked by other people who read it. I'm not asking you to take my word for anything. If you can refute my hypotheses, I expect you to do so. If you can refute IBM's test results, or other sources I'm quoting, fine. If you want me to support some point with more detailed information, ask for it. But it's useless to just say that I must be wrong because you can't imagine any reason that I'd be right (my words, not yours).
#928 Re: [osFree] Digest Number 203
Lynn H. Maxson
Dec 23 10:22 AM
Cris and Frank,
You're arguing about the implementation of an OS/2
replacement. That makes sense if you have a process in
which implementation precedes (implement first, describe
later) or occurs concurrently with documentation (describe as
you go). In either case if you don't complete implementation,
then you don't complete documentation.
I would argue a disconnection exists between "how" you
implement and "what" gets implemented. I further suggest
that your history with third generation programming
languages in which the "what" gets described in the logic of
the "how" leads to this. Fourth generation programming
languages make a distinct separation "what" occurs and
"how" it occurs. In fourth generation the "what" remains the
programmer's responsibility while the "how" gets turned over
to the software.
We obviously have a difference of opinion about the "how",
while we should have none about the "what" with respect to
an OS/2 replacement. Having a completed "what" first instead
of after or concurrently with implementation offers a better
chance of overall success regardless of different "how" paths
chosen.
As one who believes that specification, the "what"
description, should precede construction, the "how"
description, I favor their clear separation in this effort:
description first, construction second. If we agree on the first,
then we can pursue our separate ways on the second.
In reality the progression of generations in programming
languages from the first (machine or actual) language to the
second (symbolic assembly plus macro) to the third
(imperative HLLs) to the fourth (declarative HLLs) has
occurred to separate the "how" from the "what", leaving it to
the software to determine the "how". This has freed the
programmer to increasingly focus on the "what".
For example, no compiler on evaluating the expression '((a +
b) * (c - d))/ e**f' does so without first translating it into
reverse Polish notation (RPN). The same occurs for i-o
statements, exception or interrupt handling, and even API
invocations. All these have led to less writing on the part of
the programmer and more on the part of the software.
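(To make that concrete, the expression above in RPN reads a b + c d - * e f ** /, a form which the software can evaluate or generate code from with a simple operand stack. The rendering is mine, added for illustration.)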
It takes a while to grasp what the shift from third generation,
imperative languages to fourth generation, declarative
languages or logic programming means in terms of the further
shift in writing responsibility from programmer to software.
For us its practical meaning lies in our ability to engage in
literate programming in which we can associate the informal
language of "what" we want to have happen with it formal
translation: two "what"s instead of a "what" (informal) and
the commitment of an "how" (formal).
I offer the use of a fourth generation specification language,
SL/I, as part of a literate programming basis for documenting an
OS/2 replacement. Granted I have an active interest in seeing
SL/I as a programming language. That may or may not occur
at all or in any reasonable time frame. However, it does
provide a totally logically unambiguous means of formally
specifying an informal requirement. This allows full
participation by everyone on the "what" regardless of any
disagreement thereafter on the "how".
Now I realize the trouble non-English-speaking participants
may have with my writing, as it gives enough trouble to
English-speaking ones. Anytime Frank makes an argument
based on safety in numbers I cringe. To me such reliance
represents a threat to the very soul of open source. I have no
objection to as many people participating as would like to. I have
deep concerns when that becomes a need to do so.
You cannot use the same software tools as IBM, M$, Linux
developers, or any other software vendor without committing
to the same level of human resource requirements. As open
source cannot easily achieve the organizational efficiency of
closed source without adapting it, open source based on
volunteerism has a need for even more human resources.
Frank makes the argument that we shouldn't reinvent the
wheel, that we should take advantage of the development in
Linux and the continuing number of developers engaged in it.
He argues further that we should not ignore "what" Linux
does and "how" it does it in its kernel with respect to "what"
we should have and "how" we should do it in an osFree
kernel.
You do not have to disagree with any of this to agree that we
need to know the "what" of OS/2 in order to evaluate the
usefulness of the "what" of Linux without concern for
duplicating the "how". That, I think, allows us to begin to
establish a tabular means of comparing different kernels
under discussion here.
I can participate in this "what", knowing full well that my
interest in the "how" differs greatly, in terms of software tools,
from Cris's or Frank's. I want a software tool where group
participation is "nice" but not "necessary". That brings any
open source project within the scope of an individual. I will
continue to focus on such a tool, as Frank is currently correct
but, I hope, will in time be wrong.
#929 Re: [osFree] Digest Number 203
Cristiano Guadagnino
Dec 23 3:53 PM
Hi Frank,
I have to say that your use of English is quite strange to me. I'm not a
native speaker, so it's probably my fault.
BTW, this thread has run too long for me, and it's getting quite harsh.
I'm sorry for my part in making it like this. The thread
is closed for me.
Lynn: regarding your approach I am concerned about time. Even if I liked
PL/I or fourth generation languages (which I don't), your time frame is
unknown, and you say that it may not even be "reasonable". I feel that
time is running away at a fast pace for OS/2, that's why I'm concerned
about your approach. What do you think?
For the record: I have been an IBM host programmer for a few years, so I
have a little experience in PL/I (although unfortunately most of the
development was done in Cobol).
I wish you both a merry Christmas with your families.
Bye
Cris
#930 Re: [osFree] Digest Number 203
Lynn H. Maxson
Dec 23 5:15 PM
Cris,
Personally I find Frank's English easier to understand than
mine.
I am working to create an interpreter/compiler for my
specification language, SL/I, such that it is also a programming
language. If you understand the five stage sequence of the
software development process (specification, analysis, design,
construction, and testing), you understand that using any
imperative programming language, whether first, second, or
third generation, requires that each stage incorporates a
manual writing process. Each stage has its own language.
Going from one stage to the next requires a manual
translation process. To synchronize a change in any stage
means manually rippling changes through all stages after it as
well as any that occur before it.
That means you have six sources to synchronize:
requirements, specifications, analysis, design, construction,
and testing. With any imperative language you maintain those
sources manually. You write them manually. You rewrite
them manually.
With a fourth generation language the number of manual
sources drops from six to two (requirements, specification)
and the number of manual stages from five to two or even
one.
Now you haven't seen this with any fourth generation
language like Prolog, due to incomplete implementations.
However, the theory exists in the practice of SQL. In SQL you
write only the specifications, i.e. what you want done and
rules governing its behavior: the specification stage. In turn
the SQL software performs the analysis, design, and
construction, the "how" of the query, corresponding to the
completeness proof of logic programming. If the completeness
proof is "true", the SQL software then performs an exhaustive
true/false proof, the testing stage.
So not liking fourth generation languages means you are
willing to accept the extra manual effort required with
any third generation language, the extra number of sources to maintain,
and the extra number of stages to execute. The choice is yours.
Regardless of that choice, whether you take a third
generation path or I take a fourth generation one, we both
have in common the need for written requirements as input to
the specification stage as well as the resulting specifications.
These themselves take time, the same time for both of us.
By the time we have these I may or may not have an
interpreter/compiler ready. That doesn't prevent you from
pursuing the following stages with current tools. In truth I
don't feel it important relative to progress on this project how
far along or how rapidly I progress on another. I only feel it
important to have assembled a completely detailed set of
requirements and specifications. With that any one or any
group can undertake the implementation in any manner of
their choosing.
I see this documentation as a first step in the journey without
regard to the second and following steps that others may
choose. To do the documentation we need to decide on a
requirements language and a specification language. I've
simply suggested using SL/I as the specification language.
As SL/I uses PL/I syntax rules and basically includes all of PL/I
data types, operators, and statements I don't see it as much
of a learning curve. Certainly it's far easier for the casual,
non-programming reader to acquire than C, C++, or Java.
I don't have the feeling that time is running out on OS/2. In
fact I have the opposite feeling. I think we have some years
of support remaining, if what I gleaned at Warpstock 2004
holds true. We have more than enough time to deal with all
the issues raised in the course of this thread as well as others.
I suggest that we set aside our differences to cooperate
where they don't exist: providing a detailed specification for
an OS/2 replacement.
Re: Part 31
#931 Re: [osFree] Digest Number 203
Expand Messages
criguada@libero.it
Dec 24 2:19 AM
Hi Lynn,
> Now you haven't seen this with any fourth generation
> language like Prolog, due to incomplete implementations.
> However, the theory exists in the practice of SQL. In SQL you
> write only the specifications, i.e. what you want done and
> rules governing its behavior: the specification stage. In turn
> the SQL software performs the analysis, design, and
> construction, the "how" of the query, corresponding to the
> completeness proof of logic programming. If the completeness
> proof is "true", the SQL software then performs an exhaustive
> true/false proof, the testing stage.
SQL is actually the only 4GL with which I have deep experience (on IBM's
DB2), and this is why I'm concerned about using any of them (but I don't know
how well SQL represents the category).
In my experience, you need good training to formulate queries that will
generate well-optimized code. If you don't know exactly how to formulate the
query, the pros and cons of a particular SQL implementation, and a bit of how
SQL internally goes about matching database tables, you'll end up with formally
correct queries that generate very "heavy" code. And you never really know (even
when you're experienced) how optimized the code for your next query will be
until you try it and do a little tweaking. With IBM's DB2 you base your
tweaking on the query "weight" that the SQL engine shows you when it executes
your query, and even that turns out to be wrong quite often (i.e. heavy queries
that perform better than other, lighter ones).
To me, it seems a little too "fuzzy" to base an OS on it, but of course I don't
know about your SL/I and I may be completely wrong.
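To make the concern concrete, here is a hedged sketch (hypothetical tables,
and the exact explain mechanics vary between DB2 platforms): two formulations
that ask for the same rows can be costed and executed quite differently, which
is why one ends up comparing the optimizer's estimates rather than trusting
either formulation on sight.

    -- Formulation 1: the restriction expressed as a subquery.
    SELECT e.name
    FROM   employees e
    WHERE  e.dept_id IN (SELECT d.id FROM departments d WHERE d.region = 'EMEA');

    -- Formulation 2: the same result written as an explicit join.
    SELECT e.name
    FROM   employees e
           JOIN departments d ON d.id = e.dept_id
    WHERE  d.region = 'EMEA';

    -- On DB2 one would typically run each through the explain facility
    -- (e.g. EXPLAIN PLAN FOR <query>) and compare the estimated costs,
    -- keeping in mind, as Cris notes, that the estimate itself can mislead.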
> So not liking fourth generation languages means you are
> willing to accept the extra manual effort required with any
> third generation language, the extra number of sources to
> maintain, and the extra number of stages to execute. The choice is yours.
From my experience in software projects (not open source), some of the steps you
mention, though theoretically required, are in reality completely overlooked,
or performed in an order that isn't the most logical one but that often
simplifies the effort.
> Regardless of that choice, whether you take a third
> generation path or I take a fourth generation one, we both
> have in common the need for written requirements as input to
> the specification stage as well as the resulting specifications.
> These themselves take time, the same time for both of us.
I absolutely agree with you.
> By the time we have these I may or may not have an
> interpreter/compiler ready. That doesn't prevent you from
> pursuing the following stages with current tools. In truth I
This makes me feel better.
My fear was that using SL/I for specification would lock us into using it for
implementation as well.
> I don't have the feeling that time is running out on OS/2. In
> fact I have the opposite feeling. I think we have some years
> of support remaining, if what I gleaned at Warpstock 2004
> holds true. We have more than enough time to deal with all
> the issues raised in the course of this thread as well as others.
I hope you're right. I've learnt the hard way not to trust IBM statements very
much. Changes in management often make true today what was false yesterday,
or the other way around.
> I suggest that we set aside our differences to cooperate
> where they don't exist: providing a detailed specification for
> an OS/2 replacement.
I agree.
Bye
Cris
Re: Part 31
#932 Re: [osFree] Digest Number 203
Expand Messages
Lynn H. Maxson
Dec 24 8:35 AM
"SQL is actually the only 4GL with which I have a deep
experience (on IBM's DB/2), and this is why I'm concerned
about using any of them (but I don't know how well SQL
represents the category).
From my experiences, you need a good training to formulate
queries that will generate the best optimized code. If you
don't know exactly how to formulate the query, the pros and
cons of a particular SQL implementation, and a bit of how
SQL internally works on matching database tables, you'll end
up with formally correct queries that generate very "heavy"
code. ..."
Cris,
I chose the example of SQL to illustrate a 4GL in common use.
In fact it is in common use by people who do not consider
what they do programming or themselves programmers. It
represents a "true" specification language in which the writer
only says "what" he wants in terms of data and conditions,
leaving it up to the software to determine "how".
The "what" lies in the specification. The "how" lies in the
software, which then performs the analysis, design,
construction, and testing automatically. It points out the
difference between an
imperative (first, second, and third generation) language and
a declarative (fourth generation). I could have used any AI,
neural net, or other software based on logic programming. I
chose to use one that ought to convince the most stubborn of
doubters that logic programming works. It's used in practice
by millions daily. Thus it does not only exist in academia or in
some esoteric group somewhere.
Like everything else in logic programming SQL depends upon a
two-stage proof engine: a completeness proof and an
exhaustive true/false proof. The completeness proof engages
in the analysis, design, and construction stages while the
exhaustive true/false proof does the test stage. This leaves
only the manual writing (and rewriting) in the specification
stage.
Fortunately or unfortunately the "S" in SQL stands for
"Structured". In a standard SQL query you have three clauses:
SELECT, FROM, and WHERE. They occur in that order, which
accounts for the "structured". Within the clauses the order is
unimportant to SQL itself, though in the case of the SELECT
clause it matters to the writer because it determines the order
of the output fields. Technically the SELECT, FROM, and WHERE
clauses could appear in any order, e.g. FROM table_names
SELECT field_names WHERE conditions.
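In concrete terms (hypothetical table again): the grammar fixes the clause
order, the SELECT list fixes the order of the output fields, and reordering
the conditions inside the WHERE clause leaves the result unchanged.

    SELECT order_id, amount              -- this order fixes the output fields
    FROM   orders
    WHERE  status = 'OPEN' AND amount > 100;

    -- Logically identical: the WHERE conditions have merely swapped places.
    SELECT order_id, amount
    FROM   orders
    WHERE  amount > 100 AND status = 'OPEN';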
Now optimization is a different issue. You optimize a database
design based on use to minimize physical I/O. That maximises
performance. It works for file as well as database design, for
all types of files (sequential, indexed, and direct) and all
types of databases (relational, hierarchical, and network).
Unfortunately use patterns vary over time, resulting in a need
to physically reorganize the database to optimize
performance. The advantage that relational has over
hierarchical and network is the greater separation of its
logical organization, what the user perceives, from its
physical organization.
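As a small, hypothetical example of optimizing a design around its use: if a
table is read overwhelmingly by customer and date, an index matching that
access pattern keeps the physical I/O per query low; when the usage pattern
drifts, it is the physical design, not the logical one, that has to be
revisited.

    -- Matches the dominant access pattern, so each lookup touches few pages.
    CREATE INDEX idx_orders_cust_date
        ON orders (customer_id, order_date);

    -- A query the index serves well:
    SELECT order_id, amount
    FROM   orders
    WHERE  customer_id = 42
      AND  order_date >= '2003-01-01';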
Now understand that optimizing access to a database differs
from source-to-executable code optimization. The concerns
you express about optimizing the mapping of a logical form
onto a physical form do not exist in code generation. We have
no reason, a fact confirmed in practice, to believe that
declarative languages produce less efficient code than
imperative ones. We have every reason to believe, another fact
confirmed in practice, that declarative languages require less
source code, i.e. less writing, than imperative languages. In
addition we have every reason to believe, yet another fact
confirmed in practice, that declarative languages require less
"rewriting" than imperative ones.
I have no intention of locking anyone or any project other
than my own into using SL/I or dependent upon its
implementation. The request in this thread talked about using
a formal specification language to write the detailed
specifications for an OS/2 replacement. I simply said I have
such a language which you are free to use or not.
I personally believe that once you get some familiarity with it,
with its construction of rules (similar to the WHERE conditions
of SQL) and with their direct association with data variables,
you will quickly come to see the "write once" advantages of
declarative languages over the "write many" of imperative ones.
I follow the general guidelines of "let people do what
software cannot (write specifications) and software what
people need not (the rest)". In that manner we can maximise
the efforts of a few in the process of creating an OS/2
replacement.
Re: Part 31
#933 Re: [osFree] Digest Number 203
Expand Messages
Lynn H. Maxson
Dec 24 1:40 PM
Cris,
I felt the need to also respond to some other comments you
made in your last message. We should add another stage in
front of the software development process (SDP), that of
requirements or requirements gathering. Normally we gather
them into groups or batches before passing them into the
five formal stages of the SDP: specification, analysis, design,
construction, and testing.
Note that normally requirements do not occur in batches.
They also do not occur in logical order. In short they
frequently occur individually and in random order. Due to the
way we have implemented the SDP, the methodology in use, and
the software tools supporting it, immediately transferring an
individual requirement, a change request, into the process in
random order would quickly bring the process to a halt.
We should note, however, that we want to effect a change
request as quickly as possible, immediately if possible,
regardless of the order or timing of their occurrence. That we
cannot currently do so does not represent a failure of the SDP
as sometimes falsely claimed, but a failure of implementation.
Imperative languages (first, second, and third generation)
created the need for the SDP which represents the optimal
logical order of processing. In fact it's more than optimal: it's
absolutely necessary. Any deviation from this order results in
non-optimal effort, i.e. a loss in efficiency or productivity.
That does not mean that deviations do not occur in practice.
They happen all too frequently. You get into analysis only to
discover incomplete or missing specifications. You can make a
similar discovery in design of a failure in analysis or
specification. You can discover in construction a failure in
design related to one in analysis, relating in turn to one in
specification.
These discoveries in later stages of errors in earlier should
mean going back to the earliest stage in which the error was
introduced, correcting it, and then proceeding forward. In
short it should follow Deming's theory of quality control,
which works for software as it does for any other process.
If you don't do this, if you don't reflect the correction in the
earlier stages, then the source for each stage gets out of
sync. That means new change requests coming into the
process will reflect the error condition in going from one stage
to the next.
You seem to indicate that these processes can occur in a
different order. In practice they do. In every instance it leads
to rework of some kind, which may or may not get reflected
back to its source. When source, that is documentation, gets
out of sync, then beyond a loss of efficiency, i.e. productivity,
you also create a conflict situation, which frequently degenerates
into a personnel conflict, with one group distrustful of another
or believing it incompetent.
Now all this goes back to the use of imperative programming
languages. While requirements and specifications can occur in
any order, the manual functions of analysis, design, and
construction demand that the input to code generation have
a logical organization of source code. In imperative languages
that logical organization, globally and locally, is the
programmer's responsibility, by definition a manual operation.
In logic programming, i.e. declarative languages, that logical
organization becomes the software's responsibility as part of
the completeness proof. It follows then that any logical
re-organization due to specification changes remains the
software's responsibility.
Thus declarative languages accept logical or rule-based
source segments in any, i.e. random, order, on which they in
turn impose an optimal logical organization. Thus as quickly as a
change request can be translated into its set of specifications
we can enter it into the SDP. It should be possible in practice
to effect changes in the solution set (the software) more
rapidly than they occur in the problem set (the user world).
In short no persistent or growing backlog should occur.
Therein lies the advantage of using a fourth generation,
declarative specification language which is also a
programming language. An advantage also accrues from using it
even before it becomes a programming language, despite having
to translate it in the meantime into some third generation form
like C or C++.
Re: Part 31
#934 The IT industry is shifting away from Microsoft
Expand Messages
Mark Hagelin
Dec 28 11:49 PM
http://www.theinquirer.net/?article=13350
The IT industry is shifting away from Microsoft
Comment In the beginning there was Microsoft. Then it
exploded
By Charlie Demerjian: Sunday 28 December 2003, 11:31
EVERY SO often, there is a big shift in an industry.
The shifts are not usually visible until long after
they've happened, making you look back and say: "Oh
yeah, things were different back then".
We are experiencing a major IT industry shift right
now, and if you know where to look you can actually
see it as it happens. This shift is all about
Microsoft and open source.
Until very recently, Microsoft owned everything in the
personal computer business, both low and high on the
food chain. The low end was occupied by Palm, the high
end by Sun, IBM and others. In the vast soft middle,
there was Microsoft and only Microsoft.
Everyone who challenged it was bought out, cheated out
of the technology, or generally beaten into the
ground with dirty tricks, by ruthless competition, or
on rare occasions, with a better product. Listing the
failures would consume more column inches than a
person could read in a year.
Netscape, Stac, Wordperfect, Novell, and others are
among the notable casualties. Those that technically
survived are ghosts of their former selves.
Just as the press proclaims the inability of anyone to
challenge the Redmond beast, control is slipping from
Microsoft. As with any company faced with a huge loss
of market share, Microsoft is acting predictably,
pretending it is not happening, and putting on a
smiley face when asked about prospects. On the inside,
Microsoft is as scared as hell.
One of the richest companies on earth, run by one of
the richest people on earth afraid? What can you mean?
Hung, Drawn and Quartered
To put things in perspective, Microsoft has always
performed better each quarter than the one before.
Whenever the financial types settle on quarterly
earnings, Microsoft always manages to pull a few more
cents per share out of their hat, and beat those
earnings. The collective bunch of jackals and worms
that are known as 'Wall Street' sit slack jawed in
amazement, and give half hearted golf claps. Rinse and
repeat every quarter, including the analysts
'amazement'.
How it does this is no trick. It has profit margins on
its two major products of over eighty per cent. The
rest of the products, from handhelds to MSN and the
Xbox are all horrific money losers. Its finances are
so opaque and badly presented, that it can shuffle
money around from one part of the company to another
without anyone noticing. Make too much money one
quarter? Stash it in the closet labeled investments,
or write off some losses. Not making the numbers? Cash
in some assets and make a 'profit'.
Overall, it has been able to show a smooth earnings
curve, and surprise on the upside every time it
reports a quarter. Monopolies and almost no cost other
than R&D to make your physical product have their
advantages.
Corporations cry Linux
About a year ago, things started to change. The cries
that Linux would dethrone Microsoft remained the same,
but there was a shift in the corporate reaction to
those cries. CxOs started to say 'tell me about it'.
In a down economy, free is much cheaper than hundreds
of dollars, and infinitely more attractive. Linux
started gaining ground with real paying customers
using it for real work in the real world, really.
Up until then, Microsoft had simply ignored the
tuxedoed threat. Then it started reacting with the
usual FUD, the Halloween memos, various white papers
and clumsily purchased studies. Somehow, people didn't
buy the fact that $1,000 a head was cheaper than free,
and so Microsoft had to move on to a different tactic.
Since it couldn't buy the company that produced Linux,
the GPL prevented the usual embrace and extend, and
people had simply grown to hate Microsoft for all the
pain they had been caused over the years, the firm
found itself in a bind. How do you compete when all
your dirty tricks are either inapplicable or fail, and
buckets of cash can't buy your way out of the hole you
are in? Simple, you compete on their terms.
Other than in the last six months, when was the last
time Microsoft lowered prices, or gave anything other
than a trivial discount on anything? Yeah, right,
never. Faced with losing the home office market to
OpenOffice/StarOffice, the server side to Linux,
databases to MySQL, and the desktop to Linux in the
not too distant future, what could it do? It targeted
price cuts at those who matter most, the early
adopters and other key segments.
The first of these cuts was aimed at MySQL, with the
developer edition of SQLServer getting the axe to the
tune of about 80 per cent. Then it started a slush
fund to prevent high profile companies and
organizations from giving Linux that all important
mindshare beachhead. Then it came out with a 'student
and teacher' version of Office. Hint to the
readership: if you don't want to pay $500 for Office,
the new version doesn't make you prove you are a student
or a teacher like the last one did. Well, none of these
tactics is working, and one of the reasons it isn't
going as well as Microsoft hoped is its own money
grubbing product activation scheme. Without starting
the old debate about the cost of pirated software, it
is hard to argue against the fact that even with the
numbers it spouts off about piracy, Microsoft still
clears about a billion dollars a quarter or more. If
it wasn't for piracy, the Gates sprouts (little 1.0
and 2.0) could afford to be sent to a good school. Cry
for them. In its wisdom, Microsoft decided to squeeze
the users a little, and to its abject horror it began
to realise that people were willing to take the
slightly less functionality of OpenOffice for the $500
a machine discount. Who would have guessed that
result? See foot, see gun, see gun shoot foot.
The next winning strategy was to circle the wagons,
and lock people in. If you prevent other programs from
working with your software, and make your stuff fairly
cheap, people will flock to it, right? Well, right to
a point, at least until you build up hatred and people
have an alternative.
Licensing 6.0, the new 'rent as you go, but do so at
our sufferance' was the catalyst here. When it
proposed this scheme, people laughed outright. When
Microsoft said do it or pay the retail price, people
blinked, and a few cried monopoly. This is when people
started to take Linux seriously.
Defections, Defections
When Microsoft set a deadline for licensing 6.0,
people balked. Adoption was less than the 100% it was
counting on, so it blinked and extended the deadline
that wasn't capable of being extended. People still
didn't flock to the plan, so Microsoft turned the
screws and, um, blinked again. Once it was clear that
customers weren't viewing 100% plus price increases as
a benefit, and Microsoft was looking weaker and weaker
with each delay, it stopped delaying. Any reasonable
observer would chalk up losing one third of a customer
base, and alienating it at the same time, as an
unmitigated disaster.
Microsoft touted this as a sign that people didn't
truly understand the generosity emanating from
Redmond, so it sweetened the pot by offering tidbits
to the reluctant. That included training and other
things, but no price break. That was the sacred line
that it would never cross. For a bit. People still
didn't flock back, and high profile clients started to
jump ship. What to do, what to do?
The answer was to head off the defections by offering
massive discounts. Send in the big names to woo the
simple. Threaten behind the scenes. Do anything it
takes, and when Microsoft says anything, rest assured
that there are things none of us have thought of
coming into play with the subtlety of a sledgehammer.
The strange thing is that even this didn't work.
People did the math. With expensive lock-ins on one
hand, and cheaper, more interoperable software on the
other, they started choosing the less expensive route.
Imagine that. The high profile defections started
happening with more and more regularity, and Redmond
was almost out of tricks.
Some defections were headed off, like the Thai
government, which pays $36 for Office and Windows XP,
a discount of roughly 95% compared to list.
There are probably other similar deals elsewhere that
we have not heard about. For every one of the
Microsoft victories, there were two or three Linux
wins. Then four or five. Now it is not even a contest.
High profile defections like cities, governments, and,
gasp, IBM, are just the tip of the iceberg, and almost
everyone is looking at the pioneers to see if the
trail they are blazing is worth following.
If it turns out that these first few companies can
make it, expect the floodgates to open, and everyone
to follow. The designed in security flaws, that make
Microsoft software insecurable, are only adding to the
misery. Every day that a company is down due to worms
or viruses, it starts re-evaluating Microsoft
software. When bidding on the next round of contracts,
the memory of all night cleanups tends to weigh
heavily on the minds of many CIOs and CTOs.
The latest quarterly numbers showed something that
hadn't happened before -- flat Microsoft numbers. It
blamed this on large corporations who were skittish in
the wake of the Blaster worm. But if you stop and
think about that, most companies are on Licensing 6.0
or other long term contracts, so the income derived
from them is steady. People who are going to buy
Microsoft products will do so, people who have jumped
have jumped. A large corporation does not delay
purchases like this for a quarter because of a
security breach, they will have their licences run out
from under them, or they will just buy the software as
planned and sit on it if absolutely necessary.
Something does not smell right with this explanation.
If Microsoft can't pull off an upside surprise,
something is very wrong. It is now at the point where
it must beat the street, or the illusion is shattered,
and that has this nasty effect on stock prices. If
Microsoft didn't meet expectations this quarter, it
goes to show that it either couldn't do it, or made a
conscious decision not to.
Running low on Wiggle Room
If Microsoft can't beat the numbers, it shows that it
is running low on wiggle room, the core customers are
negotiating hard, and Microsoft is giving way. Without
billions to throw at money losing products like XBox
and MSN, can these properties survive? If they can't,
that would make a financially healthier Microsoft, but
would it still be Microsoft? Could it offer a complete
end to end solution if it found itself unable to
control the internet? Would it be able to fight the
phone wars without being able to casually sign off on
nine digit losses? How long will the set top box world
take to make money?
The more troubling aspect for the company is if
Microsoft decided to report what is really happening.
Wall Street is in a Microsoft fed la-la land when it
comes to numbers. The stock is absurdly high, and in
return it is expected to do certain things. Once it
stops doing those things, it becomes a lot less
valuable. And when that happens, shareholders and the
Street start asking all those nasty questions that
executives don't want to answer. If the stock
plummets, those options that Microsoft is famous for
as employee incentives become much more expensive, and
morale goes down. In short, things get ugly.
For Microsoft to actively shift the company into this
mode would signal nothing less than a sea change, one
that would bring the company a lot of pain on purpose.
I can't see anyone purposely doing this unless backs
are to the wall and there is no other way out. A much
smarter way would be to ease out of it over the course
of a few years, and change the company slowly. That
way, you could still prep the analyst sheep, and
escape relatively intact.
If I have to guess, I would say that the competition
is starting to force Microsoft into a pricing war, and
any moron can tell you a price war against free is not
a good thing. Don't believe me? Just go ask Netscape.
Oh how the worm turns. But price wars are destructive,
and will sink Microsoft faster than you can say "$50
billion in the bank". Microsoft can afford to cut
prices but after a while those $10 million discounts
start to add up. It just won't work when everyone
knows the simple truth of Linux.
The fact is, if you are negotiating with Microsoft,
and you pull out a SuSE or Redhat box, prices drop 25
per cent from the best deal you could negotiate. Pull
out a detailed ROI (return on investment) study, and
another 25 per cent drops off, miraculously. Want
more? Tell Microsoft the pilot phase of the trials
went exceedingly well, and the Java Desktop from Sun
is looking really spectacular on the Gnome desktop
custom built for your enterprise, while training costs
are almost nil.
It isn't hard to put the boot in to Microsoft again
and again these days -- being a Microsoft rep must be
a tough job. And whatever it does, people are still
jumping ship.
Trusted Computing
The problem is that Microsoft just isn't trusted,
questionable surveys aside. That knowledge is
spreading up the executive ranks. Microsoft has a
habit of promising users things, but not delivering.
Security is a good example. A few years ago, Microsoft
promised to stop coding XP and do a complete security
audit and retraining. Everything would be good after
this, it said, trust us. People did. Blaster, Nachia,
and a host of others illustrate that Microsoft didn't
make anything close to a sincere effort.
So, what comes out of Redmond nowadays? Hot air and
Ballmer dance videos made on Macs. Monkey boy is funny
to watch, but after an all night patching stint with
the CEO yelling at you, it loses its charm. Remember
that same Ballmer who said that Microsoft would not
release a service pack for Win2K because it would not
be released until it was perfect? How about that same
security audit for XP that would erase the chances of
anything like Blaster ever happening? Anyone think the
masses will buy the line for the next release? The
truth is they will, and Microsoft knows it.
The phrase 'it will be fixed in six months, trust us'
seems to have a magic power when emanating from
Microsoft. Every time someone big enough comes to it
with a list of complaints, it announces an initiative,
comes out with a slick Powerpoint presentation, half a
dozen press releases, a Gates speech, and several
shiny things to distract people.
The fact remains that security has been getting worse
every year since Windows 95 was released. One hell of
a track record don't you think? The fact also is that
for the first time, Microsoft revenue is flat, it has
competition, and it publicly blames security woes for
the monetary loss.
The culture at Microsoft, however, prevents change. I
was talking to a high level person in charge of
security at the Intel Developer Forum last fall, and
we chatted about what Microsoft could do to fix
things. He asked the right questions, and I told him
the right answer: trust. Plus, throw everything you
have out and start again. He didn't get it. No, more
than that, he was impervious to the things I was
saying to him, the culture is so ingrained that the
truth can't penetrate it. Microsoft cannot fix the
'bugs' that lead to security problems because they are
not bugs, they are design choices. When faced with
Java, Microsoft reacted with ActiveX. That, it
claimed, could do everything that Java could not,
because Java was in a 'sandbox', and programs could
not get out.
The fact remains that Microsoft's entire
infrastructure is based on fundamentally flawed
designs, not buggy code. These designs can't be
changed.
To change them, Microsoft would have to dump all
existing APIs and break compatibility with everything
up till now. If Microsoft does do this, it will have
the opportunity to fix the designs that plague its
product lineup.
I doubt it will. Even .Net, the new secure
infrastructure, and built with security in mind, lets
you have access to the 'old ways'. Yes, you are not
supposed to, but people somehow do, and hackers will.
Microsoft and its customers are addicted to backwards
compatibility in a way that makes a heroin addict look
silly.
And if Microsoft does change its ways, what incentive
will you have to stick with Microsoft? If you have to
start over from scratch to build your app in this new,
secure Microsoft environment, will you pay the
hundreds or thousands of dollars to go the Microsoft
route, or the $0 to go with Linux?
Starting from Scratch
Starting over from scratch nullifies the one advantage
that Microsoft has, complete code and a trained staff.
Migration and retraining features prominently in most
Microsoft white papers, and if it has to throw all
that away, what chance does it have?
In light of the won't do and can't do, Microsoft sits
there, and watches its market share begin to erode.
That's happening slowly at first, but the snowball is
rolling. A few people are starting to look up the hill
and notice this big thing barreling down at them, and
some are bright enough to step out of the way.
The big industry change is happening, and we are at
the inflection point. Watch closely people, and
carefully read each and every press release. If you
can see the big picture, this is one shift that won't
be a surprise in hindsight.
Expand Messages
Mark Hagelin
Dec 28 11:49 PM
http://www.theinquirer.net/?article=13350
The IT industry is shifting away from Microsoft
Comment In the beginning there was Microsoft. Then it
exploded
By Charlie Demerjian: Sunday 28 December 2003, 11:31
EVERY SO often, there is a big shift in an industry.
The shifts are not usually visible until long after
they've happened, making you look back and say: "Oh
yeah, things were different back then".
We are experiencing a major IT industry shift right
now, and if you know where to look you can actually
see it as it happens. This shift is all about
Microsoft and open source.
Until very recently, Microsoft owned everything in the
personal computer business, both low and high on the
food chain. The low end was occupied by Palm, the high
end by Sun, IBM and others. In the vast soft middle,
there was Microsoft and only Microsoft.
Everyone who challenged it was bought out, cheated out
of the technology , or generally beaten into the
ground with dirty tricks, by ruthless competition, or
on rare occasions, with a better product. Listing the
failures would consume more column inches than a
person could read in a year.
Netscape, Stac, Wordperfect, Novell, and others are
among the notable casualties. Those that technically
survived are ghosts of their former selves.
Just as the press proclaims the inability of anyone to
challenge the Redmond beast, control is slipping from
Microsoft. As with any company faced with a huge loss
of market share, Microsoft is acting predictably,
pretending it is not happening, and putting on a
smiley face when asked about prospects. On the inside,
Microsoft is as scared as hell.
One of the richest companies on earth, run by one of
the richest people on earth afraid? What can you mean?
Hung, Drawn and Quartered
To put things in perspective, Microsoft has always
performed better each quarter than the one before.
Whenever the financial types settle on quarterly
earnings, Microsoft always manages to pull a few more
cents per share out of their hat, and beat those
earnings. The collective bunch of jackals and worms
that are known as 'Wall Street' sit slack jawed in
amazement, and give half hearted golf claps. Rinse and
repeat every quarter, including the analysts
'amazement'.
How it does this is no trick. It has profit margins on
its two major products of over eighty per cent. The
rest of the products, from handhelds to MSN and the
Xbox are all horrific money losers. Its finances are
so opaque and badly presented, that it can shuffle
money around from one part of the company to another
without anyone noticing. Make too much money one
quarter? Stash it in the closet labeled investments,
or write off some losses. Not making the numbers? Cash
in some assets and make a 'profit'.
Overall, it has been able to show a smooth earnings
curve, and surprise on the upside every time it
reports a quarter? Monopolies and almost no cost to
make your physical product other than R&D has itss
advantages.
Corporations cry Linux
About a year ago, things started to change. The cries
that Linux would dethrone Microsoft remained the same,
but there was a shift in the corporate reaction to
those cries. CxOs started to say 'tell me about it'.
In a down economy, free is much cheaper than hundreds
of dollars, and infinitely more attractive. Linux
started gaining ground with real paying customers
using it for real work in the real world, really.
Up until then, Microsoft had simply ignored the
tuxedoed threat. Then it started reacting with the
usual FUD, the Halloween memos, various white papers
and clumsily purchased studies. Somehow, people didn't
buy the fact that $1,000 a head was cheaper than free,
and so Microsoft had to move on to a different tactic.
Since it couldn't buy the company that produced Linux,
the GPL prevented the usual embrace and extend, and
people had simply grown to hate Microsoft for all the
pain they had been caused over the years, the firm
found itself in a bind. How do you compete when all
your dirty tricks are either inapplicable or fail, and
buckets of cash can't buy your way out of the hole you
are in? Simple, you compete on their terms.
Other than in the last six months, when was the last
time Microsoft lowered prices, or gave anything other
than a trivial discount on anything? Yeah, right,
never. Faced with losing the home office market to
OpenOffice/StarOffice, the server side to Linux,
databases to MySQL, and the desktop to Linux in the
not too distant future, what could it do? It targeted
price cuts at those who matter most, the early
adopters and other key segments.
The first of these cuts was aimed at MySQL, with the
developer edition of SQLServer getting the axe to the
tune of about 80 per cent. Then it started a slush
fund to prevent high profile companies and
organizations from giving Linux that all important
mindshare beachhead. Then it came out with a 'student
and teacher' version of Office. Hint to the
readership, if you don't want to pay $500 for office,
the new version doesn't make you prove you a student
or a teacher like the last one. Well, none of these
tactics is working, and one of the reasons it isn't
going as well as Microsoft hoped is its own money
grubbing product activation scheme. Without starting
the old debate about the cost of pirated software, it
is hard to argue against the fact that even with the
numbers it spouts off about piracy, Microsoft still
clears about a billion dollars a quarter or more. If
it wasn't for piracy, the Gates sprouts (little 1.0
and 2.0) could afford to be sent to a good school. Cry
for them. In its wisdom, Microsoft decided to squeeze
the users a little, and to its abject horror it began
to realise that people were willing to take the
slightly less functionality of OpenOffice for the $500
a machine discount. Who would have guessed that
result? See foot, see gun, see gun shoot foot.
The next winning strategy was to circle the wagons,
and lock people in. If you prevent other programs from
working with your software, and make your stuff fairly
cheap, people will flock to it, right? Well, right to
a point, at least until you build up hatred and people
have an alternative.
Licensing 6.0, the new 'rent as you go, but do so at
our sufferance' was the catalyst here. When it
proposed this scheme, people laughed outright. When
Microsoft said do it or pay the retail price, people
blinked, and a few cried monopoly. This is when people
started to take Linux seriously.
Defections, Defections
When Microsoft set a deadline for licensing 6.0,
people balked. Adoption was less than the 100% it was
counting on, so it blinked and extended the deadline
that wasn't capable of being extended. People still
didn't flock to the plan, so Microsoft turned the
screws and, um, blinked again. Once it was clear that
customers weren't viewing 100% plus price increases as
a benefit, and Microsoft was looking weaker and weaker
with each delay, it stopped delaying. Any reasonable
observer would chalk up losing one third of a customer
base, and alienating it at the same time, as an
unmitigated disaster.
Microsoft touted this as a sign that people didn't
truly understand the generosity emanating from
Redmond, so it sweetened the pot by offering tidbits
to the reluctant. That included training and other
things, but no price break. That was the sacred line
that it would never cross. For a bit. People still
didn't flock back, and high profile clients started to
jump ship. What to do, what to do?
The answer was to head off the defections by offering
massive discounts. Send in the big names to woo the
simple. Threaten behind the scenes. Do anything it
takes, and when Microsoft says anything, rest assured
that there are things none of us have thought of
coming into play with the subtlety of a sledgehammer.
The strange thing is that even this didn't work.
People did the math. With expensive lock-ins on one
hand, and cheaper, more interoperable software on the
other, they started choosing the less expensive route.
Imagine that. The high profile defections started
happening with more and more regularity, and Redmond
was almost out of tricks.
Some defections were headed off, like the Thai
government, which pays $36 for Office and Windows XP
comes with a 95% discount if you compare it to list.
There are probably other similar deals elsewhere that
we have not heard about. For every one of the
Microsoft victories, there were two or three Linux
wins. Then four or five. Now it is not even a contest.
High profile defections like cities, governments, and,
gasp, IBM, are just the tip of the iceberg, and almost
everyone is looking at the pioneers to see if the
trail they are blazing is worth following.
If it turns out that these first few companies can
make it, expect the floodgates to open, and everyone
to follow. The designed in security flaws, that make
Microsoft software insecurable, are only adding to the
misery. Every day that a company is down due to worms
or viruses, it starts re-evaluating Microsoft
software. When bidding on the next round of contracts,
the memory of all night cleanups tends to weigh
heavily on the minds of many CIOs and CTOs.
The latest quarterly numbers showed something that
hadn't happened before -- flat Microsoft numbers. It
blamed this on large corporations who were skittish in
the wake of the Blaster worm. But if you stop and
think about that, most companies are on Licensing 6.0
or other long term contracts, so the income derived
from them is steady. People who are going to buy
Microsoft products will do so, people who have jumped
have jumped. A large corporation does not delay
purchases like this for a quarter because of a
security breach, they will have their licences run out
from under them, or they will just buy the software as
planned and sit on it if absolutely necessary.
Something does not smell right with this explanation.
If Microsoft can't pull off an upside surprise, something is very wrong. It is now at the point where it must beat the Street, or the illusion is shattered, and that has a nasty effect on stock prices. If Microsoft didn't meet expectations this quarter, it either couldn't do it, or made a conscious decision not to.
Running Low on Wiggle Room
If Microsoft can't beat the numbers, it shows that it is running low on wiggle room, that its core customers are negotiating hard, and that Microsoft is giving way. Without billions to throw at money-losing products like the Xbox and MSN, can those properties survive? If they can't, the result would be a financially healthier Microsoft, but would it still be Microsoft? Could it offer a complete end-to-end solution if it found itself unable to control the internet? Could it fight the phone wars without being able to casually sign off on nine-digit losses? How long would the set-top box business take to make money?
The more troubling prospect for the company is what happens if Microsoft decides to report what is really going on. Wall Street is in a Microsoft-fed la-la land when it comes to the numbers. The stock is absurdly high, and in return the company is expected to keep delivering certain things. Once it stops delivering them, the stock becomes a lot less valuable. And when that happens, shareholders and the Street start asking all those nasty questions that executives don't want to answer. If the stock plummets, the options that Microsoft famously uses as employee incentives end up underwater, and morale goes down. In short, things get ugly.
For Microsoft to actively shift itself into this mode would signal nothing less than a sea change, one that would bring the company a lot of pain on purpose. I can't see anyone doing that deliberately unless backs are to the wall and there is no other way out. A much smarter way would be to ease out of it over the course of a few years and change the company slowly. That way, you could still prep the analyst sheep, and escape relatively intact.
If I had to guess, I would say that the competition is starting to force Microsoft into a price war, and any moron can tell you that a price war against free is not a good thing. Don't believe me? Just go ask Netscape. Oh, how the worm turns. Price wars are destructive, and will sink Microsoft faster than you can say "$50 billion in the bank". Microsoft can afford to cut prices, but after a while those $10 million discounts start to add up, and the tactic just won't work when everyone knows the simple truth about Linux.
The fact is, if you are negotiating with Microsoft and you pull out a SuSE or Red Hat box, prices drop 25 per cent from the best deal you could otherwise negotiate. Pull out a detailed ROI (return on investment) study, and another 25 per cent drops off, miraculously. Want more? Tell Microsoft the pilot phase of the trials went exceedingly well, and that the Java Desktop from Sun is looking really spectacular on the Gnome desktop custom-built for your enterprise, while training costs are almost nil.
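To put rough numbers on those stacked discounts: assume a hypothetical deal where the best price you could otherwise negotiate is $1,000,000, and assume the two 25 per cent cuts compound rather than simply add (both assumptions are mine, not figures from any real negotiation):

\[
\$1{,}000{,}000 \times 0.75 \times 0.75 = \$562{,}500
\]

That is roughly 44 per cent off Microsoft's best offer before you have even mentioned Sun or the pilot results.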
It isn't hard to put the boot into Microsoft again and again these days; being a Microsoft rep must be a tough job. And whatever the company does, people are still jumping ship.
Trusted Computing
The problem is that Microsoft simply isn't trusted, questionable surveys aside, and that knowledge is spreading up the executive ranks. Microsoft has a habit of promising users things and not delivering. Security is a good example. A few years ago, Microsoft promised to halt coding on XP and do a complete security audit and retraining. Everything would be good after this, it said, trust us. People did. Blaster, Nachi and a host of others illustrate that Microsoft didn't make anything close to a sincere effort.
So what comes out of Redmond nowadays? Hot air, and Ballmer dance videos made on Macs. Monkey boy is funny to watch, but after an all-night patching stint with the CEO yelling at you, it loses its charm. Remember the same Ballmer who said Win2K would not need a service pack because it would not be released until it was perfect? How about that security audit for XP that was going to erase the chances of anything like Blaster ever happening? Does anyone think the masses will buy the same line for the next release? The truth is they will, and Microsoft knows it.
The phrase 'it will be fixed in six months, trust us' seems to have a magic power when it emanates from Microsoft. Every time someone big enough comes to it with a list of complaints, it announces an initiative and rolls out a slick PowerPoint presentation, half a dozen press releases, a Gates speech, and several shiny things to distract people.
The fact remains that security has been getting worse every year since Windows 95 was released. One hell of a track record, don't you think? The fact also remains that, for the first time, Microsoft's revenue is flat, it has competition, and it is publicly blaming security woes for the shortfall.
The culture at Microsoft, however, prevents change. I was talking to a high-level person in charge of security at the Intel Developer Forum last fall, and we chatted about what Microsoft could do to fix things. He asked the right questions, and I gave him the right answer: trust. Plus, throw everything you have out and start again. He didn't get it. No, more than that, he was impervious to what I was saying; the culture is so ingrained that the truth can't penetrate it. Microsoft cannot fix the 'bugs' that lead to security problems, because they are not bugs, they are design choices. When faced with Java, Microsoft reacted with ActiveX, which, it claimed, could do everything that Java could not, because Java was in a 'sandbox' and programs could not get out.
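For anyone who never saw the applet era, here is a minimal sketch of what that sandbox meant in practice. It assumes the old Java applet security model, in which a SecurityManager installed by the browser vetoes local file access; the file name is invented for illustration, and a plain desktop JVM today normally has no such manager installed, so the access simply goes through (or fails because the file isn't there).

    import java.io.FileReader;

    // Tries to read a local file the way an untrusted applet might have.
    // Under a browser-installed SecurityManager the FileReader constructor
    // throws a SecurityException before the disk is touched; run as an
    // ordinary desktop program, nothing stands in the way.
    public class SandboxDemo {
        public static void main(String[] args) {
            try {
                new FileReader("C:\\secrets.txt").close();
                System.out.println("No sandbox in effect: access allowed");
            } catch (SecurityException e) {
                System.out.println("Sandbox said no: " + e);
            } catch (Exception e) {
                System.out.println("Access failed for another reason: " + e);
            }
        }
    }

An ActiveX control, by contrast, was native code running with the full privileges of the logged-in user, which is exactly the design choice being criticised here.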
The fact remains that Microsoft's entire infrastructure is based on fundamentally flawed designs, not buggy code, and those designs cannot be changed without dumping every existing API and breaking compatibility with everything shipped up to now. If Microsoft ever does that, it will have the opportunity to fix the designs that plague its product lineup.
I doubt it will. Even .NET, the new infrastructure built with security in mind, lets you get at the 'old ways'. Yes, you are not supposed to, but people somehow do, and hackers certainly will. Microsoft and its customers are addicted to backwards compatibility in a way that makes a heroin addict look silly.
And if Microsoft does change its ways, what incentive will you have to stick with it? If you have to start over from scratch to build your app in this new, secure Microsoft environment, will you pay the hundreds or thousands of dollars to go the Microsoft route, or the $0 to go with Linux?
Starting from Scratch
Starting over from scratch nullifies the one advantage Microsoft has: existing code and a trained staff. Migration and retraining costs feature prominently in most Microsoft white papers, and if it has to throw all that away, what chance does it have?
Caught between what it won't do and what it can't do, Microsoft sits there and watches its market share begin to erode. It is happening slowly at first, but the snowball is rolling. A few people are starting to look up the hill and notice this big thing barreling down at them, and some are bright enough to step out of the way.
The big industry change is happening, and we are at the inflection point. Watch closely, people, and read each and every press release carefully. If you can see the big picture, this is one shift that won't be a surprise in hindsight.