#1048 Re: [osFree] Programming: Part 2
Lynn H. Maxson
Jul 26, 2004
I've dealt with Michal Necasek before. I respected him then
and I do now. I thank John Baker for beating me to the punch
relative to Multics. More to the point IBM itself uses an
"enhanced" PL/I, known internally as PL/S, for writing its
operating systems on different platforms: MVS, DOS/VSE, VM,
OS/400, and AIX. I regret that Michal will not stick around.
On the other hand I'm absolutely delighted that Frank Griffin
has chimed in. Now the three of us, John, Frank, and I have
backgrounds in IBM mainframe systems. That means we are
used to languages, from symbolic assembler on up, that work
quite nicely with fixed- and variable-length bit and character
strings without null termination, as well as with the
variable-length, null-terminated character strings of C. Of
course, the reverse is not true for C, which insists on separate
data types for a single character and a multi-character string,
a distinction PL/I does not impose.
In reading Frank's remarks I could hardly suppress a smile.
When I got my OS/2 PL/I compiler I had to deliberately sit
down to learn how to do the source-level debugging support
that IBM offered in its family of VisualAge compilers. My first
experience with PL/I came on an IBM S/360 operating system,
OS/MFT (Multiprogramming with a Fixed number of Tasks) in
1965, fully 7 years before
the general availability of C and while Kernighan was writing
in PL/I on the Multics system, a cooperative venture between
GE, MIT, and AT&T. Then as now, when you had a program
failure PL/I told you the statement that failed, why it failed,
and the hierarchical inter-module flow with
corresponding statement numbers. In over 30 years of PL/I
programming I never had a need for source level debugging.
Moreover, contrary to the text-based, string i-o of C, PL/I
supports two forms of i-o processing, record-oriented and
stream oriented. Stream-oriented offers three different
forms: data-oriented, list-oriented, and edit-oriented
(corresponding closest to the string i-o of C). The beauty of
data-oriented is that it outputs the name of each data
element as well as its value. So if you wanted a trace of the
program state at any point in its execution, all you had to do
was to include a 'put data;' statement at that point in the
program. If you didn't want all the values of all variables
known to a procedure, you could simply write 'put data(var1,
var2, ...)'. PL/I would output the name of each variable
followed by its value. I won't bother to explain how 'get data'
works to accept a set of paired data names and values on
input.
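For anyone who has never seen PL/I stream i-o, here is a
minimal sketch of what 'put data' buys you (the variable
names and values are purely illustrative, and the exact output
formatting varies a little by compiler):

   trace: procedure options(main);
      declare (var1, var2) fixed binary(31) init(0);
      var1 = 10;
      var2 = var1 * 3;
      put data(var1, var2);  /* writes something like: VAR1= 10  VAR2= 30; */
      put data;              /* no list: dumps every variable known here   */
   end trace;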
In fact the exception handling of PL/I is superb. Frank hints at
its emphasis, actually that of the IBM user groups SHARE and
GUIDE, that the programmer sets the tone and the
implementer's responsibility is to conform.
Every error condition has an individual "ON error_type" unit.
Or you can catch any condition not otherwise handled with a
general "ON ERROR" unit. Riding above them all is the system
action, into which PL/I allows entry just prior to program
termination and the return of control to the operating system.
In effect, if a PL/I programmer so chooses he can prevent an
application program from terminating abnormally. This ability
is indispensable in the writing of "self-correcting" programs.
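As a minimal sketch of what that looks like in practice (the
condition names are standard PL/I; the recovery logic is purely
illustrative):

   resilient: procedure options(main);
      declare (x, y, z) fixed binary(31) init(0);

      on zerodivide begin;        /* handler for one specific condition  */
         put skip list('divide by zero trapped');
         goto recover;            /* transfer out of the on-unit         */
      end;

      on error begin;             /* catches anything not handled above, */
         put skip data;           /* dumps the program state,            */
         goto recover;            /* and heads off abnormal termination  */
      end;

      x = 10;
      z = x / y;                  /* y is 0, so ZERODIVIDE is raised     */

   recover:
      z = 0;
      put skip data(z);           /* the program ends normally           */
   end resilient;

The GOTO out of the on-unit is the usual way a PL/I program
keeps itself alive after an error it has decided it can live with.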
I didn't mention that the programmer could dynamically define
error conditions of his own and with the 'SIGNAL' statement
invoke them. The list of things that PL/I supports that C, C++,
and their kind do not could go on and on. The fact is that
none of them come up to the level PL/I had reached by 1970.
They don't have all the machine data types, which, contrary to
Michal's assertion, PL/I does: the operands. They do not
"natively" support aggregate operands, the ability to add,
subtract, and, or, multiply, etc., whole structures and arrays.
They don't have all the operators.
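A minimal sketch of what native aggregate operands look like
in PL/I (the names and values are purely illustrative):

   aggregates: procedure options(main);
      declare (a(5), b(5), c(5)) fixed binary(31);
      declare 1 p1,  2 x fixed binary(31),  2 y fixed binary(31);
      declare 1 p2 like p1;

      a = 1;                  /* assigns 1 to every element of a      */
      b = 2;
      c = a + b * 3;          /* element-by-element array arithmetic  */
      p1.x = 4;  p1.y = 5;
      p2 = p1;                /* whole-structure assignment           */
      put skip data(c, p2);   /* e.g. C(1)= 7 ... P2.X= 4  P2.Y= 5;   */
   end aggregates;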
You, and specifically you Michal, don't have to take my word
for it. You can go online to the Intel website and download
the Pentium Instruction Set Reference Manual. When you do
you will discover that each instruction is given in three ways,
corresponding to the first (actual), second (symbolic
assembly), and third (HLL) generation languages. If you look
at that HLL, you will discover that it closely resembles PL/I or
as Intel calls it "PL/M".
That says that if you have a PL/M-capable language then, due
to the 1:1:1 relationship among the first, second, and third
generation forms, you never have to resort to assembly
language as a "separate" language: it is included within the HLL.
Now PL/E (SL/I) takes advantage of this by defining its two
lowest levels as machine-dependent specifications: the
lowest level for RISC architectures and the level above it for
CISC architectures. The level above the CISC represents the
lowest machine-independent specification level. That says
that the level above the CISC decomposes into CISC instructions
(HLL form) for a specific hardware platform, which in turn
decomposes (if present) into a RISC instruction set (again in
HLL form) for the same or different hardware platform.
Thus it is possible with PL/E to have specifications for multiple
RISC and CISC machines and within a single unit of work
generate code for multiple machines.
Now while I am on this soapbox, let me discuss another
absurdity which jacks up the cost and time to produce
software. That's the current compiler, not language,
restriction that allows compilation of only one external
procedure at a time or one executable module.
We are talking about creating an OS/2 replacement, an
operating system. The keywords here (and by the way
keywords are not reserved words in PL/I as they are in C) are
"an" and "system", implying oneness or a unity. Yet we than
any operating system consists of multiple "programs": .exe,
.sys, .dll, etc.. No available software tool will allow you to
compile and execute them as a single unit of work.
Now let's understand that however .exe, .sys, .dll, etc. differ in
execution, we expect them to perform "seamlessly". In
execution these identities disappear. Looking at what Frank
Griffin refers to as "introspection", in logic programming this
occurs through a process of "backtracking", in which the
system tells you not only where a failure occurred, but also
leads you through all the logic that got you there.
You see, PL/I is not "perfect", only more so than any other
programming language, bar none. In fact, if you throw them
all together, that remains true. It's the closest thing to
perfection ever attempted in a programming language...and it
occurred prior to 1970. PL/I supports list processing, but does
not have a "native" list aggregate, only a programmer-defined
and -maintained one. PL/I does not have the 'range' attribute,
which allows encoding the data rules within the data
definition. The 'range' attribute allows all of the data
integrity rules of relational databases and more.
While that means we could write an OS/2 replacement system
faster, better, and cheaper with PL/I than C or C++, it still
would exceed the resources available to us to succeed,
specifically to maintain over the long run. I don't recommend
use of any third generation language. It's the problem, not
the solution.
I'm really not here to push PL/E (SL/I) except as illustrative of
what a properly constructed declarative (fourth generation)
language offers along with logic programming.
As a parting shot I have to differ with Frank's "There's
another facet of Lynn's argument that bothers me more, and
that's the "let the compiler catch everything" mentality. That
is very workable for applications programming, but it falls
apart if the language in question is to be used to produce
server or middleware code." First off, to err, even to forget,
is human. To have an "assistant" that does neither helps the
human overcome a "known" defect.
Beyond that, this "assistant" normally operates in an
interpretive mode with an option for compiled output. It
allows the input of all specifications from application to
middleware to server as a single unit of work. It then supports
a level of global checking with all of logic programming's
benefits, e.g. backtracking on error exceptions. It allows, for
example, all the specifications for an application system, its
on-demand, daily, weekly, monthly, quarterly, semi-annual,
and annual programs, and all its report-generating programs,
to be entered and executed as a single unit of work. That says
that a single
programmer can introduce changes to any subset of
specifications that apply globally to multiple modules (or
programs) and synchronize those changes, guarantee their
concurrent implementation, in a single session as a single unit
of work.
You can't do this currently, so you haven't had time to think
about what it means in terms of resource reduction, people,
costs, and time. I will give you time to think about this while
we return to the process we adopt to specify the OS/2 APIs.
Part 35 - Jul 23 2004
Re: [osFree] Methodology
#1049 Re: [osFree] Methodology
Frank Griffin
Jul 27, 2004
John P Baker wrote:
>
>
>
> That is not to say that I advocate C++, C#, or Java. I find the syntax of C++ to be abhorrent, and all three languages suffer from extreme code bloat.
I'm not advocating any of those three either, but you need to realize that function is function, and if a language offers a function which is not available in the native hardware, then that function is being simulated by a library. Whether the library comes with the compiler or is a third-party add-on is immaterial.
>
>
>
> Of course, those of us who grew up programming in machine language consider all high-level languages to be machine hogs (and yes, for the purposes of discussing pork, C is a high-level language).
Not necessarily. You may be thinking of early mainframe C compilers, whose code generation was horrible. Much money and talent has been thrown at C since then, especially on Intel boxen. C code generation these days is within 90% of what you could hand-code.
You may be thinking that every OS is like MVS, where you have many ways to do any one thing, and the closer you get to the bare metal the faster it runs. For good or ill, most other Unix-like OS's aren't like that. There is no APF authorization. You either write kernel code, device driver code, or user code. In most cases you have only one choice about how to do something, and writing in assembler doesn't give you additional APIs unavailable to the HLL crowd.
>
>
>
> So, I would argue that control blocks should reside in protected storage, and should be accessible to the application program only by a “handle”. I am not at this point going to specify the representation of a “handle”. It is not really relevant to the API.
This was discussed to death in the FreeOS group previously. It is basically the same issue as running a microkernel in protected space and communicating between OS segments via message-passing rather than direct memory reference. It comes down to a performance versus security issue. Intel machines can enforce this with segment registers, but current OS's don't use the capability because of the high cost of modifying those registers.
As with MVS, you probably want a hybrid approach, where trusted code has the direct reference option, but user code goes through something like the MVS subsystem API.
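Purely as an illustration of the handle idea John raises (the
names and the validation scheme here are made up, and PL/I is
used only because it is the other language under discussion),
a sketch of a control block exposed only through a handle:

   handles: procedure options(main);
      /* control blocks live where only trusted code addresses them     */
      dcl 1 tcb(64),
            2 in_use   bit(1),
            2 state    fixed bin(31),
            2 priority fixed bin(31);
      dcl (rc, s, p) fixed bin(31);

      in_use = '0'b;                     /* mark every slot free         */
      in_use(7) = '1'b;  state(7) = 2;  priority(7) = 31;

      rc = query_task(7, s, p);          /* the caller holds only a handle */
      put skip data(rc, s, p);

      query_task: procedure(h, s_out, p_out) returns(fixed bin(31));
         dcl (h, s_out, p_out) fixed bin(31);
         if h < lbound(tcb,1) | h > hbound(tcb,1) then return(1);
         if ^in_use(h) then return(1);   /* stale or bogus handle        */
         s_out = state(h);
         p_out = priority(h);
         return(0);                      /* caller never sees the block  */
      end query_task;
   end handles;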
Re: [osFree] Methodology
#1050 Re: [osFree] Methodology
Daniel Caetano
Jul 27, 2004
On Mon, 26 Jul 2004 22:51:31 -0400, John P Baker wrote:
Hi John!
>First, though I am primarily an IBM mainframe assembler programmer, I have
>been programming in PL/I for over 30 years. PL/I is great!
I do not think otherwise! Every language is the best in its own
camp. And in the Operating Systems camp - AFAIK - the king is C. At
least for the "lower level parts". We must use assembly as well in
several parts, since no mid- or high-level language allows us to deal
directly with segments, memory protection and so on.
>I would argue that such mappings are not required at this point. I would
>also argue that some type of object-oriented methodology may be in order.
At the API level, I think so.
>That is not to say that I advocate C++, C#, or Java. I find the syntax of
>C++ to be abhorrent, and all three languages suffer from extreme code bloat.
Well... I discovered with some tests that I could make C++ as fast
as C in some cases, after I understood how the hell the compiler generates
the code. Of course this will not always work. But for APIs maybe this is
not the real point. (^=
>Of course, those of us who grew up programming in machine language consider
>all high-level languages to be machine hogs (and yes, for the purposes of
>discussing pork, C is a high-level language).
Well... I do like assembly (not x86, anyway). But I don't think C is too
bloated. I usually program C/C++ using Watcom and the profiler gives me
a comfortable measure of the "slowest" functions, where I can inspect the
assembly code and optimize the C code so that the assembly part comes out
as well optimized as possible.
I really like this. Optimization at the assembly level, but keeping the
C portability. Many people call me insane, but I really think it's best
we develop something good or do nothing at all. If the desire is to have
a bloated operating system, then it's easier and faster to get a copy of
Windows XP at the next shop.
[About API Description]
(...)
>Every API function can be specified in this manner. It is not necessary to
>map data types to specific machine representation at this point.
I couldn't agree more. That is: specify the workings (algorithm) of the
API, not the header file. (^=
>We need to consider what level of compatibility should be maintained. Do we
>wish to maintain binary compatibility? If so, that imposes a number of
>constraints. Do we wish to maintain source compatibility? If so, that
>imposes a slightly different set of constraints. Do we merely wish to
>maintain perceptual compatibility (in simple terms, command syntax and
>user-interface compatibility)? If so, that imposes the least constraints on
>our creativity.
Well, here starts a new "problem". (^= IBM always thought in terms of
source compatibility (look at OS/2 for PowerPC). It's an entirely new
implementation - its internals had nothing to do with x86 OS/2 - and it
works well in terms of compatibility (well, this is the idea I got from
the IBM books that talk about OS/2 for PPC...)
Anyway, I think maybe it's wise to support both modes. Binary and Source
compatibility (binary as some "legacy" mode and source as "new world" mode).
This would make possible the creation of multiple input queues (I had heard
many times that IBM did not implement this because it would break every single
existing binary), like they had done in OS/2 for PowerPC... (or WorkplaceOS,
or whatever). But this is a detail inside the detail at this stage.
But I think this should be discussed in the near future, after the definition of
how the main API functions will work, as well as the internal workings of the
task scheduler, memory management, etc.
[]'s
Daniel Caetano
daniel@...
http://www.caetano.eng.br/
Re: [osFree] Methodology
#1051 Re: [osFree] Methodology
Lynn H. Maxson
Jul 27, 2004
John,
I didn't want your message starting this thread to get lost in
the recent burst of activity.
"However, the question we have to ask at this point is in
defining an API, do we yet need to map data types to specific
machine representations?"
I think the answer is no. We want to define an API which is
machine-independent. That's why you would choose 'fixed
bin (31) signed' over 'int' or 'fixed bin (32) unsigned' over
'ulong'. You don't want the implementation of the language in
control here. You want the language, and thus the language
user, in control. You don't need a standards committee after
a decade of misuse to say what 'int', 'short', and 'long' stand
for. The language says it all.
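To make the point concrete, a sketch of how an OS/2-style API
entry might be declared machine-independently (DosSleep is
used only as a familiar name, and linkage details such as
by-value versus by-reference are deliberately left out):

   apidefs: procedure options(main);
      /* the language, not the compiler implementation, fixes these sizes */
      dcl msec fixed bin(32) unsigned init(100);   /* what C calls 'ulong' */
      dcl rc   fixed bin(31) signed;               /* what C calls 'int'   */
      /* an illustrative interface declaration, not the real osFree one    */
      dcl DosSleep entry(fixed bin(32) unsigned)
                   returns(fixed bin(32) unsigned);
      put skip list('msec occupies', stg(msec), 'bytes');
   end apidefs;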
On the other hand if you talk with the people using logic
programming they would prefer a language which offered a
"generic" definition like 'dcl jim numeric;' or 'dcl jane string;'.
To some in that group even this is too specific. They would
use only an "implicit" data declaration.
Now PL/I, like FORTRAN, but unlike C and COBOL, supports
"implicit" data declarations, where the variable name just
appears in source code without an "explicit" declare or data
definition statement, just 'jim' or 'jane' within a source code
expression.
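A small sketch of the difference, with illustrative names:

   implicit: procedure options(main);
      dcl jim fixed bin(31) init(20);   /* explicit: representation chosen now */
      jane = jim * 2 + 1;               /* implicit: never declared; the        */
                                        /* compiler assigns a default           */
      put data(jim, jane);
   end implicit;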
Now you cannot have an executable until you have assigned
every variable name its explicit machine data representation.
But you can wait until you have encountered all uses of a
variable to decide what is the best, i.e. optimal, data
representation to use. You can allow the software to
automatically make this choice at this point. If you happen to
disagree with the software's choice, then you can explicitly
define it in a declare statement.
Now the non-programmers here need to understand the
concept of "persistent" data, data whose existence extends
beyond the lifetime of an execution of a program. It's
program global, not strictly local. The most common forms for
containing such data are files and databases or simply generic
datastores.
The problem that you have with a generic or implicit definition
based on use of global data is the need to examine its use
globally, i.e. across all use instances in all modules. Any
decision made in one has to occur in all others. You can only
make the best or optimal decision after seeing all its uses.
You cannot do that unless your software tool has unrestricted
access to the source, unless it can accept on input all the
source from all the affected modules concurrently, i.e. as a
whole.
Now habit inhibits our ability to think beyond the box. This
habit we have ingrained in ourselves in our tools: the
restriction of a compilation to a single program or external
procedure. No such restriction exists in any programming
language. It only exists artificially in their implementations.
Now why do we have some procedures denoted as external
and others as internal? This tool restriction says that if we
want to compile multiple procedures concurrently then we
have a "main" procedure contained in no other and one or
more "nested" procedures with possibly other procedures
"nested" within them. The main procedure we denote as
"external", contained in no other, and the nested as
"internal", contained within the body of another.
Now recently we eliminated this restriction. We eliminated the
need to use internal procedures by allowing the tool to accept
multiple external procedures on input. In PL/I we refer to this
as a "package". While we allow the procedures to input in
any order we require that one, and only one, be designated as
the "main" procedure. Thus we still restrict the scope of
compilation to a single program.
Now logic programming has no nested or internal procedures,
only external. Thus Prolog allows an unlimited and unordered
number of "specifications" on input, but insists we designate
one and only one as the "main" goal, i.e. procedure.
Thus the users of the major fourth generation language,
Prolog, who advocate generic and implicit data definitions, do
not have a tool which allows the review of the global use of data.
Now for programmers and non-programmers alike let's be
clear. One procedure only talks to another through a "named"
interface, otherwise known as an API. It's the same for
"internal" as it is for "external". Thus if we input into a tool
source code for an unlimited and unordered number of
external procedures (no longer having the need for internal),
the tool is more than capable of keeping all the names on a
list. For every name on the list it can have a sub-list of every
name called within that name's source code.
When it is done creating the list entries and the sub-lists off
each, it can then locate those entries not called by any other.
These become "main" procedures. The only exception lies in
accounting for recursion, a procedure invoking itself, i.e.
appearing on its own sub-list. We recognize the recursive use,
but do not count it as an invocation.
Thus we have a list with one or more entries not appearing on
another entry's sub-list. The key here lies in "one or more"
and not "just only one". We search this list for these "main"
entries, processing each one in turn through the hierarchy
formed by the sub-lists. This gives us all the source code for
a single program, for which we create an executable module.
Then we simply go on to repeat this process for every other
"main" entry on this list.
Thus in neither C nor PL/I do we need to designate a "main"
procedure in a package of external procedures. Nor do we
need to limit an internal list of procedure names to one "main"
entry. This is an arbitrary, artificial limit established for
machines with limited main storage before virtual storage and
gigabyte main storage. We have a thought and a thinking
process locked in a past which increasingly fewer of us have
ever experienced. If an "old dog" like myself is willing to give
it up, why would you "young pups" want to hang on to a no
longer necessary restriction?
While no single tool integrating all the functions I propose
exists, all the functions do in one form or the other. I get
somewhat exasperated when people don't recognize that I
have taken existing technology and simply repackaged it. You
can hardly doubt that the functions work as described, because
that's the way they have worked for years. I don't have to prove
a thing. I'm simply taking advantage of the way things work,
only integrating them in a different manner.
Now Daniel Caetano is wrong. I have to take the time
elsewhere to communicate why. But I do hope that I have
addressed your concerns (and more).
Re: [osFree] Methodology
#1052 Re: [osFree] Methodology
Lynn H. Maxson
Jul 27, 2004
Frank Griffin writes:
"I'm not advocating any of those three either, but you need to
realize that function is function, and if a language offers a
function which is not available in the native hardware, then
that function is being simulated by a library. Whether the
library comes with the compiler or is a third-party add-on is
immaterial."
I think you know the expression "hide in plain sight". No PL/I
programmer has ever needed the use of a library to write an
application. Not a library that comes "with" the compiler nor
one from a third-party. No, the PL/I library comes "builtin",
through a set of "builtin functions". The difference is that the
compiler knows the function. Knows. Knows. Knows.
Certainly over the years the PL/I programmer has taken
advantage of available libraries like the Scientific Subroutine
Library. On the other hand it has never actually needed them
to provide something not available in the language to the
compiler. You can within the bounds of the language known
to the compiler write any application, including an operating
system, without depending upon some external library.
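For those who have not used PL/I, a small sketch of what
"builtin" means here (the data is illustrative):

   builtins: procedure options(main);
      dcl s   char(40) varying init('osFree replacement for OS/2');
      dcl pos fixed bin(31);
      pos = index(s, 'OS/2');         /* builtin: no header file, no library ref */
      put skip list('found at', pos);
      put skip list(substr(s, pos));  /* builtin SUBSTR, known to the compiler   */
      put skip list(length(s));       /* builtin LENGTH                          */
   end builtins;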
That is not true for C or any of its derivatives, all of which
are library-dependent. If all those libraries came in source
form and not binaries, you could argue that the compiler can
"know" them, but they do not. Some even come in different
source languages like assembler.
Libraries have a number of nasty habits. One, they often
choose not to like each other: they are incompatible,
frequently deliberately so. Two, they are restricted to given
hardware and software platforms. Three, they are
incomplete. Their incompatibilities, their inability to coexist
with other libraries, prevents the user from combining them to
the level of desired completeness.
I say to you, Frank, that the need for libraries is the problem,
not the solution. Just as I say to Daniel, that C is the problem,
not the solution. Insofar as PL/I remains a third generation
language it remains a problem, not the solution.
I should have added a fourth to the previous list. Libraries
most likely come in binaries, not source. The compiler cannot
"know" the functions they contain.
We have outsourcing as a major occurrence in programming
because our tools are the problem, not the solution.
Outsourcing goes to where people costs are lower, not where
productivity is higher. If you want to stop outsourcing, if you
want to compete with lower labor costs, then you need to
increase individual productivity.
If you can have one programmer synchronize updates for a
hundred programs at one time instead of trying to synchronize
a hundred programmers with their one program, you will
increase productivity...by a factor far greater than 100.
I don't want to hurt anyone's feelings, but because you have
this library dependence you will never have HLL code the
equal of assembly language. Why? Because in today's virtual
storage environment, where main memory is no longer a
precious resource, you can effectively expand all or most
functions in-line. No compiler can expand a binary library
routine in-line.
On the other hand if you have an HLL whose only library is a
source library it can match any assembly language
programmer in terms of optimized and in-line executable code.
That's because the library and all its internals are known to the
compiler. If the library contains all the source code for all the
applications and all their supporting, i.e. callable and reusable,
functions, you can input them in their entirety, producing all
executables as a single unit of work.
I'm not saying that you may want to do that, but you could if
you wanted to. It happens because no logical "separation"
between the tool and the library exists. Everything in the
library exists as source.
Why have vendors brag about how many source statements
per second their product processes if you don't take
advantage of it? Even at a hundred million statements would
you not want the ability to process them in their entirety? Or
two hundred million? Or three? Imagine having a thousand
applications across a dozen different machine architectures
and several dozen different OSes resulting from a single unit
of work from a single source library.
It's not that we cannot write and maintain applications, even
operating systems, with our existing tools. We can and do.
It's that the very same tools place limits on our productivity.
Unnecessary limits. Now are we masters of our tools or do we
allow them to continue to restrict us?
I don't want to engage in a language war, when the battle is
not over the language but its implementation. The
implementations limit our productivity. That productivity
determines our cost. If we want to lower our cost, then we
have to increase our productivity. We increase our
productivity by improving our tools. We don't have to change
"what" they do as much as "how" they do it.
If you look at what I propose in a tool, you will note that I do
the same things differently...primarily eliminating
non-language-based restrictions. It's the tools, not the
language, holding us back.
Re: [osFree] Methodology
#1053 Re: [osFree] Methodology
Lynn H. Maxson
Jul 27, 2004
" I do not think otherwise! Every language is the best in your
own camp. And in Operating Systems camp - AFAIK - the king
is C. At least for the "lower level parts". We must use
assembly as well in several parts, since no mid or high level
language allow us to deal directly with segments, memory
protection and so on. ..."
Daniel,
I was controller of the 1968 SJCC (Spring Joint Computer
Conference), the COMDEX of its day, in Anaheim. At that
conference Dennis Ritchie presented his co-authored paper on
BCPL (Basic Combined Programming Language), the precursor
of the "B" language which preceded "C". As IBM had PL/I at the time as
well as a "powerful" S/360 symbolic assembler, later made
even more so by the H-level assembler, I found it hard to get
excited by someone whose academic work remained years
behind what was available commercially.
Perhaps the one thing which impressed me least was its
absence of an address data type corresponding to the PL/I
pointer. I certainly found the use of *variable_name to
indicate an address clumsy, as I had grown quite used to having
addresses as entities of their own, capable of moving about
freely without tag-along data attributes.
That doesn't seem like much until you get to C's 'union', the
ability to have multiple different data types occupy the same
data space. Having made one unnecessary mistake by not
having a separate address data type, they compounded it with
a creation to overcome "some" of its limitations. In PL/I you
achieve the effects of 'union' through the use of 'based'
variables which take on any address you assign to them
including those occupied by other data.
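A sketch of the based-variable overlay being described (the
names and the layout are illustrative only):

   overlay: procedure options(main);
      dcl word fixed bin(31) init(0);
      dcl p    pointer;
      dcl 1 halves based(p),          /* a second view of whatever p addresses */
            2 hi  fixed bin(15),
            2 lo  fixed bin(15);
      p = addr(word);                 /* point the view at 'word'              */
      hi = 1;  lo = 2;                /* store through the overlay             */
      put skip data(word);            /* same storage, read as one fullword    */
   end overlay;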
Now supposedly, at least if you believe the claims of K&R, C
has fewer restrictions than other languages. Of course, in C
the procedure which initially receives control from the
operating system has the name 'main'. Of course, keywords
are reserved words. I'd better stop as the list of restrictions is
a fairly large one.
One of the common myths about C is its proximity to matching
the underlying machine hardware. Now I don't understand
how 'int' gets closer to machine hardware than 'fixed bin (31)
signed' or 'fixed bin (32) unsigned', representing two different
arithmetic instruction sets in the hardware.
Then you have the situation where you don't want it as a
binary integer but as a bit string. In PL/I you declare it as
'bit(32)', a fixed-length bit string. You can pick any one bit or
any contiguous subset; shift them left or right, either filling in
zeroes or rotating, wrapping the bits which go off one end
around to the other. Moreover you can mask the bits, and-ing,
or-ing, or nor-ing as you choose. You can convert character strings to
bit strings and vice versa, string to arithmetic data and vice
versa, support fixed- and variable-length character and bit
strings without null termination (which machine instructions
support), and variable-length character strings with null
termination (not native to any machine or present in any
machine instruction). I won't bother you with
variable-precision, fixed-decimal arithmetic (or fixed binary
for that matter).
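A small sketch of the fixed-length bit string handling being
described (the values are chosen arbitrarily):

   bits: procedure options(main);
      dcl b bit(32) init((32)'0'b);
      substr(b, 5, 4) = '1011'b;                 /* set a contiguous subset of bits   */
      put skip list(b);
      b = substr(b, 2) || '0'b;                  /* shift left one, filling in a zero */
      put skip list(b);
      b = substr(b, 32, 1) || substr(b, 1, 31);  /* rotate right by one               */
      put skip list(b | ^b);                     /* or-ing and not-ing: all ones      */
   end bits;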
Thus I have considerable difficulty understanding where C lies
closer to representing actual machine architectures than PL/I.
I guess you can resort to K&R's argument that if it is not
present in C, it's unnecessary.<g>
I guess my reference to the Intel Pentium Instruction
Reference Manual with its HLL definition of every instruction
didn't impress you. You want to stick with C, even if it means
resorting to assembly language here and there, regardless of
the ability to use an HLL without such a need.
You cannot deny the popularity of C...or ethernet. It's an
application of Gresham's Law of Economics to software and
hardware: the cheap drives out the good. Cheap moved C and
UNIX to dominate the universities and colleges: they could
afford them. It was finance, not function, that drove the
decision.
You can't write an operating system without exception or
interrupt handling. Here you hit upon one area which K&R
chose as "unnecessary" and in which PL/I excels. You have
system-defined interrupts and programmer-defined interrupts,
even by those programmers writing operating systems with a
need to provide for certain software-based error conditions
like running beyond the bounds of an array or structure.
Every programming language "is" a specification language.
That includes assembler. If as Intel has provided in its
Pentium Instruction Reference Manual an HLL form exists for
every possible instruction, then that form can deal directly
with "segments, memory protection, and so on". You "have"
to use assembly, because you use C. You could choose an
HLL in which you didn't.
"... I couldn't agree more. That is: specify the workings
(algorithm) of the API, not the header file. ..."
I couldn't disagree more. Every function, every API, has an
IPO representation. You can do the "I" and the "O" without
the "P", but you can't formally specify the "P", your
algorithm, without the "I" and the "O". That "P", my friend,
stands for "programming" as well as "process". If you want to
start programming as part of the design process, then you
agree to rewriting not only source code but also the source
documentation, i.e. the design, during the process.
We have certain "givens" in our effort: the OS/2 APIs. Except
for the process name all the remaining are the "I"s and the
"O"s. If we decide to leave them "as is" in C, I will do so as
part of the common effort. At the same time I will do my own
in PL/E in which many of the algorithms you refer to become
part of the data rules within the 'range' attribute. That
means once I write them they become the software tool's
responsibility to enforce.
Now you may revel in the extra clerical work you keep from
the software, but I know down deep in my heart of hearts
that the more the software does and the less I have to do
increases my productivity. That means it doesn't take as
many of "me" as it does of "you" to develop and maintain an
operating system. That difference amounts to more than 200
to 1, the same level of savings, i.e. productivity gains, we
provide our clients with their software solutions.
You see, the issue is not language per se, but productivity.
While language has an effect, changing the language is not as
important as changing, i.e. improving, the tools. That's how
you increase productivity. That's how the OS/2 community,
once it starts acting as such, can bring itself on a par with and
exceed those of Linux, M$, IBM, and the rest.
"...This would make possible the creation of multiple input
queues (I had heard many times IBM did not implemented this
because it would break every single existent binary..."
Ah, the SIQ versus MIQ. Well, how many queues do you have
in a bank? How many before each cash register in a fast food
restaurant? My gosh, doesn't experience count for
anything?<g>
#1054 Re: [osFree] Methodology
Expand Messages
Frank Griffin
Jul 27, 2004
Lynn H. Maxson wrote:
>I think you know the expression "hide in plain sight". No PL/I
>programmer has ever needed the use of a library to write an
>application. Not a library that comes "with" the compiler nor
>one from a third-party. No, the PL/I library comes "builtin",
>through a set of "builtin functions". The difference is that the
>compiler knows the function. Knows. Knows. Knows.
>
>
PL/I does have the handy ability to apply functions to arrays, but that
doesn't really require that the compiler "know" the function. So I'm
not really sure what you perceive as the benefit here. And I'm sorry,
but the runtime library is there whether you see it or not. On z/OS, it
is the linkage editor that pulls all of those functions in from the PL/I
libraries; the compiler doesn't generate (most of) them as inline code
or manually insert them itself. To me, that's a library. Given that
it's done that way, I would actually see the requirement that the
compiler "know" each function to be a detriment.
>Certainly over the years the PL/I programmer has taken
>advantage of available libraries like the Scientific Subroutine
>Library. On the other hand it has never actually needed them
>to provide something not available in the language to the
>compiler. You can within the bounds of the language known
>to the compiler write any application, including an operating
>system, without depending upon some external library.
>
>That is not true for C or any of its derivatives, all of which
>are library-dependent. If all those libraries came in source
>form and not binaries, you could argue that the compiler can
>"know" them, but they do not. Some even come in different
>source languages like assembler.
>
>
I think we're splitting hairs. Most C compilers, including Visual Age
and GCC, decide which functions they want the compiler to optimize, and
then define them as "inline" or define them using MACROs. This allows
the compiler to perform the same optimization on function code as it
does on generated code. I see no difference and no disadvantage to
getting "core" function code from a runtime library not under the
immediate control of the compiler.
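For instance, here is a minimal sketch of the two usual
mechanisms; the names are made up for illustration.

  #include <stddef.h>

  /* an inline function: the compiler sees the body and may
     expand it at each call site */
  static inline size_t str_len(const char *s)
  {
      const char *p = s;
      while (*p)
          p++;
      return (size_t)(p - s);
  }

  /* a macro: substituted textually before compilation proper */
  #define MAX(a, b) ((a) > (b) ? (a) : (b))

Either way the function's source is in front of the compiler
at the point of use, which is what allows the optimization
described above.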
And come on, you *cannot* write an operating system exclusively in
PL/I. Last I looked, PL/I had no language constructs for doing Load
Real Address, Purge TLB, or loading control registers. Yes, PL/I has
signal handling and error recovery, but both of these are implemented on
top of OS functionality which is not going to be there if you're writing
the OS itself.
>Libraries have a number of nasty habits. One, they often
>choose not to like each other: they are incompatible.
>Frequently deliberately so.
>
We're not talking about third-party application libraries here that you
slot in and out at will. We're talking about the core runtime library
for the compiler. Every compiler I've ever seen, including PL/I, has a
single set of runtime libraries with which it is designed to work, and
it doesn't work with others. Mainframes have Language Environment, but
all that really means is that IBM's compilers for mainframe languages
all use the same linkage conventions for library routines.
We've had this discussion before. I have enough respect for your acumen
that I'll assume there is some downside to using MAKE and other existing
tools that I just don't recognize, but I'll believe that SL/I does it
better when I see it (or its function specs, or whatever).
#1055 Re: [osFree] Methodology
Expand Messages
Lynn H. Maxson
Jul 27, 2004
"...On z/OS, it is the linkage editor that pulls all of those
functions in from the PL/I libraries; the compiler doesn't
generate (most of) them as inline code or manually insert
them itself. To me, that's a library. Given that it's done that
way, I would actually see the requirement that the
compiler "know" each function to be a detriment. ..."
Well, Frank, you're about the last guy I want to argue with
when it comes to what happens on an IBM mainframe.
However, the following comes from my OS/2 PL/I Programming
Guide:
********************************************
The suboption NOINLINE indicates that procedures and begin
blocks should not be inlined.
Inlining occurs only when you specify optimization.
Inlining user code eliminates the overhead of the function call
and linkage, and also exposes the function's code
to the optimizer, resulting in faster code performance. Inlining
produces the best results when the overhead for
the function is nontrivial, for example, when functions are
called within nested loops. Inlining is also beneficial
when the inlined function provides additional opportunities
for optimization, such as when constant arguments
are used.
For programs containing many procedures that are not
nested:
o If the procedures are small and only called from a few
places, you can increase performance by specifying
INLINE.
o If the procedures are large and called from several places,
inlining duplicates code throughout the program.
This increase in the size of the program might offset any
increase of speed. In this case, you might prefer to
leave NOINLINE as the default and specify OPTIONS(INLINE)
only on individually selected procedures.
When you use inlining, you need more stack space. When a
function is called, its local storage is allocated at
the time of the call and freed when it returns to the calling
function. If that same function is inlined, its storage
is allocated when the function that calls it is entered, and is
not freed until that calling function ends. Ensure
that you have enough stack space for the local storage of
the inlined functions.
*************************************************
What is not clear here is with reference to builtin functions
like 'addr', 'substr', 'unspec', etc. Some of these are
automatically performed inline and some only when you
specify 'optimize'.
Regardless of all this, of what PL/I does or does not do, the
compiler cannot do inline optimization unless it has access to
the source code. By default PL/I has access to the "source"
of the builtin functions.
Moreover the compiler can optimize certain "hidden" functions
like conversion of decimal to binary, character to decimal,
decimal to character, character to bit, bit to character, binary
to decimal, binary to float, float to decimal, float to binary.
They are "hidden" because PL/I supports "strong" typing with
default (hidden) conversion algorithms. These algorithms can
occur either inline or out-of-line.
It makes no difference. They cannot occur inline if the
compiler does not have access to source. "Standard" libraries
became a "must" for C for reasons of portability and such that
we have discussed before. However, if they exist in binary
form and not source, the compiler has no options with respect
to their processing.
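A minimal C illustration of that point, with invented names:
when the body is visible in the compilation unit the compiler
may inline it; when only a declaration is visible and the body
lives in a binary library, it cannot.

  /* body visible here: the compiler may expand this inline */
  static inline int twice(int x) { return 2 * x; }

  /* only a declaration visible: the body lives in a separately
     built binary library, so an out-of-line call is generated */
  extern int thrice(int x);

  int use_both(int n)
  {
      return twice(n) + thrice(n);
  }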
All this arguing detracts from the point I'm trying to make for
a single, source-only library that includes all source. That
source includes the source for the language, i.e. its
specification. It includes the source for the tool, i.e. its
specification. It includes all other source code for all
applications including the operating system.
It does not disallow binaries, i.e. a mixed environment. It
allows the developer to choose anywhere from pure source to
any level of mixing. To support this the link-edit function is
builtin, i.e. integrated, into the tool.
No, you cannot completely write an OS in PL/I. You are
correct. Take a look at Intel's Pentium Instruction Reference
Manual to see if they do not offer an HLL version for the
instructions you mention. That means you need the capability
of separating "real" registers, i.e. machine-dependent ones,
from "logical" ones. There's nothing that says in a
machine-dependent specification you can't have data types
defining machine-dependent components accessed through an
instruction. While I don't have a PL/S manual handy and only
a vague recollection of PL/360 developed under Wirth at
Stanford in the 60's, I remain confident that a given language
can clearly separate machine-dependent from
machine-independent statements within the same syntax
structure.
I do not argue so much about what some language can or
cannot do. I do argue what we should be able to do within a
language for the simple reason that at some point someone in
some language has. I argue against "unnecessary" purity in a
language like Prolog so proud of its declarative facilities that
it disallows their imperative base into which they must
ultimately translate.
I look at the tool set. I see non-language related restrictions.
I see one-pass C compilers that force the use of "void"
statements, unnecessary writing in a multi-pass compiler. I
see a "main" name requirement remaining on systems not
using a UNIX shell language. I see a limit (1) on the number of
programs a set of external procedures can define. We have a
need to deal with a system from an application system to an
operating system as a whole to have software ensure global
data and logical consistency. We have a need to
automatically generate all the possible test cases for any path
in a program, any program, or any set of programs.
Now you may not want a software tool that allows this
because you feel that it encourages laziness in programmers.
I feel that programmers should do what software cannot and
software what programmers need not. In that manner shift
more of the "clerical" work to software. In doing so increase
programmer productivity.
If in doing so I can get to the point where given a complete
source library a single developer can maintain and enhance an
entire operating system, then I think I can come up with the
people resource.<g>
#1056 Re: [osFree] Methodology
Expand Messages
Frank Griffin
Jul 27, 2004
Lynn H. Maxson wrote:
>"...On z/OS, it is the linkage editor that pulls all of those
>functions in from the PL/I libraries; the compiler doesn't
>generate (most of) them as inline code or manually insert
>them itself. To me, that's a library. Given that it's done that
>way, I would actually see the requirement that the
>compiler "know" each function to be a detriment. ..."
>
>Well, Frank, you're about the last guy I want to argue with
>when it comes to what happens on an IBM mainframe.
>However, the following comes from my OS/2 PL/I Programming
>Guide:
>
>********************************************
> The suboption NOINLINE indicates that procedures and begin
>blocks should not be inlined.
>
> Inlining occurs only when you specify optimization.
>
> Inlining user code eliminates the overhead of the function call
>and linkage, and also exposes the function's code
> to the optimizer, resulting in faster code performance. Inlining
>produces the best results when the overhead for
> the function is nontrivial, for example, when functions are
>called within nested loops. Inlining is also beneficial
> when the inlined function provides additional opportunities
>for optimization, such as when constant arguments
> are used.
>
> For programs containing many procedures that are not
>nested:
>
> o If the procedures are small and only called from a few
>places, you can increase performance by specifying
> INLINE.
>
> o If the procedures are large and called from several places,
>inlining duplicates code throughout the program.
> This increase in the size of the program might offset any
>increase of speed. In this case, you might prefer to
> leave NOINLINE as the default and specify OPTIONS(INLINE)
>only on individually selected procedures.
>
> When you use inlining, you need more stack space. When a
>function is called, its local storage is allocated at
> the time of the call and freed when it returns to the calling
>function. If that same function is inlined, its storage
> is allocated when the function that calls it is entered, and is
>not freed until that calling function ends. Ensure
> that you have enough stack space for the local storage of
>the inlined functions.
>*************************************************
>
>What is not clear here is with reference to builtin functions
>like 'addr', 'substr', 'unspec', etc. Some of these are
>automatically performed inline and some only when you
>specify 'optimize'.
>
>
Lynn, I think you're misreading this. This is identical to the
"__inline__" (or whatever) capability of most C compilers. It allows
you to tell the compiler that some of the procedures or begin blocks for
which you supply the source code should be replicated inline wherever
they are used. It is only effective within the compilation unit which
includes the source code. And it has nothing to do with library functions.
>Regardless of all this, of what PL/I does or does not do, the
>compiler cannot do inline optimization unless it has access to
>the source code. By default PL/I has access to the "source"
>of the builtin functions.
>
>
>Moreover the compiler can optimize certain "hidden" functions
>like conversion of decimal to binary, character to decimal,
>decimal to character, character to bit, bit to character, binary
>to decimal, binary to float, float to decimal, float to binary.
>They are "hidden" because PL/I supports "strong" typing with
>default (hidden) conversion algorithms. These algorithms can
>occur either inline or out-of-line.
>
>
And the library writers have access to their source code. By choosing
to define some functions as MACROs which #include code blocks, they can
do exactly what you suggest. As far as "hidden" functions go, this is
the choice of the compiler; it can choose to do some (or all) of these
inline, or it may call binary portions of the runtime to do it. Since
both are possible (and since both are used), it follows that the
compiler authors have made some reasoned judgment about which functions
benefit from this.
>It makes no difference. They cannot occur inline if the
>compiler does not have access to source. "Standard" libraries
>became a "must" for C for reasons of portability and such that
>we have discussed before. However, if they exist in binary
>form and not source, the compiler has no options with respect
>to their processing.
>
>
As I've said, the library authors *do* have source code and are the ones
making the choice as to which library function source code should be
passed through the compiler as inline and which should not.
>All this arguing detracts from the point I'm trying to make for
>a single, source-only library that includes all source. That
>source includes the source for the language, i.e. its
>specification. It includes the source for the tool, i.e. its
>specification. It includes all other source code for all
>applications including the operating system.
>
>
I understand the all-source nature of the tool. What I don't understand
is the advantage to be gained from it. The only advantage of inline
code is (1) no entry/exit prolog overhead, and (2) possible reduced
paging due to locality of reference. Both of these are judgment calls
best left to the people who know the prolog cost and know the size of
the procedures involved. You're basically saying that everything should
be inlined, and I don't understand why this is better.
>No, you cannot completely write an OS in PL/I. You are
>correct. Take a look at Intel's Pentium Instruction Reference
>Manual to see if they do not offer an HLL version for the
>instructions you mention. That means you need the capability
>of separating "real" registers, i.e. machine-dependent ones,
>from "logical" ones. There's nothing that says in a
>machine-dependent specification you can't have data types
>defining machine-dependent components accessed through an
>instruction. While I don't have a PL/S manual handy and only
>a vague recollection of PL/360 developed under Wirth at
>Stanford in the 60's, I remain confident that a given language
>can clearly separate machine-dependent from
>machine-independent statements within the same syntax
>structure.
>
>
I don't remember the exact syntax for either case, but PL/S used
something like "GENERATE" to allow the inline writing of assembler code
in PL/S source, and GCC uses something like "__asm__" for the same
thing. No difference there.
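For what it's worth, here is roughly what such a wrapper looks
like in GCC-style C for a couple of the privileged x86
instructions mentioned earlier. This is a sketch only; the
function names are invented and the code is usable only in
kernel mode.

  /* read control register CR3 (the page-table base) */
  static inline unsigned long read_cr3(void)
  {
      unsigned long value;
      __asm__ __volatile__("mov %%cr3, %0" : "=r"(value));
      return value;
  }

  /* flush one TLB entry, roughly the "Purge TLB" case */
  static inline void flush_tlb_entry(void *addr)
  {
      __asm__ __volatile__("invlpg (%0)" : : "r"(addr) : "memory");
  }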
>I look at the tool set. I see non-language related restrictions.
>I see one-pass C compilers that force the use of "void"
>statements, unnecessary writing in a multi-pass compiler. I
>see a "main" name requirement remaining on systems not
>using a UNIX shell language. I see a limit (1) on the number of
>programs a set of external procedures can define. We have a
>need to deal with a system from an application system to an
>operating system as a whole to have software ensure global
>data and logical consistency. We have a need to
>automatically generate all the possible test cases for any path
>in a program, any program, or any set of programs.
>
>
I don't understand this preoccupation with "void" and "main", probably
because I don't see the advantage of compiling everything from source
for each build. The use of procedure declarations in headers because
one-pass compilers need to see them before their first use (which is
what I think you're referring to by "void") is only necessary if you
don't put all of the lower-level procedures first in the source file.
The use of "main" stems from the desire to be able to have the compiler
do a linkedit call internally without requiring the user to provide the
equivalent of a z/OS Linkage Editor ENTRY statement; the convention that
the entry point function be called "main" simply allows the compiler or
linkeditor to detect the entry point without interaction with the user.
I don't see any severe disadvantages in either of these. In fact, I
seem to recall that PL/I requires you to designate one procedure in an
executable with OPTIONS(MAIN) or some such.
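To make both points concrete, a minimal sketch: a forward
declaration satisfies the "see it before its first use"
requirement, and "main" is simply the default entry symbol the
toolchain looks for (GNU ld's -e option can name a different
one).

  #include <stdio.h>

  static void helper(int n);   /* forward declaration */

  int main(void)               /* default entry point the linker wires up */
  {
      helper(3);
      return 0;
  }

  static void helper(int n)
  {
      printf("called with %d\n", n);
  }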
Automatic generation of test cases has nothing to do with availability
of source. Most compilers have the ability to embed debugging data into
object files, which saves you having to reparse the source every time.
Granted, detecting branch patterns becomes a platform-specific process
(which it wouldn't be if you reparsed the source), but once a piece of
code has been debugged, why in the world would you want to regenerate
and rerun tests for the entire operating system and every application
just because you change a line of code in one application? I mean, you
might want to regenerate and save tests specific to an application or
component whenever the source code for that component changes, so that
they could be rerun automatically whenever you suspect that something
which might affect them has changed, but why regenerate the tests based
on the component source if the source itself hasn't changed?
>Now you may not want a software tool that allows this
>because you feel that it encourages laziness in programmers.
>I feel that programmers should do what software cannot and
>software what programmers need not. In that manner shift
>more of the "clerical" work to software. In doing so increase
>programmer productivity.
>
>
No, I'm just trying to satisfy myself that the forward strides you
attribute to your tool aren't already available with existing tools. Of
course, that doesn't preclude developing your tool anyway, but it may
influence folks' opinions here as to whether they want to wait for it to
begin. And a better understanding of the way your tool is supposed to
work would certainly ease migration to it later on.
#1057 Re: [osFree] Methodology
Expand Messages
Lynn H. Maxson
Jul 28, 2004
Frank,
I've started several different responses, gotten deep into
them, reviewed them, and then erased them. I have no
argument with your assessment of "what is" in terms of
current implementations. I fail to come up with any cogent
insights to resolve the niggly differences that separate us.
We should not begin the project to produce an OS/2
replacement package, of which the kernel is only a part, if we
do not have the wherewithal to finish it in a manner that
competes with other user OS choices. That applies to the
initial version, our development of the package, as well as to
successive versions, our maintenance of the package.
Now anyone who believes we need any less people resources
or organization than those employed by competing OSes, open
or closed source, engages in self-deception. If you use the
same tools and methodologies, you require at least the same
people resources, whether they volunteer their effort or get
paid for it.
It goes beyond this. It goes to the very heart and soul of
open source. It does no good to have the source, if you
cannot maintain it competitively...or even maintain its
customization for your individual use. You can afford the disk,
CD, and DVD space to hold it. More often than not you
haven't the capacity to maintain it realistically, either "you" as
an individual or as a group.
Periodically I review the projects on Netlabs and SourceForge.
Both offer more than enough empirical evidence of the
previous assertion regarding open source projects.
While organization has an effect on group productivity, except
for inter-dependencies it has no effect on individual
productivity. Individual productivity depends upon skill level
and tool support. There comes a point at which skill level
cannot rise above tool support. You need better tools.
Tools place an upper limit on productivity. I would hope we
at least could agree on this. Tools, including languages,
impact methodologies. Methodologies place an upper limit on
productivity. Maybe we can have an agreement on this.
We have a process, a combination of methodology and tools,
composed of inter-dependent, time-dependent sub-processes. We
have times within them and times (delays)
between them. These times, their presence or absence,
determine our productivity.
Our challenge to increasing productivity lies in minimizing
those times. We need to write and rewrite less, spending less
time overall in doing either. We need to spend less time in
meetings. We need to lose less time in delays, waiting for
some dependent process or person to complete.
We need to spend less time and get more out of the time we
spend. If you can't see that I have addressed this effectively
and efficiently, that results from my personal failure to
communicate properly. I want any individual who has a
version of an OS/2 replacement package source to effectively
maintain it in a manner and at a rate commensurate with his
needs. In short, I want open source to succeed at the
individual level.
That doesn't detract from group participation. It only affirms
the open source promise of individual independence, his ability
to do it his way.
I want to return to this group's interest in designing an OS/2
replacement. I will continue my tool development
independently. Maybe in this manner as we broaden the
discussion of the design the differences we have will either
disappear or become more clearly defined so that at least we
will understand why they exist.