
Re: gnubol: subsets



In a message dated 11/20/99 12:14:30 PM EST, TIMJOSLING@prodigy.net writes:

<< 
 If the compiler is to be machine independent, then it will not know length or
 alignment of etc. There are rules that a redefines is not allowed to be 
larger
 than what it redefines. So maybe we need something here. But as little as
 possible. The assembler should do this.
>>

Well, that is certainly a good stand. But I think it is not good 
COBOL.  Alignment is out of the reach of syntax, but size is not, IMHO. It is 
certainly the case that the size of all DISPLAY type items is syntactically 
clear. But I would argue that the PACKED and BINARY things also have clearly 
visible size. The language evolved in the context of the early presence of 
the OBJECT-COMPUTER paragraph, and there is strong programmer awareness of 
the hardware-like considerations behind the vendor-specific implementation 
decisions about alternative sizes for BINARY items, and of the critical issue 
of space potentially needed for 'unsigned' signs.
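To make that concrete, a minimal sketch (names invented; the DISPLAY count
follows directly from the picture string, while the BINARY and PACKED-DECIMAL
byte counts in the comments are the common vendor choices, not something the
standard fixes):

    01  SIZE-EXAMPLES.
        05  DSP-ITEM  PIC 9(4)  USAGE DISPLAY.          *> 4 bytes, one per digit
        05  BIN-ITEM  PIC 9(4)  USAGE BINARY.           *> typically 2 bytes (a halfword)
        05  PCK-ITEM  PIC S9(5) USAGE PACKED-DECIMAL.   *> typically 3 bytes: 5 digits plus a sign nibble
        05  UNS-ITEM  PIC 9(5)  USAGE PACKED-DECIMAL.   *> 'unsigned': same 3 bytes, sign nibble forced

Every one of those counts comes from the picture string, the USAGE clause, and
the vendor's published size rules; nothing else about the target machine is
needed.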

All of that is in the code and/or in the parameters visible to the syntax 
processor.

I am sensing that several participants would leave much to semantics that I 
think is possible in syntax. I am not much interested in opposing any such 
tilt. But my point here is that COBOL did not evolve _type_. It evolved 
_size_ and _USAGE_. All of that is visible to syntax.

When I use a COBOL compiler that will blind me to numeric values higher than 
9,999 in a BINARY 9(4) field, and I do elect to force it to treat the field as 
max 9(4), I am then very disappointed that it does not complain when I ask whether 
that field is >= 10,000. And I don't care how impressive the vendor's name is.
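Here is the shape of the case I mean, as a hedged sketch (names are mine):

    01  WS-COUNT  PIC 9(4)  USAGE BINARY.
    ...
        IF WS-COUNT >= 10000
            PERFORM COUNT-OVERFLOW
        END-IF

If the compiler is honoring the 9(4) picture strictly, WS-COUNT can never reach
10,000, so the condition is always false and deserves at least a diagnostic. The
picture string alone tells you that; no knowledge of the machine is required.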

A 9(4) field that is restricted in range is typed; it is strongly typed. That 
is syntax. It is not rocket science to see its size. To fully, completely and 
competently type COBOL data items we inevitably know their size.

FD entries have elements that can only be validated by doing counts of sizes 
in the 01 entries. All of that is on the surface. Again, IMHO, there is no 
need to leave that to semantics.  I think my slant is that semantics 
generates code. It needs lots of defense mechanisms for sure, but it is not a 
parser.  Generally, a REDEFINES handed to semantics ought to be pretty close 
to right.
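To make the FD point concrete, a small invented example:

    FD  CUSTOMER-FILE
        RECORD CONTAINS 80 CHARACTERS.
    01  CUSTOMER-REC.
        05  CUST-ID    PIC 9(6).     *> 6 bytes
        05  CUST-NAME  PIC X(30).    *> 30 bytes
        05  CUST-ADDR  PIC X(40).    *> 40 bytes
        05  FILLER     PIC X(4).     *> 4 bytes; 80 in all

Checking that the 01 entry really adds up to the 80 characters promised in the
RECORD CONTAINS clause is pure counting over picture strings, and it can be done
while the entries are still in the parser's hands.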

I get the feeling (this is meant as an explanation of my own idea, not as a 
comment on others, but it might be useful to say) that some participants view 
the parse as lexing. I think it is much more than that.

But specifically with attributes that relate to type, COBOL is somewhat 
different. It does not have type: it has size and USAGE, which are on the 
surface.

I would say that it is reasonable to suggest that we _should_ be aware of 
machine-like factors.  We should be able to sense size in syntax. One 
dividing line might be to pass the problem to semantics only if SYNC is 
involved (that is, within one or more of the subordinate elements in the group 
being redefined or in the group being defined as the redefinition). Our list 
may be larger than just two possibilities here, but if the REDEFINES does not 
involve SYNC, syntax could easily post it to semantics as already checked.
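For a REDEFINES with no SYNC anywhere in either group, the check is nothing more
than adding up picture sizes. A sketch, with invented names:

    01  WS-WORK.
        05  IN-DATE  PIC 9(8).                  *> 8 bytes
        05  IN-DATE-X REDEFINES IN-DATE.
            10  IN-YEAR   PIC 9(4).             *> 4 bytes
            10  IN-MONTH  PIC 9(2).             *> 2 bytes
            10  IN-DAY    PIC 9(2).             *> 2 bytes; 8 in all, so the sizes agree

If IN-DAY were written PIC 9(4), the redefinition would claim 10 bytes against 8,
and that violation is visible from the picture strings alone, long before anything
resembling code generation runs.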

It is not inconceivable for syntax to know the size implications of SYNC 
either. Doing SYNC, as semantics would, does not mean the same thing as 
understanding it enough to count.
I feel that a size mismatch in a REDEFINES is just a syntactic issue, even in 
a REDEFINES of an 01.  It is all on the surface to me.
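Even with SYNC the counting is mechanical once the vendor's alignment rule is on
the table. A sketch, assuming the common choice of fullword alignment for a
four-byte BINARY item:

    01  WS-SYNCED.
        05  FLAG-BYTE  PIC X.                         *> 1 byte at offset 0
        05  COUNTER    PIC 9(8) USAGE BINARY SYNC.    *> aligned to offset 4: 3 slack bytes,
                                                      *> then 4 bytes, 8 bytes for the group

A table of USAGE sizes and alignment boundaries is all the syntax phase needs to
arrive at that 8, so even a REDEFINES over a group like this one could be
size-checked before semantics ever sees it.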

Best Wishes
Bob Rayhawk
RKRayhawk@aol.com
 

--
This message was sent through the gnu-cobol mailing list.  To remove yourself
from this mailing list, send a message to majordomo@lusars.net with the
words "unsubscribe gnu-cobol" in the message body.  For more information on
the GNU COBOL project, send mail to gnu-cobol-owner@lusars.net.