
TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...

[MVS/370 is an operating system for IBM mainframe computers.  Big iron.
Hulking giant.  Green screens, JCL, VTAM, VSAM... more three and four
letter acronyms than you can shake a stick at.  And raw, no-apologies,
twelve-processors-running-all-out *speed.*]

MVS is also a system that is geared toward -batch- processing.  And in
these days of world-wide webs (and ungodly numbers of simultaneous
users, once your site gets popular) there is a LOT that Delphi
programmers can learn from that "strange old world."

"You young whippersnappers" ;-) cut your teeth on a purely interactive
world.  {Hey, back off, I'm not -that- old!  ;-)}  Everything in the
standard PC world is immediate, multithreaded, simultaneous, and "now."

But everything in the PC world is *also* based on the notion that you
have "so damned much computer power being wasted anyhow" (pardon the
expression) that you can afford to waste it.  Extravagantly.

Enter ... The Web, stage left.  Now you don't have just one user
clamoring for your attention -- you have hundreds.  Thousands.  

Or maybe you encounter your first application that considers 10 million
input records to be "just another day at the office."

You may be startled to learn that programmers were writing systems that
could handle loads like that -- and could service thousands of connected
users -- on computers that were approximately the size of an original
IBM PC/AT.  Computers that cost millions of dollars at the time.  The
techniques that were used then, born of necessity, are just as apropos
today.  They still work.  And, heavy-workload situations like the World
Wide Web are bringing such techniques back into serious consideration.

Have you ever encountered a suddenly-popular web site, dropped in on it,
and found very quickly that it could not cut the mustard?  If you
haven't done that yet, you will.

So what did those MVS folks know back in the 1960's and 1970's that we
somehow cannot replicate today?  What is it that we are somehow failing
to teach programmers today??

There are really just a few key principles.... and there is actually
software, called Transaction Servers (like CICS, or Microsoft's MTS),
that does incorporate and automate many of them.

PRINCIPLE NUMBER ONE:  "USERS DO *NOT* EQUAL THREADS!"
Many computer programs are built like restaurants in which every
customer who walks into the building immediately dons an apron and
cooks his own meal, start to finish.  If you notice, restaurants are
NOT built that way.  The number of people who are executing the
business process is always much smaller than the number of customers
being (efficiently) served.  Your order usually has to wait for some
time, for some reason, but it nearly always gets served in a
predictable amount of time.

If the restaurant were NOT organized this way, no one would eat.
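The restaurant model maps directly onto a fixed worker pool fed by a
request queue.  Here is a minimal sketch in Python (all names --
NUM_WORKERS, handle_order, cook -- are illustrative, not from any
particular framework):

```python
import queue
import threading

NUM_WORKERS = 3          # the "cooks" -- far fewer than the customers
orders = queue.Queue()   # customers line up here instead of cooking

def handle_order(order):
    # Stand-in for the real business process.
    return f"served: {order}"

results = []
results_lock = threading.Lock()

def cook():
    while True:
        order = orders.get()
        if order is None:          # sentinel: the shift is over
            break
        r = handle_order(order)
        with results_lock:
            results.append(r)

workers = [threading.Thread(target=cook) for _ in range(NUM_WORKERS)]
for w in workers:
    w.start()

# 400 "customers" arrive, but only NUM_WORKERS are ever cooking.
for i in range(400):
    orders.put(i)
for _ in workers:
    orders.put(None)
for w in workers:
    w.join()

print(len(results))   # 400 -- every order served by just 3 workers
```

The point is the ratio: the number of requests is unbounded, but the
number of threads doing the work stays fixed.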

PRINCIPLE NUMBER TWO:  "THOU SHALT NOT THRASH!"
A computer system that has become overloaded is an ugly thing to see.
The situation does not degrade gracefully; it goes to hell all at once.
The performance curve (transactions per second, response time, etc.)
"hits the wall."  Performance declines linearly until the thrash
point is reached, at which time the performance graph takes a sharp
right-angle turn and goes straight up.  The moment that you ask the
computer to ATTEMPT too much work *at one time,* performance goes
straight to the big-hot-place-downstairs.

Programmers coped with that problem, and built systems that could
efficiently handle hundreds or thousands of users, by the efficient use
of queues...  intentionally holding requests until they could be served,
and thereby controlling the number of requests *attempted* at one time.
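In modern terms that queueing discipline is an admission throttle.  A
minimal sketch in Python, using a semaphore as the gate (MAX_CONCURRENT,
do_work, and the peak bookkeeping are illustrative, not from any
particular library):

```python
import threading

MAX_CONCURRENT = 3
gate = threading.Semaphore(MAX_CONCURRENT)

in_flight = 0       # how many requests are being attempted right now
peak = 0            # the highest value in_flight ever reached
lock = threading.Lock()

def do_work(request_id):
    global in_flight, peak
    with gate:                     # blocks while 3 requests are running
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        # ... the expensive computation would run here ...
        with lock:
            in_flight -= 1

threads = [threading.Thread(target=do_work, args=(i,)) for i in range(12)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= MAX_CONCURRENT)   # True: never more than 3 attempted at once
```

Requests beyond the limit simply wait at the gate; nothing is attempted
past the point where the system starts to thrash.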

EXAMPLE:
I was once asked to investigate a problem at the college where I worked,
involving a certain engineering package (ANSYS) that is extremely
computer-intensive.  It seems that if one student submitted an analysis,
it took 4 minutes to complete.  But throw a classroom-ful of 12 students
at it and suddenly it took 14 hours.  IBM's sales force smelled blood
and was closing in to sell us a supercomputer.

I reasoned, correctly, that those 12 students could get their output in
"no more than half an hour, guaranteed" if we could only throttle the
system so that it would never attempt to run more than three jobs
simultaneously.  At that rate, a batch of three jobs finished in about
six minutes, so twelve jobs could be completed in 4 x 6 = 24 minutes.
More than acceptable to the students, hell-to-pay for those IBM
salesmen, approximately $4 million in savings to the University.  And a
classic illustration of the power and impact of thrashing.
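The arithmetic behind that guarantee is simple enough to check (a
sketch using the figures quoted above -- 12 jobs, at most 3 at a time,
about 6 minutes per batch of three):

```python
import math

jobs = 12
max_concurrent = 3      # the throttle: never more than 3 attempted at once
minutes_per_batch = 6   # a batch of 3 takes ~6 min (vs. 4 min for 1 alone)

batches = math.ceil(jobs / max_concurrent)    # 4 batches of 3 jobs
total_minutes = batches * minutes_per_batch   # 4 x 6 = 24 minutes

print(batches, total_minutes)   # 4 24 -- well under the promised half hour
```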

IN CLOSING:
Programmers today are not educated enough about the limitations of their
computers because, until recently, they never encountered them.  In the
same way that two generations of programmers were raised up to "never
assume that you have as much CPU-power as you would like to," the latest
crop of programmers and system-designers today have been raised up with
the notion that "you have power to burn!"  As I have illustrated in this
rather long-winded TechTips post... you *can* learn a lot from what
programmers used to do when what they used to do was all they had.

------------------------------------------------------------------
Sundial Services :: Scottsdale, AZ (USA) :: (480) 946-8259
mailto:i...@sundialservices.com  (PGP public key available.)

> Fast(!), automatic table-repair with two clicks of the mouse!
> ChimneySweep(R):  "Click click, it's fixed!" {tm}
> http://www.sundialservices.com/products/chimneysweep

 

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


Perhaps you need another tech tip to explain what thrashing is.

When I first got involved with micros, I couldn't understand why it took
so much computing power to service a single process.  After all, the
mainframe I first worked on had 240KB of RAM and 120MB of disk, and
concurrently executed several dozen processes.

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


In article <38BC6A0B.1...@sundialservices.com>, i...@sundialservices.com
says...

Well, as someone who wrote on a PDP-11 with 16K words of memory, I
partly agree.


>You may be startled to learn that programmers were writing systems that
>could handle loads like that -- and could service thousands of connected
>users -- on computers that were approximately the size of an original
>IBM PC/AT.

Yes, BUT the IBM 370 had fast I/O, much faster than the PC's.  So
swapping out/in (which is not thrashing) worked well.


>So what did those MVS folks know back in the 1960's and 1970's that we
>somehow cannot replicate today?  What is it that we are somehow failing
>to teach programmers today??

But the operating system did nothing.  No event processing, little error
trapping, and little if any I/O.  The processes ran in a predictable way:
module A ran, then B, then C.  You got the speed by figuring out all that
could happen in A and then moving to B, then to C.  That made very unstable
programs, because computations in C rested on the ones in A being correct.

If you look at X11, the first books tried to teach the programmer to
break out of this serial mode and do on-demand computing, which verifies
the data/values at every stage.  Thus C is responsible for its own
stability.

This eats CPU cycles but produces much better interactions and code.


>There are really just a few key principles.... and there is actually
>software, called Transaction Servers (like CICS, or Microsoft's MTS),
>that does incorporate and automate many of them.

>PRINCIPLE NUMBER ONE:  "USERS DO *NOT* EQUAL THREADS!"
[snip]

The analogy is a little flawed, because there is only one copy of the
apron/stove that all the threads use.  E.g., the code, and the read-only
data, is in memory only once.  You are not duplicating all the resources
per thread; in fact, only the data is different.


>PRINCIPLE NUMBER TWO:  "THOU SHALT NOT THRASH!"

The solution is to let the operating system take care of the thrashing.
E.g., let it decide when too much time is being spent, and then adjust the
quantum to curb thrashing.  And lock users out.  Now, Windows doesn't do
this, nor does *IX; VMS did, as well as offering the ability to lock memory
for certain sections of code.  But this is better handled at a higher
level, not by the programmer.

>Programmers coped with that problem, and built systems that could
>efficiently handle hundreds or thousands of users, by the efficient use
>of queues...  intentionally holding requests until they could be served,
>and thereby controlling the number of requests *attempted* at one time.

>EXAMPLE:

[snip]

Yes, but this is NOT the problem faced in Web traffic.  You are not
submitting batch jobs that run to conclusion with slightly different data.
It's not the interactive demand for data.  At any instant in Web traffic,
you can lose the link, the user can jump to another place, or request data
OUT OF ORDER.  The old IBM programs assumed a serial, static way of
processing data, which meant one could queue requests.


>IN CLOSING:
>Programmers today are not educated enough about the limitations of their
>computers because, until recently, they never encountered them.  In the
>same way that two generations of programmers were raised up to "never
>assume that you have as much CPU-power as you would like to," the latest
>crop of programmers and system-designers today have been raised up with
>the notion that "you have power to burn!"  As I have illustrated in this
>rather long-winded TechTips post... you *can* learn a lot from what
>programmers used to do when what they used to do was all they had.

But all your examples were not CPU examples but memory/thrashing examples.
Again, the IBM had fabulous disk I/O, and its disk-to-memory swap could
happen asynchronously with CPU/memory access.

To close with an analogy: assembly programmers used to trash compilers
because of how bloated and inefficient the code those compilers produced
was.  However, the advantage of high-level code resulted, after 30 years,
in compilers and hardware that make it very hard to write assembly-language
programs that can beat the compilers.  Just try to beat a loop that moves
memory in assembler on an Intel system.  This is the way I think of
multiple access and thrashing: in time, the operating systems will do a
better job, most of the time, than experienced programmers.

We are NOT in a static batch environment anymore!

-John_Mer...@Brown.EDU

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


"John_Mertus" <John_Mer...@brown.edu> wrote in message

news:89j1c2$3ns@cocoa.brown.edu...

> Yes, but this is NOT the problem faced in Web traffic.  You are not
> submitting batch jobs that run to conclusion with slightly different data.
> It's not the interactive demand for data.  At any instant in Web traffic,
> you can lose the link, the user can jump to another place, or request data
> OUT OF ORDER.  The old IBM programs assumed a serial, static way of
> processing data, which meant one could queue requests.

Transaction systems have always faced similar problems. At any point in time
a session can be interrupted and traffic can get corrupted or simply lost.
What users can do as far as moving around a web site really doesn't affect
the server. All the server is doing is servicing requests that it gets, same
as any mainframe transaction system.

I think the original post was right on the money when it suggested that
lessons could be learned from more mature systems, even when the problems
appear to be different.  If we take a look at the evolution of micro
hardware, we see fairly strict parallels with the mainframe development
that preceded it by 20 or more years: start with a single polled CPU, add
interrupts, add interrupt-driven I/O, off-load I/O processing to
sub-processors (or the devices themselves), add memory management, add
co-operative multi-tasking, add virtual memory, add pre-emptive
multi-tasking, etc.  It's no surprise that the development of mainframes
and micros parallels each other -- their limitations and the demands
placed on them are so similar.  No doubt the resource demands being placed
on the current generation of micros greatly exceed the demands placed on
mainframes twenty years ago, but that is a matter of volume.  The actual
work is fundamentally similar.


> >IN CLOSING:
> >Programmers today are not educated enough about the limitations of their
> >computers because, until recently, they never encountered them.  In the
> >same way that two generations of programmers were raised up to "never
> >assume that you have as much CPU-power as you would like to," the latest
> >crop of programmers and system-designers today have been raised up with
> >the notion that "you have power to burn!"  As I have illustrated in this
> >rather long-winded TechTips post... you *can* learn a lot from what
> >programmers used to do when what they used to do was all they had.

> But all your examples were not CPU examples but memory/thrashing examples.
> Again, the IBM had fabulous disk I/O, and its disk-to-memory swap could
> happen asynchronously with CPU/memory access.

I don't know it for a fact, but I suspect that if you took a look at the
hardware specs you would find that a 20-year-old mainframe had much slower
disk I/O than today's PCs with SCSI interfaces.  They certainly had less
physical memory, and slower and smaller disks.


> To close with an analogy: assembly programmers used to trash compilers
> because of how bloated and inefficient the code those compilers produced
> was.  However, the advantage of high-level code resulted, after 30 years,
> in compilers and hardware that make it very hard to write assembly-language
> programs that can beat the compilers.  Just try to beat a loop that moves
> memory in assembler on an Intel system.  This is the way I think of
> multiple access and thrashing: in time, the operating systems will do a
> better job, most of the time, than experienced programmers.

Thrashing has little to do with multiple access.  What makes a machine
thrash is not having enough physical memory to satisfy the working set(s)
of the program(s) it is running.

> We are NOT in a static batch environment anymore!

Plus ça change . . .



> -John_Mer...@Brown.EDU

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


John_Mertus wrote:

[...]

> But the operating system did nothing.  No event processing, little error
> trapping, and little if any I/O.  The processes ran in a predictable way:
> module A ran, then B, then C.  You got the speed by figuring out all that
> could happen in A and then moving to B, then to C.  That made very unstable
> programs, because computations in C rested on the ones in A being correct.

There was, in the IBM world, "batch and TSO on one side, CICS on the
other."  And CICS was the innovation - the world's first
transaction-monitor since Whirlwind.

> >PRINCIPLE NUMBER ONE:  "USERS DO *NOT* EQUAL THREADS!"
> [snip]
> The analogy is a little flawed, because there is only one copy of the
> apron/stove that all the threads use.  E.g., the code, and the read-only
> data, is in memory only once.  You are not duplicating all the resources
> per thread; in fact, only the data is different.

Actually, John, the analogy I was trying to make was that you could have
400 users on a CICS system but there were never 400 threads or processes
working that load.

There are lots of beginning web programmers who think "well, this
on-line form will issue a query and get the results ..."  Bang: 400
users somewhere in the world try to use that form at one time, 400
queries get posted to the server at precisely the same moment, and --
the whole thing dies.
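One standard defense against that failure mode, sketched here in Python,
is to bound the backlog and refuse the overflow outright rather than let
every request pile onto the database at once (MAX_QUEUE and admit are
illustrative names, not any particular web framework's API):

```python
import queue

MAX_QUEUE = 50
backlog = queue.Queue(maxsize=MAX_QUEUE)   # the line outside the kitchen

def admit(request_id):
    """Try to enqueue a request; shed load when the queue is full."""
    try:
        backlog.put_nowait(request_id)
        return "queued"
    except queue.Full:
        return "refused"      # e.g., an HTTP 503 -- not a dead server

# 400 simultaneous form submissions arrive.
results = [admit(i) for i in range(400)]
print(results.count("queued"), results.count("refused"))   # 50 350
```

The refused users get a fast "try again" instead of an hourglass, and
the 50 queued requests are drained by a small, fixed set of workers.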

> >PRINCIPLE NUMBER TWO:  "THOU SHALT NOT THRASH!"

> The solution is to let the operating system take care of the thrashing.
> E.g., let it decide when too much time is being spent, and then adjust the
> quantum to curb thrashing.  And lock users out.  Now, Windows doesn't do
> this, nor does *IX; VMS did, as well as offering the ability to lock memory
> for certain sections of code.  But this is better handled at a higher
> level, not by the programmer.

An analogous behavior to swap-file thrashing can also occur in any other
type of overloaded system that does not exercise load-balancing.  When
the load (of whatever type) increases beyond some [what I call] "thrash
point," the ability of the system to handle the load falls apart because
everyone's standing around, waiting.

[...]

> Yes, but this is NOT the problem faced in Web traffic.  You are not
> submitting batch jobs that run to conclusion with slightly different data.
> It's not the interactive demand for data.  At any instant in Web traffic,
> you can lose the link, the user can jump to another place, or request data
> OUT OF ORDER.  The old IBM programs assumed a serial, static way of
> processing data, which meant one could queue requests.

The web actually *is* a transaction-processing system, in the sense that
there is a disconnect between the user sessions that request work to be
done and the activity that goes on behind the scenes to fulfill them.
The user could blip away from the session at any time -- at which point
the transactions in progress or scheduled for that session can be
cancelled.

With a transaction-monitor system this sort of thing is easy.  But how
many web sites have we seen fail under load, with the kinds of
unpredictable hourglass waits that prompt users to give up and move on?

[...]

> But all your examples were not CPU examples but memory/thrashing examples.
> Again, the IBM had fabulous disk I/O, and its disk-to-memory swap could
> happen asynchronously with CPU/memory access.

> To close with an analogy: assembly programmers used to trash compilers
> because of how bloated and inefficient the code those compilers produced
> was.  However, the advantage of high-level code resulted, after 30 years,
> in compilers and hardware that make it very hard to write assembly-language
> programs that can beat the compilers.  Just try to beat a loop that moves
> memory in assembler on an Intel system.  This is the way I think of
> multiple access and thrashing: in time, the operating systems will do a
> better job, most of the time, than experienced programmers.

> We are NOT in a static batch environment anymore!

The notion that programmers can do better than compilers, in ways that
make a difference -most- of the time, was given up long ago.  There are
few operations that are that intensive in the bit-twiddling department.
Most processes are I/O-bound in some way: they spend their time waiting
on a device.

Lots of low-level routines, like Move(), are written in assembler.  I
wrote a key digital-signature routine for ChimneySweep in assembler
because, in empirical tests, it did make a big difference.  But oiling
the CPU's way through a difficult piece of code to save five millionths
of a second won't make a difference if the program soon stops and waits
one thousandth of a second (200 times longer!) for a disk drive.

And so it goes.  

It's basically a difference of -algorithms- and -approach.-  I'm sure
we're in agreement on that.  We're just seeing some web-implementations
out there which are flopping under load for the same reasons that
earlier, successful "big iron" systems had no choice -but- to avoid.

------------------------------------------------------------------
Sundial Services :: Scottsdale, AZ (USA) :: (480) 946-8259
mailto:i...@sundialservices.com  (PGP public key available.)


Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


Bruce Roberts wrote:

> > Again, the IBM had fabulous Disk I/O and its Disk to Memory swap could
> I don't know it for a fact, but I suspect that if you took a look at the
> hardware specs you would find that a 20 year old mainframe had much slower
> disk i/o than today's pcs with SCSI interfaces. They certainly had less
> physical memory, slower and smaller disks.

Yes, the IBM salesmen's old message about "unbeatable high I/O with all
IBM hardware" has really sunk in.

Some years ago, one fellow stated that the IBM AS/400, and even the older
System/36, have or had unbeatably high I/O speed: "They are designed as
multi-user systems, and that I/O speed still beats any existing PC hard
disk when it comes to I/O."
   When the real I/O numbers were finally dug out, the I/O of the AS/400
was about the same as that of any average PC with an Ultra-DMA IDE hard
disk.  And it could not compete at all with SCSI disks.

I don't want to kill those good old IBM legends, though :)  Having them
around is like having some old, good, dependable grips to grab.

Markku Nevalainen

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


A new IBM myth I read TODAY from a computing magazine:

   "IBM AS/400 is by far the fastest environment
    to run Java applications."

There were no numbers to prove it, and I don't have comparable numbers
from other systems either.
So we had better take IBM's word for it, and believe the AS/400 is the
top iron in the world when it comes to running Java applications fast.

Markku Nevalainen

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


"Markku Nevalainen" <m...@iki.fi> wrote in message

news:38BE3EB1.473C@iki.fi...

> A new IBM myth I read TODAY from a computing magazine:

>    "IBM AS/400 is by far the fastest environment
>     to run Java applications."

> There were no numbers to prove that, and I don't have comparable numbers
> from other systems either.
> So, we better take IBM's word, and believe AS/400 is the top iron in
> the world, what comes running Java applications fast.

And they have a bigger advertising budget than Sun to prove it!

> Markku Nevalainen

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


Bruce Roberts wrote:

> "Markku Nevalainen" <m...@iki.fi> wrote in message
> news:38BE3EB1.473C@iki.fi...
> > A new IBM myth I read TODAY from a computing magazine:

> >    "IBM AS/400 is by far the fastest environment
> >     to run Java applications."

Java and Fast in the same sentence is an oxymoron.

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


In article <38BD9DE5.6...@iki.fi>, m...@iki.fi says...


>Bruce Roberts wrote:

>> > Again, the IBM had fabulous Disk I/O and its Disk to Memory swap could

>> I don't know it for a fact, but I suspect that if you took a look at the
>> hardware specs you would find that a 20 year old mainframe had much slower
>> disk i/o than today's pcs with SCSI interfaces. They certainly had less
>> physical memory, slower and smaller disks.

We were talking about the past, not now :)  You are right that if you look
at Web servers and their similarities to transaction processing, there are
many lessons to be learned.  This is something I did not see before.  But
I was reacting more to how computing has changed, which makes the modern
environment so different from 1985.

In 1985, an IBM mainframe had the fastest I/O around, except for a Cray.
It used 3380 disks that ran at 3 MB/sec per channel.  One could add
additional I/O channels, and they would perform in parallel; a 4-channel
370 could swap in/out at almost 12 MB/sec.  This is close to the actual
rate, since an IBM channel performed I/O without the CPU, and the channel
could store commands, like modern SCSI.  But let's use 6 MB/sec; it's not
going to really matter, as you will see.

The MIPS of the 370/75 was about 1.89 in 1985.

SCSI rates vary, but even today an actual 40 MB/sec SCSI transfer is
still very, very high.  A modern Pentium III runs at over 1000 MIPS.

Now I realize that MIPS means "meaningless instructions/second," but
since I could not find SPEC95 marks for a 370 :) I want to use them.

This gives a ratio of I/O bandwidth to MIPS of:
  370:                about 3
  Modern Pentium III: about 0.04

Or nearly a hundredfold difference.  The actual numbers do not matter as
much as the HUGE shift, thus one would not expect to program these systems
in the same way.  CPU cycles become increasingly meaningless and the
systems become I/O-bound.  20 years ago, the programmer was faced with
fast I/O and slow CPUs.
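Running the arithmetic on the figures quoted here (6 MB/sec at about
1.89 MIPS for the 370, 40 MB/sec at about 1000 MIPS for a Pentium III --
the poster's estimates, not measured benchmarks):

```python
# I/O bandwidth per unit of CPU throughput, MB/sec per MIPS.
ratio_370 = 6 / 1.89      # ~3.2 for the 4-channel 370
ratio_piii = 40 / 1000    # 0.04 for a Pentium III with fast SCSI

# The 370 offered roughly 80x more I/O bandwidth per instruction.
print(round(ratio_370, 2), ratio_piii, round(ratio_370 / ratio_piii))
```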

I am all for working on algorithms to make a program run clean.  But my
point, not put well before, is that this ratio change means the
programmer must think in new ways: e.g., don't worry about CPU speed, but
about how fast an I/O can complete.  In the old days, on the 370, you
would worry about the CPU and let the I/O complete itself.

Does IBM have the fastest I/O today?  I don't know; I'll look up the
current ES/9000 and see how it compares to today's SCSI and ATA/66 drives.

-jam

Re:TechTip: Yes, Virginia, you can learn a thing or two from MVS/370 ...


John_Mertus wrote in message <89sfd3$...@cocoa.brown.edu>...

>In article <38BD9DE5.6...@iki.fi>, m...@iki.fi says...
>Now I realize that MIPS means "meaningless instructions/second," but
>since I could not find SPEC95 marks for a 370 :) I want to use them.

>This gives a ratio of I/O bandwidth to MIPS of:
>  370:                about 3
>  Modern Pentium III: about 0.04

>Or nearly a hundredfold difference.  The actual numbers do not matter as
>much as the HUGE shift, thus one would not expect to program these
>systems in the same way.  CPU cycles become increasingly meaningless and
>the systems become I/O-bound.  20 years ago, the programmer was faced
>with fast I/O and slow CPUs.

>I am all for working on algorithms to make a program run clean.  But my
>point, not put well before, is that this ratio change means the
>programmer must think in new ways: e.g., don't worry about CPU speed,
>but about how fast an I/O can complete.  In the old days, on the 370,
>you would worry about the CPU and let the I/O complete itself.

Hi!  This is interesting!
There are a few more points to be made.  Data sizes have increased even
less than I/O speed -- well, it depends, but I'm sure you get my point.
I bought a state-of-the-art Compaq laptop nearly 2 years ago, and it has
Win95, NT 4.0, Oracle 7.3, InterBase 5.6, D1, D3 c/s, D4 c/s and MS
Office installed.  It still hangs on pretty well, and it'll be around for
another couple of years, I believe.  If so, that's new to me: using a
computer for 4 years.  The bright programmer heads around the world just
haven't been able to keep up the resource-consumption increase ratio.

a) Now, on this machine I have 3 databases stored, belonging to 3
different companies.  They add up to somewhere around 2 GB.  If you went
15 years back, the same databases would have called for huge servers;
today I just copy them onto my laptop for development purposes.  I would
also guess that much of the increase in database size has to do with poor
deletion functionality in the companies' applications -- in these
particular cases, I actually *know* it ;-) .

b) About the only performance issues in Pascal code, in my line of
programming, relate to displaying data.  Building tree structures with
custom drawing, icons, and changing colors & fonts can very easily be too
slow.  And it's still not possible, for example, to use variants in all
parts of an application, because of the vast overhead.

c) Network I/O is a bottleneck, too.  And as people are crying for
Internet-enabled applications, efficiency is getting even more important.
About the only reason people are frustrated about the relatively large
.exe files from Delphi apps is download times.  No one cares about a few
hundred KB of RAM, actually, but a minute or five of download time --
yes!

When you look at database servers, there is remarkably little going on
here.  As you said, it being all about I/O, there's not that much
difference from 10 years ago -- except that my old laptop is an excellent
database-server platform today.

Finally, you could add that a minimum of performance care still has to be
taken.  Writing thousands of lines of poor code may mean trouble even for
non-critical applications.  Discussions like whether or not to use
aligned record fields are getting more and more obscure, though...

What are the performance issues 10 years from now?  Maybe more about the
programmer's performance than the program's performance...

--
Bjoerge Saether
Consultant / Developer
Asker, Norway
bsaether.removet...@online.no (remove the obvious)
