Wednesday, September 23, 2009

LPAR information

So I thought LPARs started back in the 1980s. Apparently they started back in the 1960s, who knew :)

Here is an email I received from a reader of my blog:

Jerry,

Here’s the response I got back from the mainframe folks at IBM. Pretty interesting history… virtualization/LPARs can be traced back to the 1960s!

In the early 1960s, IBM used an internal tool called CP-67 to create virtual systems to test S/370, a predecessor of z/OS. The tool eventually became a commercial product in 1972, VM/370. After many iterations of VM/370, in the early 1980s, Poughkeepsie baked the assembler code of VM into "machine code" (we now call it microcode) so that it ran much closer to the hardware.

This was to reduce the already single-digit overhead of VM and to create LPARs at the machine level as opposed to the OS level. So the LPAR code in z and p today can trace its genealogy to CP-67. If I equate virtual machines to LPARs, then the use of "LPAR" really started in the 1960s, when CP-67 was used internally at IBM as a test tool. Commercially, LPAR started in 1972 when VM/370 was announced and LPARs were created at the OS level. LPAR at the machine hardware level started in the late 1980s when Poughkeepsie shipped the 3090, which had the ability to configure a physical box into multiple LPARs.

--Removed PII data as you never know who will read this!

Thursday, September 17, 2009

Datacenters = Today's Mainframe?

I have often spoken of how I love the green screen! Nostalgia aside, a mainframe computer was (and still is) a scalable solution: if you needed to run more programs, you carved out another logical partition, also known as an LPAR. LPARs were first used on old systems like the IBM ESA/390, circa the mid-1980s. Eventually you had to add more processors, memory, and disk space to your mainframe; in other words, you bought it by the drink!

I often hear this phrase: buy it by the drink. I often wonder what it really means. Looking to the past, I know this was accomplished with mainframes. Every CIO that I know of wants to employ the "buy it by the drink" idea. The question is: how do their direct reports develop a solution and execute it? I think this is going to require some thinking out of the box. Cloud computing or grid computing is a solution for a "buy it by the drink" requirement.

The boundary of a system should be the datacenter; inside, it is chock-full of storage, processors, and memory. Google takes a 62U cabinet full of Linux servers and operates them as one logical system. Imagine rows and rows of these machines operating as one supercomputer, one system. Google has taken the datacenter and transformed it into one computing system. Furthermore, Google has built multiple datacenters that load-balance and provide disaster recovery for its computing enterprise.
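To make the "one logical system" idea a little more concrete, here is a minimal sketch in Python of a toy resource pool that places work across machines in multiple datacenters and skips a site that is down. All of the names (Pool, Datacenter, dc0, node0) are made up for illustration; this is not a description of Google's actual stack.

# Illustrative sketch only: many physical boxes treated as one logical pool,
# with simple failover to a healthy site (the "disaster recovery" part).
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    cores: int
    free_cores: int

@dataclass
class Datacenter:
    name: str
    healthy: bool = True
    machines: list = field(default_factory=list)

class Pool:
    """One logical system assembled from many physical machines."""
    def __init__(self, datacenters):
        self.datacenters = datacenters

    def place(self, job_cores):
        # Prefer any healthy datacenter; skip sites that are down.
        for dc in self.datacenters:
            if not dc.healthy:
                continue
            for m in dc.machines:
                if m.free_cores >= job_cores:
                    m.free_cores -= job_cores
                    return f"{dc.name}/{m.name}"
        raise RuntimeError("no capacity available")

# Two small "datacenters", each with four 8-core nodes.
dcs = [Datacenter(f"dc{i}", machines=[Machine(f"node{j}", 8, 8) for j in range(4)])
       for i in range(2)]
pool = Pool(dcs)
dcs[0].healthy = False    # simulate losing a site
print(pool.place(4))      # the job lands in dc1 instead

The caller only asks the pool for capacity; it never names a particular box, which is the whole point of treating the datacenter as the system.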

When you step back and look at this concept, it seems very simple. The challenge is that many CIO shops are stuck in a paradigm. The paradigm is as follows:
1. Outsource the datacenters
2. Dictate architectural standards, i.e., no virtualization, separate systems, etc.
Imagine going to Outback and demanding to see your food getting cooked, or telling the cook what pot to cook it in! Bottom line: keep out of the kitchen and let the service provider serve up your apps.

In order for cloud computing to work, the vendor needs to deliver a service model. They need to deliver it like a menu: for instance, the Outback Special costs X, and an "add-on" lobster tail costs X more (I think I am hungry for Outback). From my point of view there is a ton of money to be made with cloud computing; Google is already doing it. When someone figures out how to deliver this concept to the Federal Government, they (and their company) will become very rich.
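Here is a minimal sketch of what such a menu might look like, assuming made-up service names and prices; the point is that the customer orders line items and add-ons, buying it by the drink, without ever seeing the hardware underneath.

# Illustrative sketch only: a toy service "menu" with base prices and add-ons.
# Item names and dollar figures are invented for the example.
MENU = {
    "small-app-hosting": {"base": 500.00,
                          "add_ons": {"extra-storage-tb": 100.00,
                                      "dr-replication": 250.00}},
    "large-app-hosting": {"base": 2000.00,
                          "add_ons": {"extra-storage-tb": 100.00,
                                      "dedicated-lpar": 750.00}},
}

def monthly_bill(order):
    """order = [(service, [add_on, ...]), ...]; returns the total monthly charge."""
    total = 0.0
    for service, add_ons in order:
        item = MENU[service]
        total += item["base"]
        for a in add_ons:
            total += item["add_ons"][a]
    return total

# One large app with two add-ons: the agency sees line-item prices,
# not the servers, switches, or storage behind them.
print(monthly_bill([("large-app-hosting", ["extra-storage-tb", "dedicated-lpar"])]))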
Here is the paradigm that needs to be broken:
1. Who cares about the hardware (servers, switches, storage)?
2. Who cares about virtual versus physical hardware?
3. Yes, the cloud can be secured to FISMA standards.

Build it and they will come; or, if you are in the government, bid it and they will come!