
Traditional Lean?

Twice in the last month I’ve heard the phrase “Traditional Lean” used in public presentations. In neither case did the presenter explain the expression, but one displayed a slide with a Venn diagram showing the overlap between Lean and Six Sigma. I suppose this means he defined Traditional Lean as Lean plus something else, in his case Six Sigma. For both presenters, however, the word “traditional” implied passé. They were moving on. Lean, or Lean-Sigma if you prefer that definition of “traditional,” was a dated process, in need of enhancement or even replacement. In the 1980’s, I referred to Lean and Six Sigma respectively as TPS and QC Tools. Each was derived in part from W. Edwards Deming’s post-World War II reconstruction efforts in Japan. In that pre-Lean era, there was little literature about TPS and few consultants. Being one of those folks old enough to remember when there was neither Lean nor Six Sigma – at least in name – I find this latest buzzification of Lean to “traditional Lean” amusing. It’s certainly not the first time we’ve been encouraged to employ alternative approaches to productivity improvement:

By 1985, when American manufacturing was reeling from declining market dominance, an HBR article entitled “MRP, JIT, OPT, FMS?” began what has since become a veritable alphabet soup of acronyms, each describing a supposed elixir for the problems of rising costs and disappearing customers. The article is worth a read, if only from the standpoint of showing how focused we were at the time on better methods for scheduling production and reducing inventories.

JIT was the popular surrogate for TPS in the mid-80’s, often juxtaposed with MRP (Material Requirements Planning). Back then, the pejorative adjective “traditional” was used to describe the push production of which MRP, a network scheduling system, was a part; we called it “traditional manufacturing.”

FMS, flexible manufacturing systems, was a techno alternative to TPS that proposed superior flexibility through the use of robotics and automation to move material and information through the factory. In a notorious example of FMS, General Motors attempted in the 1980’s to create fully automated facilities. All told, GM spent 90 billion (yes, billion) dollars to “modernize” its operations. While TPS sought to elevate employees, GM tried to automate them out of the picture. Ultimately, GM’s lights-out, people-less plants were deemed unworkable and were mothballed.

As for OPT, Optimized Production Technology, this was another alternative to TPS, developed by Eliyahu Goldratt (author of “The Goal”): a computer-assisted queuing model that scheduled around bottlenecks. After taking a couple of Dr. Goldratt’s classes in the late 80’s, I decided that if I could actually define all of the parameters needed to run OPT, I would know so much about my plant that I wouldn’t need any software to schedule around bottlenecks. I’ve always been an Eli Goldratt fan, but this particular TPS alternative, like MRP and FMS, was yet another software/technology solution to a problem that went far deeper than automation of information and material flow. Interestingly, all of the alternatives to TPS noted in the HBR article proposed information and production automation as viable solutions to flagging productivity and competitiveness. Even TPS was thought to be only a scheduling model.
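For readers who never met OPT, here is a minimal sketch of the core idea it automated: pacing the whole line to its slowest operation. The station names and rates below are hypothetical, and this is only the Theory of Constraints intuition, not Goldratt’s actual OPT software:

```python
# Toy illustration of the idea behind bottleneck scheduling.
# Stations and rates are hypothetical; this is the Theory of
# Constraints intuition, not the proprietary OPT algorithm.

stations = {          # units per hour each operation can process
    "cut": 60,
    "weld": 25,       # slowest operation: the bottleneck
    "paint": 40,
    "assemble": 55,
}

# The line can never ship faster than its slowest operation, so
# material release is paced to the bottleneck's rate rather than
# to each station's local capacity.
bottleneck, rate = min(stations.items(), key=lambda kv: kv[1])
print(f"Pace releases to '{bottleneck}' at {rate} units/hr")
```

The catch, of course, is hidden in that little dictionary: the model is only as good as the rates and parameters you feed it, which was exactly my objection after Dr. Goldratt’s classes.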

In the 1990’s, along with renaming TPS to Lean, came more techno solutions: Agile manufacturing, boasted Lee Iacocca at Chrysler, would “leapfrog” Lean. Agile, described at the time as the next step “beyond Lean,” promised faster response and greater flexibility through a combination of IT integration and physical re-organization. It was big on concept but light on details. At the same time, Jack Welch, CEO of General Electric, popularized yet another improvement method, 6σ (Six Sigma), presented at first as an alternative to Lean and later, through the marketing genius of a large consulting firm, mashed into “LeanSigma.” The mid-90’s also brought us business process reengineering, BPR, an IT-driven methodology aimed at radical process change. This top-down change method had a de-humanizing impact on organizations, a condition that many understand today is a major deterrent to continuous improvement.

And all of these software-assisted tools can be rolled into one mother acronym, ERP, which is MRP plus all of the above except TPS. Readers of one of my recent posts will recall ERP referenced as “the granddaddy of excuses” for not spending time on continuous improvement. :)

So what does all of this have to do with “traditional Lean?” Here’s my take: Over the last three decades, organizations have spent too much time searching for technical alternatives or supplements to Lean without first understanding Lean basics. I’ve listed just a few of these experiments: MRP, FMS, OPT, Agile, BPR, 6σ, ERP. Perhaps you can add some others. While there may be merit to some of the thinking behind each of these concepts, they have unfortunately diverted attention and resources away from the hard work of learning people-centric TPS. I think “traditional Lean” is TPS. It’s what Lean was before we consultants got our mitts into it. Call me a TPS ideologue. I’m good with that. Do you agree or disagree? Share a thought.

O.L.D.

And don’t forget:

  1. Today is the last day to get the early bird price on registration for The Northeast Lean Conference coming October 4-5 in Worcester, MA. Visit www.NortheastLeanConference.org to learn much more.
  2. We’re still accepting list items for Kanban misconceptions from my last blog post and will randomly select a winner for one free registration for the conference on Friday of this week. See Eye of the Beholder to add your comment.
  3. GBMP’s calendar of Shingo Institute workshops is jam packed through October. Check it out here and join us for a workshop or two soon.

Artificial Ignorance

For a few years back in the early ‘80’s I fell prey to information automation fascination. I managed an IT department transitioning from a basic accounting system managed by an external service bureau, first to a batch inventory control system and then to an order processing and manufacturing control system running on a succession of minicomputers with names like DEC 11/70 and HP3000. If you recognize the names of these systems, or if you are familiar with RPG, assembler, COBOL or FORTRAN, then you too may be an old lean dude. The hardware of that decade was slow, flimsy and subject to frequent crashes; the term “user-friendly” as applied to the user interface had not yet been invented.

Today, by comparison, I regularly carry around in my coat pocket a thumb drive with a million times the storage capacity and one thousand times the speed of what was available to my entire company in 1980. And that’s puny compared to the multi-parallel processing power available to businesses today. Today’s supercomputers pile up so much logged data that decision rules under the heading of “artificial ignorance” have been created to intentionally ignore data that has been deemed (by someone) to be insignificant. Amazing!

What’s more amazing, however, is that most of the software running on today’s super machines follows essentially the same network-scheduling model we were using in 1980. Call it MRP or ERP, it may be zippier and have more tentacles today than it did then, but the deterministic model that assumes we can know what to produce today based upon a forecast created weeks or months ago is still alive and well. Shigeo Shingo called this model “speculative production.” I call it computerized fortune telling. If production lead times are very long, the fact that we can now run that forecasting model one thousand times faster doesn’t improve its efficacy.
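To make the point concrete, here is a stripped-down sketch of that deterministic back-scheduling logic. The product stages, lead times, and forecast below are all hypothetical, and real MRP adds lot sizing, netting against on-hand inventory, and much more, but the speculative core is the same:

```python
# Bare-bones MRP-style back-scheduling ("speculative production").
# All quantities, stages, and lead times are hypothetical.
from datetime import date, timedelta

forecast_due = date(2016, 3, 1)   # demand date guessed months in advance
forecast_qty = 500                # units the forecast says we'll need

# Stages listed from shipment backward, with lead time in days.
stages = [
    ("final assembly", 10),
    ("fabrication", 15),
    ("raw material purchase", 30),
]

# Offset each stage's release date back from the forecast due date.
release = forecast_due
for stage, lead_days in stages:
    release -= timedelta(days=lead_days)
    print(f"{release}: release {stage} for {forecast_qty} units")

# The purchase order fires some 55 days before the forecast date.
# Running this math a thousand times faster doesn't make the
# forecast any more likely to come true.
```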

If we add to this model of forecasting and back-scheduling a standard cost accounting system whose operating assumptions go back a century, then we have created a model that systematically optimizes local efficiency to four decimal places as it pyramids inventories. Eli Goldratt used to call these calculations “precisely wrong.”
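A toy calculation shows how that happens. The numbers below are hypothetical, but the pattern is the one Goldratt was mocking: an efficiency figure computed to four decimal places in which the inventory it creates never appears:

```python
# "Precisely wrong": a local-efficiency metric computed to four
# decimal places. All numbers are hypothetical.

std_hours_per_unit = 0.25     # standard allows 15 minutes per unit
actual_hours = 100.0          # hours the work center actually ran
units_produced = 450          # machine run flat out to "earn" hours
units_needed = 300            # what downstream actually consumed

earned_hours = units_produced * std_hours_per_unit
efficiency = earned_hours / actual_hours
print(f"Reported efficiency: {efficiency:.4%}")   # 112.5000% -- a hero!

# The metric rewards overproduction; the 150 excess units piling up
# as inventory show up nowhere in the calculation.
print(f"Excess inventory: {units_produced - units_needed} units")
```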

I was at a company several weeks ago that is in the process of replacing a 1980’s MRP system with a later model ERP system. “We’ll be able to allocate our parts for specific orders,” the materials manager, Bob, explained to me. “Hmm,” I thought, “why is that a good thing?”

Bob continued, “We’ll have real-time data.” I reflected, “What does that mean? At best he’ll have a rear-view mirror. He’ll be reading yesterday’s news.” Assuming the transactions have been completed correctly and in a timely fashion, Bob will still only know the last place the material has been. Is it still in department A or is it in transit? Or has it arrived in department B but not yet been transacted? This out-of-phase situation causes many a supervisor to chase down either parts or transactions to enable production.
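In code, that “real-time” view amounts to something like the sketch below, where the records and timestamps are hypothetical: the system can only echo the last posted transaction, never the lot’s physical whereabouts.

```python
# Why "real-time" ERP data is a rear-view mirror: the system knows
# only the last *posted* transaction, not where material really is.
# Records and timestamps are hypothetical.

last_posted_move = {
    "lot-1042": {"location": "Dept A", "posted_at": "08:15"},
}

# Physical reality on the floor (invisible to the system until the
# next transaction is keyed in): lot-1042 left Dept A at 09:30 and
# is sitting on a cart between departments.

def erp_location(lot_id: str) -> str:
    """Return the location the system believes, i.e. yesterday's news."""
    return last_posted_move[lot_id]["location"]

print(erp_location("lot-1042"))   # -> "Dept A"
```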

“What happened to the pull system you were implementing last year?” I asked.

“We’ve had to put our continuous improvement activities on hold until we go live,” Bob apologized, “but things will run much smoother once the new system is completely rolled out. We’re discussing an electronic kanban – going paperless.”

And this is where I cringe. I know that it’s been months since the continuous improvement effort was mothballed in order to redeploy resources to the ERP implementation. And I also know that Bob will likely have many more reasons to postpone CI efforts once they do go live. There will be ugly discoveries regarding the differences in rules and assumptions between the new and legacy systems. Material will be over-planned to compensate for shortages arising from start-up misunderstandings. Overtime will be rampant to catch up on late deliveries.

Pardon me for sounding cynical. I’ve witnessed it too many times. In the last three months alone, I’ve heard similar stories from nine different organizations, large and small. Immense resources are consumed to install hardware and software that runs counter to the objectives of improvement efforts. Thousands of resource hours are spilled into the abyss of information automation with a promise of productivity improvement – hours that would have been far better spent on simplifying or eliminating questionable business processes. But nobody wants to talk about it publicly. One executive confided recently, “We’ve spent too much money to turn this off now.”

In 1976, Joe Weizenbaum, one of the early leaders in the field of artificial intelligence, warned in his landmark book, Computer Power and Human Reason (on Amazon for $.01), that while computers can make decisions based upon rules, only humans should make choices. I worry that with each step change in computer power, human reason takes a step back. Dr. Weizenbaum foresaw the era we now live in, where choice is reduced to a set of rules that hide beneath the legitimacy of the “system.” Ironically, this system, built up of nothing more than 0’s and 1’s and once described by Weizenbaum as the “universal machine” because it could be programmed to do anything, has on the contrary become the most monolithic process in any organization. Today’s popular ERP systems, with more than a quarter-billion lines of code, have too often become the tails that wag the dogs.

Maybe I’m just old school, but it seems that thirty-five years after my first love affair with computerization I’m still feeling jilted. How about you? Is your IT strategy supporting productivity and competitiveness or is it the tail that wags the dog? Tell me I’m wrong. Or share a story.

Happy New Year. :)

O.L.D.