Artificial Ignorance

For a few years back in the early '80s I fell prey to information automation fascination. I managed an IT department transitioning first from a basic accounting system managed by an external service bureau to a batch inventory control system, then to an order processing and manufacturing control system running on a succession of minicomputers with names like DEC 11/70 and HP 3000. If you recognize the names of these systems, or if you are familiar with RPG, assembler, COBOL or FORTRAN, then you too may be an old lean dude. The hardware of that decade was slow, flimsy and subject to frequent crashes; the term "user-friendly" as applied to the user interface had not yet been invented.

Today, by comparison, I regularly carry around in my coat pocket a thumb drive with a million times the storage capacity and one thousand times the speed of what was available to my entire company in 1980. And that's puny compared to the massively parallel processing power available to businesses today. Today's supercomputers pile up so much logged data that decision rules under the heading of "artificial ignorance" have been created to intentionally ignore data that has been deemed (by someone) to be insignificant. Amazing!

What’s more amazing, however, is that most of the software running on today’s super machines follows essentially the same network-scheduling model we were using in 1980. Call it MRP or ERP, it may be zippier and have more tentacles today than it did then, but the deterministic model that assumes we can know what to produce today based upon a forecast created weeks or months ago is still alive and well. Shigeo Shingo called this model “speculative production.” I call it computerized fortune telling. If production lead times are very long, the fact that we can now run that forecasting model one thousand times faster doesn’t improve its efficacy.

If we add to this model of forecasting and back scheduling a standard cost accounting system whose operating assumptions go back a century, then we have created a model that systematically optimizes local efficiency to four decimal places as it pyramids inventories. Eli Goldratt used to call these calculations “precisely wrong.”
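To make the "speculative production" critique concrete, here is a minimal sketch (with hypothetical numbers, not from any real system) of what a forecast-driven build plan does when actual demand drifts: production is released against a guess made weeks earlier, so on-hand inventory pyramids when the forecast runs high and shortages appear when it runs low.

```python
# Hypothetical illustration of forecast-driven ("speculative") production.
# Production for each week is committed to the forecast made long in advance;
# actual demand is whatever customers really ordered.

forecast = {"week_1": 100, "week_2": 120, "week_3": 110}  # planner's guess
actual = {"week_1": 80, "week_2": 95, "week_3": 140}      # real demand

inventory = 0
for week in forecast:
    produced = forecast[week]                  # build to forecast, not demand
    sold = min(actual[week], inventory + produced)  # can only ship what exists
    inventory += produced - sold               # leftovers pile up as inventory
    print(f"{week}: produced {produced}, sold {sold}, on-hand {inventory}")
```

Running the sketch, on-hand stock swells to 45 units by week 2 and then nearly empties in week 3 when demand overshoots the forecast: both failure modes of the deterministic model, regardless of how fast the computer reruns it.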

I was at a company several weeks ago that is in the process of replacing a 1980’s MRP system with a later model ERP system. “We’ll be able to allocate our parts for specific orders,” the materials manager, Bob, explained to me. “Hmm,” I thought, “why is that a good thing?”

Bob continued, "We'll have real-time data." I reflected, "What does that mean? At best he'll have a rear-view mirror. He'll be reading yesterday's news." Assuming the transactions have been completed correctly and in a timely fashion, Bob will still only know the last place the material has been. Is it still in department A or is it in transit? Or has it arrived in department B, but not yet been transacted? This out-of-phase situation causes many a supervisor to chase down either parts or transactions to enable production.

"What happened to the pull system you were implementing last year?" I asked.

“We’ve had to put our continuous improvement activities on hold until we go live,” Bob apologized, “but things will run much smoother once the new system is completely rolled out. We’re discussing an electronic kanban – going paperless.”

And this is where I cringe. I know that it’s been months since the continuous improvement effort was mothballed in order to redeploy resources to the ERP implementation. And I also know that Bob will likely have many more reasons to postpone CI efforts once they do go live. There will be ugly discoveries regarding the differences in rules and assumptions between the new and legacy systems. Material will be over-planned to compensate for shortages arising from start-up misunderstandings. Overtime will be rampant to catch up late deliveries.

Pardon me for sounding cynical. I've witnessed it too many times. In the last three months alone, I've heard similar stories from nine different organizations, large and small. Immense resources are consumed to install hardware and software that runs counter to the objectives of improvement efforts. Thousands of resource hours are spilled into the abyss of information automation with a promise of productivity improvement – hours that would have been far better spent on simplifying or eliminating questionable business processes. But nobody wants to talk about it publicly. One executive confided recently, "We've spent too much money to turn this off now."

In 1976, Joseph Weizenbaum, one of the early leaders in the field of artificial intelligence, warned in his landmark book, Computer Power and Human Reason (on Amazon for $.01), that while computers can make decisions based upon rules, only humans should make choices. I worry that with each step change in computer power, human reason takes a step back. Dr. Weizenbaum foresaw the era we now live in, where choice is reduced to a set of rules that hide beneath the legitimacy of the "system." Ironically, this system, built up of nothing more than 0's and 1's and once described by Weizenbaum as the "universal machine" because it could be programmed to do anything, has on the contrary become the biggest monument in any organization. Today's popular ERP systems, with more than a quarter-billion lines of code, have too often become the tails that wag the dogs.

Maybe I’m just old school, but it seems that thirty-five years after my first love affair with computerization I’m still feeling jilted. How about you? Is your IT strategy supporting productivity and competitiveness or is it the tail that wags the dog? Tell me I’m wrong. Or share a story.

Happy New Year. :)

O.L.D.

2 thoughts on “Artificial Ignorance”

  1. So first the good news. At my last job in a small company I implemented a system automating our bills of materials and ordering. I actually entered the first 2,000 parts into the system myself, and also got a programmer to create an Excel macro to extract data from our Solid Edge library; I massaged and corrected it and then wrote it all back into about 4,000 parts in that system. Once we had that critical mass we rolled it out to everyone in engineering and purchasing. Every BOM for a year or so had a dozen parts without part numbers, and when it got to purchasing there were a dozen more that were missing prices, vendors, etc. But that was manageable. When I left last year we had gone past the 12,000-parts mark, and the time from creating a bill of materials for an entire job to getting it all ordered had gone from two weeks in purchasing to one day. The number of wrong parts ordered dropped to a level so low we stopped tracking how many were ordered wrong; it was now just an occasional thing, and once fixed it stayed fixed. We spent very little money to do this, less than $10,000 with 10 seats of software.

    Now for a chuckle.
    The company recently moved from 8,000 square feet in two locations 10 miles apart to one location of about 21,000 square feet. In the old 6,000 sq. ft. shop all the materials were stored upstairs on a mezzanine. So 20 feet from where the products were assembled was a staircase that you had to carry everything up and down. That was very tiring.

    A few days ago I visited, as I now work for their rep firm in Baltimore. A tour of the 16,000 sq. ft. shop showed they now had a dedicated receiving area and a dedicated inventory area, at opposite ends of a building 200 feet long! So to put away inventory you had to cross through the shop to the far end of the building and back again. They put the old mezzanine back up too. They put all the most-used small materials on top of the mezzanine with the idea that small parts would be easier to carry up and down! So what used to be a 20 ft. trip plus a staircase to get parts is now a 150 ft. trip plus a staircase to get parts! The piles of old, obsolete or wrong parts are sitting in large boxes on pallets where they consume all the floor space. I would really love to see the impact of these decisions on their productivity!

  2. Perfect timing Bruce. Our old system is no longer supported and crashes often. We made a very careful investment in a new ERP system (necessary evil?) yet we must make sure the tail does not wag the dog. It’s up to us to implement Lean IT so that it is not “mean” to our internal and external customers.
