Archive for the ‘Risk Management’ Category

In developing a supply chain network design there are many criteria to consider, including the impact of the facility choices on
• cost of running the system,
• current and future customer service,
• ability to respond to changes in the market, and
• risk of costly intangible events in the future,
to name a few.

Frequently we use models to estimate revenues / costs for a given facility footprint, looking at costs of production, transportation, raw materials and other relevant components. We also sometimes constrain the models to ensure that other criteria are addressed – a constraint requiring that DCs be placed so that 80% of demand be within a day’s drive of a facility, for instance, might be a proxy for “good customer service”.
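
To make that concrete, here is a minimal sketch of how such a service-level constraint might be expressed in a network design model, using the open-source PuLP library. Everything in it – the sites, demands, fixed costs and drive-time flags – is hypothetical, and a real model would also carry production and transportation costs:

```python
# Minimal sketch of a network design model with a service-level
# constraint, using the open-source PuLP library. All site names,
# demands, costs, and drive-time flags below are hypothetical.
import pulp

sites = ["Atlanta", "Dallas", "Reno"]
markets = {"Southeast": 500, "Midwest": 300, "West": 200}  # demand units
fixed_cost = {"Atlanta": 120000, "Dallas": 100000, "Reno": 90000}
# 1 if the market is within a day's drive of the site (hypothetical)
within_day_drive = {
    ("Atlanta", "Southeast"): 1, ("Atlanta", "Midwest"): 0, ("Atlanta", "West"): 0,
    ("Dallas", "Southeast"): 0,  ("Dallas", "Midwest"): 1,  ("Dallas", "West"): 0,
    ("Reno", "Southeast"): 0,    ("Reno", "Midwest"): 0,    ("Reno", "West"): 1,
}

model = pulp.LpProblem("dc_location", pulp.LpMinimize)
open_dc = pulp.LpVariable.dicts("open", sites, cat="Binary")
# covered[m] = 1 if market m is within a day's drive of some open DC
covered = pulp.LpVariable.dicts("covered", list(markets), cat="Binary")

model += pulp.lpSum(fixed_cost[s] * open_dc[s] for s in sites)

for m in markets:
    model += covered[m] <= pulp.lpSum(
        within_day_drive[(s, m)] * open_dc[s] for s in sites)

# Proxy for "good customer service": at least 80% of demand
# must be within a day's drive of an open DC.
total_demand = sum(markets.values())
model += pulp.lpSum(markets[m] * covered[m] for m in markets) >= 0.8 * total_demand

model.solve()
print([s for s in sites if open_dc[s].value() == 1])
```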

Some intangibles, such as the political risk associated with establishing or maintaining a facility in a particular location, are difficult to measure and include in a trade-off with model cost estimates. Another intangible of great interest to many companies – and one that has been difficult to make tangible – is water risk. Will water be available in the required quantities in the future, and if so, will the cost allow the company to remain competitive? For many industry groups water is the most basic of raw materials involved in production, and it is important to trade off water risk against other concerns.

As I wrote in a previous blog published in this forum,

There are several risks that all companies face, to varying degrees, as global water consumption increases, including
• Physical supply risk: will fresh water always be available in the required quantities for your operations?
• Corporate image risk: your corporate image will likely take a hit if you are called out as a “polluter” or “water waster”
• Governmental interference risk: governmental bodies are becoming increasingly interested in water consumption, and can impose regulations that can be difficult to deal with
• Profit risk: all of the above risks can translate to a deterioration of your bottom line.

The challenge has been: how to quantify such risks so that they can be used to compare network design options.

Recently a post entitled “How Much is Water Worth” on LinkedIn highlighted a website developed by Ecolab that offers users an approach to monetization of water risks. This website allows the user to enter information about their current or potential supply chain footprint – such as locations of facilities and current or planned water consumption – and the website combines this information with internal information about projected GDP growth for the country of interest, the political climate and other factors to calculate a projected risk-adjusted cost of water over the time horizon of interest.
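
The post does not disclose the math behind the website, but the notion of a risk-adjusted cost is easy to illustrate. The sketch below is purely hypothetical – the base price, growth rate and scarcity premium are invented for the example and are not the Water Risk Monetizer's actual methodology:

```python
# Hypothetical illustration of a risk-adjusted water cost projection.
# This is NOT the Water Risk Monetizer's methodology; the base price,
# growth rate, and scarcity premium are invented for this sketch.
base_price = 2.00        # $/kiloliter at the facility today (assumed)
demand_growth = 0.04     # annual escalation from regional demand growth (assumed)
scarcity_premium = 0.10  # extra annual escalation from local risk factors (assumed)

for year in range(1, 11):
    risk_adjusted = base_price * (1 + demand_growth + scarcity_premium) ** year
    print(f"Year {year:2d}: ${risk_adjusted:.2f} per kiloliter")
```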

This capability, in conjunction with traditional supply chain modeling methods, gives the planner a tool that can be used to develop a more robust set of information that can be used in decision-making.
For more details visit the website waterriskmonetizer.com

SCMR

This month, Supply Chain Management Review is featuring a 3-part series by Dr. Alan Kosansky and Michael Taus of Profit Point entitled Managing for Catastrophes: Building a Resilient Supply Chain. In this article we discuss the five key elements to building a resilient supply chain and the steps you can take today to improve your preparedness for the next catastrophic disruption.

Once a futuristic ideal, the post-industrial, globally-interconnected economy has arrived. With it have come countless benefits, including unprecedented levels of international trade, lean supply chains that deliver low-cost consumer goods, and an improved standard of living in many developing countries. Along with these advances, this interdependent global economy has amplified collective exposure to catastrophic events. At the epicenter of the global economy is a series of interconnected supply chains whose core function is to continue to supply the world’s population with essential goods, whether or not a catastrophe strikes.

In the last several years, a number of man-made and natural events have led to significant disruption within supply chains. Hurricane Sandy closed shipping lanes in the northeastern U.S., triggering the worst fuel shortages since the 1970s and incurring associated costs exceeding $70 billion. The 2011 earthquake and tsunami that struck the coast of Japan, home to the world’s third-largest economy and almost nine percent of global GDP, caused nearly $300 billion in damages. The catastrophic impact included significant impairment of country-wide infrastructure and had a ripple effect on global supply chains that were dependent on Japanese manufacturing and transportation lanes. Due to interconnected supply chains across a global economy, persistent disruption has become the new norm.

You can find all three parts on the SCMR website here: Part 1, Part 2 and Part 3.

Are you ready to build a resilient supply chain?
Call us at (866) 347-1130 or contact us here.

What kind of risks are you prepared for?

As a supply chain manager, you have profound control over the operations of your business. But that control is not without limits: Mother Nature can quickly and capriciously halt even the smoothest operation, and man-made events can seemingly conspire to prevent goods from crossing borders, navigating traffic, or being produced and delivered on time. How can you predict where and when your supply chain may fall prey to unforeseen black swan events?

“Prediction is very difficult, especially about the future.” (Niels Bohr, Danish physicist)  But there are likely some future risks that your stockholders are thinking about and that you might be expected to have prepared for. The post-event second-guessing phrase, “You should have known, or at least prepared for it,” has been heard in many corporate supply chain offices after recent supply-chain-breaking cataclysmic events: tsunami, hurricane, earthquake, you name it.

  • What will happen to your supply chain if oil reaches $300 / barrel? What lanes will no longer be affordable, or even available?
  • What will happen if sea level rises, causing ports to close, highways to flood, and rail lines to disappear?
  • What will happen if the cost of a ton of CO2 is set at $50?
  • What will happen if another conflict arises in the oil-producing countries?
  • What will happen if China’s economy really takes off?
  • What will happen if China’s economy really slows down?
  • What will happen if the US faces a serious drought in the Midwest?

What will happen if… you name it; it is lurking out there, waiting to have a potentially dramatic effect on your supply chain.

As a supply chain manager, your shareholders expect you to look at the effect on supply, transportation, manufacturing, and demand. The effect may be felt in scarcity, cost, availability, capacity, government controls, taxes, customer preference, and other factors.

Do you have a model of your supply chain that would allow you to run what-if scenarios to see how your supply chain and your business would fare in the face of these black swan events?
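
If you do, the what-if exercise can be as simple as sweeping that model across a set of scenarios. In the sketch below, solve_network is a hypothetical stand-in for re-solving your own network model, and the scenario parameters echo the list above:

```python
# Sketch of a what-if scenario sweep. `solve_network` is a hypothetical
# stand-in for re-solving your own supply chain model; its internals
# here are placeholder arithmetic, not a real network solve.
def solve_network(oil_price_per_bbl, co2_cost_per_ton, demand_factor):
    freight = 1.5e6 * (oil_price_per_bbl / 100) + 2.0e5 * (co2_cost_per_ton / 50)
    production = 8.0e6 * demand_factor
    return {"total_cost": freight + production}

scenarios = {
    "baseline":       dict(oil_price_per_bbl=100, co2_cost_per_ton=0,  demand_factor=1.00),
    "oil_at_300":     dict(oil_price_per_bbl=300, co2_cost_per_ton=0,  demand_factor=1.00),
    "carbon_at_50":   dict(oil_price_per_bbl=100, co2_cost_per_ton=50, demand_factor=1.00),
    "china_slowdown": dict(oil_price_per_bbl=80,  co2_cost_per_ton=0,  demand_factor=0.85),
}

for name, params in scenarios.items():
    result = solve_network(**params)
    print(f"{name:14s} total cost: ${result['total_cost']:,.0f}")
```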

Driving toward a robust and fault-tolerant supply chain should be the goal of every supply chain manager, and a way to achieve that is to design it with disruption in mind. Understanding the role (and the cost) of dual sourcing critical components, diversified manufacturing and warehousing, risk-mitigating transportation contracting, on-shoring or off-shoring some manufacturing, environmental impacts, and customer preferences – just to begin the list – can be an overwhelming task. Yet there are tools and processes that can help with this, and if you want to be able to face the difficulties of the future with confidence, do not ignore them. The tools are about supply chain planning and modeling. The processes are about risk management and robust supply chain design. Profit Point helps companies all over the world address these and other issues to make some of the best running supply chains anywhere.

The future is coming, are you ready for it?

There is nothing like a bit of vacation to help with perspective.

Recently, I read about the San Diego Big Boom fireworks fiasco — when an elaborate Fourth of July fireworks display was spectacularly ruined after all 7,000 fireworks went off at the same time. If you haven’t seen the video, here is a link.

And I was reading an article in the local newspaper on the recent Higgs news: “Getting from Cape Cod to Higgs boson” – you can read it here.

And I was thinking about how hard it is to know something, really know it. The data collected at CERN when they smash those particle streams together must look a lot like the first video: a ton of activity, all in a short time, and a bunch of noise in that Big Data. Imagine having to look at the fireworks video and then determine the list of all the individual types of fireworks that went up… I guess that is similar to what the folks at CERN have to do to find the single firecracker that is the Higgs boson.

Sometimes we are faced with seemingly overwhelming tasks of finding that needle in the haystack.

In our business, we help companies look among potentially many millions of choices to find the best way of operating their supply chains. Yeah, I know it is not the Higgs boson. But it could be a way to recover from a devastating earthquake and tsunami that disrupted operations literally overnight. It could be the way to restore profitability to an ailing business in a contracting economy. It could be a way to reduce the greenhouse gas footprint by eliminating unneeded transportation, or to decrease water consumption in dry areas. It could be the best way to use assets and capital to expand for the long term. It could be a way to reduce waste by stocking what the customers want.

These ways of running the business, of running the supply chain, that make a real difference, are made possible by the vast amounts of data being collected by ERP systems all over the world, every day. Big Data like the point-of-sale info on each unit that is sold from a retailer. Big Data like actual transportation costs to move a unit from LA to Boston, or from Shanghai to LA. Big Data like the price elasticity of a product, or the number of products that can be stored in a certain warehouse. These data and many, many other data points are being collected every day and can be utilized to improve the operation of the business in nearly real time. In our experience, much of the potential of this vast collection of data is going to waste. The vastness of the Big Data can itself appear to be overwhelming. Too many fireworks at once.

Having the data is only part of the solution. Businesses are adopting systems to organize that data and make it available to their business users in data warehouses and other data cubes. Business users are learning to devour that data with great visualization tools like Tableau and pivot tables. They are looking for the trends or anomalies that will allow them to learn something about their operations. And some businesses are adopting more specialized tools to leverage that data into an automated way of looking deeper into the data. Optimization tools like our Profit Network, Profit Planner, or Profit Scheduler can process vast quantities of data to find the best way of configuring or operating the supply chain.
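
As a small illustration of that kind of exploration, the sketch below scans monthly point-of-sale data for outsized swings; the file and column names are hypothetical stand-ins for your own data warehouse extracts:

```python
# Sketch of mining point-of-sale data for trends and anomalies.
# The file and column names are hypothetical stand-ins.
import pandas as pd

sales = pd.read_csv("pos_transactions.csv")  # columns: date, region, sku, units
sales["date"] = pd.to_datetime(sales["date"])

# Monthly units by region -- the kind of pivot a planner might scan.
monthly = sales.pivot_table(index=sales["date"].dt.to_period("M"),
                            columns="region", values="units", aggfunc="sum")

# Flag month-over-month swings larger than 20% as worth a closer look.
swings = monthly.pct_change().abs() > 0.20
print(monthly[swings].dropna(how="all"))
```
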
So, while it is not the Higgs boson that we help people find, businesses do rely on us to make sense of a big bang of data and hopefully see some fireworks along the way.

I was sitting on the plane the other day and chatting with the guy in the next seat when I asked him why he happened to be traveling.  He was returning home from an SAP ERP software implementation training course.  When I followed up and asked him how it was going, I got the predictable eye roll and sigh before he said, “It was going OK.”  There are two things that were sad here.  First, the implementation was only “going OK” and second, that I had heard this same type of response from so many different people implementing big ERP that I was expecting his response before he made it.

So, why is it so predictable that the implementations of big ERP systems struggle?  I propose that one of the main reasons is that the implementation doesn’t focus enough on the operational decision-making that drives the company’s performance.

A high-level project history that I’ve heard from too many clients looks something like this:

  1. Blueprinting with wide participation from across the enterprise
  2. Implementation delays
    a. Data integrity is found to be an issue – more resources are focused here
    b. Transaction flow is found to be more complex than originally thought – more resources are focused here
    c. Project management notices the burn rate from both internal and external resources assigned to the project
  3. De-scoping of the project from the original blueprinting
    a. Reports are delayed
    b. Operational functionality is delayed
  4. Testing of transactional flows
  5. Go-live involves operational people at all levels frustrated because they can’t do their jobs

Unfortunately, the de-scoping phase seems to hit some of the key decision-makers in the supply chain, like plant schedulers, supply and demand planners, warehouse managers, dispatchers, buyers, etc. particularly hard, and it manifests in the chaos after go-live.  These are the people that make the daily bread and butter decisions that drive the company’s performance, but because of the de-scoping and the focus on transaction flow, they don’t have the information they need to make the decisions that they must make.  (It’s ironic that the original sale of these big ERP systems is made at the executive level as a way to better monitor the enterprise’s performance and produce information that will enable better decision-making.)

What then, would be a better way to implement an ERP system?  From my perspective, it’s all about decision-making.  Thus, the entire implementation plan should be developed around the decisions that need to be made at each level in the enterprise.  From blueprinting through the go-live testing plan, the question should be, “Does the user have the information in the form required, and the tools (both from the new ERP system and external tools that will still work properly when the new ERP system goes live), to make the necessary decision in a timely manner?”  Focusing on this question will drive user access, data accuracy, transaction flow, and all other elements of the configuration and implementation.  Why? Because the ERP system is supposed to be an enabler, and the only reasons to enter data into the system or to get data out are either to make a decision or to record the result of a decision.

Perhaps with that sort of a focus there will be a time when I’ll hear an implementation team member rave about how much easier it will be for decision-makers throughout the enterprise once the new system goes live.  I can only hope.

Change is hard.

Collapsed soufflé

So why do it? Why change when you can be the same?  If you have a well-worn recipe to make a great soufflé, you know that the risk of tampering with that recipe can result in the collapse of the soufflé. So why change what is already working?

In the businesses that I help, change comes for several reasons. It may be thrust upon the business from the outside – a change in the competitive landscape, for instance, or a new regulation.  It may come from some innovative source within the company looking for cost savings to increase profitability, or from a new process or product that increases productivity. Change can come from the top down, or from the bottom up. Change can come in a directed way, as part of a larger program, or organically as part of a larger cultural shift.  Change can come that makes your work easier, or harder, and may even eliminate a portion (or all) of the job that you were doing. Change can come to increase the bottom line or the top line. But primarily change comes to continue the adaptation of the company to the business environment.  Change is the response to the Darwinian selector for businesses.  Adapt or decline. Change is necessary.  It is clear to me from my experience that businesses need to change to stay relevant.

This may seem trite or trivial, but accepting that change is not only inevitable, but that it is good, is the shift in attitude that separates the best companies (and best employees) from the others.

So, you say: I see the need to change; it is not the change itself that is so difficult, but rather the way that it is inflicted upon us that makes it hard.  So, why does it have to be so hard?  Good question.

Effective managers know that change is necessary but hard. They are wary of making changes, and rightly so.  Most change projects fail. People generally just don’t like it.  Netflix is a great example.  Recently, Netflix separated their streaming movie service from their DVD rental business. After what I am sure must have been careful planning, they announced the change and formed Qwikster, the DVD rental site, and the response from the customer base was awful. As you likely know, Netflix, faced with the terrible reception from their customer base and stockholders, reversed their decision to separate streaming from DVDs. What was likely planned as a very important change fell dead. Dead, dead, dead. Change can be risky too.

If change is necessary, but hard and risky… how can you tame this unruly beast?

The secret of change is that it relies on three things: People, Process, and Technology. I name them in the order in which they are important.

People are the most important agents relative to change, since they are the ones who decide on the success or failure of the change. People decided that the Netflix change was dead. People decide all the time whether to adopt change. And people can be capricious and fickle. People are sensitive to the delivery of the change.  They peer into the future to try to understand the effect it will have on them, and if they do not like what they see…  It is the real people in the organization who have to live with the change, who have to make it work, learn the new, and unlearn the old. It is likely the very same people who proudly constructed the current situation who will have to let go of their ‘old’ way of doing things to adapt to the new. Barriers to change exist in many directions in the minds of people.  I know this to be true: in making change happen, if you are not sensitive to the people who you are asking to change, and do not address their fears and concerns, the change will never be accepted.  If you do not give them a clear sense of the future state, where they will be in it, and why it is a better place, they will resist the change and have a very high likelihood of stopping it – either openly, or more likely passively and quietly – and you may never know why the fabulously planned change project failed.

Process is the next aspect of a change project that matters.  A better business process is what drives costs down: avoiding duplication of effort, removing extra steps, and looking at alternatives in a ‘what-if’ manner in order to make better decisions. These are what make businesses smarter, faster, better.  A better business process is like getting a better recipe for the kitchen. Yet no matter how good a recipe, it still relies on the chef to execute it and the ovens to perform properly. Every business is looking for better business processes, just as every chef is looking for new recipes.   But putting an expert soufflé recipe, where the soufflé rises higher, in the hands of an inexperienced chef does not always yield a better soufflé.  People really do matter more than the process.

Technology is the last of the three aspects that effect change. Better technology enables better processes. A better oven does not make a chef better.  The chef gets better when they learn to use the new oven in better ways – when they change the way they make the soufflé because the new oven makes it possible.  A better oven does not do it by itself.  An oven is just an oven. In the same way, better technology is still just technology.  By itself it changes nothing.  New processes can be built that use it, and people can be encouraged to use it in the new process.  Technology changes are the least difficult to implement, and it is likely for this reason that they are often fixed upon as the simple answer to what are complex business problems requiring a comprehensive approach to changing the business via its people, process, and technology.

Nice soufflé

Change is necessary, but hard and risky. Without change, businesses will miss opportunities to adapt to the unforgiving business world, and decline. However, change can be tamed if the attitude toward it shifts to seeing it as a good thing, and if it is addressed with a focus on people, process and technology, in that order.  Done right, you can implement the change that will increase the bottom line and avoid a collapse of your soufflé.

Supply chain design and infrastructure planning during economic expansions is a commonly accepted best practice within the community of logistics professionals. An often overlooked, but equally critical, set of supply chain issues arises during economic contractions.


So in an effort to understand what concerns decision makers are presently experiencing, Profit Point conducted an informal survey of more than 140 logistics professionals worldwide. The survey results indicate that more than 40% of all respondents have plans to expand, rather than contract, their supply chain networks within the next two years.


It is worth noting that the smallest companies surveyed – those with $100 million or less in annual revenue – are experiencing the largest contractions. Conversely, 57% of the surveyed medium-sized companies (annual revenues ranging from $100-500 million) are expanding, not contracting.

To learn more about how Profit Point can help your supply chain expand or contract to meet your future needs, contact us.

What is a Monte Carlo model and what good is it? We’re not talking about the type of car produced by General Motors under the Chevy nameplate. “Monte Carlo” is the name of a type of mathematical computer model. A Monte Carlo model is merely a tool for figuring out how risky some particular situation is. It is a method to answer a question like: “What are the odds that such-and-such event will happen?” Now a good statistician can calculate an answer to this kind of question when the circumstances are simple or if the system that you’re dealing with doesn’t have a lot of forces that work together to give the final result. But when you’re faced with a complicated situation that has several processes that interact with each other, and where luck or chance determines the outcome of each, then calculating the odds for how the whole system behaves can be a very difficult task.

Let’s just get some jargon out of the way. To be a little more technical, any process which has a range of possible outcomes and where luck is what ultimately determines the actual result is called “stochastic”, “random” or “probabilistic”. Flipping a coin or rolling dice are simple examples. And a “stochastic system” would be two or more of these probabilistic events that interact.
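
To make the jargon concrete, here is a Monte Carlo in miniature: it estimates the odds of a compound event built from two interacting random processes (two dice) by brute-force repetition rather than by working out the math:

```python
# A Monte Carlo in miniature: estimate the odds that two dice sum to
# 10 or more by simulating many rolls instead of doing the algebra.
import random

trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) >= 10)
print(f"Estimated probability: {hits / trials:.3f}")  # exact answer: 6/36 = 0.167
```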

Imagine that the system you’re interested in is a chemical or pharmaceutical plant where producing one batch of material requires a mixing and a drying step. Suppose there are 3 mixers and 5 dryers that function completely independently of one another; the department uses a ‘pool concept’ where any batch can use any available mixer and any available dryer. However, since there is not enough room in the area, if a batch completes mixing but there is no dryer available, then the material must sit in the mixer and wait. Thus the mixer can’t be used for any other production. Finally, there are 20 different materials that are produced in this department, and each of them can have a different average mixing and drying time.

Now assume that the graph of the process times for each of the 8 machines looks somewhat like what’s called a ‘bell-shaped curve’. This graph, with its highest point (at the average) right in the middle and the left and right sides mirror images of each other, is known as a Normal Distribution. But because of the nature of the technology and the machines having different ages, the “bells” aren’t really centered; their average values are pulled to the left or right so the bell is actually a little skewed to one side or the other. (Therefore, these process times are really not Normally distributed.)

If you’re trying to analyze this department, the fact that the equipment is treated as a pooled resource means it’s not a straightforward calculation to determine the average length of time required to mix and dry one batch of a certain product. And complicating the effort would be the fact that the answer depends on how many other batches are then in the department and what products they are. If you’re trying to modify the configuration of the department, maybe make changes to the scheduling policies or procedures, or add/change the material handling equipment that moves supplies to and from this department, a Monte Carlo model would be the best approach to performing the analysis.

In a Monte Carlo simulation of this manufacturing operation, the model would have a clock and a ‘to-do’ list of the next events that would occur as batches are processed through the unit. The first events to go onto this list would be requests to start a batch, i.e. the paperwork that directs or initiates production. The order and timing for the appearance of these batches at the department’s front-door could either be random or might be a pre-defined production schedule that is an input to the model.

The model “knows” the rules of how material is processed from a command to produce through the various steps in manufacturing and it keeps track of the status (empty and available, busy mixing/drying, possibly blocked from emptying a finished batch, etc.) of all the equipment. And the program also follows the progress and location of each batch. The model has a simulated clock, which keeps moving ahead and as it does, batches move through the equipment according to the policies and logic that it’s been given. Each batch moves from the initial request stage to being mixed, dried and then out the back-door. At any given point in simulated time, if there is no equipment available for the next step, then the batch waits (and if it has just completed mixing it might prevent another batch from being started).
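
For readers who want to see the mechanics, here is a compact sketch of the department described above as a discrete-event simulation, written with the open-source SimPy library. It is an illustration, not the actual model from this post: the batch count is arbitrary and the uniform process times are placeholders for fitted, per-product distributions:

```python
# Sketch of the mixer/dryer department as a discrete-event simulation
# using the open-source SimPy library. Batch count and process times
# are hypothetical placeholders.
import random
import simpy

def batch(env, name, mixers, dryers):
    mix_req = mixers.request()
    yield mix_req                             # wait for a free mixer
    yield env.timeout(random.uniform(2, 4))   # mixing time (placeholder)
    dry_req = dryers.request()
    yield dry_req                             # no dryer free? the batch blocks here,
                                              # still occupying its mixer
    mixers.release(mix_req)                   # the mixer frees up only now
    yield env.timeout(random.uniform(3, 6))   # drying time (placeholder)
    dryers.release(dry_req)
    print(f"{name} finished at t = {env.now:.1f} h")

env = simpy.Environment()
mixers = simpy.Resource(env, capacity=3)      # the 3 mixers
dryers = simpy.Resource(env, capacity=5)      # the 5 dryers
for i in range(10):                           # ten batch requests at time zero
    env.process(batch(env, f"batch-{i:02d}", mixers, dryers))
env.run()                                     # advance the clock until all events finish
```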

What sets a Monte Carlo model apart, however, is that when the program needs to make a decision or perform an action where the outcome is a matter of chance, it has the ability to essentially roll a pair of dice (or flip a coin, or draw straws) in order to determine the specific outcome. In fact, since rolling dice means that each number has an equal chance of coming up, a Monte Carlo model actually contains equations known as “probability distributions”, which will pick a result where certain outcomes have more or less likelihood of occurrence. It’s through the use of these distributions that we can accurately reflect those skewed, non-Normal process times of the equipment in the manufacturing department.

The really cool thing about these distributions is that if the Monte Carlo model uses the same distribution repeatedly, it might get a different result each time, simply due to the random nature of the process. Suppose that the graph below represents the range of values for the process time of material XYZ (one of the 20 products) in one of the mixers. Notice how the middle of the ‘bell’ is off-center to the right (it’s skewed to the right).


So if the model makes several repeated calls to the probability distribution equation for this graph, sometimes the result will be in the 2.0-2.5 hr band, other times in the 3.5-4.0 hr band, and on some occasions more than 4 hrs. But in the long run, over many repetitions, the proportion of times in each of the time bands will converge to the values that are in the graph (5%, 10%, 15%, 20%, etc.) and were used to define the equation.
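
A sampling routine like that takes only a few lines. The time bands and weights below are illustrative stand-ins, not the actual values from the graph:

```python
# Sketch of a "probability distribution equation" for the skewed
# mixing times of product XYZ. Bands and weights are hypothetical.
import random

time_bands = [(2.0, 2.5), (2.5, 3.0), (3.0, 3.5), (3.5, 4.0), (4.0, 5.0)]
weights    = [5, 15, 20, 35, 25]   # percent chance of each band (skewed right)

def sample_mix_time():
    lo, hi = random.choices(time_bands, weights=weights)[0]
    return random.uniform(lo, hi)  # pick a time within the chosen band

# Over many calls the band frequencies converge to the weights above.
samples = [sample_mix_time() for _ in range(100_000)]
print(f"Share of samples over 4 hrs: {sum(t > 4 for t in samples) / len(samples):.1%}")
```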

So to come back to the manufacturing simulation, as the model moves batches through production, when it needs to determine how much time will be required for a particular mixer or dryer, it runs the appropriate probability equation and gets back a certain process time. In the computer’s memory, the batch will continue to occupy the machine (and the machine’s status will be busy) until the simulation clock gets to the correct time when the process duration has completed. Then the model will check the next step required for the batch and it will move it to the proper equipment (if there is one available) or out of the department all together.

In this way then, the model would continue to process batches until it either ran out of batches in the production schedule that was an input, or until the simulation clock reached some pre-set stopping point. During the course of one run, the computer would have been monitoring the process and recording in memory whatever statistics were relevant to the goal of the analysis. For example, the model might have kept track of the amount of time that certain equipment was blocked from emptying XYZ to the next step. Or if the aim of the project was to calculate the average length of time to produce a batch, the model would have been following the overall duration of each batch from start to finish in the simulated department.

The results from just one run of the Monte Carlo model, however, are not sufficient to be used as a basis for any decisions. The reason for this is the fact that this is a stochastic system where chance determines the outcome. We can’t really rely on just one set of results, because just through the “luck of the draw” the process times that were picked by those probability distribution equations might have been generally on the high or low side. So the model is run repeatedly for some pre-set number of repetitions, say 100 or 500, and the results of each run are saved.

Once all of the Monte Carlo simulations have been accumulated, it’s possible to draw certain conclusions. For example, it might turn out that the overall process time through the department was 10 hrs or more in 8% of the runs. Or the average length of blocked time, when batches are prevented from moving to the next stage because there was no available equipment, was 12 hrs; or the amount of blocked time was 15 hrs or more in 15% of the simulations.
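
Summarizing the accumulated runs is straightforward. In this sketch, run_once is a hypothetical placeholder for one complete pass of the simulation:

```python
# Sketch of summarizing repeated Monte Carlo runs. `run_once` stands in
# for one complete simulation run and returns an overall process time;
# the lognormal draw is a placeholder, not real plant data.
import random
import statistics

def run_once():
    return random.lognormvariate(2.0, 0.35)   # hours (placeholder)

results = [run_once() for _ in range(500)]    # e.g., 500 repetitions

print(f"Mean process time: {statistics.mean(results):.1f} hrs")
print(f"Runs at 10 hrs or more: {sum(r >= 10 for r in results) / len(results):.0%}")
```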

With information like this, a decision maker would be able to weigh the advantages of adding/changing specific items of equipment as well as modifications to the department’s policies, procedures, or even computer systems. In a larger more complicated system, a Monte Carlo model such as the one outlined here, could help to decrease the overall plant throughput time significantly. At some pharmaceutical plants for instance, where raw materials can be extremely high valued, decreasing the overall throughput time by 30% to 40% would represent a large and very real savings in the value of the work in process inventory.

Hopefully, this discussion has helped to clarify just what a Monte Carlo model is, and how it is built. This kind of model accounts for the fundamental variability that is present in almost all decision-making. It does not eliminate risk or prevent a worst-case scenario from actually occurring. Nor does it guarantee a best-case outcome. But it does give the business manager added insight into what can go wrong or right, and the best ways to handle the inherent variability of a process.

This article was written by John Hughes, Profit Point’s Production Scheduling Practice Leader.

To learn more about our supply chain optimization services, contact us here.

Profit Point’s consultants interact daily with companies at the senior and mid-management level, and we are seeing risk management issues emerge as a common theme in these discussions.

If your supply chain and supplier base is complex, then understanding risk may be one of the top challenges you and your purchasing managers face.

  • Are you effectively and pro-actively managing supply assurance and risk?
  • Do you know where to focus your supply chain risk reduction planning?

Yesterday’s reactive approach – responding to supply chain events and disruptions after the fact, or just performing risk assessments on new suppliers – is no longer adequate in today’s world of lean manufacturing and global sourcing.

If you would like to try a better, more systematic approach to risk management, we have a new, proprietary methodology and quantitative risk assessment tool that can help you get your arms around the risk management issue. Companies that have tried our tool have found that the cost and resource requirements are modest, while the savings are substantial.
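
As a generic illustration of what quantitative risk scoring looks like – this simple likelihood-times-impact ranking is not our proprietary methodology, and the suppliers and figures are hypothetical – consider:

```python
# Generic likelihood-x-impact risk scoring, purely illustrative.
# Suppliers and figures are hypothetical.
suppliers = {
    # name: (probability of a disruption this year, impact in $ millions)
    "Resin Co":     (0.10, 12.0),
    "Solvent Ltd":  (0.02, 30.0),
    "Packaging SA": (0.25, 1.5),
}

ranked = sorted(suppliers.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (prob, impact) in ranked:
    print(f"{name:12s} expected loss: ${prob * impact:.2f}M")
```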

We can help you quickly assess specific commodity categories or your entire supply chain to identify the major sources of risk and potential business disruption so that either they can be eliminated or contingency plans can be developed to keep the impact within acceptable limits. If you would like to learn more about this tool and methodology, please contact us. We look forward to hearing from you.

Contact Us Now

610.645.5557
