A system has to be outcome-focused, designed around a desired end state.
Systems Thinking per se is a mental construct of the viewer of the system. The system works as designed, and the design satisfies the stated need and objectives. As such, the system itself does not have a problem; rather, an unidentified stakeholder who uses or is impacted by the system has a problem. In general, something must have changed in the outcome the system generates (in the perception of the user) for a problem to be perceived. What follows depends on the nature of the problem: is the system out of set tolerances? Are the issues quantitative or qualitative? At the design stage, risk analysis would hopefully have indicated some consequences of implementation: known unknowns, unknown unknowns, and so on.
Complex problems are seen and experienced differently by different people, and the context in which problems are embedded changes from one problem to the next. The ontological stance this implies is that reality is constructed and that objective data about the external world cannot be obtained. When Systems Thinking is used to investigate problems, this framework suggests the need for an ontological stance based on constructivism and confirms the assumption that systems are mental constructs rather than things that are "out there". Further, Systems Thinking also implies an interpretive epistemological stance.
The analysis of risk is only as good as your understanding of the risk, and usually you only see the risk through your own (or some group's) effort to analyze the system in question. If a system is constructed as designed or architected, we know it will function as designed; when a problem occurs, it is an unintended consequence of the design. If a problem arises from the construction rather than the design of the system, that problem is not an unintended consequence of the design but a consequence of the construction. For such problems, intervention is not needed beyond the structure level of the iceberg model.
At some level, the artifacts that help you with risk analysis are only static views of the system. If a company has reached a state of stable emergent behavior, then rigid risk-analysis protocols are more likely to harm the system in question (the business, the design, and so on) than to help it. There is a time to 'buck' the system, so to speak, jump-starting a complicated system into a genuinely complex one. Once a system has reached complexity, you have to look to its patterns rather than the static views on which most business, social, and scientific practices are based.
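The difference between a static view and a pattern can be made concrete with a small sketch. This is only an illustration under assumed data: the `readings` series and the window size are hypothetical, standing in for any periodic metric a business might monitor.

```python
from statistics import mean

# Hypothetical weekly metric for a system (e.g., weekly defect counts).
readings = [12, 11, 13, 12, 14, 16, 19, 23, 28, 34]

# Static view: a single snapshot. It says nothing about where the
# system is heading.
snapshot = readings[-1]

def trend(series, window=4):
    """Compare the mean of the most recent window against the window
    before it; a ratio well above 1.0 flags a rising pattern that a
    single reading cannot reveal."""
    recent = mean(series[-window:])
    prior = mean(series[-2 * window:-window])
    return recent / prior

# The snapshot (34) looks like just another number in range; the
# windowed ratio shows the metric accelerating sharply.
ratio = trend(readings)
```

Here `ratio` comes out near 1.9: the last four weeks average nearly double the four before them, a dynamic pattern invisible in any one static reading.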
A system has a purpose, and a system is only a system in relation to that purpose; the purpose is not inherent in the objects and processes you build and create. A few examples: a bridge can be defined as a structure whose purpose is to permit passage over a river, yet that purpose is not inherent in the physical structure called a bridge. A bus is a vehicle designed to carry passengers; this purpose is not in the bus itself but is supplied by the human mind. Some system analysts may be aware of the axioms, heuristics, and rules for forecasting so-called counterintuitive behavior within integrated systems:
- Very few people actually know how to conduct an inclusive, holistic system analysis of an entity.
- Such inclusive analyses address all elements of the entity, including the interfaces and interactions among the human, the organization, the machine, and the environment. Many heuristics are associated with systems analyses, drawing on many forms of thinking: temporal, inductive, deductive, systemic, abstract, historical, mathematical, and other forms of logic.
- Experienced analysts should be able to mix and match various methods to suit needs.
- There are constructs that deal with semantics and taxonomies: families of systems, systems of systems, systems, subsystems, components, and parts.
- There are dynamic aspects: stacking of tolerances and variability.
- Further considerations involve various forms of system modeling and simulation.
- Thinking in terms of system risk provides an integrated view of an adverse integration.
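Two of the items above, tolerance stacking and system simulation, combine naturally in a short Monte Carlo sketch. The assembly, part dimensions, and tolerance bands below are entirely hypothetical; the point is only to show how simulated variability compares with the worst-case stack.

```python
import random

# Hypothetical assembly: four parts whose lengths stack end to end.
# Nominal lengths (mm) and symmetric tolerance bands (mm).
NOMINALS = [25.0, 40.0, 15.0, 20.0]
TOLERANCES = [0.1, 0.2, 0.05, 0.1]

# Worst-case stack-up: every part at its tolerance limit at once.
worst_case = sum(TOLERANCES)

def simulate_stack(n_trials=100_000, seed=42):
    """Monte Carlo estimate of the largest total deviation seen when
    each part's deviation is drawn uniformly within its tolerance band."""
    rng = random.Random(seed)
    nominal_total = sum(NOMINALS)
    max_dev = 0.0
    for _ in range(n_trials):
        total = sum(n + rng.uniform(-t, t)
                    for n, t in zip(NOMINALS, TOLERANCES))
        max_dev = max(max_dev, abs(total - nominal_total))
    return max_dev

observed_max = simulate_stack()
```

The simulated maximum stays at or below `worst_case` (0.45 mm here), and in practice well-toleranced assemblies almost never hit the simultaneous worst case; this is the dynamic, statistical view of tolerancing, as opposed to the static worst-case sum.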
So a system has to be outcome-focused, designed around a desired end state. A system by its very nature moves through a range of stability; you may establish what that range is under ideal conditions, yet when the system is tested against the nonlinear aspects of the world, it may not adapt as you expect. That is what causes problems, and risk analysis via Systems Thinking therefore needs to be dynamic and look to the pattern. Indeed, any risk, social or organizational, and any adverse outcome against an entity, can and should be addressed with the methods indicated.