About Jared Wade

Jared Wade is a freelance writer and former editor of the Risk Management Monitor and senior editor of Risk Management magazine. You can find more of his writing at JaredWade.com.

The 5 Most Common ERM Errors

[Each year, the best Canadian risk managers gather to discuss the state of the discipline at the RIMS Canada Conference. The 2011 incarnation is taking place this week in Ottawa so I will be reporting from here for the next few days.]

The first session I attended at the 2011 RIMS Canada Conference in Ottawa promised to detail the top 10 common ERM errors — and how to avoid them. True to form, presenter Diana Del Bel Belluz of Risk Wise Inc. and moderator Nowell Seaman, head of risk management for the University of Saskatchewan and a RIMS board member, did just that.

Here is a recap of Belluz’s list, highlighting the top five.

#1. Complacency Has Set In

Complacency is an enemy of risk management. Once it cements itself into an organization’s culture, it is difficult to get out from under. Risk managers must determine whether this is a hurdle at their company. Some warning signs Belluz says to look for are executives who respond to risks with statements like …

“It’s never happened before.”
“It can’t happen here.”
“We can handle it.”
“Ignore it and it will go away.”

She mentioned one company she advised whose CEO took an “ignore it and it will go away” approach to one risk. “It worked,” she said. “It took seven years and a lawsuit, but he was right — eventually it did go away.”

#2. Not Understanding Your Risk Exposure

“At its heart, this mistake is really about not linking risk to strategy,” said Belluz. In an attempt to understand its exposure, most companies will start their risk identification by brainstorming. Various company stakeholders gather and throw out ideas about what worst-case scenarios could harm the organization. One big benefit, says Belluz, is that this allows you to tap into the expert knowledge in the room.

But there are also many cons.

First off, success hinges upon the individuals in the room, so you need to ensure you get the right people. Second, groupthink — or simply one dominant personality — can skew the discussion, possibly towards concerns that are not actually the biggest threats. Additionally, because you are looking at each risk in isolation, you don’t factor in the interdependencies that exist between risk factors. You can ask just about any financial firm still in existence today how that can lead to a company’s downfall. And lastly, brainstorming tends to create a very large list of risks, which then makes prioritizing the threats difficult.

For these reasons, she suggests creating “an influence diagram,” which is essentially a flowchart map of risks that shows how they interact and allows you to use colors or shape size to demarcate which exposures are the most critical. It is a visual approach that lets you view and understand the interrelationships between multiple risks/objectives.
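At its core, an influence diagram of this kind is just a directed graph: risks are nodes, dependencies are edges, and severity drives the visual emphasis. A minimal sketch of that structure in Python — the risk names, severity scores, and links below are hypothetical examples, not from Belluz’s presentation:

```python
# Sketch of an influence diagram as a directed graph.
# Risk names, severity scores (1-10), and links are invented examples.
risks = {
    "supplier failure": 8,
    "production halt": 9,
    "customer churn": 6,
}

# Directed edges: a risk that materializes can trigger another.
influences = [
    ("supplier failure", "production halt"),
    ("production halt", "customer churn"),
]

def downstream(risk, edges):
    """Collect every risk reachable from `risk` via influence links."""
    reached, stack = set(), [risk]
    while stack:
        current = stack.pop()
        for src, dst in edges:
            if src == current and dst not in reached:
                reached.add(dst)
                stack.append(dst)
    return reached

# A high-severity risk with many downstream effects deserves the
# boldest color or largest shape on the diagram.
for name, score in risks.items():
    effects = downstream(name, influences)
    print(f"{name}: severity {score}, triggers {len(effects)} downstream risk(s)")
```

Walking the graph this way surfaces exactly what a brainstormed list hides: “supplier failure” looks mid-tier in isolation, but it cascades into two further exposures.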

This, too, has its own con, however.

Because it relies on linking internal risks to one another, it can overlook big risk factors that are outside the organization. Think of the economic meltdown or terrorist attack. These could affect multiple parts of the operation. But the flowchart lines won’t show this connection to an external threat.

Thus, Belluz recommends that you don’t rely on either of these methods exclusively. Use both. Such an approach will leave fewer gaps in your identification, quantification and prioritization of risks. And don’t stop there. Add checklists, “risk heat maps” and risk matrices as well, she suggests.

Still, many companies are failing to use such formal procedures.


To highlight this, Belluz asked the room “what would it take in our organization to implement more structural approaches [to risk management]?”

Immediately, one risk manager in the crowd shouted out “more resources.”

I’m sure many others can relate.

#3. Relying on Gut Instinct to Assess Risk

This is an obvious mistake with a not-so-obvious solution. Essentially, it comes down to one question: “What role should judgment, experience and intuition play in analyzing and informing strategic decisions?”

Ironically, determining the right answer to that question might take more art than science, but there are a few pitfalls that Belluz pointed out.

  • Mistaking beliefs and opinions for facts
  • Confirmation bias
  • Group polarization (in which like-minded people gather and a concern they all share intensifies through discussion. For instance, a group of people very concerned about hazardous waste come together, discuss the issue and walk out of the room thinking it’s an even worse problem than they did going in.)
  • Emotionally charged situations

A way to mitigate being too “gutsy” in your thinking, if you will, is ensuring that the methodology remains evidence-based. If you are using your gut during risk identification rather than a process grounded in facts, you may actually make the problem worse.

Could ignorance be better than a false sense of protection? Maybe.

As Nowell Seaman pointed out, use your gut rather than facts and you may come out of a meeting and “feel like we have done something but we really haven’t.”

Is an unknown risk that remains unmanaged better than a known risk that is poorly managed? I would lean towards no, but it’s certainly debatable.


#4. Overlooking the Information You Have

Belluz’s suggestion to avoid this one was simple: “Frame a question about the risk properly and then mine your data.” As we know, life consists of lies, damned lies and statistics. So numbers can usually be found to support any conclusion. And finding the right information is key.

This can be overwhelming, however. To ease the burden, Belluz suggests four useful measurement assumptions you should remember:

  • Someone has measured it before (Google is your friend)
  • You have more data than you think
  • You need less data than you think
  • New data is more easily accessible than you think

In short, there is data out there. Be sure you use it.

#5. Focusing on the Wrong Risks 

The key question to ask here is whether or not the risk aligns with the company’s risk appetite. And this concept led to an even more interesting question from the audience. “What’s the difference between risk tolerance and risk appetite?” asked a risk manager.

Belluz’s answer? “I don’t think, as a discipline, we have decided on that.” Seaman agreed, but was able to add some insight he has learned from his years of managing risk in the trenches. “Tolerance is how much risk can you stand. How much you can stomach,” he said. “Appetite is how much you want to stand.”

So in trying to determine whether or not you’re focusing on the wrong risks, perhaps the best lesson is to always identify those areas in which your exposure is higher than the amount of risk you want to stand. If you look through that lens and everything seems kosher, you should be able to sleep a lot better at night.

The Media Is Increasingly Talking About a Recession

If you trust the media (which, as a quasi-member of the media, I don’t) then the world may be back-sliding into another recession.


A chart without further granularity on the timing of such mentions of course suffers from the chicken-or-the-egg conundrum, but it is interesting to wonder about.


From The Economist:

The Economist’s informal R-word index tracks the number of newspaper articles that use the word “recession” in a quarter. If not foolproof, it boasts a decent record: previous incarnations of the index pinpointed the start of American recessions in 1990, 2001 and 2007. The index had been declining steadily from a peak in early 2009. September, however, has brought a change in the weather.
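The mechanics of such an index are simple enough to sketch: count the articles per quarter that contain the word “recession” and watch for an upturn. A toy illustration in Python — the article data below is invented, not from The Economist’s archive:

```python
from collections import Counter

# Hypothetical (quarter, headline) pairs standing in for a real
# newspaper archive.
articles = [
    ("2011-Q2", "Markets rally on earnings"),
    ("2011-Q3", "Fears of recession return"),
    ("2011-Q3", "Is a double-dip recession coming?"),
    ("2011-Q3", "Banks brace for recession risk"),
]

# Count articles per quarter that mention the R-word.
r_word_index = Counter(
    quarter for quarter, text in articles if "recession" in text.lower()
)

print(dict(r_word_index))  # a rising quarterly count is the warning sign
```

The signal is the trend, not any single quarter’s count: a sustained climb from one quarter to the next is what the index treats as a change in the weather.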

If the “hacks…getting anxious,” as the Economist puts it, are wrong, it would certainly not be the first time. But their track record in this informal index should be concerning.

UK Infrastructure Providers “Accept an Unexpectedly High Level of Risk” of Cyber-Threats, and the National Response Is “Fractured and Incoherent”

Hacker organizations like Anonymous and LulzSec are waging a worldwide cyberwar. These new combatants are highly sophisticated and have emerged as a true threat so quickly that it is understandable why many organizations remain vulnerable.

But according to a new report from Chatham House, the UK’s critical infrastructure providers actually do understand the gravity of the threat and are very concerned about what it means for their operations — they just have chosen to do little about it.

“Many of the organizations surveyed in the course of this project have developed an attitude to cyber security that is fundamentally contradictory. In most cases, they declared themselves to be aware of cyber security threats. Yet these same organizations were willing, for a variety of resource and other reasons, to accept an unexpectedly high level of risk in this area. In several cases it was even decided that cyber risk should be managed at arm’s length from the executive authority and responsibility of the board and senior management. Paradoxically, therefore, in these organizations a heightened perception of cyber security risk is being met with diminished resources and interest.”

It gets worse.

While the weak response by critical infrastructure providers is clearly presenting a risk to national security in the United Kingdom, governments aren’t helping. Those who should be leading the charge on raising security aren’t doing enough to help those they serve keep up with the threat, say the providers.

There was a perception among those interviewed for the study (which included 100 large businesses and banks) that “the national response mechanism is for the most part fractured and incoherent” and that there is “little sense either of governmental vision and leadership, or of responsibility and engagement” with critical infrastructure providers. Those providers also note that better information sharing could help them considerably in preparing their defenses, but so far the UK government is “more willing to solicit information than to divulge it.”

Of course, the responsibility to mitigate this threat is not solely a public responsibility. These companies must take it upon themselves to safeguard their own operations. As the report notes, however, “this will only be achieved to the extent that board members are themselves more aware of the opportunities and threats presented by cyberspace.”

It must start at the top.

But unbelievably, the report notes that some companies have “deliberately pushed [the threat] below the boardroom level in order to remove a complex and baffling problem from sight.”

To be fair, some organizations are responding better than others. But they are the exception, not the rule. “The results were varied,” said study co-author Dave Clemente on a Chatham House podcast. “Some organizations have a fairly nuanced view of cybersecurity — it comes up on their board agendas on a regular basis. And others don’t. Others respond only when something unpleasant happens to them or a competitor, and then they have to do something very quickly. Something must be done. Money is thrown at the problem. And often it produces an overreaction [that’s] not very well thought-through.”

In talking about the risk to infrastructure providers, Clemente highlighted the cyberattack that Anonymous launched against Sony, shutting down its popular online video game platform, the PlayStation Network, for weeks and exposing the personal info of tens of millions of users.

Games Beat explains the details of that attack.

The company was criticized for having outdated security software that did not adequately protect the PSN from hackers when they broke in. Security experts knew Sony was running outdated versions of the Apache Web server software that did not have a firewall installed. The company said hackers were able to breach the PSN and steal sensitive data while the company was fending off denial of service attacks from Anonymous, an online hacker group that typically takes up politically charged causes.

Hackers also hit a number of other high-profile companies like defense contractor Lockheed Martin and Bethesda Softworks, another game publisher. The number of hacking attacks has given network providers like Sony a new set of challenges when building security for company networks.

Clemente says that the hacker intrusion cost Sony $171 million.

That’s a large sum that should help any large company take notice. But he also notes that these types of cyber threats are “substantial in a monetary sense, but there’s also reputational damage as well.”

Incidentally, Tim Schaaff, a top Sony executive, told Games Beat that the attack was a “great experience, really good time … Though I wouldn’t like to do it again.”

At first blush, that seems like a ridiculous position. But perhaps we can all learn a little bit from the PlayStation Network breach. Take this quote from Schaaff that further elaborates on his view.

“A determined hacker will get you, the question is how you build your life so you’re able to cope with those things,” he said.

Of course, prevention and fool-proof network security are the ultimate goal. But that is increasingly hard to achieve. As the Chatham House study shows, the leap forward in the sophistication of today’s hackers means that most companies are now playing catch-up — both technologically and in terms of understanding the nature of the threat.

In the interim, many companies can and will be hit.

Know that. And start game-planning a response. Perhaps going forward, conducting emergency tabletop drills for cyberdisasters will be as beneficial in preparing for the real thing as emergency drills for natural disasters are today.

Companies that have money and reputations to lose would be silly not to be developing strategies to counter this threat. And those organizations tasked with providing the nation with critical infrastructure? Well, if they do not improve their defenses soon, “silly” will start looking a lot more like “negligent.”

(For more on cyberthreat whos, whats, whens, wheres, whys and hows, stay tuned for our upcoming October issue of Risk Management magazine. We have an 8-page feature detailing the nature of the threat — and ways to combat it.)

How the NCAA Has Used the Term “Student-Athlete” to Avoid Paying Workers Comp Liabilities

Anyone who has spent much time following college sports should be aware of the NCAA’s hypocrisy. It demands purity from its “amateur” “student-athletes” while at the same time taking in billions in revenue from their on-field and on-court efforts. And whenever the nation expresses outrage at the revelation of yet another “scandal” in which a player received some compensation for their athletic abilities, there is much hand-wringing and finger-pointing from the sport’s governing body, which in turn imposes sanctions and other penalties against the offending schools and players.

Well, never before has anyone detailed this NCAA hypocrisy better than Taylor Branch did in the latest cover story of The Atlantic, “The Shame of College Sports.” If this sort of stuff interests you, the looooong account is well worth your time to read.

For our purposes, however, the most interesting excerpt chronicles the how and the why of the NCAA’s creation and widespread promotion of the term “student-athlete.” According to Branch, the main reason that former NCAA head Walter Byers, in his own words, “crafted the term student-athlete” and soon made sure it was “embedded in all NCAA rules and interpretations” was because it was an excellent defense against being held liable for workers compensation benefits that those injured in athletic competition could seek.

“We crafted the term student-athlete,” Walter Byers himself wrote, “and soon it was embedded in all NCAA rules and interpretations.” The term came into play in the 1950s, when the widow of Ray Dennison, who had died from a head injury received while playing football in Colorado for the Fort Lewis A&M Aggies, filed for workmen’s-compensation death benefits. Did his football scholarship make the fatal collision a “work-related” accident? Was he a school employee, like his peers who worked part-time as teaching assistants and bookstore cashiers? Or was he a fluke victim of extracurricular pursuits? Given the hundreds of incapacitating injuries to college athletes each year, the answers to these questions had enormous consequences. The Colorado Supreme Court ultimately agreed with the school’s contention that he was not eligible for benefits, since the college was “not in the football business.”

The term student-athlete was deliberately ambiguous. College players were not students at play (which might understate their athletic obligations), nor were they just athletes in college (which might imply they were professionals). That they were high-performance athletes meant they could be forgiven for not meeting the academic standards of their peers; that they were students meant they did not have to be compensated, ever, for anything more than the cost of their studies. Student-athlete became the NCAA’s signature term, repeated constantly in and out of courtrooms.

Using the “student-athlete” defense, colleges have compiled a string of victories in liability cases. On the afternoon of October 26, 1974, the Texas Christian University Horned Frogs were playing the Alabama Crimson Tide in Birmingham, Alabama. Kent Waldrep, a TCU running back, carried the ball on a “Red Right 28” sweep toward the Crimson Tide’s sideline, where he was met by a swarm of tacklers. When Waldrep regained consciousness, Bear Bryant, the storied Crimson Tide coach, was standing over his hospital bed. “It was like talking to God, if you’re a young football player,” Waldrep recalled.

Waldrep was paralyzed: he had lost all movement and feeling below his neck. After nine months of paying his medical bills, Texas Christian refused to pay any more, so the Waldrep family coped for years on dwindling charity.

Through the 1990s, from his wheelchair, Waldrep pressed a lawsuit for workers’ compensation. (He also, through heroic rehabilitation efforts, recovered feeling in his arms, and eventually learned to drive a specially rigged van. “I can brush my teeth,” he told me last year, “but I still need help to bathe and dress.”) His attorneys haggled with TCU and the state worker-compensation fund over what constituted employment. Clearly, TCU had provided football players with equipment for the job, as a typical employer would—but did the university pay wages, withhold income taxes on his financial aid, or control work conditions and performance? The appeals court finally rejected Waldrep’s claim in June of 2000, ruling that he was not an employee because he had not paid taxes on financial aid that he could have kept even if he quit football. (Waldrep told me school officials “said they recruited me as a student, not an athlete,” which he says was absurd.)

The long saga vindicated the power of the NCAA’s “student-athlete” formulation as a shield, and the organization continues to invoke it as both a legalistic defense and a noble ideal. Indeed, such is the term’s rhetorical power that it is increasingly used as a sort of reflexive mantra against charges of rabid hypocrisy.

Today, the term “student-athlete” is intended to carry with it the nobility of amateur athletics that the NCAA epitomizes.

Originally?

It was a good protection for keeping those carried off the field from suing the schools.