Beyond Pride: Building Strong Diversity and Inclusion Programs

Today, June 28, marks the 50th anniversary of the Stonewall riots, the demonstrations widely considered the formative event of the fight for LGBTQ rights in New York City and the United States as a whole. As June comes to a close and the city begins celebrating World Pride this weekend, enterprises should be thinking about how to extend the spirit of Pride month and embrace the importance of diversity and inclusion. Long after companies have retired their rainbow logos, they will still need to build and maintain meaningful diversity and inclusion policies and programs in practice.

Whether looking to start a formal diversity and inclusion initiative, review existing policies, or audit the efficacy of D&I programs, here are some key resources companies can use to build better workplaces for LGBTQ employees and the workforce at large:

RIMS has also been increasingly focused on diversity and inclusion initiatives for both members and the organization itself with Risk Management Magazine content, webinars and conference networking events. Special thanks to Joshua Lamangan, senior membership manager at RIMS, for sharing many of the resources above from his work leading the RIMS Diversity and Inclusion Task Force and Diversity and Inclusion Advisory Council.

Assessing the Legal Risks in AI—And Opportunities for Risk Managers

Last year, Amazon made headlines for developing a human resources hiring tool fueled by machine learning and artificial intelligence. Unfortunately, the tool came to light not as another groundbreaking innovation from the company, but for the notable gender bias it had learned from its input data and amplified in the candidates it highlighted for hiring.

As Reuters reported, the models detected patterns in the resumes candidates had submitted over the previous decade and in the resulting hiring decisions, but because the tech industry is disproportionately male, those decisions skewed toward men. The program, in turn, learned to favor male candidates.
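
To see how this happens mechanically, here is a deliberately simplified sketch of a resume scorer trained on historically skewed outcomes. This is not Amazon’s actual system; all keywords, data and scoring logic below are invented for illustration. The point is that a model with no explicit notion of gender can still learn to penalize terms that merely correlate with it.

```python
# Hypothetical illustration of bias amplification -- NOT Amazon's system.
from collections import defaultdict

# Fabricated history: (resume keywords, hired?). Because past hires skewed
# male, a term like "womens" (e.g., "women's chess club captain") shows up
# mostly on rejected resumes.
history = [
    ({"software", "java", "captain"}, True),
    ({"software", "python", "chess"}, True),
    ({"software", "java", "womens"}, False),
    ({"software", "python", "womens"}, False),
    ({"software", "java"}, True),
    ({"software", "womens", "captain"}, False),
]

# "Training": weight each keyword by the hire rate of resumes containing it.
seen, hires = defaultdict(int), defaultdict(int)
for words, hired in history:
    for w in words:
        seen[w] += 1
        hires[w] += hired
weights = {w: hires[w] / seen[w] for w in seen}

def score(resume):
    """Average the learned keyword weights; unseen words get a neutral 0.5."""
    return sum(weights.get(w, 0.5) for w in resume) / len(resume)

# Identical skills, one gender-correlated word -- and a lower score.
print(round(score({"software", "python", "captain"}), 2))  # 0.5
print(round(score({"software", "python", "womens"}), 2))   # 0.33
```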

As AI technology draws increasing attention and its applications proliferate, businesses that create or use such technology face a wide range of complex risks, from clear-cut reputation risk to rapidly evolving regulatory risk. At last week’s RIMS NeXtGen Forum 2019, litigators Todd J. Burke and Scarlett Trazo of Gowling WLG pointed to these ethical implications and complex, evolving regulatory requirements as key opportunities for risk management to get involved at every stage of the AI lifecycle.

For example, Burke and Trazo noted that employees who will be interacting with AI will need to be trained to understand its application and outcomes. Where AI is deployed improperly, failure to train the employees involved and to ensure best practices are followed in good faith could create legal exposure for the company. Risk managers with technical savvy and a long-view lens will be critical in spotting such liabilities for their employers, and potentially even in helping to shape the responsible use of emerging technology.

To help risk managers assess the risks of AI in application or help guide the process of developing and deploying AI in their enterprises, Burke and Trazo offered the following “Checklist for AI Risk”:

  • Understanding: You should understand what your organization is trying to achieve by implementing AI solutions.
  • Data Integrity and Ownership: Organizations should place an emphasis on the quality of data being used to train AI and determine the ownership of any improvements created by AI.
  • Monitoring Outcomes: You should monitor the outcomes of AI and implement control measures to avoid unintended outcomes.
  • Transparency: Algorithmic decision-making should shift from the “black box” to the “glass box.”
  • Bias and Discrimination: You should be proactive in ensuring the neutrality of outcomes to avoid bias and discrimination (one way to monitor for this is sketched after this list).
  • Ethical Review and Regulatory Compliance: You should ensure that your use of AI is in line with current and anticipated ethical and regulatory frameworks.
  • Safety and Security: You should ensure that AI is not only safe to use but also secure against cyberattacks. You should develop a contingency plan should AI malfunction or other mishaps occur.
  • Impact on the Workforce: You should determine how the implementation of AI will impact your workforce.
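
One way to act on the “Monitoring Outcomes” and “Bias and Discrimination” items is to routinely compare selection rates across groups. The sketch below applies the EEOC’s four-fifths (80%) rule of thumb to a hypothetical decision log; the group labels and counts are invented for the example, and a real program would rely on legally reviewed metrics.

```python
# Hedged sketch: monitor an AI screening tool's outcomes for adverse impact
# using the EEOC's four-fifths rule of thumb. All data here is hypothetical.

def selection_rates(decisions):
    """decisions: iterable of (group, selected_bool) -> per-group rate."""
    totals, picks = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        picks[group] = picks.get(group, 0) + int(selected)
    return {g: picks[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag any group selected at less than 80% of the top group's rate."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: (round(r, 2), r / top >= 0.8) for g, r in rates.items()}

# Hypothetical log: 100 screened candidates per group.
log = ([("A", True)] * 60 + [("A", False)] * 40
       + [("B", True)] * 35 + [("B", False)] * 65)
print(four_fifths_check(log))  # {'A': (0.6, True), 'B': (0.35, False)}
```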

For more information about artificial intelligence, check out these articles from Risk Management:

Pregnancy-Tracking Apps Pose Challenges for Employees

As more companies embrace health-tracking apps to encourage healthier habits and drive down healthcare costs, some employees are becoming uncomfortable with the amount and types of data the apps are sharing with their employers, insurance companies and others.

This is especially true for apps that track fertility and pregnancy. As the Washington Post recently reported, these apps collect huge amounts of personal health information, and are not always transparent about who has access to it. The digital rights organization Electronic Frontier Foundation even published a paper in 2017 titled “The Pregnancy Panopticon” detailing the security and privacy issues with pregnancy-tracking apps. Employers can also pay extra for some pregnancy-tracking apps to provide them with employees’ health information directly, ostensibly to reduce health care spending and improve the company’s ability to plan for the future.

Given the documented workplace discrimination against women who are pregnant or planning to become pregnant, users may worry that the information they provide the apps could impact their employment options or their treatment by colleagues and managers. Pregnancy-tracking apps also collect far more personal data than traditional health-tracking apps and devices like step-counters or heart rate monitors. This can include everything from what medications users are taking and when they have sex or get their periods, to the color of their cervical fluid and their doctors’ names and locations.

The Washington Post reported that, citing discomfort with providing this level of information, some women have even taken steps to obscure their personal details when using the apps, fearing that their employers, insurance companies, health care providers or third parties could access their data and use it against them in some way. They use fake names or fake email addresses, give the apps only select details, or provide inaccurate information. Fearing the invasion of their newborn children’s privacy, some have even chosen not to report their children’s births on the apps, despite the impact on their ability to track their own health and that of their newborns.

As with many other apps and online platforms, it can be difficult to parse exactly what health-tracking apps are doing with users’ information and what users are agreeing to when they sign up. When employers get involved, these issues become even more difficult. By providing incentives, whether tangible rewards like cash or gift cards or intangible benefits such as looking like a team player, companies may actually discourage their employees from looking closely at the apps’ terms of use and other key details they need to make a fully informed choice about whether to participate.

While getting more information about employees’ health may offer ways to improve a workforce’s health and reduce treatment costs, companies encouraging their employees to use these apps are also opening themselves up to risks. As noted above, apps are not always transparent about what information they are storing and how. Depending on an app’s security practices, employees’ data may be susceptible to hacking or other misuse by third parties or malicious actors. For example, in January 2018, a map of users’ activity published by fitness-tracking app Strava was found to inadvertently expose sensitive information about military personnel’s locations, including in war zones. Given the kinds of personal details that some apps collect, health app data could also put users at risk of identity theft or other types of fraud.

Tracking, storing, and using workers’ personal health information also exposes employers and insurance companies to a number of risks and liabilities, including third-party data storage vulnerabilities and data breaches. This is especially important in jurisdictions governed by stringent data protection regulations like the European Union’s General Data Protection Regulation (GDPR). In addition to the risk of reputation damage, companies that are breached or otherwise expose employees’ personal information could face significant regulatory fines.
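
To put “significant” in perspective: GDPR’s upper tier of administrative fines can reach the greater of €20 million or 4% of total worldwide annual turnover (Article 83(5)). A quick back-of-the-envelope calculation, using a hypothetical turnover figure:

```python
# GDPR Article 83(5) upper-tier fine ceiling: the greater of EUR 20 million
# or 4% of total worldwide annual turnover. Turnover figure is hypothetical.

def gdpr_max_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A company with EUR 2 billion in annual turnover:
print(f"EUR {gdpr_max_fine(2_000_000_000):,.0f}")  # EUR 80,000,000
```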

People using health-tracking apps, especially fertility-related apps, should weigh the benefits of disclosing personal information against how the apps and other parties may use it. Companies that encourage their employees to use these apps and collect their personal health details should be as transparent as possible about how they use that data, implement measures to protect workers’ personal information to the fullest extent possible, and ensure that managers are not using this data to discriminate against workers.

How a Strong(er) SRM Program Could Have Helped Boeing

A strategic risk management (SRM) program is designed to help organizations identify, prioritize, and plan for the strategic risks that could impair or destroy a business, reducing the chances of these kinds of crises. And while hindsight is 20/20, an SRM program – or a more effective one – could have helped Boeing avoid some of its recent high-profile crises.

Between October 2018 and March 2019, two crashes involving Boeing 737 MAX 8 aircraft resulted in the loss of 346 lives. Since then, Boeing has:

  • had a possible criminal investigation commenced against it,
  • lost $22 billion in market value in the week following the Ethiopian Airlines crash in March,
  • had more than 300 737 MAX 8s grounded worldwide,
  • sustained significant reputational harm,
  • received demands from airlines seeking compensation for lost revenue,
  • been sued by crash victims’ families, and
  • had sales orders cancelled or suspended.

This is a crisis from which it may be difficult to recover.

One could trace some of these risks back to Boeing’s decades-long rivalry with Airbus and its effort to remain viable.

When American Airlines indicated it was close to finalizing an exclusive deal with Airbus for hundreds of new jets, Boeing sprang into action. The New York Times reported that Boeing employees then had to move at “roughly double the normal pace” to avoid losing “billions in lost sales and potentially thousands of jobs.”

An SRM program would have required an assessment of the business model and the associated risks, including competitors, long before the call from the CEO of American Airlines. The risks would have been prioritized, and this information factored into strategic plans that included responses to material risks.
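
As a rough illustration of what that prioritization step can look like, here is a minimal, hypothetical risk register scored on simple 1–5 likelihood and impact scales. The entries and ratings are invented for the example and are not drawn from Boeing’s actual risk data.

```python
# Hypothetical strategic risk register -- entries and scores are invented.
# Material (highest-scoring) risks would feed strategic plans and responses.

risks = [
    {"risk": "Competitor launches a more fuel-efficient jet", "likelihood": 4, "impact": 5},
    {"risk": "Schedule pressure erodes safety review",        "likelihood": 3, "impact": 5},
    {"risk": "Regulators tighten certification requirements", "likelihood": 2, "impact": 3},
    {"risk": "Key supplier disruption",                       "likelihood": 3, "impact": 2},
]

for r in risks:
    # Simple prioritization: likelihood x impact on 1-5 scales (max 25).
    r["score"] = r["likelihood"] * r["impact"]

# Highest-priority risks first, for inclusion in the strategic plan.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:>2}  {r['risk']}")
```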

During the scramble, Boeing mirrored Airbus’ operations and mounted larger engines in existing models.

The objective seemed straightforward: make minimal changes to avoid the need for simulator training, decrease costs, and build the redesigned model quickly. But mounting larger engines changed the aircraft’s aerodynamics, creating the need for new software, the Maneuvering Characteristics Augmentation System (MCAS), which was supposed to prevent stalling. Boeing’s view was that pilots did not need to be trained on the software, and federal regulators agreed.

However, in an effective SRM program, the C-Suite would have been advised that the strategic and life safety risks were material and that training for pilots was indeed necessary. In addition, all such risks would have been assessed to determine whether they could be used to obtain a competitive advantage.

For example, including vital safety features in the base cost of aircraft (as opposed to charging extra for them) and requiring a focus group of pilots with no financial relationship with Boeing to test the newly designed 737 MAX 8s and the MCAS system would have been a way to solidify Boeing’s reputation for safety first.

An SRM program, which monitors progress in achieving strategic objectives with a focus on continuous improvement, would have treated the Indonesian Lion Air and Ethiopian Airlines crashes as an opportunity to confirm that Boeing puts safety first by grounding the aircraft. Instead, Boeing urged the U.S. to keep flying its jets until after 42 regulators in other countries had grounded them, and appeared to care more about economics than life safety. Only seven months ago, Boeing was synonymous with efficient jets and commercial aviation – a reputation that took decades to build. Now, the company has a long, uphill climb to resolve its many challenges and rebuild its brand.

An SRM program cannot succeed without full support from the C-Suite, as it must be integrated into the business model and decision-making processes to be effective. In time, we will learn more about what risk management protocols were followed across Boeing’s organization.

At RIMS 2019, Marian Cope will lead a panel of industry experts in discussing reasons to transform an ERM program into an SRM program, or to develop one from scratch, in the session “NextGen ERM: Strategic Risk Management.” The session will take place April 29 at 1:30 pm.