
RIMS TechRisk/RiskTech: Opportunities and Risks of AI

On the first day of the RIMS virtual event TechRisk/RiskTech, author and UCLA professor Dr. Ramesh Srinivasan gave a keynote titled “The Opportunities and Downside Risks of Using AI,” touching on the key flashpoints of current technological advancement and what they mean for risk management. He noted that as data storage has become far cheaper and computation faster, risk assessment technology has been able to improve. But with these improvements come serious risks.

Srinivasan provided an overview of where artificial intelligence and machine learning stand, and how companies use these technologies. AI is “already here,” he said, and numerous companies are using the technology, including corporate giants Uber and Airbnb, whose business models depend on AI. He also stressed that AI is not the threat portrayed in movies, and that these portrayals have led to a kind of “generalized AI anxiety,” a fear of robotic takeover or the end of humanity—not a realistic scenario.

However, the algorithms underpinning these businesses and governing many users’ online activities could end up functioning like the “pre-cogs” from Minority Report that predict future crimes, because they collect so much personal information. Companies use these algorithms to make decisions about users, sometimes based on data sets skewed to reflect the biases of the people who collected that data in the first place.

Often, technology companies sell products with little transparency into the algorithms and data sets those products are built around. To avoid AI and machine learning products built with implicit bias, Srinivasan suggested A/B testing new products, using them on a trial or short-term basis, and applying them to a small subset of users or data to see what effect they have.
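
A minimal sketch of that staged-trial approach, written as illustrative Python rather than anything presented in the session: it routes a small fraction of users to a hypothetical new AI-driven feature, keeps the rest on the existing process, and tallies outcomes per arm and per group so that skewed effects surface before a wider rollout. The user IDs, groups, outcomes and 10% trial fraction are all assumptions.

```python
import random
from collections import defaultdict

random.seed(42)  # reproducible illustrative data

def assign_arm(user_id: int, trial_fraction: float = 0.10) -> str:
    """Deterministically place a small subset of users in the trial arm."""
    return "new_model" if user_id % 100 < trial_fraction * 100 else "control"

# Hypothetical outcome log: (user_id, demographic_group, favorable_outcome)
outcomes = [
    (uid, random.choice(["group_a", "group_b"]), random.random() < 0.6)
    for uid in range(5_000)
]

# Tally favorable-outcome rates per arm and per group to spot skewed impacts early.
tally = defaultdict(lambda: [0, 0])  # key -> [favorable, total]
for uid, group, favorable in outcomes:
    key = (assign_arm(uid), group)
    tally[key][0] += int(favorable)
    tally[key][1] += 1

for (arm, group), (favorable, total) in sorted(tally.items()):
    print(f"{arm:>9} / {group}: favorable rate {favorable / total:.2%} (n={total})")
```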

When deciding which AI/machine learning technology their companies should use, Srinivasan recommended that risk professionals map out what technology their company is using, weigh the benefits against the potential risks, and examine those risks thoroughly, including the short- and long-term threats they pose to the organization.
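
One lightweight way to start that mapping exercise is a simple inventory that pairs each technology’s expected benefit with a rough risk score. The sketch below is illustrative only: the entries, the 1-5 scales and the flagging threshold are assumptions, not recommendations from the keynote.

```python
from dataclasses import dataclass

@dataclass
class TechEntry:
    name: str
    benefit: int     # expected business benefit, 1 (low) to 5 (high)
    likelihood: int  # likelihood of the associated risk materializing, 1 to 5
    impact: int      # short- and long-term impact if it does, 1 to 5

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical inventory of AI/machine learning uses within a company.
inventory = [
    TechEntry("Customer-service chatbot", benefit=4, likelihood=3, impact=2),
    TechEntry("Algorithmic resume screening", benefit=3, likelihood=4, impact=5),
    TechEntry("Fleet telematics with driver scoring", benefit=5, likelihood=2, impact=3),
]

# Review anything whose risk score clearly outweighs its expected benefit.
for entry in sorted(inventory, key=lambda e: e.risk_score, reverse=True):
    flag = "review" if entry.risk_score > entry.benefit * 2 else "accept/monitor"
    print(f"{entry.name:<38} benefit={entry.benefit} risk={entry.risk_score:>2} -> {flag}")
```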

Specific risks of AI (as companies currently use it) that risk professionals should consider include:

  • Economic risk in the form of the gig economy, which, while making business more efficient, also leaves workers with unsustainable income
  • Increased automation, in the form of the internet of things, driverless vehicles, wearable tech and other ways of replacing workers with machines, risks making some labor obsolete.
  • Users do not benefit when people and companies use and profit from their data.
  • New technologies also have immense environmental impact, including the amount of power that cryptocurrencies require and the health risks of electronic waste.
  • Issues like cyberwarfare, intellectual property theft and disinformation are all exacerbated as these technologies advance.
  • The bias inherent in AI/machine learning has real-world impacts. Court sentencing, for example, often relies on biased predictive algorithms, as do policing, health care (AI-generated cancer treatment recommendations, for instance) and business functions like hiring (see the illustrative check after this list).
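
For illustration only (not from the session), one basic check on this kind of bias compares a model’s favorable-outcome rates across groups, often expressed as a disparate impact ratio and measured against the four-fifths rule of thumb used in U.S. hiring contexts. The data below are hypothetical.

```python
def favorable_rate(decisions: list[bool]) -> float:
    """Share of decisions that favored the candidate."""
    return sum(decisions) / len(decisions)

# Hypothetical hiring-screen outputs: True means the model advanced the candidate.
group_a = [True, True, False, True, True, False, True, True]
group_b = [True, False, False, False, True, False, False, True]

ratio = favorable_rate(group_b) / favorable_rate(group_a)
print(f"Group A rate: {favorable_rate(group_a):.2f}")
print(f"Group B rate: {favorable_rate(group_b):.2f}")
print(f"Disparate impact ratio: {ratio:.2f} "
      f"({'flag for review' if ratio < 0.8 else 'within the 0.8 rule of thumb'})")
```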

Despite these potential pitfalls, Srinivasan was optimistic, noting that risk professionals “can guide this digital world as much as it guides you,” and that “AI can serve us all.”

RIMS TechRisk/RiskTech continues today, with sessions including:

  • Emerging Risk: AI Bias
  • Connected & Protected
  • Tips for Navigating the Cyber Market
  • Taking on Rising Temps: Tools and Techniques to Manage Extreme Weather Risks for Workers
  • Using Telematics to Give a Total Risk Picture

You can register and access the virtual event here, and sessions will be available on-demand for the next 60 days.

Pregnancy-Tracking Apps Pose Challenges for Employees

As more companies embrace health-tracking apps to encourage healthier habits and drive down healthcare costs, some employees are becoming uncomfortable with the amount and types of data the apps are sharing with their employers, insurance companies and others.

This is especially true for apps that track fertility and pregnancy. As the Washington Post recently reported, these apps collect huge amounts of personal health information, and are not always transparent about who has access to it. The digital rights organization Electronic Frontier Foundation even published a paper in 2017 titled The Pregnancy Panopticon detailing the security and privacy issues with pregnancy-tracking apps. Employers can also pay extra for some pregnancy-tracking apps to provide them with employees’ health information directly, ostensibly to reduce health care spending and improve the company’s ability to plan for the future.

Given the documented workplace discrimination against women who are pregnant or planning to become pregnant, users may worry that the information they provide the apps could impact employment options or treatment by colleagues and managers. Pregnancy-tracking apps also collect far more personal data than traditional health-tracking apps and devices like step-counters or heart rate monitors. This can include everything from what medications users are taking and when they have sex or get their periods, to the color of their cervical fluid and their doctors’ names and locations.

The Washington Post reported that some women, citing discomfort with providing this level of information, have even taken steps to obscure their personal details when using the apps, for fear that their employers, insurance companies, health care providers or third parties may have access to their data and could use it against them in some way. They use fake names or fake email addresses, and only give the apps select details or provide inaccurate information. Fearing the invasion of their newborn children’s privacy, some have even chosen not to report their children’s births on the apps, even though this limits their ability to track their own and their newborns’ health on the app.

As with many other apps and online platforms, it can be difficult to parse out exactly what health-tracking apps are doing with users’ information and what users are agreeing to when they sign up. When employers get involved, these issues become even more difficult. By providing incentives, whether tangible rewards like cash or gift cards or intangible benefits such as looking like a team player, companies may actually discourage their employees from looking closely at the apps’ terms of use or other key details needed to make a fully informed choice about whether to participate.

While getting more information about employees’ health may offer ways to improve a workforce’s health and reduce treatment costs, companies encouraging their employees to use these apps are also opening themselves up to risks. As noted above, apps are not always transparent as to what information they are storing and how. Depending on the apps’ security practices, employees’ data may be susceptible to hacking or other misuse by third-party or malicious actors. For example, in January 2018, fitness-tracking app Strava released a map of users’ activity that inadvertently exposed sensitive information about military personnel’s locations, including in war zones. Given the kinds of personal details that some apps collect, health app data could also put users at risk of identity theft or other types of fraud.

Tracking, storing, and using workers’ personal health information also exposes employers and insurance companies to a number of risks and liabilities, including third-party data storage vulnerabilities and data breaches. This is especially important in places governed by stringent online data protection regulations like the European Union’s General Data Protection Regulation (GDPR). In addition to the risks of reputation damage, companies that are breached or otherwise expose employees’ personal information could face significant regulatory fines.

People using health-tracking apps, especially fertility-related apps, should weigh the costs and benefits of disclosing personal information against how apps and others are using that information. Companies that encourage their employees to use these apps and collect their personal health details should be as transparent as possible about how that information is used, implement measures to protect workers’ personal data to the fullest extent possible, and ensure that managers are not using the data to discriminate against workers.

Disruptive Technologies Present Opportunities for Risk Managers, Study Finds

PHILADELPHIA–Businesses are using disruptive technologies more and more, but those organizations appear to be unprepared for them. What’s more, companies seem to lack understanding of the technologies, and many are not conducting risk assessments, according to the 14th annual Excellence in Risk Management report, released at the RIMS conference here.

The study found an apparent lack of awareness among risk professionals of their company’s use of existing and emerging technologies, including the Internet of Things (IoT), telematics, sensors, smart buildings, and robotics, and their associated risks. When presented with 13 common disruptive technologies, 24% of respondents said their organizations are not currently using or planning to use any of them. This is surprising, as other studies have found that more than 90% of companies are either using or evaluating IoT or wearable technologies, and that companies in the United States invested $230 billion in IoT in 2016.

Another finding was that despite the impact disruptive technology can have on an organization’s business strategy, model, and risk profile, 60% of respondents said they do not conduct risk assessments around disruptive technologies.

“Today’s disruptive technologies will soon be — and in many cases already are — the norm for doing business,” said Brian Elowe, Marsh’s U.S. client executive leader and co-author of the report, in a statement. “Such lack of understanding and attention being paid to the risks is alarming. Organizations cannot fully realize the rewards of using today’s innovative technology if the risks are not fully understood and managed.” According to the study:

Organizations generally, and risk management professionals in particular, need to adopt a more proactive approach to educate themselves about disruptive technologies — what is already in use, what is on the horizon, and what are the risks and rewards. Forward-leaning executives are able to properly identify, assess, and diagnose disruptive technology risks and their impact on business models and strategies.

This lack of clarity presents opportunity for risk professionals. In fact, previous Excellence reports have indicated that C-suite executives and boards of directors want to know what risks loom ahead for their organizations and increasingly rely on risk professionals to provide that insight.

“As organizations adapt to innovative technologies, risk professionals have the opportunity to lead the way in developing risk management capabilities and bringing insights to bear on business strategy decisions,” said Carol Fox, vice president of strategic initiatives for RIMS and co-author of the report. “As a first step, risk professionals are advised to proactively educate themselves about disruptive technologies, including what is already in use at their organizations, what technologies may be on the horizon, and the respective risks and rewards of using such technology.”

One thing companies can do to manage risks associated with disruptive technologies is to facilitate discussions through cross-functional committees, yet only 48% of companies said they have one, down from 52% last year and 62% five years ago.

Whether discussed in weekly, monthly, or quarterly organization-wide committee meetings, emerging risks — including disruptive technologies — need to be examined regularly to anticipate and manage the acceleration of business model changes. When risk is siloed, too often the tendency can be toward an insurance-focused approach to risk transfer rather than an enterprise approach that may lead to pursuing untapped opportunities.

The Excellence survey, Ready or Not, Disruption is Here, is based on more than 700 responses to an online survey and a series of focus groups with leading risk executives in January and February 2017.

Findings from the survey were released today at the RIMS 2017 Annual Conference & Exhibition. Copies of the survey are available at www.marsh.com and www.rims.org.