Another year, another AI platform making headlines.
Admittedly, we had to do a double-take when we saw news of DeepSeek come out — we initially thought we were reading about the deep freeze temps that hit the southern states this month. Many of us probably didn't want to start the new year with deep freezes or DeepSeek, but here we are.
Keeping track of the whirlwind developments in AI can sometimes feel like trying to chase a squirrel on caffeine. We totally get how overwhelming it can be.
But there's no denying that AI has some pretty exciting perks for businesses, like cost savings, boosted productivity, and better efficiencies — when implemented correctly. That's a key distinction because, on the flip side, AI can bring ample challenges when not used responsibly.
Since it's a new year filled with new possibilities, priorities, and AI platforms, we thought it the perfect time to look into what professional services firms need to know about AI, the risks, and insurance.
So take a break from shoveling snow and get ready to dive into all things AI.
Let’s get into it.
- What's happening?
- Managing the risks of AI
- AI, insurance, and governance
- What's new from Embroker
What's happening?
Why DeepSeek Shouldn't Have Been a Surprise — Harvard Business Review
There have been headlines aplenty about the shock of DeepSeek. But is it really such an unexpected development? As this article points out, management theory could potentially have predicted DeepSeek — and it can also offer insight into what may happen next.
Public DeepSeek AI database exposes API keys and other user data — ZDNet
No surprise with this one. As soon as news about DeepSeek came out, it was a given that there would be security concerns.
AI's Power to Replace Workers Faces New Scrutiny, Starting in NY — Bloomberg Law
This should be on every business owner's radar. While New York may be the first state to use its Worker Adjustment and Retraining Notification (WARN) Act to require employers to disclose mass layoffs related to AI adoption, it won't be the only one.
How Thomson Reuters and Anthropic built an AI that lawyers actually trust — VentureBeat
A new AI platform might be the answer to lawyers' and tax professionals' AI dreams. This article has everything you need to know about "one of the largest AI rollouts in the legal industry."
Managing the risks of AI

"If your company uses AI to produce content, make decisions, or influence the lives of others, it's likely you will be liable for whatever it does — especially when it makes a mistake."
That line is from a Wall Street Journal article and is a fair warning to all businesses using AI.
It's no secret that every new technology comes with risk. The shortcomings of AI have become well-documented, particularly hallucinations (a.k.a. making stuff up), copyright infringement, and data privacy and security concerns. The terms of service for OpenAI, the developer of ChatGPT, even acknowledge accuracy issues:
"Given the probabilistic nature of machine learning, use of our Services may, in some situations, result in Output that does not accurately reflect real people, places, or facts […] You must evaluate Output for accuracy and appropriateness for your use case, including using human review as appropriate, before using or sharing output from the Services."
Of course, not everyone reads the terms of service. (Who hasn't scrolled to the end of a software update agreement and clicked accept without reading?) And taking what AI produces at face value is the crux of the problem for many companies using the technology.
An article from IBM notes, "While organizations are chasing AI's benefits […] they don't always tackle its potential risks, such as privacy concerns, security threats, and ethical and legal issues."
A case in point is a lawyer in Canada who allegedly submitted false case law that was fabricated by ChatGPT. When reviewing the submissions, the opposing counsel discovered that some of the cited cases didn't exist. The Canadian lawyer was sued by the opposing attorneys for special costs for the time they wasted sorting out the false briefs.
Lawyers, financial professionals, and others offering professional services could also find themselves in serious legal hot water if their clients sue for malpractice or errors related to their AI use.
So, how can companies benefit from AI while protecting themselves from its inherent risks? By making proactive risk management their company's BFF. That includes:
- Assessing AI practices, including how AI is used and understanding the associated risks.
- Creating guidelines for using AI, including how information should be vetted.
- Establishing a culture of risk awareness throughout the company.
- Training employees on AI best practices.
- Updating company policies to incorporate AI usage, guidelines, approvals, limitations, copyright issues, etc.
- Getting insured (a bit more on that in a second).
- Not forgetting about it. Things move fast with AI, so staying on top of new developments, security concerns, and regulations is crucial.
The bottom line: When it comes to AI, risk management isn't just a good idea — it's essential.
(P.S. The National Institute of Standards and Technology has developed great (and free) documents to help organizations assess AI-related risks: the Artificial Intelligence Risk Management Framework and the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.)
AI, insurance, and governance
Alright, after all that doom and gloom about the perils of AI, let's talk a little insurance. While there are risks associated with AI, let's face it, businesses that shy away from it are likely to be left in the dust. That's why safeguarding your company is crucial to harnessing the opportunities that AI has to offer.
A core aspect of risk management for AI is having the right insurance coverage to provide a financial and legal safety net for claims stemming from AI-related use.
Once you've got insurance coverage to deal with potential AI conundrums, it's important to regularly review and update your policies to address new developments, concerns, and regulations so your company stays protected as new risks emerge. And if you're unsure, instead of playing a guessing game about how to protect your company from AI risks, chat with your insurance providers. Think of them as your trusty strategic business partner for addressing AI (and other) risks.
Since we've shone a light on the potential AI risks your company could run into, you might be wondering what the insurance industry is cooking up to address its own AI woes. (Spoiler alert: We're not just crossing our fingers and hoping for the best!)
The good news is that the insurance industry is actively stepping up to address these challenges and taking charge of responsible AI use. The National Association of Insurance Commissioners (NAIC) issued a model bulletin concerning insurer accountability for third-party AI systems. The bulletin outlines expectations for the governance of AI systems pertaining to fairness, accountability and transparency, risk management, and internal controls.
Additionally, many states have introduced regulations requiring insurance companies to disclose their use of AI in decision-making processes and provide evidence that their systems are free from bias. Plus, insurers are developing methodologies to detect and prevent unwanted discrimination, prejudice, and lack of fairness in their systems.
It's also worth mentioning that the impact of AI-related risks in the insurance industry is a bit of a different ball game compared to other sectors. "Importantly, the reversible nature of AI decisions in insurance implies that the associated risks differ significantly from those in other domains," reads a research summary from The Geneva Association.
In even better news, AI is offering substantial opportunities for insurance providers to make more accurate risk assessments, including improving the availability, affordability, and personalization of policies to reduce coverage gaps and enhance the customer experience.
Those are wins across the board for everyone.
What's new from Embroker?
Upcoming events, stories, and more
Tech Risk Index Report
AI might be transforming tech, but is it creating new risks as quickly as it's creating opportunities? Our Tech Risk Index report reveals how AI adoption fuels optimism while also raising concerns about privacy and security. Notably, among 200 surveyed tech companies, 79% are hesitant to use AI internally due to risks.
We are bringing together insurance rigor and advanced technologies: Embroker CEO
Our CEO, Ben Jennings, was interviewed for The Insurtech Leadership Podcast at Insurtech Connect 2024. In the interview, Ben shares his views on the insurance industry, the balance between technological innovation and insurance expertise for enhancing the customer experience, and how Embroker is leading the Insurtech 2.0 movement.
The future of risk assessment: How technology is transforming risk management
Check out our latest blog to learn how AI and other cutting-edge technologies are reshaping risk assessment for businesses.