My insurance company spied on my house with a drone. Then the real nightmare began.

It was already a hectic day when my insurance broker left a frantic voicemail telling me that my homeowner’s insurance had lapsed. I felt nauseous and naked. Suddenly, any leak, fire, or tree branch falling onto the hundred-year-old Hudson Valley house that’s been in my family for nearly 40 years could wipe out my bank account. I spiraled in shame. How did I let this happen? Did I forget to update a credit card? Did I miss a bill? Did I do something wrong with the policy? But when I checked my records, and even the Travelers website, there was nothing.

A few hours later, my panic turned to bewilderment. When I finally reached my insurance broker, he told me the reason Travelers revoked my policy: AI-powered drone surveillance. My finances were imperiled, it seemed, by a bad piece of code.

I take privacy and surveillance extremely seriously — so seriously that I started one of the leading think tanks on the topic, the Surveillance Technology Oversight Project. But while I studied surveillance threats around the country for a living, I had no idea that my own insurance company was using my premium dollars to spy on me. Travelers not only uses aerial photography and AI to monitor its customers’ roofs, but has also filed nearly 50 patents on the technology. And it may not be the only insurer spying from the skies.

This didn’t just feel creepy and invasive — it felt wrong. Literally wrong: There was nothing wrong with my roof.

I’m a lazy homeowner. I hate gardening, and I don’t clean as often as I should. But I still take care of the essentials. Whether it’s upgrading the electrical or installing a new HVAC, I try to make sure my home is safe. But to Travelers’ AI, it appeared, my laziness was too big a risk to insure. Its algorithm didn’t detect an issue with the foundation or a concern with a leaky pipe. Instead, as my broker revealed, the ominous threat that canceled my insurance was nothing more than moss.

Where there’s moisture, there’s moss, and if you leave a huge amount of it on a roof for a protracted period, it can undermine the roof’s lifespan. A small amount is largely harmless. Still, treating it couldn’t be simpler. Sure, I could have knocked out the moss sooner, but life was busy, and so it kept falling (and growing) between the cracks. Finally, in June, weeks before I knew my roof was being surveyed, I went to the hardware store, spent 80 bucks on moss killer, hooked the white bottle of chemicals up to the garden hose, and sprayed it on the roof. The whole thing took about five minutes. A few days later, much to my relief, the moss was dying. I thought it was the end of an entirely unmemorable story.

Who knows. Maybe if I’d done that a month sooner, Travelers’ technology would never have flagged me, never would’ve said I was an insurance risk. But one of the deep frustrations of the AI-surveillance age is that as companies and governments track ever more of our lives in ever more detail, we rarely know we’re being watched. At least not until it’s too late to change their minds.

While there’s no way to know exactly how many other Travelers customers have been targeted by the company’s surveillance program, I’m certainly not the first. In February, Boston’s ABC affiliate reported on a customer who was threatened with nonrenewal if she didn’t replace her roof. The roof was well within its life expectancy, and the customer had not encountered any issues with leaks. Still, she was told that without a roof replacement she wouldn’t be insured. She said she faced a $30,000 bill to replace a slate roof that experts estimated could have lasted another 70 years.

Insurers have every incentive to be overly cautious in how they build their AI models. No one can use AI to know the future; you’re training the technology to make guesses based on changes in roof color and grainy aerial images. But even the best AI models will get a lot of predictions wrong, especially at scale, and particularly when they’re guessing about the future of radically different roof designs across countless buildings in varied environments. For the insurance companies designing the algorithms, that means a lot of questions about when to put a thumb on the scale in favor of, or against, the homeowner. And insurance companies have huge incentives to choose against the homeowner every time.

Think about it: Every time the AI gives the green light to a roof that actually has something wrong with it, the insurance company picks up the bill. Each time that happens, the company can add that data point to its model and train it to be even more risk-averse. But when homeowners are threatened with cancellation, they pick up the bill for repairs, even if those repairs are unnecessary. If the Boston homeowner throws out a slate roof with 70 years of life left in it, the insurance company never learns it was wrong to demand the replacement. It never updates the model to be less aggressive for similar homes.

Over time, insurance companies will have every incentive to make the models more and more unforgiving, threatening more Americans with loss of coverage and potentially driving millions or billions of dollars’ worth of unnecessary home repairs. And as insurers face increasing losses due to the climate crisis and inflation, the pressure to push unnecessary preventive repairs on customers will only rise.

A confusing coda to this whole ordeal was what Travelers said when I reached out with a detailed list of fact-checking questions and a request for an interview. In response, a spokesperson sent a terse denial: “Artificial intelligence analysis/modeling and drone surveillance are not a part of our underwriting decision process. When available, our underwriters may reference high-resolution aerial imagery as part of a holistic review of property conditions.”

How did this square with what was written on Travelers’ own website and in its patent applications? Then the precision and slipperiness of the language started to stand out. What exactly counts as the “underwriting decision process”? When Travelers boasts online that its workers “rely on algorithms and aerial imagery to identify a roof’s shape — typically a time-consuming process for customers — with close to 90% accuracy,” does that classification not count as part of the underwriting process? And even though Travelers has flown tens of thousands of drone flights, are those not part of underwriting either? And if AI and drones aren’t actually affecting customers, why file so many patent applications like “Systems and Methods for Artificial Intelligence (AI) Roof Deterioration Analysis”? It felt like the company was trying to have it both ways: boasting about using the latest and greatest technology while avoiding accountability for errors. When I brought these follow-up questions to the company, Travelers did not respond.

Fortunately, my own roof isn’t going anywhere, at least not for now. A few hours after my panicked ordeal with Travelers began and I started scrambling to find new coverage, the situation resolved itself. Travelers admitted that it had screwed up. It never conceded that its AI was wrong to flag me. But it revealed the reason I couldn’t find my cancellation notice: The company had never sent it.

Travelers may have invested huge sums in neural networks and drones, but it apparently never updated its billing software to reliably handle the basics. Without a nonrenewal notice, it couldn’t legally cancel my coverage. Bad cutting-edge tech screwed me over; bad basic software bailed me out.

Part of what is so disturbing about the whole episode is how opaque it was. When Travelers flew a drone over my house, I never knew. When it decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use more and more opaque forms of AI to decide the course of our lives, we’re all at risk. AI may give companies a quick way to save some money, but when these systems use our data to make decisions about our lives, we’re the ones who bear the risk. Maddening as dealing with a human insurance agent is, it’s clear that AI and surveillance are not the right replacements. And unless we take action, the situation will only get worse.

The reason I still have insurance is simple: consumer-protection laws. New York state won’t allow Travelers to revoke my insurance without notice. But why do we let companies like Travelers use AI on us in the first place, without any protections? A century ago, lawmakers saw the need to regulate the insurance market and make policies more transparent; now updated laws are needed to protect us from the AI trying to decide our fates. If we don’t get them, the future looks unsettling. Insurance is one of the few things that protects us from the risks of modern life. Without AI safeguards, the algorithms will take away what little peace of mind our policies give us.

Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or STOP, a New York-based civil-rights and privacy group.
