It takes a village…

ARTURO
6 min read · Jul 3, 2019

The saying goes, “It takes a village to raise a child”: it takes an entire community interacting with a child for that child to grow up safely and healthily and succeed in the community. Suppose, if you will, that roughly the same is true of the AI/ML community and the models it produces. Providing the full breadth of models and information the world’s potential users need will take a village of providers. I don’t believe any one provider can supply all the models and inferences the world will need, but I do believe in targeted specialization around specific industries and use cases.

Photo by Ales Krivec on Unsplash

In that vein, I am excited for the opportunity to lead Arturo as CTO as we take our place in the AI/ML community and help advance these offerings. I’ve spent most of my career building near-real-time spatial data processing and analysis systems, and this is an exciting next step in that journey. Most recently, I worked to bring automated analytics, predictive analytics, and emerging AI/ML capabilities to bear on the unique demands of the National Geospatial-Intelligence Agency (NGA). During my time at the NGA, I was exposed to a rich array of capabilities (spatial and non-spatial) leveraging AI, ML, deep learning, and computer vision to solve hard challenges at rates and scales never before seen. When I had the opportunity to see what was being built at Arturo, the decision to join the team was a simple one. It was clear to me that Arturo had something truly special, decidedly in the top 10% of all the geospatial analytic capabilities I had seen and experienced over the last few years. I am excited to be part of the Arturo team bringing a suite of new offerings, and unique new value, to this burgeoning community.

Here at Arturo, we hold that AI/ML capabilities will advance our understanding of the world from the macro scale all the way down to the micro scale. We are fortunate to be surrounded by amazing companies and academic and government researchers building excellent models at the macro scale (global and country level) down to regional and local inferences and predictions. At Arturo, we are focused on the very micro scale of a specific address; I like to call it focused, targeted insight. We take a residential address as input and return inferences about that specific address in time and space. And we don’t rely on pre-processing and caching; rather, we deliver on-demand processing on an address-by-address basis.
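To make the per-address model concrete, here is a minimal sketch of what a query to an on-demand, address-level inference service could look like. The endpoint URL, field names, and response shape are hypothetical, invented for illustration; this is not Arturo’s actual API.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical endpoint -- illustrative only, not Arturo's real API.
API_URL = "https://api.example.com/v1/property-insights"

def fetch_property_insights(address: str, api_key: str) -> dict:
    """Request on-demand inferences for a single residential address.

    Nothing is assumed to be pre-computed or cached: the service fetches
    imagery and runs its models at query time, so each call reflects the
    current state of the property.
    """
    response = requests.post(
        API_URL,
        json={"address": address},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,  # on-demand results are expected within ~10 seconds
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    insights = fetch_property_insights("123 Main St, Madison, WI", "demo-key")
    print(insights)
```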

Photo by Briana Tozour on Unsplash

We have built a carefully curated set of models that answer specific questions asked by the residential P&C (property and casualty) insurance industry. We had the unique opportunity to start our work inside American Family Insurance, where we received critical exposure to the needs and wants of a key customer. Using those lessons, we built a solution keenly aimed at the specific needs of the residential P&C insurance community.

Our offering was designed around four foundational building blocks: On-demand Processing, Confidence Scores (stay tuned for a blog post on this topic in the next few weeks), Feedback Loops, and Proprietary & Unique Data. Recently, our applied machine learning lead wrote a great post on our Feedback Loops. Here I’d like to focus on our use of on-demand processing. By processing on demand, we leverage the most current and highest-resolution imagery available with our most current and performant deep learning models. This improves insurers’ understanding of risk and allows us to help with prefill for insurance applications, an interesting use case we have tailored our system to support. For prefill, results need to be current and accessible within seconds to avoid delays in the application process. Three key pieces of our system make this possible: robust data partnerships, acutely tuned models, and a highly scalable processing system that can return an array of inferences in under 10 seconds.
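As a rough sketch of how confidence scores and prefill could fit together, consider gating each model output on its confidence: high-confidence values flow straight into the application, low-confidence values are flagged for a human. The threshold and feature names below are hypothetical assumptions, not a description of how Arturo actually combines these building blocks.

```python
CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff; in practice tuned per feature

def prefill_application(inferences: dict) -> dict:
    """Split model outputs into auto-filled fields and fields needing review.

    `inferences` maps a feature name (e.g. "roof_material") to a dict with
    a predicted `value` and a model `confidence` in [0, 1].
    """
    prefilled, needs_review = {}, {}
    for feature, result in inferences.items():
        if result["confidence"] >= CONFIDENCE_THRESHOLD:
            prefilled[feature] = result["value"]
        else:
            needs_review[feature] = result
    return {"prefilled": prefilled, "needs_review": needs_review}

example = {
    "roof_material": {"value": "asphalt shingle", "confidence": 0.97},
    "roof_condition": {"value": "fair", "confidence": 0.62},
}
print(prefill_application(example))
```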

Data Partnerships

Our team has strong ties to the geospatial community and a long history of using geospatial data to monitor the world. Through partnerships with the industry’s premier providers of geospatial data, we can obtain data in seconds via the access and discovery APIs these providers host. This wealth of partnerships lets us reach the highest-quality and most timely data relevant to each query of our API: specifically, the highest-resolution commercial satellite imagery available from our partners at MAXAR (formerly DigitalGlobe) and high-resolution aerial imagery collected by our partners at Nearmap.
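In spirit, once a discovery API returns candidate images for a property, selecting the best one comes down to recency and resolution. The catalog interface below is invented for illustration (real provider APIs differ, and a production query would also weigh cloud cover, parcel coverage, and licensing):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImageRecord:
    provider: str        # e.g. "MAXAR" or "Nearmap"
    captured: date       # acquisition date
    resolution_m: float  # ground sample distance in meters (lower is better)

def best_image(candidates: list) -> ImageRecord:
    """Pick the most recent image, breaking ties by finer resolution."""
    return max(candidates, key=lambda img: (img.captured, -img.resolution_m))

catalog = [
    ImageRecord("MAXAR", date(2019, 5, 20), 0.31),
    ImageRecord("Nearmap", date(2019, 6, 14), 0.07),
]
print(best_image(catalog))  # -> the June Nearmap capture
```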

Model Tuning

Our applied machine learning team is continuously exploring new data, developing new models, and refining our existing ones. Our Full-Loop Deep Learning™ process consistently improves our models, and our knowledge of and connections to the geospatial industry keep us constantly exploring new data options, yielding a rich pipeline of new and improved models that enhance our offerings. And finally, our team of machine learning engineers tunes every model to extract features from a specific property and its neighbors in a matter of seconds. Every new or updated model goes through rigorous testing to ensure it can keep up with our on-demand inference offerings.
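One way to express that “keep up with on-demand inference” requirement is as a latency regression test that every model must pass before release. The per-model budget, the `predict()` interface, and the test harness below are assumptions for illustration, not Arturo’s actual test suite:

```python
import statistics
import time

LATENCY_BUDGET_S = 2.0  # hypothetical per-model share of a ~10 s end-to-end budget

def check_latency(model, sample_inputs, runs: int = 50) -> None:
    """Fail if the model's 95th-percentile inference time exceeds its budget."""
    timings = []
    for x in sample_inputs[:runs]:
        start = time.perf_counter()
        model.predict(x)  # assumed single-sample inference interface
        timings.append(time.perf_counter() - start)
    p95 = statistics.quantiles(timings, n=20)[-1]  # 95th percentile
    assert p95 <= LATENCY_BUDGET_S, (
        f"p95 latency {p95:.2f}s exceeds budget of {LATENCY_BUDGET_S}s"
    )
```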

Scalable Processing System

By harnessing tools from the Cloud Native Computing Foundation and the OSGeo Foundation, other open source software, and internally developed proprietary capabilities, we have built a processing platform that can generate a wide array of inferences in under 10 seconds. This system allows us to ingest myriad addresses through our API each second and return property insights to our end users. It scales up and down seamlessly as traffic volumes rise and fall, and it can handle large batch processing jobs for customers who want to monitor a list of properties. We regularly process batches of tens to hundreds of thousands of addresses for our users.
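For a feel of the batch side, here is a minimal sketch of fanning a large address list out against a per-address service while keeping concurrency bounded so downstream systems aren’t overwhelmed. The concurrency cap and the placeholder scoring call are assumptions, not a description of Arturo’s internal platform:

```python
import asyncio

MAX_CONCURRENT = 100  # hypothetical cap; in practice tuned to downstream capacity

async def score_address(address: str, sem: asyncio.Semaphore) -> tuple:
    """Score one address, bounded by the shared semaphore."""
    async with sem:
        # Placeholder for the real on-demand inference call.
        await asyncio.sleep(0.01)
        return address, {"status": "ok"}

async def run_batch(addresses: list) -> list:
    """Fan out a large batch while keeping concurrency bounded."""
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    return await asyncio.gather(*(score_address(a, sem) for a in addresses))

if __name__ == "__main__":
    batch = [f"{n} Example Ave" for n in range(10_000)]
    results = asyncio.run(run_batch(batch))
    print(f"processed {len(results)} addresses")
```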

On-demand Processing

On-demand processing allows us to support customer prefill of an online insurance application for a specific property at a pace that does not lose the attention of a potential consumer. And we are still moving forward with new and enhanced capabilities on top of this platform. Next up is an enhancement to our offering that will monitor changes in properties quarter by quarter. Beyond that, we’ll look to further improve the understanding of risk with predictive models: models focused on predicting the likelihood of risk events, such as a roof needing replacement in the next one to three years, and on estimating values such as the potential replacement cost of a given property.

Photo by Breno Assis on Unsplash

In Conclusion…

It’s an exciting time to be developing in this space, and we look forward to continuing to evolve. We are excited to expand our data partnerships and to explore collaborative opportunities with our colleagues in the geospatial and deep learning communities. We have a vast field of exploration ahead, but I’m excited to see us arrive at a time when we can use automated analytics, deep learning, and AI to deliver insights about our world, from the global scale all the way down to a specific address, in record time.

Ben Tuttle, PhD | Chief Technology Officer at ARTURO


ARTURO

Arturo is a deep learning spin-out from American Family Insurance focused on delivering highly accurate measurement and predictive property data.