#WeAreOscar: Hi Lamont

Meet @LamontCarolina, our highly passionate & enthusiastic Community Engagement Manager. A Brooklyn native, Lamont has experience working in non-profits, the private sector, and public service. He has worked on a number of high-profile political campaigns across the country, including President Obama’s reelection team, where he served as Director of Voter Registration for the state of North Carolina and helped register more than 340,000 voters! At Oscar, Lamont focuses on finding ways to connect with and give back to the community.

As I started my internship @ Oscar, I had no idea what it would involve. I decided to focus my efforts on what I know best: people. I spent the week hopping on phone calls to tell people how amazing Oscar is. In this photo, I was able to sign up a family of 7 baby Poms before I realized Oscar doesn’t cover Pomeranians. I have a meeting with our CEO tomorrow to talk about Pomeranian coverage.

Devil’s in the Details: Visualizing Oscar’s Marketing Engine


Our goal at Oscar is to make insurance simple, intuitive, and human. That experience really starts when people decide to sign up with us.

Providing a high-quality consumer experience while coordinating with New York State’s healthcare exchange (which, though not as unstable as the infamous healthcare.gov, has had its difficulties) and navigating an ever-changing web of regulations meant hiring an in-house sales team. To get people to our sales center, we devised a marketing strategy designed to drive as many of our potential customers there as possible. While we couldn’t control the state exchange itself, we could mediate the experience with our healthcare guides.

But when you try to control the sales process for a product as complicated as health insurance, you’re faced with problems as complex as the system itself. Most companies have some notion of an acquisition ‘funnel’ – a way to think about how you develop relationships with potential customers and eventually convert them into consumers, or in our case, members. It’s a simple idea but can quickly become complicated when you have many points of failure (places where you lose potential customers) and a funnel that blends online and offline processes.
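To make the funnel idea concrete, here’s a toy example of the arithmetic: each stage converts only a fraction of the people from the stage before it, so small drop-offs compound quickly. The stage names and counts below are made-up placeholders, not our actual numbers (those are in the visualization linked below).

```python
# Toy funnel arithmetic. Stage names and counts are hypothetical
# placeholders, not Oscar's actual figures.
funnel = [
    ("saw an ad", 100000),
    ("visited the site or called", 12000),
    ("spoke with a guide", 4000),
    ("started an application", 1500),
    ("became a member", 600),
]

for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    conversion = float(next_count) / count
    print("%s -> %s: %.1f%% convert, %d lost"
          % (stage, next_stage, 100 * conversion, count - next_count))
```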

We’ve shared some of our funnel data below (scrubbed, of course) to give you some sense of what we’ve been experiencing in our first few months selling and providing health insurance.

Follow this link to interact with our data visualization.

There are two clear implications: (1) you need to touch a lot of customers to sell a health insurance plan, and (2) when designing a marketing channel, if you have the opportunity to “skip” one of the steps in the funnel, you should. For instance, people who search for Oscar online and go to our site are much harder to sign up than people who click that same search link on a mobile phone and are instantly connected to our call center.

At the end of the day, understanding the acquisition funnel is about better understanding our customers’ behavior so that we can tailor the experience to fit their needs. We still have a lot to learn, but that’s the goal.

by Gabe Drapos
Visualization by Catherine Moresco

#WeAreOscar: Hi Cleo

Meet Cleo, a sales consultant here at Oscar. As a New York native, Cleo felt compelled to enlist after 9/11. He served as a Counter-Intelligence Special Agent and earned his 2nd degree in Intelligence Operations while deployed in both Iraq and Afghanistan. Although he no longer foils the plans of nefarious terror networks around the globe, Cleo now uses his interpersonal skills and natural communication techniques to guide potential Oscar members through the labyrinth of healthcare. And when he’s not helping folks understand their health insurance options, he stays busy with his podcast and website. Cleo’s positive attitude is as boundless as his smile is contagious.

Fish Plays Pokemon: Internet Phenomenon

Every so often, an Internet phenomenon arises unexpectedly and takes the world by storm. Less often, the architect of such a phenomenon happens to be a bright student interning with us for the summer. Meet Catherine Moresco and her Pokemon-playing fighting fish, Grayson Hopper.

So…what is this exactly?
It’s a video stream of my fish playing Pokemon, naturally.

What inspired you?
The short version: it came to me in a dream.
The long version: it was a multi-step process. My friend Patrick and I bought the fish one morning on a whim (we named him Grayson Hopper after Grace Hopper, a pioneer of computer science). But since we both were working full-time tech internships, we didn’t get to see him much, so we bought a webcam and set up a live-stream so we could watch him on our computers at work. Then we just started thinking about what we could do with it—we toyed with the idea of making some sort of interactive game, but the idea for FishPlaysPokemon just struck me in the middle of the night, and it seemed simple and funny enough to work. Patrick and I are part of the awesome HackNY Fellows program, and we built FishPlaysPokemon over the course of about 24 hours for the end-of-summer DemoFest.

How did you build it?
We built a simple motion tracker using Python and OpenCV, which uses the command-line xdotool utility to send the key press corresponding to Grayson’s location every few frames. It runs on a DigitalOcean droplet.
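For the curious, here’s a rough sketch of what such a pipeline can look like. This is an illustrative reconstruction from Catherine’s description, not the actual FishPlaysPokemon source: frame differencing in OpenCV to find the fish, a grid of tank regions mapped to buttons, and xdotool invoked via subprocess to deliver the key press.

```python
# Illustrative sketch of a fish-driven controller, reconstructed from the
# description above -- not the actual FishPlaysPokemon code. The key grid
# is a made-up mapping from tank regions to emulator buttons.
import subprocess
import cv2

KEY_GRID = [["a",    "Up",   "b"],
            ["Left", "x",    "Right"],
            ["z",    "Down", "Return"]]
GRID_ROWS, GRID_COLS = len(KEY_GRID), len(KEY_GRID[0])
FRAME_SKIP = 30                        # only send a key every few frames

cap = cv2.VideoCapture(0)              # webcam pointed at the tank
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
frame_count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame differencing: the largest changed region is (hopefully) the fish.
    diff = cv2.absdiff(prev_gray, gray)
    _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    prev_gray = gray
    frame_count += 1

    if not contours or frame_count % FRAME_SKIP:
        continue

    # Center of the biggest moving blob = the fish's position.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cx, cy = x + w // 2, y + h // 2

    # Map that position onto the key grid and send the key with xdotool.
    col = min(cx * GRID_COLS // frame.shape[1], GRID_COLS - 1)
    row = min(cy * GRID_ROWS // frame.shape[0], GRID_ROWS - 1)
    subprocess.call(["xdotool", "key", KEY_GRID[row][col]])
```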

How did the story get out and gain so much traction so quickly?
Honestly, I’m not quite sure. Virality is strange. I went to bed one night with eight viewers, and woke up the next morning with 22,000 viewers, a subreddit, some really cool fan art, and articles on TechCrunch, BuzzFeed, the Guardian, and BBC. It’s been quite sudden and completely surreal.

What’s next?
We’re going to keep making it better—experimenting with input mapping, cleaning up the aesthetics, and the like. We’re also arranging a better living situation for Grayson, using the donations we’ve received to move him into a bigger, fancier tank. Any donations beyond that will go to the National Fish and Wildlife Foundation.

Beyond Pokemon, though, there are lots of cool things we might do with the stream! A fish-powered random number generator, perhaps. We’re building an API. The sky’s the limit, really!

At the time of writing, Grayson was happy and healthy (despite dark speculation of his untimely death), and enjoying his newfound fame with a staggering 2.1MM views and counting.

#WeAreOscar: Hi Gabe


Meet Gabe, Oscar’s very own Data Analyst. He’s responsible for analyzing member demographics, marketing and sales data, as well as modeling growth to help forecast and stage future initiatives.

Prior to joining the Oscar team, Gabe worked as an investment associate intern at Bridgewater Associates, a researcher at the Technology and Entrepreneurship Center at Harvard, and as an intern at The Daily Show with Jon Stewart. He graduated magna cum laude from Harvard in 2013 with a degree in Philosophy and was editor-in-chief of the Harvard Review of Philosophy.

Gabe was very sick when he was younger, so he and his family learned firsthand how important health insurance is — and how confusing it can be. He believes in Oscar’s mission to make health insurance more helpful and easier to understand.

Evolution of Backend Data


In this post, we’ll be taking an exclusive look at the backend stack for data processing here at Oscar and how it continues to evolve. This is a lengthy discussion, so it will come in three parts over the next few weeks.

Building an engineering program at a new tech company is an exciting and often tricky business. There are so many amazing new technologies out there, so many battle-tested and reliable services, and so many tools that could be right for the job if you just make a choice and tweak it until it works for you. It’s fun to try new things and experiment, and it’s rewarding to build rock-solid services. At Oscar, we’re all about maintaining a high degree of curiosity and technical exploration, while relying on proven methods and technologies. Does that sound consistent to you? It’s not!

So, how do you reconcile conflicting engineering impulses while still having a good time? By making sure you’re addressing the stuff that matters first and foremost. Let’s establish some foundational premises from which arguments can proceed. Those premises will be the properties our systems must achieve by whatever means we come up with. We’ll get back to the conflict in a later post.

Data Acquisition
First, what are the properties we need? Remember, we’re talking about backend data processes alone here. Let’s start with one of the first things we had to do in engineering - connecting partner feeds. The insurance business is a complicated space, and we’re currently handling multiple data feeds from about ten different partner companies. As of this writing, we’re receiving about 60 individual data feeds, and that number is likely to explode over time. Some of the feeds are trivial, requiring us to simply copy new data into our storage systems and leave it there, but that’s the exception. One of our feeds is an ASCII database transaction log representing hundreds of tables from a remote database written in COBOL. Many are fixed-width text files or CSVs that must be parsed according to a schema of byte offsets and data types. For good measure, we also see EDI and HL7.
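To make the fixed-width case concrete, here’s a minimal sketch of parsing one record against a schema of byte offsets and types. The field names, offsets, and converters are invented for illustration and aren’t taken from any real partner feed.

```python
# Minimal fixed-width parsing sketch. The schema here (field name, byte
# offset, length, converter) is invented for illustration -- real partner
# schemas are much larger and live in configuration, not code.
from datetime import datetime

SCHEMA = [
    ("member_id",     0, 10, str),
    ("last_name",    10, 20, str),
    ("dob",          30,  8, lambda s: datetime.strptime(s, "%Y%m%d").date()),
    ("premium_cents", 38, 9, int),
]

def parse_record(line):
    """Slice one fixed-width line into a plain dict according to SCHEMA."""
    record = {}
    for name, offset, length, convert in SCHEMA:
        raw = line[offset:offset + length].strip()
        record[name] = convert(raw)
    return record

# Usage: record = parse_record(raw_line)
# -> {"member_id": ..., "last_name": ..., "dob": date(...), "premium_cents": ...}
```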

Whatever the format, the mission remains the same: to create a unified and up-to-date view of the data for users - both external and internal. And of course, don’t let a single bit get misplaced! This data really matters. It affects the lives of our customers, and if we mishandle it, we can create inconvenience or even add significant stress at a time when focusing on medical treatment should be the top priority. Basically, we shouldn’t ever screw up our data, and when it does get screwed up (yes, it will happen), we must be able to recover fast. Now we have our properties (there’s a toy sketch of the first three after the list) - whatever our choices, our feed systems must:

Respect order.
If you process things out of order, you risk corrupting your data.

Break on failure.
Never write bad data or otherwise continue in an abnormal state. Break vocally (trigger an alert) so engineers are notified immediately.

Be idempotent.
Mid-process or mid-transaction errors are to be expected, and if you can’t fix and resume from an arbitrary point easily without corrupting data, you’re going to spend a great deal of time recovering from one-off errors.

Be low-latency.
We’re not talking about response times for user requests, but rather the latency between the creation of a datum and its integration into our data stores.

Maintain privacy.
Our jobs often handle user data, and they must do so with great care, behind a privacy curtain.
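Here’s the toy sketch promised above: a feed-processing loop that illustrates the first three properties (ordered processing, loud failure, idempotence). Everything in it - the entry shape, the store, the alert callable - is invented for illustration; it isn’t our actual framework.

```python
# Toy feed processor: respects order, breaks loudly on failure, and is
# idempotent. All names here are invented for illustration.
from collections import namedtuple

Entry = namedtuple("Entry", ["transaction_id", "payload"])

class InMemoryStore(object):
    """Trivial stand-in for a real transactional store."""
    def __init__(self):
        self.applied_ids = set()
        self.rows = []

    def already_applied(self, txn_id):
        return txn_id in self.applied_ids

    def apply(self, entry):
        # In a real system, writing the data and recording the transaction
        # id would happen inside a single database transaction.
        self.rows.append(entry.payload)
        self.applied_ids.add(entry.transaction_id)

def process_feed(entries, store, alert):
    # Respect order: process strictly by transaction id.
    for entry in sorted(entries, key=lambda e: e.transaction_id):
        # Be idempotent: skip anything already applied, so rerunning the
        # job after a crash is harmless.
        if store.already_applied(entry.transaction_id):
            continue
        try:
            store.apply(entry)
        except Exception:
            # Break on failure: stop immediately and alert loudly; never
            # write past a bad record.
            alert("feed processing failed at txn %s" % entry.transaction_id)
            raise

# Example: process_feed([Entry(2, {}), Entry(1, {})], InMemoryStore(), print)
```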

That’s already a fair number of properties. If we have several competing approaches, they all have to maintain the same general behavior, and that’s more work. And the harder the properties are to implement, the slower your development and the more error-prone your code. Given that, let’s add two more higher-order properties: our systems should be consistently implemented and consistently deployed.

So, how have we been going about that?

In the beginning…
We first had to get off the ground by constructing processors for each of the partner feeds. It started small, with just a few vendors to connect - maybe five important feeds. Five became 10, which then grew to 50. Formats became more varied, as did the properties of the feeds themselves. At that point, it was fairly obvious that we didn’t want to write separate handlers for each feed, so we came up with the following general architecture:


Each feed clearly needed its own reader, and processing each feed had to respect the schemas associated with that feed’s format. However, the generated data objects could be standard, as could the framework for managing the data in a transactional and idempotent manner. Readers generated TransactionLogEntry objects that combine a standard set of metadata (transaction ID, date, record type, feed source, etc.) with an attribute-value payload containing the transaction data. Different SchemaManager subclasses were created to support different schema types (CSV versus fixed width versus whatever), but they all did the same thing - generate bindings from TransactionLogEntry objects to persistent storage and pass those bindings to the StateManager to be handled safely.
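In rough sketch form, the shape of that framework looks something like the code below. The class names (TransactionLogEntry, SchemaManager, StateManager) come from the description above, but the fields and method signatures are simplified guesses rather than the actual implementation.

```python
# Illustrative sketch of the reader / schema / state split described above.
# Class names come from the post; fields and methods are simplified guesses.
from collections import namedtuple

# Standard metadata shared by every feed, plus an attribute-value payload.
TransactionLogEntry = namedtuple(
    "TransactionLogEntry",
    ["transaction_id", "date", "record_type", "feed_source", "payload"])

class SchemaManager(object):
    """Base class: turn TransactionLogEntry objects into storage bindings."""
    def bind(self, entry):
        raise NotImplementedError

class CsvSchemaManager(SchemaManager):
    """One subclass per schema type (CSV, fixed width, and so on)."""
    def __init__(self, columns):
        self.columns = columns

    def bind(self, entry):
        # A 'binding' here is just the subset of the payload destined for
        # persistent storage, keyed by column name.
        return {col: entry.payload.get(col) for col in self.columns}

class StateManager(object):
    """Applies bindings to persistent storage transactionally and idempotently."""
    def __init__(self, store):
        self.store = store

    def handle(self, entry, binding):
        if self.store.already_applied(entry.transaction_id):
            return
        self.store.write(entry.record_type, binding)
        self.store.mark_applied(entry.transaction_id)

# A feed reader then does roughly:
#   for entry in reader.read(feed):
#       state_manager.handle(entry, schema_manager.bind(entry))
```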

We started out as a Python shop (we still are, with a few exceptions), so we obviously had a lot of libraries and frameworks at our disposal that we could have used for this, particularly with regard to data modeling. Should we autogenerate SQLAlchemy object mappers, or use Thrift or Protocol Buffers? Honestly, we weren’t too particular at that point. Plain old data is easy to deal with and easy to convert to other formats down the line. Maintaining flexibility early on has allowed us to experiment more easily with systems that don’t speak SQLAlchemy, which we’ll discuss in a later post.

At that point, we had made it fairly easy to achieve some of our core properties - strict ordering, break on failure, and idempotence. Good start, but that’s all it was - a start. In our next post, we’ll explore the initial runtime framework we used to achieve the other properties we needed.