We've been crazy busy and want to get updates going again. The round officially closed a couple of weeks ago, and yesterday we finally got the list of investors (with addresses), so we can send out the perks to those who are due them. Expect those soon if you invested at a high enough level.
Now, on with the updates. There are three main areas of interest for any startup. Let's jump right into the first one, Technology, today, with the next two to follow.
Technology. As you may remember, we were using Microsoft's Cognitive Services for both tagging objects in the video and Face Recognition. After much work and testing, we concluded that we can and should do better. So we moved what we call "Tagger" over to TensorFlow (Google's open source machine learning platform), where it is much improved. Now our users can search for "person" or "animal" or "vehicle" and see only video containing those objects -- a huge time saver when you have 100+ cameras.
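To make that concrete, here's a minimal sketch of what tag-based search looks like once Tagger has labeled each clip. The clip IDs, tag names, and `search` helper are all illustrative, not our real schema:

```python
# Hypothetical example: each clip carries the set of object tags Tagger found in it.
clips = [
    {"id": "cam01-0001", "tags": {"person", "tree"}},
    {"id": "cam01-0002", "tags": {"tree"}},          # just blowing branches
    {"id": "cam02-0001", "tags": {"vehicle"}},
    {"id": "cam03-0001", "tags": {"person", "vehicle"}},
]

def search(clips, tag):
    """Return only the clips the tagger marked with the requested tag."""
    return [c["id"] for c in clips if tag in c["tags"]]

print(search(clips, "person"))   # only clips that actually contain a person
print(search(clips, "vehicle"))  # only clips that actually contain a vehicle
```

The expensive part (running the model over every frame) happens once, up front; after that, a search is just a cheap set lookup, which is why it scales to 100+ cameras.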
BTW: We're big believers in Open Source Software; the crowd/community really does know best. Proprietary, expensive software vendors may spend more money on marketing, but that doesn't mean their products are actually better. So we're quite happy with our Open Source choice.
Face Recognition is a tough technology nut to crack. We took a long view of our options and decided not to take the easy path (like a paid solution) but to focus on quality and performance. That meant Open Source again, but we first had to figure out which Open Source platform was best. Then we had to architect things so that we could swap in a new one if we wanted to.
Before you can get to face recognition, you first have to do face detection, and then you have to clean up the faces (crop them, rotate them so they're upright, flatten the colors, etc.). It's a long chain with many computationally expensive steps. Get one wrong and it could cost you too much, in terms of computational power, to recognize all the faces you want at scale.
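The chain above can be sketched as a pipeline of stages. This is a toy illustration, assuming stubbed-out stages standing in for the real detection, alignment, and recognition models; the function names are ours for the example, not actual API calls:

```python
# Hypothetical sketch of the recognition chain: detect -> clean up -> recognize.
# Each stage is a stub; a real system plugs in actual models at each step.

def detect_faces(frame):
    """Find the faces in a frame (stubbed: frames carry pre-listed faces)."""
    return frame.get("faces", [])

def clean_face(face):
    """Crop, rotate upright, and flatten colors (stubbed as flag updates)."""
    return {**face, "cropped": True, "upright": True, "normalized": True}

def recognize(face):
    """Map a cleaned-up face to an identity (stubbed via a label field)."""
    return face.get("label", "unknown")

def recognition_chain(frame):
    # The expensive recognize() step only ever runs on detected, cleaned faces.
    return [recognize(clean_face(f)) for f in detect_faces(frame)]

frame = {"faces": [{"label": "alice"}, {"label": "bob"}]}
print(recognition_chain(frame))  # ['alice', 'bob']
```

The point of structuring it this way is that each stage narrows the work for the next one, so the costliest step runs on as little data as possible.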
The paper linked in the image below showed us that ArcFace is on top right now. So, if you know anything about us... you can guess that's the one we're using.
Remember how we tag all the objects in the video first? That turns out to be a huge competitive advantage for us. We only send the Face Recognition system videos in which we have already detected a person. This saves us a ton of resources because we're not trying to find/crop/rotate/recognize faces in blowing branches or passing cars.
We intend to use Tagger as a pre-filter again when it comes to recognizing license plates: only videos with a vehicle in them will go to the relatively compute-intensive license plate recognizer (when we get around to turning that on, too). This can be extended to almost anything else. Want to read all the text that passes by a camera (e.g. a name tag, employee badge, UPS or FedEx label)? Best to process only the videos tagged as containing text, and then try to read what's in those. So Tagger is paying us dividends beyond just the feature itself.
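Here's a minimal sketch of that pre-filter idea: tags decide which expensive recognizers a clip is even sent to. The route table and recognizer names are made up for the illustration:

```python
# Hypothetical example: Tagger's tags route each clip only to the expensive
# recognizers that could possibly find something in it.
ROUTES = {
    "person": "face_recognition",
    "vehicle": "license_plate_reader",
    "text": "ocr",
}

def route(clip_tags):
    """Return the expensive recognizers this clip should be sent to."""
    return sorted(ROUTES[t] for t in clip_tags if t in ROUTES)

print(route({"person", "tree"}))  # only face recognition runs
print(route({"tree"}))            # nothing expensive runs at all
```

Clips full of blowing branches short-circuit out of every costly stage, which is where the resource savings come from.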
In other tech news, we moved from Microsoft Azure to Google Cloud Platform (GCP). That cut our bill roughly in half and got us much better support on what we think is a better platform. For the tech geeks among us: we also Dockerized our entire system and are using Kubernetes to manage it, so we can deploy a whole new copy of our system onto any platform we want with the push of a button (e.g. we could move to AWS easily). This helps us keep our prices down by keeping us from getting locked into any given provider.
In short, the tech is going great. We're crazy busy, but we're also doing things that seem indistinguishable from magic when we demo them.
This has become a long update; I hope it wasn't too boring. We'll do #2, Sales Traction, by Wednesday!