No black box - we get out of your way and let you write your own features, train your own models, and wrap it all up with your own business logic.
Single platform that just works - no more stitching together & maintaining a dozen MLOps systems yourself.
Hyper developer friendly - fully managed, zero ops burden, beautiful Python APIs. It doesn't get easier than this.
Realtime features and model deployment. Lag of just a few minutes. Blazing fast. Massively scalable.
Built by the creators of ML Infra at Facebook & Google, the platform encodes best practices learned over years of battle testing.
Not at all. We believe that every product domain is unique and that it's impossible to build a generic black-box recommendation system that works for everyone. Instead, we let you write your own features, train your own models, and decide your own end-to-end logic. We just make all of this really easy by abstracting away the domain-agnostic engineering challenges.
Many other ML platforms (e.g. H2O, Databricks, etc.) focus primarily on model training. We don't do any training; instead, we focus on everything that runs on the production path, such as extraction of realtime features, model scoring, monitoring, and vector search. As a result, these training-focused ML platforms are orthogonal and complementary to Fennel's platform. You're welcome to use any of them (or not - up to you) alongside Fennel.
These are frameworks for defining and training ML models and are largely orthogonal to Fennel. Once you train a model, you still need to deploy it along with many other ML Infra components - components that aren't part of any of these frameworks. Fennel provides those components.
Fun fact - some of us used to work on PyTorch at Facebook and are deeply inspired by its beautiful APIs and overall architectural philosophy - you'll notice it the moment you open our API docs :)
Deploying a realtime ML application like recommendations or fraud detection is not just about deploying models. There are many more layers to it, like realtime feature extraction, feature/model monitoring, vector search, and business logic serving. Fennel's platform provides all of this, including model deployment.
This integrated nature of the platform saves months that would otherwise be spent stitching together and then maintaining a Frankenstein stack.
Not at all. The platform is architected from the ground up to be modular. If you wanted, you could use just a subset of the components (in fact, any subset of your choice). Further, whatever subset you choose will work seamlessly with your homegrown tools or other MLOps tools.
A typical ranking/recommendation request with a few hundred candidates and a couple hundred features takes a few hundred milliseconds. Non-ranking applications (e.g. fraud detection) that extract features for a single item at a time should be a lot faster. Everything is horizontally scalable and has been operated in production at a scale of 1,000 recommendation QPS, which, with a few hundred candidates per request and a couple hundred features per candidate, works out to millions of feature extractions and model scorings per second.
We strongly suspect that our Python SDK is Turing complete (though we haven't proven it yet). In other words, the platform is built from the ground up so that you can express arbitrary computations in it, which makes it extremely flexible.
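To make that flexibility concrete, here is a minimal, purely illustrative sketch of the kind of free-form Python logic you could express as a feature computation. The function name, inputs, and weights below are hypothetical and are not part of Fennel's actual API:

```python
import math
from datetime import datetime, timezone

# Purely illustrative sketch (not Fennel's actual API): a hand-written
# feature computation using arbitrary Python (conditionals, math, etc.),
# the kind of logic the SDK is designed to let you express.
def engagement_score(num_clicks: int, num_views: int, last_seen: datetime) -> float:
    """Blend a click-through ratio with an exponential recency decay."""
    ctr = num_clicks / num_views if num_views > 0 else 0.0
    hours_since_seen = (datetime.now(timezone.utc) - last_seen).total_seconds() / 3600
    recency = math.exp(-hours_since_seen / 24.0)  # 24-hour decay constant
    return 0.7 * ctr + 0.3 * recency

# Hypothetical usage with made-up values:
print(engagement_score(
    num_clicks=12,
    num_views=200,
    last_seen=datetime(2023, 1, 1, tzinfo=timezone.utc),
))
```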
You can either deploy the platform in the Fennel Cloud or inside your own cloud. In either case, we manage the cluster, resulting in zero operational overhead for you.
To us, your data is just opaque byte strings; we don't know what it means. We encrypt the data in transit and store it securely inside our cloud. If you want an even higher level of protection, the platform can also be deployed inside your cloud, in which case your data never leaves your cloud and is subject to the same security policies as the rest of your infrastructure.
Training a collaborative-filtering-based recommendation system on a toy dataset is a sophomore-year project in college these days. But where the rubber meets the road is...
The way ML features are typically written for NLP, vision, and some other domains is very different from the way they are written for recommendation systems. In this three-part series...
A very powerful trend is playing out right now: more and more top tech companies are making an ever larger part of their machine learning realtime. So much so that many...