Modern Realtime Recommendation Platform

Built for data scientists and product engineers. All the control of an in-house ML Platform, none of the hassle. At half the cost.



Ship realtime recommendations 10x faster at half the cost

Full control

No black box - we get out of your way and let you write your own features, train your own models, and wrap it all up with your own business logic.

Bundled

Single platform that just works - no more stitching & maintaining a dozen MLOps systems yourself.

Ease of use

Hyper developer friendly - fully managed, zero ops burden, beautiful Python APIs. It doesn't get easier than this.

Realtime & fast

Realtime features and model deployment. Lag of only a few minutes. Blazing fast. Massively scalable.

FAQs

Is this another blackbox recommendation solution?

Not at all. We believe that every product domain is unique and that it's impossible to build a generic black-box recommendation system that works for everyone. Instead, we let you write your own features, train your own models, and decide your own end-to-end logic. We just make all of this really easy by abstracting away the domain-agnostic engineering challenges.
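As a rough illustration of that philosophy, "writing your own features and business logic" can be as simple as ordinary Python functions. This is a hypothetical sketch, not Fennel's actual API - the function names and signatures here are invented for illustration:

```python
# Hypothetical sketch (not Fennel's actual API): domain-specific logic
# stays in plain Python that you own end to end.

def user_ctr(clicks: int, impressions: int) -> float:
    """A feature you define yourself: smoothed click-through rate."""
    # Laplace smoothing so brand-new users don't get a CTR of 0 or 1.
    return (clicks + 1) / (impressions + 2)

def rank(candidates: list, score_fn) -> list:
    """Business logic you decide: any re-ranking rule you like."""
    return sorted(candidates, key=score_fn, reverse=True)
```

The platform's job is then the domain-agnostic part: computing such features in realtime, serving them at low latency, and monitoring them - not dictating what they compute.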

How is this different from other ML platforms?

Many other ML platforms (e.g. H2O, Databricks, etc.) focus primarily on the model training side. We don't do any training and instead focus on everything that runs on the production path — extraction of realtime features, model scoring, monitoring, vector search, etc. As a result, these training-focused ML platforms are orthogonal and complementary to Fennel's platform. You're welcome to use any of them (or not - up to you) alongside Fennel.

How is this different from PyTorch or Tensorflow?

These are frameworks for defining and training ML models and are largely orthogonal to Fennel. Once you train a model, you still need to deploy it along with many other ML infra components — components that aren't part of either of these frameworks. Fennel provides these components.

Fun fact - some of us used to work on PyTorch at Facebook and are deeply inspired by its beautiful APIs and overall architectural philosophy - you'd see that the moment you see our API docs :)

How is it different from model deployment tools?

Deploying a realtime ML application like recommendations or fraud detection is about much more than deploying models. There are many more layers to it, like realtime feature extraction, feature/model monitoring, vector search, business logic serving, etc. Fennel's platform provides all of this, including model deployment.

This integrated nature of the platform saves months that would otherwise be spent stitching together and then maintaining a Frankenstein stack.

Do I have to buy into the whole platform?

Not at all. The platform is architected from the ground up to be modular. If you wanted, you could use only a subset of the components — in fact, any subset of your choice. Further, whatever subset you choose will work seamlessly with your homegrown tools or other MLOps tools.

How fast is it and how much scale can it handle?

A typical ranking/recommendation request with a few hundred candidates and a couple hundred features takes a few hundred milliseconds. Non-ranking applications (e.g. fraud detection) that extract features for a single item at a time should be a lot faster. Everything is horizontally scalable and has been operated in production at a scale of 1,000 recommendation QPS, which in turn involves millions of feature extractions / model scorings per second.
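A quick back-of-the-envelope check on those numbers — the candidate and feature counts below are assumed midpoints of the ranges quoted above, not measured values:

```python
# Rough arithmetic behind "millions of feature extractions per second".
qps = 1_000        # recommendation requests per second (from the text)
candidates = 300   # "a few hundred candidates" (assumed midpoint)
features = 200     # "a couple hundred features" (assumed midpoint)

# Each request scores every candidate once.
scorings_per_sec = qps * candidates            # 300,000 model scorings/s

# Each scoring reads all features for that candidate.
feature_values_per_sec = scorings_per_sec * features  # 60,000,000 values/s
```

Even with conservative assumptions, the production path handles tens of millions of feature values per second, which is why horizontal scalability matters more than raw single-node speed.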

How flexible is the platform?

We strongly suspect that our Python SDK is Turing complete (though we haven’t proven it yet). In other words, the platform is built from the ground up so that you could express arbitrary computations in it. And so it is extremely flexible.

What is the deployment model?

You can either deploy the platform in the Fennel Cloud or deploy inside your own cloud. In either case, we manage the cluster resulting in zero operational overhead for you.

How does it respect the safety and the privacy of the data?

Your data is an opaque byte string to us — we don't know what it means. We encrypt the data in transit and store it securely inside our cloud. If you want an even higher level of protection, the platform can also be deployed inside your cloud, in which case your data never leaves your cloud and is subject to the same security policies as the rest of your infrastructure.

Jumpstart your recommendation engine

