Tecton

How Do Real-Time Features Work in Machine Learning?

Last updated: October 31, 2023

From detecting fraudulent transactions in milliseconds to delivering recommendations for new products while a customer shops, real-time machine learning is gaining traction. For use cases like these, models need access to extremely current, fresh context through real-time features. 

In this post, I’ll give an overview of real-time features—the benefits, the importance of accurate and efficient data pipelines, and a step-by-step guide on how to build your own data pipelines. 

What are real-time features in ML?

ML features are transformations of raw data that are used as input signals for ML models. In feature pipelines, there are two categories of features: pre-computed and real-time features.

  1. Pre-computed features are materialized ahead of prediction time and can be stored in a feature store, where a model can use them both during training and at prediction time. Typically, they are computed by transforming batch or streaming data. 
  2. Real-time features are computed at prediction time; i.e., at the same time a request is made to the model. They may use request data, materialized data from the feature store, or a combination of both.
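The distinction can be sketched in plain Python. This is an illustrative sketch, not a Tecton API: the `feature_store` dict stands in for a real feature store, and all names are hypothetical.

```python
# Stand-in for a feature store holding pre-computed (materialized) features.
feature_store = {
    "user_42": {"avg_purchase_amount_30d": 50.0},  # batch-computed nightly
}

def real_time_features(request):
    """Computed at prediction time from the request plus stored features."""
    stored = feature_store[request["user_id"]]
    return {
        # Request-only feature: available only once the request arrives.
        "cart_item_count": len(request["cart"]),
        # Combined feature: current context compared to a pre-computed value.
        "amount_vs_30d_avg": request["amount"] / stored["avg_purchase_amount_30d"],
    }

features = real_time_features(
    {"user_id": "user_42", "cart": ["sku1", "sku2"], "amount": 100.0}
)
```

Here the cart size could never be materialized ahead of time, while the ratio feature blends fresh request data with a value the feature store already holds.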

Real-time feature pipelines typically capture context that is necessary for making a prediction but cannot be computed ahead of time. An example would be a model on a shopping site that needs context such as a user’s current shopping cart state in order to serve the most accurate, personalized product recommendations. 

Real-time features can also be used to compare some current context to historical data—a fraud model, for instance, might compare a customer’s current purchase details with their historical averages to determine if the purchase is suspicious. Finally, real-time features can be helpful in computing more advanced features, such as feature crosses, which tap into non-linear interactions that can extract more nuanced insights from data.  
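Both patterns above can be sketched in a few lines. The function and field names below are hypothetical, chosen only to illustrate a fraud-style comparison and a simple feature cross.

```python
def fraud_signals(purchase, history):
    """Hypothetical real-time fraud features computed at request time."""
    # Compare the current purchase amount to the customer's historical average.
    amount_ratio = purchase["amount"] / history["avg_amount"]
    # A simple feature cross: country x category, capturing a non-linear
    # interaction between two raw signals.
    country_category_cross = f'{purchase["country"]}_x_{purchase["category"]}'
    return {
        "amount_ratio": amount_ratio,
        "country_category_cross": country_category_cross,
    }

signals = fraud_signals(
    {"amount": 900.0, "country": "US", "category": "electronics"},
    {"avg_amount": 45.0},
)
```

A purchase twenty times larger than the customer's average is exactly the kind of signal a fraud model would want at prediction time, not hours later.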

Performance benefits of real-time features for ML

Not only are real-time features essential for many real-time ML use cases, they have a number of added benefits, too. They can lead to lower feature storage and feature computation costs, because on-demand feature views (ODFVs) for serving real-time features do not materialize data in the feature store. This is especially useful when it would be expensive to compute all possible feature crosses over your entire data source as opposed to computing feature crosses over just the training and inference data samples. 

Real-time features can also reduce third-party data costs on sites where most users visit infrequently. For example, a user renewing an insurance policy annually may rarely visit the insurer’s website, but a real-time request can fetch the most relevant quotes based on that user’s inputs. With real-time features, third-party data is only requested for users who actually land on your site, which translates into fewer requests and lower costs.

Real-time features can also result in more stable ML pipelines. When working with feature embeddings, for instance, it’s often necessary to use expensive dimensionality reduction techniques before inputting features into a model. Embeddings are often not stable and may change frequently, so you would instead want to store stable user and product data in your database, and compute embeddings on the fly with the latest version of your embedding model. 

Finally, real-time features can be easier to work with and build upon than their pre-computed counterparts. With real-time features, there’s no waiting for features to be materialized or concern about how features are stored. This simplicity makes it easier to integrate new real-time features into your pipelines, leading to quicker development processes and more effortless iterations.

How real-time features work in Tecton

Tecton’s On-Demand Feature Views (ODFVs) allow you to create your own real-time feature pipelines customized to your use case. These real-time features can be defined as standard Python transformations. ODFVs operate similarly to Tecton’s Batch and Stream Feature Views, but instead of only performing transformations on data sources, they can perform transformations on request sources and pre-computed feature sources. By combining Batch/Stream Feature Views with ODFVs, the entire real-time feature pipeline can be defined in Python using Tecton’s declarative feature engineering framework.

ODFVs are included in a feature service, just like batch or streaming feature views; however, instead of materializing features into your feature store, any computations are executed at request time. In the diagram below, you can see that while standard feature views materialize data in an offline and online feature store, ODFVs are added at the end of the feature service pipeline in order to compute features ad-hoc and in real time.
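The serving flow can be sketched in plain Python. This is an illustrative model of the request path, not Tecton's actual internals; the `online_store` dict and function names are assumptions made for the sketch.

```python
# Features materialized ahead of time by batch/stream feature views.
online_store = {
    "prod_100001": {"product_title": "steel angle bracket"},
}

def odfv_transform(request, stored):
    # ODFV-style transformation: runs only when the prediction request arrives,
    # combining request data with pre-computed features.
    return {
        "query_matches_title": request["search_term"].lower()
        in stored["product_title"].lower()
    }

def get_features(join_key, request):
    stored = online_store[join_key]             # lookup: pre-computed features
    computed = odfv_transform(request, stored)  # compute: real-time features
    return {**stored, **computed}

feats = get_features("prod_100001", {"search_term": "Angle"})
```

The key point is the ordering: the store lookup returns already-materialized values, and the on-demand transformation runs last, at request time, over both the request and those stored values.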

Example: Tecton On-Demand Feature View

Imagine that you are a retail company looking to recommend relevant products based on a user’s search query. You would be able to pre-compute and store product attributes in advance, but a user’s search item would only be available once a user landed on the website and entered their query. Thus, to serve relevant product recommendations, you would want to compute the similarity between a product name and a search query in real time. First, you might define your batch feature view to store product attributes as shown below:

from tecton import batch_feature_view, materialization_context
from datetime import datetime, timedelta
from Search.entities import search_product
from Search.data_sources import product_attributes_src

@batch_feature_view(
    description='''product attributes from the product attributes table, updated daily''',
    entities=[search_product],
    sources=[product_attributes_src],
    batch_schedule=timedelta(days=1),
    incremental_backfills=True,
    mode='spark_sql'
)
def product_attributes(product_attributes_src, context=materialization_context()):
  return f"""
    select *,
    TO_TIMESTAMP('{context.end_time}') - INTERVAL 1 MICROSECOND as TIMESTAMP 
    from {product_attributes_src}
        pivot (
            MIN(value) AS v
            for name in ('MFG Brand Name', 'Color Family', 'Material', 'Color/Finish', 'Color')
        )
        where product_uid is not null
  """

Then, you would define your ODFV to compute the similarity between a product and a search query as shown below:

from tecton import RequestSource, on_demand_feature_view
from tecton.types import String, Float64, Field
from Search.features.product_attributes import product_title

request_schema = [
                  Field('search_term', String),
                  Field('product_uid', String)
                  ]
search_query = RequestSource(schema=request_schema)

output_schema = [
  Field('jaccard_similarity_query_token_title_token', Float64)
]


@on_demand_feature_view(
  description='''Jaccard similarity between the tokenized input query and the product title, computed in real-time''',  
  sources=[search_query, product_title],
  schema=output_schema,
  mode='python'
)
def search_query_product_similarity(search_query, product_title):
  def jaccard(list1, list2):
    intersection = len(set(list1).intersection(list2))
    union = len(list1) + len(list2) - intersection
    # Guard against empty inputs to avoid division by zero
    return float(intersection) / union if union else 0.0
    
  # Normalize and tokenize the search query
  search_term = search_query.get('search_term').lower()
  tokenized_query = search_term.split(' ')

  # Normalize and tokenize the product title
  title = product_title.get('product_title').lower()
  tokenized_title = title.split(' ')

  # Compute the Jaccard similarity between the two token sets
  jaccard_similarity = jaccard(tokenized_query, tokenized_title)
  
  return {
    'jaccard_similarity_query_token_title_token': jaccard_similarity
    }

After creating your ODFV, requests that happen in real time can now be transformed into real-time features that your model can consume to recommend relevant products.
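To sanity-check the transformation logic locally, you can exercise it with plain dictionaries, mimicking what the feature service passes in at request time. The standalone sketch below re-implements the ODFV body so it runs without a Tecton workspace; it is a testing aid, not how Tecton serves features in production.

```python
def jaccard(list1, list2):
    # Jaccard similarity: |intersection| / |union| of the two token sets.
    intersection = len(set(list1).intersection(list2))
    union = len(list1) + len(list2) - intersection
    return float(intersection) / union if union else 0.0

def search_query_product_similarity(search_query, product_title):
    # Mirrors the ODFV body above, runnable standalone for local testing.
    tokenized_query = search_query["search_term"].lower().split(" ")
    tokenized_title = product_title["product_title"].lower().split(" ")
    return {
        "jaccard_similarity_query_token_title_token": jaccard(
            tokenized_query, tokenized_title
        )
    }

result = search_query_product_similarity(
    {"search_term": "Angle Bracket"},
    {"product_title": "simpson strong-tie angle"},
)
```

With one shared token ("angle") out of four distinct tokens, the similarity is 0.25; a query closely matching the title would score near 1.0, pushing that product up the recommendation ranking.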

Interested in learning more about real-time machine learning and how it can be used to deliver real-time recommendations? Check out this blog post, “A Practical Guide to Building an Online Recommendation System.”
