
Using LangChain and Tecton to Enhance LLM Applications with Up-to-Date Context

By Sergio Ferragut
Published: August 26, 2024

In generative AI app development, it is now clear that integrating large language models (LLMs) with current contextual data is critical to improving the accuracy and relevance of their responses. This post explores how to improve AI applications by incorporating up-to-date context derived from feature pipelines, using LangChain and Tecton.

The Power of Context for AI

While LLMs are impressive for general tasks, their responses can be meaningfully improved when provided with relevant and current context. This contextual enhancement allows AI models to overcome their inherent limitations of static knowledge and lack of real-time information. By incorporating up-to-date facts, domain-specific details, or user-relevant information, LLMs can generate more accurate, nuanced, and tailored responses. 

This additional context serves multiple purposes: it helps disambiguate queries, reduces the likelihood of hallucinations or factual errors, enables more precise and targeted answers, and allows for deeper reasoning on specific topics. Essentially, context acts as a bridge between the LLM’s vast but fixed knowledge base and the dynamic, ever-changing real world, resulting in outputs that are not only more relevant but also more aligned with current realities and user needs. But building and serving this context is a significant data engineering undertaking. That’s where Tecton comes in.

Tecton is a powerful platform that offers a comprehensive solution for managing the entire lifecycle of context for structured and unstructured data. Beyond simple storage and retrieval of context like features and embeddings, Tecton handles all the data engineering, transformation, serving, and monitoring across batch, streaming, and real-time data sources. When combined with LangChain, a framework for developing LLM applications, the result is AI that’s not just intelligent, but contextually aware and highly relevant.

To demonstrate this approach, we’ll use a restaurant recommendation system as an example of using streaming data to provide up-to-date context. It’s important to note that while our example uses streaming data, Tecton’s capabilities extend to batch and real-time data as well, such as current GPS location for truly instantaneous context.
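Before diving in, it helps to see roughly where such context comes from. The rating summary used later in this post would be produced by a streaming feature pipeline declared in Tecton's SDK. The sketch below is purely illustrative and not from the original application: ratings_stream and user_cuisine stand in for a StreamSource and an Entity defined elsewhere in a feature repo, and decorator arguments vary across Tecton SDK versions.

from datetime import timedelta
from tecton import stream_feature_view, Aggregation

# Illustrative sketch: `ratings_stream` (a StreamSource) and `user_cuisine`
# (an Entity keyed by user_id and cuisine) are hypothetical names.
@stream_feature_view(
    source=ratings_stream,
    entities=[user_cuisine],
    mode="spark_sql",
    aggregations=[
        # rolling average rating and visit count per (user, cuisine)
        Aggregation(column="rating", function="mean", time_window=timedelta(days=365)),
        Aggregation(column="rating", function="count", time_window=timedelta(days=365)),
    ],
    aggregation_interval=timedelta(minutes=10),
)
def user_ratings_by_cuisine(ratings):
    return f"SELECT user_id, cuisine, rating, timestamp FROM {ratings}"

With a pipeline like this, Tecton continuously maintains the aggregations from the event stream and serves the latest values at low latency, which is what the rest of this post consumes.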

Starting with a Basic LangChain Model

I started with a simple LangChain application using OpenAI’s GPT-4o-mini model:

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_template(
    """You are a concierge service that recommends restaurants. 
        Respond to the user query about dining. 
        If the user asks for a restaurant recommendation, respond with a specific restaurant that you know, and suggest menu items. 
        User query:{user_query}""")

chain = prompt | model | StrOutputParser()

I tested this model with a user question that specifies the location and time and asks for the reasoning behind the recommendation:

inputs = {"user_query": "suggest a restaurant for tonight in Ballantyne area of Charlotte and tell me why you suggest it"}
chain.invoke(inputs).splitlines()

The output:

I recommend trying **The Capital Grille** in the Ballantyne area of Charlotte. 
The address is **7830 Gateway Village Blvd, Charlotte, NC 28277**.

The Capital Grille is an upscale steakhouse known for its dry-aged steaks and an extensive wine list. The ambiance is elegant and perfect for a nice dinner out. 

I suggest trying the **Bone-In Ribeye** or the **Filet Mignon**, both cooked to perfection. For a delicious side, the **Truffle Fries** or **Lobster Mac 'n' Cheese** are highly recommended. If you're in the mood for something lighter, their **Wedge Salad** is a refreshing choice.

Overall, it's a great place for a special occasion or a memorable dining experience. Enjoy your evening!

The response is pretty good, but it lacks personalization because the model doesn’t have any user information. What if the user doesn’t eat meat?

Enhancing the Model with Tecton’s Feature Platform

I then enhanced the model with Tecton’s feature platform, providing up-to-date user context from feature pipelines to the LLM prompt. I added the user’s rating summary by cuisine ({cuisines}) to the prompt, along with instructions on how to use this data:

personalized_prompt = ChatPromptTemplate.from_template(
    """You are a concierge service that recommends restaurants. 
        Respond to the user query about dining. 
        If the user asks for a restaurant recommendation, respond with a specific restaurant that you know and suggest menu items. 
        Respond to the user query by taking into account the user's dining history. 
        Show their rating of the cuisine you recommend.
        If the user does not provide a cuisine, choose a restaurant that fits a cuisine from their highest average ratings:
        User's dining history by cuisine: {cuisines}
        User query:{user_query}""")

personalized_chain = personalized_prompt | model | StrOutputParser()

Next, I created a Tecton client to fetch up-to-date user features from the feature platform:

import os

from tecton_client import TectonClient

# The client needs your Tecton account URL and a service-account API key
tecton_client = TectonClient(
    url="https://[your account].tecton.ai/",
    api_key=os.environ["TECTON_API_KEY"],
)

By retrieving features from the “cuisines_service” feature service for the current user and incorporating them into the input variables along with the user’s query, up-to-date context is added to the LangChain prompt:

user_id = 'a6afb498-b24f-4314-93df-5a5040cf1cb7'

# get features from Tecton
features = tecton_client.get_features(
    feature_service_name="cuisines_service",
    join_key_map={'user_id': user_id},
).get_features_dict()

# add the cuisines parameter to the LangChain inputs
inputs = {
    "cuisines": features['user_ratings_and_total_visits_by_cuisine'],
    "user_query": "suggest a restaurant for tonight in Ballantyne area of Charlotte and tell me why you suggest it",
}

# invoke the chain
personalized_chain.invoke(inputs).splitlines()

The output of the personalized chain is now:

For a delightful dining experience tonight in the Ballantyne area of Charlotte, I recommend **The Cowfish Sushi Burger Bar**. 

**Address:** 4720 Holly Crest Ln, Charlotte, NC 28277
This restaurant offers a unique fusion of American and Japanese cuisines, which aligns perfectly with your high average ratings in American (4.02) and your interest in Asian Fusion (3.95). 

**Suggested Menu Items:**
- **Burgushi Rolls**: A fun combination of sushi and burgers, perfect for a twist on traditional dishes.
- **The Cowfish Burger**: A classic American burger with a variety of toppings to choose from.
- **Lobster and Shrimp Roll**: A delicious sushi option for seafood lovers.
- **Signature Milkshakes**: Don't miss out on their creative milkshakes for dessert!

I believe you'll enjoy the vibrant atmosphere and the innovative menu that combines two of your favorite cuisines!

The result demonstrates the power of contextual AI. This recommendation is personalized based on the user’s up-to-date preferences, showcasing the effectiveness of integrating Tecton’s Feature Platform.
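As a refinement, the Tecton lookup can live inside the chain itself so that every invocation pulls fresh context automatically. Here is a minimal sketch, assuming the tecton_client, personalized_prompt, and model defined above; it uses LangChain’s RunnablePassthrough.assign to inject the user’s cuisine ratings into the prompt variables at invocation time:

from langchain_core.runnables import RunnablePassthrough

def fetch_cuisines(chain_input: dict):
    # Look up the user's up-to-date cuisine ratings from Tecton per request
    features = tecton_client.get_features(
        feature_service_name="cuisines_service",
        join_key_map={"user_id": chain_input["user_id"]},
    ).get_features_dict()
    return features["user_ratings_and_total_visits_by_cuisine"]

# assign() merges the fetched features into the inputs flowing to the prompt
contextual_chain = (
    RunnablePassthrough.assign(cuisines=fetch_cuisines)
    | personalized_prompt
    | model
    | StrOutputParser()
)

contextual_chain.invoke({
    "user_id": user_id,
    "user_query": "suggest a restaurant for tonight in Ballantyne area of Charlotte",
})

This keeps the feature retrieval and the prompt in one composable unit, so callers only supply the user ID and the query.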

Broader Applications and Real-Time Potential

While this example focuses on restaurant recommendations using streaming data from feature pipelines, Tecton’s Feature Platform capabilities extend to real-time data and complex feature engineering. Here are some potential applications across different industries (a brief code sketch follows the list):

  1. Financial services: Incorporate recent transaction history (streaming) and current market data (real-time) with sophisticated feature engineering for personalized investment advice.
  2. Customer support: Combine a history of account activity (streaming) with the latest customer events (real-time) to deliver relevant help for each specific support situation.
  3. Healthcare: Utilize up-to-date patient records (streaming) and real-time vital signs, applying advanced feature computations for more accurate diagnostic assistance.
  4. E-commerce: Leverage recent browsing history (streaming) and current inventory levels (real-time) with intricate feature calculations for targeted product recommendations.
  5. Smart city applications: Use real-time traffic data and streaming weather information, applying complex feature transformations for optimized route planning.
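To make the real-time side concrete: a Tecton feature service can combine precomputed batch and streaming features with data that only exists at request time, passed on each call. A hedged sketch, reusing the client from above; the fraud_detection_service name and request keys are hypothetical:

# Request-time data (the in-flight transaction) is passed alongside the
# join keys; Tecton computes real-time features from it on the fly.
features = tecton_client.get_features(
    feature_service_name="fraud_detection_service",  # hypothetical service
    join_key_map={"user_id": user_id},
    request_context_map={                            # hypothetical request keys
        "amount": 129.99,
        "merchant_category": "electronics",
    },
).get_features_dict()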

Key Insights

The integration of LangChain and Tecton’s Feature Platform offers several significant advantages:

  1. LangChain provides a robust framework for building sophisticated LLM-based applications.
  2. Tecton’s Feature Platform is crucial for managing the entire feature lifecycle, from engineering to serving, ensuring up-to-date and real-time data integration into AI models.
  3. The combination of LLMs with current context from well-managed feature pipelines significantly enhances the accuracy and relevance of AI-generated responses.
  4. This approach is versatile: it combines batch, streaming, and real-time data and supports complex feature transformations, making it possible to create whatever context is needed.

By leveraging Tecton’s comprehensive feature platform to provide fresh, relevant context to AI models, developers can create applications that don’t just generate responses, but deliver insights that are timely, relevant, and tailored to the current business situation.

As AI continues to evolve, the importance of a robust feature platform in enhancing model performance cannot be overstated. With LangChain controlling the AI workflow and Tecton building and serving features in real time, developers can build AI solutions that are contextually aware and markedly more useful.

For those working on AI applications, exploring the integration of streaming and real-time data through a comprehensive feature platform like Tecton is highly recommended. The resulting improvements in relevance, accuracy, and operational efficiency are substantial and can significantly enhance the user experience across a wide range of applications.
