Why AI Needs Better Context

November 7, 2024

As companies increasingly rely on AI to drive personalized, real-time decisions, the need for robust, fresh context has never been greater. When models shift from development to production, however, they often fail to perform as expected, leading to …

GenAI Engineering Horror Stories (And How to Avoid Them)

October 29, 2024

Who needs a haunted house to get some thrills when you’re an engineer trying to build a useful AI-powered app? Predictive machine learning has its share of engineering challenges to begin with. Now generative AI use cases add to the …

Enhancing LLM Chatbots: Guide to Personalization 

October 16, 2024

Summary: Tecton introduced new GenAI capabilities (in private preview) in the 1.0 release of its SDK that make it much easier to productionize RAG applications. This post shows how AI teams can use the SDK to build …
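As a rough sketch of the pattern the post describes, the snippet below enriches a RAG prompt with feature values served online through the SDK. This is illustrative, not the 1.0 GenAI preview API itself: the "prod" workspace, the user_context feature service, and the user_id join key are placeholder assumptions, and get_online_features is the SDK's general-purpose online retrieval path.

```python
import tecton

# Placeholder names: substitute your own workspace and feature service.
ws = tecton.get_workspace("prod")
fs = ws.get_feature_service("user_context")

def build_rag_prompt(user_id: str, question: str, retrieved_docs: list[str]) -> str:
    # Fetch the user's fresh feature values from the online store.
    features = fs.get_online_features(join_keys={"user_id": user_id}).to_dict()
    user_context = "\n".join(f"- {name}: {value}" for name, value in features.items())
    docs = "\n---\n".join(retrieved_docs)
    # Combine retrieved documents (the RAG part) with per-user context.
    return (
        "You are a support assistant.\n"
        f"Known user context:\n{user_context}\n\n"
        f"Relevant documents:\n{docs}\n\n"
        f"User question: {question}"
    )
```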

Why RAG Isn’t Enough Without the Full Data Context

September 20, 2024

Oh no! Your flight is delayed! You’re going to miss your connection and you need to change flights ASAP. Ever the savvy traveler, you start chatting with the new AI-powered customer support chatbot in the airline’s app. But does the chatbot know …

Enriching LLMs with Real-Time Context using Tecton

September 15, 2024

Large language models (LLMs) have revolutionized natural language AI, but they still face challenges when it comes to accessing up-to-date information and providing personalized responses. In its 1.0 release, Tecton introduces innovative solutions to …

Key Takeaways From Ray Summit

October 11, 2023

Read this post for a summary of interesting announcements, learnings, and insights from Ray Summit, from LLMs and RAG to generative AI and predictive ML.

Create Amazing Customer Experiences With LLMs & Real-Time ML Features

September 13, 2023

Did you know connecting large language models (LLMs) to a centralized feature platform can provide powerful, real-time insights from customer events? This post explains the benefits and how you can fit LLMs into production machine learning pipelines.
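To make that concrete, here is a minimal, self-contained sketch of the pattern: compute a sliding-window aggregate over recent customer events (the kind of real-time feature a feature platform would serve) and fold it into an LLM prompt. The event list, feature names, and prompt wording are all illustrative assumptions; in production the aggregation would run in the feature platform, not in application code.

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

# Illustrative only: in production these events would arrive on a stream
# and the windowed counts would be served from the feature platform.
EVENTS = [
    {"customer_id": "c42", "type": "failed_payment",
     "ts": datetime.now(timezone.utc) - timedelta(minutes=5)},
    {"customer_id": "c42", "type": "login",
     "ts": datetime.now(timezone.utc) - timedelta(minutes=30)},
]

def recent_event_features(customer_id: str,
                          window: timedelta = timedelta(hours=1)) -> dict:
    """Count a customer's events within a sliding window (a typical real-time feature)."""
    cutoff = datetime.now(timezone.utc) - window
    counts = Counter(
        e["type"] for e in EVENTS
        if e["customer_id"] == customer_id and e["ts"] >= cutoff
    )
    return {f"{event_type}_count_1h": n for event_type, n in counts.items()}

def build_support_prompt(customer_id: str, message: str) -> str:
    # Fold the live signals into the prompt sent to the LLM.
    feats = recent_event_features(customer_id)
    signals = "\n".join(f"- {k}: {v}" for k, v in feats.items()) or "- none"
    return (
        "Answer the customer using the live account signals below.\n"
        f"{signals}\n\nCustomer message: {message}"
    )

print(build_support_prompt("c42", "Why was my card declined?"))
```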