Introducing Tecton 1.0

By Cong Xu
Published: September 17, 2024

We are excited to announce Tecton 1.0, a major milestone release. It introduces new GenAI capabilities that make it simple for engineering teams to productionize rich, context-aware LLM applications, along with foundational core-platform improvements that boost performance, cost efficiency, and ease of use for both predictive and generative AI applications.

New GenAI Capabilities

Fully managed embeddings service

Tecton 1.0 introduces an Embedding feature type (available in Private Preview) that lets you declaratively define embeddings in Tecton using top open-source or proprietary embedding models. Tecton can generate embeddings on a schedule or in real time, and it comes out of the box with optimized resource management, data pipeline orchestration, and retrieval at scale.
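
As an illustration, here's a minimal sketch of declaring an embedding feature on a batch source. The source, entity, and model names are hypothetical, and exact parameter names may differ from Tecton's documentation:

```python
from datetime import timedelta

from tecton import Embedding, batch_feature_view


@batch_feature_view(
    sources=[restaurant_descriptions],  # hypothetical batch source
    entities=[restaurant],              # hypothetical entity
    mode="pandas",
    batch_schedule=timedelta(days=1),
    timestamp_field="timestamp",
    features=[
        # Tecton runs the model and manages the embedding pipeline.
        Embedding(
            column="description",
            model="sentence-transformers/all-MiniLM-L6-v2",
            name="description_embedding",
        ),
    ],
)
def restaurant_description_embeddings(descriptions):
    return descriptions[["restaurant_id", "timestamp", "description"]]
```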

Learn more about the Embedding Engine →

Prompt enrichment and management

With Tecton 1.0, you can now declaratively define your prompts as Tecton objects using the @prompt decorator (available in Private Preview). Tecton handles central management of your prompts and serves them to your LLM at low latency and high scale via a Tecton AgentService.

You can also enrich your prompts with additional feature data. Simply specify Tecton feature_views as sources for your prompt, and Tecton will handle serving those features to your prompt in real time. And you can use get_features_for_events to generate historical training datasets with point-in-time-accurate enriched prompts to train and fine-tune your LLM.

For example, using @prompt(sources=[user_info_fv]), you can define a prompt that incorporates real-time features for the current user_id from the user_info Feature View.
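
A minimal sketch of that flow, assuming a user_info_fv Feature View keyed on user_id (the import path and names here are illustrative, not the definitive API):

```python
from tecton_gen_ai.api import AgentService, prompt  # import path is an assumption


@prompt(sources=[user_info_fv])  # user_info_fv: hypothetical Feature View
def support_prompt(user_info):
    # Tecton fetches the user's features and injects them at request time.
    return f"You are a support agent. The user's name is {user_info['name']}."


service = AgentService(
    name="support_service",
    prompts=[support_prompt],
)
```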

Learn more about prompts in Tecton →

Features as tools

In Tecton 1.0, you can now also register your feature views as tools (available in Private Preview). These feature views are not part of the prompt; instead, the LLM can decide agentically whether to use them for additional information when answering a user question. Tecton automatically provides the feature views’ metadata to the LLM to help it make that determination. Simply pass the feature views you want to use as tools to your Tecton AgentService.

For example, AgentService(...tools=[recent_eats_fv]) allows your LLM to decide at inference time whether to query recent_eats_fv for additional information.
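
Continuing the hypothetical example above, registering a Feature View as a tool is a one-line addition to the AgentService:

```python
# recent_eats_fv is a hypothetical Feature View of a user's recent orders.
service = AgentService(
    name="dining_service",
    prompts=[support_prompt],
    tools=[recent_eats_fv],  # the LLM may call this when it needs more context
)
```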

Learn more about features as tools →

Knowledge

With Tecton Knowledge (available in Private Preview), it's easy both to ingest unstructured data into your vector database of choice and to serve the relevant information from that database to your LLM at response-generation time. Simply define a knowledge object in Tecton that points to your raw unstructured data source, and Tecton handles embedding generation and ingestion into the vector database, cost-efficiently and at high scale. Setting the knowledge parameter in your AgentService(...knowledge=[restaurant_knowledge]) lets the LLM access that knowledge when generating a response.
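
A sketch of the pattern, with hypothetical source and configuration names (the exact constructor for knowledge objects is an assumption; check Tecton's docs for the precise API):

```python
# All names here are illustrative.
restaurant_knowledge = source_as_knowledge(
    restaurant_reviews,                 # hypothetical unstructured data source
    vector_db_config=vector_db_config,  # your vector database of choice
    vectorize_column="review_text",     # Tecton embeds and ingests this column
)

service = AgentService(
    name="dining_service",
    prompts=[support_prompt],
    knowledge=[restaurant_knowledge],   # retrieved at response-generation time
)
```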

AI-generated features

Use AI models to generate new features in Tecton (available in Private Preview). We support any open-source, vendor, or proprietary model, including predictive ML models such as text classifiers and image analyzers, as well as LLMs. Like your other features in Tecton, these AI-generated features are centrally managed and can be deployed to production and served at low latency and high scale with a single tecton apply command.
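
One way to picture the pattern is a batch Feature View that applies an off-the-shelf model to a text column. Tecton 1.0's managed model support may expose a dedicated API, so treat this as a sketch of the idea rather than the exact interface:

```python
from datetime import timedelta

from tecton import Attribute, batch_feature_view
from tecton.types import String


@batch_feature_view(
    sources=[support_tickets],  # hypothetical batch source of raw ticket text
    entities=[user],            # hypothetical entity
    mode="pandas",
    batch_schedule=timedelta(days=1),
    timestamp_field="timestamp",
    features=[Attribute(name="ticket_sentiment", dtype=String)],
)
def ticket_sentiment(tickets):
    from transformers import pipeline  # any open-source or proprietary model

    classifier = pipeline("sentiment-analysis")
    tickets["ticket_sentiment"] = [
        result["label"] for result in classifier(list(tickets["text"]))
    ]
    return tickets[["user_id", "timestamp", "ticket_sentiment"]]
```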

Learn more about AI-generated features →

Core Platform Improvements

Remote Dataset Generation

You can now generate datasets for model training, fine-tuning, or evaluation from any environment (available in Private Preview). All you need is the tecton Python package: start remote Dataset jobs using start_dataset_job, and the output is easily accessible as a Tecton Dataset.
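
For example, from a notebook or CI job (the service name and event spine are hypothetical, and the job-handling method names are assumptions):

```python
import pandas as pd
import tecton

ws = tecton.get_workspace("prod")
fs = ws.get_feature_service("fraud_detection")  # hypothetical Feature Service

# Entity keys and timestamps to generate point-in-time-correct features for.
events = pd.DataFrame({
    "user_id": ["u_123", "u_456"],
    "timestamp": pd.to_datetime(["2024-09-01", "2024-09-02"]),
})

# Kick off the Dataset job remotely; Tecton runs it on managed compute.
job = fs.get_features_for_events(events).start_dataset_job(
    dataset_name="fraud_training_v1",  # parameter name is an assumption
)
job.wait_for_completion()  # method name is an assumption
training_df = job.get_dataset().to_dataframe().to_pandas()
```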

Learn more about remote dataset generation →

Compaction for Streaming Aggregations

You can now set compaction_enabled=True on your Streaming Feature Views (available in Private Preview). This tells Tecton to periodically perform compaction on the online store, optimizing read-time performance and minimizing write costs for Streaming Feature Views with time-window aggregations.
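
For instance, on a hypothetical streaming spend aggregation (source, entity, and field names are illustrative):

```python
from datetime import timedelta

from tecton import Aggregate, stream_feature_view
from tecton.types import Field, Float64


@stream_feature_view(
    source=transactions_stream,  # hypothetical stream source
    entities=[user],             # hypothetical entity
    mode="pandas",
    timestamp_field="timestamp",
    features=[
        Aggregate(
            input_column=Field("amount", Float64),
            function="sum",
            time_window=timedelta(hours=1),
        ),
    ],
    compaction_enabled=True,  # periodically compact the online store
)
def user_hourly_spend(transactions):
    return transactions[["user_id", "timestamp", "amount"]]
```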

Learn more about compaction for streaming aggregations →

Real-time compute and serving isolation

We introduced two new first-class objects for provisioning real-time infrastructure (available in Private Preview): Feature Server Groups to provision autoscaling nodes that serve feature vectors at low latency; and Transform Server Groups to provision live compute nodes that calculate Realtime Feature View values. Both are isolated in that they only serve or compute features within a predefined scope such as a workspace, preventing cross-team disruption.

Enhanced integration tests

Integration tests can now be initiated as part of tecton plan --integration-test or tecton apply --integration-test (available in Private Preview). These tests attempt to materialize data to fully exercise the materialization pipeline without actually writing to any store.

Learn more about enhanced integration tests →

Improved UX for real-time features

On Demand Feature Views have been renamed to Realtime Feature Views and enhanced with a new RealtimeContext object. By adding a context argument to a Realtime Feature View’s function signature, users get access to a request_timestamp that is consistent across both online and offline query paths.
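
A minimal sketch of reading the request timestamp (the request source and feature names are illustrative):

```python
from tecton import Attribute, RequestSource, realtime_feature_view
from tecton.types import Field, Float64

transaction_request = RequestSource(schema=[Field("amount", Float64)])


@realtime_feature_view(
    sources=[transaction_request],
    mode="python",
    features=[Attribute(name="hour_of_day", dtype=Float64)],
)
def transaction_time_features(transaction_request, context):
    # context.request_timestamp is consistent online and offline.
    ts = context.request_timestamp
    return {"hour_of_day": float(ts.hour)}
```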

Learn more about Tecton’s improved UX for real-time features →

And much more!

• Individual features can now accept metadata and descriptions, improving organization and discovery of Tecton features within large organizations (see the sketch after this list).

• Feature Tables now support caching, so you can reduce costs and latency for common lookup patterns.

• Timestamps can now be leveraged as features in Tecton.
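
As an illustration of per-feature metadata, something along these lines (the description and tags parameters are assumptions):

```python
from tecton import Attribute
from tecton.types import Int64

days_since_signup = Attribute(
    name="days_since_signup",
    dtype=Int64,
    description="Whole days since the user created an account",  # assumption
    tags={"owner": "growth-team"},                                # assumption
)
```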

In Summary

With Tecton 1.0, it’s never been easier for ML and data teams to activate all of their data for both predictive and generative AI applications. That means better model performance, faster time to production and cost savings along the way!

If you’re interested in learning more, check out our webinar next Tuesday, where we’ll talk more about Tecton 1.0 and walk through a live demo.
