We are proud to announce our biggest and most innovative release yet. Hopsworks 4.0 delivers game-changing innovations for building AI systems, whether they are batch, real-time, or LLM applications, through an AI Lakehouse infrastructure.
Try now:
https://hopsworks.ai/try
Introduction and Overview of Hopsworks 4.0
**AI Lakehouse Concept**: Hopsworks 4.0 is introduced as the first unified factory for AI systems, suitable for various applications (batch, real-time, large language models).
**Need for AI Lakehouse**: Motivated by the fact that Python is a second-class citizen in existing lakehouses, which are typically designed for SQL and Spark, and by the need for real-time data with high availability.
Key Enhancements in Hopsworks 4.0
**Performance Improvements**: Significant advances in real-time performance, along with improvements to the APIs, user interface, and overall user experience.
**Feature Query Service**: A new service that boosts throughput significantly when reading feature data, making data scientists more productive and happier.
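As a rough illustration of the read path the Feature Query Service accelerates, here is a minimal sketch using the Hopsworks Python client; the project login, feature view name, and version are assumptions for the example, not values from the release.

```python
import hopsworks

# Log in and get a handle to the project's feature store.
project = hopsworks.login()
fs = project.get_feature_store()

# "transactions_view" and version 1 are hypothetical; substitute your own feature view.
feature_view = fs.get_feature_view(name="transactions_view", version=1)

# Read feature data in batch, e.g. to build a training or scoring dataset.
# Per the release notes, offline reads like this are what the new
# Feature Query Service speeds up.
batch_df = feature_view.get_batch_data()
print(batch_df.head())
```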
Kubernetes Deployments
**Ease of Setup**: Introduction of seamless Kubernetes deployments, allowing quick setup across environments, including cloud providers and on-premises.
**Feature Monitoring**: A new capability for tracking data changes over time, enabling proactive model retraining and ensuring up-to-date and accurate predictions.
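To make the feature monitoring idea concrete, below is a minimal, generic drift check in Python. This is a conceptual sketch only, not the Hopsworks feature monitoring API; the window values and the 10% threshold are arbitrary assumptions.

```python
import pandas as pd

def mean_shift_ratio(reference: pd.Series, current: pd.Series) -> float:
    """Relative shift of the current window's mean versus a reference window."""
    ref_mean = reference.mean()
    if ref_mean == 0:
        return float("inf") if current.mean() != 0 else 0.0
    return abs(current.mean() - ref_mean) / abs(ref_mean)

# Hypothetical feature values: a reference window (e.g. training data)
# and the most recent serving window.
reference_window = pd.Series([10.1, 9.8, 10.3, 10.0, 9.9])
current_window = pd.Series([12.5, 12.9, 13.1, 12.7, 12.8])

# 0.1 (10%) is an arbitrary threshold chosen for illustration.
if mean_shift_ratio(reference_window, current_window) > 0.1:
    print("Feature drift detected: consider retraining the model.")
else:
    print("Feature distribution looks stable.")
```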
Large Language Models (LLMs) Support
**End-to-End LLM Management**: Supports the entire pipeline, from creating instruction datasets and fine-tuning to model serving with vLLM and KServe (see the serving sketch below).
**Vector Indexing**: Added to the feature store, this enables indexing documents and querying feature groups in a single pipeline.
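For a sense of what the serving side looks like, here is a minimal offline-inference sketch using vLLM directly. It does not show the Hopsworks or KServe integration; the model name and prompt are placeholders.

```python
from vllm import LLM, SamplingParams

# Model name is a placeholder; substitute the fine-tuned model you registered.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = ["Summarize what a feature store does in one sentence."]
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    # Each result holds the original prompt and the generated completion(s).
    print(output.prompt)
    print(output.outputs[0].text)
```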
Data Reliability and Availability
**Distributed Systems**: Both the online and offline stores are designed to tolerate hardware and network failures, ensuring system durability.
**Cross-Region Replication**: Unique to Hopsworks, providing high availability and seamless failover between data centers in different regions without data loss.
RonDB
**Feature Lookup Speed**: Single feature lookups complete in under a millisecond, with scalable, low-latency batch lookups (see the lookup sketch below).
**Resource Scalability**: RonDB allows scaling up and down in seconds, with built-in rate limits and quotas to balance resource distribution.
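As a rough sketch of what a low-latency online lookup looks like through the Hopsworks Python client, assuming the feature view name, version, and "customer_id" primary-key column shown here (they are placeholders, not values from the release):

```python
import hopsworks

project = hopsworks.login()
fs = project.get_feature_store()

# "transactions_view" and the "customer_id" key are hypothetical names.
feature_view = fs.get_feature_view(name="transactions_view", version=1)

# Prepare the online serving path, then fetch a single feature vector by primary key.
# Online vectors are served from RonDB, the online store behind Hopsworks.
feature_view.init_serving()
vector = feature_view.get_feature_vector(entry={"customer_id": 42})
print(vector)
```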