Working Groups
Working Groups (WGs) are temporary groups created to achieve a specific goal that spans multiple SIGs.
The AI Gateway Working Group focuses on the intersection of AI and networking, particularly in the context of extending load-balancer, gateway and proxy technologies to manage and route traffic for AI Inference.
The AI Integration Working Group focuses on enabling seamless integration of AI/ML control planes with Kubernetes, as well as providing standardized patterns for deploying, managing, and operating AI applications at scale on Kubernetes.
Discuss and enhance support for batch workloads (e.g., HPC, AI/ML, data analytics, CI) in core Kubernetes. We want to unify the way users deploy batch workloads, improving portability and simplifying supportability for Kubernetes providers.
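As an illustration of the kind of workload this group focuses on, here is a minimal sketch of a Kubernetes `batch/v1` Job that runs a fixed number of parallel completions (the name and image below are placeholders, not something defined by the working group):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-compute        # placeholder name
spec:
  completions: 4          # run four successfully completed pods in total
  parallelism: 2          # run at most two pods at a time
  backoffLimit: 3         # retry a failing pod up to three times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: perl:5.34  # placeholder image
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```

Unifying how fields like `completions`, `parallelism`, and queueing semantics behave across batch frameworks is the kind of portability work the group pursues.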
This working group aims to provide a central location for the community to discuss the integration of Checkpoint/Restore functionality into Kubernetes.
A Working Group dedicated to promoting data protection support in Kubernetes, identifying missing functionality, and designing features to fill those gaps. The work involves collaboration with multiple SIGs such as Apps and Storage.
This work-in-progress doc tracks missing building blocks we have identified and what we are working on to fill the gaps.
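One building block in this space is the CSI volume snapshot API. A minimal sketch of a `VolumeSnapshot` manifest (the snapshot class and PVC names are placeholders) looks like:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot                      # placeholder name
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    persistentVolumeClaimName: data-pvc    # placeholder PVC to snapshot
```

Snapshots like this provide a point-in-time copy of a volume, which backup tooling can then build on.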
Enable simple and efficient configuration, sharing, and allocation of accelerators and other specialized devices.
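The API developed for this is Dynamic Resource Allocation (DRA). A sketch of requesting a device via a `ResourceClaim` and referencing it from a Pod (the API version may vary by Kubernetes release, and the device class, names, and image below are placeholders):

```yaml
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu                         # placeholder name
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.example.com     # placeholder device class
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  resourceClaims:
  - name: gpu                              # referenced by the container below
    resourceClaimName: single-gpu
  containers:
  - name: ctr
    image: ubuntu:22.04                    # placeholder image
    resources:
      claims:
      - name: gpu
```

Unlike the older device plugin model, a claim lets workloads describe device requirements declaratively and share or parameterize devices.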
This working group is dedicated to enabling automatic and efficient operation of etcd clusters in Kubernetes using an etcd-operator. It will discuss the requirements and use cases for such an operator and create a roadmap for its development.
Note: the etcd clusters, to be managed by the etcd-operator, are to support applications instead of Kubernetes itself.
In addition to the Slack channel, mailing list, and meetings, you can join our discussion through issues and PRs in the etcd-operator repo.
Explore and improve node and pod lifecycle in Kubernetes. This should result in better node drain/maintenance support and better pod disruption/termination. It should also improve node and pod autoscaling, application migration and availability, load balancing, de/scheduling, node shutdown, and cloud provider and third-party integrations.