Hello PipeCD Community,
First, I want to express my genuine appreciation for this project. After diving into PipeCD, I am incredibly impressed by its functionality and vision. I strongly believe PipeCD has the potential to become a leading CD tool, easily standing alongside or replacing current market leaders. The effort the maintainers and contributors have poured into this is evident, and I have massive respect for what you have built.
As I explored the architecture and documentation, I spent some time thinking about Piped.
During the presentation at KubeCon EU '26, the rationale for the Piped architecture was explained clearly: it prevents sensitive secrets from being processed directly within the central PipeCD components. I completely agree with this core philosophy. Security is paramount, and secrets absolutely should not pass through the control plane.
However, introducing an agent inherently adds operational overhead and adoption friction. When we look at the historical evolution of our ecosystem, a clear pattern emerges:
- **Config Management:** Early configuration tools required heavily managed agents, but Ansible eventually captured the largest market share largely due to its agentless, low-friction design.
- **Kubernetes Package Management:** Helm v2 relied on the in-cluster Tiller agent. The community eventually recognized the security and operational hurdles this caused, leading to the agentless architecture of Helm v3.
The industry consistently moves toward agentless architectures when the underlying security and state requirements can be solved through other means. This brings me to a question I’d love to discuss: Can we achieve PipeCD's strict security posture without the operational footprint of an agent?
Based on my experience architecting disaster recovery (DR) and deployment pipelines across several roles, I believe the answer lies in decoupling secret management from the deployment tool entirely. We can achieve this through tools like the External Secrets Operator (ESO).
In an agentless CD paradigm, the workflow looks like this:
1. **Cluster Bootstrap:** In a DR scenario where a cluster is lost, a new cluster is provisioned, and the CD tool (operating agentlessly) is granted access to it.
2. **Order of Operations (Sync Waves):** The CD tool begins syncing applications based on predefined hooks or waves. The very first application deployed is the External Secrets Operator.
3. **Secret Resolution:** ESO connects directly to external providers (AWS Secrets Manager, HashiCorp Vault, Azure Key Vault, etc.) and generates native Kubernetes Secret objects directly in the cluster.
4. **Application Deployment:** Subsequent applications are deployed. The manifests synced by the CD tool contain zero secret data—only references to the secrets that ESO has already securely provisioned.
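To make the workflow above concrete, here is a minimal sketch of the manifests the CD tool would sync in steps 3 and 4. All names (`prod-secret-store`, `db-credentials`, the AWS provider and remote key path) are illustrative assumptions, not taken from PipeCD or any specific setup:

```yaml
# Step 3: synced by the CD tool, contains no secret material itself.
# ESO resolves the remote reference and materializes a native Secret in-cluster.
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: prod-secret-store      # illustrative name
  namespace: my-app
spec:
  provider:
    aws:
      service: SecretsManager  # could equally be Vault, Azure Key Vault, etc.
      region: us-east-1
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: my-app
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: prod-secret-store
    kind: SecretStore
  target:
    name: db-credentials       # the native Secret ESO generates in the cluster
  data:
    - secretKey: password
      remoteRef:
        key: prod/my-app/db    # illustrative remote key path
        property: password
```

In step 4, the application manifests then reference that generated Secret by name only (e.g. via `secretKeyRef` or `envFrom` on a Deployment), so everything flowing through the CD tool is plain references—the secret values exist only in the external provider and in the cluster.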
In this model, no secrets ever pass through the CD tool's control plane, fulfilling PipeCD's primary security requirement, but without needing a dedicated, long-running agent in every target environment.
I believe exploring an agentless pathway—or offering it as a first-class alternative architecture—could significantly lower the barrier to entry for PipeCD and accelerate its adoption in the wider cloud-native community.
I would love to hear the maintainers' thoughts on this. What are the specific technical hurdles or edge cases in PipeCD's current roadmap that make Piped strictly necessary? Is an agentless future something the community would be open to exploring?
Thank you again for your time and for building such a fantastic tool.