
looks like Pipelines Controller is not freeing memory #7691

Open
jhutar opened this issue Feb 21, 2024 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@jhutar

jhutar commented Feb 21, 2024

Expected Behavior

I would expect that, after some time under load, the Pipelines Controller's memory consumption would become constant, i.e. the Pipelines Controller would start freeing memory.

Actual Behavior

This is a memory graph of a Pipelines Controller processing 10k very simple PipelineRuns, each with a single Task that just prints "hello world" (Pipeline, PipelineRun).

These PipelineRuns were running from about 13:00 to 15:30; the script created new PipelineRuns in a way that ensured at most 100 of them were pending/running at any time:

[Graph: Pipelines Controller memory usage during the run, ~13:00–15:30]

Is this expected, or is this some sort of memory leak?

Steps to Reproduce the Problem

  1. Run 10k PipelineRuns and observe Pipelines Controller memory consumption over time
  2. This was automated in this repo with the signing-tr-tekton-bigbang scenario; a minimal sketch of the load-generation loop is shown below.
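
For illustration only, a minimal sketch of such a load-generation loop, assuming the generated Tekton v1 clientset, a preexisting Pipeline named `hello-world`, and a namespace `benchmark` (all of these names are hypothetical, not part of the actual scenario):

```go
// Hypothetical sketch of the load pattern described above: submit 10k
// PipelineRuns while never letting more than 100 be pending/running at once.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
	tektonclient "github.com/tektoncd/pipeline/pkg/client/clientset/versioned"
)

func main() {
	ctx := context.Background()
	ns := "benchmark" // hypothetical namespace

	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := tektonclient.NewForConfigOrDie(cfg)

	created := 0
	for created < 10000 {
		// Count PipelineRuns that have not finished yet.
		prs, err := client.TektonV1().PipelineRuns(ns).List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		active := 0
		for i := range prs.Items {
			if !prs.Items[i].IsDone() {
				active++
			}
		}

		// Top up to at most 100 concurrent PipelineRuns.
		for active < 100 && created < 10000 {
			pr := &pipelinev1.PipelineRun{
				ObjectMeta: metav1.ObjectMeta{GenerateName: "hello-world-"},
				Spec: pipelinev1.PipelineRunSpec{
					PipelineRef: &pipelinev1.PipelineRef{Name: "hello-world"}, // hypothetical Pipeline
				},
			}
			if _, err := client.TektonV1().PipelineRuns(ns).Create(ctx, pr, metav1.CreateOptions{}); err != nil {
				panic(err)
			}
			active++
			created++
		}
		fmt.Printf("created=%d active=%d\n", created, active)
		time.Sleep(5 * time.Second)
	}
}
```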

Additional Info

  • Kubernetes version:

The cluster is already gone, but it was ROSA OpenShift 4.14.11 with 5 AWS EC2 m6a.2xlarge compute nodes.

  • Tekton Pipeline version:
  • chains-info: v0.19.0
  • pipelines-as-code-info: v0.22.4
  • pipelines-info: v0.53.2
  • triggers-info: v0.25.3
  • openshift-pipelines-operator-cluster-operations: v0.69.0
  • openshift-pipelines-operator-lifecycle: 1.13.0

Reported this together with tektoncd/chains#1058

@jhutar jhutar added the kind/bug Categorizes issue or PR as related to a bug. label Feb 21, 2024
@vdemeester
Member

@jhutar are the PipelineRuns cleaned up (deleted) from the cluster in that scenario? If not, they will be "cached" by the informers and thus kept in memory until they are deleted from the cluster (and the informer cache is updated).
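
For illustration, a minimal sketch of such a cleanup, assuming the generated Tekton v1 clientset (the helper name and its arguments are hypothetical); in practice something like `tkn pipelinerun delete` achieves the same effect:

```go
// Hypothetical cleanup helper: deleting finished PipelineRuns from the
// cluster also removes them from the controller's informer cache, which is
// what actually frees the memory described above.
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	tektonclient "github.com/tektoncd/pipeline/pkg/client/clientset/versioned"
)

func pruneFinishedPipelineRuns(ctx context.Context, client tektonclient.Interface, ns string) error {
	prs, err := client.TektonV1().PipelineRuns(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for i := range prs.Items {
		pr := &prs.Items[i]
		// IsDone() is true once the PipelineRun has succeeded or failed.
		if !pr.IsDone() {
			continue
		}
		if err := client.TektonV1().PipelineRuns(ns).Delete(ctx, pr.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```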

@jhutar
Author

jhutar commented Mar 27, 2024

They are not deleted, so this means Pipelines Controller memory usage will just grow with the number and size of PipelineRuns in the cluster. If there are excessively big PipelineRuns (maybe with a long script), this might become a problem even for a smaller number of PipelineRuns.

I understand this is a property of the underlying Go library (the informers you mentioned), but is there a way to drop the oldest records from the cache, or something similar, just to keep memory usage flat?

@vdemeester
Member

I understand this is a property of the underlying Go library (the informers you mentioned), but is there a way to drop the oldest records from the cache, or something similar, just to keep memory usage flat?

We need to explore this, yes. There might be ways to "optimize" or filter some objects once they are done.
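
One possible direction, purely as a generic client-go sketch of the "optimize/filter" idea and not something Tekton does today: a transform installed on the shared informer can strip bulky fields from objects before they enter the in-memory cache. The fields dropped here are illustrative assumptions; Tekton would need to decide what is actually safe to remove.

```go
// Sketch only: slim down objects before they are stored in the informer
// cache, so each cached object (including "done" ones) costs less memory.
import (
	"k8s.io/apimachinery/pkg/api/meta"
)

func slimDownCachedObjects(obj interface{}) (interface{}, error) {
	if acc, err := meta.Accessor(obj); err == nil {
		// Managed fields and the last-applied-configuration annotation are
		// typically not needed for reconciliation and can be large.
		acc.SetManagedFields(nil)
		if anns := acc.GetAnnotations(); anns != nil {
			delete(anns, "kubectl.kubernetes.io/last-applied-configuration")
			acc.SetAnnotations(anns)
		}
	}
	return obj, nil
}

// Must be installed before the informer is started, e.g.:
//   informer.SetTransform(slimDownCachedObjects)
```

A filtered informer (e.g. a list/watch restricted by label selector) would be another option, but it only helps if "done" objects can be distinguished by labels at list/watch time.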
