
connection_pool_per_downstream_connection - close upstream connection when downstream closes #37617

Open
Pawan-Bishnoi opened this issue Dec 11, 2024 · 5 comments
Labels
area/connection enhancement Feature requests. Not bugs or questions.

Comments

@Pawan-Bishnoi
Contributor

Title: Close upstream connection soon after downstream connection is closed.

Description:
Enabling connection_pool_per_downstream_connection increases the number of open connections on the server app side (which is expected).
What is worrisome, though, is that these connections remain open for much longer than needed and are only closed by the idle timeout (which takes at least 8 minutes, even if the TCP keepalive time is set very small).

In the case of connection_pool_per_downstream_connection, we don't have to wait for the idle timeout once the downstream is disconnected, since the mapping is 1:1.

@Pawan-Bishnoi Pawan-Bishnoi added enhancement Feature requests. Not bugs or questions. triage Issue requires triage labels Dec 11, 2024
@Pawan-Bishnoi
Contributor Author

Original issue: #12370 where this feature was added.

@Pawan-Bishnoi
Contributor Author

I read somewhere (unable to find the doc now) that the 8 minute number is coming from this calculation:

  1. keepalive_probes - 7 is the default value
  2. keepalive_interval - 75 seconds is the default value
  3. keepalive_time - tried setting it to 1 second. Is this the only configurable value?

keepalive_time + keepalive_probes × keepalive_interval = 1 + 7 × 75 = 526 seconds ≈ 8.77 minutes
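(Editor's note: the worst-case dead-peer detection time above can be checked directly; the second line uses the tuned values proposed later in this thread.)

```shell
# Worst-case TCP keepalive detection time:
#   keepalive_time + keepalive_probes * keepalive_interval

# Only keepalive_time tuned (1 s), kernel-style defaults of 7 probes / 75 s interval:
echo $(( 1 + 7 * 75 ))    # 526 seconds, ~8.8 minutes

# With all three fields tuned (time=30, probes=2, interval=50):
echo $(( 30 + 2 * 50 ))   # 130 seconds, ~2.2 minutes
```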

@wbpcode wbpcode added area/connection help wanted Needs help! and removed triage Issue requires triage labels Dec 13, 2024
@wbpcode
Member

wbpcode commented Dec 13, 2024

I think this logic could be optimized, but I can't think of anyone with related experience. Will mark this as help wanted first.

@wbpcode wbpcode removed the help wanted Needs help! label Dec 13, 2024
@wbpcode
Member

wbpcode commented Dec 13, 2024

Wait a minute, can't the idle_timeout (in the upstream cluster configuration) help?
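(Editor's note: a minimal sketch of setting an upstream idle timeout on a cluster, assuming the HttpProtocolOptions extension path used in recent Envoy versions; the cluster name is a placeholder.)

```yaml
clusters:
- name: inbound_cluster_name  # placeholder
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      common_http_protocol_options:
        # Close an upstream connection after 10 s with no active streams
        idle_timeout: 10s
```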

@Pawan-Bishnoi
Contributor Author

Oh, I see. The other two fields are also configurable. Trying this, will update:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
..
spec:
  configPatches:
  - applyTo: CLUSTER
    match:
      cluster:
        name: "inbound_cluster_name"
    patch:
      operation: MERGE
      value:
        upstream_connection_options:
          tcp_keepalive:
            keepalive_time: 30
            keepalive_probes: 2
            keepalive_interval: 50

Btw the connections are always supposed to be 1:1, right? Or can the upstream pool have more than one connection for any given downstream connection?
