Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Adds optional kubeconfig persistence to `get_client` via a new parameter.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Report bugs in Issues

Welcome! 🎉

This pull request will be automatically processed with the following features:

🔄 Automatic Actions
📋 Available CommandsPR Status Management
Review & Approval
Testing & Validation
Cherry-pick Operations
Label Management
✅ Merge RequirementsThis PR will be automatically approved when the following conditions are met:
📊 Review ProcessApprovers and ReviewersApprovers:
Reviewers:
Available Labels
💡 Tips
For more information, please refer to the project documentation or contact the maintainers.
🧹 Nitpick comments (3)
tests/test_resource.py (2)
312-339: Consider adding an integration test for token resolution.

These tests verify the token resolution logic in isolation by duplicating the code from `get_client`. While this validates the algorithm, an integration test using `get_client` with mocked dependencies would provide stronger coverage against regressions if the logic is refactored.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_resource.py` around lines 312 - 339, Replace this unit-style duplicate of the token-parsing logic with an integration-style test that calls get_client so the real resolution code is exercised: create a fake or mocked client_configuration where client_configuration.api_key = {"authorization": "Bearer sha256~oauth-resolved-token"}, call get_client (mock out network/cluster calls as needed and/or monkeypatch save_kubeconfig to write to tmp_path), then assert that save_kubeconfig was invoked with the resolved token or that the written kubeconfig contains the expected token; reference get_client, client_configuration.api_key, and save_kubeconfig when updating the test.
265-270: Consider adding a test for write failure handling.

The `save_kubeconfig` function catches `OSError` and logs an error without re-raising. A test verifying this behavior (e.g., by mocking `os.open` to raise `OSError`) would confirm that write failures are handled gracefully without breaking client creation.

♻️ Example test for write failure

```python
def test_save_kubeconfig_write_failure(self, tmp_path, caplog):
    """Test that write failures are logged but don't raise."""
    kubeconfig_path = str(tmp_path / "kubeconfig")
    with patch(
        "ocp_resources.utils.kubeconfig.os.open",
        side_effect=OSError("Permission denied"),
    ):
        save_kubeconfig(path=kubeconfig_path, host="https://api.example.com:6443", token="test-token")
    assert "Failed to save kubeconfig" in caplog.text
    assert not os.path.exists(kubeconfig_path)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_resource.py` around lines 265 - 270, Add a new test that verifies save_kubeconfig handles write failures by mocking os.open to raise OSError; specifically, create a test named test_save_kubeconfig_write_failure (in tests/test_resource.py) that patches ocp_resources.utils.kubeconfig.os.open to raise OSError("Permission denied"), calls save_kubeconfig(path=kubeconfig_path, host=..., token=...), asserts the error message "Failed to save kubeconfig" appears in caplog, and asserts the kubeconfig file does not exist after the call.

ocp_resources/utils/kubeconfig.py (1)
20-22: Consider handling file read errors explicitly.

When `config_file` is provided but the file doesn't exist or is unreadable, `FileNotFoundError` or `IOError` will propagate up. This is inconsistent with the design goal of non-fatal write errors. Consider wrapping in try/except to match the error-handling pattern used for writes.

♻️ Optional: Add explicit error handling for config_file read

```diff
 elif config_file:
+    try:
         with open(config_file) as f:
             _config = yaml.safe_load(f)
+    except OSError:
+        LOGGER.error(f"Failed to read config file {config_file}", exc_info=True)
+        return
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ocp_resources/utils/kubeconfig.py` around lines 20 - 22, Wrap the config_file open/read in a try/except to catch FileNotFoundError/OSError/IOError and handle it non-fatally (like the write error pattern) so a missing/unreadable config doesn't propagate; surround the open(config_file) / yaml.safe_load(f) block and on error log a warning (using the module's logger) including the exception details, leave _config as None or empty, and continue execution (do not raise); update the block referencing config_file, _config, and yaml.safe_load to implement this behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@ocp_resources/utils/kubeconfig.py`:
- Around line 20-22: Wrap the config_file open/read in a try/except to catch
FileNotFoundError/OSError/IOError and handle it non-fatally (like the write
error pattern) so a missing/unreadable config doesn't propagate; surround the
open(config_file) / yaml.safe_load(f) block and on error log a warning (using
the module's logger) including the exception details, leave _config as None or
empty, and continue execution (do not raise); update the block referencing
config_file, _config, and yaml.safe_load to implement this behavior.
In `@tests/test_resource.py`:
- Around line 312-339: Replace this unit-style duplicate of the token-parsing
logic with an integration-style test that calls get_client so the real
resolution code is exercised: create a fake or mocked client_configuration where
client_configuration.api_key = {"authorization": "Bearer
sha256~oauth-resolved-token"}, call get_client (mock out network/cluster calls
as needed and/or monkeypatch save_kubeconfig to write to tmp_path), then assert
that save_kubeconfig was invoked with the resolved token or that the written
kubeconfig contains the expected token; reference get_client,
client_configuration.api_key, and save_kubeconfig when updating the test.
- Around line 265-270: Add a new test that verifies save_kubeconfig handles
write failures by mocking os.open to raise OSError; specifically, create a test
named test_save_kubeconfig_write_failure (in tests/test_resource.py) that
patches ocp_resources.utils.kubeconfig.os.open to raise OSError("Permission
denied"), calls save_kubeconfig(path=kubeconfig_path, host=..., token=...),
asserts the error message "Failed to save kubeconfig" appears in caplog, and
asserts the kubeconfig file does not exist after the call.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5d081c13-5920-4ed7-916c-69a33de07d53
📒 Files selected for processing (3)
- ocp_resources/resource.py
- ocp_resources/utils/kubeconfig.py
- tests/test_resource.py
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ocp_resources/utils/kubeconfig.py`:
- Around line 32-34: The code currently uses a truthy check on config_dict (if
config_dict:) which treats empty mappings like {} as "not provided" and lets
them fall through; change the conditional to explicitly test for presence (e.g.,
if config_dict is not None:) so an explicit empty mapping is accepted and
written as-is; update any related branches that assume falsy means missing to
use the same None check and ensure behavior for config_file and other branches
remains unchanged.
- Around line 35-39: Update the exception handling around YAML operations to
also catch yaml.YAMLError (and subclasses) in addition to OSError: where
yaml.safe_load(config_file) is called (the block that sets _config and logs via
LOGGER.error) and the similar block around yaml.safe_dump, catch yaml.YAMLError
so malformed or unserializable YAML is logged with exc_info=True and the
function continues as the docstring promises; reference the calls to
yaml.safe_load, yaml.safe_dump, the local variable _config, config_file, and
LOGGER.error when making the change.
- Around line 62-66: The current write (os.open(..., 0o600) + os.fdopen(...))
can leave an existing kubeconfig with world-readable perms after truncation;
instead write to a secure temporary file and atomically replace the target:
create the parent dir as before, create a temp file in the same directory (e.g.
using tempfile.NamedTemporaryFile(delete=False, dir=os.path.dirname(path)) or
build path + ".tmp"), open/create the temp with explicit 0o600 perms (os.open or
tempfile with os.fchmod), write yaml.safe_dump(_config) to the temp, fsync the
file descriptor, close it, then call os.replace(temp_path, path) to atomically
swap in the secure file; reference variables/functions: path, _config,
yaml.safe_dump, os.replace, os.open, os.fchmod (or tempfile.NamedTemporaryFile)
when making the change.
In `@tests/test_resource.py`:
- Around line 312-367: The tests currently duplicate the token-resolution branch
instead of exercising the real implementation; replace the in-test replication
with calls to the actual get_client(...) code path (passing
kubeconfig_output_path or equivalent parameter) so token resolution occurs in
production code, or refactor the resolution into a single helper (e.g.,
resolve_token_from_api_key) in the resource module and call that helper from
both get_client and the tests; update the two test functions
(test_save_kubeconfig_resolved_token_from_bearer and
test_save_kubeconfig_resolved_token_explicit_takes_precedence) to invoke
get_client(..., kubeconfig_output_path=kubeconfig_path) or the new
resolve_token_from_api_key helper instead of reimplementing the _resolved_token
logic, and keep assertions on the produced kubeconfig token.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9ae40dfd-f5b3-466c-a881-dfbf6f072c9e
📒 Files selected for processing (2)
- ocp_resources/utils/kubeconfig.py
- tests/test_resource.py
…oken resolution helper
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/test_resource.py`:
- Line 276: Calls to save_kubeconfig pass literal secret strings (e.g.,
token="test-token"), which triggers Ruff S106; replace these hardcoded token
keyword arguments with a non-literal variable or fixture (for example use a
module-level TEST_TOKEN constant, a pytest fixture like kube_token, or read from
an env var via os.environ.get("KUBE_TOKEN", "test-token")) and update every
save_kubeconfig(...) invocation in the test file (including the other
occurrences) to pass that variable (e.g., token=TEST_TOKEN or token=kube_token)
instead of the literal string.
- Around line 345-346: Test currently patches os.open but save_kubeconfig uses
tempfile.mkstemp, so change the test to patch tempfile.mkstemp to raise an
OSError (e.g. Permission denied) to exercise the write-failure path;
specifically, replace the patch target "ocp_resources.utils.kubeconfig.os.open"
with "tempfile.mkstemp" (or patch
"ocp_resources.utils.kubeconfig.tempfile.mkstemp" if importing tempfile in that
module) so save_kubeconfig hits the exception path and the test asserts the
expected handling.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ff283ab9-5479-4dbc-826e-bcb2e702516d
📒 Files selected for processing (3)
- ocp_resources/resource.py
- ocp_resources/utils/kubeconfig.py
- tests/test_resource.py
🚧 Files skipped from review as they are similar to previous changes (2)
- ocp_resources/resource.py
- ocp_resources/utils/kubeconfig.py
```python
)


def _resolve_bearer_token(
```
If none of them is passed, the function does nothing; please raise an error.
`_resolve_bearer_token` is a local helper to avoid code duplication: it factors out the Bearer token extraction logic that would otherwise be duplicated between the host + token and host + username + password paths.

It intentionally returns `None` rather than raising because:

- It's called for all `kubeconfig_output_path` flows, including `config_dict` and `config_file`, where no token exists in `client_configuration.api_key`, and that's perfectly valid
- A kubeconfig without a token is valid (e.g., cert-based auth)
- The actual validation ("do we have enough data?") is handled by `save_kubeconfig`, which now raises `ValueError` when it has nothing to work with

So the "raise on insufficient data" feedback is addressed in `save_kubeconfig`, where it belongs, not in this helper.
ocp_resources/utils/kubeconfig.py (outdated)

```python
            "current-context": "context",
        }
    else:
        LOGGER.warning("kubeconfig_output_path provided but not enough data to build kubeconfig")
```
Please raise an error if someone calls this function and we do nothing. This is an error.
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/test_resource.py (1)
245-265: Strengthen `config_file` test to assert byte-for-byte copy behavior.

Current assertions confirm semantic YAML equivalence, but they won't detect content rewriting. If this flow should preserve source content as-is, assert raw bytes equality too.

✅ Test enhancement

```diff
 source_path = str(tmp_path / "source-kubeconfig")
 with open(source_path, "w") as f:
     yaml.safe_dump(source_config, f)
+with open(source_path, "rb") as f:
+    source_raw = f.read()
@@
 with open(output_path) as f:
     saved_config = yaml.safe_load(f)
+with open(output_path, "rb") as f:
+    output_raw = f.read()
 assert saved_config == source_config
+assert output_raw == source_raw
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_resource.py` around lines 245 - 265, The test test_save_kubeconfig_with_config_file currently only checks YAML semantic equality; update it to also assert a byte-for-byte copy when save_kubeconfig is called with config_file by reading both source_path and output_path in binary mode and asserting their raw bytes are identical (keep or add this alongside the existing yaml.safe_load assertion) so that save_kubeconfig's behavior of preserving exact file content is validated; reference test name test_save_kubeconfig_with_config_file and variables source_path/output_path and function save_kubeconfig to locate where to add the binary-read/assert.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ocp_resources/utils/kubeconfig.py`:
- Around line 35-38: The code currently reads config_file into _config with
yaml.safe_load and later re-serializes it (yaml.dump), which alters
formatting/comments; instead preserve the file verbatim by copying the original
bytes: remove the yaml.safe_load/yaml.dump round-trip for the config_file branch
and use a binary copy (e.g., shutil.copyfile or open read/write in 'rb'/'wb') to
write the destination kubeconfig file; update the logic around the config_file
variable and eliminate use of yaml.safe_load/_dump for this path so the original
content is preserved exactly.
- Around line 23-24: The docstring in ocp_resources/utils/kubeconfig.py
currently states that errors are "logged but not raised," which is out-of-date;
update the docstring for the kubeconfig read/write function to state that the
function will raise exceptions on read or write failures (and that files are
created with 0o600 permissions) so the documentation matches runtime behavior;
refer to the kubeconfig module and the function that performs the kubeconfig
read/write in the docstring text when making this change.
---
Nitpick comments:
In `@tests/test_resource.py`:
- Around line 245-265: The test test_save_kubeconfig_with_config_file currently
only checks YAML semantic equality; update it to also assert a byte-for-byte
copy when save_kubeconfig is called with config_file by reading both source_path
and output_path in binary mode and asserting their raw bytes are identical (keep
or add this alongside the existing yaml.safe_load assertion) so that
save_kubeconfig's behavior of preserving exact file content is validated;
reference test name test_save_kubeconfig_with_config_file and variables
source_path/output_path and function save_kubeconfig to locate where to add the
binary-read/assert.
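The verbatim-copy behavior these prompts describe can be sketched with `shutil.copyfile`; the function name and `0o600` chmod here are illustrative, not the project's exact code:

```python
import os
import shutil


def copy_kubeconfig_verbatim(config_file: str, dest: str) -> None:
    """Copy the source kubeconfig byte-for-byte, preserving comments and formatting."""
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    shutil.copyfile(config_file, dest)  # binary copy; no yaml.safe_load/yaml.dump round-trip
    os.chmod(dest, 0o600)
```

Because no YAML round-trip occurs, comments, key ordering, and quoting in the source file all survive, which is exactly what a byte-for-byte test assertion would verify.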
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 76566eb3-b29d-467e-9ac5-306056ad3afd
📒 Files selected for processing (3)
- ocp_resources/resource.py
- ocp_resources/utils/kubeconfig.py
- tests/test_resource.py
🚧 Files skipped from review as they are similar to previous changes (1)
- ocp_resources/resource.py
ocp_resources/resource.py (outdated)

```python
        host (str): host for the cluster
        verify_ssl (bool): whether to verify ssl
        token (str): Use token to login
        kubeconfig_output_path (str): path to save the kubeconfig file. If provided, the kubeconfig will be saved to this path.
```
What is the purpose of this new feature?
To me this looks like a CLI tool feature that generates a kubeconfig.
For our project, which is API usage, it should be saved to a tmp folder and returned to the caller; the caller can then use it in the code.
For example, in places where you have more than one cluster and only have cluster credentials.
So my comment is valid: we can rename the arg to `generate_kubeconfig` and return the path to the caller, right?
So the user would not be able to control when the file is written to?
- Replaced `kubeconfig_output_path: str` with `generate_kubeconfig: bool` in `get_client()`
- `save_kubeconfig()` now writes to a temp file and returns the path
- Temp file is auto-cleaned via `atexit` on process exit
- When `generate_kubeconfig=True`, `get_client()` returns a `(DynamicClient, str)` tuple
- Added `@overload` signatures for proper mypy type narrowing
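The `@overload` pattern mentioned in the last bullet can be sketched like this. The signatures are illustrative, and `DynamicClient` is replaced by a placeholder class so the sketch is self-contained rather than depending on the kubernetes client library:

```python
from typing import Literal, overload


class DynamicClient:  # placeholder for kubernetes.dynamic.DynamicClient
    pass


@overload
def get_client(generate_kubeconfig: Literal[True]) -> tuple[DynamicClient, str]: ...
@overload
def get_client(generate_kubeconfig: Literal[False] = False) -> DynamicClient: ...


def get_client(generate_kubeconfig: bool = False):
    client = DynamicClient()
    if generate_kubeconfig:
        # hypothetical temp path; the real code would come from save_kubeconfig()
        return client, "/tmp/kubeconfig"
    return client
```

With `Literal[True]` / `Literal[False]` overloads, mypy narrows the return type at each call site: `get_client()` is a `DynamicClient`, while `get_client(generate_kubeconfig=True)` is a `(DynamicClient, str)` tuple, with no casts needed.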
What

Add a `kubeconfig_output_path` parameter to `get_client()` to save the kubeconfig to a file.

Closes #2678

Changes

- `ocp_resources/resource.py`: new `kubeconfig_output_path` parameter, token resolution from OAuth `api_key`
- `ocp_resources/utils/kubeconfig.py`: new `save_kubeconfig()` utility function
- `tests/test_resource.py`: 11 tests covering all flows, permissions, edge cases

Key design decisions

- On failure, errors are logged and the function returns `None`; file mode `0o600` is set atomically via `os.open()` (no race condition)

Summary by CodeRabbit
New Features
Tests