
Pub/q1 dev #4000

Closed

priti-parate wants to merge 271 commits into pub/build_stream from pub/q1_dev

Conversation

@priti-parate (Collaborator)

Issues Resolved by this Pull Request

Please be sure to associate your pull request with one or more open issues. Use the word Fixes as well as a hashtag (#) prior to the issue number in order to automatically resolve associated issues (e.g., Fixes #100).

Fixes #

Description of the Solution

Please describe the solution provided and how it resolves the associated issues.

Suggested Reviewers

If you wish to suggest specific reviewers for this solution, please include them in this section. Be sure to include the @ before the GitHub username.

pullan1 and others added 30 commits February 4, 2026 20:12
Signed-off-by: pullan1 <sudha.pullalaravu@dell.com>
Fix aarch64 base image fact name for package collection
Signed-off-by: sakshi-singla-1735 <sakshi.s@dell.com>
Support for Configuring Additional Container Images from User Registry to Pulp
Custom slurm confs paths now read from omnia_core rather than oim
Updating additional_packages.json for aarch64 with service_k8s functional groups
jagadeeshnv and others added 28 commits February 13, 2026 13:06
cleanup of files under offline_repo dir during pulp cleanup
Update service_k8s.json for doca package
Signed-off-by: Vrinda_Marwah <vrinda.marwah@dell.com>
critical service detection in uninstall operations
Fix for issue where SSH key changes on slurmctld in a live cluster
Added changes to handle munge key updates for live slurm cluster
Signed-off-by: pullan1 <sudha.pullalaravu@dell.com>
Fix for local repo failure caused by CUDA run package download issue
…lback after upgrade to 2.1 (#3978)

* Added user guidance messages in rollback_omnia.yml and upgrade_cluster.yml

* Modification of Rollback guidance message

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml
Add user registry to crio.conf for image pull on K8s nodes
Signed-off-by: Vrinda_Marwah <vrinda.marwah@dell.com>
Fix for return status in execute command for file type
* Initial set of changes for iDRAC Telemetry add and remove node

* Ansible link and pylint fixes

* Ansible lint fixes

* Updated Copyrights to 2026

* Addressed the comments
* LDMS slurm node add/delete

* pr review comments update
Signed-off-by: Vrinda_Marwah <vrinda.marwah@dell.com>
Mask Docker Credentials in Local Repo Package Logs
…in JSON files.

Signed-off-by: pullan1 <sudha.pullalaravu@dell.com>
Fix for local_repo.yml passing even when JSON files contain invalid package names
…3986)

* Node drain logic for deletion

* Shell instead of command for piping

* lint fixes

* Updated permission for slurmdbd
Added new force_conf option to allow confs to pass through validation

* removed new file

* Renamed force_conf to skip_merge
package_identifier += f":{package['tag']}"

with remote_creation_lock:
    if package['package'].startswith('docker.io/') and docker_username and docker_password:

Check failure

Code scanning / CodeQL

Incomplete URL substring sanitization (High)

The string "docker.io/" may be at an arbitrary position in the sanitized URL.

Copilot Autofix (AI, 2 days ago)

In general, the correct way to distinguish Docker Hub images (or any particular registry) is to parse the image reference to separate the registry host from the rest of the path, and then compare the host (or a normalized form of it) to the expected value(s). Relying on raw string prefix checks on the entire image string is fragile and is what CodeQL is warning about.

The best fix here, without changing existing functionality, is:

  1. Reuse get_repo_url_and_content(package) to derive a normalized base URL for the registry associated with package['package'].
  2. From that base URL, extract the hostname using urllib.parse.urlparse.
  3. Decide whether to use Docker Hub auth by comparing that hostname to the known Docker Hub host (registry-1.docker.io is what get_repo_url_and_content already maps docker.io to).
  4. Keep the existing behavior: only if the registry is Docker Hub and docker_username/docker_password are provided do we call create_container_remote_with_auth; otherwise we call create_container_remote.

Concretely:

  • Add an import for urlparse from urllib.parse at the top of common/library/module_utils/local_repo/download_image.py.
  • In the elif "tag" in package: block, replace the condition if package['package'].startswith('docker.io/') and docker_username and docker_password: with logic that:
    • Calls get_repo_url_and_content(package['package']) to retrieve base_url.
    • Uses urlparse(base_url).hostname to get the registry host.
    • Checks if registry_host in ("docker.io", "registry-1.docker.io") and docker_username and docker_password: (or simply == "registry-1.docker.io" given the current mapping) before choosing the auth path.

We must be careful not to change other variables such as base_url and package_content that are already computed earlier in the function; the new logic should use package['package'] solely to determine the registry host for the auth decision and leave the rest of the flow unchanged.
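To illustrate why the raw prefix check is fragile and how host-based comparison fixes it, here is a small standalone sketch. The `registry_host` helper below is hypothetical and written only for this example (the suggested fix reuses the existing `get_repo_url_and_content` instead); it mimics Docker's convention that a first path component without a dot or colon implies the default `docker.io` registry.

```python
from urllib.parse import urlparse

def registry_host(image_ref: str) -> str:
    """Extract the registry host from a container image reference.

    Hypothetical helper for illustration, not part of the Omnia codebase.
    Docker treats the first path component as a registry host only if it
    contains a dot or colon (or is 'localhost'); otherwise docker.io is implied.
    """
    first = image_ref.split("/", 1)[0]
    if "." in first or ":" in first or first == "localhost":
        # Prepend '//' so urlparse treats the reference as having a netloc.
        return urlparse("//" + image_ref).hostname
    return "docker.io"

# A reference from another registry that merely *contains* 'docker.io/' in
# its path must not be treated as a Docker Hub image:
ref = "myregistry.example.com/docker.io/library/nginx"
print(registry_host(ref))                         # myregistry.example.com
print(registry_host("docker.io/library/nginx"))   # docker.io
print(registry_host("ubuntu"))                    # docker.io (implied default)
```

Comparing the extracted host to the expected registry avoids both the substring pitfall CodeQL flags and false negatives for implied-default references like `ubuntu`.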


Suggested changeset 1
common/library/module_utils/local_repo/download_image.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/common/library/module_utils/local_repo/download_image.py b/common/library/module_utils/local_repo/download_image.py
--- a/common/library/module_utils/local_repo/download_image.py
+++ b/common/library/module_utils/local_repo/download_image.py
@@ -18,6 +18,7 @@
 import re
 import json
 from multiprocessing import Lock
+from urllib.parse import urlparse
 from jinja2 import Template
 from ansible.module_utils.local_repo.standard_logger import setup_standard_logger
 from ansible.module_utils.local_repo.parse_and_download import execute_command,write_status_to_file
@@ -331,7 +332,14 @@
             package_identifier += f":{package['tag']}"
 
             with remote_creation_lock:
-                if package['package'].startswith('docker.io/') and docker_username and docker_password:
+                # Determine registry host from the package reference instead of relying on a raw prefix check.
+                try:
+                    pkg_base_url, _ = get_repo_url_and_content(package['package'])
+                    registry_host = urlparse(pkg_base_url).hostname
+                except Exception:
+                    registry_host = None
+
+                if registry_host == "registry-1.docker.io" and docker_username and docker_password:
                     result = create_container_remote_with_auth(
                         remote_name, base_url, package_content, policy_type,
                         tag_val, logger, docker_username, docker_password
EOF
@@ -18,6 +18,7 @@
import re
import json
from multiprocessing import Lock
from urllib.parse import urlparse
from jinja2 import Template
from ansible.module_utils.local_repo.standard_logger import setup_standard_logger
from ansible.module_utils.local_repo.parse_and_download import execute_command,write_status_to_file
@@ -331,7 +332,14 @@
package_identifier += f":{package['tag']}"

with remote_creation_lock:
if package['package'].startswith('docker.io/') and docker_username and docker_password:
# Determine registry host from the package reference instead of relying on a raw prefix check.
try:
pkg_base_url, _ = get_repo_url_and_content(package['package'])
registry_host = urlparse(pkg_base_url).hostname
except Exception:
registry_host = None

if registry_host == "registry-1.docker.io" and docker_username and docker_password:
result = create_container_remote_with_auth(
remote_name, base_url, package_content, policy_type,
tag_val, logger, docker_username, docker_password
Copilot is powered by AI and may make mistakes. Always verify output.

try:
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=ip):

Check failure

Code scanning / CodeQL

Use of insecure SSL/TLS version (High)

Insecure SSL/TLS protocol version TLSv1 allowed by call to ssl.create_default_context.
Insecure SSL/TLS protocol version TLSv1_1 allowed by call to ssl.create_default_context.

Copilot Autofix (AI, 2 days ago)

In general, the fix is to configure the SSLContext so that TLS versions below 1.2 are disabled, regardless of environment defaults. In modern Python (3.7+), the preferred approach is to set context.minimum_version = ssl.TLSVersion.TLSv1_2. For older Python versions where minimum_version or TLSVersion may not exist, the conventional approach is to set context.options flags OP_NO_TLSv1 and OP_NO_TLSv1_1 (and, if relevant, also disable SSLv2/v3 if not already disabled by create_default_context).

The single best fix here, without altering existing functionality, is:

  • Keep using ssl.create_default_context() as before.
  • Immediately after creating the context, explicitly restrict protocol versions:
    • Prefer context.minimum_version = ssl.TLSVersion.TLSv1_2 when available.
    • Fall back to OR‑ing ssl.OP_NO_TLSv1 and ssl.OP_NO_TLSv1_1 into context.options if the newer API isn’t available.
  • Keep the existing check_hostname = False and verify_mode = CERT_NONE lines intact, since they are part of this “probe” function’s current behavior.

This change is local to is_https in common/library/module_utils/local_repo/registry_utils.py, at and just after line 26. We do not need extra imports because everything required comes from the existing ssl import. No other functions in the file need to change to address this specific alert.
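As a minimal standalone sketch of the approach (assuming Python 3.7+, where `ssl.TLSVersion` is available), this is what a probe-style context restricted to TLS 1.2+ looks like; note that `check_hostname` must be disabled before `verify_mode` can be set to `CERT_NONE`:

```python
import ssl

# Build a "probe" context: tolerate self-signed certificates but refuse
# pre-1.2 TLS protocol versions. Sketch mirroring the suggested fix.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuses TLSv1 / TLSv1.1
context.check_hostname = False   # probe only: skip hostname verification
context.verify_mode = ssl.CERT_NONE  # probe only: skip certificate verification
```

On interpreters too old for `minimum_version`, OR-ing `ssl.OP_NO_TLSv1` and `ssl.OP_NO_TLSv1_1` into `context.options` achieves the same restriction, as the fallback branch in the patch below does.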

Suggested changeset 1
common/library/module_utils/local_repo/registry_utils.py

Autofix patch

Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/common/library/module_utils/local_repo/registry_utils.py b/common/library/module_utils/local_repo/registry_utils.py
--- a/common/library/module_utils/local_repo/registry_utils.py
+++ b/common/library/module_utils/local_repo/registry_utils.py
@@ -24,6 +24,16 @@
 
     # Don't verify server cert; just see if TLS works
     context = ssl.create_default_context()
+    # Explicitly restrict to TLS 1.2+ to avoid older, insecure protocol versions.
+    try:
+        # Preferred on Python 3.7+.
+        context.minimum_version = ssl.TLSVersion.TLSv1_2
+    except AttributeError:
+        # Fallback for older Python/OpenSSL: disable TLSv1 and TLSv1_1 explicitly.
+        if hasattr(ssl, "OP_NO_TLSv1"):
+            context.options |= ssl.OP_NO_TLSv1
+        if hasattr(ssl, "OP_NO_TLSv1_1"):
+            context.options |= ssl.OP_NO_TLSv1_1
     context.check_hostname = False
     context.verify_mode = ssl.CERT_NONE
 
EOF
@@ -24,6 +24,16 @@

# Don't verify server cert; just see if TLS works
context = ssl.create_default_context()
# Explicitly restrict to TLS 1.2+ to avoid older, insecure protocol versions.
try:
# Preferred on Python 3.7+.
context.minimum_version = ssl.TLSVersion.TLSv1_2
except AttributeError:
# Fallback for older Python/OpenSSL: disable TLSv1 and TLSv1_1 explicitly.
if hasattr(ssl, "OP_NO_TLSv1"):
context.options |= ssl.OP_NO_TLSv1
if hasattr(ssl, "OP_NO_TLSv1_1"):
context.options |= ssl.OP_NO_TLSv1_1
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

Unable to commit as this autofix suggestion is now outdated