## Step 1: Prerequisites for Deploying the Granite-3.3-8b-Instruct Model on Xeon with Keycloak

Ensure the Enterprise Inference stack with Keycloak is already deployed before proceeding.

Edit `core/scripts/generate-token.sh` and set your values before sourcing it:

| Variable | Description |
| ------------------------- | ------------------------------------------------------------------------ |
| `BASE_URL` | Hostname of your cluster (e.g. `api.example.com`), without `https://` |
| `KEYCLOAK_ADMIN_USERNAME` | Keycloak admin username |
| `KEYCLOAK_PASSWORD` | Keycloak admin password |
| `KEYCLOAK_CLIENT_ID` | Keycloak client ID configured during EI deployment |

Then run:

```bash
export HUGGING_FACE_HUB_TOKEN="your_token_here"

cd ~/Enterprise-Inference
source core/scripts/generate-token.sh
```

This exports: `BASE_URL`, `KEYCLOAK_CLIENT_ID`, `KEYCLOAK_CLIENT_SECRET`, and `TOKEN`.
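
Under the hood, `generate-token.sh` requests an access token from Keycloak. A minimal sketch of the equivalent request, assuming the standard OIDC client-credentials grant against a realm named `master` (the realm name and endpoint path are assumptions and depend on your Keycloak version and EI configuration):

```bash
# Illustrative only -- the actual script may differ. Requires jq.
TOKEN=$(curl -sk "https://${BASE_URL}/realms/master/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=${KEYCLOAK_CLIENT_ID}" \
  -d "client_secret=${KEYCLOAK_CLIENT_SECRET}" \
  | jq -r '.access_token')
```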

## Step 2: Deploy the Granite-3.3-8b-Instruct Model

```bash
helm install vllm-granite-3-3-instruct-cpu ./core/helm-charts/vllm \
--values ./core/helm-charts/vllm/xeon-values.yaml \
--set LLM_MODEL_ID="ibm-granite/granite-3.3-8b-instruct" \
--set global.HUGGINGFACEHUB_API_TOKEN="$HUGGING_FACE_HUB_TOKEN" \
--set ingress.enabled=true \
--set ingress.secretname="${BASE_URL}" \
--set ingress.host="${BASE_URL}" \
--set oidc.client_id="$KEYCLOAK_CLIENT_ID" \
--set oidc.client_secret="$KEYCLOAK_CLIENT_SECRET" \
--set apisix.enabled=true \
--set tensor_parallel_size="1" \
--set pipeline_parallel_size="1"
```
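
Before verifying, you can check that the release registered as deployed (standard Helm commands, nothing chart-specific):

```bash
# The release should show STATUS: deployed; the pod may take several
# minutes to become Ready while the model weights download.
helm status vllm-granite-3-3-instruct-cpu
```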

## Step 3: Verify the Deployment

```bash
kubectl get pods
kubectl get apisixroutes
```

Expected Output:

```
NAME READY STATUS RESTARTS
keycloak-0 1/1 Running 0
keycloak-postgresql-0 1/1 Running 0
vllm-granite-3-3-instruct-cpu-<hash>-<hash> 1/1 Running 0
```

> Note: The pod name suffix `<hash>-<hash>` is auto-generated by Kubernetes and will differ on each deployment. Ensure all pods show `1/1 Running`.

And for the APISIX routes:

```
NAME                                        HOSTS
vllm-granite-3-3-instruct-cpu-apisixroute   api.example.com
```

## Step 4: Test the Deployed Model

```bash
curl -k https://${BASE_URL}/granite-3.3-8b-instruct-vllmcpu/v1/completions \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"model": "ibm-granite/granite-3.3-8b-instruct",
"prompt": "What is Deep Learning?",
"max_tokens": 25,
"temperature": 0
}'
```

If successful, the model will return a completion response.
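
An illustrative response shape (the `id`, timestamps, and generated text will differ on each call):

```
{
  "id": "cmpl-...",
  "object": "text_completion",
  "model": "ibm-granite/granite-3.3-8b-instruct",
  "choices": [
    {
      "index": 0,
      "text": " Deep Learning is a subset of machine learning that ...",
      "finish_reason": "length"
    }
  ],
  "usage": {"prompt_tokens": 6, "completion_tokens": 25, "total_tokens": 31}
}
```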

## Undeploying the Model

```bash
helm uninstall vllm-granite-3-3-instruct-cpu
```
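
To confirm removal (standard commands; the release name must match the one used at install):

```bash
# Neither command should list the granite release or its pod afterwards.
helm list | grep vllm-granite-3-3-instruct-cpu || echo "release removed"
kubectl get pods
```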

## Parameters

| Parameter | Description |
| ----------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
| `--set LLM_MODEL_ID="ibm-granite/granite-3.3-8b-instruct"` | Defines the target model from **Hugging Face** to deploy. |
| `--set global.HUGGINGFACEHUB_API_TOKEN="..."` | Authenticates access to gated or private Hugging Face models. Replace with your own secure token. |
| `--set ingress.enabled=true` | Enables Kubernetes **Ingress** to expose the model service externally. |
| `--set ingress.host="${BASE_URL}"` | Public hostname or FQDN for the inference endpoint (maps to your Ingress controller IP). |
| `--set ingress.secretname="${BASE_URL}"` | Kubernetes **TLS Secret** used for HTTPS termination at the ingress layer. |
| `--set oidc.client_id="..."` | Keycloak OIDC client ID used for token-based authentication. |
| `--set oidc.client_secret="..."` | Keycloak OIDC client secret corresponding to the client ID. |
| `--set apisix.enabled=true` | Enables **APISIX** as the API gateway for routing and authentication. |
| `--set tensor_parallel_size="1"`                             | Number of tensor-parallel workers the model is sharded across. `1` for this single-node CPU deployment.    |
| `--set pipeline_parallel_size="1"`                           | Number of pipeline-parallel stages. Typically `1` for single-node deployments.                             |
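
On larger multi-socket or multi-device nodes these values can be raised. A sketch, assuming the chart forwards them to vLLM's tensor/pipeline parallelism settings (verify against your chart's values file before use):

```bash
# Hypothetical scale-up: total workers = tensor_parallel_size x
# pipeline_parallel_size and must match the devices actually available.
helm upgrade vllm-granite-3-3-instruct-cpu ./core/helm-charts/vllm \
  --reuse-values \
  --set tensor_parallel_size="2"
```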
# granite-3.3-8b-instruct

This deployment serves granite-3.3-8b-instruct, a large-scale instruction-tuned language model developed by IBM as part of the Granite model family. It is optimized for enterprise-grade instruction following, reasoning, summarization, and code-aware natural-language tasks, with a strong emphasis on safety, reliability, and governance.

For full details, including model specifications, licensing, intended use, safety guidance, and example prompts, see the official Hugging Face page:

https://huggingface.co/ibm-granite/granite-3.3-8b-instruct

This deployment provides inference services only; the model weights are hosted on Hugging Face under IBM’s license terms.

Ensure compliance with the applicable Granite license terms before using this model.

### Model Attribution

**Developer:** IBM (Granite Team)

**Purpose:** Instruction-tuned enterprise reasoning and language understanding

**Sizes/Variants:** 8B parameters

**Modalities:** Text → Natural Language + Code

**Parameter Size:** 8 Billion

**Max Context:** Up to ~128K tokens (depending on backend integration)

**License:** Apache 2.0 (commercial use permitted; see the Hugging Face page for exact terms)

### Usage Notice

**By using this model, you agree that:**

- Inputs and outputs are processed by the granite-3.3-8b-instruct model under IBM’s license terms.
- You are responsible for validating outputs before production deployment.
- This model should not be used for generating malicious, deceptive, or unsafe content.
- All enterprise, regulatory, and data-residency requirements must be respected during usage.

### Intended Applications

- Enterprise conversational AI and copilots
- Retrieval-Augmented Generation (RAG) systems
- Secure document summarization and classification
- Knowledge base question answering
- Business process automation and workflow agents
- Policy, compliance, and governance assistants
- Technical documentation analysis and generation

### Limitations

- Requires more compute and memory than lightweight (≤3B) models
- Not intended for real-time ultra-low-latency edge devices
- May hallucinate in low-context or ambiguous prompts
- Should not be used as a fully autonomous decision engine
- Long-context performance depends on backend configuration

### References

Hugging Face Model Page: https://huggingface.co/ibm-granite/granite-3.3-8b-instruct