## Step 1: Prerequisites to Deploy the TinyLlama Model on Xeon with Keycloak

Ensure the Enterprise Inference stack with Keycloak is already deployed before proceeding.

Edit `core/scripts/generate-token.sh` and set your values before sourcing it:

| Variable | Description |
| ------------------------- | ------------------------------------------------------------------------ |
| `BASE_URL` | Hostname of your cluster (e.g. `api.example.com`), without `https://` |
| `KEYCLOAK_ADMIN_USERNAME` | Keycloak admin username |
| `KEYCLOAK_PASSWORD` | Keycloak admin password |
| `KEYCLOAK_CLIENT_ID` | Keycloak client ID configured during EI deployment |
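
For illustration, the edited lines in `core/scripts/generate-token.sh` might look like the following; every value shown here is a placeholder, not a default shipped with the script:

```bash
# Hypothetical example values -- replace each one with your own deployment details.
BASE_URL="api.example.com"           # cluster hostname, without the https:// prefix
KEYCLOAK_ADMIN_USERNAME="admin"      # Keycloak admin username
KEYCLOAK_PASSWORD="change-me"        # Keycloak admin password
KEYCLOAK_CLIENT_ID="ei-client"       # client ID configured during EI deployment
```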

Then run:

```bash
export HUGGING_FACE_HUB_TOKEN="your_token_here"

cd ~/Enterprise-Inference
source core/scripts/generate-token.sh
```

This exports: `BASE_URL`, `KEYCLOAK_CLIENT_ID`, `KEYCLOAK_CLIENT_SECRET`, and `TOKEN`.
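
To sanity-check the environment before deploying, you can print the exported variables and confirm a token was actually issued (a quick manual check, not part of the official workflow):

```bash
# Confirm the variables sourced from generate-token.sh are populated.
echo "BASE_URL=${BASE_URL}"
echo "KEYCLOAK_CLIENT_ID=${KEYCLOAK_CLIENT_ID}"

# The access token should be a long JWT string; an empty value usually means the
# Keycloak credentials or client ID are wrong.
[ -n "$TOKEN" ] && echo "Token acquired (${#TOKEN} characters)" || echo "TOKEN is empty"
```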

## Step 2: Deploy the TinyLlama-1.1B-Chat-v1.0 Model

```bash
helm install tinyllama-1-1b-cpu ./core/helm-charts/vllm \
--values ./core/helm-charts/vllm/xeon-values.yaml \
--set LLM_MODEL_ID="TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
--set global.HUGGINGFACEHUB_API_TOKEN="$HUGGING_FACE_HUB_TOKEN" \
--set ingress.enabled=true \
--set ingress.secretname="${BASE_URL}" \
--set ingress.host="${BASE_URL}" \
--set oidc.client_id="$KEYCLOAK_CLIENT_ID" \
--set oidc.client_secret="$KEYCLOAK_CLIENT_SECRET" \
--set apisix.enabled=true \
--set tensor_parallel_size="1" \
--set pipeline_parallel_size="1"
```
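
Once the install command returns, it is worth confirming that Helm recorded the release before moving on to the pod-level checks in Step 3:

```bash
# Show the release entry and its deployment status as recorded by Helm.
helm list --filter '^tinyllama-1-1b-cpu$'
helm status tinyllama-1-1b-cpu
```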

## Step 3: Verify the Deployment

```bash
kubectl get pods
kubectl get apisixroutes
```

Expected output of `kubectl get pods`:

```
NAME READY STATUS RESTARTS
keycloak-0 1/1 Running 0
keycloak-postgresql-0 1/1 Running 0
tinyllama-1-1b-cpu-vllm-<hash>-<hash> 1/1 Running 0
```

> Note: The pod name suffix `<hash>-<hash>` is auto-generated by Kubernetes and will differ on each deployment. Ensure all pods show `1/1 Running`.

The `kubectl get apisixroutes` output should include a route for the model:

```
NAME HOSTS
tinyllama-1-1b-cpu-vllm-apisixroute api.example.com
```
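
If the vLLM pod is not yet `1/1 Running`, it is often still pulling the container image or downloading the model weights. Two quick checks; substitute the exact pod name reported by `kubectl get pods`:

```bash
# Watch the pods until the vLLM container becomes ready (Ctrl+C to stop).
kubectl get pods -w

# Tail the vLLM server logs to follow model download and startup.
kubectl logs -f tinyllama-1-1b-cpu-vllm-<hash>-<hash>
```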

## Step 4: Test the Deployed Model

```bash
curl -k https://${BASE_URL}/tinyLlama-1.1B-Chat-v1.0-vllmcpu/v1/completions \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{
"model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"prompt": "What is Deep Learning?",
"max_tokens": 25,
"temperature": 0
}'
```

If successful, the model will return a completion response.
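
vLLM also serves an OpenAI-compatible chat endpoint. Assuming the same APISIX route forwards `/v1/chat/completions` as well (not verified here), an equivalent chat-style request would look like this:

```bash
curl -k https://${BASE_URL}/tinyLlama-1.1B-Chat-v1.0-vllmcpu/v1/chat/completions \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "messages": [{"role": "user", "content": "What is Deep Learning?"}],
    "max_tokens": 25,
    "temperature": 0
  }'
```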

## Undeploy the Model

```bash
helm uninstall tinyllama-1-1b-cpu
```
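
Uninstalling the release should also remove the vLLM pod, ingress, and APISIX route it created; a quick way to confirm that only the Keycloak resources remain:

```bash
# After the uninstall, the tinyllama resources should no longer be listed.
kubectl get pods
kubectl get ingress
kubectl get apisixroutes
```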

## Parameters

| Parameter | Description |
| --------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| `--set LLM_MODEL_ID="TinyLlama/TinyLlama-1.1B-Chat-v1.0"` | Defines the target model from **Hugging Face** to deploy. |
| `--set global.HUGGINGFACEHUB_API_TOKEN="..."` | Authenticates access to gated or private Hugging Face models. Replace with your own secure token. |
| `--set ingress.enabled=true` | Enables Kubernetes **Ingress** to expose the model service externally. |
| `--set ingress.host="${BASE_URL}"`                        | Public hostname or FQDN for the inference endpoint (maps to your Ingress controller IP).               |
| `--set ingress.secretname="${BASE_URL}"`                  | Kubernetes **TLS Secret** used for HTTPS termination at the ingress layer.                             |
| `--set oidc.client_id="..."` | Keycloak OIDC client ID used for token-based authentication. |
| `--set oidc.client_secret="..."` | Keycloak OIDC client secret corresponding to the client ID. |
| `--set apisix.enabled=true` | Enables **APISIX** as the API gateway for routing and authentication. |
| `--set tensor_parallel_size="1"` | Number of tensor parallel workers. Set to the number of available CPUs/GPUs per node. |
| `--set pipeline_parallel_size="1"` | Number of pipeline parallel stages. Typically `1` for single-node deployments. |
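
Any of these parameters can be adjusted on a running release with `helm upgrade`. As a sketch (whether a higher parallelism setting actually helps on a CPU-only node depends on the chart and the hardware), raising `tensor_parallel_size` while keeping all previously set values would look like:

```bash
# --reuse-values keeps the values from the original install and only overrides the flag given here.
helm upgrade tinyllama-1-1b-cpu ./core/helm-charts/vllm \
  --reuse-values \
  --set tensor_parallel_size="2"
```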

# TinyLlama-1.1B-Chat-v1.0

This model uses TinyLlama-1.1B-Chat-v1.0, a compact large language model developed by the TinyLlama Project team. It is a chat-tuned variant of the TinyLlama 1.1B base model, optimized for instruction-following, conversational AI, and lightweight reasoning tasks. Despite its small size, TinyLlama delivers strong performance for edge AI, embedded systems, rapid prototyping, and cost-efficient inference scenarios.

For full details including model specifications, licensing, intended use, safety guidance, and example prompts, please visit the official Hugging Face page: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0

This deployment provides inference services only; the model weights are hosted on Hugging Face under the Apache 2.0 License.

Ensure compliance with the Apache 2.0 License terms before using this model.

### Model Attribution

**Developer:** TinyLlama Project

**Purpose:** Lightweight instruction-tuned conversational AI

**Sizes/Variants:** 1.1B parameters

**Modalities:** Text → Natural Language

**Parameter Size:** 1.1 Billion

**Max Context:** ~2K tokens

**License:** Apache 2.0 (commercial-friendly)

### Usage Notice

**By using this model, you agree that:**

- Inputs and outputs are processed by the TinyLlama-1.1B-Chat-v1.0 model under the Apache 2.0 license.
- You are responsible for validating outputs before production use.
- This model should not be used for generating malicious, deceptive, or unsafe content.
- Outputs may contain inaccuracies and must be reviewed for correctness and compliance.

### Intended Applications

- Lightweight chatbots and virtual assistants
- Edge AI and on-device inference
- Rapid prototyping and AI experimentation
- CPU-based conversational agents
- Educational tools and demos
- RAG-based document assistants for low-resource environments
- Dev/test automation helpers

### Limitations

- Limited reasoning depth compared to large models (7B+)
- Reduced long-context understanding
- Not suitable for complex multi-step logic or heavy code generation
- May hallucinate or oversimplify responses
- Not designed for safety-critical or regulated decision systems

### References

TinyLlama Project — https://github.com/jzhang38/TinyLlama

Hugging Face Model Page — https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0