cld2labs/TinyLlama-1.1B-Chat-v1.0 #94
Open

arpannookala-12 wants to merge 6 commits into opea-project:main from cld2labs:cld2labs/TinyLlama-1.1B-Chat-v1.0
Commits
- 3c03b81 feat: add TinyLlama-1.1B-Chat-v1.0 model card and deployment guide fo…
- 77baea3 update tinyllama deployment guide (arpannookala-12)
- 2190ba6 Enable ingress and update deployment instructions
- 3e95da3 update tinyllama deployment.md (HarikaDev296)
- 3ff7abe update tinyllama deployment.md
- 8725bce update tinyllama deployment.md
113 changes: 113 additions & 0 deletions
third_party/Dell/model-deployment/TinyLlama-1.1B-Chat-v1.0/deployment.md

## Step 1: Prerequisites to Deploy TinyLlama Model on Xeon with Keycloak

Ensure the Enterprise Inference stack with Keycloak is already deployed before proceeding.

Edit `core/scripts/generate-token.sh` and set your values before sourcing it:

| Variable                  | Description                                                            |
| ------------------------- | ---------------------------------------------------------------------- |
| `BASE_URL`                | Hostname of your cluster (e.g. `api.example.com`), without `https://`  |
| `KEYCLOAK_ADMIN_USERNAME` | Keycloak admin username                                                 |
| `KEYCLOAK_PASSWORD`       | Keycloak admin password                                                 |
| `KEYCLOAK_CLIENT_ID`      | Keycloak client ID configured during EI deployment                      |
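
For reference, a minimal sketch of what the edited section of `core/scripts/generate-token.sh` might look like; the variable names come from the table above, while the assignment style and every value are illustrative placeholders you must replace with your own:

```bash
# Illustrative placeholders only; the actual script layout may differ.
BASE_URL="api.example.com"                      # cluster hostname, without https://
KEYCLOAK_ADMIN_USERNAME="admin"                 # Keycloak admin username
KEYCLOAK_PASSWORD="<keycloak-admin-password>"   # Keycloak admin password
KEYCLOAK_CLIENT_ID="<client-id-from-EI-deployment>"
```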

Then run:

```bash
export HUGGING_FACE_HUB_TOKEN="your_token_here"

cd ~/Enterprise-Inference
source core/scripts/generate-token.sh
```

This exports: `BASE_URL`, `KEYCLOAK_CLIENT_ID`, `KEYCLOAK_CLIENT_SECRET`, and `TOKEN`.
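
Before moving on, a quick sanity check (not part of the original script) can confirm the variables were exported without printing the secrets themselves:

```bash
# Each variable below should be set and non-empty.
echo "BASE_URL=${BASE_URL}"
echo "KEYCLOAK_CLIENT_ID=${KEYCLOAK_CLIENT_ID}"
echo "KEYCLOAK_CLIENT_SECRET length: ${#KEYCLOAK_CLIENT_SECRET}"
echo "TOKEN length: ${#TOKEN}"
```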

## Step 2: Deploy TinyLlama-1.1B-Chat-v1.0 Model

```bash
helm install tinyllama-1-1b-cpu ./core/helm-charts/vllm \
  --values ./core/helm-charts/vllm/xeon-values.yaml \
  --set LLM_MODEL_ID="TinyLlama/TinyLlama-1.1B-Chat-v1.0" \
  --set global.HUGGINGFACEHUB_API_TOKEN="$HUGGING_FACE_HUB_TOKEN" \
  --set ingress.enabled=true \
  --set ingress.secretname="${BASE_URL}" \
  --set ingress.host="${BASE_URL}" \
  --set oidc.client_id="$KEYCLOAK_CLIENT_ID" \
  --set oidc.client_secret="$KEYCLOAK_CLIENT_SECRET" \
  --set apisix.enabled=true \
  --set tensor_parallel_size="1" \
  --set pipeline_parallel_size="1"
```
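
If you want to confirm Helm registered the release before checking pods, the standard Helm commands below work with the release name used above:

```bash
helm status tinyllama-1-1b-cpu
helm list --filter tinyllama
```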

## Step 3: Verify the Deployment

```bash
kubectl get pods
kubectl get apisixroutes
```

Expected Output:

```
NAME                                    READY   STATUS    RESTARTS
keycloak-0                              1/1     Running   0
keycloak-postgresql-0                   1/1     Running   0
tinyllama-1-1b-cpu-vllm-<hash>-<hash>   1/1     Running   0
```

> Note: The pod name suffix `<hash>-<hash>` is auto-generated by Kubernetes and will differ on each deployment. Ensure all pods show `1/1 Running`.

```
NAME                                  HOSTS
tinyllama-1-1b-cpu-vllm-apisixroute   api.example.com
```
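
Because the chart was installed with `ingress.enabled=true`, you can also confirm the ingress object was created; the resource name shown in the comment matches the one reported later in this PR's review discussion, but may vary with chart version:

```bash
kubectl get ingress
# Expect an entry similar to: tinyllama-1-1b-cpu-vllm-ingress   api.example.com
```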

## Step 4: Test the Deployed Model

```bash
curl -k https://${BASE_URL}/tinyLlama-1.1B-Chat-v1.0-vllmcpu/v1/completions \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "prompt": "What is Deep Learning?",
    "max_tokens": 25,
    "temperature": 0
  }'
```

If successful, the model will return a completion response.
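
For orientation, a successful call returns vLLM's OpenAI-compatible completion payload, which looks roughly like the following; the id, token counts, and generated text below are illustrative only:

```
{
  "id": "cmpl-...",
  "object": "text_completion",
  "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
  "choices": [
    {
      "index": 0,
      "text": " Deep Learning is a subset of machine learning that ...",
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 6,
    "completion_tokens": 25,
    "total_tokens": 31
  }
}
```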

## To undeploy the model

```bash
helm uninstall tinyllama-1-1b-cpu
```
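
To confirm the release was fully removed (and, given the ingress discussion in the review comments, that no tinyllama ingress is left behind), re-run the verification commands:

```bash
kubectl get pods      # the tinyllama-1-1b-cpu-vllm pod should be gone
kubectl get ingress   # no tinyllama-1-1b-cpu-vllm-ingress entry should remain
```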

## Parameters

| Parameter | Description |
| --------- | ----------- |
| `--set LLM_MODEL_ID="TinyLlama/TinyLlama-1.1B-Chat-v1.0"` | Defines the target model from **Hugging Face** to deploy. |
| `--set global.HUGGINGFACEHUB_API_TOKEN="..."` | Authenticates access to gated or private Hugging Face models. Replace with your own secure token. |
| `--set ingress.enabled=true` | Enables Kubernetes **Ingress** to expose the model service externally. |
| `--set ingress.host="${BASE_URL}"` | Public hostname or FQDN for the inference endpoint (maps to your Ingress controller IP). |
| `--set ingress.secretname="${BASE_URL}"` | Kubernetes **TLS Secret** used for HTTPS termination at the ingress layer. |
| `--set oidc.client_id="..."` | Keycloak OIDC client ID used for token-based authentication. |
| `--set oidc.client_secret="..."` | Keycloak OIDC client secret corresponding to the client ID. |
| `--set apisix.enabled=true` | Enables **APISIX** as the API gateway for routing and authentication. |
| `--set tensor_parallel_size="1"` | Number of tensor parallel workers. Set to the number of available CPUs/GPUs per node. |
| `--set pipeline_parallel_size="1"` | Number of pipeline parallel stages. Typically `1` for single-node deployments. |
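
If you prefer not to pass a long list of `--set` flags, the same overrides can be kept in a small values file. This is a sketch rather than part of the original guide: the YAML keys simply mirror the `--set` paths from the table above, and the file name is arbitrary.

```bash
# Write an overrides file; values are expanded from the variables exported in Step 1.
cat > tinyllama-overrides.yaml <<EOF
LLM_MODEL_ID: "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
global:
  HUGGINGFACEHUB_API_TOKEN: "${HUGGING_FACE_HUB_TOKEN}"
ingress:
  enabled: true
  host: "${BASE_URL}"
  secretname: "${BASE_URL}"
oidc:
  client_id: "${KEYCLOAK_CLIENT_ID}"
  client_secret: "${KEYCLOAK_CLIENT_SECRET}"
apisix:
  enabled: true
tensor_parallel_size: "1"
pipeline_parallel_size: "1"
EOF

helm install tinyllama-1-1b-cpu ./core/helm-charts/vllm \
  --values ./core/helm-charts/vllm/xeon-values.yaml \
  --values ./tinyllama-overrides.yaml
```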
60 changes: 60 additions & 0 deletions
third_party/Dell/model-deployment/TinyLlama-1.1B-Chat-v1.0/model-card.md

# TinyLlama-1.1B-Chat-v1.0

This model uses TinyLlama-1.1B-Chat-v1.0, a compact large language model developed by the TinyLlama Project team. It is a chat-tuned variant of the TinyLlama 1.1B base model, optimized for instruction-following, conversational AI, and lightweight reasoning tasks. Despite its small size, TinyLlama delivers strong performance for edge AI, embedded systems, rapid prototyping, and cost-efficient inference scenarios.

For full details including model specifications, licensing, intended use, safety guidance, and example prompts, please visit the official Hugging Face page:

https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0

This model provides inference services only; weights are hosted by Hugging Face under the Apache 2.0 License.
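
Since this is a chat-tuned model, a chat-style request is often more natural than the raw completions call shown in deployment.md. The sketch below reuses the `BASE_URL`, `TOKEN`, and route prefix from that guide and assumes the gateway forwards vLLM's OpenAI-compatible `/v1/chat/completions` route; the prompt and sampling settings are illustrative:

```bash
curl -k https://${BASE_URL}/tinyLlama-1.1B-Chat-v1.0-vllmcpu/v1/chat/completions \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d '{
    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "messages": [
      {"role": "system", "content": "You are a concise, helpful assistant."},
      {"role": "user", "content": "Explain what TinyLlama is in one sentence."}
    ],
    "max_tokens": 64,
    "temperature": 0.2
  }'
```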

Ensure compliance with the Apache 2.0 License terms before using this model.

### Model Attribution

**Developer:** TinyLlama Project

**Purpose:** Lightweight instruction-tuned conversational AI

**Sizes/Variants:** 1.1B parameters

**Modalities:** Text → Natural Language

**Parameter Size:** 1.1 Billion

**Max Context:** ~2K tokens

**License:** Apache 2.0 (commercial-friendly)

### Usage Notice

**By using this model, you agree that:**

- Inputs and outputs are processed by the TinyLlama-1.1B-Chat-v1.0 model under the Apache 2.0 license.
- You are responsible for validating outputs before production use.
- This model should not be used for generating malicious, deceptive, or unsafe content.
- Outputs may contain inaccuracies and must be reviewed for correctness and compliance.

### Intended Applications

- Lightweight chatbots and virtual assistants
- Edge AI and on-device inference
- Rapid prototyping and AI experimentation
- CPU-based conversational agents
- Educational tools and demos
- RAG-based document assistants for low-resource environments
- Dev/test automation helpers

### Limitations

- Limited reasoning depth compared to large models (7B+)
- Reduced long-context understanding
- Not suitable for complex multi-step logic or heavy code generation
- May hallucinate or oversimplify responses
- Not designed for safety-critical or regulated decision systems

### References

- TinyLlama Project: https://github.com/jzhang38/TinyLlama
- Hugging Face Model Page: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0
Getting an error saying the ingress for this model already exists. Even after uninstalling the model with helm and confirming the ingress for tinyllama is deleted, rerunning the helm install command results in this error:

    Error: INSTALLATION FAILED: 1 error occurred:
    * ingresses.networking.k8s.io "tinyllama-1-1b-cpu-vllm-ingress" already exists
Alex, please verify whether the ingress is deleted with `kubectl get ingress` after running `helm uninstall`.
I tried to replicate the issue: I deployed tinyllama with the helm command, and running `helm uninstall` also removed the ingress. Redeploying tinyllama a second time didn't give me any error. Below is the output:

    user@ubuntuxeon2:~/Enterprise-Inference$ kubectl get ingress
    NAME                              CLASS   HOSTS             ADDRESS   PORTS     AGE
    keycloak                          nginx   api.example.com             80, 443   28m
    tinyllama-1-1b-cpu-vllm-ingress   alb     api.example.com             80, 443   6m39s
    user@ubuntuxeon2:~/Enterprise-Inference$ helm uninstall tinyllama-1-1b-cpu
    release "tinyllama-1-1b-cpu" uninstalled
    user@ubuntuxeon2:~/Enterprise-Inference$ kubectl get ingress
    NAME       CLASS   HOSTS             ADDRESS   PORTS     AGE
    keycloak   nginx   api.example.com             80, 443   36m
    user@ubuntuxeon2:~/Enterprise-Inference$ kubectl get pods
    NAME                    READY   STATUS    RESTARTS   AGE
    keycloak-0              1/1     Running   0          36m
    keycloak-postgresql-0   1/1     Running   0          36m
    user@ubuntuxeon2:~/Enterprise-Inference$ helm install tinyllama-1-1b-cpu ./core/helm-charts/vllm --values ./core/helm-charts/vllm/xeon-values.yaml --set LLM_MODEL_ID="TinyLlama/TinyLlama-1.1B-Chat-v1.0" --set global.HUGGINGFACEHUB_API_TOKEN="$HUGGING_FACE_HUB_TOKEN" --set ingress.enabled=true --set ingress.secretname="${BASE_URL}" --set ingress.host="${BASE_URL}" --set oidc.client_id="$KEYCLOAK_CLIENT_ID" --set oidc.client_secret="$KEYCLOAK_CLIENT_SECRET" --set apisix.enabled=true --set tensor_parallel_size="1" --set pipeline_parallel_size="1"
    NAME: tinyllama-1-1b-cpu
    LAST DEPLOYED: Wed May 6 15:42:36 2026
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None