docs/model-lineup.mdx (2 changes: 1 addition & 1 deletion)
@@ -6,7 +6,7 @@ The table below shows the models that are currently available in Tinker. We plan

 - In general, use MoE models, which are more cost effective than the dense models.
 - Use Base models only if you're doing research or are running the full post-training pipeline yourself
-- If you want to create a model that is good at a specific task or domain, use an existing post-trained model model, and fine-tune it on your own data or environment.
+- If you want to create a model that is good at a specific task or domain, use an existing post-trained model, and fine-tune it on your own data or environment.
 - If you care about latency, use one of the Instruction models, which will start outputting tokens without a chain-of-thought.
 - If you care about intelligence and robustness, use one of the Hybrid or Reasoning models, which can use long chain-of-thought.
