Latest revision as of 14:29, 7 May 2026
Introduction
Welcome to the wiki of our OpenStack-based stoney cloud infrastructure. This collection serves as a comprehensive resource for all aspects of our cloud environment. Here you will find guides for managing and using our stoney cloud.
Getting started
Overview
- AI on demand: Usage (list models, show usage, ...)
- AI on demand: BAAI/bge-m3
- AI on demand: BAAI/bge-reranker-v2-m3
- AI on demand: MiniMaxAI/MiniMax-M2.5
- AI on demand: NVIDIA/NVIDIA-Nemotron-3-Super-120B-A12B
- AI on demand: Qwen/Qwen3-Coder-Next
- AI on demand: Qwen/Qwen3.5-35B-A3B-FP8
- AI on demand: allenai/olmOCR-2-7B
- AI on demand: mistralai/Voxtral-Mini-3B-2507
- AI on demand: opendatalab/MinerU2.5-2509-1.2B
- AI on demand: swiss-ai/Apertus-70B-Instruct-2509 (discontinued)
Quick links