September 17, 2025

πŸ† KataGo Custom Fine-Tuned Model Release: KW-20250916-001

πŸ“Œ Overview

This is a 19x19 Go AI model fine-tuned from the kata1-b28c512nbt network using a two-stage training strategy. It plays at amateur high-dan strength and can complete full games with strong tactical understanding.

🧠 Model Information

| Attribute | Value |
|-----------|-------|
| Model Name | KW-20250916-001-s10784975104-d43600.bin.gz |
| Model Configuration | b28c512nbt (28 blocks, 512 channels) |
| Board Size | 19x19 |
| File Size | ~331 MB |
| Base Model | kata1-b28c512nbt-s10784871168-d5287365110 |
| Training Steps | ~10.78 billion (base) + ~100,000 fine-tuning steps |
| Training Data | 43,600 rows |
| Training Time | ~1.5 hours (RTX 5080 laptop GPU) |
| Training Framework | KataGo v1.17.0+ |

πŸ“Š Performance Metrics

Final Training Results

Validation Results

Strength Assessment

| Metric | Value | Description |
|--------|-------|-------------|
| Strength Level | Amateur 7-8 dan | Strong amateur level |
| Eye Formation | Excellent | Recognizes complex living groups |
| Life & Death | Good | Handles most common life-and-death problems |
| Endgame | Medium | Some mistakes in the late game |
| Middle Game | Excellent | Strong tactical calculation |
| Opening | Good | Solid understanding of common patterns |

βš™οΈ Training Methodology

Two-Stage Fine-Tuning Strategy

Stage 1: Foundation Adaptation

```bash
TORCH_LOAD_WEIGHTS_ONLY=0 ./selfplay/train.sh ~/KataGo/ KW-20250916-001-phase1 b28c512nbt 16 main \
  -initial-checkpoint ~/KataGo/kata1-b28c512nbt-s10784871168-d5287365110/model.ckpt \
  -lr-scale 0.05 \
  -max-train-bucket-per-new-data 1 \
  -max-train-bucket-size 100000 \
  -samples-per-epoch 50000 \
  -max-epochs-this-instance 1 \
  -sub-epochs 1 \
  -max-train-steps-since-last-reload 10000 \
  -pos-len 19
```

Stage 2: Precision Optimization

```bash
TORCH_LOAD_WEIGHTS_ONLY=0 ./selfplay/train.sh ~/KataGo/ KW-20250916-001 b28c512nbt 16 main \
  -initial-checkpoint ~/KataGo/train/KW-20250916-001-phase1/checkpoint.ckpt \
  -lr-scale 0.01 \
  -max-train-bucket-per-new-data 1 \
  -max-train-bucket-size 100000 \
  -samples-per-epoch 50000 \
  -max-epochs-this-instance 1 \
  -sub-epochs 1 \
  -max-train-steps-since-last-reload 10000 \
  -pos-len 19
```

πŸ†š Comparison with Base Model

| Metric | Base Model | KW-20250916-001 | Improvement |
|--------|------------|-----------------|-------------|
| First Move Accuracy | 63.64% | 64.93% | +1.29% |
| Value Variance | 0.457 | 0.4486 | -0.0084 |
| Policy Entropy | 0.621 | 0.6535 | +0.0325 |
| Estimated Elo | ~2350 | ~2375 | +25 |
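To put the +25 Elo estimate in perspective, the standard Elo expected-score formula (a general rating-model identity, not something specific to this release) converts a rating gap into a head-to-head edge:

```python
def expected_score(elo_diff: float) -> float:
    """Expected score of the higher-rated player under the standard Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

# A +25 Elo edge corresponds to scoring roughly 53.6% of points head-to-head.
print(round(expected_score(25), 3))  # -> 0.536
```

So the measured gain is real but modest, consistent with the small accuracy improvement in the table above.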

πŸ’‘ Note: The two-stage fine-tuning strategy allowed the model to retain the knowledge from the base model while adapting to new data patterns. The slight increase in policy entropy indicates the model has become more decisive in its moves.

πŸš€ Usage Instructions

1. Download the model

```bash
wget https://github.com/changcheng967/Kata_web/releases/download/KW-20250916-001/KW-20250916-KW-20250916-001-s10784975104-d43600.bin.gz
```

2. Use with KataGo engine

```bash
# In KataGo directory
./cpp/katago gtp -model ./models/KW-20250916-001-s10784975104-d43600.bin.gz
```

3. In GTP command line

```
boardsize 19
clear_board
genmove B
```
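GTP is a simple line-oriented text protocol, so the engine can also be scripted. A hedged sketch of parsing the engine's replies (the helper name is ours for illustration, not part of KataGo):

```python
def parse_gtp_reply(raw: str) -> str:
    """Parse a single GTP reply: '= result' on success, '? message' on error.

    Illustrative helper, not part of KataGo itself.
    """
    line = raw.strip()
    if line.startswith("="):
        return line[1:].strip()
    if line.startswith("?"):
        raise RuntimeError(f"GTP error: {line[1:].strip()}")
    raise ValueError(f"malformed GTP reply: {raw!r}")

# e.g. the engine's answer to `genmove B` might look like:
print(parse_gtp_reply("= Q16\n\n"))  # -> Q16
```

In practice you would feed commands to the engine's stdin (e.g. via `subprocess.Popen`) and run each reply through a parser like this.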

4. Use with GUI software

πŸ“¦ File Description

🌟 Features & Advantages

πŸ“… Future Plans

πŸ“ License

This model follows the KataGo license requirements.


πŸ’‘ Tip: This model is suitable for serious Go study, analysis, and as a strong training partner. For best results, use it with a GUI like Sabaki or Lizzie for visualization of win rates and variations.

Trained based on KataGo open-source framework - Built for Go AI research and education
