RL environment for training LLMs to write optimized GPU kernels.
- `POST /reset` - Start a new episode
- `POST /step` - Submit kernel code
- `GET /state` - Get current state
- `GET /health` - Health check
- `GET /problems` - List available problems

This is a demo instance running on CPU. Full kernel evaluation requires GPU.
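A minimal client sketch for the endpoints above, assuming the server listens on `http://localhost:8000` and speaks JSON. The field names used here (`problem_id`, `code`) are illustrative assumptions, not the confirmed request schema.

```python
import requests

# Assumed base URL for a locally running instance.
BASE = "http://localhost:8000"

# List available problems and pick one (response shape is an assumption).
problems = requests.get(f"{BASE}/problems").json()

# Start a new episode for the chosen problem.
state = requests.post(f"{BASE}/reset", json={"problem_id": problems[0]}).json()

# Submit a candidate kernel and read back the evaluation result.
kernel_code = "/* candidate GPU kernel goes here */"
result = requests.post(f"{BASE}/step", json={"code": kernel_code}).json()
print(result)

# Inspect the current episode state at any point.
print(requests.get(f"{BASE}/state").json())
```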
For GPU evaluation, run locally with Docker:
```bash
docker run --gpus all -p 8000:8000 kernrl
```
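Once the container is running, a quick health check confirms the server is reachable (a sketch, assuming the default port mapping shown above):

```python
import requests

# Assumes the container was started with -p 8000:8000 as above.
resp = requests.get("http://localhost:8000/health", timeout=5)
print(resp.status_code, resp.text)
```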