How should the model's thinking switch be configured for a service deployed via Docker?
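One common pattern, sketched here under assumptions the question does not state, is to expose the thinking switch as an environment variable that the containerized service reads at startup. The variable name `ENABLE_THINKING` and the service image name `my-model-service` below are hypothetical placeholders; substitute whatever flag or parameter your model server actually documents for toggling thinking mode.

```shell
# Hypothetical sketch: pass the thinking switch into the container as an
# environment variable at `docker run` time. ENABLE_THINKING and
# my-model-service are placeholder names, not a real image or flag.
docker run -d \
  --name model-service \
  -e ENABLE_THINKING=true \
  -p 8000:8000 \
  my-model-service:latest

# Equivalent docker-compose fragment (same placeholder names):
#
# services:
#   model-service:
#     image: my-model-service:latest
#     ports:
#       - "8000:8000"
#     environment:
#       ENABLE_THINKING: "true"
```

If the switch is instead a per-request parameter rather than a deployment-time setting, it would not belong in the Docker configuration at all; clients would pass it in each API call, and the container needs no special setup.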