Many large language models, including ChatGPT, are tuned not to give 'harmful' answers; for example, they will refuse to respond if you ask how to make drugs or bombs. Although such ...