Charles Explorer logo
🇬🇧

Utilizing large language models

Class at Faculty of Arts |
AMLV00081

Annotation

At this point, a human in synergy with a machine is still better in the vast majority of creative activities than the machine alone. Therefore, it makes sense to perfect ourselves in this synergy. This seminar will focus on large language models (LLMs), which emerged at the end of the 2010s, gained popularity with the arrival of Chat GPT, and will probably stay with us.

We will create the structure of the seminar together on the spot. As I don't know what topics will interest you and what tools will be released during the semester, the latent space of possible syllabi is too wide to describe here.

We'll probably start by understanding how language models work in general and how transformers (the architecture they are based on) function, how the base models are further improved, what is finetuning, RLHF, and so on.

But then we'll plunge into the whirlwind of practical demonstrations of how to work effectively with models.

How to find the right simulacrum and create the right environment for it.

How to make the simulacrum work consistently across a wide range of different tasks.

Anthropomorphization and demonomorphization.

Useful memes.

Gaslighting and other manipulative techniques in "prompt ingeneering".

Prompt injecting, jailbreaking, the Waluigi effect.

Ethics and notkilleveryoneism.

Plugins for ChatGPT.

Perhaps we will remember this seminar with a slight irony, similar to people who attended seminars like "How to use Google correctly" in 2003, but I kind of hope that we will remember it.