
The Final Modifiers: A Eulogy for the Brief Life of Prompt Engineering

Tabrez Syed

When new technologies enter the scene, they often create temporary ("shim" or "wedge") jobs that are later rendered obsolete as the technology becomes mainstream. Roles like telephone operator and projectionist were once common but have all but disappeared. The same may be happening with prompt engineering as AI models become easier to use.

Flagmen and the early automobile

Early automobiles were loud, frightening machines that were difficult to operate and seemed dangerous. Most people were not accustomed to these "road locomotives," and laws were passed in response. In the UK, the Locomotives Act of 1865 required a person to walk 60 yards ahead of each vehicle, waving a red flag. These individuals, known as "pilots" or flagmen, warned others of the approaching vehicle.


In the late 19th century, it might have seemed reasonable to expect the number of flagmen to increase as cars became more popular. Although they played a role in the automobile’s initial adoption, their usefulness faded. Looking back, it's evident that having someone walk in front of a car with a flag is absurd. Governments realized this, too; by 1896, they had started repealing these laws.

The rise of GPT

The first version of GPT emerged in a limited form in 2018. It could handle rudimentary tasks like sentence completion: prompt it with a fragment of a sentence or phrase, and it would generate a plausible continuation. Think of it as an enhanced text predictor, like the algorithm that guesses the next word as you type on your phone.

Type "- The old clock tower chimed midnight. The town was quiet, except for the faint sounds of…"

and it might respond:

"- The old clock tower chimed midnight. The town was quiet, except for the faint sounds of a dog barking in the distance."

But GPT's responses were not always coherent and could often veer off into nonsense.

GPT-2 and GPT-3 signaled a leap forward in language generation. Feed them prompts, and they could produce lengthy paragraphs of coherent text, stories, articles, code, and more. Prompts could be open-ended, merely indicating a topic, style, or concept to explore. Responses became more flexible, nuanced, and reliable.
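
Under the hood, this is still just text in, text out. Here is a minimal sketch of completion-style prompting, assuming OpenAI's Python SDK and a completion-style model (the model name and parameters are illustrative, not the ones GPT-3 originally shipped with):

# Completion-style prompting: send a text fragment, get back a continuation.
# Assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY
# environment variable; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="The old clock tower chimed midnight. "
           "The town was quiet, except for the faint sounds of",
    max_tokens=40,
    temperature=0.7,
)

print(response.choices[0].text)  # e.g. " a dog barking in the distance."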

Yet prompting still required finesse.

According to Peter Welinder, VP of Product at OpenAI, early versions of GPT-3 were challenging to work with, demanding careful prompt engineering to achieve good results. As he explains:

"Early on when we had the first version of the API, we had a really smart guy who is a world-renowned author, but also a programmer; Andrew Mayne. He was one of the early users of the API and he got the internal name of "the prompt whisperer," or "GPT-3 whisperer". He really knew how to craft the prompts to get the best results.

You really had to be good at understanding the intricacies of GPT-3 and design really good prompts."

As Andrej Karpathy, the AI researcher, noted, "Prompt engineering is like being an LLM psychologist."

As GPT-3's reach expanded, prompt engineering had a moment. Some heralded it as software engineering 3.0, and prompts were soon bought and sold online.

PromptBase.com

The Washington Post wrote of “Tech’s hottest new job: AI Whisperer,” and Anthropic listed a job for a “prompt engineer and librarian” paying up to $335,000. People hired prompt writers to craft custom prompts, much as they would hire marketing copywriters.

Yet as the world busied itself unraveling the magic words to speak to GPT, its makers weren't idle.

OpenAI kept improving its models. After GPT-3, they built InstructGPT, which was tuned to follow instructions. They then released ChatGPT, which could carry on a conversation with context and memory.

ChatGPT allowed people to have genuine back-and-forth conversations with a model for the first time. Previously, if you needed to clarify or correct GPT-3, you had to make an entirely new API call with the full prompt. With ChatGPT, you could simply reply, “make it shorter.”
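
In API terms, the shift is from resubmitting one monolithic prompt to keeping a running list of messages and appending to it. A minimal sketch, again assuming OpenAI's Python SDK (the model name is illustrative):

# Chat-style prompting: the conversation is a growing list of messages,
# so a follow-up like "make it shorter" carries the earlier context along.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Write a tagline for a neighborhood bakery."}]

first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Refine without restating the original request.
messages.append({"role": "user", "content": "Make it shorter."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second.choices[0].message.content)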

ChatGPT’s newfound conversational abilities made it usable by everyday users, and its popularity exploded.

Still, ChatGPT had its limitations, including subpar math skills and a lack of access to current information. OpenAI addressed these issues by adding plugins to ChatGPT, which let it use Wolfram for computation and browse the web for up-to-date information. With each passing week, models continue to improve as limitations are removed and workarounds become unnecessary.

In a 2022 discussion, OpenAI CEO Sam Altman predicted that we won't be doing prompt engineering in five years.

What is prompting, anyway?

Prompting is how we communicate with AI models like GPT-3. Instead of coding in a programming language, we use natural language to prompt the model to generate a response.

Coding is highly precise but less expressive. When programming, the code you write must follow strict syntactic and logical rules. There is little room for interpretation or ambiguity. While code can realize complex behaviors, its possibilities are still limited by its symbolic representational system.

Natural language is highly expressive but less precise. It can convey subtle ideas, culturally-dependent concepts, emotions, and more. However, natural language is subject to interpretation and ambiguity.

Prompting techniques allow us to provide context to help the model understand our intent.

As Simon Willison argues in his article "In defense of prompt engineering," prompt engineering is about having great communication skills. After all, if you can’t articulate what you want, you likely won’t get useful results.

Examples of prompting techniques

One effective prompting technique is specifying the role or domain you want the model to operate within. The "Act as a" approach narrows the scope by asking the model to take on a particular persona. This persona provides guardrails for the types of responses the model will generate. For instance, take this example from awesome-chatgpt-prompts:

I want you to act as a travel guide. I will write you my location and you will suggest a place to visit near my location. In some cases, I will also give you the type of places I will visit. You will also suggest me places of similar type that are close to my first location. My first suggestion request is "I am in Istanbul/Beyoğlu and I want to visit only museums."
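
When calling the API rather than typing into ChatGPT, the same persona text is typically placed in a system message so it frames every subsequent turn. A sketch of that pattern (the model name is again illustrative):

# "Act as a" prompting via the API: the persona goes into a system message
# that constrains every response in the conversation.
from openai import OpenAI

client = OpenAI()
persona = ("I want you to act as a travel guide. I will write you my location "
           "and you will suggest a place to visit near my location.")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "I am in Istanbul/Beyoğlu and I want to visit only museums."},
    ],
)
print(response.choices[0].message.content)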

Another useful prompting technique is providing examples, known as few-shot prompting. This gives the model an instance of the type of response or behavior you're looking for so it can infer the right approach.

The following example comes from DAIR.AI's Prompt Engineering Guide.

A "whatpu" is a small, furry animal native to Tanzania. An example of a sentence that uses
the word whatpu is:
We were traveling in Africa and we saw these very cute whatpus.
To do a "farduddle" means to jump up and down really fast. An example of a sentence that uses
the word farduddle is:

The AI responds with:

When we won the game, we all started to farduddle in celebration.
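
Programmatically, few-shot prompting usually amounts to concatenating a handful of worked examples ahead of the one you want the model to complete. A minimal sketch, using the made-up words above (the helper function is just for illustration):

# Few-shot prompting: prepend worked examples so the model can infer the
# pattern, then leave the final example unfinished for it to complete.
examples = [
    ("whatpu",
     'A "whatpu" is a small, furry animal native to Tanzania.',
     "We were traveling in Africa and we saw these very cute whatpus."),
    ("farduddle",
     'To do a "farduddle" means to jump up and down really fast.',
     None),  # left blank for the model to fill in
]

def build_few_shot_prompt(items):
    lines = []
    for word, definition, example in items:
        lines.append(f"{definition} An example of a sentence that uses the word {word} is:")
        if example:
            lines.append(example)
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples)
# `prompt` can now be sent to a completion endpoint, as in the earlier sketch.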

Generating images from text prompts requires significantly more context than generating text alone. Images have many attributes like framing, focus, lighting, and content that must be specified for the model to produce the desired result.

The more details the prompt provides, the more control you have over the resulting image. Today, the solution is to include all of these details directly in the prompt (see example below).

Source: DALL-E prompt book

The study "A taxonomy of prompt modifiers for text-to-image generation" analyzed prompts and images shared on Twitter to identify common prompt modifiers and how they affect the generated image. The images below demonstrate how an image evolves as you add more modifiers to the prompt.

Source: A taxonomy of prompt modifiers for text-to-image generation
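
In practice, these prompts are often assembled mechanically: a base subject plus modifiers for style, lighting, framing, and so on. A small sketch of that composition (the modifier strings are illustrative, not taken from the paper):

# Composing a text-to-image prompt from a subject and a set of prompt modifiers.
subject = "a lighthouse on a rocky coast"
modifiers = {
    "style": "oil painting",
    "lighting": "golden hour, soft shadows",
    "framing": "wide shot",
    "quality": "highly detailed",
}

prompt = ", ".join([subject, *modifiers.values()])
print(prompt)
# a lighthouse on a rocky coast, oil painting, golden hour, soft shadows,
# wide shot, highly detailed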

The excitement around discovering effective prompts for models like GPT-3 is reminiscent of early programming, when Perl developers would compete to write intentionally obscure yet functional code. Between 1996 and 2000, The Perl Journal hosted an Annual Obfuscated Perl Contest "to determine who can write the most devious, inhuman, disgusting, amusing, amazing, and bizarre Perl code."

David Powell won the Most Creative award for his console skiing game with this beauty:

undef $/;open(_,$0);/ \dx([\dA-F]*)/while(<_>);@&=split(//,$1);@/=@&;
$".=chr(hex(join("",splice(@&,0,2))))while(@&); eval$”;

($C,$_,@\)=(($a=$/[1]*4)*5+1, q| |x(0x20).q|\||.chr(32)x(0x10).q$*$.
chr(0x20)x(0x10).(pack("CC",124,10)), sub{s/.\|(\s*?)(\S)./\|$1 $2/},
sub{s/\|(\s*?).(\S)/ \|$1$2 /}, sub{$2.$1.$3},sub{$tt=(3*$tt+7)%$C},
sub{$1.$3.$2});
while ($_) {
    select $/, undef, $/, $C/1E3;
    (sysread(STDIN, $k, 1),s/(.)(\*)(.)/(&{$\[(ord($k)-44&2)+2]})/e)
    if (select($a=chr(1),$/,$/,0));

print 0x75736520504F5349583B2024743D6E657720504F5349583A3A5465726D696F73
3B24742D3E676574617474722828303D3E2A5F3D5C2423292F32293B2024742D3E
7365746C666C61672824742D3E6765746C666C6167267E284543484F7C4543484F4
57C4943414E4F4E29293B2024742D3E7365746363285654494D452C31293B24742D
3E7365746174747228302C544353414E4F57293B24643D224352415348215C6E223B0A;

  ($p?(/.{70}\|$/):(/^\|/))||(&{$\[3]}<$/[0])?($p=!$p):&{$\[$p]}||die("$d");
  (&{$\[3]}<$/[1])&&(s/ \|$/\|/);
  (/\|.*\*.*\|$/)||die("$d");
}

Today, there is a similar kind of alchemy to crafting the prompt that elicits the right response from the model.

But we've seen this story before. As technologies become mainstream, their complexity is abstracted away to make them accessible to everyday users.

Operating systems progressed from command lines to graphical user interfaces. Command line options became checkboxes, dropdowns, and other visual elements that were easier to understand. This transition developed a visual vocabulary for how we interact with software. With the rise of mobile, touchscreens enabled swiping and tapping to form a gestural vocabulary.

We're seeing the early signs of this progression with AI generation tools like Adobe's Firefly. It combines natural language prompts with UI elements like checkboxes to specify the attributes of an image. This hybrid interface leverages both the expressiveness of language and the efficiency of visual components.

Adobe Firefly

Intuitive interfaces like these will incorporate prompts and modifiers, packaging them into visual elements that can be mixed and matched. This will open the experience to those without specialized knowledge, just as GUIs democratized computing for people less familiar with command lines.

Prompt engineering will still have a place, though. Once essential to unlocking AI's potential for language, it may become a niche tool. My project boxcars.ai (an open-source, LangChain-inspired port to Ruby), for instance, applies advanced prompting techniques to get normalized outputs from language models.

So what should you do with that Prompt Engineer certificate you paid good money for?

You might take a lesson from Orlim Vargas, one of Europe's last grand hotel lift attendants. At the Les Trois Rois hotel in Basel, Orlim's white-gloved service evokes the nostalgia of genteel travel as he whisks guests between floors. Though technology has replaced most elevator operators, Orlim's human touch imparts a sense of occasion and maintains a fading grace. Perhaps one day, you’ll sell hand-crafted prompts on Etsy.

Want to read more on how UX will change? Check out my previous article on this subject here: AI is making us rethink UX.

