Train & Constrain: Phonologically Informed Tongue Twister Generation from Topics and Paraphrases (2024)



Action editor: Tal Linzen. Submission received: 15 March 2024; revised version received: 15 July 2024; accepted for publication: 20 September 2024.

Tyler Loakman, Department of Computer Science, The University of Sheffield, tcloakman1@sheffield.ac.uk
Chen Tang (co-first author), Department of Computer Science, University of Manchester, chen.tang@manchester.ac.uk
Chenghua Lin (corresponding author), Department of Computer Science, University of Manchester, chenghua.lin@manchester.ac.uk

Abstract

Previous work in phonologically and phonetically grounded language generation has mainly focused on domains such as puns and poetry. In this article, we present new work on the generation of English tongue twisters, a form of language that must be conditioned at the phoneme level to maximize sound overlap, while maintaining semantic consistency with an input topic or phrase and remaining grammatically correct. We present TwisterLister, a pipeline for generating phonologically informed tongue twisters from large language models (LLMs) that we use to generate TwistList 2.0, the largest annotated dataset of tongue twisters to date, consisting of 17K+ examples from a combination of human and LLM authors. Our generation pipeline uses a phonologically constrained vocabulary alongside LLM prompting to generate novel, non-derivative tongue twister examples. We additionally present the results of automatic and human evaluation of smaller models trained on our generated dataset to demonstrate the extent to which phonologically motivated language types can be generated without explicit injection of phonological knowledge. Additionally, we introduce a phoneme-aware constrained decoding module (PACD) that can be integrated into an autoregressive language model and demonstrate that this method generates good quality tongue twisters both with and without fine-tuning the underlying language model. Finally, we design and implement a range of automatic metrics for the task of tongue twister generation that are phonologically motivated and capture the unique essence of tongue twisters, primarily based on phonemic edit distance (PED). (Code and resources are available at https://github.com/tylerL404/Train-and-Constrain-TT.)


1 Introduction

[Figure 1]

Whilst the dawn of large language models (LLMs) such as OpenAI’s GPT-4 OpenAI et al. (2023), Meta’s Llama 2 Touvron et al. (2023), and Google’s Gemini Anil et al. (2023) has brought unprecedented performance improvements in many natural language generation (NLG) tasks, these models are highly resource-hungry in terms of data, compute, and API expenses. Consequently, many works have started to investigate the ways in which LLM capabilities and knowledge can be infused into smaller models, using the larger model for data enhancement via the generation of pseudo-labels Tang et al. (2023) or additional training examples Yang, Tang, and Lin (2024).

Additionally, LLMs are primarily designed to select the most probable continuation of a span of text based on their training data. Creative language, in direct opposition to non-creative language, is predominantly desired to be non-derivative, containing phrases and word sequences that are not ubiquitous in everyday language, in order to evoke emotion and engage a reader rather than purely convey information. As a result, the goals of creative language generation are at odds with the primary training paradigm of LLMs, as the aim is often not to select the most probable continuation but rather to surprise and engage readers Roush et al. (2022).

In particular, tongue twisters represent a type of phonologically constrained language that aims to engage a reader with high levels of phoneme overlap to encourage mispronunciations, often containing rhyme and humorous semantics, or simply conveying information in a form that is enjoyable to read due to the articulatory patterns that the lexical choices present. Consequently, tongue twister generation presents myriad unique challenges for NLG, including the need to consider the phonetic realization and underlying phonological representation of the chosen vocabulary, all while still maintaining a grammatically valid output sequence despite the often obscure and highly restricted candidate vocabulary. In addition to being a fruitful area for further investigation by the NLP/NLG communities, tongue twisters also present a wide range of real-world applications, making the case for their automatic generation even stronger. These applications include (1) being used as an educational tool for language teaching Sugiharto, Santoso, and Shofyana (2022); Somoff (2014); Wilshire (1999); (2) being a source of entertainment and humor stemming primarily from unintentional misarticulation; (3) as a literary device for engaging young children in developing their literacy (such as the approach taken in numerous Dr. Seuss stories, Geisel, 1965); (4) as a method of designing memorable slogans and taglines Guerini, Özbal, and Strapparava (2015); and (5) as stimuli in neuroscience and physiology research to investigate the localization of functions within the brain and how linguistic perception links to production on a neurological level O’Halloran (2020); Wong et al. (2019); Kember, Connaghan, and Patel (2017). As such, the ability to automatically generate tongue twisters constrained on particular topics and phoneme combinations has many real-world applications. Moreover, findings from the generation of tongue twisters also have wider applicability in phoneme-conditioned language generation, such as the more widely studied areas of poetry and lyric generation, where being able to exert phoneme-level control of the output is desirable.

1.1 Contributions

Towards the automatic generation of high-quality tongue twisters, we expand upon prior works to present TwisterLister, an LLM-based pipeline for the generation of unique, non-derivative tongue twisters that provides more extensive training data for smaller language models. TwisterLister employs semantic and phonological knowledge in the form of sentence embeddings and phonemic edit distance to restrict a candidate vocabulary list to pass to an LLM. In doing so, we create TwistList 2.0, the largest existing dataset of tongue twisters with over 17k examples, of which approximately 15k are derived directly from the proposed TwisterLister pipeline. We motivate the need for this extended wealth of training data by demonstrating the impact on both automatic metrics and human evaluation as a function of training-data volume when fine-tuning various smaller-scale language models (such as BART and Flan-T5) on various splits of this dataset. We present these results for two different tongue twister generation approaches: topic-to-twister and, inspired by Keh et al. (2023), style-transfer (i.e., prose-to-twister). Given the aforementioned real-world applications of tongue twisters, we motivate the topic-to-twister setting with applications such as language learning, where multiple outputs can be generated to test an individual’s articulatory abilities in a new language whilst simultaneously expanding their vocabulary. On the other hand, the style-transfer setting is motivated by applications such as marketing, where a standard sentence conveying a desired meaning (e.g., a brand’s mission statement or a product’s features) can be reworded to have increased phonetic complexity and become a tongue twister, consequently engaging the reader and increasing memorability. We additionally introduce PACD, a Phoneme-Aware Constrained Decoding module, that can be used with any causal autoregressive language model to ensure token outputs meet phoneme-level criteria. Overall, our contributions may be summarized as follows:

  • TwisterLister - A phonologically and semantically informed pipeline for generating English tongue twisters with large language models, both as a stand-alone generation method and as a data synthesis approach for additional training data.

  • TwistList 2.0 - The most extensive English tongue twister dataset to date, containing 17,000+ tongue twisters produced via TwisterLister (~15k) and human authors (~2k), including extensive quality control procedures, for use in training tongue twister generation models as well as presenting a resource for the study of this language form on a linguistic level. (To the best of our knowledge, the previous record belongs to the original TwistList (1.0) from Loakman, Tang, and Lin (2023), containing just over 2.1k human-authored examples.)

  • PACD - A phoneme-aware constrained decoding algorithm that applies hard lexical constraints on the outputs of autoregressive language models to achieve phoneme-level overlap and generate tongue twisters.

  • iPED/oPED - Novel phonemic edit distance (PED) based metrics for assessing the articulatory characteristics of tongue twisters on a word-initial and overall basis.

  • A range of experiments training smaller language models (i.e., GPT-2, DialoGPT, BART, Flan-T5, ByT5, and Baichuan) to generate tongue twisters in topic-to-twister and style-transfer settings using TwistList 2.0, including extensive automatic and human evaluation.

  • Extensive qualitative linguistic analysis of generations from different models trained on varying quantities of training data, with or without the constrained decoding PACD module, in the form of case studies.

2 Phonetics & Phonology

Due to the strong reliance on theory and ideas from the fields of phonetics and phonology in this article, we find it apt to begin with a short introduction to these domains. Firstly, phonetics refers to the area of linguistics that studies the production and perception of speech sounds present in spoken languages Gick, Wilson, and Derrick (2013); Jessen (2008); Ladefoged (1996), whilst the related field of phonology focuses more strongly on the abstract mental representations of speech sounds and the development of feature-based taxonomies for the categorization of related sounds Clements and Ridouane (2011); de Lacy (2007); Klausenburger (1970).

2.1 Place & Manner of Articulation

Figure 2 presents the primary pulmonic consonants present in human languages. For each consonant, three important pieces of information about its phonetic characteristics can be read from the chart. Firstly, each row represents a manner of articulation, which refers to the physical process that occurs to produce a particular sound (for example, a "plosive" involves the build-up and sudden release of air pressure within the mouth, whilst a "nasal" involves the nasal cavity through the lowering of the velum). On the other hand, each column refers to a place of articulation, which relates to the main location at which the articulators (e.g., tongue, teeth, and lips) make contact within the vocal tract (for example, "bilabial" sounds involve both lips, and "labiodental" sounds involve the lips and teeth in their production). Finally, the last remaining detail is voicing, which refers to whether or not the glottal folds (also known as vocal folds or vocal cords) are vibrating during the sound’s production.

[Figure 2: The primary pulmonic consonants present in human languages, presented using the IPA]

2.2 Phonological Features

In addition to physical production-based characteristics, there are also further phonological features that can be used to explain the patterns and processes that phonemes undergo in human speech. An example of the different phonological features that phonemes may have is presented in Table 1. It is phonological features such as these that the phonemic edit distance (PED) measures introduced in §4 and used throughout this article rely on to determine the similarity between two speech sounds (and therefore the likelihood of mispronunciation when transitioning between them).

Table 1: Example phonological feature specifications for a selection of phonemes.
Sound | Consonantal | Sonorant | Voiced | Nasal | Continuant | Coronal
/t/   | +           | -        | -      | -     | -          | +
/d/   | +           | -        | +      | -     | -          | +
/s/   | +           | -        | -      | -     | +          | +
/l/   | +           | +        | +      | -     | +          | +
/i/   | -           | +        | +      | -     | +          | -

2.3 Phonetic Transcription Standards

Multiple transcription standards exist for phonetic and phonological representations of text. In the majority of this article, the preferred standard is the International Phonetic Alphabet (IPA), where each speech sound is represented by a single symbol (e.g., /t, d, s, z/). The IPA is the standard transcription convention used within linguistic research and is used within Figure 2 and Table 1. We additionally make use of another transcription standard, ARPABET, which uses one or two characters to represent a given sound (or three where vowels have their stress marked). Table 2 presents an example of these standards, alongside another common standard, SAMPA.

Table 2: The phrase "Hello World" represented in different transcription standards.
Standard Orthography | IPA            | ARPABET              | SAMPA
Hello World          | /hɛləʊ wəːld/  | HH EH L OW W AX L D  | hEl@U w@:ld

3 Related Work

3.1 Creative Language Generation

Numerous efforts have been made toward the generation of creative language forms, with a range of findings regarding whether or not popular LLMs truly exhibit human-level creativity in these tasks. Chakrabarty et al. (2023) present work applying the Torrance Tests of Creative Thinking (TTCT) to objectively analyze the outputs of LLMs and human authors on a narrative writing task, finding that LLM generations perform measurably worse than humans, passing 3-10x fewer criteria outlined by the TTCT. However, it is important to note that this was in comparison to professional authors, who represent a very niche subset of the best human writers available. On the other hand, Gómez-Rodríguez and Williams (2023) compare human- and LLM-authored narratives and find that LLMs are able to match or surpass human performance on several of the evaluation criteria they present. In this case, however, the "creativity" of the task was somewhat diminished by having prescribed rules about the topic, characters, and writing style, where the task may be better construed as emulating the writing of an existing work. Similarly, both Franceschelli and Musolesi (2023) and Clark et al. (2021) observe that humans are infrequently able to distinguish creative works written by other humans from those authored by LLMs, with the latter often achieving very high-quality outputs. Overall, there is clear potential and room for improvement in the field of automatically generating creative language forms.

A popular trend is to investigate the extent to which models can be trained to generate language forms where training data is scarce. Wöckener et al. (2021) investigate this for the generation of poetry using ~16k and ~67k quatrains of English and German poetry, respectively, and notice difficulties in GPT-2 learning sub-lexical phenomena, including rhyme, from this number of training examples alone. However, poetry presents a highly restrictive form of literary language where many types contain formal constraints regarding length, syllable count, and metrical patterns. Additionally, Van de Cruys (2020) presents work on the generation of Shakespearean sonnets, another literary niche that contains an even more limited number of available training samples. They therefore approach the task by adding constraints at decoding time in a pipeline that includes the stages of content planning, rhyme generation, and output polishing to imbue literary sensibilities into the outputs of models trained exclusively on non-literary text. Finally, Popescu-Belis et al. (2023) use data synthesis techniques to increase the amount of rhyming data on which they can train GPT-2 Radford et al. (2019), realizing GPoeT, which shows an increased ability to generate consecutive rhymes.

Tongue twister generation, as a niche subdomain of creative language generation (which is itself a branch of NLG more generally), has only received attention in recent years. Keh et al. (2023) presented PANCETTA, the first major work on the automatic generation of this language form in the modern post-BERT Devlin et al. (2019) NLP era, and released TT-Corp, a dataset of over 600 tongue twisters taken from various online sources. They train variations of naive and "phoneme aware" GPT-2-based models Radford et al. (2019) in topic-to-twister and style-transfer settings, achieving phoneme-level awareness by pre-training models on the International Phonetic Alphabet (IPA) representation of WikiText data and utilizing these models in conjunction with off-the-shelf orthographic models, with the aim of exploiting the link between the phonological and orthographic representations of text. Shortly following PANCETTA, Loakman, Tang, and Lin (2023) presented the precursor to the present work and released TwistList (referred to as TwistList 1.0 in this article), a dataset of 2.1k human-authored tongue twisters collected from various online sources, in line with Keh et al. (2023). Additionally, TwistList was used to train a wide range of language models, including GPT-2 Radford et al. (2019), DialoGPT Zhang et al. (2020b), BART Lewis et al. (2020), and T5 Raffel et al. (2020), solely in a topic-to-twister setting based on orthographic text. Two naive phonemic evaluation metrics were introduced, Phoneme Overlap (PO) and Initial Phoneme Overlap (IPO), which assessed the homogeneity of the generations in terms of unique sounds. However, these metrics considered all sounds to be equidistant in phonological space (hence being naive).

Other forms of language where phonemic and phonetic information are essential have also been generated, including rap Xue et al. (2021); Manjavacas, Kestemont, and Karsdorp (2019); Potash, Romanov, and Rumshisky (2018) and song lyrics more generally Tian et al. (2023); Chang et al. (2023); Zhuo et al. (2023); Zhang et al. (2022). Computational research on creative works is not restricted to such domains, however, with extensive work also existing in the areas of narrative generation Hong et al. (2023); Tang et al. (2022); Chen et al. (2021), humor generation Loakman, Maladry, and Lin (2023); Sun et al. (2022); Tian, Sheth, and Peng (2022), metaphor processing Wang et al. (2024, 2023); Li et al. (2023a); Li, Guerin, and Lin (2022), and music generation Li et al. (2024); Yuan et al. (2024); Yu et al. (2023).

3.2 Constrained Generation

An inherent paradox often exists when using language models for creative language generation: models trained to generate the most probable sequence of tokens are used to output text where the most probable continuation is sometimes one of the least preferable. In the simplest form, control over the output can be exercised through methods such as restricting the output vocabulary at decoding time Hokamp and Liu (2017); Valitutti et al. (2013), or through the application of penalties for n-gram repetition, thereby encouraging diversity Zhang et al. (2021); Foster and White (2007). Several works have also produced toolkits to aid in constrained language generation. Roush et al. (2022) present the Constrained Text Generation Studio (CTGS), an AI writing assistant that in its most basic form applies a range of pre-made filters to the output of a probabilistic language model (such as "avoid the letter e", Wright, 2016), scanning the most probable next-word generations and only selecting those which fall in line with the selected constraints (of which numerous can be applied in parallel). However, due to the design of the constraints, CTGS struggles with the generation of well-formed outputs for models trained using the predominant paradigm of sub-word tokenization. Similar approaches have also been utilized for the generation of other, non-creative language types, such as machine translation and summarization, where stylistic constraints and editorial decisions may result in a preferred output structure. For example, Yao et al. (2023) present COLLIE, a grammar-based framework for the application of advanced compositional constraints on the outputs of language models, in addition to a tool for generating example task instances from raw text. Additionally, Iso (2022) presents AutoTemplate, a method of formatting a task structure to realize lexically constrained text generation. However, whilst many constraint-based systems exist, most work on lexical constraints focuses on the inclusion of specific word choices within the output, and therefore the desired candidate vocabulary has to be known a priori. Lu et al. (2022) address a common downfall of autoregressive decoding, the need to plan ahead, by introducing NeuroLogic A*esque, a decoding approach that uses lookahead heuristics to more carefully consider future token generations. We build upon work in the area of constrained generation in §7 with PACD, a phoneme-aware constrained decoding approach that dynamically applies constraints on allowable tokens at each generation timestep.

3.3 Knowledge Distillation

Alongside the advent of LLMs, so too arose the area of knowledge distillation Gupta and Agrawal (2022); Hinton, Vinyals, and Dean (2015); Buciluǎ, Caruana, and Niculescu-Mizil (2006), whereby the aim is to achieve the successful transfer of knowledge from a much larger model (often referred to as a teacher model) to one or more smaller models (often referred to as student models). In doing so, much more robust generalist models, such as GPT-3 Brown et al. (2020) and GPT-4 OpenAI et al. (2023), can have elements of their abilities passed down to smaller models that do not have the same computing requirements Yang et al. (2024). Whilst there are numerous methods of achieving the desired distillation, perhaps the simplest of these approaches, particularly in domains where data is scarce, is to generate new synthetic training data using the larger models, either via the generation of completely new examples or via data augmentation and perturbation Whitehouse, Choudhury, and Aji (2023); Askari et al. (2023). However, generating novel instances is not always straightforward, with the quality of generations often being dependent on the task type. For instance, Li et al. (2023b) find that more subjective tasks may result in lower quality synthetic data, such as not reflecting the same level of diversity as human-written equivalents, something that is key to creative language domains and that we aim to overcome in §4.1 for the creation of TwistList 2.0.

4 TwistList 2.0 Dataset Construction

[Figure 3: The TwisterLister pipeline]

Loakman, Tang, and Lin (2023) presented TwistList 1.0, a dataset of 2.1k+ human-authored tongue twisters from various sources available on the web, including listicles and works of fiction. While this allowed a high level of quality in the examples within the dataset (as all instances were automatically filtered, reviewed by a linguist, and then underwent quality control on a subset), the small size of this dataset (even whilst being the largest dataset of tongue twisters we are aware of to date) means that smaller models struggle to learn the key features of tongue twisters from such limited examples, specifically regarding the need to balance high levels of phonemic repetition with maintaining grammatical coherence.

To combat this shortfall, we extend TwistList 1.0 8-fold into TwistList 2.0, containing 17,000+ unique tongue twister examples. To achieve this, we note the near-human performance achieved by early versions of ChatGPT in our previous human evaluation studies Loakman, Tang, and Lin (2023), and opt to build a generation pipeline, named TwisterLister, that uses GPT-3.5-Turbo to generate novel examples, facilitating the training of smaller-parameter-count models on the resulting synthetic dataset. (In both Loakman, Tang, and Lin (2023) and the present work, we access GPT-3.5-Turbo (i.e., "ChatGPT") via the OpenAI API.)

4.1 TwisterLister Pipeline

A key discovery in previous work Loakman, Tang, and Lin (2023) was the common reliance of ChatGPT on slightly modifying well-known existing tongue twisters that had been memorized from the training data when presented with a new topic (for example, “silver shiny ships” generated “How much wood could a woodchuck chuck if a woodchuck could chuck silver shiny ships”). To avoid this pitfall, we create a more constrained pipeline for tongue twister generation that promotes the generation of unique, non-derivative examples, illustrated in Figure 3. Initially, we generate the topics for the tongue twisters by building a set of topic phrases, combining a randomly sampled adverb or adjective with a noun to represent an abstract topic (using part-of-speech tags from NLTK’s Brown Corpus, Francis and Kucera, 1979). We then randomly select a phoneme on which to focus the tongue twister, restricting these choices to consonants (due to these being more commonly exploited in tongue twisters than vowels) and additionally removing any phonemes that are not legal in word-initial position in standard English phonotactics (such as the glottal stop /ʔ/) or phonemes with very few entries in our vocab bank (such as the voiced postalveolar fricative /ʒ/). Next, we search the CMU Pronouncing Dictionary (CMUDict, available at https://github.com/Alexir/CMUdict/tree/master) for words starting with our preferred phoneme and calculate the cosine similarity between the SentenceBERT embedding of our NLTK-generated topic phrase and that of each candidate word retrieved from CMUDict. The top N retrieved candidate words with the highest semantic similarity are kept, and the others are discarded (N = 5 or 10).
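A minimal sketch of this retrieval step is shown below, assuming the cmudict and sentence-transformers packages; the SentenceBERT checkpoint and helper names are illustrative rather than the exact implementation.

```python
# Illustrative sketch of TwisterLister candidate retrieval: filter CMUDict words by
# word-initial ARPABET phoneme, then keep the N candidates most semantically similar
# to the topic phrase under cosine similarity of sentence embeddings.
import cmudict                                                 # pip install cmudict
from sentence_transformers import SentenceTransformer, util    # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative SentenceBERT checkpoint

def candidate_words(topic: str, phoneme: str, n: int = 10) -> list[str]:
    """Return the n CMUDict words starting with `phoneme` (ARPABET, e.g. 'P')
    most similar to `topic`."""
    pron = cmudict.dict()                          # word -> list of ARPABET pronunciations
    # strip stress digits (e.g. 'AH0' -> 'AH') before comparing the initial phoneme
    words = [w for w, prons in pron.items()
             if prons and prons[0][0].rstrip("012") == phoneme]
    topic_emb = model.encode(topic, convert_to_tensor=True)
    word_embs = model.encode(words, convert_to_tensor=True)
    sims = util.cos_sim(topic_emb, word_embs)[0]
    ranked = sorted(zip(words, sims.tolist()), key=lambda x: x[1], reverse=True)
    return [w for w, _ in ranked[:n]]

# e.g. candidate_words("public commentator", "P", n=10)
```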

We then calculate the pairwise weighted phonemic edit distance (PED) between our initially selected phoneme and all other allowable phonemes, and select the lowest scoring (i.e., most similar) phoneme to act as the best secondary phoneme. (Phonemic edit distance is implemented with the panphon package Mortensen et al. (2016). Where multiple phonemes share the same edit distance, the first reached when iterating over the list is selected.) This is because tongue twisters frequently rely on the reader mispronouncing a sound, often due to confusion with a phonetically/phonemically similar sound (e.g., “she sells sea shells” exploiting /ʃ/ and /s/). Consequently, allowing the generation process to select from two banks of words that start with similar sounds means that we can more directly promote mispronunciations rather than solely relying on the repetition of a single sound. We then repeat the process of generating a candidate list of words that are semantically related to the topic and start with our desired secondary phoneme.
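A minimal sketch of the secondary-phoneme selection, assuming panphon's weighted feature edit distance over IPA symbols (the helper name is illustrative):

```python
# Illustrative sketch of secondary-phoneme selection: choose the allowable phoneme
# with the smallest weighted phonemic edit distance to the primary phoneme.
import panphon.distance   # pip install panphon

dst = panphon.distance.Distance()

def secondary_phoneme(primary_ipa: str, allowable_ipa: list[str]) -> str:
    """Return the IPA phoneme most confusable with `primary_ipa` under panphon's
    weighted feature edit distance (ties broken by list order, as in the paper)."""
    others = [p for p in allowable_ipa if p != primary_ipa]
    return min(others, key=lambda p: dst.weighted_feature_edit_distance(primary_ipa, p))

# e.g. secondary_phoneme("p", ["b", "t", "k", "m", "s"]) is expected to return "b",
# since /p/ and /b/ differ only in voicing.
```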

We then combine the word lists and shuffle their order to promote alternation between words with different, yet similar, initial phonemes, as experimentation showed that requesting the LLM to “alternate” between words from two separately presented word banks still resulted in the words being used largely in the order they were presented, failing to achieve the desired alternation. The list is then fed into the LLM (GPT-3.5-Turbo, accessed August-September 2023 via the API) with one of the prompts presented in Table 3.

Table 3: Prompts used in the TwisterLister pipeline.
Prompt A: Generate a sensible and grammatical tongue twister using words from the following list: [word-list]. The output should be a single sentence and be grammatical and coherent.
Prompt B: Generate a tongue twister by primarily using words from the following list: [word-list]. The output should be grammatical and coherent.

With Prompt A, where we specify single-sentence outputs, we see much more concise outputs that may suffer from coherence issues, whilst with Prompt B we see more coherent outputs that often resemble standard poetry more than tongue twisters (to which we apply further filtering as discussed in §4.3). Consequently, we use a combination of tongue twisters generated with either prompt to achieve a diverse range of styles. (We did not perform extensive prompt engineering to arrive at these prompts but observed the differing behavior in our preliminary testing.) All generations are performed with max_tokens set to 1000 and a temperature of 0.8 to facilitate creative completions. In total, we generated 17,500 example tongue twisters, of which 11,500 were generated with Prompt A and 6,500 with Prompt B. (The imbalance here is due to wishing to promote the generation of "twisty" content, which is more prevalent with Prompt A. Consequently, we sample fewer Prompt B examples so that the dataset contains only a moderate amount of literary/poetic text.)
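For concreteness, the generation call can be sketched as follows; this is an illustrative example using the legacy openai Python interface (pre-1.0) with the Prompt A wording and the hyperparameters stated above, with a placeholder API key, rather than the exact implementation.

```python
# Illustrative call to GPT-3.5-Turbo with Prompt A, using the legacy openai
# ChatCompletion interface that was current when TwistList 2.0 was generated.
import openai  # pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # placeholder

def generate_twister(word_list: list[str]) -> str:
    prompt_a = (
        "Generate a sensible and grammatical tongue twister using words from the "
        f"following list: {', '.join(word_list)}. The output should be a single "
        "sentence and be grammatical and coherent."
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt_a}],
        max_tokens=1000,     # as stated in the text
        temperature=0.8,     # as stated in the text
    )
    return response["choices"][0]["message"]["content"]
```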

4.2 Style-Transfer Paraphrase Generation

In line with Keh et al. (2023), we present an additional task setting alongside the topic-to-twister approach that utilizes style-transfer. Whilst Keh et al. (2023) generate non-tongue twister examples of their dataset entries via simple rule-based synonym replacement conditioned on part-of-speech tags, we leverage GPT-3.5-Turbo to paraphrase each entry in our tongue twister dataset. To achieve this, we pass the following system prompt to GPT-3.5-Turbo: "In this task you will pretend that you’re an author who is rewriting existing works into a non-literary form that more resembles prose. You will be presented with a tongue twister and asked to rewrite it using synonym replacement so that there are no longer high levels of phonetic overlap and sound repetition. Example 1: INPUT = "She sells sea shells by the seashore." OUTPUT = "The girl sells conches by the ocean." Example 2: INPUT = "Peter Piper picks pickled peppers" OUTPUT = "Peter Piper selects preserved capsicums". We then present the following user message to GPT-3.5-Turbo for each dataset entry: "INPUT = "[twister], OUTPUT = " where [twister] is a standard tongue twister from TwistList 2.0. This approach is superior to simple synonym substitution as the LLM can dynamically select new vocabulary terms in a way that further ensures the new paraphrase uses combinations of vocabulary that are sensible. This is because raw synonym replacement can result in semantic drift (as very few words are true synonyms), potentially risking nonsensical outputs Chiang and Lee (2023).
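A sketch of the paraphrasing call, under the same assumptions as the previous snippet (legacy openai interface, illustrative wrapper function), with the system prompt shown truncated since it is quoted in full above:

```python
# Illustrative paraphrasing call: the system prompt carries the rewriting instructions
# and the two in-context examples quoted in the text (truncated here for brevity).
import openai  # pip install "openai<1.0"

SYSTEM_PROMPT = (
    "In this task you will pretend that you're an author who is rewriting existing "
    "works into a non-literary form that more resembles prose. ..."  # truncated
)

def paraphrase_twister(twister: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f'INPUT = "{twister}", OUTPUT = '},
        ],
    )
    return response["choices"][0]["message"]["content"]
```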

4.3 Refining Outputs

To further refine the generated outputs, we process the resulting dataset in several stages. Firstly, all outputs generated using Prompt A (which promotes succinct tongue twisters that are not as coherent) are re-fed into GPT-3.5-Turbo with the prompt “Improve the following tongue twister by editing it so that it makes more sense and is grammatical: [tongue twister]”. This step fixes errors that arise when the original word lists given to the initial prompt contain morphological variants that are difficult to turn into a coherent output (for example, all nouns being in possessive form, or verb tenses being mixed in a way that is detrimental to coherence). To further check that the remaining outputs are sensible, we calculate the perplexity (PPL) of the generations using a pre-trained language model (in this case GPT-2) as a basic heuristic for well-formedness. We then compare these scores with the average PPL across the original TwistList dataset of human-authored tongue twisters and remove any outputs from our new dataset whose perplexity is higher than the original dataset’s mean plus one standard deviation. This stage removed 397 examples.
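The perplexity filter can be sketched as follows; this assumes the Hugging Face transformers implementation of GPT-2 and is illustrative rather than the exact filtering script.

```python
# Illustrative sketch of the perplexity filter: score each generation with GPT-2 and
# drop those above the human-authored mean plus one standard deviation.
import math
import statistics
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    loss = lm(ids, labels=ids).loss          # mean token-level cross-entropy
    return math.exp(loss.item())

def ppl_filter(generated: list[str], human: list[str]) -> list[str]:
    """Keep generations whose PPL is at most mean + 1 std of the human-authored set."""
    human_scores = [perplexity(t) for t in human]
    threshold = statistics.mean(human_scores) + statistics.stdev(human_scores)
    return [t for t in generated if perplexity(t) <= threshold]
```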

Additionally, as previously mentioned, Prompt B promoted the generation of longer, more poetic outputs, at the cost of not always resembling tongue twisters. In order to retain only the best generations, we apply a metric for assessing the phonological characteristics of tongue twisters using weighted phonemic edit distance (PED) Mortensen et al. (2016), which is outlined in further detail in §5.3. Again, we compare the results to the mean score from the original TwistList dataset and filter our new dataset by removing any examples that do not score lower (i.e., better) than the original dataset’s mean plus one standard deviation. However, no examples were caught in this filtering stage, suggesting that the majority of the more poetic works still exhibited tongue-twister-esque phonetics.

Next, to encourage diversity, we remove examples based on word overlap. To achieve this, we apply a pairwise fuzzy-matching algorithm based on the token sort ratio (which is indifferent to word order) across the remaining tongue twisters and remove examples with more than a 60% overlap with an existing twister in the dataset. (We arbitrarily decide which of a matching pair to remove and continue until no remaining samples exhibit this level of overlap; these examples have already passed the other two layers of filtering and are therefore otherwise acceptable. We implement this using the RapidFuzz package: https://github.com/rapidfuzz/RapidFuzz.) This step consequently led to the removal of 1,747 entries, reducing repetition and increasing diversity in the remaining dataset.
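A minimal sketch of this overlap filter, assuming the RapidFuzz package; the greedy keep-first strategy shown here is an illustrative simplification of the arbitrary removal described above.

```python
# Illustrative sketch of the diversity filter: drop any twister whose word-order-
# insensitive fuzzy match with an already-kept twister exceeds 60%.
from rapidfuzz import fuzz   # pip install rapidfuzz

def deduplicate(twisters: list[str], threshold: float = 60.0) -> list[str]:
    kept: list[str] = []
    for t in twisters:
        # token_sort_ratio returns a 0-100 similarity score, ignoring word order
        if all(fuzz.token_sort_ratio(t, k) <= threshold for k in kept):
            kept.append(t)
    return kept
```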

We then filter out offensive examples that may have been created by poor topic/phoneme combinations that resulted in undesirable stereotypes or associations being expressed. These include words relating to racism, sexism, homophobia, and transphobia, as well as additional terms that some may find offensive (including general expletives and references to various anatomy). We perform this by comparing the tongue twisters with a bank of offensive words and removing any twisters containing any of these terms (regardless of context). This stage removed 69 entries from our dataset.

Finally, to ensure the diversity of input/output pairs when training in a topic-to-twister fashion, we remove any entries with duplicate NLTK topic phrases, as these cases would result in two tongue twister outputs for a single input (but with different phonological characteristics, as they would otherwise have been caught and filtered by previous stages). This stage removed a final 135 examples.

The final additions to the dataset comprise 15,151 examples, which, when combined with the existing TwistList 1.0, results in 17,278 unique tongue twisters in TwistList 2.0. We maintain a distinction between human- and machine-authored tongue twisters in the final dataset so that different communities can make use of whichever is most pertinent to their application.

4.4 Additional Processing & Quality Control

As in Loakman, Tang, and Lin (2023), we then enhance the dataset with the addition of phonetic transcriptions using the g2p-en Python package (available at https://github.com/Kyubyong/g2p/tree/master). We experimented with other grapheme-to-phoneme (G2P) solutions to see if an improvement was possible here, such as SoundChoice Ploujnikov and Ravanelli (2022), which better accounts for correctly transcribing homographs, but opted to utilize g2p-en again. This is primarily because our dataset generation pipeline relies on CMUDict to retrieve vocabulary, and g2p-en queries CMUDict for transcriptions before falling back on a trained neural network model to infer pronunciations for out-of-vocabulary (OOV) tokens. Consequently, the majority of our less common vocabulary will already have a gold-standard transcription in CMUDict (in the General American accent, due to the limitations of this resource). We then convert these transcriptions into the International Phonetic Alphabet (IPA) to facilitate the use of our phoneme-based metrics, but also to provide a transcription standard that is more common in the linguistics domain.
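A minimal sketch of this transcription step, assuming the g2p-en package; the ARPABET-to-IPA mapping shown is partial and purely illustrative.

```python
# Illustrative sketch of the transcription step: g2p-en returns ARPABET tokens
# (CMUDict lookups, with a neural fallback for OOV words), which are then mapped
# to IPA. The ARPA_TO_IPA table below is partial and for illustration only.
from g2p_en import G2p   # pip install g2p-en

ARPA_TO_IPA = {"HH": "h", "AH": "ə", "EH": "ɛ", "L": "l", "OW": "oʊ",
               "W": "w", "ER": "ɝ", "D": "d"}   # partial mapping (illustrative)

g2p = G2p()

def transcribe(text: str) -> tuple[list[str], str]:
    """Return (ARPABET tokens, approximate IPA string) for `text`."""
    arpabet = [p for p in g2p(text) if p.strip()]        # drop word-boundary spaces
    ipa = "".join(ARPA_TO_IPA.get(p.rstrip("012"), p) for p in arpabet)
    return arpabet, ipa

# e.g. transcribe("Hello world")
```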

Finally, whilst in Loakman, Tang, and Lin (2023) RAKE Rose et al. (2010) was used to extract keywords from the human-authored examples to represent the topic, we skip this stage here and instead use the topic phrases from the dataset generation step. As a result, unlike the keywords for the original tongue twister collection, entries that are new to TwistList 2.0 have more abstract topics, as the topic words are not forced to appear in the output twister (rather, only a semantic link is present). We hypothesize that this may also help to reduce drawbacks seen in the original work, where our trained models often repeated the topic keywords numerous times to achieve a “tongue twister”, rather than learning a deeper representation of semantics.

Quality Control

Quality control on our dataset was performed in multiple ways. Five human evaluators, all native speakers of English, were provided with 50 sampled instances from the dataset across different conditions (25 from Prompt A and 25 from Prompt B) to rate the quality of the resulting tongue twisters. Scores were given on a scale from 1 (low quality) to 5 (excellent quality) for 5 criteria: (i) "Twister" refers to the assessed quality of the tongue twister and whether it exhibits the expected characteristics of a traditional tongue twister (and is analogous to the "overall" score present in later human evaluation of our model generations in §6.1). (ii) "Topic" refers to how well the input topic phrase is represented in the output via semantics. (iii) "Paraphrase Quality" refers to whether or not the paraphrase generation step maintains a meaningful and grammatical text, whilst (iv) "Paraphrase Prosaic" refers to the extent to which the paraphrase is believed to have successfully removed the sound overlap and tongue twister nature of the original input to more closely resemble standard text. Finally, (v) "Overall" offers a holistic assessment of the dataset entry as a whole. Table 4 presents the breakdown of the human evaluation results.

Overall, the resulting dataset is considered to be of good quality, particularly in regard to the paraphrased versions and the overall evaluation. The ratings for the tongue twisters themselves and for topic semantics are lower but still indicate reasonably good quality (with 3 being the middle rating, akin to "neither agree nor disagree"). This is particularly true when considering that tongue twister quality is a highly subjective measure, as entertainment value is a fundamental component that different people perceive to different degrees. Additionally, the topic association is lower due to not enforcing that the input terms are present in the output. On the other hand, extracting keywords from twisters and using these as the inputs would lead to an artificially inflated rating for the topic criterion, as the topic would be guaranteed to be represented explicitly in the output. Overall, the dataset samples were given a mean rating of 3.778, a rating very close to "high quality", which is rather good for a creative, and therefore subjective, domain. Further details of human participant recruitment are reported in §5.4.

Table 4: TwistList 2.0 quality control results.
Twister | Topic  | Paraphrase Quality | Paraphrase Prosaic | Overall
3.138** | 3.333* | 4.421*             | 3.969**            | 3.778**

4.5 TwistList 2.0 Dataset Summary

Statistical details of TwistList 2.0 can be seen in Table 5, and two example entries in the dataset are outlined in Table 6. As the examples demonstrate, the combination of adjective/adverb and noun used as a topic phrase during the TwisterLister generation pipeline can be directly output as part of the final twister, or can instead be represented solely by semantics. For example, in the top example (TT_ID 68), the input adjective "public" is replicated in the output. This is due to the chance of the phoneme /p/ being selected at random at generation time: it is to be expected that "public" (and morphological variants thereof) will be among the top-k most semantically relevant words to the topic word "public", and therefore constitute part of the constrained candidate vocabulary. The clear selection of the phoneme /b/ as the secondary phoneme (selected via minimizing phonemic edit distance) can also be seen, where words such as "broadcaster" and the proper noun "Berman" have been selected (/p/ and /b/ differ only in the presence/absence of voicing). Similarly, in the second example (generated from the input "direct language"), we see "non-direct" in the output due to the selected initial or secondary phoneme being /n/ or /m/ (the alveolar and bilabial nasal consonants, respectively), whilst "language" has been referenced less directly via terms such as "monolingual", "novelistic", and "Mandarin". The impact of the different prompt forms can also be seen, with the first example (using Prompt A) being much more succinct than the second (which used Prompt B). Finally, regarding the paraphrasing used to enable the training of style-transfer models, it can be seen that GPT-3.5-Turbo maintains the grammaticality and coherence of the original tongue twister but replaces much of the vocabulary. However, some terms remain for semantic reasons, such as "BBC" in the first example and "non-verbal" in the second example.

Due to exercising limited control over the paraphrasing stage, some synonym replacements may maintain similar levels of sound overlap (at least in the word-initial, alliterative sense). For instance, "pollster" has been replaced by "poller" in the first example, and "methods" has been replaced by "means" in the second example. The reason for the former is similar to why the input phrase may appear in the final tongue twister: the closest synonym to many words is a derivative of the word itself (i.e., variants of "poll-").

Table 5: Statistics of TwistList 2.0.
Dataset                      | Train (A) | Train (B) | Full Train | Test   | Total
# Tongue Twisters            | 9,653     | 5,471     | 15,124     | 2,124  | 17,248
# Vocabulary Size            | 60,306    | 61,160    | 93,517     | 10,336 | 98,703
Avg. # Topic Words           | 2.00      | 2.00      | 2.00       | 3.16   | 2.14
Avg. # Paraphrase Words      | 26.63     | 57.51     | 37.80      | 17.19  | 35.26
Avg. # Tongue Twister Words  | 23.90     | 53.60     | 34.64      | 14.98  | 32.22

Table 6: Two example entries from TwistList 2.0.

TT ID: 68
Topic: "public commentator"
Source: GPT-3.5-Turbo
Prompt: A
Tongue Twister: The public-spirited BBC broadcasts presented by persistent presenters perplexed the publicist Berman the broadcaster and the pollster profoundly.
Paraphrase: The civic-minded BBC airs hosted by determined hosts confused the publicist Berman the broadcaster and the poller deeply.
Twister ARPABET: DH AH0 P AH1 B L IH0 K S P IH1 R IH0 T AH0 D B IY2 B IY0 S IY1 B R AO1 D K AE2 S T S P R IY0 Z EH1 N T AH0 D B AY1 P ER0 S IH1 S T AH0 N T P R EH1 Z AH0 N T ER0 Z P ER0 P L EH1 K S T DH AH0 P AH1 B L IH0 S IH0 S T B ER1 M AH0 N DH AH0 B R AO1 D K AE2 S T ER0 AH0 N D DH AH0 P OW1 L S T ER0 P R OW0 F AW1 N D L IY0.
Paraphrase ARPABET: DH AH0 S IH1 V Z IH0 K EY2 D M IH0 D B IY2 B IY0 S IY1 EH1 R Z HH OW1 S T IH0 D B AY1 D IH0 T ER1 M AH0 N D HH OW1 S T S K AH0 N F Y UW1 Z D DH AH0 P AH1 B L IH0 S IH0 S T B ER1 M AH0 N DH AH0 B R AO1 D K AE2 S T ER0 AH0 N D DH AH0 P AA1 L ER0 D IY1 P L IY0.

TT ID: 10397
Topic: "direct language"
Source: GPT-3.5-Turbo
Prompt: B
Tongue Twister: Non-direct, multilingual, and non-verbal, Monolingual speakers were nominal. Some could speak Mandarin natively, While others relied on mediated means. The novelistic twist began when mispronouncing, Novellus words of complex morphology. Non-native speakers struggled with Macromedia, And nonnatives found Mandarin challenging. Nonaccrual in Marathi, they preferred, As net learners of a morphological world. But despite the hurdles, they persevered, Finding beauty in languages unfurled.
Paraphrase: Indirect, polyglot, and non-verbal, Single-language speakers were nominal. Some could converse in Mandarin from birth, While others depended on mediated methods. The narrative turn commenced when mispronouncing, Novellus terms of intricate structure. Non-indigenous speakers struggled with Macromedia, And non-natives found Mandarin demanding. Non-accumulation in Marathi, they favored, As online learners of a structural world. But despite the obstacles, they persisted, Discovering elegance in languages unfolded.

5 Topic-to-Twister and Style-Transfer Tongue Twister Generation

5.1 Task Definition

Inspired by Keh et al. (2023), we define two different settings for the task of tongue twister generation, which we describe as topic-to-twister and style-transfer. In the topic-to-twister setting, for a given topic phrase (generated by randomly sampling adjective and noun combinations, as described in §4.1), we aim to generate a tongue twister $T$, whereby $T$ comprises a sequence of words $\{w_1, w_2, \ldots, w_n\}$. The generated output must satisfy the following constraints: (1) the output should be semantically related to the input topic phrase; (2) the output should show maximal levels of phonological overlap across tokens; and (3) the output should be grammatically valid. On the other hand, for the style-transfer setting we again aim to generate a tongue twister $T = \{w_1, w_2, \ldots, w_n\}$, but provide as input a non-tongue twister phrase and aim to convert it via style-transfer into a tongue twister by learning to replace vocabulary with more phonemically similar entries.

5.2 Trained Models

In order to realize our goals of tongue twister generation in topic-to-twister and style-transfer settings, we fine-tune a range of popular language models with varying parameter counts on our TwistList 2.0 dataset.

  • GPT-2 (117M) Radford et al. (2019) - A popular transformer-based text generation model with a decoder-only architecture.

  • DialoGPT (117M) Zhang et al. (2020b) - A version of GPT-2 that has been fine-tuned on an extensive corpus of dialogues to enable better conversational performance (and that consequently often better understands natural language task prompts).

  • BART (139M) Lewis et al. (2020) - A popular denoising autoencoder model consisting of a BERT-like encoder Devlin et al. (2019) with a GPT-2-like decoder Radford et al. (2019).

  • Flan-T5 (250M) Chung et al. (2022) - A further instruction fine-tuned version of the T5 model Raffel et al. (2020).

  • ByT5 (582M) Xue et al. (2022) - A version of T5 trained with byte/character-level tokenization, rather than subwords.

  • Baichuan (7B) Yang et al. (2023) - A large-scale open-source LLM trained on English and Mandarin, achieving SoTA performance on many tasks for a model of its size. Due to the significant size of Baichuan, we perform fine-tuning with the help of Low-Rank Adaptation (LoRA) Hu et al. (2022).

  • ChatGPT (GPT-3.5-Turbo) Ouyang et al. (2022) - A large language model fine-tuned for chat-based interactions and instruction following, which excels in few- and zero-shot tasks. Importantly, we use ChatGPT in a zero-shot manner and do not perform any fine-tuning. (GPT-3.5-Turbo was accessed for this purpose during August-September 2023 via the Chat Completions API.)

Training Splits

In order to investigate the benefit of access to different amounts of training data for a data-driven approach to tongue twister generation in topic-to-twister and style-transfer task settings (and to motivate the contribution of our large dataset), we train all of the above models on training sets of various sizes, including 2k, 4k, 8k, and 13k training samples from TwistList 2.0. One exception is Baichuan, which, due to compute requirements, we train only on the largest 13k split. We fix the test set (for automatic evaluation) at 2,124 samples, covering the entirety of TwistList 1.0, and have a validation set of equal size. Because TwistList 1.0 entries form our test set, all reference-based metrics are compared against human-authored outputs, and human evaluation scores can be directly compared to human performance.

Hyperparameters and Training Details

To leverage pre-trained parameters, we restore the encoder, decoder, and embedding layers from public checkpoints: (1) GPT-2: gpt2-base https://huggingface.co/gpt2; (2) DialoGPT: DialoGPT-medium https://huggingface.co/microsoft/DialoGPT-medium; (3) BART: bart-base https://huggingface.co/facebook/bart-base; (4) Flan-T5: flan-t5-base https://huggingface.co/google/flan-t5-base; (5) ByT5: byt5-base https://huggingface.co/google/byt5-base. The checkpoints of Baichuan are the exception, requiring the weights to be downloaded directly from their repository (https://github.com/baichuan-inc/Baichuan2); here we use the 7B checkpoint of Baichuan 2. As the vanilla Baichuan model requires extremely large computing resources, we employ the LoRA technique Hu et al. (2021) to reduce computational costs. LoRA adapts large language models by introducing low-rank modifications, fine-tuning only a small number of additional parameters rather than the full model, which makes it possible to tailor pre-trained language models to specialized or narrower domains at reduced computational cost while largely maintaining performance. Our use of ChatGPT consists of GPT-3.5-Turbo via the OpenAI API, where we provide no information other than the same prompt we use with all other models. All other settings remain at default.
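For illustration, a LoRA configuration of the kind described could be set up as follows; this sketch assumes the peft library, and the rank, alpha, dropout, and target modules are assumptions rather than the exact values used in our experiments.

```python
# Illustrative LoRA setup for Baichuan 2 (7B) using the peft library; the rank, alpha,
# dropout, and target modules shown are assumptions, not the paper's exact configuration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan2-7B-Base", trust_remote_code=True
)
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,   # illustrative hyperparameters
    target_modules=["W_pack"],               # assumed name of Baichuan's fused QKV projection
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()           # only the low-rank adapters are trained
```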

Our local experiments are carried out on a single Nvidia A40 GPU with 48GB of VRAM. When training neural models, we use the PyTorch Lightning framework to set up the training process. The training parameters are as follows: the batch size is set to 16 (excl. Baichuan w/ LoRA, where it is 8); the learning rate is 1e-4; the max source length is set to 512 and the max target length to 100; the optimizer is Adam Kingma and Ba (2014), with the $\epsilon$ of Adam set to 1e-8. The whole training process lasts for 10 epochs, and validation checking runs every half epoch. The presented results consider only the checkpoint with the best performance (i.e., lowest loss).

For all topic-to-twister settings, we use the prompt "Generate a tongue twister on the topic of '[TOPIC]'", where [TOPIC] refers to the input topic phrase from the test set, and for all style-transfer settings we use the prompt "Generate a tongue twister by rewriting the following text: [PARAPHRASE]", where [PARAPHRASE] refers to the non-literary paraphrase of a tongue twister from the test set.

5.3 Automatic Metric Suite

We present extensive automatic evaluation using the following metrics: Perplexity (PPL), BLEU (B-1/B-2/B-3/B-4) Papineni et al. (2002), ROUGE (Ro-1/Ro-2/Ro-L) Lin (2004), and BERTScore Precision, Recall, and F-Measure Zhang et al. (2020a) (BS-P/BS-R/BS-F). PPL, BLEU, and ROUGE are standard metrics in language generation to assess quality, whilst BERTScore assesses semantic similarity to a gold reference. It should be noted that, due to the nature of our task, many potential “gold standard” tongue twisters exist for any given input. Consequently, as with all creative generation works, these reference-based metrics should be interpreted cautiously and with awareness of their limitations (although we include them for completeness).

Readability

Tongue twisters are known for their intricate phoneme-level patterns and linguistic complexity, making them challenging to articulate correctly. Consequently, readability metrics can be used to indirectly measure whether or not tongue twisters, due to their complex nature, are using more complex vocabulary in order to meet strict phonemic constraints (such as when a selected phoneme only has a few obscure words that are related to the input topic). We present a range of readability metrics, incorporating the Dale-Chall Readability Index Chall and Dale (1995) (Re-D), the Flesch-Kincaid Readability Score Flesch (1948) (Re-F), the Gunning-Fog Index Gunning (1971) (Re-G), and the Automated Readability Index (ARI) Smith and Senter (1967) (Re-A). These comprise a series of complexity and readability metrics that relate to the necessary comprehension level of a text’s audience. They calculate a numerical score based on factors such as sentence length and word complexity, offering a quantitative measure of how difficult a text is to understand, and further complement our other metrics by analyzing linguistic complexity via means other than phonology. (All readability metrics were implemented from https://pypi.org/project/py-readability-metrics/.) Additionally, extremely high scores on such metrics can likewise indicate the nonsensical nature of any particularly poor generations. This is because readability metrics take into account factors such as sentence length and syllable counts, with high scores being given to indefinitely long sequences (due to the absence of sentence-final punctuation) as well as convoluted and overly complex syntactic structures and lexical choices.
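A minimal sketch of how these four readability scores can be computed, assuming the py-readability-metrics package named above (attribute names as documented by that package; treat them as assumptions):

```python
# Illustrative computation of the four readability metrics (Re-D, Re-F, Re-G, Re-A);
# note that py-readability-metrics expects texts of at least 100 words, so short
# twisters may need to be concatenated or scored in batches.
from readability import Readability   # pip install py-readability-metrics

def readability_scores(text: str) -> dict[str, float]:
    r = Readability(text)
    return {
        "Re-D": r.dale_chall().score,       # Dale-Chall Readability Index
        "Re-F": r.flesch_kincaid().score,   # Flesch-Kincaid Readability Score
        "Re-G": r.gunning_fog().score,      # Gunning-Fog Index
        "Re-A": r.ari().score,              # Automated Readability Index
    }
```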

Phonology/Phonetics

We further develop our tongue twister measures and present phonemic edit distance (PED) metrics relying on the weighted phonemic edit distance function from the PanPhon package Mortensen et al. (2016). This allows us to more directly analyze the phonetics of our generated outputs by taking into consideration the articulatory similarities and differences between different phonemes. For example, the previous phonetic metrics from Loakman, Tang, and Lin (2023), PO and Init-PO, treat all phonemes as equidistant in feature space, resulting in a transition from /s/ to /ʃ/ being viewed as having the same "quality" as a transition from /s/ to /ɡ/. In the former case, we have transitioned from a voiceless alveolar fricative to a voiceless post-alveolar fricative (therefore moving the tongue slightly further back in the mouth), whilst in the latter case the transition is between a voiceless alveolar fricative and a voiced velar plosive, resulting in a change of voicing, place, and manner of articulation. Consequently, using phonemic edit distance allows us to not punish transitions between phonemically similar sounds (which are more likely to encourage mispronunciations, such as in “she sells sea shells”) as heavily as we punish transitions between unrelated sounds. As with PO and Init-PO, we utilize this weighted edit distance on both a word-initial and overall level, with oPED taking the overall average edit distance between every phoneme transition in the tongue twister, whilst iPED calculates the edit distance between word-initial phonemes (and is, therefore, a more accurate measure of “soft” alliteration).

Formally, for iPED and oPED, let $X$ and $Y$ be two phonemes represented as feature vectors, where each vector contains binary values indicating the presence or absence of a phonological feature. The weighted feature edit distance between $X$ and $Y$ is then defined as
$$D_{\text{wfe}}(X, Y) = \min_{\text{alignments}} \sum_{i=1}^{n} w_i \cdot d_i,$$
where $n$ is the length of the longest common subsequence of $X$ and $Y$, $d_i$ is the distance between the $i$-th feature pair in the alignment, and $w_i$ is the weight assigned to the $i$-th feature. The minimum is taken over all possible alignments of the sequences $X$ and $Y$: each alignment assigns a distance $d_i$ between corresponding features, and the weighted sum of these distances is computed, with each distance multiplied by its corresponding weight $w_i$. For iPED, we calculate the mean distance across the sequence of word-initial phonemes, whilst for oPED we calculate the mean distance over all adjacent phonemes across the tongue twister. This measure provides a flexible way to quantify the similarity between sequences of phonological features, allowing for customization through feature weighting. For both oPED and iPED, the lower the score, the better (with a score of 0 corresponding to 100% overlap of a single phoneme throughout the tongue twister).
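A minimal sketch of iPED and oPED, assuming panphon's weighted feature edit distance and IPA input (the helper names are illustrative):

```python
# Illustrative sketch of iPED and oPED: the mean weighted phonemic edit distance
# between adjacent word-initial phonemes (iPED) or all adjacent phonemes (oPED),
# computed over IPA symbols with panphon. Lower is better; 0 means a single phoneme
# is repeated throughout.
import panphon.distance   # pip install panphon

dst = panphon.distance.Distance()

def _mean_adjacent_distance(phonemes: list[str]) -> float:
    pairs = list(zip(phonemes, phonemes[1:]))
    return sum(dst.weighted_feature_edit_distance(a, b) for a, b in pairs) / len(pairs)

def iped(word_initial_phonemes: list[str]) -> float:
    """e.g. iped(['ʃ', 's', 's', 'ʃ']) for 'she sells sea shells'."""
    return _mean_adjacent_distance(word_initial_phonemes)

def oped(all_phonemes: list[str]) -> float:
    """Average over every adjacent phoneme transition in the whole twister."""
    return _mean_adjacent_distance(all_phonemes)
```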

In addition to the novel metrics discussed above (iPED/oPED), we also report our two metrics from Loakman, Tang, and Lin (2023), Phoneme Overlap (PO) and Initial Phoneme Overlap (Init-PO). PO refers to the average overlap of all phonemes across tokens (#unique phonemes / #total phonemes), whereas Init-PO is the ratio of unique word-initial phonemes to the number of words (#unique word-initial phonemes / #words). Whilst these original phoneme-based metrics reward longer outputs, we argue that, all other things being equal, a longer tongue twister is better than a shorter one, as it provides more entertainment and more opportunities for mispronunciation. Perfect scores on PO and Init-PO can be achieved by the repetition of a single word; whilst this does not lead to high-quality outputs, these metrics are intended exclusively as indicators of sound characteristics, rather than an overall guide to quality. In both cases, higher levels of overlap result in lower ("better") scores, and the highest ("worst") achievable score is 1.
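For reference, a minimal sketch of these two overlap ratios is given below, again assuming each word is supplied as a list of phoneme strings:

```python
def po(phonemic_words):
    """PO: unique phonemes / total phonemes across all tokens (lower = more overlap)."""
    phonemes = [p for word in phonemic_words for p in word]
    return len(set(phonemes)) / len(phonemes) if phonemes else 0.0

def init_po(phonemic_words):
    """Init-PO: unique word-initial phonemes / number of words (lower = more overlap)."""
    initials = [word[0] for word in phonemic_words if word]
    return len(set(initials)) / len(initials) if initials else 0.0
```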

5.4 Human Evaluation Protocol

Due to the limitations of automatic evaluation metrics, and because tongue twisters are a creative domain in which articulation abilities are tested, we also perform human evaluation. In line with Loakman, Maladry, and Lin (2023), we aim to be transparent in our human evaluation of a subjective language type that may be considered a form of humorous language. In total, five evaluators were asked to rate 20 outputs from the best performing standard baselines, Flan-T5 and ByT5, in addition to Baichuan, ChatGPT (i.e., GPT-3.5-Turbo), and golden examples from TwistList 2.0, on the following criteria: Relevance (how relevant the tongue twister is given the keyword inputs), Fluency (how grammatically valid the output is), Difficulty of Articulation (how difficult a tongue twister is to say), Coherence (how much sense the output makes), and Entertainment Value (how entertaining the output is, considering sounds and semantics), alongside a holistic Overall criterion. All ratings were given on a 5-point scale where 1 equates to "poor" and 5 equates to "excellent". Importantly, we include Flan-T5 and ByT5 trained on both 2k and 13k samples to investigate whether increased training data has a measurable impact on human evaluation in addition to the patterns observed in automatic evaluation.

Evaluator Recruitment and Demographics

In total, we recruit 5 evaluators via internal notices and word of mouth. All evaluators have university-level education to a minimum of undergraduate level in a range of fields including linguistics, computer science, engineering, and animation, and therefore represent a wide range of academic backgrounds. All evaluators are native speakers of English and report no language processing issues. Evaluators were provided with a 55 GBP Amazon gift card for their combined work on dataset quality control and output evaluation, totalling approximately 4 hours of work.

Materials Provided to Evaluators

Evaluation was performed on an online platform. Participants were provided with a page of detailed instructions on how to navigate the platform and the order in which to perform the evaluation tasks. Importantly, all human evaluation responses were given on a 1-5 rating scale (where 1 equates to "poor" and 5 equates to "excellent" for how well a criterion is met), for example, "The tongue twister can be considered logically and semantically coherent." for the criterion of Coherence. Additionally, those with non-linguistic backgrounds had certain terms clarified, such as the meaning of "prosaic" in the style-transfer task. Each evaluator rated 450 samples across all evaluations, consisting of 20 samples from 7 models (i.e., Flan-T5 (2k), Flan-T5 (13k), ByT5 (2k), ByT5 (13k), Baichuan, ChatGPT (i.e., GPT-3.5-Turbo), and the gold standard) on 2 tasks (280 in total), 50 quality control examples, and 120 examples for our novel constrained decoding algorithm (20 of which used constrained base GPT-2, 20 of which came from our constrained fine-tuned GPT-2, and 20 of which came from unconstrained GPT-2 trained on 13k samples, with the equivalent setup for Baichuan).

6 Results (Topic-to-Twister and Style-Transfer)

6.1 Automatic Results

Table 7. Topic-to-twister setting: perplexity, referenced metrics (BLEU, ROUGE), and output length.

Model | PPL | B-1↑ | B-2↑ | B-3↑ | B-4↑ | Ro-1↑ | Ro-2↑ | Ro-L↑ | Length
GPT-2 (2k) | 25.04 | 0.0391 | 0.0165 | 0.0083 | 0.0043 | 5.1255 | 0.6704 | 4.8414 | 51.81
DialoGPT (2k) | 23.74 | 0.0453 | 0.0187 | 0.0091 | 0.0047 | 6.3366 | 1.0349 | 5.9843 | 57.72
BART (2k) | 3.98 | 0.0749 | 0.0256 | 0.0105 | 0.0040 | 8.1503 | 0.7608 | 7.4855 | 44.07
Flan-T5 (2k) | 2.86 | 0.1055 | 0.0545 | 0.0325 | 0.0209 | 11.2417 | 2.1528 | 10.3641 | 25.60
ByT5 (2k) | - | 0.1440 | 0.0918 | 0.0639 | 0.0477 | 6.2582 | 1.0329 | 5.9611 | 47.56
GPT-2 (4k) | 24.84 | 0.0423 | 0.0168 | 0.0076 | 0.0037 | 5.7569 | 0.6818 | 5.5077 | 53.27
DialoGPT (4k) | 24.83 | 0.0422 | 0.0161 | 0.0071 | 0.0032 | 5.9712 | 0.6915 | 5.6415 | 55.91
BART (4k) | 4.09 | 0.0685 | 0.0215 | 0.0082 | 0.0031 | 7.6023 | 0.7364 | 6.9026 | 47.30
Flan-T5 (4k) | 2.93 | 0.1090 | 0.0558 | 0.0328 | 0.0207 | 10.5343 | 1.7701 | 9.8546 | 24.25
ByT5 (4k) | - | 0.1397 | 0.0900 | 0.0634 | 0.0479 | 6.8637 | 0.9744 | 6.4977 | 48.18
GPT-2 (8k) | 28.58 | 0.0430 | 0.0143 | 0.0060 | 0.0026 | 5.6638 | 0.5385 | 5.2935 | 54.23
DialoGPT (8k) | 24.54 | 0.0458 | 0.0172 | 0.0076 | 0.0035 | 6.3555 | 0.7622 | 5.9901 | 56.59
BART (8k) | 4.33 | 0.0633 | 0.0195 | 0.0072 | 0.0029 | 6.8434 | 0.5874 | 6.1127 | 49.68
Flan-T5 (8k) | 3.11 | 0.1194 | 0.0600 | 0.0346 | 0.0217 | 10.7728 | 1.3758 | 10.0456 | 22.97
ByT5 (8k) | - | 0.1531 | 0.0951 | 0.0647 | 0.0475 | 6.2602 | 0.7142 | 6.0271 | 40.96
GPT-2 (13k) | 28.77 | 0.0440 | 0.0152 | 0.0061 | 0.0025 | 5.9412 | 0.6024 | 5.5890 | 56.59
DialoGPT (13k) | 25.15 | 0.0504 | 0.0187 | 0.0078 | 0.0034 | 6.7274 | 0.8170 | 6.3148 | 60.05
BART (13k) | 4.16 | 0.0595 | 0.0152 | 0.0050 | 0.0017 | 6.6284 | 0.5090 | 5.9523 | 49.79
Flan-T5 (13k) | 3.10 | 0.1189 | 0.0571 | 0.0319 | 0.0192 | 10.0983 | 1.1632 | 9.2976 | 23.87
ByT5 (13k) | - | 0.1609* | 0.0988 | 0.0660 | 0.0476 | 6.4857 | 0.6073 | 6.2272 | 40.05
Baichuan+LoRA (13k) | 14.54 | 0.0463 | 0.0227 | 0.0131 | 0.0080 | 6.2215 | 1.3878 | 5.9212 | 51.76
ChatGPT | - | 0.1577 | 0.1073* | 0.0788* | 0.0585* | 26.2949* | 13.2789* | 23.7763* | 24.46

Table 8. Topic-to-twister setting: BERTScore, phoneme-overlap (IPO/PO), phonemic edit distance (iPED/oPED), and readability metrics.

Model | BS-P↑ | BS-R↑ | BS-F1↑ | IPO↓ | PO↓ | iPED↓ | oPED↓ | Re-D | Re-F | Re-G | Re-A
GPT-2 (2k) | 0.7543 | 0.8265 | 0.7883 | 0.0849 | 0.0617* | 3.8717 | 5.9131 | 17.08 | 45.79 | 48.71 | 59.52
DialoGPT (2k) | 0.7680 | 0.8310 | 0.7978 | 0.0954 | 0.0660 | 3.8308 | 5.9032 | 12.79 | 16.52 | 18.54 | 21.21
BART (2k) | 0.7938 | 0.8384 | 0.8153 | 0.3183 | 0.1848 | 4.5168 | 5.9219 | 14.21 | 11.70 | 14.12 | 11.51
Flan-T5 (2k) | 0.8075 | 0.8440 | 0.8249 | 0.2129 | 0.1610 | 4.0828 | 5.9141 | 13.10 | 14.68 | 16.19 | 18.25
ByT5 (2k) | 0.7705 | 0.8301 | 0.7982 | 0.1207 | 0.1021 | 2.5398 | 5.9048 | 14.61 | 28.54 | 30.20 | 38.35
GPT-2 (4k) | 0.7648 | 0.8303 | 0.7958 | 0.0881 | 0.0683 | 4.1320 | 5.9440 | 13.51 | 23.08 | 25.14 | 28.89
DialoGPT (4k) | 0.7648 | 0.8300 | 0.7956 | 0.0984 | 0.0701 | 3.7237 | 5.9205 | 13.01 | 17.39 | 19.26 | 21.86
BART (4k) | 0.7962 | 0.8363 | 0.8156 | 0.2821 | 0.1620 | 4.7537 | 5.9693 | 12.77 | 12.46 | 14.11 | 13.69
Flan-T5 (4k) | 0.8071 | 0.8436 | 0.8244 | 0.2089 | 0.1657 | 3.8280 | 5.8899 | 13.29 | 16.63 | 17.76 | 20.28
ByT5 (4k) | 0.7636 | 0.8297 | 0.7941 | 0.1250 | 0.0987 | 2.2248 | 5.9197 | 16.0915 | 35.6761 | 36.9863 | 47.4017
GPT-2 (8k) | 0.7712 | 0.8315 | 0.7998 | 0.1224 | 0.0841 | 4.3355 | 5.9751 | 12.57 | 14.93 | 16.30 | 18.00
DialoGPT (8k) | 0.7745 | 0.8327 | 0.8022 | 0.1116 | 0.0775 | 4.0111 | 5.8952 | 11.99 | 13.42 | 15.14 | 16.83
BART (8k) | 0.7928 | 0.8343 | 0.8129 | 0.2831 | 0.1675 | 4.9463 | 5.9693 | 11.97 | 12.36 | 13.71 | 13.86
Flan-T5 (8k) | 0.8160 | 0.8462 | 0.8304 | 0.2378 | 0.1866 | 4.1542 | 5.8669 | 12.74 | 14.40 | 16.24 | 17.31
ByT5 (8k) | 0.7711 | 0.8306 | 0.7986 | 0.1341 | 0.1087 | 2.1024* | 5.8804 | 16.37 | 31.98 | 33.88 | 42.47
GPT-2 (13k) | 0.7748 | 0.8322 | 0.8021 | 0.1261 | 0.0859 | 4.2671 | 5.8985 | 12.06 | 13.46 | 15.21 | 16.64
DialoGPT (13k) | 0.7795 | 0.8336 | 0.8053 | 0.1249 | 0.0839 | 4.4665 | 5.9407 | 11.30 | 12.85 | 14.95 | 15.55
BART (13k) | 0.7925 | 0.8319 | 0.8115 | 0.2662 | 0.1489 | 4.8871 | 5.8184* | 12.14 | 11.75 | 13.57 | 13.83
Flan-T5 (13k) | 0.8181 | 0.8468 | 0.8319 | 0.2568 | 0.1996 | 4.3122 | 5.8932 | 12.31 | 12.50 | 14.03 | 14.97
ByT5 (13k) | 0.7742 | 0.8320 | 0.8009 | 0.1554 | 0.1217 | 2.5520 | 5.8374 | 15.77 | 31.22 | 34.00 | 39.33
Baichuan+LoRA (13k) | 0.7658 | 0.8259 | 0.7940 | 0.0689* | 0.0657 | 2.6752 | 5.9951 | 16.41 | 34.37 | 31.73 | 44.59
ChatGPT | 0.8401* | 0.8613* | 0.8503* | 0.3477 | 0.2991 | 4.0547 | 5.9153 | 9.83 | 9.37 | 10.53 | 11.48
Brown Corpus Prose | - | - | - | 0.4870 | 0.2275 | 5.1431 | 5.9561 | 11.09 | 13.15 | 15.56 | 16.60

Topic-to-Twister

We present the results for automatic evaluation in the topic-to-twister task setting in Table 7 and Table 8. Firstly, regarding the reference-based metrics (BLEU, ROUGE, and BERTScore), we see clear performance differences across our chosen models. On average across almost all metrics, we see performance ordered from worst to best as GPT-2, DialoGPT, BART, ByT5, and Flan-T5 for our fine-tuned models. However, we see that Baichuan's performance is variable, mostly outperforming BART and underperforming Flan-T5 and ByT5. However, Baichuan performs worse than BART on a range of metrics (B-1, Ro-1, Ro-L, BS-P, BS-R, BS-F1) when considering the same training data amount (13k). Flan-T5 and ByT5 alternate in performance, with the latter tokenizer-free model performing better on BLEU-based metrics, but worse on ROUGE. When specifically considering changes alongside an increase in training data, in Table 7 we see little to no improvement in reference-based metrics within the topic-to-twister setting, with performance decreasing as training data increases in some cases. However, it is pertinent to mention that reference-based metrics are imperfect for the task of creative language generation due to the one-to-many dilemma prevalent in many NLG tasks Gupta et al. (2019), whereby there are numerous potential tongue twisters to generate from any given input topic. This is particularly true when considering the TwistList 2.0 dataset, where the topics are often only related to the input phrase in a high-level conceptual fashion (as we do not enforce the generation of the topic phrase within the tongue twister itself). Finally, ChatGPT, when prompted in the same manner as our other models, demonstrates significantly higher performance than any of our fine-tuned models.

Regarding the readability and phonemic metrics presented in Table 8, we see a range of patterns. Firstly, for IPO (formerly referred to as Init-PO) and PO, we see the GPT-2 based models "outperform" BART and the T5 models almost across the board. However, naive phoneme-overlap-based metrics do not take into account more sophisticated phonemic characteristics. When considering our new metrics based on phonemic edit distance (iPED and oPED), we see less distinction across the presented models. However, one notable finding is that Baichuan performs significantly better than any of the other models trained on 13k samples on the word-initial IPO and iPED metrics, suggesting high levels of word-initial phonemic overlap across tokens when compared to other models. Likewise, we see that the tokenizer-free ByT5 model outperforms Flan-T5 by a significant margin on the phoneme-based metrics, suggesting that the finer-grained tokenization of ByT5 is preferable for identifying the sound patterns implicitly encoded in tongue twisters via grapheme combinations. ChatGPT, on the other hand, scores the highest (i.e., worst) on the IPO and iPED metrics. This, however, is not in itself indicative of poor tongue twisters, but rather of the model's focus on producing high-quality, comprehensible, and grammatical text, therefore being less likely to fall victim to degenerate patterns of repeating the same word over and over to achieve overlap. As with the reference-based metrics discussed earlier, raw reliance on phoneme-based metrics can also be misleading. For example, prior works have demonstrated that good scores (i.e., lower values) can be achieved on word-initial-based metrics simply by repeating the same word, rather than producing a complex and entertaining tongue twister. Regarding readability scores, we see GPT-2 and Baichuan present high scores (i.e., high reading difficulty) when compared to our other models. However, high readability metrics can be indicative of numerous things, including desirable behavior (using complex structures and sophisticated multi-syllabic vocabulary) as well as behavior that is not necessarily desirable (e.g., producing gratuitously long sentences). It is for this reason that we exclude an indicator of the preferred metric direction for our readability metrics, yet include the scores for completeness.

Table 9. Style-transfer setting: perplexity, referenced metrics (BLEU, ROUGE), and output length.

Model | PPL | B-1↑ | B-2↑ | B-3↑ | B-4↑ | Ro-1↑ | Ro-2↑ | Ro-L↑ | Length
GPT-2 (2k) | 15.59 | 0.1055 | 0.0738 | 0.0534 | 0.0397 | 15.2637 | 6.2852 | 14.8342 | 62.53
DialoGPT (2k) | 14.54 | 0.1001 | 0.0688 | 0.0491 | 0.0359 | 14.4003 | 5.7688 | 13.9574 | 67.65
BART (2k) | 1.96 | 0.2498 | 0.1797 | 0.1324 | 0.0997 | 25.4414 | 11.3534 | 24.8061 | 36.87
Flan-T5 (2k) | 1.64 | 0.5597 | 0.4426 | 0.3591 | 0.2964 | 48.7184 | 24.3008 | 47.9246 | 15.18
ByT5 (2k) | - | 0.6770 | 0.5847 | 0.5210 | 0.4731 | 46.8991 | 22.3860 | 46.0446 | 16.53
GPT-2 (4k) | 15.15 | 0.1161 | 0.0837 | 0.0621 | 0.0472 | 16.7069 | 7.4608 | 16.3085 | 62.70
DialoGPT (4k) | 13.96 | 0.1099 | 0.0773 | 0.0565 | 0.0425 | 15.8087 | 6.7687 | 15.3450 | 65.72
BART (4k) | 2.00 | 0.2561 | 0.1859 | 0.1378 | 0.1041 | 26.3945 | 12.0011 | 25.8563 | 37.72
Flan-T5 (4k) | 1.62 | 0.5500 | 0.4351 | 0.3532 | 0.2917 | 48.8662 | 24.6492 | 48.1269 | 14.99
ByT5 (4k) | - | 0.7177 | 0.6242 | 0.5596 | 0.5111 | 49.1321 | 24.7326 | 48.2396 | 15.90
GPT-2 (8k) | 14.26 | 0.1152 | 0.0833 | 0.0622 | 0.0476 | 16.7724 | 7.5582 | 16.3746 | 63.30
DialoGPT (8k) | 13.19 | 0.1096 | 0.0772 | 0.0572 | 0.0438 | 15.7687 | 7.0081 | 15.248 | 67.10
BART (8k) | 1.95 | 0.2448 | 0.1803 | 0.1358 | 0.1043 | 30.1371 | 14.4970 | 29.4553 | 37.50
Flan-T5 (8k) | 1.63 | 0.5699 | 0.4571 | 0.3756 | 0.3135 | 51.1949 | 26.9544 | 50.4617 | 14.91
ByT5 (8k) | - | 0.7124 | 0.6224 | 0.5605 | 0.5136 | 50.4737 | 25.9246 | 49.5851 | 15.85
GPT-2 (13k) | 13.55 | 0.1229 | 0.0901 | 0.0683 | 0.0529 | 17.9146 | 8.4624 | 17.4801 | 60.71
DialoGPT (13k) | 12.84 | 0.1114 | 0.0796 | 0.0597 | 0.0462 | 16.2061 | 7.3163 | 15.7076 | 66.90
BART (13k) | 1.82 | 0.2298 | 0.1717 | 0.1309 | 0.1014 | 34.0662 | 16.9713 | 33.3075 | 37.78
Flan-T5 (13k) | 1.61 | 0.5832 | 0.4704 | 0.3885 | 0.3258 | 52.7077 | 28.6604 | 51.8808 | 14.82
ByT5 (13k) | - | 0.7356* | 0.6465* | 0.5846* | 0.5376* | 51.9886 | 27.7015 | 51.1895 | 15.55
Baichuan+LoRA (13k) | 5.15 | 0.6046 | 0.5006 | 0.4229 | 0.3618 | 60.1989* | 37.1423* | 59.1040* | 15.27
ChatGPT | - | 0.3288 | 0.2167 | 0.1491 | 0.1057 | 39.3536 | 13.9654 | 34.9884 | 17.43

Table 10. Style-transfer setting: BERTScore, phoneme-overlap (IPO/PO), phonemic edit distance (iPED/oPED), and readability metrics.

Model | BS-P↑ | BS-R↑ | BS-F1↑ | IPO↓ | PO↓ | iPED↓ | oPED↓ | Re-D | Re-F | Re-G | Re-A
GPT-2 (2k) | 0.8170 | 0.8662 | 0.8405 | 0.1264 | 0.0830 | 4.7213 | 5.9688 | 10.66 | 7.45 | 9.52 | 6.94
DialoGPT (2k) | 0.8172 | 0.8634 | 0.8394 | 0.1177* | 0.0786* | 4.8873 | 5.9570 | 9.92 | 7.66 | 9.76 | 7.74
BART (2k) | 0.8162 | 0.9031 | 0.8570 | 0.5112 | 0.2474 | 4.5594 | 5.8120 | 14.76 | 12.21 | 14.13 | 11.47
Flan-T5 (2k) | 0.9281 | 0.9275 | 0.9277 | 0.5465 | 0.4290 | 4.7362 | 5.9696 | 10.51 | 6.08 | 7.77 | 5.07
ByT5 (2k) | 0.9166 | 0.9171 | 0.9166 | 0.4392 | 0.3949 | 4.2451 | 5.9703 | 10.78 | 6.81 | 8.72 | 5.90
GPT-2 (4k) | 0.8247 | 0.8726 | 0.8477 | 0.1221 | 0.0834 | 4.7674 | 5.9775 | 10.88 | 7.02 | 8.93 | 6.15
DialoGPT (4k) | 0.8213 | 0.8668 | 0.8432 | 0.1280 | 0.0847 | 4.8948 | 5.9825 | 10.34 | 7.83 | 10.16 | 7.35
BART (4k) | 0.8157 | 0.9026 | 0.8565 | 0.4028 | 0.2267 | 3.8435 | 5.7562* | 16.74 | 14.10 | 15.72 | 14.19
Flan-T5 (4k) | 0.9262 | 0.9266 | 0.9263 | 0.5362 | 0.4290 | 4.6793 | 5.9743 | 10.72 | 6.10 | 7.83 | 5.05
ByT5 (4k) | 0.9198 | 0.9199 | 0.9197 | 0.4436 | 0.4011 | 4.2007 | 5.9553 | 10.47 | 6.40 | 8.03 | 5.59
GPT-2 (8k) | 0.8231 | 0.8714 | 0.8463 | 0.1188 | 0.0821 | 4.7338 | 5.9779 | 10.83 | 7.30 | 9.23 | 6.58
DialoGPT (8k) | 0.8183 | 0.8638 | 0.8402 | 0.1329 | 0.0876 | 4.9483 | 5.9787 | 10.10 | 8.23 | 10.47 | 8.03
BART (8k) | 0.8073 | 0.9073 | 0.8538 | 0.4451 | 0.2620 | 5.1885 | 5.8138 | 14.46 | 13.17 | 13.46 | 11.00
Flan-T5 (8k) | 0.9289 | 0.9290 | 0.9288 | 0.5262 | 0.4289 | 4.6310 | 5.9707 | 10.67 | 5.95 | 7.67 | 4.99
ByT5 (8k) | 0.9208 | 0.9219 | 0.9212 | 0.4549 | 0.4063 | 4.2830 | 5.9598 | 10.68 | 6.52 | 8.04 | 5.85
GPT-2 (13k) | 0.8258 | 0.8738 | 0.8489 | 0.1256 | 0.0870 | 4.7119 | 5.9536 | 11.65 | 8.59 | 10.84 | 7.73
DialoGPT (13k) | 0.8216 | 0.8676 | 0.8437 | 0.1348 | 0.0891 | 5.0811 | 5.9733 | 10.51 | 9.01 | 11.03 | 8.79
BART (13k) | 0.7903 | 0.9141 | 0.8470 | 0.4201 | 0.3216 | 4.0013* | 6.1303 | 17.97 | 13.25 | 16.91 | 9.82
Flan-T5 (13k) | 0.9311 | 0.9309 | 0.9309 | 0.5226 | 0.4288 | 4.5770 | 5.9681 | 10.64 | 5.98 | 7.63 | 5.06
ByT5 (13k) | 0.9236 | 0.9246 | 0.9240 | 0.4619 | 0.4128 | 4.2982 | 5.9634 | 10.73 | 6.35 | 7.93 | 5.67
Baichuan+LoRA (13k) | 0.9442* | 0.9411* | 0.9425* | 0.4908 | 0.4236 | 4.4789 | 5.9771 | 9.98 | 5.60 | 7.15 | 4.62
ChatGPT | 0.8851 | 0.8898 | 0.8873 | 0.5495 | 0.3948 | 4.7725 | 5.9445 | 10.37 | 7.67 | 9.06 | 8.25
Brown Corpus Prose | - | - | - | 0.4870 | 0.2275 | 5.1431 | 5.9561 | 11.09 | 13.15 | 15.56 | 16.60

Style-Transfer

We present the results for automatic evaluation in the style-transfer task setting in Table 9 and Table 10. Overall, we see much the same pattern as with the topic-to-twister setting, with a performance ordering of DialoGPT, GPT-2, BART, ByT5, and Flan-T5 across our referenced metrics (again, with our T5-based models alternating in ranking). However, unlike in the previous setting, the style-transfer setting appears to favor Baichuan, which presents the highest scores on ROUGE-based referenced metrics (when considering models also trained on 13k samples), whilst ByT5 performs the best on BLEU. Additionally, scores across the board are higher than in the topic-to-twister setting, but this is to be expected, as the style-transfer task setting provides a structure for the generated tongue twister based on the length and word choices present in the original. Consequently, increased amounts of overlap between the desired output and the gold reference are expected due to not all words requiring modification. In contrast to the topic-to-twister setting, however, we do not see ChatGPT outperform all models; rather, it is beaten by Baichuan and Flan-T5 in most cases. In terms of the performance difference on these metrics as the amount of available training data is increased, unlike the topic-to-twister setting we see a clearer growth in performance on reference-based metrics alongside training data in the majority of cases. Regarding the readability and phonemic metrics presented in Table 10, we again see similar patterns, with GPT-2 based models scoring well on the naive phoneme-based metrics (IPO/PO), but all models performing similarly on the more informed iPED/oPED measures. Regarding readability, scores overall are lower than in the topic-to-twister setting, suggesting more legible text. On one hand, this may indicate that the style has failed to transfer, with the paraphrase representing a well-written standard non-literary text (and therefore the model has effectively resorted to auto-encoding). On the other hand, this may also be an artifact of following the original structure of the non-literary paraphrase, therefore avoiding unnaturally long sentences and nonsensical outputs.

6.2 Human Evaluation

Table 11. Human evaluation results (scores from 1 to 5) for the trained topic-to-twister setting.

Criterion | Flan-T5 (2k) | Flan-T5 (13k) | ByT5 (2k) | ByT5 (13k) | Baichuan (13k) | ChatGPT | Golden
Relevance | 2.077∗∗∗ | 1.625∗∗∗ | 2.180∗∗ | 2.070∗∗ | 1.688∗∗ | 4.824∗∗ | 4.647∗∗
Articulation | 1.800∗∗ | 1.882∗∗ | 3.050∗∗ | 3.290∗∗∗ | 1.176∗∗ | 2.667∗∗ | 3.375
Fluency | 2.000∗∗ | 3.462∗∗ | 2.290∗∗ | 3.380∗∗ | 1.450∗∗ | 4.632∗∗ | 4.944∗∗
Coherence | 1.200∗∗ | 2.133∗∗ | 1.610∗∗ | 1.930∗∗ | 1.118∗∗ | 4.333∗∗ | 4.444∗∗∗
Entertainment | 1.200 | 1.833 | 1.620∗∗ | 2.070∗∗ | 1.000 | 3.267 | 3.077∗∗
Overall | 1.063 | 1.888 | 1.800∗∗ | 2.300∗∗ | 1.316∗∗ | 3.538∗∗ | 3.909∗∗

Table 12. Human evaluation results (scores from 1 to 5) for the style-transfer setting.

Criterion | Flan-T5 (2k) | Flan-T5 (13k) | ByT5 (2k) | ByT5 (13k) | Baichuan (13k) | ChatGPT | Golden
Relevance | 4.467 | 4.714∗∗ | 4.090 | 4.160 | 4.882∗∗ | 4.692∗∗ | 5.000
Articulation | 1.462∗∗ | 2.231∗∗ | 3.710∗∗ | 3.520 | 2.250∗∗ | 2.471∗∗ | 3.313∗∗
Fluency | 4.611∗∗ | 4.895∗∗∗ | 4.150∗∗ | 4.270∗∗ | 4.800 | 5.000∗∗ | 4.950∗∗
Coherence | 4.188∗∗ | 4.733∗∗ | 3.500∗∗ | 3.550∗∗ | 4.375∗∗ | 3.929∗∗ | 4.786
Entertainment | 1.733 | 2.000 | 3.400∗∗ | 3.240 | 2.846 | 2.583 | 3.455∗∗
Overall | 2.308 | 3.000 | 3.500∗∗ | 3.510 | 3.333 | 3.500 | 3.941∗∗

The results of human evaluation for the topic-to-twister setting are presented in Table 11, and the results for the style-transfer setting are in Table 12.

Topic-to-Twister

Firstly, for the topic-to-twister setting in Table 11, we can see that the highest scores for all criteria go to the human-authored "golden" samples, or those generated by ChatGPT. With a rating of "3" considered the midpoint for "neither agree nor disagree" with the given criteria statements, it is evident that our finetuned unconstrained generation models struggle with the open-ended topic-to-twister task setting. However, when investigating the finetuned model performance, we do see some patterns start to emerge that indicate the benefit of having such an extensive dataset as TwistList 2.0. For instance, Flan-T5 benefits from additional training samples, particularly regarding the metrics of Fluency and Coherence, and moderately in Entertainment and the holistic Overall rating. These findings for Fluency and Coherence are intuitive, as additional training samples increase the likelihood of generating grammatical and semantically coherent outputs due to the increased training data through which to learn these patterns. On the other hand, Articulation and Entertainment refer to more creative-language-specific criteria that are more abstract, and consequently difficult to learn from the training data. Relevance is the only metric shown to decrease when moving from 2k training samples to 13k, and we hypothesize that this is due to the increased wealth of training data containing a more abstract link between the input topic and the generation, therefore decreasing the likelihood of the input words being directly present in the output (which is a straightforward way of performing well on Relevance metrics). Overall, however, we see that ByT5 is preferable to Flan-T5, outperforming it in human evaluation on the criteria of Relevance, Articulation, Entertainment, and Overall, but underperforming in regard to Fluency and Coherence in the 13k instance. This additionally sheds some light on how humans perceive quality tongue twisters, with articulation difficulty being a more integral feature for a tongue twister than grammatical validity and semantic coherence. Finally, we see Baichuan struggle immensely with the topic-to-twister task setting (which is explored further in §9), frequently opting to repeat the input topic phrase continuously, therefore artificially increasing Relevance scores, but performing poorly on all other metrics.

Style-Transfer

On the other hand, we see better performance across the board for the style-transfer setting as facilitated by the additional high-quality paraphrases we include in TwistList 2.0. We hypothesize that the reason for this is that style-transfer requires already having access to a well-formed input, and additionally acts as an extended, very prescriptive form of an input topic, where the entire tongue twister is predefined in structure and semantics. As a result, we see Baichuan outperform ChatGPT on the criteria of Entertainment and Relevance, whilst Flan-T5 trained on the 13k split outperforms ChatGPT regarding semantic coherence of the output. Interestingly, we observe that ByT5 trained on only 2k examples performs more closely to the 13k version than is seen in Flan-T5, suggesting the alternative tokenization approach allows ByT5 to learn the relevant patterns from fewer examples. Importantly, we see Baichuan perform much better in the style-transfer task than in the topic-to-twister setting, suggesting that Baichuan requires much more explicit instruction to generate high-quality outputs and understand a given prompt.

6.3 Human vs. Automatic Metrics

Table 13. Spearman correlations between reference-free automatic metrics and human judgments.

Criterion | Re-D | Re-F | Re-G | Re-A | IPO | PO | iPED | oPED
Relevance | -.395 | -.442 | -.423 | -.494 | .394 | .496 | -.004 | .104
Articulation | -.218 | -.271 | -.224 | -.218 | -.063 | -.087 | -.226 | .110
Fluency | -.400 | -.590 | -.564 | -.630 | .516 | .674 | -.001 | -.013
Coherence | -.320 | -.482 | -.551 | -.482 | .486 | .622 | -.003 | .064
Entertainment | .301 | -.287 | -.267 | -.278 | .151 | .262 | -.077 | -.106
Overall | .342 | .386 | .372 | .386 | .211 | .361 | -.129 | .055

Human-Machine Correlation

To assess the effectiveness of our selected automatic metrics, we calculated the Spearman correlations between our 8 reference-free metrics, comprising readability (i.e., Re-D, Re-F, Re-G, and Re-A) and phonetic metrics (i.e., IPO, PO, iPED, and oPED), and the human evaluation ratings for all criteria (i.e., Relevance, Articulation, Fluency, Coherence, Entertainment, and Overall). Correlations are presented in Table 13. To evaluate the predictive power of the automatic metrics for human ratings, we developed a standard multiple linear regression model for each criterion. The model is defined as follows:

$y_i = \beta_0 + \sum_{j=1}^{p} \beta_j X_{ij} + \epsilon_i$   (1)

where $y_i$ represents the human score, $\beta_0$ is the intercept, $\beta_j$ are the coefficients for the $p$ automatic metrics, and $\epsilon_i$ is the error term. No regularization was applied to the model. The $R^2$ values, which indicate the proportion of variance explained by the automatic metrics, were validated through 5-fold cross-validation to ensure the stability and significance of the predictions. The average $R^2$ values across the folds were as follows: Fluency ($R^2$ = .553), Coherence ($R^2$ = .496), Relevance ($R^2$ = .393), Overall ($R^2$ = .260), Articulation ($R^2$ = .241), and Entertainment ($R^2$ = .152). Furthermore, all coefficients in the models were found to be statistically significant, with p-values below 0.01 ($\alpha$ = 0.01), indicating that each of the automatic metrics significantly contributes to the prediction of human ratings.
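A minimal sketch of this analysis is given below, assuming X is an (n_samples × 8) array of reference-free metric values and y holds the human ratings for a single criterion; the unregularized linear model and 5-fold cross-validated R² mirror the setup described above.

```python
# Sketch of the metric-to-human regression; assumes X (n_samples x 8 metrics)
# and y (human ratings for one criterion) are already assembled.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def mean_cv_r2(X: np.ndarray, y: np.ndarray, folds: int = 5) -> float:
    """Average R^2 of an unregularized linear model over k-fold cross-validation."""
    scores = cross_val_score(LinearRegression(), X, y, cv=folds, scoring="r2")
    return float(scores.mean())
```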

Overall, we intuitively see the Entertainment criterion being the hardest to predict due to its inherent subjectivity. On the other hand, our naive phonemic metrics (IPO and PO) demonstrate moderate correlations with Relevance, Fluency, and Coherence. This is because high relevance is often seen in examples where the input topic is simply repeated, which results in high levels of phoneme overlap. Similarly, high fluency scores are given to outputs that better reflect standard non-literary text, which consequently score lower on IPO and PO, and less coherent outputs are often produced by repeating the same word. Moreover, our "informed" phonemic metrics (iPED/oPED) show little correlation with human ratings on these criteria, but iPED demonstrates evidence of a correlation with articulatory difficulty. However, articulatory difficulty still remains challenging to predict, even from these phonemic metrics. We hypothesize that this may be related to the "visual tongue twister" effect McCutchen and Perfetti (1982), as human evaluation was performed online and asynchronously, where we cannot force participants to speak each tongue twister aloud. Consequently, we hypothesize that the naive metrics may correlate better with human judgments because human ratings were confounded by the visual impact of the tongue twister (for example, seeing a particular grapheme repeated numerous times representing the same sound). Furthermore, there are additional effects from other factors such as fluency and coherence affecting articulation, due to violating expectations and reducing legibility. Consequently, iPED/oPED should be used as indicators of text resembling a tongue twister, but not as a holistic metric of overall quality. It is clear from comparison with standard non-literary text that the phonetic metrics are able to differentiate the specific characteristics of tongue twisters from those of standard text, but phonemic complexity is not the sole contributor to the perception of articulatory difficulty.

GPT-4o "Human" Evaluation

We additionally perform evaluation on the same samples presented to human evaluators in the unconstrained topic-to-twister and style-transfer settings using GPT-4o (specifically, gpt-4o-2024-05-13 via the API), prompting the model with the same rubric presented to human evaluators (see Appendix 10). Overall, we see moderate-to-high correlations between model-assigned and human-assigned scores for all criteria (all significant at α = .01): Relevance (ρ = .671), Articulation (ρ = .653), Fluency (ρ = .768), Coherence (ρ = .658), Entertainment (ρ = .716), and Overall (ρ = .776). This demonstrates that our human evaluation rubric is clear and well defined, and can be effectively followed by state-of-the-art LLMs.
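A hedged sketch of this LLM-as-judge setup is shown below; the rubric text and the downstream parsing of the model's 1-5 ratings are hypothetical stand-ins for the actual evaluation materials.

```python
# Sketch of the GPT-4o rating step and the agreement check; the rubric text
# and score parsing are placeholders, not the paper's exact materials.
from openai import OpenAI
from scipy.stats import spearmanr

client = OpenAI()

def rate_with_gpt4o(rubric_text: str, tongue_twister: str) -> str:
    """Ask GPT-4o to rate one sample against the human-evaluation rubric."""
    response = client.chat.completions.create(
        model="gpt-4o-2024-05-13",
        messages=[
            {"role": "system", "content": rubric_text},
            {"role": "user", "content": tongue_twister},
        ],
    )
    return response.choices[0].message.content  # parsed into 1-5 ratings downstream

def judge_agreement(model_scores, human_scores):
    """Spearman correlation between GPT-4o ratings and mean human ratings."""
    return spearmanr(model_scores, human_scores)
```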

7 Tongue Twister Generation with Phoneme Aware Constrained Decoding

In the following section, we present work on a constrained decoding-based approach to tongue twister generation. In contrast to §5, here we focus exclusively on the topic-to-twister task setting due to the text-continuation nature of our decoding approach. The benefit of this algorithm, in contrast to the finetuned models presented in §5, is that constrained decoding guarantees that only desirable tokens appear in the output due to the layering of hard phoneme-based constraints as previously discussed. Additionally, because this system interacts directly with language model token predictions, the process can be applied to any autoregressive language model, including both pre-trained base models and further fine-tuned models.

7.1 Task Definition

For a given input prompt, we aim to generate a tongue twister $T$, whereby $T$ comprises a sequence of words $\{w_1, w_2, \ldots, w_n\}$. In contrast with the previous section, in this task setting $T$ is a continuation of the input prompt that we generate token by token, evaluating the language model's next-token predictions at each step. As per §5, the generated output must satisfy the following constraints: (1) the output should be semantically related to the input topic phrase; (2) the output should show maximal levels of phonemic overlap across tokens; and (3) the output should be grammatically valid.

7.2 Phoneme-Aware Constrained Decoding Module (PACD)

Algorithm 1: Phoneme-Aware Constrained Decoding (PACD)

1:  for each s in S do
2:      ph1 = G2P(topic in s)[0]
3:      ph2 = argmin(PED(ph1, WIP \ {ph1}))
4:      while len(s*) < max_length do
5:          Retrieve next word probabilities P = {p1, ..., pn} from LM(s*)
6:          for each rank, p in enumerate(P) do
7:              if p in F and rank ≤ function_window then
8:                  append p to s*
9:                  break
10:             end if
11:             candidates = []
12:             if len(p) > min_stem_length and G2P(p)[0] == ph1 or ph2 then
13:                 append p to candidates
14:                 temp_prompt = s* + p
15:                 for i in range(4) do
16:                     next_token = LM(temp_prompt)
17:                     if next_token.isalpha() and next_token[0] != " " then
18:                         longest = "".join(candidates)
19:                         longest += next_token
20:                         append longest to candidates
21:                     end if
22:                     if next_token[-1] == " " then
23:                         break
24:                     end if
25:                 end for
26:                 temp_prompt = ""
27:                 for candidate in candidates.sort(longest-to-shortest) do
28:                     if candidate ∈ D and count(candidate in s*) < max_repetition then
29:                         append candidate to s*
30:                     end if
31:                 end for
32:             end if
33:         end for
34:     end while
35: end for

We present an outline of our Phoneme-Aware Constrained Decoding algorithm (PACD, intended to be pronounced as "packed") in Algorithm 1. To summarize, for every starting prompt s in our test set S, we firstly perform grapheme-to-phoneme conversion (G2P) with the g2p-en package and extract the initial phoneme of the first word in the topic phrase part of s, in order to increase the likelihood of retrieving a semantically related output (with this phoneme denoted ph1). However, where the selected phoneme is not a valid consonant, we randomly select a phoneme from a list of phonotactically legal word-initial consonant phonemes for English, WIP. Following this, we calculate the weighted phonemic edit distance (PED) between ph1 and all other legal word-initial phonemes and select the lowest scoring (i.e., most similar) as our secondary phoneme ph2 (analogous to the system in §4 for TwisterLister). Following this, we autoregressively generate new tokens up to the limit defined by max_length based on numerous criteria. To do this, we feed the starting prompt s into our language model of choice, LM, and retrieve the next-token probabilities P. Then, in descending order (i.e., most-probable to least-probable next token) we iterate through predictions p ∈ P until specific criteria are met. Firstly, to increase the likelihood of generating grammatical output, if a function word (such as an article, pronoun, conjunction, preposition, or auxiliary verb) from our function word list F is within the range defined by function_window, we allow it to generate. For example, when generating the first word, if function_window is set to 3 and "The" is the token with the 2nd highest probability, we allow it to generate as it is within our allowed range of top-3.

To account for subword tokenization in non-function words, we next check that the predicted token is longer than the limit defined by min_stem_length. We do this as we find that not enforcing this limit results in a reliance on outputting the grapheme that most closely corresponds to a desired phoneme (rather than a sequence that better resembles a morpheme), significantly increasing inference time and decreasing output quality. We then employ our phoneme constraints by feeding the predicted word stem p into a grapheme-to-phoneme model (G2P) and comparing the first phoneme to ph1 and ph2, continuing if it matches either. Consequently, at this stage we have ensured that generated words are either closed-class grammatical function words or start with one of the two phonologically similar sounds selected in lines 2-3 of Algorithm 1. Following this, we optionally engage the subword loop (lines 15-25 in Algorithm 1). Within this loop, we temporarily append our candidate word stem to the current prompt s* to create temp_prompt. We then feed temp_prompt to the language model LM and take only the token with the highest probability, next_token. We then check that next_token is alphabetical and does not start with whitespace (as this would indicate the model is predicting a new word, rather than a continuation). If this is the case, we append it to temp_prompt and perform the loop again (up to 4 times, allowing words that consist of 1-5 subwords). Within this loop, we build a list of potential words, candidates, by appending the concatenated subwords (e.g., ["anti", "antidis", "antidisestablish", "antidisestablishment", "antidisestablishmentarian"]). We terminate this loop early if a predicted next_token ends with whitespace, as this indicates that the model has predicted the end of the current word. Once we have our list of candidates, we iterate through them from the longest (i.e., consisting of the most subwords) to the shortest, assessing the following criteria.

Firstly, we check that each candidate ∈ candidates is longer than or equal to (in characters) the length defined by min_word_length and that candidate ∈ D, where D is the English dictionary as defined by the Enchant Python package (available at https://pypi.org/project/pyenchant/). Once this check is complete, we ensure that we have not already generated this specific candidate more times than permitted by max_repetition, in order to avoid falling into the perpetual loop of repeating the same words that language models are prone to (see §9). Here we use s* to denote the starting prompt s with newly generated tokens appended, which is to say that s* − s equals only the LM-generated words. Finally, if no candidate meets the criteria, we increase rank by looking at the next-best prediction in P until the vocabulary is exhausted. In the case where no suitable candidates exist in the vocabulary, we simply move on to the next s.

To illustrate the algorithm with an example, consider S to consist of two input topics s: [fun, sadness]. For the first example ("fun"), we select ph1 by performing G2P on the topic, returning /fʌn/, and select the word-initial /f/ as ph1. We then decide ph2 by selecting the phoneme with the lowest phonemic edit distance to /f/, returning /v/ as ph2. Following this, we feed the full prompt "Generate a tongue twister on the topic of 'fun'. " to the language model and retrieve the next-token probabilities P. For example, P could be {1: The, 2: It, 3: A, ...}, where the set is the length of the decoded vocabulary. We then iterate through the predictions until a word meets our criteria. For instance, the most likely continuation, "The", is in the function word list F and within the function_window due to being at rank 1, so we append it to the prompt and now have "Generate a tongue twister on the topic of 'fun'. The", which we now denote s*. We then feed this extended prompt into the language model to retrieve the second word, where P may look like {1: grey, 2: big, 3: fun}. Here, the options at ranks 1 and 2 ("grey" and "big") do not start with ph1 or ph2 and are also not function words in F. However, the word at rank 3, "fun", is transcribed phonemically as /fʌn/, where the initial phoneme /f/ matches ph1, so we enter the subword loop, find the candidate "funniest", allow it to generate, and append it to the prompt, resulting in "Generate a tongue twister on the topic of 'fun'. The funniest". We repeat this until we have generated new tokens up to max_length, and then start the process again for the remaining topic in S, which is "sadness". We additionally make sure that we do not generate the same word more than once, as determined by max_repetition, and we do not generate words shorter in length than min_word_length.
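The following is a condensed, hedged sketch of the phoneme-selection step and one full-word decoding step (omitting the subword loop), assuming a HuggingFace causal LM; the small ARPAbet-to-IPA mapping and the word-initial phoneme list are illustrative placeholders rather than the full inventories used in the paper.

```python
# Condensed sketch of PACD (full-word variant); ARPABET_TO_IPA and
# WORD_INITIAL_PHONEMES are illustrative subsets, not the full inventories.
import torch
import enchant
import panphon.distance
from g2p_en import G2p
from nltk.corpus import stopwords  # assumes the NLTK stopwords data is downloaded

g2p = G2p()
dst = panphon.distance.Distance()
dictionary = enchant.Dict("en_US")
FUNCTION_WORDS = set(stopwords.words("english"))

ARPABET_TO_IPA = {"F": "f", "V": "v", "S": "s", "SH": "ʃ", "G": "ɡ"}  # partial, for illustration
WORD_INITIAL_PHONEMES = ["f", "v", "s", "ʃ", "ɡ"]                     # partial, for illustration

def arpabet_to_ipa(phone):
    """Map one ARPAbet phone (stress digits stripped) to IPA, best effort."""
    return ARPABET_TO_IPA.get(phone.rstrip("012"), phone.lower())

def pick_target_phonemes(topic):
    """ph1 = initial phoneme of the topic's first word; ph2 = its nearest legal neighbour."""
    ph1 = arpabet_to_ipa(g2p(topic.split()[0])[0])
    others = [p for p in WORD_INITIAL_PHONEMES if p != ph1]
    ph2 = min(others, key=lambda p: dst.weighted_feature_edit_distance(ph1, p))
    return ph1, ph2

def next_word(model, tokenizer, prompt, generated, ph1, ph2,
              function_window=1, min_word_length=3, max_repetition=1, top_k=2500):
    """Return the highest-ranked next word satisfying the PACD constraints, or None."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    for rank, token_id in enumerate(torch.topk(logits, top_k).indices.tolist()):
        word = tokenizer.decode([token_id]).strip()
        if not word.isalpha():
            continue
        if word.lower() in FUNCTION_WORDS and rank < function_window:
            return word                                   # grammatical glue is always allowed
        if len(word) < min_word_length:
            continue
        if arpabet_to_ipa(g2p(word)[0]) not in (ph1, ph2):
            continue                                      # wrong word-initial phoneme
        if not dictionary.check(word):
            continue                                      # not a dictionary word
        if generated.count(word.lower()) >= max_repetition:
            continue                                      # block degenerate repetition
        return word
    return None
```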

7.3 Constrained Models

To demonstrate the effectiveness of our decoding module, we utilize two decoder-only autoregressive language models as our LM: GPT-2 Radford et al. (2019) and Baichuan Yang et al. (2023), on top of which our module is applied. In addition, we investigate to what extent fine-tuning a model towards tongue twister generation is beneficial, by additionally using our finetuned GPT-2 and Baichuan from §5, referred to herein as GPT-2 (13k) and Baichuan (13k). Importantly, we assess only the versions of these models fine-tuned on the largest amount of data (13k).

Regarding the other settings for PACD, we set max_length to 30 (a sensible midpoint generation length ascertained from Table 7), function_window to 1, and implement F as the NLTK stopwords list with all punctuation removed. Additionally, we set min_stem_length to 2 and min_word_length to 3 (as all standard 1- or 2-letter words {I, a, I'm, am, at, in, up, on} ∈ F). Additionally, max_repetition is set to 1, in effect banning wholesale repetition (though allowing plural/singular forms and case variants) to avoid the patterns seen in standard autoregressive models. We use the g2p-en package for our G2P model, as in the creation of TwisterLister (§4.4). Finally, we only decode the top 2500 predictions at each timestep rather than the entire vocabulary in order to speed up inference significantly, as it is rare to select tokens ranked below this point. Importantly, due to the computational cost of our algorithm, we load Baichuan using 8-bit quantization to make inference possible. Overall, for GPT-2, PACD takes approximately 5-10 seconds to generate a 30-word tongue twister (with or without subword generation), whilst Baichuan takes 10-15 seconds when generating full words only, and 30-100+ seconds when allowing subwords, on a consumer CPU (i5 9600k). We posit future work on the parallelization of elements of PACD to be more compute-efficient and take advantage of the GPU.
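Under the same assumptions as the earlier sketch, these settings could be wired into a simple driver loop roughly as follows (GPT-2 shown; Baichuan would be loaded analogously, with 8-bit quantization):

```python
# Hypothetical driver loop wiring the hyperparameters above into the PACD sketch.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

topic = "fun"
prompt = f"Generate a tongue twister on the topic of '{topic}'. "
ph1, ph2 = pick_target_phonemes(topic)

generated = []
while len(generated) < 30:                               # max_length = 30 words
    word = next_word(model, tokenizer, prompt + " ".join(generated), generated,
                     ph1, ph2, function_window=1, min_word_length=3,
                     max_repetition=1, top_k=2500)
    if word is None:                                      # vocabulary exhausted for this step
        break
    generated.append(word.lower())

print(" ".join(generated))
```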

8 Results (PACD)

We perform automatic evaluation in the same manner as §6.1, and report both reference-based (BLEU/ROUGE/BERTScore) and unreferenced metrics (Init-PO/PO/iPED/oPED and the readability metric suite). We additionally perform human evaluation in the same manner as §6.2 and with the same evaluators, following the protocol for the topic-to-twister task setting. Each evaluator is presented with 20 examples (using the same inputs as in §6.2) from base GPT-2 and Baichuan with the addition of PACD, and from finetuned GPT-2 (13k) and Baichuan (13k) with and without PACD.

8.1 Automatic Evaluation

Table 14. Topic-to-twister referenced metrics for models without PACD (-w/o), with PACD using full words only (-w), and with PACD including subword generation (-ws).

Model | B-1↑ | B-2↑ | B-3↑ | B-4↑ | Ro-1↑ | Ro-2↑ | Ro-L↑
GPT-2 (13k) -w/o | 0.0440 | 0.0152 | 0.0061 | 0.0025 | 5.9412 | 0.6024 | 5.5890
GPT-2 (13k) -w | 0.0832 | 0.0170 | 0.0046 | 0.0013 | 8.6673 | 0.9067 | 7.5011
GPT-2 (13k) -ws | 0.0584 | 0.0045 | 0.0003 | 0.0000 | 0.0089 | 0.0000 | 0.0792
GPT-2 -w | 0.0759 | 0.0142 | 0.0032 | 0.0010 | 8.0608 | 0.6403 | 6.8765
GPT-2 -ws | 0.0594 | 0.0059 | 0.0009 | 0.0001 | 0.0940 | 0.0121 | 0.0790
Baichuan (13k) -w/o | 0.0463 | 0.0227 | 0.0131 | 0.0080 | 6.2215 | 1.3878 | 5.9212
Baichuan (13k) -w | 0.0493 | 0.0041 | 0.0004 | 0.0000 | 0.0846 | 0.0081 | 0.0677
Baichuan (13k) -ws | 0.0525 | 0.0051 | 0.0006 | 0.0001 | 0.0940 | 0.0106 | 0.0742
Baichuan -w | 0.0547 | 0.0054 | 0.0009 | 0.0002 | 0.0885 | 0.0109 | 0.0750
Baichuan -ws | 0.0600 | 0.0080 | 0.0019 | 0.0005 | 0.1010 | 0.0175 | 0.0846

Table 15. Topic-to-twister unreferenced metrics for models with and without the PACD module.

Model | BS-P↑ | BS-R↑ | BS-F1↑ | IPO↓ | PO↓ | iPED↓ | oPED↓ | Re-D | Re-F | Re-G | Re-A
GPT-2 (13k) -w/o | 0.7748 | 0.8322 | 0.8021 | 0.1261 | 0.0859 | 4.2671 | 5.8985 | 12.06 | 13.46 | 15.21 | 16.64
GPT-2 (13k) -w | 0.8056 | 0.8207 | 0.8129 | 0.2305 | 0.2317 | 3.0376 | 5.8930 | 13.88 | 14.58 | 17.58 | 18.77
GPT-2 (13k) -ws | 0.8018 | 0.8267 | 0.8139 | 0.1724 | 0.2287 | 1.7252 | 5.7686 | 11.60 | 12.24 | 15.59 | 17.20
GPT-2 -w | 0.7986 | 0.8208 | 0.8094 | 0.1689 | 0.2110 | 1.6529 | 5.7699 | 11.36 | 12.66 | 15.76 | 15.91
GPT-2 -ws | 0.8119 | 0.8258 | 0.8186 | 0.2469 | 0.2614 | 3.1642 | 5.8991 | 9.02 | 10.45 | 14.16 | 13.65
Baichuan (13k) -w/o | 0.9442 | 0.9411 | 0.9425 | 0.4908 | 0.4236 | 4.4789 | 5.9771 | 9.98 | 5.60 | 7.15 | 4.62
Baichuan (13k) -w | 0.7915 | 0.8202 | 0.8053 | 0.1629 | 0.2416 | 1.4324 | 5.6876 | 10.72 | 11.18 | 13.52 | 15.18
Baichuan (13k) -ws | 0.7887 | 0.8215 | 0.0742 | 0.1512 | 0.2194 | 1.2898 | 5.6708 | 11.95 | 12.59 | 14.82 | 17.21
Baichuan -w | 0.8014 | 0.8204 | 0.8106 | 0.2102 | 0.2461 | 2.4449 | 5.8094 | 9.08 | 10.55 | 14.26 | 14.25
Baichuan -ws | 0.7984 | 0.8217 | 0.8096 | 0.1992 | 0.2339 | 2.3284 | 5.8016 | 9.89 | 11.37 | 14.86 | 15.38
Brown Corpus Prose | - | - | - | 0.4870 | 0.2275 | 5.1431 | 5.9561 | 11.09 | 13.15 | 15.56 | 16.60

The results of the automatic evaluation for the constrained decoding approach (PACD) are presented in Table 14 for referenced metrics and Table 15 for unreferenced metrics. Firstly, for GPT-2, regarding the reference-based metrics, we surprisingly see finetuned GPT-2 with the addition of our constrained decoding module (GPT-2 (13k) -w) outperform the standard finetuned model (GPT-2 (13k) -w/o) on B-1 and B-2, in addition to all ROUGE-based metrics (Ro-1, Ro-2, and Ro-L). The exception here is the higher-order BLEU measures, B-3 and B-4, where the unconstrained model achieves higher overlap. Base GPT-2 also benefits from the addition of our PACD module (GPT-2 -w), outperforming the unconstrained finetuned model on the recall-based ROUGE metrics. Observing the unreferenced results in Table 15, performance ordering varies for the BERTScore semantic metrics (BS-P, BS-R, and BS-F1), with the unconstrained finetuned model (GPT-2 (13k) -w/o) trading places with the constrained equivalents (GPT-2 (13k) -w and -ws) as the most performant. However, base GPT-2 with the addition of PACD (GPT-2 -w) consistently comes in second place on these metrics. Furthermore, we see the original finetuned unconstrained model perform the best when considering the original naive phoneme-based metrics IPO and PO, whilst the un-finetuned yet constrained model, GPT-2 -w, places second. It is here that the newly presented, less naive phoneme-based metrics iPED and oPED demonstrate their usefulness. For instance, whilst finetuned GPT-2 without any constraints (GPT-2 (13k) -w/o) performed best on the naive metrics (IPO/PO), it performed the worst on the more linguistically informed metrics (iPED/oPED). This is because good tongue twisters exploit the relationships between similar sounds: IPO/PO penalize these transitions as low quality, whilst our informed metrics reflect favorably on such transitions, penalizing them less than transitions between weakly related phonemes.

When looking at Baichuan, we see that the unconstrained finetuned model is the highest overall scorer on B-2, B-3, B-4, and Ro-2, as well as all BERTScore measures (BS-P, BS-R, and BS-F1). Interestingly, when analyzing the impact of the addition of PACD, we see the non-finetuned Baichuan with subword generation enabled (Baichuan -ws) outperform all other versions of Baichuan that use PACD, even without being finetuned on the specific style of text we are aiming to generate (however, scores are low overall). Regarding the phonemic metrics, we again see the benefit of PACD in reducing the scores on all phoneme-based metrics by successfully increasing sound overlap. Additionally, we see finetuned Baichuan with the addition of PACD (Baichuan (13k) -w and -ws) outperform the non-finetuned models on the phonetic metrics. Finally, the addition of PACD to Baichuan can be seen to lead to a significant increase in readability scores (i.e., an increase in reading difficulty). However, these scores still remain below those of the formal non-literary text of the Brown Corpus.

These results demonstrate the effectiveness of our constrained decoding approach, which takes into consideration which word-initial phonemes are best to center generation around in order to encourage mispronunciation. Additionally, we observe broadly similar performance between the full-word and subword versions of PACD.
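The following is a minimal sketch of the kind of phoneme-aware filtering PACD performs, using Hugging Face GPT-2. For brevity it approximates the "word-initial phoneme" check with word-initial letters and only permits single-token whole words plus a small function-word allow-list; the actual module operates over phonemic transcriptions with a fuller cascade of filters and an optional subword loop.

```python
# Minimal sketch of a PACD-style constrained decoding loop with Hugging Face GPT-2.
# The initial-letter check and the allow-list below are illustrative simplifications.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ALLOWED_INITIALS = {"b", "p"}                                        # stand-in for an allowed phoneme pair
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "for"}  # small grammatical allow-list

def allowed(token_text: str) -> bool:
    word = token_text.strip().lower()
    return bool(word) and (word in FUNCTION_WORDS or word[0] in ALLOWED_INITIALS)

@torch.no_grad()
def constrained_generate(prompt: str, max_new_words: int = 15) -> str:
    ids = tokenizer.encode(prompt, return_tensors="pt")
    # Pre-compute which vocabulary entries begin a new word and pass the filter.
    keep = torch.tensor([
        tokenizer.decode([i]).startswith(" ") and allowed(tokenizer.decode([i]))
        for i in range(len(tokenizer))
    ])
    for _ in range(max_new_words):
        logits = model(ids).logits[0, -1]
        logits[~keep] = float("-inf")                  # mask disallowed continuations
        ids = torch.cat([ids, logits.argmax().view(1, 1)], dim=-1)
    return tokenizer.decode(ids[0])

print(constrained_generate("Generate a tongue twister about a rural brewery:"))
```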

8.2 Human Evaluation

Constrained Topic-to-Twister
Score (1 to 5) | GPT-2-13k -w/o | GPT-2-13k -w | GPT-2 -w | Baichuan-13k -w/o | Baichuan-13k -w | Baichuan -w
Relevance | 2.16*** | 2.44*** | 2.42*** | 3.04** | 2.39** | 2.41**
Articulation | 2.80*** | 3.85*** | 3.69*** | 3.21** | 3.58*** | 3.28**
Fluency | 2.66*** | 2.39** | 3.13*** | 2.24** | 1.84** | 1.88**
Coherence | 2.17** | 2.00** | 3.41*** | 2.09** | 1.68** | 1.64**
Entertainment | 2.00*** | 2.14** | 2.97** | 1.81** | 1.67** | 1.64**
Overall | 1.72*** | 2.00** | 2.91*** | 2.09** | 1.98** | 1.89**

Human evaluation is performed identically to the evaluation reported in §6.2 (and with the same evaluators). Importantly, however, because we enforce a 30-word output length, we ask evaluators not to penalize generations on the "Fluency" criterion for being cut off prematurely. Additionally, due to the similarity of the outputs with and without sub-word generation for PACD (see §9), we only perform human evaluation on the outputs of PACD with subword generation disabled, to minimize evaluator fatigue and potential acquiescence bias from seeing similar results for either method. The results of human evaluation on outputs using our constrained generation PACD module are presented in Table 16. We additionally perform human evaluation on GPT-2 and Baichuan trained on 13k samples (the former of which was excluded from §6.2) in order to have a point of comparison without the addition of PACD (referred to as GPT-2-13k -w/o and Baichuan-13k -w/o, respectively).

Regarding GPT-2, we see a clear benefit from the addition of our constrained decoding module. Across all evaluation criteria, either base (i.e., vanilla) or finetuned GPT-2 receives the highest scores when the PACD module is applied. However, finetuned GPT-2 without PACD (GPT-2-13k -w/o) does outperform the equivalent model with PACD enabled on the criteria of Fluency and Coherence, which evaluate grammar and semantics, respectively. One explanation is that GPT-2 (overall) has been shown to perform poorly in the topic-to-twister setting when considering coherent and grammatical outputs (see also §9), so the addition of even more restrictive decoding rules brought about by PACD is slightly detrimental to the general quality of the output. On the other hand, non-finetuned language models are primarily designed to output standard prose that is already grammatical and sensible, so the addition of the PACD decoding module does not damage the overall readability of the outputs too severely. Importantly, however, we do not analyze the performance of base GPT-2 without the PACD decoding module, as GPT-2's zero-shot performance is poor, making this an uninformative point of comparison. Interestingly, base GPT-2 with PACD (GPT-2 -w) outperforms the finetuned models both with and without the additional decoding module, suggesting that the most desirable approach may be "train or constrain", rather than "train and constrain", as this allows phoneme-level control without sacrificing grammaticality.

In contrast to GPT-2, for Baichuan we see that human ratings decrease when moving from standalone finetuned Baichuan (Baichuan-13k -w/o) to either of the versions containing PACD. This, however, is not the case for the Articulation criterion, which increases for both PACD versions of Baichuan (i.e., Baichuan-13k -w and -ws). This suggests a possible trade-off when adding constrained decoding to existing LMs: larger models suffer from the additional control exerted over their outputs, whilst smaller models can be guided towards desirable characteristics of the target language form, with the constraints complementing their reduced knowledge capacity during fine-tuning and zero-shot inference.

In conclusion, a significant finding from the human evaluation results is that the PACD module yields a substantial increase on the Difficulty of Articulation criterion (named "Articulation" in Table 16). This indicates the success of our constraint cascade in selecting phoneme pairs that contribute to maximal levels of phoneme-level confusion, and therefore mispronunciation (i.e., the key to a successful tongue twister).

9 Case Studies & Qualitative Analysis

9.1 Topic-to-Twister

Input: "rural brewery"
Golden: "Rory the warrior and Roger the worrier were reared wrongly in a rural brewery."
GPT-2-2k: "In the bustling city of Chattanooga, where Chattanooga’s vibrant craft beer scene thrives, The Chattanooga’s craft beer scene thrives, [With Chattanooga’s craft beer scene thriving, The Chattanooga’s craft beer scene thrives,]*3 With Chattanooga’s craft bee"
GPT-2-13k: "In the bustling town of Chattanooga, where the townspeople reside, The Chattanooga’s taproom is filled with tapestries of flavors. Chattanooga’s taproom is filled with the finest tapestry, From the tapestry of Chattanooga’s charm, it’s a sight to see. Chattanooga’s taproom is filled with the finest tapestry, From the tapestry of Chattanooga’s charm, it’s a sight to see. Chattanooga’s taproom is filled with the finest tapes"
DialoGPT-2k: "In the bustling city of Bethlehem, a bustling brewery was bustling with craftsmanship and craftsmanship was its main attraction. It was also the place to find the best beer in the city. It was also the place to find the best breweries in the city. It was also the place to find the best breweries in the country. In the bustling city, a bustling brewery was bustling with craftsmanship and craftsmanship. It was also the place to find the best beer in the city. It was als"
DialoGPT-13k: "In the bustling city of Tharrington, where the city’s charm is known, There’s a thriving tavern called Tharrington’s, Where the taverns thrive with a vibrant atmosphere. The tavern’s owner, a skilled craftsman, built a taproom that was truly grand, With a taproom full of flavors, it was truly a sight. The tavern’s atmosphere was so grand, it was like a vibrant symphony, With taverns and taverns, it was truly a sigh"
BART-2k: "In the bustling city of New Zealand, a New Zealander named New Zealanders gathered at the New Zealand’s New Zealand brewery, where New Zealandians gathered to discuss the New Zealander’s New Zealanders’ New Zealand heritage." \" \" \" \" " " " suscept " suscept ’ ’ char non"
BART-13k: "In the village of Vineland, where Vineland’s vineyard thrived, Vineland and Vineland thrived. Vineland flourished, as Vineland joined the scene, As Vineland added to the scene. Vinland’s vineyards, a vibrant scene, Where Vineland was the scene." " "̈ "̈ thwart " thwart thwarted"
Flan-T5-2k: "In the township of Westfield, a brewer named Westman brewed a brew in the townhouse, while Westfield’s breweries brewed brews in the townhouses of Westfield."
Flan-T5-13k: "In the quaint town of Greenland, where gin and lager are the mainstays, Craftsmen from Greenland and Greenland gather to create a gin and lager masterpiece. The gin and gin are renowned for their exceptional craftsmanship, While the gin and gin are renowned for their exceptional taste. Greenland’s gin and gin are renowned for their exceptional"
ByT5-2k: "In the village’s villages, the village’s villagers visit the village’s villages, while Victorian villagers visit the village’s villages. The village’s villagers visit the village’s villages, while Victorian villagers visit the village’s villages. The village’s villagers visit the village’s villages, and the farmhouses of Victorian villages and villages in Victorian villages. The village’s villagers visit the village’s villages, while Victorian villagers visit the village’s villages. The village’s villagers visit the village’s villages, all from Victorian villages, and Victorian villages are"
ByT5-13k: "In the vicinity of the village, the farmers from the farmhouse filled the villages with their farmhouses and farmsteads, while the farmers from the farmhouse farms filled the villages with their farmhouses."
Baichuan-13k: "[Breweries brew beer,]"*18
ChatGPT: "In a rural brewery, they brewed a brew so unique, Where hops happily hopped and barley barreled sleek. With malty melodies and yeasty jigs, they thrived, But rural brewery’s ruckus rhythm rarely arrived. The rural brewery’s brewmaster was a friendly fellow, Fermenting fizzy flavors, flowing with yellow. With each sip, the patrons picked passionately, Pouring pints, pretty pleased, with plenty of glee. But with the twist of tongue and tricky tales, The rural brewery’s name became like snails. Try as they might, lips tangled and twisted, Rural brewery remained quite disjointed and resisted. The rural brewery’s brand built on rustic charm, But their tongue twister seemed to cause alarm. With each attempt, patrons tangled like vines, Rural brewery’s name spinning in their minds. So, remember to take your time and don’t hurry, When uttering words about a rural brewery. Though the tongue may twist and the speech may waver, The taste of their brew will make you a believer!"

Table 17 presents example generations using 2k and 13k training samples to investigate the different benefits and drawbacks of each model, as well as the effects of variable levels of training data, in a qualitative manner. We additionally include Baichuan trained on 13k samples, and ChatGPT (GPT-3.5-Turbo) used zero-shot. For these examples, we randomly select an input topic from the first 100 samples in our test set.

GPT-2

Firstly, considering the generations from GPT-2, the generation from the 2k training sample model does not demonstrate any clear phonetic patterns, with no prominent sound repetition present. However, some orthographic repetition can be observed, with "Chattanooga", "craft" and "scene" presenting 3 ways in which the graphemes <c> and <s> appear together (as <c> is often realized as /s/, like <s>, in words such as "celery"). On the other hand, the 13k training sample generation presents a much better tongue twister, demonstrating repetition of /t/ in "town of Chattanooga[…] Chattanooga’s taproom is filled with tapestries". This is additionally complemented by the repetition of the affricate /tʃ/ in the phrase "Chattanooga’s charm". Whilst these tongue twisters appear quite successful, it is hard to ignore the fact that in both instances the models have resorted to the repetition of very similar clauses/sentences. All in all, however, it would appear that GPT-2-13k has successfully generated a tongue twister that is largely grammatically coherent, demonstrates phonetic overlap, and is semantically related to the input (even if the rural nature of Chattanooga, a city in Tennessee, may be up for debate). Consequently, this first instance lends support to the proposed benefit of extended quantities of training data for the task of tongue twister generation as provided by our extension of TwistList 1.0 into TwistList 2.0, with GPT-2-13k demonstrating better performance than GPT-2-2k. This finding also supports our claim that the proposed TwisterLister pipeline creates high-quality tongue twisters.

DialoGPT

Regarding DialoGPT in the topic-to-twister setting, we see high levels of redundant repetition in the 2k training sample output, such as "bustling with craftsmanship and craftsmanship". However, some elements are clearly tongue twister-esque, such as the initial "In the bustling city of Bethlehem, a bustling brewery was bustling", exploiting the voiced bilabial plosive /b/. In the 13k training sample output, we curiously observe "Bethlehem" swapped for "Tharrington", and the exploitation of the voiceless dental fricative /θ/ also found in "thriving" and (of course) "thrive". Overall, DialoGPT demonstrates a more meaningful narrative-like generation with more training data, in addition to a change in primary phonemes, potentially arising from the distributional properties of different phonemes across the larger training split.

BART

BART, on the other hand, demonstrates some less than desirable traits in the 2k setting, using extreme levels of repetition of "New Zealand" (and morphological variants), resulting in a poor-quality output that only clearly represents the input topic via the inclusion of the word "breweries". Additionally, BART resorts to generating nonsense output towards the end, producing myriad punctuation marks and random words. In the case of 13k training samples, the output is less overtly repetitive and is more coherent (though still far from perfect), exploiting /v/ and /θ/ (a voiced labiodental fricative and a voiceless dental fricative), for example, "In the village of Vineland, where Vineland’s vineyard thrived, Vineland and Vineland thrived." However, the generation again breaks down significantly towards the end, once more producing nonsensical output (though phonetically consistent) with "thwart" and variants thereof.

Flan-T5

Flan-T5 generates the shortest output seen across the models for this input. The generated tongue twister in the example only appears to engage with the repetition of /b/ (rather than alternating it with a similar phoneme), but does so successfully, as in "[…] a brewer named Westman brewed a brew in the townhouse, while Westfield’s breweries brewed brews[…]". However, the semantic coherence of this output is lacking, with the discussion of 2 parallel events ("Westman, who is in Westfield, brewing, whilst breweries in Westfield also brew"). Additionally, the /b/ repetition is arrived at exclusively through the exploitation of morphological variants of "brew", suggesting that the signal picked up during training may be on a morphological level, rather than orthographic or phonemic/phonetic (as <b> alone does not constitute a morpheme). With additional training data, the generation length changes significantly for Flan-T5. In the generation from Flan-T5-13k, there is clear repetition of the voiced velar plosive /ɡ/ and its voiceless counterpart /k/, such as "Craftsmen from Greenland […] gather to create a gin and lager". However, unlike in the 2k example, the output here exhibits significant semantic redundancy, as seen in phrases such as "The gin and gin […] while the gin and gin".

ByT5

ByT5, the largest of the models we train on multiple splits (at 582M parameters), performs poorly with the 2k training split, with the output degrading into repetition. However, the repetition consists of full phrases and sentences rather than a single noun, and grammaticality is maintained throughout (albeit with a lack of coherence). The version trained on 13k samples remedies the repetition issue. Interestingly, this version also demonstrates a clearer phonemic pattern in the outputs, alternating between /v/ in "villager" (and words with the same root) and /f/ in "farmers" (and words with the same root), which are voiced/voiceless counterparts of each other, and therefore an ideal pattern for a tongue twister to exploit. This is also demonstrated in the 2k version, but only weakly, with one instance of an /f/-initial word, "farmhouses".

Baichuan

Baichuan, the largest model we train (exclusively on the 13k training split, with 7B parameters), performs rather poorly in the topic-to-twister setting. The output shown here consists exclusively of the single phrase "Breweries brew beer" repeated 18 times. Whilst this is a valid tongue twister in terms of relevance to the input, Baichuan very quickly gets stuck in a loop, degrading the quality of the overall output. This pattern is seen frequently across other Baichuan outputs.

ChatGPT

Finally, ChatGPT (GPT-3.5-Turbo) presents the longest tongue twister of the examples listed. However, this is an exception for this randomly selected generation, rather than the rule, as seen in the automatic evaluation results, where the average generation length for ChatGPT was 17.43 words. In terms of quality, ChatGPT excels at generating well-formed grammatical text, as shown in the example generation. Additionally, the generation exploits numerous different phonemic patterns, including repetition of /h/ ("… hops happily hopped"), /b/ ("In a rural brewery they brewed a brew[…]"), and /ɹ/ ("But rural brewery’s ruckus rhythm rarely arrived"). Additionally, one ability demonstrated by ChatGPT that is not exploited by any of the fine-tuned models (due to the fine-tuning focus) is the incorporation of additional literary techniques based on speech sounds, such as rhyme (e.g., "thrived"/"arrived" and "tales"/"snails"). The coherence level of the ChatGPT output is also notably high, with the output demonstrating a clear narrative that would engage readers. Consequently, we can see why ChatGPT may score poorly on phoneme-based metrics (cf. Table 8): it incorporates many words purely to support the grammar of the tongue twister, and exhibits low-level local repetition of a particular sound within a phrase rather than maintaining focus on the same sound for the entirety of the output.

Overall, most models appear to benefit from increased levels of training data in this example, particularly in producing more grammatical and coherent output. The only model for which this observation does not hold is Flan-T5, which shows no clear change in generation quality from 2k to 13k training samples. This is in some respects to be expected, as larger models are often able to abstract away language patterns from the same amount of training data more easily than smaller models (resulting in diminishing returns between training data quantity and model size). However, the smaller models demonstrate the utility of our TwistList 2.0 dataset in providing a sufficiently large increase in training data to yield tongue twister generations of demonstrably improved quality. This also shows that there is room to improve generated tongue twisters simply by increasing training data before more complex training paradigms become necessary.

9.2 Topic-to-Twister (PACD)

In contrast to the previous approach, our constrained decoding algorithm involves the random selection of an initial phoneme and its nearest neighbor, on which generation is then conditioned. Consequently, numerous tongue twisters can be generated for a single input, as long as the selected phoneme pairs differ each time. For this reason, we present an additional case study of model outputs using our Phoneme Aware Constrained Decoding (PACD) module.
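A minimal sketch of this phoneme-pair selection step is shown below; the nearest-neighbour table and the G2P lookup are hypothetical placeholders, whereas the module described in this article derives neighbours from phonemic edit distance.

```python
# Sketch: pick a word-initial phoneme from the input topic at random, then pair
# it with its closest confusable neighbour. Distances/neighbours here are toy values.
import random

# Toy nearest-neighbour mapping for a few consonants (illustrative only).
NEAREST_NEIGHBOUR = {"b": "p", "r": "w", "t": "d", "s": "z", "k": "g"}

# Hypothetical G2P lookup for the topic words' initial phonemes.
TOPIC_INITIALS = {"rural": "r", "brewery": "b"}

def select_phoneme_pair(topic_words: list[str]) -> tuple[str, str]:
    """Choose one topic word's initial phoneme and its nearest neighbour."""
    anchor = random.choice([TOPIC_INITIALS[w] for w in topic_words])
    return anchor, NEAREST_NEIGHBOUR[anchor]

print(select_phoneme_pair(["rural", "brewery"]))   # e.g. ('b', 'p') or ('r', 'w')
```

Because a different pair can be drawn on each call, the same input topic can seed several distinct constrained generations, as the examples below illustrate.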

Input: "rural brewery"
Golden: "Rory the warrior and Roger the worrier were reared wrongly in a rural brewery."
GPT-2-13k: "In the bustling town of Chattanooga, where the townspeople reside, The Chattanooga’s taproom is filled with tapestries of flavors. Chattanooga’s taproom is filled with the finest tapestry, From the tapestry of Chattanooga’s charm, it’s a sight to see. Chattanooga’s taproom is filled with the finest tapestry, From the tapestry of Chattanooga’s charm, it’s a sight to see. Chattanooga’s taproom is filled with the finest tapes"
GPT-2-13k -w: "Brewers brew beer in the barn brewing process before brewers bring their beers to the bar boys breweries prior to being brewed beforehand but before being bottled previously because of"
"With their rust wood working wonders with their water works while watering wheat wine wines within their win ware warehouses where they are waiting for the weather to warm up"
GPT-2-13k -ws: "Brewers brew beer in the breweries belonging to both baker boys and bar patrons but the bartender prefers the beers brewed by bud brothers and bro pal pit bulls before"
"With a rust wood wind ward rip weed from the woods while watering the water works well within the walls of the warehouse where the work room is wall rolling"
GPT-2 -w: "But before you do begin building your own brewery please bear in place that you are not a brewer but a person who has been brewing for a bit because"
"What is the reason for this one with the word rural rather than regional where it is written when it is referring to the region of the world where the"
GPT-2 -ws: "But I believe that the best place to begin is with the basics before you proceed beyond that point because the more you practice brewing beer properly both in the"
"What is the reason for this request with the word rural when you are referring to a region that is not a regionally relevant one where there is a real"
Baichuan-13k: "[Breweries brew beer]"*18
Baichuan-13k -w: "Try to drink tea in the town during dinner time drinking delicious drinks from the tap tasting different types of tasty treats trying to taste the differ den between two"
"What is the relations with rural roads running through resident rial ways within regional regions where rivers run through rocky rid rig road way runs through region rich river routes"
Baichuan-13k -ws: "Try to drink tea in the township of taverna towns during the daytime drinking tour dates downtown drinker drinks dry den dutch town tavern a day time tasting tours tourist"
"What is the relationship between the rural region and the regional railway route running through rut rum runners runs through rocky regions with rich resources while retaining its rustic roots"
Baichuan -w: "Try to twist the tongue as tight together as time does during the day today due to the temper ter differ ten times too different degrees depending on the direction"
"Write a one word response to each of the words below with a relevant rural reference within the reply which is not a repeated ref rain water recycling wind renewable"
Baichuan -ws: "Try to twist the tongue as tightly as desired during the time of delivery trying to deliver the twistiest tong toe track ter day till the target is delivered tight"
"Write a one word response to each of the words below with a relevant rural reference within the reply which is not a repeated ref rain water recycling rainwater runoff"

GPT-2-13k -w/o

As discussed in §9.1, GPT-2 finetuned on 13k samples without the addition of our constrained decoding module demonstrates repetition of /t/ in "town of Chattanooga[…] Chattanooga’s taproom is filled with tapestries". This is additionally complemented by the repetition of the affricate /tʃ/ in "Chattanooga’s charm".

GPT-2-13k -w/-ws

On the other hand, finetuned GPT-2 with the additional constraints imposed by our PACD module demonstrates several different characteristics. Firstly, as with the non-finetuned model, our constrained generations are limited in length to 30 tokens, making all generations of equal length, and consequently shorter than the output of GPT-2 without these constraints. Regarding the tongue twisters themselves, each example can be seen to relate to the input keywords, but the relations vary between direct and abstract. For example, the first generation, which enforced the selection of tokens starting with /b/ or /p/, demonstrates a clear direct reference to the input word "brewery", something afforded to the model because the selected initial phonemes match one of the word-initial phonemes of the input (which is the approach we take in §8.2).

Lastly, the generations exploiting /ɹ/ and /w/ are similarly abstract (but perhaps to a lesser extent), referring to relevant words such as "water", "wine", and "wood working" (the latter activity being more expected in a rural locale). Regarding grammaticality, the /b, p/ example is largely grammatical (if "boys" had a possessive apostrophe, creating the noun phrase "the bar boys’ breweries") but is cut off at 30 tokens, resulting in an incomplete sentence. Semantically, however, it is hard to follow due to the high number of temporal adverbs used. Likewise, the /ɹ, w/ generation is mostly grammatical, with the presence of numerous relative clauses leading to difficulty in understanding (though this does require some liberty in accepting that "wheat wine wines" and "win ware warehouses" are allowable compound nouns). With the subword loop turned on for GPT-2 using PACD, we see outputs similar in style to the full-word version. However, one difference we observe is a slight reduction in grammaticality towards the end of the /b, p/ version, with the phrase "and bro pal pit bulls before", which is difficult to parse. Similar effects can be seen with the /ɹ, w/ version, where the outputs are similar in word choice (as expected, given that some whole words will form the stem of longer words in the subword version) and the overall result is perhaps slightly more difficult to parse grammatically.

GPT-2 -w/-ws

Regarding non-finetuned base GPT-2 with the addition of our module, all generations demonstrate similar levels of grammaticality to those of our finetuned model. On the other hand, the generations can be considered less literary, with the /b, p/ example suggesting the model is generating instructional text in "But before you do begin building your own brewery[…]". As with previously discussed generations using the PACD module, all of these generations are hindered by the 30-token limit, with examples ending in "because", "his desire to try" (which suggests the verb "try" will take an additional argument), and "the". Overall, similar results are seen with the subword loop enabled (-ws), resulting in variations of the same output.

Baichuan-13k

As discussed previously, finetuned Baichuan without the addition of PACD produces a valid 3-word tongue twister phrase that achieves alliteration of /b/, but quickly falls into the degenerate pattern of repeating this phrase 18 times, rather than continuing to extend the tongue twister in a unique and entertaining fashion.

Baichuan-13k -w/-ws

With the addition of full-word PACD to the finetuned version of Baichuan, we successfully avoid the repetition trap (due to repetition restrictions at decoding time). For the first example, the enforcement of /t, d/ is evident in grammatical phrases such as "Try to drink tea in the town during dinner time…", and is continued throughout. Overall, the first generation is grammatical, only demonstrating clear degradation towards the end with the generation of "differ den", which hinders fluency and coherence. On the other hand, in the /ɹ, w/ example, the sound overlap is present, but the coherence and fluency of the output suffer much earlier, with sequences such as "rivers run through rocky rid rig road way runs through region rich river routes". An overall pattern is that generation quality begins to drop after a single suboptimal token is selected, due to the detrimental impact it has on subsequent token predictions. When comparing with subword-enabled PACD (-ws), we see the first difference at the 7th word of the /t, d/ example, where "town" has been extended to "township", causing the remainder of the generation to diverge from that of the full-word version of PACD; this is similarly reflected in the /ɹ, w/ example as "relations" becomes "relationship". Overall, due to the constraints in place, both present difficult-to-articulate sequences, with the quality difference between the subword and full-word versions being minimal and subjective.

Baichuan -w/-ws

Regarding base Baichuan with the addition of PACD, we see a lack of topical relevance in the /t, d/ examples for both full-word and subword versions, with the input of "rural brewery" not being reflected in the semantics of the output (although the generation has noted the request for a tongue twister in the starting stem, and references it directly). This is also seen in the /ɹ, w/ versions, but to a lesser extent, as the topic word "rural" starts with an allowable phoneme, /ɹ/, leading it to appear in the phrase "rural reference". In this example, therefore, we see that finetuned Baichuan is better able to respect the desired topic (due to being trained on the topic-to-twister setting), whilst the base model produces overly generic text, albeit with repetitive phonology.

Overall, the examples demonstrate the effectiveness of the simple constrained decoding module that constitutes PACD. In the examples generated with this module, the ability to control token repetition means we avoid the pitfalls of standard finetuned models, which frequently default to phrase-level repetition, ensuring that tongue twisters generated with PACD use a wide range of vocabulary. Additionally, because the constraints are phoneme-based, we can exercise control over the sounds that are present even when permitting the generation of a small set of function words to facilitate grammatical correctness. To this end, similar to the TwisterLister pipeline (§4.1), PACD allows us to ensure phoneme overlap within the output, rather than relying on this signal being picked up during fine-tuning. The case studies additionally demonstrate that rule-based decoding with a pre-trained language model can result in tongue twisters that are comparable to those of finetuned models, and that future work in the area should make extensive use of constrained decoding approaches such as PACD, rather than exclusively finetuning on input-output pairs.

9.3 Style-Transfer

We now qualitatively investigate the generations from our additional style-transfer task setting. Table 19 presents example generations from models trained on 2k and 13k training samples in order to investigate the different benefits and drawbacks of each model, as well as the effects of variable levels of training data, in a qualitative manner. We further include Baichuan trained on 13k samples, and zero-shot ChatGPT (GPT-3.5-Turbo).

As demonstrated in the Input and Gold Standard fields, the generated paraphrase is not easily identifiable as a tongue twister and is, therefore, a prime example of a successful paraphrase for enabling style-transfer (the twisted style has been removed in order to have parallel data that helps identify the desired style to transfer from the source to the target). The gold standard reference from which the paraphrase was derived exploits the common articulation difficulty and confusion between the rhotic /ɹ/ and the lateral /l/ Georgievska-Jancheska (2019), encouraging metathesis (the accidental transposition of speech sounds across words).

Input: "The small crimson truck traveled along Limuru street."
Golden: "The little red lorry went down Limuru road."
GPT-2-2k: "The small carriages of the small carriages, with their small wheels, were moving along Limuru’s road."The small yellow truck traveled along Limuru’s road. The yellow cars of the yellow cars, with their yellow wheels, were traveling along Limuru’s road.""The yellow yellow truck traveled along Limuru’s road." The yellow cars of the yellow cars, with their yellow wheels, were traveling along Limuru’s road.""The yellow yellow truck traveled along Limuru’s road"
GPT-2-13k: "The small red truck, with its small red tires, was a sight to see. The small red truck, with its small red tires, was a sight to see. The small red truck, with its small red tires, was a sight to see. The small red truck, with its small red tires, was a sight to see. The small red truck, with its small red tires, was a sight to see. The small red truck, with its small red tires, was a sight"
DialoGPT-2k: "It passed through the town of Limburg, where the small town of Limburg’s inhabitants were located. It passed through the town of Limburg, where the town of Limburg’s inhabitants were located. It passed through the town of Limburg, where the town of Limburg’s inhabitants were located. It passed through the town of Limburg, where the town of Limburg’s inhabitants were located. It passed through the town of Limburg, where the town of Limburg’s inhabitants were located"
DialoGPT-13k: "It passed by the small houses and the small houses’ small houses, while the small houses’ small houses and the small houses’ small houses were observed."The minuscule minivan passed by the mansions and mansions’ mansions, while the mansions’ mansions and the mansions’ mansions were seen." Improved version: The minuscule minivan passed by the mansions and mansions’ mansions, while the mansions’ mansions and the mansions"
BART-2k: "The limo journeyed along Limuru thoroughfare." " churn " churned B hollow f un ümbled inc ops op edy - standard r unilaterally worsened thumb pl thrust res ed churn"
BART-13k: ""The little red limousine traveled down Limuru thoroughfare." \" \" \" \"\" \" \"\" \" \uff \uff \" \" \" L " t \", ’ churn M L \" " uff \churn"
Flan-T5-2k: "The yellow limo limped along Limuru limoway."
Flan-T5-13k: "The little lilac lorry limped down Limuru lanes."
ByT5-2k: "The little limuru truck looked along Limuru street."
ByT5-13k: "The little red truck wandered along Limuru thoroughfare."
Baichuan-13k: "Limuru’s little red truck drove along Limuru road."
ChatGPT: "Truck crimson, small, along Limuru street did travel."

GPT-2

Firstly, the main thing to notice in either generation from GPT-2 is the excessive length. This is particularly startling given the nature of style-transfer, where the ideal tongue twister version of a text should be of similar length to the original. However, upon further inspection, it is clear that GPT-2 resorts to blatant repetition of very similar content to make up the difference in length. Consequently, if we take only the first sentence of the 2k version as the output, "The small carriages of the small carriages, with their small wheels, were moving along Limuru’s road.", the output can more clearly be seen as a style-transferred version of the input. However, this generation is still poor, with little clear phoneme-level repetition (excluding repetition of the same word, such as "small" in the quoted passage) and much semantic redundancy. Regarding the 13k training example output, GPT-2 yet again exhibits large levels of repetition, but here the repetition is purer, being a verbatim repetition of the initial sentence numerous times. In addition, the output is free of grammatical errors, and there is clear use of the sibilant /s/ in "The small red truck, with its small red tires, was a sight to see". Additionally, the input semantics of a small truck have been maintained, but the location (Limuru, a town in Kenya) has been lost.

DialoGPT

On the other hand, the 2k version of the dialogue-finetuned GPT-2, DialoGPT, lacks semantic coherence due to the deictic "it", which is never resolved by a later (cataphoric) referent, leaving "it" ambiguous. A potential attempt at phoneme repetition is present with "Limburg" and "located", but this is tenuous. Overall, the generation with 2k training examples does not clearly present a tongue twister. The extended 13k example, however, demonstrably resembles a tongue twister, achieving repetition of /s/ (e.g., "small houses and the small houses’ small houses") and /m/ (e.g., "The minuscule minivan passed by the mansions and mansions’ mansions"). However, this generation also exhibits a strange structure, including "Improved version" as part of the output. If only a subset of this generation is considered, such as "The minuscule minivan passed by the mansions and mansions’ mansions", the output constitutes a high-quality tongue twister. Whilst it may be semantically bizarre to discuss the ownership of mansions by other mansions, it is not grammatically invalid, and human-authored tongue twisters also frequently convey unusual semantics to heighten their strangeness.

BART

Unlike the GPT-2-based models, BART produces style-transferred versions of the input that are much closer in length to the original, even without excluding sentence-level repetition. In the 2k example, the initial sentence "The limo journeyed along Limuru thoroughfare" is a grammatically and semantically valid output, but does not resemble a tongue twister beyond the lateral /l/ in both "limo" and "Limuru". However, the sentence following this consists primarily of noise. On the other hand, with the increase in training data comes an improvement in tongue twister quality, with "The little red limousine traveled down Limuru thoroughfare" appearing to exploit the articulatory similarity between /l/ and /ɹ/ without relying on single-word repetition. Again, however, the output devolves into noise towards the end, consisting of various punctuation marks and subwords.

Flan-T5

Flan-T5 (alongside ByT5) presents the clearest paraphrases of the desired output, containing no unnecessary repetition or noise. In the 2k example, Flan-T5 repeats similar phonemes, exploiting the related phonetic categories of laterals and glides (or semivowels), to which /l/ and /j/ respectively belong: "The yellow limo limped along Limuru limoway." With more training examples, Flan-T5 presents a paraphrase that is more faithful to the original input, relying exclusively on the repetition of /l/: "The little lilac lorry limped down Limuru lanes." (assuming we take "lorry" to be more semantically related to "truck" than "limousine" is).

ByT5

ByT5 shows performance equivalent to Flan-T5 in this example, opting to alliterate different parts of speech than Flan-T5 but achieving a similar overall effect in the 2k example. The 13k version does not present alliteration as clearly as Flan-T5, but does alternate between the related phonemes /l, ɹ, w/ in "little red truck wandered".

Baichuan

In the style-transfer task setting, Baichuan performs much better than in the previous topic-to-twister task setting. Similar to other generations, Baichuan exploits the /ɹ/ and /l/ similarity with "Limuru’s little red truck drove along Limuru road", where every word of the output contains one of these sounds and alternation is frequent.

ChatGPT

Finally, ChatGPT in a zero-shot setting appears to strongly misinterpret the given prompt, outputting non-standard syntax such as swapping the order of an adjective and noun in "truck crimson" (reminiscent of the speaking style of Star Wars’ Yoda). Additionally, the ChatGPT generation does not appear to reflect a tongue twister in any clear sense. It is important to note, however, that this is not a common pattern in ChatGPT outputs, but rather the case for this randomly selected example.

Overall, as with the topic-to-twister setting, we see a clear benefit in the style-transfer task formulation from using more training data, with all models producing better quality outputs in terms of phonetic patterns, grammatical validity, and semantic coherence. An overarching theme, however, is that several models misinterpret the request to paraphrase a single sentence as a tongue twister, instead producing outputs that are very long. However, if edited in post (often simply by taking the first sentence), these generations are often still valid (if not perfect).

10 Conclusion

In this article, we have presented multiple novel contributions towards the development of more phonetically and phonologically aware models for the task of tongue twister generation. We presented a pipeline for the generation of tongue twisters at scale using large language models (TwisterLister) that encourages unique, non-derivative examples through the careful selection of a candidate vocabulary, and used it to develop a large dataset of machine-generated tongue twisters (TwistList 2.0). We then finetuned a series of smaller language models on the resulting dataset and observed that the topic-to-twister and style-transfer task settings for tongue twister generation differ in how much they benefit from additional training data, and that different models perform differently in the two task settings. These findings demonstrate a fundamental difference in the requirements of the two approaches and further motivate the need for additional training data as supplied by TwistList 2.0 through the TwisterLister pipeline. We additionally presented a novel algorithm (PACD) that implements hard lexical constraints based on the phonemic characteristics of words and can realize tongue twisters from a causal autoregressive language model by accessing the next-token predictions and applying a cascade of filters. We then extensively evaluated the generations from our proposed approaches, presenting both automatic and human evaluation. For the former, we additionally presented two novel metrics for measuring the sound complexity of a generated tongue twister based on the concept of phonemic edit distance (iPED and oPED). With these fine-tuned models and the constrained decoding module, we then provided an in-depth exploration of generation characteristics through case studies investigating the propensity of each model to generate high-quality tongue twisters, as well as the differences seen with the addition of more training data (moving away from reliance on automatic metrics that do not reveal fundamental qualitative differences). Overall, we find that straightforward fine-tuning of language models for tongue twister generation still has substantial room for improvement before meeting human-authored standards. However, we also demonstrate that simple constrained decoding approaches are able to generate better tongue twisters than finetuned models alone, particularly due to the phoneme-level awareness that allows more difficult-to-articulate sound combinations to be present in an output. We additionally envision the techniques and approaches presented herein being beneficial to the creative NLG community, particularly for the generation of phonemically conditioned language forms (e.g., poetry, lyrics, and puns), as a way of constraining token outputs whilst simultaneously taking advantage of the power of modern LLMs.

We hope to witness increased interest in the area of tongue twister generation, as well as in other niche areas of creative language generation that pique the interest of newcomers to the NLG domain, people from wider domains such as literature and (non-computational) linguistics, and the general public (for whom creative language generation may offer a more accessible and intriguing entry into the NLP and machine learning communities). In furthering work in this area, we believe reinforcement learning approaches may prove fruitful, as may incorporating a differentiable analogue of phonemic edit distance as an additional loss function to optimize. Furthermore, whilst we present new metrics (oPED/iPED), we encourage the development of more robust general metrics for tongue twisters and other forms of creative language that can adequately balance the requirements of being grammatical and exhibiting significant levels of sound repetition.

Appendix: Evaluation Rubric

Below is the evaluation rubric presented to human evaluators. The prompts have the following format: "[criterion description]\n Instruction: Generate a tongue twister relating to [input]\n Response: [output]\n Rate the response from 1 to 5:\n [rubric]", where [criterion description] refers to the first line of each criterion below (e.g., "Is the model proficient in…"), [input] is the input to the model (either a topic in the topic-to-twister setting or standard non-literary text in the style-transfer setting), [output] is an LM’s response to a given input topic/text, and [rubric] is the 5-point rating scale outlined below.
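For clarity, the template above can be assembled programmatically as in the following sketch; the helper name and the abbreviated rubric string are illustrative rather than taken from our evaluation code.

```python
# Sketch: assembling one evaluation prompt from the template described above.
def build_prompt(criterion_description: str, topic: str, output: str, rubric: str) -> str:
    return (
        f"{criterion_description}\n"
        f"Instruction: Generate a tongue twister relating to {topic}\n"
        f"Response: {output}\n"
        "Rate the response from 1 to 5:\n"
        f"{rubric}"
    )

prompt = build_prompt(
    "Is the model proficient in generating text that is relevant to the input topic?",
    "rural brewery",
    "Brewers brew beer in the barn...",
    "1. ... 5. ...",  # the five-point rubric for the chosen criterion, abbreviated here
)
print(prompt)
```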

Relevance
Is the model proficient in generating text that is relevant to the input topic?
1. The model completely ignores the input topic and generates irrelevant text.
2. The model generates text that is mostly irrelevant to the input topic, with minimal and unclear association.
3. The model generates text that is partially relevant to the input topic, but the association is weak and inconsistent.
4. The model generates text that is mostly relevant to the input topic, with clear association but occasional lapses.
5. The model excels in generating relevant text, where the responses are consistently on topic and the association is clear.

Difficulty of Articulation
Is the model proficient in generating text that is difficult to pronounce due to alliteration and phonetic complexity?
1. The model generates text that is no harder to pronounce and articulate than standard writing.
2. The model generates text that is slightly more challenging to say than standard writing, demonstrating some simple techniques such as alliteration.
3. The model generates text that is somewhat difficult to say but clearly exhibits techniques such as alliteration.
4. The model generates text that is generally difficult to say, with techniques such as alliteration, but also alternating between similar sounds.
5. The model generates text that is highly phonetically complex and difficult to say, consistently exploiting repetition of closely related sounds and alliteration.

Fluency
Is the model proficient in generating grammatical and well-formed text?
1. The model produces text with no grammatical phrases or spans.
2. The model generates text that is largely ungrammatical but with some grammatically valid sequences.
3. The model generates text that contains grammatically valid sequences but overall is difficult to parse.
4. The model generates text that is generally grammatical and well-formed, with minor errors or awkward phrasing.
5. The model produces text that is grammatically correct and well-formed, demonstrating a strong command of the English language.

Coherence
Is the model proficient in generating semantically coherent and logical text?
1. The model neglects to generate semantically coherent text, producing text that is nonsensical in meaning.
2. The model generates text that is mostly incoherent, with only occasional hints of logical meaning.
3. The model generates text that is partially coherent, but the text lacks logical structure and consistency.
4. The model generates text that is generally coherent, with a logical structure and clear meaning, though minor inconsistencies may be present.
5. The model excels in generating semantically coherent text, where the text is logically structured and maintains a clear and consistent meaning.

Entertainment
Is the model proficient in generating text that a human reader would find entertaining or amusing?
1. The model demonstrates no creativity, either in the content or the structure of the text, resulting in uninteresting or unamusing outputs.
2. The model generates text with minimal creativity, resulting in outputs that are only slightly interesting or amusing.
3. The model generates text that is somewhat entertaining or amusing, but the creativity in content and structure is limited.
4. The model generates text that is generally entertaining and amusing, showing noticeable creativity in content and structure, though some outputs may be less engaging.
5. The model excels in creating entertaining and amusing text, demonstrating creativity in both content and structure and consistently producing engaging and enjoyable outputs.

Overall Quality
Is the model proficient in generating high quality English tongue twisters?
1. The model fails to generate high quality text that is recognisable as a tongue twister.
2. The model generates text that slightly resembles a tongue twister.
3. The model generates text that is recognisable as a tongue twister, but is lacking in fluency, coherence, or entertainment value.
4. The model generates text that is easily recognisable as a tongue twister, but may fall short of high quality.
5. The model excels in generating texts that resemble high-quality tongue twisters that are difficult to pronounce, make sense, and are entertaining.

Acknowledgements.

Tyler Loakman is supported by the Centre for Doctoral Training in Speech and Language Technologies (SLT) and their Applications funded by UK Research and Innovation [grant number EP/S023062/1]. Chen Tang is supported by the China Scholarship Council (CSC) for his doctoral study (File No.202006120039).


References

  • Anil etal. (2023)Anil, Rohan, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, etal. 2023.Gemini: A family of highly capable multimodal models.
  • Askari etal. (2023)Askari, Arian, Mohammad Aliannejadi, Chuan Meng, Evangelos Kanoulas, and Suzan Verberne. 2023.Expand, highlight, generate: RL-driven document generation for passage reranking.In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10087–10099.
  • Brown etal. (2020)Brown, TomB., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, DanielM. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.Language models are few-shot learners.CoRR, abs/2005.14165.
  • Buciluǎ, Caruana, and Niculescu-Mizil (2006)Buciluǎ, Cristian, Rich Caruana, and Alexandru Niculescu-Mizil. 2006.Model compression.In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, page 535–541, Association for Computing Machinery, New York, NY, USA.
  • Chakrabarty etal. (2023)Chakrabarty, Tuhin, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. 2023.Art or artifice? large language models and the false promise of creativity.
  • Chall and Dale (1995)Chall, JeanneSternlicht and Edgar Dale. 1995.Readability revisited: The new Dale-Chall readability formula.Brookline Books.
  • Chang etal. (2023)Chang, Yongzhu, Rongsheng Zhang, Lin Jiang, Qihang Chen, LeZhang, and Jiashu Pu. 2023.Sudowoodo: A Chinese lyric imitation system with source lyrics.In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 99–105.
  • Chen etal. (2021)Chen, Hong, Raphael Shu, Hiroya Takamura, and Hideki Nakayama. 2021.GraphPlan: Story generation by planning with event graph.In Proceedings of the 14th International Conference on Natural Language Generation, pages 377–386.
  • Chiang and Lee (2023)Chiang, Cheng-Han and Hung-yi Lee. 2023.Are synonym substitution attacks really synonym substitution attacks?In Findings of the Association for Computational Linguistics: ACL 2023, pages 1853–1878.
  • Chung etal. (2022)Chung, HyungWon, LeHou, Shayne Longpre, Barret Zoph, YiTay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, ShixiangShane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, EdH. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, QuocV. Le, and Jason Wei. 2022.Scaling instruction-finetuned language models.
  • Clark etal. (2021)Clark, Elizabeth, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and NoahA. Smith. 2021.All that’s ‘human’ is not gold: Evaluating human evaluation of generated text.In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282–7296.
  • Clements and Ridouane (2011)Clements, G.Nick and Rachid Ridouane. 2011.Where Do Phonological Features Come From? : Cognitive, Physical and Developmental Bases of Distinctive Speech Categories., 1st ed. edition.Language Faculty and Beyond Series. John Benjamins Publishing Company, Amsterdam/Philadelphia.
  • Van de Cruys, Tim. 2020. Automatic poetry generation from prosaic text. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2471–2480.
  • de Lacy, Paul V. 2007. The Cambridge Handbook of Phonology. Cambridge University Press, Cambridge.
  • Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
  • Flesch, R. 1948. A new readability yardstick. Journal of Applied Psychology, 32(3):221–233.
  • Foster, Mary Ellen and Michael White. 2007. Avoiding repetition in generated text. In Proceedings of the Eleventh European Workshop on Natural Language Generation (ENLG 07), pages 33–40.
  • Franceschelli, Giorgio and Mirco Musolesi. 2023. On the creativity of large language models.
  • Francis, W. Nelson and Henry Kucera. 1979. The Brown corpus. Department of Linguistics, Brown University.
  • Geisel, Theodore Seuss. 1965. Fox in socks: Dr. Seuss’s book of tongue tanglers. Random House.
  • Georgievska-Jancheska, Tatjana. 2019. Lambdacism, rhotacism and sigmatism in preschool children: Frequency and distribution. Open Access Macedonian Journal of Medical Sciences, 7(3):336–340.
  • Gick, Bryan, Ian Wilson, and Donald Derrick. 2013. Articulatory phonetics. John Wiley & Sons.
  • Gómez-Rodríguez, Carlos and Paul Williams. 2023. A confederacy of models: A comprehensive evaluation of LLMs on creative writing. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14504–14528, Association for Computational Linguistics, Singapore.
  • Guerini, Marco, Gözde Özbal, and Carlo Strapparava. 2015. Echoes of persuasion: The effect of euphony in persuasive communication. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1483–1493, Association for Computational Linguistics, Denver, Colorado.
  • Gunning, R. 1971. The Technique of Clear Writing. McGraw-Hill.
  • Gupta, Manish and Puneet Agrawal. 2022. Compression of deep learning models for text: A survey. ACM Transactions on Knowledge Discovery from Data, 16(4).
  • Gupta, Prakhar, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskenazi, and Jeffrey Bigham. 2019. Investigating evaluation of open-domain dialogue systems with human generated multiple references. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 379–391.
  • Hinton, Geoffrey, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop.
  • Hokamp, Chris and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535–1546.
  • Hong, Xudong, Asad Sayeed, Khushboo Mehra, Vera Demberg, and Bernt Schiele. 2023. Visual writing prompts: Character-grounded story generation with curated image sequences. Transactions of the Association for Computational Linguistics, 11:565–581.
  • Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
  • Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
  • Iso, Hayate. 2022. AutoTemplate: A simple recipe for lexically constrained text generation. In Proceedings of the 17th International Natural Language Generation Conference.
  • Jessen, Michael. 2008. Forensic phonetics. Language and Linguistics Compass, 2(4):671–711.
  • Keh, Sedrick Scott, Steven Y. Feng, Varun Gangal, Malihe Alikhani, and Eduard Hovy. 2023. PANCETTA: Phoneme aware neural completion to elicit tongue twisters automatically. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 491–504.
  • Kember, Heather, Kathryn Connaghan, and Rupal Patel. 2017. Inducing speech errors in dysarthria using tongue twisters. International Journal of Language & Communication Disorders, 52(4):469–478.
  • Kingma, Diederik P. and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Klausenburger, Jürgen. 1970. French Prosodics and Phonotactics: An Historical Typology, 1st edition. Beihefte zur Zeitschrift für Romanische Philologie Series. Walter de Gruyter GmbH, Tübingen.
  • Ladefoged, Peter. 1996. Elements of acoustic phonetics. University of Chicago Press.
  • Lewis, Mike, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880.
  • Li, Yizhi, Ruibin Yuan, Ge Zhang, Yinghao Ma, Xingran Chen, Hanzhi Yin, Chenghao Xiao, Chenghua Lin, Anton Ragni, Emmanouil Benetos, Norbert Gyenge, Roger Dannenberg, Ruibo Liu, Wenhu Chen, Gus Xia, Yemin Shi, Wenhao Huang, Zili Wang, Yike Guo, and Jie Fu. 2024. MERT: Acoustic music understanding model with large-scale self-supervised training. In Proceedings of the 12th International Conference on Learning Representations (ICLR).
  • Li, Yucheng, Frank Guerin, and Chenghua Lin. 2022. The secret of metaphor on expressing stronger emotion. In Proceedings of the 3rd Workshop on Figurative Language Processing (FLP), pages 39–43.
  • Li, Yucheng, Shun Wang, Chenghua Lin, Frank Guerin, and Loic Barrault. 2023a. FrameBERT: Conceptual metaphor detection with frame embedding learning. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1558–1563.
  • Li, Zhuoyan, Hangxiao Zhu, Zhuoran Lu, and Ming Yin. 2023b. Synthetic data generation with large language models for text classification: Potential and limitations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10443–10461.
  • Lin, Chin-Yew. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81.
  • Loakman, Tyler, Aaron Maladry, and Chenghua Lin. 2023. The iron(ic) melting pot: Reviewing human evaluation in humour, irony and sarcasm generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6676–6689.
  • Loakman, Tyler, Chen Tang, and Chenghua Lin. 2023. TwistList: Resources and baselines for tongue twister generation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 579–589.
  • Lu, Ximing, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, and Yejin Choi. 2022. NeuroLogic A*esque decoding: Constrained text generation with lookahead heuristics. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 780–799.
  • Manjavacas, Enrique, Mike Kestemont, and Folgert Karsdorp. 2019. Generation of hip-hop lyrics with hierarchical modeling and conditional templates. In Proceedings of the 12th International Conference on Natural Language Generation, pages 301–310.
  • McCutchen, Deborah and Charles A. Perfetti. 1982. The visual tongue-twister effect: Phonological activation in silent reading. Journal of Verbal Learning and Verbal Behavior, 21(6):672–687.
  • Mortensen, David R., Patrick Littell, Akash Bharadwaj, Kartik Goyal, Chris Dyer, and Lori Levin. 2016. PanPhon: A resource for mapping IPA segments to articulatory feature vectors. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3475–3484.
  • O’Halloran, Ken D. 2020. A tongue-twister to translation? Increased complexity of genioglossus movement during wakefulness in persons with obstructive sleep apnoea. The Journal of Physiology, 598(3):435–436.
  • OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, et al. 2023. GPT-4 technical report.
  • Ouyang, Long, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback.
  • Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318.
  • Ploujnikov, Artem and Mirco Ravanelli. 2022. SoundChoice: Grapheme-to-phoneme models with semantic disambiguation. In Proc. Interspeech 2022, pages 486–490.
  • Popescu-Belis, Andrei, Àlex R. Atrio, Bastien Bernath, Etienne Boisson, Teo Ferrari, Xavier Theimer-Lienhard, and Giorgos Vernikos. 2023. GPoeT: A language model trained for rhyme generation on synthetic data. In Proceedings of the 7th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 10–20.
  • Potash, Peter, Alexey Romanov, and Anna Rumshisky. 2018. Evaluating creative language generation: The case of rap lyric ghostwriting. In Proceedings of the Second Workshop on Stylistic Variation, pages 29–38.
  • Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
  • Raffel, Colin, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.
  • Rose, Stuart, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. In Text Mining: Applications and Theory, pages 1–20.
  • Roush, Allen, Sanjay Basu, Akshay Moorthy, and Dmitry Dubovoy. 2022. Most language models can be poets too: An AI writing assistant and constrained text generation studio. In Proceedings of the Second Workshop on When Creative AI Meets Conversational AI, pages 9–15, Association for Computational Linguistics, Gyeongju, Republic of Korea.
  • Smith, E. A. and R. J. Senter. 1967. Automated readability index. AMRL-TR, Aerospace Medical Research Laboratories (6570th), pages 1–14.
  • Somoff, Victoria. 2014. Four is not fourteen: Tongue twister patterns and the unmastery of language. Western Folklore, 73(2/3):195–215.
  • Sugiharto, Prasetyawan, Yan Santoso, and Maila Shofyana. 2022. Teaching English pronunciation using tongue twister. Acitya: Journal of Teaching and Education, 4(1):189–197.
  • Sun, Jiao, Anjali Narayan-Chen, Shereen Oraby, Shuyang Gao, Tagyoung Chung, Jing Huang, Yang Liu, and Nanyun Peng. 2022. Context-situated pun generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4635–4648.
  • Tang, Chen, Chenghua Lin, Henglin Huang, Frank Guerin, and Zhihao Zhang. 2022. EtriCA: Event-triggered context-aware story generation augmented by cross attention. In Findings of the Association for Computational Linguistics: EMNLP 2022.
  • Tang, Chen, Hongbo Zhang, Tyler Loakman, Chenghua Lin, and Frank Guerin. 2023. Enhancing dialogue generation via dynamic graph knowledge aggregation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4604–4616.
  • Tian, Yufei, Anjali Narayan-Chen, Shereen Oraby, Alessandra Cervone, Gunnar Sigurdsson, Chenyang Tao, Wenbo Zhao, Tagyoung Chung, Jing Huang, and Nanyun Peng. 2023. Unsupervised melody-to-lyrics generation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9235–9254.
  • Tian, Yufei, Divyanshu Sheth, and Nanyun Peng. 2022. A unified framework for pun generation with humor principles. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3253–3261.
  • Touvron, Hugo, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, et al. 2023. Llama 2: Open foundation and fine-tuned chat models.
  • Valitutti, Alessandro, Hannu Toivonen, Antoine Doucet, and Jukka M. Toivanen. 2013. “Let everything turn well in your wife”: Generation of adult humor using lexical constraints. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 243–248.
  • Wang, Shun, Yucheng Li, Chenghua Lin, Loic Barrault, and Frank Guerin. 2023. Metaphor detection with effective context denoising. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1404–1409.
  • Wang, Shun, Ge Zhang, Han Wu, Tyler Loakman, Wenhao Huang, and Chenghua Lin. 2024. MMTE: Corpus and metrics for evaluating machine translation quality of metaphorical language. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
  • Whitehouse, Chenxi, Monojit Choudhury, and Alham Aji. 2023. LLM-powered data augmentation for enhanced cross-lingual performance. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 671–686.
  • Wilshire, Carolyn E. 1999. The “tongue twister” paradigm as a technique for studying phonological encoding. Language and Speech, 42(1):57–82.
  • Wöckener, Jörg, Thomas Haider, Tristan Miller, The-Khang Nguyen, Thanh Tung Linh Nguyen, Minh Vu Pham, Jonas Belouadi, and Steffen Eger. 2021. End-to-end style-conditioned poetry generation: What does it take to learn from examples alone? In Proceedings of the 5th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 57–66.
  • Wong, Min Ney, Yanky Chan, Manwa L. Ng, and Frank F. Zhu. 2019. Effects of transcranial direct current stimulation over the Broca’s area on tongue twister production. International Journal of Speech-Language Pathology, 21(2):182–188. PMID: 29642741.
  • Wright, Ernest Vincent. 2016. Gadsby: A story of over 50,000 words without using the letter “E”. Digital Ninjas Media, Inc.
  • Xue, Lanqing, Kaitao Song, Duocai Wu, Xu Tan, Nevin L. Zhang, Tao Qin, Wei-Qiang Zhang, and Tie-Yan Liu. 2021. DeepRapper: Neural rap generation with rhyme and rhythm modeling. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 69–81.
  • Xue, Linting, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2022. ByT5: Towards a token-free future with pre-trained byte-to-byte models. Transactions of the Association for Computational Linguistics, 10:291–306.
  • Yang, Aiyuan, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, Mingan Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, and Zhiying Wu. 2023. Baichuan 2: Open large-scale language models.
  • Yang, Bohao, Chen Tang, and Chenghua Lin. 2024. Improving medical dialogue generation with abstract meaning representations. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 11826–11830.
  • Yang, Bohao, Chen Tang, Kun Zhao, Chenghao Xiao, and Chenghua Lin. 2024. Effective distillation of table-based reasoning ability from LLMs. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5538–5550.
  • Yao, Shunyu, Howard Chen, Austin W. Hanjie, Runzhe Yang, and Karthik Narasimhan. 2023. COLLIE: Systematic construction of constrained text generation tasks.
  • Yu, Dingyao, Kaitao Song, Peiling Lu, Tianyu He, Xu Tan, Wei Ye, Shikun Zhang, and Jiang Bian. 2023. MusicAgent: An AI agent for music understanding and generation with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 246–255.
  • Yuan, Ruibin, Hanfeng Lin, Yi Wang, Zeyue Tian, Shangda Wu, Tianhao Shen, Ge Zhang, Yuhang Wu, Cong Liu, Ziya Zhou, Liumeng Xue, Ziyang Ma, Qin Liu, Tianyu Zheng, Yizhi Li, Yinghao Ma, Yiming Liang, Xiaowei Chi, Ruibo Liu, Zili Wang, Chenghua Lin, Qifeng Liu, Tao Jiang, Wenhao Huang, Wenhu Chen, Jie Fu, Emmanouil Benetos, Gus Xia, Roger Dannenberg, Wei Xue, Shiyin Kang, and Yike Guo. 2024. ChatMusician: Understanding and generating music intrinsically with LLM. In Findings of the Association for Computational Linguistics, pages 6252–6271.
  • Zhang, Le, Rongsheng Zhang, Xiaoxi Mao, and Yongzhu Chang. 2022. QiuNiu: A Chinese lyrics generation system with passage-level input. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 76–82.
  • Zhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020a. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
  • Zhang, Ying, Hidetaka Kamigaito, Tatsuya Aoki, Hiroya Takamura, and Manabu Okumura. 2021. Generic mechanism for reducing repetitions in encoder-decoder models. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 1606–1615.
  • Zhang, Yizhe, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020b. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270–278.
  • Zhuo, Le, Ruibin Yuan, Jiahao Pan, Yinghao Ma, Yizhi Li, Ge Zhang, Si Liu, Roger Dannenberg, Jie Fu, Chenghua Lin, Emmanouil Benetos, Wei Xue, and Yike Guo. 2023. LyricWhiz: Robust multilingual lyrics transcription by whispering to ChatGPT. In Proceedings of the International Society for Music Information Retrieval Conference (ISMIR).