Large language models don’t behave like people, even though we may expect them to (2024)

One thing that makes large language models (LLMs) so powerful is the diversity of tasks to which they can be applied. The same machine-learning model that can help a graduate student draft an email could also aid a clinician in diagnosing cancer.

However, the wide applicability of these models also makes them challenging to evaluate in a systematic way. It would be impossible to create a benchmark dataset to test a model on every type of question it can be asked.

In a new paper, MIT researchers took a different approach. They argue that, because humans decide when to deploy large language models, evaluating a model requires an understanding of how people form beliefs about its capabilities.

For example, the graduate student must decide whether the model could be helpful in drafting a particular email, and the clinician must determine which cases would be best to consult the model on.

Building off this idea, the researchers created a framework to evaluate an LLM based on its alignment with a human’s beliefs about how it will perform on a certain task.

They introduce a human generalization function — a model of how people update their beliefs about an LLM’s capabilities after interacting with it. Then, they evaluate how aligned LLMs are with this human generalization function.

Their results indicate that when models are misaligned with the human generalization function, a user could be overconfident or underconfident about where to deploy it, which might cause the model to fail unexpectedly. Furthermore, due to this misalignment, more capable models tend to perform worse than smaller models in high-stakes situations.

“These tools are exciting because they are general-purpose, but because they are general-purpose, they will be collaborating with people, so we have to take the human in the loop into account,” says study co-author Ashesh Rambachan, assistant professor of economics and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

Rambachan is joined on the paper by lead author Keyon Vafa, a postdoc at Harvard University; and Sendhil Mullainathan, an MIT professor in the departments of Electrical Engineering and Computer Science and of Economics, and a member of LIDS. The research will be presented at the International Conference on Machine Learning.

Human generalization

As we interact with other people, we form beliefs about what we think they do and do not know. For instance, if your friend is finicky about correcting people’s grammar, you might generalize and think they would also excel at sentence construction, even though you’ve never asked them questions about sentence construction.

“Language models often seem so human. We wanted to illustrate that this force of human generalization is also present in how people form beliefs about language models,” Rambachan says.

As a starting point, the researchers formally defined the human generalization function, which involves asking questions, observing how a person or LLM responds, and then making inferences about how that person or model would respond to related questions.

If someone sees that an LLM can correctly answer questions about matrix inversion, they might also assume it can ace questions about simple arithmetic. A model that is misaligned with this function — one that doesn’t perform well on questions a human expects it to answer correctly — could fail when deployed.
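To make the idea concrete, here is a minimal sketch of what such a generalization function might look like in code. The paper estimates this function from survey data; the linear form, the parameter values, and the names below are illustrative assumptions, not the authors' model.

```python
# Illustrative sketch (not from the paper) of a human generalization
# function: given an observation of whether a responder answered one
# question correctly, predict how strongly a human would expect a
# correct answer on a related question.

def human_generalization(observed_correct: bool, similarity: float) -> float:
    """Return an assumed P(human believes the next answer is correct).

    observed_correct: whether the observed answer was right
    similarity: 0..1, how related the new question is to the observed one
    """
    baseline = 0.5  # prior belief with no information
    # Observing a correct answer raises the belief; an incorrect one
    # lowers it, more strongly for closely related questions.
    shift = 0.4 * similarity
    return baseline + shift if observed_correct else baseline - shift
```

A misaligned LLM is then one whose actual accuracy on the related question diverges from the belief this function predicts.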

With that formal definition in hand, the researchers designed a survey to measure how people generalize when they interact with LLMs and other people.

They showed survey participants questions that a person or LLM got right or wrong and then asked if they thought that person or LLM would answer a related question correctly. Through the survey, they generated a dataset of nearly 19,000 examples of how humans generalize about LLM performance across 79 diverse tasks.
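With a dataset of that shape, one straightforward way to quantify alignment is to check how often a participant's prediction about the model matched what the model actually did. The record structure and field names below are invented for illustration; the paper's actual metric may differ.

```python
# Hypothetical sketch of scoring alignment between an LLM and the human
# generalization function, using survey-style records (field names are
# invented for illustration, not taken from the paper's dataset).

def alignment_score(records):
    """Fraction of cases where the human's prediction about the model
    matched the model's actual performance on the related question."""
    matches = sum(
        1 for r in records
        if r["human_predicts_correct"] == r["model_answers_correct"]
    )
    return matches / len(records)

survey = [
    {"human_predicts_correct": True,  "model_answers_correct": True},
    {"human_predicts_correct": True,  "model_answers_correct": False},
    {"human_predicts_correct": False, "model_answers_correct": False},
]
print(alignment_score(survey))  # 2 of 3 predictions matched
```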

Measuring misalignment

They found that participants did quite well when asked whether a human who got one question right would answer a related question right, but they were much worse at generalizing about the performance of LLMs.

“Human generalization gets applied to language models, but that breaks down because these language models don’t actually show patterns of expertise like people would,” Rambachan says.

People were also more likely to update their beliefs about an LLM when it answered questions incorrectly than when it got questions right. They also tended to believe that LLM performance on simple questions would have little bearing on its performance on more complex questions.

In situations where people put more weight on incorrect responses, simpler models outperformed very large models like GPT-4.

“Language models that get better can almost trick people into thinking they will perform well on related questions when, in actuality, they don’t,” he says.

One possible explanation for why humans are worse at generalizing for LLMs could come from their novelty — people have far less experience interacting with LLMs than with other people.

“Moving forward, it is possible that we may get better just by virtue of interacting with language models more,” he says.

To this end, the researchers want to conduct additional studies of how people’s beliefs about LLMs evolve over time as they interact with a model. They also want to explore how human generalization could be incorporated into the development of LLMs.

“When we are training these algorithms in the first place, or trying to update them with human feedback, we need to account for the human generalization function in how we think about measuring performance,” he says.

In the meantime, the researchers hope their dataset could be used as a benchmark to compare how LLMs perform relative to the human generalization function, which could help improve the performance of models deployed in real-world situations.

“To me, the contribution of the paper is twofold. The first is practical: The paper uncovers a critical issue with deploying LLMs for general consumer use. If people don’t have the right understanding of when LLMs will be accurate and when they will fail, then they will be more likely to see mistakes and perhaps be discouraged from further use. This highlights the issue of aligning the models with people's understanding of generalization,” says Alex Imas, professor of behavioral science and economics at the University of Chicago’s Booth School of Business, who was not involved with this work. “The second contribution is more fundamental: The lack of generalization to expected problems and domains helps in getting a better picture of what the models are doing when they get a problem ‘correct.’ It provides a test of whether LLMs ‘understand’ the problem they are solving.”

This research was funded, in part, by the Harvard Data Science Initiative and the Center for Applied AI at the University of Chicago Booth School of Business.

FAQs

What are the characteristics of a large language model? ›

A large language model (LLM) is a deep learning algorithm that can perform a variety of natural language processing (NLP) tasks. Large language models use transformer models and are trained using massive datasets — hence, large. This enables them to recognize, translate, predict, or generate text or other content.

What are the dangers of large scale language models like this? ›

Risks Of Open-Source Large Language Models
  • Misinformation and Manipulation. ...
  • Privacy and Security Concerns. ...
  • Bias and Fairness. ...
  • Intellectual Property Issues.

What are the limits of large language models? ›

Here's a summary of LLMs' limitations for you:
  • LLMs can't process everything at once.
  • LLMs don't retain information between interactions.
  • LLMs can't update their knowledge base in real time.
  • LLMs can sometimes say things that don't make sense.
  • LLMs don't understand subtext.
  • LLMs don't really understand reasoning.

What is the large language model? ›

Large language models, also known as LLMs, are very large deep learning models that are pre-trained on vast amounts of data. The underlying transformer is a set of neural networks that consist of an encoder and a decoder with self-attention capabilities.

What are the principles of large language models? ›

One of the key characteristics of large language models is their ability to generate human-like text. These models can generate text that is coherent, grammatically correct, and sometimes even humorous. They can also translate text from one language to another and answer questions based on a given context.

Why do large language models make mistakes? ›

The authors found that LLMs are prone to content effects similar to those seen in humans. Both humans and LLMs are more likely to mistakenly label an invalid argument as valid when its semantic content is sensical and believable.

What are the weaknesses of a large language model? ›

Contextual Understanding and Common-Sense Reasoning

LLMs can struggle with nuanced context and common-sense reasoning. For example, a large language model might misinterpret a sarcastic comment or fail to understand cultural references. In situations where nuanced understanding is crucial, such as customer support for complex issues, this can be a significant drawback.

What are the strengths of large language models? ›

13 Benefits of Large Language Models for Organizations
  • Streamlining Operations. ...
  • Innovation in Product Development. ...
  • Enhanced Security. ...
  • Improved Content Generation. ...
  • Enhanced Decision Making. ...
  • Data Analysis and Insights. ...
  • Improved Personalization in Customer Service. ...
  • Human Resource Management.

What do large language models teach? ›

For elementary school students, large language models can assist in the development of reading and writing skills (e.g., by suggesting syntactic and grammatical corrections), as well as in the development of writing style and critical thinking skills.

Why do large language models work so well? ›

Pre-training: The model is exposed to massive amounts of text data (such as books, articles, or web pages), so that it can learn the patterns and connections between words. The more data it is trained on, the better it will be at generating new content. While doing so, it learns to predict the next word in a sentence.
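The next-word prediction objective described above can be illustrated with a toy example. Real LLMs learn it with neural networks over enormous corpora; this simple bigram counter, written only for illustration, shows the bare idea of learning which word tends to follow which.

```python
# Toy illustration of next-word prediction, the pre-training objective
# described above. A bigram counter stands in for a neural network:
# it records which word follows which in the training text, then
# predicts the most frequently observed successor.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which across the training text."""
    follow = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        follow[cur][nxt] += 1
    return follow

def predict_next(follow, word):
    """Return the most frequently observed next word."""
    return follow[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" most often
```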

What is the impact of large language models? ›

Large Language Models (LLMs) have revolutionized various sectors, including education, healthcare, business, and creative industries. They enhance learning experiences, improve accessibility, and streamline assessment processes, allowing educators to focus on more nuanced teaching.

What is the difference between large and small language models? ›

A small language model is an AI model similar to a large language model, only with less training data and fewer parameters. They fundamentally do the same thing as a large language model (understand and generate language) but are smaller and less complex.

Which statement best describes the key features of large language models? ›

The statement that best describes the key features of large language models is: "Ability to generate human-like language and complete various language tasks." This feature highlights the model's capability to interact with and process language in an increasingly human-like manner.

Which characteristic is common to a closed source large language model? ›

Explanation: A common characteristic of closed-source large language models is that their underlying code and training data are proprietary and not publicly accessible.

What are the four main characteristics of language? ›

Features of Language
  • Duality of patterning: meaningless units of sound combine into meaningful units. ...
  • Productivity: Symbols and rules can be combined for infinite messages. ...
  • Interchangeability: Speakers are able to send and receive messages.
  • Arbitrariness: There is no inherent connection between a word's sound and its meaning.
