
Discussion Event: Artificial Intelligence and Community Interaction

On June 27th, Sciences Po and École Polytechnique jointly hosted the concluding seminar of their series, "Exploring the Role of Large Language Models (LLMs) in Social Science Research."


Last week, the third and final seminar in the series took place at Sciences Po and École Polytechnique. Hosted under the Alliance Programme, a transatlantic academic partnership, the event brought together researchers from disciplines including sociology, political science, history, and economics.

The seminar showcased four presentations from distinguished speakers: Emma Bonutti D'Agostini, Bart Bonikowski, Matthew Connelly, and Thomas Renault. D'Agostini presented her work, "Mediated Voices: A CSS Investigation into Journalists' Portrayal of the Political Sphere", while Bonikowski discussed "National Identification on Twitter, or How to Find a Needle in a Haystack with LLMs". Connelly presented "America's Top Secrets: Using AI to Decipher Official Secrecy", and Renault shared his research, "Forecasting inflation with Large Language Models: A Multilingual, News-Based Approach".

Throughout the seminar, students from the Paris site of the Summer Institute in Computational Social Science (SICSS) participated actively, contributing insights to the discussions. The event marked a significant step in fostering dialogue about the role of LLMs in social science research and enriched that dialogue with fresh perspectives from emerging scholars in the field.

The seminar highlighted the increasing influence of artificial intelligence and LLMs on research in the social sciences and humanities. LLMs can analyze and generate vast amounts of text, which is particularly useful in social science research for tasks such as text analysis, content generation, and potentially even data collection through text-based surveys.
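As a rough illustration of that text-analysis use case, and echoing the topic of Bonikowski's talk, the sketch below codes a short text with an off-the-shelf LLM API. It assumes the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name, prompt, and example text are illustrative and not drawn from the seminar presentations.

```python
# Minimal sketch: zero-shot coding of short texts with an LLM.
# Assumes the OpenAI Python client and an API key in the environment;
# the model name, prompt, and example text are illustrative only.
from openai import OpenAI

client = OpenAI()

def code_national_identification(text: str) -> str:
    """Ask the model to label a short text as expressing national identification or not."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not from the seminar
        messages=[
            {"role": "system",
             "content": "You are a research assistant coding tweets. "
                        "Reply with exactly one label: NATIONAL or OTHER."},
            {"role": "user", "content": text},
        ],
        temperature=0,  # keep the coding as deterministic as possible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(code_national_identification("Proud of my country at today's celebration."))
```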

However, the use of LLMs raises ethical concerns, such as bias in the models, privacy issues with data collection, and the potential for information manipulation. These are crucial considerations for social scientists who must ensure that their research is rigorous and unbiased.

In addition to data analysis and generation, LLMs can assist researchers with tasks like literature review summarization, data cleaning, and even proposing research questions grounded in the existing literature. Shared LLM-based workflows can also give collaborators a common starting point for data analysis and interpretation. Interpreting the output of these models, however, requires a solid understanding of how they work to avoid misreading the results.
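A similarly hedged sketch shows how literature-review summarization reduces to batching abstracts through the same kind of API call; the client, model name, and example abstracts below are assumptions for demonstration, not a method presented at the seminar.

```python
# Minimal sketch: batch-summarizing paper abstracts for a literature review.
# Assumes the OpenAI Python client; model, prompt, and abstracts are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_abstract(abstract: str) -> str:
    """Return a two-sentence summary of a single paper abstract."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize the following abstract in two sentences "
                        "for a social science literature review."},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content.strip()

# Illustrative abstracts; a real review would load these from a bibliography file.
abstracts = [
    "We study how newspaper coverage of elected officials changed between 2000 and 2020.",
    "Using survey data from 30 countries, we examine expressions of national identity online.",
]
summaries = [summarize_abstract(a) for a in abstracts]
```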

The seminar also demonstrated how LLMs can enable new methodologies in social science research, such as using generative models to simulate social scenarios or predict outcomes based on large datasets.
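As one hedged illustration of such a simulation, the sketch below asks a generative model to produce survey-style answers from hypothetical personas; the personas, question, and model choice are invented for demonstration and do not come from the event.

```python
# Minimal sketch: simulating survey-style responses from hypothetical personas.
# Assumes the OpenAI Python client; personas, question, and model are illustrative.
from openai import OpenAI

client = OpenAI()

# Hypothetical personas and question, for illustration only.
personas = [
    "a 35-year-old teacher in a mid-sized French city",
    "a 62-year-old retired factory worker in the rural United States",
]
question = "How concerned are you about inflation over the next year, and why?"

def simulate_response(persona: str, question: str) -> str:
    """Generate one simulated, in-character answer to a survey question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Answer the survey question in character as {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

answers = {persona: simulate_response(persona, question) for persona in personas}
```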

The Alliance Programme, a transatlantic academic partnership between Columbia University and three leading French institutions, École Polytechnique, Paris 1 Panthéon-Sorbonne, and Sciences Po, has organized a series of seminars dedicated to the theme of "AI and Society" over the past year. The event further strengthened cross-institutional partnerships at the intersection of AI and society.

For detailed materials from the seminar, readers can consult the hosting institutions or any proceedings published from the event.

Artificial intelligence driven by LLMs is also playing a growing role in learning and professional development, notably by facilitating tasks like literature review summarization and data cleaning. It remains essential, however, for scholars to understand how these models work in order to avoid misinterpretation and to address ethical concerns such as bias and privacy.
