ForskarFredag 2022

ForskarFredag is part of European Researchers’ Night and took place on October 1 this year. However, ForskarFredag is not just about one night: school classes can “borrow a researcher” throughout the entire week (September 26 to October 1). LPCN’s Anna Jonsson took part in this initiative and visited 17 school classes: four in person in Umeå and the rest via Zoom.

“A one-eyed cat riding a skateboard into the apocalyptic abyss but not warzone”

Together with the school classes, Anna examined different sorting algorithms, saw examples of discriminatory technology, and generated images using AI (the results of which are sprinkled across this post). For one of the high school classes, Anna had prepared a mathematical problem, as can be seen on the school’s Instagram page.
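To give a flavour of the sorting activity, here is a minimal, hypothetical Python sketch of the kind of comparison one might demo with a class: counting how many element comparisons bubble sort and merge sort need on the same shuffled list. It is an illustration only, not necessarily what was shown in the classroom.

# Hypothetical classroom-style demo: compare comparison counts for two sorting algorithms.
import random

def bubble_sort(items):
    """Return a sorted copy of items and the number of comparisons made."""
    data, comparisons = list(items), 0
    for end in range(len(data) - 1, 0, -1):
        for i in range(end):
            comparisons += 1
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
    return data, comparisons

def merge_sort(items):
    """Return a sorted copy of items and the number of comparisons made."""
    if len(items) <= 1:
        return list(items), 0
    mid = len(items) // 2
    left, c_left = merge_sort(items[:mid])
    right, c_right = merge_sort(items[mid:])
    merged, comparisons = [], c_left + c_right
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

if __name__ == "__main__":
    numbers = random.sample(range(100), 30)
    for name, algorithm in [("bubble sort", bubble_sort), ("merge sort", merge_sort)]:
        _, comparisons = algorithm(numbers)
        print(f"{name}: {comparisons} comparisons")

Running the sketch a few times makes the classroom point: merge sort gets by with far fewer comparisons than bubble sort, and the gap grows with the length of the list.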

Continue reading “ForskarFredag 2022”

Attending the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)

The ACL 2022 conference took place May 22-27 at the Convention Centre Dublin, and we are happy to say that LPCN was represented by its members Johanna Björklund, Frank Drewes and Anna Jonsson, who were invited to present their paper Improved N-Best Extraction with an Evaluation on Language Data.

Attempt at a panorama photo from inside the Convention Centre Dublin.

Below is the 12-minute presentation video that was prepared for the conference.

Continue reading “Attending the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)”

Gender Bias in NLP Seminar at AI Sweden

On April 28, Hannah Devinney presented a talk on Gender Bias in Natural Language Processing at AI Sweden’s Swedish NLP Webinar series, which focuses on NLP development in Sweden and/or for the Swedish language.

The talk focused on gendered aspects of “bias”: exploring what it is, how it manifests in NLP, the harms it causes, and what we as NLPers can do to combat these harms. Their presentation was followed by an engaging discussion on the nature and future of “unbiased” NLP. We hope that this talk will lead to increased awareness in the AI community of the importance of intersectional and inclusive models of gender for mitigating bias.

A recording of the talk can be found in the video embedded below.

Kick-off for MMW project on contextual communication

AI-driven advertising is often presented as one of the main successes of deep learning, following heavy investments in machine learning algorithms that discover and exploit patterns in consumer behaviour. In a new project financed by the Marcus and Marianne Wallenberg Foundation, researchers from Umeå University, Malmö University, Stockholm School of Economics, and the University of Gothenburg collaborate to understand the implications for citizens and society of the different types of targeting methods used in online advertising.

Photo by Joe Yates on Unsplash
Continue reading “Kick-off for MMW project on contextual communication”

WARA Media – A new arena for multidisciplinary research and innovation

In December 2020, WASP launched a new research arena to place Media AI at the centre of a multidisciplinary ecosystem of scientific fields and industry segments. The arena is a recognition of the value of media AI in process automation, for example, in the facilitation of remote control in forestry, in the creation of virtual verification environments for autonomous machinery and in the generation of non-playable characters in gaming.

Photo by Myke Simon on Unsplash
Continue reading “WARA Media – A new arena for multidisciplinary research and innovation”

The EQUITBL project at the AI for Good Breakthrough Days

On Monday the 29th of September, Hannah Devinney presented the EQUITBL project at the AI for Good Breakthrough Days main stage event. The project is one of three winners in the Breakthrough Days’ Gender Equity Track.

This interdisciplinary project combines qualitative and quantitative methods to explore and understand how bias and stereotypes manifest themselves in large text collections, such as those commonly used to train machine learning models in language technology. We also develop tools for mitigating the detrimental effects that bias, stereotyping, and underrepresentation can have when these models are integrated into AI systems used for decision making.
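As a purely hypothetical illustration of the quantitative side (not the EQUITBL project’s actual methodology, and deliberately simplified to a binary pronoun list), a first probe of a corpus might simply count how often occupation words co-occur with gendered pronouns:

# Toy illustration only: sentence-level co-occurrence of occupation words with gendered pronouns.
import re
from collections import Counter, defaultdict

FEMININE = {"she", "her", "hers"}
MASCULINE = {"he", "him", "his"}
OCCUPATIONS = {"nurse", "engineer", "doctor", "teacher"}  # hypothetical word list

def cooccurrence_counts(sentences):
    """Count, per occupation word, the sentences in which it appears with a gendered pronoun."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = set(re.findall(r"[a-zA-Z']+", sentence.lower()))
        for occupation in OCCUPATIONS & tokens:
            if tokens & FEMININE:
                counts[occupation]["feminine"] += 1
            if tokens & MASCULINE:
                counts[occupation]["masculine"] += 1
    return counts

if __name__ == "__main__":
    corpus = [
        "The nurse said she would be late.",
        "The engineer finished his design review.",
        "The doctor updated her notes after the visit.",
    ]
    for occupation, tally in cooccurrence_counts(corpus).items():
        print(occupation, dict(tally))

Raw counts like these only flag skewed associations; it is the qualitative analysis that turns them into an understanding of the stereotypes behind them, which is where the interdisciplinary combination comes in.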

The project members are:

  • Hannah Devinney, Computing Science, Centre for Gender Studies, and LPCN, Umeå University
  • Henrik Björklund, Computing Science and LPCN, Umeå University
  • Jenny Björklund, Centre for Gender Research, Uppsala University

Workshop with Humlab

On the afternoon of March 20, a number of LPCN members held a workshop together with Humlab on the topic of text analysis. We all presented how we use or plan to use text analysis in our work. Happily, we managed to identify a number of immediate collaboration opportunities and also discussed the possibilities for building a joint text analysis infrastructure. As an additional benefit, the LPCN family grew substantially! The workshop concluded with a very nice after-work session.

LPCN in IROS 2018, Madrid

LPCN members Michele Persiani and Maitreyee Tewari also participated in and presented at the ‘Robots for Assisted Living’ workshop at IROS 2018. Maitreyee’s research focuses on building and implementing hybrid (machine learning and formal grammars) dialogue models for communication between robots and humans, while Michele builds deep-learning-based models for intention recognition from natural language. They presented their ongoing research at the workshop, and below are some glimpses of it.

Continue reading “LPCN in IROS 2018, Madrid”