News

Postdoctoral Researchers in Computer Vision / Machine Learning, Natural Language Processing, and Phonology are invited to contact Peter Uhrig at peter.uhrig@fau.de regarding the job opportunities specified below:

Our AHRC-DFG-funded project on world futures is currently seeking to fill a post-doctoral computer vision position, already funded by the German Research Council (DFG). We encourage people fleeing the war in Ukraine who hold a Ph.D. or equivalent degree and have the right skills to apply. To access the job advertisement, please click here.


We can also apply to German funding schemes for additional funding to create two more posts on our project. Suitable candidates would work at Friedrich-Alexander-Universität Erlangen-Nürnberg for approximately two years and should have a disciplinary background and a strong research track record in natural language processing or phonetics/phonology (research on prosody).

The funding opportunity for the first post, for which the deadline is 10th March 2022, is aimed at Ukrainian scholars at acute risk, i.e. refugees or people threatened by the ongoing war, who hold a doctoral degree (Ph.D. or the equivalent). 

The second post, for which there is no fixed deadline, would also be open to Russian scholars forced to flee the region because of the war. Obviously we cannot promise anything at this stage, but we are working to offer positions on our project to at least two researchers affected by the current war in Ukraine.

If our funding applications are successful, Friedrich-Alexander-Universität Erlangen-Nürnberg will support nominated candidates in obtaining German work permits and visas.


If you meet the criteria specified above, or can think of candidates who do, please write to Peter Uhrig (peter.uhrig@fau.de) or direct them to him as soon as possible.


Our inter-divisional team of Oxford researchers, led by Dr Anna Wilson (OSGA), has won a special “large grant” from the John Fell OUP Research Fund to create a major interdisciplinary research hub, the “International Multimodal Communication Collaboration”.

This 18-month project (15/10/2021-15/03/2023) will catalyse interdisciplinary research on mass media. It will build upon the foundation laid by the International Multimodal Communication Centre (IMCC) – a research programme hosted by OSGA that represents an exciting constellation of research links between OSGA, the Oxford Internet Institute, the Department of Engineering Science, the Faculty of Linguistics, the Oxford Text Archive, and the Defence Science and Technology Laboratory. The project aims to formalise and cement these links and to pursue collaborative interdisciplinary research on multimodal depictions of futures in the media.

IMCC researchers are co-mentoring Red Hen Google Summer of Code 2021 student projects

We are delighted to announce that Red Hen Lab – IMCC’s collaborative partner – has won funding from Google for Google Summer of Code 2021 (GSoC) for the 7th year in a row.

Key IMCC researchers have been part of Red Hen’s GSoC funding applications and mentoring teams for many years, and this year three members of the IMCC research team – Dr Anna Wilson, Dr Peter Uhrig, and Professor Mark Turner – are co-mentoring 9 of the 12 fully funded GSoC 2021 student projects:

Dr Anna Wilson (Oxford School of Global and Area Studies) is co-mentoring:

Nitesh Mahawar: Multimodal TV Show Segmentation

Yunfei Zhao: Gesture temporal detection pipeline for news videos

Nickil Maveli: Detecting Joint Meaning Construal by Language and Gesture

Mohamed Mokhtar: Red Hen Rapid Annotator

Dr Peter Uhrig (FAU Erlangen-Nürnberg) is co-mentoring:

Mohamed Mokhtar: Red Hen Rapid Annotator

Swadesh Jana: Create a Red Hen OpenDataset for gestures with performance baselines

Hannes Leipold: Utilizing Speech-to-Speech Translation to Facilitate a Multilingual Text, Audio, and Video Message Board and Database

Professor Mark Turner (Red Hen Co-Director; Case Western Reserve University) is co-mentoring:

Yash Khasbage: Anonymizing Audiovisual Data

Nickil Maveli: Detecting Joint Meaning Construal by Language and Gesture

Tarun Nagdeve: Development of a Visual Recognition model for Aztec Hieroglyphs

Ankiit Gupta: Simulating Multimodal Communication in Vervet Monkeys with Braitenberg Vehicles