- From: Miel Vander Sande (UGent-imec) <Miel.VanderSande@UGent.be>
- Date: Thu, 3 May 2018 08:51:05 +0000
- To: Semantic Web Mailing List <semantic-web@w3.org>, "semanticweb@yahoogroups.com" <semanticweb@yahoogroups.com>
Several Research / PhD positions on Speech, Audio and Text Processing with Semantic Web
=======================================================================

TL;DR: several research positions with an optional PhD track on combining Semantic Web technology with text, speech and audio processing. Mail Kris.Demuynck@UGent.be with a motivation letter, CV, reference contacts and English proficiency scores (if non-Belgian).

Within the context of several national and international projects conducted at IDLab (http://www.ugent.be/ea/idlab/) that bridge Linked Data/Semantic Web with text, speech and audio processing, we are looking for several junior researchers who will work on various aspects of speech, audio and text processing to consume and produce semantic data. This includes advanced machine learning techniques for Named Entity Recognition, pattern recognition and natural language processing, search algorithms, domain modeling, reasoning, artificial intelligence, and signal processing.

You will cooperate with enthusiastic colleagues and diverse external partners to fulfil the project requirements while staying up to date with important developments in the related literature. The projects will allow you to collaborate with researchers and developers from Europe and beyond. You are expected to help develop new applications of machine learning technologies with RDF, and/or new search algorithms to combine all knowledge sources more efficiently, algorithms to extract structure from natural language, and/or new signal processing techniques for noise-robust feature extraction. Since all our research topics lie at the crossroads of several of the above-mentioned domains (Semantic Web, machine learning, natural language processing, signal processing, search algorithms), you will be working in at least two of them.

You
-------------------
You have a Master of Science/Engineering degree, preferably in Computer Science, Electronics-ICT or (Mathematical) Informatics.
Note: to be admissible for a dedicated PhD fund, your degree must be equivalent to 5 years of engineering studies (bachelor + master) in the European Union, and you must have a solid academic track record (graduation cum laude, or grades in the top 30%). Otherwise, you will work under project funding.

You are interested in and motivated by the research topic, and possibly in obtaining a PhD degree. You have excellent analytical skills. You speak and write English fluently (CEFR level C1) and you have good communication skills. You have an open mind and a multi-disciplinary attitude. You are proficient in programming (preferably Python and/or C). You have a strong interest in the above-mentioned domains. Prior experience with these domains is a plus.

We
-------------------
We offer a fully funded PhD position in a challenging, stimulating and pleasant research environment, where you can contribute to our research on the (Semantic) Web and speech & audio processing. The PhD research is innovative, with clear practical applications, and is done in close collaboration with national and international industry players. You will join a young and enthusiastic team of researchers, post-docs and professors. This PhD position is available immediately. We are based in Ghent, Belgium, a lovely, small, but lively medieval city.

Interested?
-------------------
Apply with a motivation letter, (scientific) resume, study results (if available), English proficiency scores (if not Belgian), relevant publications (if any), and reference contacts. For any questions, contact prof. dr. ir. Kris Demuynck (Kris.Demuynck@UGent.be). After the first screening, suitable candidates will be invited for an interview (also possible via Skype).

More on Imec-IDLab-UGent
-------------------
Imec is the world-leading research and innovation hub in nanoelectronics and digital technologies.
The combination of our widely acclaimed leadership in microchip technology and profound software and ICT expertise is what makes us unique. By leveraging our world-class research infrastructure and our local and global ecosystem of partners across a multitude of industries, we create groundbreaking innovation in application domains such as healthcare, smart cities and mobility, logistics and manufacturing, and energy. As a trusted partner for companies, start-ups and universities, we bring together close to 3,500 brilliant minds from over 70 nationalities. Imec is headquartered in Leuven, Belgium, has distributed R&D groups at several Flemish universities, in the Netherlands, Taiwan, the USA and China, and has offices in India and Japan. To strengthen this position as a leading player in our field, we are looking for those passionate talents that make the difference!

IDLab is a core research group of imec with research activities embedded in Ghent University. IDLab performs fundamental and applied research on data science and Internet technology, and is, with over 300 researchers, one of the larger research groups at imec. Our major research areas are machine learning and data mining; semantic intelligence; multimedia processing; distributed intelligence for IoT; cloud and big data infrastructures; wireless and fixed networking; and electromagnetics, RF and high-speed circuits and systems.

The Knowledge on Web Scale team focuses on the “Web” part of the Semantic Web, building tools and projects to publish, query, and process data at Web scale. Speech & audio processing has been one of our research fields for over 40 years, covering a wide range of topics including speech recognition, speaker diarization (speech segmentation & speaker recognition, language & dialect recognition), extraction of para-linguistic features (emotion and mental state of the speaker), automatic assessment of pathological speech, music analysis and classification, and generic audio processing.
Received on Thursday, 3 May 2018 08:53:44 UTC