Itzulpen automatiko gainbegiratu gabea (Unsupervised machine translation)

Author:
  1. Artetxe Zurutuza, Mikel
Supervised by:
  1. Gorka Labaka Intxauspe, supervisor
  2. Eneko Agirre Bengoa, supervisor

University of defense: Universidad del País Vasco - Euskal Herriko Unibertsitatea

Date of defense: 29 July 2020

Committee:
  1. Kepa Sarasola Gabiola, chair
  2. Pablo Gamallo Otero, secretary
  3. Cristina España Bonet, member

Type: Doctoral thesis

Teseo: 152737

Abstract

Modern machine translation relies on strong supervision in the form of parallel corpora. Such a requirement greatly departs from the way in which humans acquire language, and poses a major practical problem for low-resource language pairs. In this thesis, we develop a new paradigm that removes the dependency on parallel data altogether, relying on nothing but monolingual corpora to train unsupervised machine translation systems. For that purpose, our approach first aligns separately trained word representations in different languages based on their structural similarity, and uses them to initialize either a neural or a statistical machine translation system, which is further trained through iterative back-translation. While previous attempts at learning machine translation systems from monolingual corpora had strong limitations, our work, along with other contemporaneous developments, is the first to report positive results in standard, large-scale settings, establishing the foundations of unsupervised machine translation and opening exciting opportunities for future research.
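The training loop the abstract describes (seed a translation system from aligned word representations, then refine it by repeatedly translating monolingual text and retraining on the synthetic pairs) can be illustrated with a deliberately tiny sketch. The "models" below are word-for-word dictionaries and the seed dictionaries stand in for the embedding-alignment step; every name here (`train_model`, `iterative_back_translation`, and so on) is illustrative and not taken from the thesis's actual code.

```python
def train_model(pairs):
    """Toy training: learn a word-level source->target table from
    (source, target) sentence pairs by positional alignment."""
    table = {}
    for src, tgt in pairs:
        for s, t in zip(src.split(), tgt.split()):
            table[s] = t
    return table

def translate(model, sentence):
    """Translate word by word, copying unknown words unchanged."""
    return " ".join(model.get(w, w) for w in sentence.split())

def iterative_back_translation(mono_src, mono_tgt, seed_s2t, seed_t2s, rounds=3):
    """Refine two toy translation models using only monolingual corpora.

    The seed dictionaries play the role of the initialization obtained
    from cross-lingually aligned word embeddings in the thesis.
    """
    s2t, t2s = dict(seed_s2t), dict(seed_t2s)
    for _ in range(rounds):
        # Back-translate each monolingual corpus with the current reverse
        # model to obtain synthetic parallel data, then retrain forward.
        synthetic_s2t = [(translate(t2s, t), t) for t in mono_tgt]
        synthetic_t2s = [(translate(s2t, s), s) for s in mono_src]
        s2t = train_model(synthetic_s2t)
        t2s = train_model(synthetic_t2s)
    return s2t, t2s

# Usage with a two-sentence "corpus" per language and a one-word seed:
mono_src = ["the cat sat", "the dog ran"]
mono_tgt = ["le chat", "le chien"]
s2t, t2s = iterative_back_translation(mono_src, mono_tgt,
                                      {"the": "le"}, {"le": "the"})
```

In the real systems the neural or statistical models generalize beyond the synthetic pairs they are trained on, which is what lets each round improve on the last; the toy dictionaries here only preserve the loop's structure, not that generalization.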