Polish MFA G2P model v2.0.0#

  • Maintainer: Montreal Forced Aligner

  • Language: Polish

  • Dialect: N/A

  • Phone set: MFA

  • Model type: G2P model

  • Architecture: pynini

  • Model version: v2.0.0

  • Trained date: 2022-02-28

  • Compatible MFA version: v2.0.0

  • License: CC BY 4.0

  • Citation:

@techreport{mfa_polish_mfa_g2p_2022,
	author={McAuliffe, Michael and Sonderegger, Morgan},
	title={Polish MFA G2P model v2.0.0},
	address={\url{https://mfa-models.readthedocs.io/G2P model/Polish/Polish MFA G2P model v2_0_0.html}},
	year={2022},
	month={Feb},
}

Pronunciation dictionaries#

Installation#

Install from the MFA command line:

mfa model download g2p polish_mfa

Or download from the release page.
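After installation, you can confirm the model is available locally (a minimal sketch; the exact output formatting of `mfa model list` varies by MFA version):

```shell
# List locally installed G2P models; polish_mfa should appear after download
mfa model list g2p

# Show the model's metadata (version, phone set, compatible MFA version)
mfa model inspect g2p polish_mfa
```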

Intended use#

This model is intended for generating pronunciations of Polish transcripts.

This model uses the MFA phone set for Polish, and was trained from the pronunciation dictionaries above. Pronunciations generated with this G2P model can be appended and used when aligning or transcribing.
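For example, pronunciations for a list of out-of-vocabulary words can be generated and appended to a dictionary before alignment. This is a sketch with placeholder file names; note that the `mfa g2p` argument order has changed between MFA releases, so check `mfa g2p --help` for your version:

```shell
# Generate pronunciations for a plain-text word list (one word per line);
# argument order here follows recent MFA releases (input, model, output) --
# some 2.0 builds expect the model path first
mfa g2p oov_words.txt polish_mfa generated_pronunciations.txt

# Append the generated entries to an existing pronunciation dictionary
cat generated_pronunciations.txt >> polish_dictionary.dict

# Use the augmented dictionary when aligning a corpus
mfa align ~/corpus polish_dictionary.dict polish_mfa ~/aligned_output
```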

Performance Factors#

Trained G2P models are generally fast and accurate; however, this model may struggle with less common orthographic characters or word types outside of what it was trained on. In that case, you may need to supplement the dictionary by generating pronunciations, correcting them, and re-training the G2P model as necessary.
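The supplement-and-retrain loop described above might look like the following (a sketch with placeholder file names; `mfa train_g2p` trains a new model from a pronunciation dictionary):

```shell
# 1. Generate candidate pronunciations for words missing from the dictionary
mfa g2p missing_words.txt polish_mfa candidates.txt

# 2. Manually review and correct candidates.txt, then merge it in
cat candidates.txt >> polish_dictionary.dict

# 3. Re-train a G2P model on the expanded dictionary
mfa train_g2p polish_dictionary.dict polish_mfa_retrained.zip
```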

Metrics#

The model was trained on 90% of the dictionary, with the held-out 10% used for evaluation via word error rate (WER) and phone error rate (PER).

Training#

This model was trained on the following data set:

  • Words: 113,593

  • Phones: 49

  • Graphemes: 36

Evaluation#

This model was evaluated on the following data set:

  • Words: 12,621

  • WER: 0.92%

  • PER: 0.19%

Ethical considerations#

Deploying any model involving language into any production setting has ethical implications. You should consider these implications before use.

Demographic Bias#

You should assume every machine learning model has demographic bias unless proven otherwise. For G2P models, the model will only process the types of tokens that it was trained on, and will not represent the full range of text or spoken words that native speakers will produce. If you are using this model in production, you should acknowledge this as a potential issue.

Surveillance#

Speech-to-Text technologies may be misused to invade the privacy of others by recording and mining information from private conversations. This kind of individual privacy is protected by law in many countries. You should not assume consent to record and analyze private speech.