Abstract
This paper presents an indigenous perspective on the effectiveness of debiasing techniques for pre-trained language models (PLMs). Current techniques for measuring and debiasing PLMs are skewed towards US racial biases and rely on pre-defined bias attributes (e.g., "black" vs. "white"); some also require large datasets and further pre-training. Such techniques are not designed to capture underrepresented indigenous populations in other countries, such as the Māori in New Zealand. Local knowledge and understanding must be incorporated to ensure unbiased algorithms, especially when addressing a resource-restricted society.
| Original language | English |
|---|---|
| Pages | 1-5 |
| Publication status | Published - 1 May 2023 |
| Event | The Eleventh International Conference on Learning Representations - Kigali, Rwanda. Duration: 1 May 2023 → 5 May 2023. https://iclr.cc/Conferences/2023 |
Conference
| Conference | The Eleventh International Conference on Learning Representations |
|---|---|
| Abbreviated title | ICLR 2023 |
| Country/Territory | Rwanda |
| City | Kigali |
| Period | 1/05/23 → 5/05/23 |
| Internet address | https://iclr.cc/Conferences/2023 |