Project Details


The lack of credit history has long been a major hurdle in credit approval, especially for vulnerable or disadvantaged groups such as young people, women, immigrants, ethnic minorities, and low-income segments. Recent research suggests that making greater use of personal data may improve credit inclusion for currently excluded or under-represented groups. At the same time, there are growing concerns about the increasing use of personal data for credit risk assessment, since little is known about attitudes towards such applications. Similarly, despite remarkable successes in the use of machine learning technology, significant reservations have been voiced about its lack of transparency and its potential to amplify unfair outcomes.
The project will address the above problem by exploring:
• what is perceived as a fair and ethical credit decision by different groups of stakeholders, and what kinds of data are acceptable to use in credit risk assessment;
• how different machine learning algorithms compare in terms of amplifying credit discrimination or unequal access to credit.

This is a PhD project funded by Baillie Gifford.

Effective start/end date: 1/10/20 to 31/08/24
