
Speaker recognition: finding text independent information in the acoustical signal

Botter, M. (2004) Speaker recognition: finding text independent information in the acoustical signal. Master's Thesis / Essay, Artificial Intelligence.

AI_Ma_2004_MBotter.CV.pdf - Published Version



Speaker recognition is attracting growing attention. Many interesting applications become possible once speakers can be told apart automatically. Consider security systems, for example: a person's voice can be used much like a fingerprint to identify that person uniquely. Furthermore, who likes taking minutes during meetings? When a system can recognize speakers, minutes can be taken automatically. Being able to automatically discriminate between speakers is a first step towards developing these kinds of applications. In the research project described in this paper, the first steps are taken towards developing methods that enable a system to differentiate between speakers. The central question throughout the project was: how do humans so easily recognize people they know simply by hearing their voices? In other words, which features make voices different from one another? The focus of this research project was on finding characteristics of speech that are independent of what is said (text-independent features) and that contribute to speaker recognition. The project was conducted at Sound Intelligence, a company specialized in detecting and classifying all kinds of sounds, including speech. The company optimized the cochlea model developed by Duifhuis et al. to process and analyze sound in a way comparable to how humans process sound.
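As a rough illustration of what a text-independent feature can look like (this is not a method from the thesis itself): a speaker's fundamental frequency (f0) depends on the voice, not on the words spoken. The sketch below, using only the Python standard library and an illustrative `estimate_f0` function, estimates f0 from a synthetic tone by picking the autocorrelation peak within a plausible pitch range.

```python
import math

def estimate_f0(signal, fs, f_min=60.0, f_max=400.0):
    """Estimate the fundamental frequency (Hz) of a signal by finding
    the lag with maximal autocorrelation inside the pitch-lag range
    implied by f_min..f_max. A toy sketch, not production pitch tracking."""
    n = len(signal)
    lag_min = int(fs / f_max)           # shortest period considered
    lag_max = min(int(fs / f_min), n - 1)  # longest period considered
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        # Autocorrelation at this lag over the overlapping samples.
        corr = sum(signal[i] * signal[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return fs / best_lag

# Synthetic "voiced" signal: a 120 Hz tone sampled at 8 kHz.
fs = 8000
tone = [math.sin(2 * math.pi * 120.0 * t / fs) for t in range(fs // 4)]
f0 = estimate_f0(tone, fs)
print(f"estimated f0: {f0:.1f} Hz")
```

Because the estimate only uses periodicity, it stays roughly constant regardless of which phoneme or word produced the signal, which is what makes f0 a candidate text-independent cue.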

Item Type: Thesis (Master's Thesis / Essay)
Degree programme: Artificial Intelligence
Thesis type: Master's Thesis / Essay
Language: English
Date Deposited: 15 Feb 2018 07:30
Last Modified: 15 Feb 2018 07:30
