TY - GEN
N2 - This thesis addresses questions about early lexical acquisition. Four case studies provide concrete examples of how Bayesian computational modeling can be used to study assumptions about inductive biases, properties of the input data, and possible limitations of the learning algorithm. The first study describes an incremental particle filter algorithm for non-parametric word segmentation models and compares its behavior to Markov chain Monte Carlo methods that operate in an offline fashion. Depending on the setting, particle filters may outperform or be outperformed by offline batch algorithms. It is argued that the results ought to be viewed as raising questions about the segmentation model rather than as providing evidence for any specific algorithm. The second study explores how modeling assumptions interact with the amount of input processed by a model. The experiments indicate that non-parametric word segmentation models exhibit an overlearning effect, whereby more input results in worse segmentation performance. It is shown that adding the ability to learn entire sequences of words, in addition to individual words, addresses this problem on a large corpus if linguistically plausible assumptions about possible words are made. The third study explores the role of stress cues in word segmentation through Bayesian modeling. In line with developmental evidence, the results indicate that stress cues aid segmentation and interact with phonotactic cues, and that substantive constraints such as a Unique Stress Constraint can be inferred from the linguistic input and need not be built into the model. The fourth study shows how variable phonological processes such as segmental deletion can be modeled jointly with word segmentation by a two-level architecture that uses a generative beta-binomial model to map underlying forms to surface forms. Experimental evaluation on the phenomenon of word-final /t/-deletion shows the importance of context in determining whether a variable rule applies, and that naturalistic data contain subtle complexities that summary statistics of the input may not capture, illustrating the need to pay close attention not only to the assumptions built into the model but also to those that went into preparing the input.
UR - https://archiv.ub.uni-heidelberg.de/volltextserver/25230/
A1 - Börschinger, Benjamin
ID - heidok25230
TI - Exploring Issues in Lexical Acquisition Using Bayesian Modeling
Y1 - 2018///
AV - public
ER -