Nick Bostrom is, together with Eliezer Yudkowsky, one of the two cofounders of the current field of value alignment theory. Bostrom published a paper singling out the problem of superintelligent values as critical in 1999, two years before Yudkowsky entered the field, which has sometimes led Yudkowsky to say that Bostrom should receive credit for inventing the Friendly AI concept. Bostrom is the founder and director of the [ Oxford Future of Humanity Institute]. He is the author of the popular book [Superintelligence_book Superintelligence], which currently serves as the best book-length introduction to the field. Bostrom's academic background is in analytic philosophy; he formerly specialized in [ anthropic probability theory] and [ transhumanist ethics]. Compared to Yudkowsky, Bostrom is more interested in Oracle models of value alignment and in potential exotic methods of obtaining aligned goals.