{ localUrl: '../page/ultimatum_game.html', arbitalUrl: 'https://arbital.com/p/ultimatum_game', rawJsonUrl: '../raw/5tp.json', likeableId: '0', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: 'ultimatum_game', edit: '6', editSummary: '', prevEdit: '5', currentEdit: '6', wasPublished: 'true', type: 'wiki', title: 'Ultimatum Game', clickbait: 'A Proposer decides how to split $10 between themselves and the Responder. The Responder can take what is offered, or refuse, in which case both parties get nothing.', textLength: '16790', alias: 'ultimatum_game', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-08-10 06:53:20', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2016-08-10 03:28:27', seeDomainId: '0', editDomainId: '123', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '2', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '127', text: '[summary(Gloss): The experimenter offers \\$10 to two players, the Proposer and the Responder. The Proposer offers a split, such as \\$6 for the Proposer and \\$4 for the Responder. If the Responder accepts, the money is divided up accordingly; otherwise both players get nothing.\n\nIs it ever [principle_rational_choice rational] to reject a low offer?\n]\n\n[summary: In the Ultimatum bargaining game:\n\n- The experimenter offers \\$10 to be divided among two subjects.\n- One player, the Proposer, offers a split to the other player, the Responder.\n- If the Responder accepts the Proposer's split, the \\$10 is divided accordingly. 
Otherwise both players get nothing.\n- The players cannot communicate or consider a second offer; hence the term "Ultimatum".\n\nOn [-5n9], a '[principle_rational_choice rational]' Responder should accept an offer of \\$1. Thus a 'rational' Proposer that knows it is facing a 'rational' Responder should offer \\$1 (and keep \\$9).\n\nOn [-58b], the situation is considerably more complicated. E.g., a 'rational' Responder might accept an offer of \\$4 with 83% probability, implying an expected gain to the Proposer of \\$4.98, since the Responder knows the Proposer was previously reasoning about its likely current behavior.]\n\nIn the Ultimatum bargaining game, the experimenter sets up two subjects to play a game only once.\n\nThe experimenter offers \\$10, to be divided between the subjects.\n\nOne player, the Proposer, offers a split to the other player, the Responder.\n\nThe Responder can either accept, in which case both parties get the Proposer's chosen split; or else refuse, in which case both the Responder and Proposer receive nothing.\n\nWhat is the minimum offer a rational Responder should accept? What should a rational Proposer offer a rational Responder?\n\n# Analysis\n\nThe Ultimatum Game stands in for the problem of *dividing gains from trade* in non-liquid markets. Suppose:\n\n- I am the only person in town selling a used car, whose [use_value use-value] to me is \\$5000.\n- You are the only person in town trying to buy my used car, whose use-value to you is \\$8000.\n\nIn principle the trade could take place at any price between \\$5001 and \\$7999. In the former case, I've only gained \\$1 of value from the trade; in the latter case, I've gained \\$2999 of the \\$3000 of surplus value generated by the trade. If we can't agree on a price, no trade occurs and no surplus is generated at all.\n\nUnlike the laboratory Ultimatum Game, the used-car scenario could potentially involve reputation effects, and multiple offers and counteroffers. 
But there's a sense in which the skeleton of the problem closely resembles the Ultimatum Game: if the car-buyer seems credibly stubborn about not offering any more than \\$5300, then this is the equivalent of the Proposer offering \\$1 of \\$10. I can either accept 10% of the surplus value being generated by the trade, or both of us get nothing.\n\nIn situations like unions at the bargaining table (with the power to shut down a company and halt further generation of value) or non-monetary trades, the lack of any common market and competing offers makes the Ultimatum Game a better metaphor still.\n\n# Responses\n\n## [Pretheoretical]\n\nA large experimental literature exists on the Ultimatum Game and its variants.\n\nThe general finding is that human subjects playing the Ultimatum Game almost always accept offers of \\$5, and reject lower offers with increasing probability.\n\nIf it's known that, e.g., the experimental setup prohibits the Proposer from offering more than \\$2, offers of \\$2 are far more likely to be accepted. %note: Falk, A., Fehr, E., Fischbacher, U., 2003. On the nature of fair behavior. Economic Inquiry 41, 20–26.%\n\n85% of Proposers offer at least \\$4. 
[todo: find citation]\n\n[One paper](http://journal.sjdm.org/14/14715/jdm14715.html) lists Responder acceptance rates as follows:\n\n- 96% acceptance of \\$5 %%note: Aka, the experiment contains a baseline 4% trolls.%%\n- 79% acceptance of \\$4\n- 45% acceptance of \\$3\n- 27% acceptance of \\$2\n- 20% acceptance of \\$1\n\nAmong participants with a high score on the [ Cognitive Reflection Test], the [graph](http://journal.sjdm.org/14/14715/jdm14715001.png) looks like it says:\n\n- 99% acceptance of \\$5\n- 92% acceptance of \\$4\n- 51% acceptance of \\$3\n- 33% acceptance of \\$2\n- 26% acceptance of \\$1 \n\n## [5n9 Causal decision theory]\n\nOn the academically standard [5n9 causal decision theory], a rational Responder should accept \\$1 (or higher), since the causal result of accepting the \\$1 offer is being \\$1 richer than the causal result if you reject the \\$1. Thus, a rational Proposer should propose \\$1 if it knows it is facing a rational Responder. The much lower acceptance rates by human subjects for \\$1 offers therefore demonstrates human irrationality.\n\n## [5px Evidential decision theory]\n\nAs in the [5ry], by the time the \\$1 offer is received, the EDT agent thinks it is too late for its decision to be news about the probability of receiving a \\$1 offer. Thus, rejecting the \\$1 offer would be bad news, and the EDT agent will accept the \\$1. Thus, among two EDT agents with common knowledge of each other's EDT-rationality, the Proposer will offer \\$1 and the Responder will accept \\$1.\n\n## [58b Logical decision theory]\n\nUnder LDT, the Responder can take into account that the Proposer may have *reasoned about the Responder's output* in the course of deciding which split to offer, meaning that the Responder-algorithm's logical reply to different offers can affect how much money is offered in the first place, not just whether the Responder gets the money at the end. 
Under [5rz updateless] forms of LDT, there is no obstacle to the Responder taking into account both effects even under the supposition that the Responder has already observed the Proposer's offer.\n\nSuppose the Proposer and Responder have [common_knowledge common knowledge] of each other's algorithms, predicting each other by simulation, [proof_based_dt proofs], or abstract reasoning.\n\nIf an LDT Proposer is facing a CDT/EDT Responder, it seems straightforward that an LDT Proposer will offer \\$1 and the CDT/EDT Responder will take it. E.g. under [proof_based_dt] the Proposer will check to see if it can get \\$10 by any action, then check to see if it can get \\$9 by any action, and should prove that if it offers \\$1 the Responder will accept it.\n\nNext suppose that a pure EDT or CDT Proposer is facing an [5rz updateless] LDT Responder. The outcome should be that the EDT/CDT Proposer runs a simulation of the LDT agent facing every possible offer from \\$1 to \\$9, and discovers that the simulated LDT agent "irrationally" (and very predictably) rejects any offer less than \\$9. E.g., a proof-based updateless Responder will look for a proof that some policy leads to gaining \\$10, then a proof that some policy leads to gaining \\$9, and will prove that the policy "reject any offer less than \\$9" leads to this outcome. So the EDT/CDT Proposer will, with a sigh, give up and offer the LDT agent \\$9. %note: On EDT or CDT, there is nothing else you can do with an agent that exhibits such irrational and self-destructive behavior, except give it almost all of your money.%\n\nThe case where two LDT agents face each other seems less straightforward. If one were to try to gloss LDT as the rule "Do what you would have precommitted to do", then at first glance the situation seems to dissolve into precommitment warfare where neither side has any obvious advantage or way to precommit "logically first". 
Imagine that both agents are self-modifying: clearly the Proposer must not be so foolish as to offer \\$9 if they see that the Responder immediately self-modifies to accept only \\$9, since otherwise that is just what the Responder will do.\n\nNo formal solution to this problem has been derived from scratch in any plausible system. Informally, [2 Yudkowsky] has suggested (in private discussion) that an LDT equilibrium might work as follows:\n\nSuppose the LDT Responder thinks that a 'fair' solution ('fair' being a term of art we'll define by example) is \\$5 apiece for Proposer and Responder; e.g. because \\$5 is the [Shapley value](https://en.wikipedia.org/wiki/Shapley_value).\n\nHowever, the Responder is not sure that the LDT Proposer considers Shapley division to be the 'fair' division of the gains. Then:\n\n- The Responder doesn't want to blanket-reject *all* offers below the 'fair' \\$5. If this were the 'rational' policy, then two agents with an even slightly different estimate of what is 'fair' (e.g. \\$5 vs. \\$4.99) would receive nothing.\n- The Responder doesn't want to blanket-accept *any* offer below the 'fair' \\$5; an LDT Proposer who predicts this policy will certainly offer the Responder an amount lower than the 'fair' \\$5.\n\nThis suggests that an LDT Responder should definitely accept any offer at or above what the Responder thinks is the 'fair' amount of \\$5, and *probabilistically* reject any offer below \\$5, such that the expected value to the Proposer slopes gently downward as the Proposer offers lower splits. 
E.g., the Responder might accept an offer $q$ beneath \\$5 with probability:\n\n$$p = \\big ( \\frac{\\$5}{\\$10 - q} \\big ) ^ {1.01}$$\n\nThis implies that, e.g., an offer of \\$4 might be accepted with 83% probability, implying an expected gain to the Proposer of \\$4.98, and an expected gain to the Responder of \\$3.32.\n\nFrom the Responder's perspective, the important feature of this response function is not that the Responder gets the same amount as the Proposer. Rather, the key feature is that the Proposer gains no expected benefit from giving the Responder less than the 'fair' amount (and strictly diminishing expected returns as the offer decreases further).\n\nWhat about the LDT Proposer? It does not want to be the sort of agent which, faced with another agent that accepts only offers of \\$9, gives in and offers \\$9. Faced with an agent that is or might be like that, one avenue might be to simply offer \\$5, and another avenue might be to offer \\$9 with 50% probability and \\$1 otherwise (leading to an expected Responder gain of \\$4.50). No Responder should be able to do *better* by coming in with a notion of 'fairness' that demands more than what the Proposer thinks is 'fair'.\n\nEven if the notion of 'fairness' so far seems arbitrary in terms of derivation from any fundamental decision principle, this gives 'fairness' a very [schelling_point Schelling Point]-like status. From the perspective of both agents:\n\n- You cannot do better by estimating that your 'fair' share is higher than what others think is your fair share.\n- You cannot do better by estimating that your 'fair' share is lower than what others think is your fair share.\n- Other agents have no incentive to estimate that your 'fair' share is less than what you think is your fair share.\n\nThe resulting expected values are not on the [pareto_optimal Pareto boundary]. 
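
The suggested response curve can be sketched numerically (a minimal illustration; the 1.01 exponent and the \\$5 'fair' point are just the example values above, not a derived rule):

```python
def acceptance_probability(q, fair=5.0, total=10.0, exponent=1.01):
    """Probability that the Responder accepts an offer of q dollars.

    Offers at or above the 'fair' point are always accepted; lower offers
    are accepted with a probability chosen so that the Proposer's expected
    gain declines gently as the offer shrinks.
    """
    if q >= fair:
        return 1.0
    return (fair / (total - q)) ** exponent

for q in [5, 4, 3, 2, 1]:
    p = acceptance_probability(q)
    proposer_ev = p * (10 - q)   # Proposer keeps 10 - q if the offer is accepted
    responder_ev = p * q
    print(f"offer ${q}: accept {p:.0%}, "
          f"Proposer EV ${proposer_ev:.2f}, Responder EV ${responder_ev:.2f}")
```

(The \\$4.98 figure in the text comes from rounding the acceptance probability to 83% before multiplying; the unrounded expected Proposer gain at a \\$4 offer is about \\$4.99. Either way, the Proposer's expected value stays just below \\$5 for every lowball offer, so there is nothing to gain by offering less than the 'fair' split.)
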
In fact, Yudkowsky proposed the above equilibrium in response to an earlier proof by Stuart Armstrong that it was impossible to develop a bargaining solution that did lie on the Pareto boundary and gave no incentive to other agents to lie about their utility functions (which was that problem's analogue of lying about what you believe is 'fair').\n\nYudkowsky's solution was meant to move as little away from the Pareto boundary as possible, while still maintaining stable incentives. But if we look back at the human Responder behaviors, we find that the Proposer's expected gains were:\n\n$$\\begin{array}{r|c|c}\n\\text{Offer} & \\text{Average subject} & \\text{High CRT} \\\\ \\hline\n\\$5 & \\$4.8 & \\$4.95 \\\\ \\hline\n\\$4 & \\$4.74 & \\$5.52 \\\\ \\hline\n\\$3 & \\$3.15 & \\$3.57 \\\\ \\hline\n\\$2 & \\$2.16 & \\$2.64 \\\\ \\hline\n\\$1 & \\$1.80 & \\$2.34\n\\end{array}$$\n\nThis does not suggest that human Responders are trying to implement a gently declining curve of Proposer gains, trying to keep the bargained intersection close to the Pareto boundary. It does suggest that human Responders could be implementing some variant of: "Give the Proposer around as much in expectation as they tried to offer me." That is, clipping somewhere around \\$0.80 (average) or \\$0.67 (high-CRT) of expected value per \\$1 decrease in the offer, with extra forgiveness around the \\$4-offer mark.\n\nThis pattern is more vengeful than the proposed LDT equilibrium. But this degree of vengeance may also make ecological sense in a context where some 'dove' agents will accept sub-\\$3 offers with high probability. In this case, you might respond harshly to \\$3 offers to disincentivize the Proposer from trying to exploit the probability of encountering a dove. 
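
The expected-gain table above can be reproduced directly from the Responder acceptance rates quoted earlier (a quick check using only numbers from this page):

```python
# Acceptance rates from the cited study: offer -> (average subjects, high-CRT subjects)
acceptance = {
    5: (0.96, 0.99),
    4: (0.79, 0.92),
    3: (0.45, 0.51),
    2: (0.27, 0.33),
    1: (0.20, 0.26),
}

for offer, (avg, crt) in acceptance.items():
    keep = 10 - offer  # what the Proposer keeps if the offer is accepted
    print(f"offer ${offer}: Proposer EV ${avg * keep:.2f} (average), "
          f"${crt * keep:.2f} (high CRT)")
```
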
If you accept \\$3 offers with 70% probability for an expected Proposer gain of \\$4.90, but more than 1 in 14 agents are 'dove' agents that accept \\$3 offers with 90% probability for a Proposer gain of \\$6.30, then a Proposer should rationally offer \\$3 to exploit the possibility that you are a 'dove'.\n\nSimilarly, the acceptance rate for \\$8/\\$2 splits is much higher if the experimenter is known to have imposed a maximum \\$2 offer on the Proposer. As Falk et al. observed, this behavior makes no sense if humans have an innate distaste for unfair outcomes, but in an LDT scenario the answer here changes to "Accept \\$2." So an LDT researcher would observe that the human subjects are behaving suggestively like algorithms that know other algorithms are reasoning about them.\n\nArguably, the human subjects in this experiment might still have their rationality critiqued on the grounds that the humans do not actually have common knowledge of each other's code and therefore shouldn't behave like LDT agents that do. But "human subjects in the Ultimatum Game behave a lot like rational agents would if those agents understood each other's algorithms" seems like a large step away from "human subjects in the Ultimatum Game behave irrationally". Perhaps humans simply have good ecological knowledge about the distribution of other agents they might face; or something resembling the equilibrium among LDT agents emerged evolutionarily and humans now instinctively behave as part of it. 
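
The 'more than 1 in 14' threshold above can be derived explicitly, assuming (as a simplification not spelled out in the text) that a \\$5 offer is accepted with certainty, so the Proposer's alternative to lowballing is a sure \\$5:

```python
# Proposer's expected gain from offering $3, if a fraction d of Responders
# are 'doves' (accept with 90%) and the rest accept with 70%:
#   EV(d) = (1 - d) * 0.70 * 7 + d * 0.90 * 7 = 4.90 + 1.40 * d
# Offering $3 beats a sure $5 exactly when EV(d) > 5:
threshold = (5.0 - 4.90) / 1.40
print(threshold)      # just over 0.071...
print(1 / threshold)  # ~14, i.e. 'more than 1 in 14'
```
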
Arguendo, this LDT-like instinctive equilibrium seems to still take into account whether the Proposer has the *option* of offering more than \\$2; it isn't just an equilibrium about demanding LDT-like distributions of final gains.\n\nAlso, the intuitive notion of precommitment warfare suggests that, even if you are facing a robot whose code says to reject all offers below \\$9, you should offer \\$9 with at most 55% probability--at least if you believe that robot was designed, rather than it being a natural feature of the environment. (E.g. you should not reject the 'offer' of a field that yields an 'unfair' amount of grain!) If you conceded 'unfair' splits to simple algorithms, a self-modifying agent facing you could self-modify into a simple algorithm that rejects all offers below \\$9. The corresponding [program_equilibrium program equilibrium] would be unstable. So if humans are behaving like they are part of an equilibrium of LDT agents reasoning about other LDT agents, their uncertainty about *exactly* which other algorithms they are facing need not imply that they should 'rationally' accept lowball offers.\n\nThe author of the initial version of this page does not know whether a similarly stabilized solution around a 'fair' Schelling Point has been suggested in the rather large literature on the Ultimatum Game; it would not be at all surprising if a similar analysis exists somewhere. But Yudkowsky notes that the solution suggested here comes more readily to mind, if we don't think of the *rational* answer as CDT's \\$1. If rejecting \\$1 is 'irrational', an agent trying to throw a 'usefully irrational' tantrum might as easily demand \\$9 as \\$5. 
LDT results like [robust_cooperation robust cooperation in the oneshot Prisoner's Dilemma given common knowledge of algorithms] are much more suggestive that rational agents in an Ultimatum Game might coordinate in some elegant and stable way.\n\nLDT equilibria are arguably a better point of departure from which to view human reasoning, since human behaviors on Ultimatum Game variants are nothing like EDT or CDT equilibria. Even if we think of people as trying to acquire a useful reputation for rejecting unfair bargains, we can still see those people as trying to acquire *a reputation for acting like an LDT-rational agent whose rationality is known,* rather than *a reputation for useful irrationality.*\n\nYudkowsky finally suggested that LDT analyses of Ultimatum Games--particularly as they regard 'fairness'--might have significant implications for how we can think economically about non-market bargaining and division-of-gains in the real world.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: [ '0', '0', '0', '0', '0', '0', '0', '0', '0', '0' ], muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: { Gloss: 'The experimenter offers \\$10 to two players, the Proposer and the 
Responder. The Proposer offers a split, such as \\$6 for the Proposer and \\$4 for the Responder. If the Responder accepts, the money is divided up accordingly; otherwise both players get nothing.\n\nIs it ever [principle_rational_choice rational] to reject a low offer?', Summary: 'In the Ultimatum bargaining game:\n\n- The experimenter offers \\$10 to be divided among two subjects.\n- One player, the Proposer, offers a split to the other player, the Responder.\n- If the Responder accepts the Proposer's split, the \\$10 is divided accordingly. Otherwise both players get nothing.\n- The players cannot communicate or consider a second offer; hence the term "Ultimatum".\n\nOn [-5n9], a '[principle_rational_choice rational]' Responder should accept an offer of \\$1. Thus a 'rational' Proposer that knows it is facing a 'rational' Responder should offer \\$1 (and keep \\$9).\n\nOn [-58b], the situation is considerably more complicated. E.g., a 'rational' Responder might accept an offer of \\$4 with 83% probability, implying an expected gain to the Proposer of \\$4.98, since the Responder knows the Proposer was previously reasoning about its likely current behavior.' 
}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [], parentIds: [ 'newcomblike' ], commentIds: [ '5tw' ], questionIds: [], tagIds: [ 'c_class_meta_tag' ], relatedIds: [], markIds: [], explanations: [ { id: '6097', parentId: 'ultimatum_game', childId: 'ultimatum_game', type: 'subject', creatorId: 'EliezerYudkowsky', createdAt: '2016-08-10 03:29:28', level: '3', isStrong: 'true', everPublished: 'true' } ], learnMore: [], requirements: [ { id: '6095', parentId: 'logical_dt', childId: 'ultimatum_game', type: 'requirement', creatorId: 'EliezerYudkowsky', createdAt: '2016-08-10 03:29:07', level: '2', isStrong: 'true', everPublished: 'true' }, { id: '6096', parentId: 'causal_dt', childId: 'ultimatum_game', type: 'requirement', creatorId: 'EliezerYudkowsky', createdAt: '2016-08-10 03:29:20', level: '1', isStrong: 'true', everPublished: 'true' } ], subjects: [ { id: '6097', parentId: 'ultimatum_game', childId: 'ultimatum_game', type: 'subject', creatorId: 'EliezerYudkowsky', createdAt: '2016-08-10 03:29:28', level: '3', isStrong: 'true', everPublished: 'true' }, { id: '6098', parentId: 'logical_dt', childId: 'ultimatum_game', type: 'subject', creatorId: 'EliezerYudkowsky', createdAt: '2016-08-10 03:29:46', level: '3', isStrong: 'false', everPublished: 'true' } ], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: { '58b': [ '5rf' ] }, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18689', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2016-08-10 06:53:20', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: 
'0', individualLikes: [], id: '18688', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2016-08-10 06:49:35', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18686', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2016-08-10 03:59:40', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18685', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2016-08-10 03:59:00', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18684', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2016-08-10 03:57:07', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18683', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '0', type: 'newSubject', createdAt: '2016-08-10 03:29:47', auxPageId: 'logical_dt', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18680', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '0', type: 'newTeacher', createdAt: '2016-08-10 03:29:29', auxPageId: 'ultimatum_game', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', 
individualLikes: [], id: '18681', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '0', type: 'newSubject', createdAt: '2016-08-10 03:29:29', auxPageId: 'ultimatum_game', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18679', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '0', type: 'newRequirement', createdAt: '2016-08-10 03:29:21', auxPageId: 'causal_dt', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18678', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '0', type: 'newRequirement', createdAt: '2016-08-10 03:29:08', auxPageId: 'logical_dt', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18677', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-08-10 03:29:02', auxPageId: 'c_class_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18676', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2016-08-10 03:28:29', auxPageId: 'newcomblike', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '18674', pageId: 'ultimatum_game', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2016-08-10 03:28:27', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 
'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }