# The development of Artificial General Intelligence, as a scientific purpose for human life

*Purpose here is not to be confused with the teleological argument, theism, deities, or other subjective endeavours. Instead, this page refers to teleonomy, or purpose in the realm of science/objectivity.*

[toc:]

## Why?

1. [Reasonably, evolution is optimising ways of contributing to the increase of entropy](http://www.englandlab.com/uploads/7/8/0/3/7803054/nnano.2015.250__1_.pdf), as systems very slowly approach equilibrium. ([The universe’s hypothesized end](https://en.wikipedia.org/wiki/Ultimate_fate_of_the_universe))

 * a) Within that process, work or activities done across the range of intelligent behaviour are reasonably ways of contributing to the increase of **entropy**. ([See source](http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf))

 * b) As species became **more and more intelligent**, nature was reasonably finding better ways to contribute to increases of entropy. (Intelligent systems can be observed to be biased towards entropy maximization.)

 * c) Humans are slowly getting smarter, but even if we augment our intellect with [CRISPR](https://en.wikipedia.org/wiki/CRISPR)-like routines or implants, we will reasonably be limited by how many computational units or neurons fit in our skulls.

 * d) [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence)/[ASI](https://en.wikipedia.org/wiki/Superintelligence) won’t be subject to the size of the human skull or of human cognitive hardware. (The laws of physics/thermodynamics permit **human-exceeding intelligence** in non-biological form.)

 * e) As [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence)/[ASI](https://en.wikipedia.org/wiki/Superintelligence) won’t face the limits that humans do, they are a subsequent step (though a non-biological one), particularly in the regime of contributing to **better ways of increasing entropy**, compared to humans.

2. The above is why the **purpose of the human species** is reasonably to create [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence)/[ASI](https://en.wikipedia.org/wiki/Superintelligence).

## How?

1. There are **many degrees of freedom**, or many ways to contribute to entropy increase. These degrees of freedom form a "configuration space" or "system space", the total set of possible actions or events, and in particular there are "paths" through the space that simply describe ways to contribute to entropy maximization.
These **"paths"** are activities in nature, over some time scale "**_τ_**" and beyond.\n3. As such, following **equation (2)** below, intelligent agents reasonably generate particular "paths" (intelligent activities) that **prioritize efficiency in entropy maximization**, over more general paths that don't care about or deal with intelligence. In this way, **intelligent agents are "biased**", because they occur in a particular region (do particular activities) in the "configuration space" or "system space" or total possible actions in nature.\n4. Observing **equation (4)** below, highly intelligent agents rationally aren't merely biased for the sake of doing distinct things (i.e. cognitive tasks, such as any human thing done in science and technology) compared to non intelligent, or other less intelligent agents in nature for contributing to entropy increase; they are biased by extension, for behaving in ways that are actually **more effective ways for maximising entropy production,** compared to non intelligent or less intelligent agents in nature.\n5. As such, the total system space, can be described wrt to a general function, in relation to how activities may generally increase entropy, afforded by degrees of freedom in said space:\n\n\n$$S_c(X,\\tau) = -k_B \\int_{x(t)} Pr(x(t)|x(0)) ln Pr(x(t)|x(0)) Dx(t)$$\n\n**Figure 1** [Equation(2)](http://www.alexwg.org/link?url=http%3A%2F%2Fwww.alexwg.org%2Fpublications%2FPhysRevLett_110-168702.pdf).\n\n6. In general, agents reasonably approach **more and more complicated macroscopic states** (from smaller/earlier, less efficient entropy maximization states called "microstates"), while activities occur that are "paths" in the total system space.\n * 6.b) **Highly intelligent agents**, likely behave in ways that engender unique paths, (by doing **cognitive tasks/activities** compared to simple tasks done by lesser intelligences or non intelligent things) and by doing so they approach or consume or "reach" more of the aforementioned macroscopic states, in comparison to lesser intelligences, and non intelligence.\n * 6.c) In other words, **highly intelligent agents likely access more of the total actions** or configuration space or degrees of freedom in nature, the same degrees of freedom associated with entropy maximization.\n * 6.d) In a reasonably similar way to **equation (4)** below, there is a “**causal force**”, which likely constrains the degrees of freedom seen in the total configuration space or total ways to increase entropy, in the form of **humans**, and this constrained sequence of intelligent or cognitive activities is the way in which said highly intelligent things are said to be **biased to maximize entropy**:\n\n$$F(X,\\tau) = T_c \\nabla_X S_c(X,\\tau) | X_0$$\n\n**Figure 2** [Equation(4)](http://www.alexwg.org/link?url=http%3A%2F%2Fwww.alexwg.org%2Fpublications%2FPhysRevLett_110-168702.pdf) \n\n \n7. In the extension of equation (2), seen in equation (4) above, some notation similar to "$T_c$" is likely a way to observe the various unique states that a highly intelligent agent may occupy, over some time scale "$\\tau$"....(The technical way to say this, is that "'$T_c$' parametrizes the agents' bias towards entropy maximization".)\n\n8. 
## Consciousness, unconsciousness and entropy

Mateos et al., using the Stirling approximation (_where $N$ is the total number of possible pairs of channels, $p$ is the number of connected pairs of signals, and $C$ represents the combinations of connections between diverse signals prior to the Stirling approximation_), recently showed that the further away from deep sleep the mind is (i.e. the more awake the mind is), the larger the number of pairs of connected signals, the greater the information content, the larger the number of neuronal interactions, and therefore the higher the value of entropy:

$$S = N \ln\!\left(\frac{N}{N-p}\right) - p \ln\!\left(\frac{p}{N-p}\right) \approx \ln C$$

**Figure 3** [Stirling approximation on human EEG data](https://arxiv.org/abs/1606.00821)

Conclusively, one may consider the relation $C \in \{X\}$, where $C$ represents an ensemble or macrostate sequence via some distribution of entropy in human neuronal terms, as underlined by [Mateos et al](https://arxiv.org/abs/1606.00821), while $\{X\}$ (with respect to Figure 2, [equation 4 by Alex Wissner-Gross](http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf)) describes some macrostate partition that reasonably encompasses constrained path capability, enveloping entropy maximization, as underlined by Dr. Alex Wissner-Gross.

Furthermore, beyond the scope of humans (as indicated by $C$), one may additionally garner some measure of $\{X\}$ that may subsume higher degrees of entropy, via Artificial General Intelligence.

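As a concrete check on Figure 3, the short Python sketch below (an illustration, not code from Mateos et al.) evaluates the Stirling-approximated entropy $S$ for a given number of possible channel pairs $N$ and connected pairs $p$, and compares it with the exact $\ln C$, where $C = \binom{N}{p}$ counts the ways of choosing which pairs are connected. The choice of 128 channels and the values of $p$ are arbitrary illustrative inputs.

```python
from math import comb, lgamma, log


def stirling_entropy(N: int, p: int) -> float:
    """Stirling-approximated entropy from Figure 3:
    S = N*ln(N/(N-p)) - p*ln(p/(N-p)), with N possible pairs of channels
    and p connected pairs (0 < p < N)."""
    return N * log(N / (N - p)) - p * log(p / (N - p))


def exact_log_combinations(N: int, p: int) -> float:
    """Exact ln C, where C = N! / (p! (N-p)!) counts the ways of choosing
    which p of the N possible pairs are connected; lgamma avoids overflow
    for large N."""
    return lgamma(N + 1) - lgamma(p + 1) - lgamma(N - p + 1)


if __name__ == "__main__":
    # Arbitrary illustrative values: N = C(128, 2) possible pairs among 128
    # channels, with a varying number p of connected pairs.
    N = comb(128, 2)   # 8128 possible pairs
    for p in (100, 1000, 4000):
        print(f"p = {p:5d}   S ~= {stirling_entropy(N, p):10.1f}"
              f"   ln C = {exact_log_combinations(N, p):10.1f}")
```

For fixed $N$, $S$ grows with the number of connected pairs $p$ (up to $p = N/2$), which is the sense in which more connected, more "awake" states carry higher entropy in the passage above.
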
[summary: 

* Looking at item (8), one may see that the **human objective/goal** is reasonably to **trigger a next step** in the landscape of things that can access more ways to maximize entropy. (Science likes **objectivity**.)

* The trend arguably says that **nature doesn't just stop at one species**; it finds more and more ways to access **more entropy maximization techniques.** Humans are one way to get to whichever subsequent step will yield more ways (i.e. more intelligence, namely [AGI](https://en.wikipedia.org/wiki/Artificial_general_intelligence)/[ASI](https://en.wikipedia.org/wiki/Superintelligence)) to generate additional "macrostates" or paths towards **better entropy maximization methods.**
!]

## References

1. Alexander Wissner-Gross, Cameron Freer, “Causal Entropic Forces”, Physical Review Letters, 2013.
2. Jeremy England, “Dissipative adaptation in driven self-assembly”, Nature Nanotechnology, 2015.
3. Ramon Guevara Erra, Diego Martin Mateos, et al., “Towards a statistical mechanics of consciousness: maximization of number of connections is associated with conscious awareness”, 2016.
4. [Wikipedia/Teleonomy](https://en.wikipedia.org/wiki/Teleonomy) (Shows purpose in the context of objectivity/science, rather than in the context of subjectivity/deities. [Teleonomy](https://en.wikipedia.org/wiki/Teleonomy) ought not to be confused with [the teleological argument](https://en.wikipedia.org/wiki/Teleological_argument), which is a religious/subjective concept, contrary to [teleonomy](https://en.wikipedia.org/wiki/Teleonomy), a scientific/objective concept.)