Category Archives: Anthropology

Poverty: a peculiar type of stress that dramatically reduces cognitive function

An average of 13 IQ points. It’s not being poor itself that causes it but the stress of resource scarcity, which simply monopolizes the brain.
I reproduce the press release in full (with just some emphasis of my own) because of its great interest:

Poor concentration: Poverty reduces brainpower needed for navigating other areas of life

Poverty and all its related concerns require so much mental energy that the poor have less remaining brainpower to devote to other areas of life, according to research based at Princeton University. As a result, people of limited means are more likely to make mistakes and bad decisions that may be amplified by — and perpetuate — their financial woes.

Published in the journal Science, the study presents a unique perspective regarding the causes of persistent poverty. The researchers suggest that being poor may keep a person from concentrating on the very avenues that would lead them out of poverty. A person’s cognitive function is diminished by the constant and all-consuming effort of coping with the immediate effects of having little money, such as scrounging to pay bills and cut costs. Thus, a person is left with fewer “mental resources” to focus on complicated, indirectly related matters such as education, job training and even managing their time.
In a series of experiments, the researchers found that pressing financial concerns had an immediate impact on the ability of low-income individuals to perform on common cognitive and logic tests. On average, a person preoccupied with money problems exhibited a drop in cognitive function similar to a 13-point dip in IQ, or the loss of an entire night’s sleep.

Sugarcane Farmer
But when their concerns were benign, low-income individuals performed competently, at a similar level to people who were well off, said corresponding author Jiaying Zhao, who conducted the study as a doctoral student in the lab of co-author Eldar Shafir, Princeton’s William Stewart Tod Professor of Psychology and Public Affairs. Zhao and Shafir worked with Anandi Mani, an associate professor of economics at the University of Warwick in Britain, and Sendhil Mullainathan, a Harvard University economics professor.

“These pressures create a salient concern in the mind and draw mental resources to the problem itself. That means we are unable to focus on other things in life that need our attention,” said Zhao, who is now an assistant professor of psychology at the University of British Columbia.

“Previous views of poverty have blamed poverty on personal failings, or an environment that is not conducive to success,” she said. “We’re arguing that the lack of financial resources itself can lead to impaired cognitive function. The very condition of not having enough can actually be a cause of poverty.”
The mental tax that poverty can put on the brain is distinct from stress, Shafir explained. Stress is a person’s response to various outside pressures that — according to studies of arousal and performance — can actually enhance a person’s functioning, he said. In the Science study, Shafir and his colleagues instead describe an immediate rather than chronic preoccupation with limited resources that can be a detriment to unrelated yet still important tasks.

“Stress itself doesn’t predict that people can’t perform well — they may do better up to a point,” Shafir said. “A person in poverty might be at the high part of the performance curve when it comes to a specific task and, in fact, we show that they do well on the problem at hand. But they don’t have leftover bandwidth to devote to other tasks. The poor are often highly effective at focusing on and dealing with pressing problems. It’s the other tasks where they perform poorly.”

The fallout of neglecting other areas of life may loom larger for a person just scraping by, Shafir said. Late fees tacked on to a forgotten rent payment, a job lost because of poor time-management — these make a tight money situation worse. And as people get poorer, they tend to make difficult and often costly decisions that further perpetuate their hardship, Shafir said. He and Mullainathan were co-authors on a 2012 Science paper that reported a higher likelihood of poor people to engage in behaviors that reinforce the conditions of poverty, such as excessive borrowing.

“They can make the same mistakes, but the outcomes of errors are more dear,” Shafir said. “So, if you live in poverty, you’re more error prone and errors cost you more dearly — it’s hard to find a way out.”

The first set of experiments took place in a New Jersey mall between 2010 and 2011 with roughly 400 subjects chosen at random. Their median annual income was around $70,000 and the lowest income was around $20,000. The researchers created scenarios wherein subjects had to ponder how they would solve financial problems, for example, whether they would handle a sudden car repair by paying in full, borrowing money or putting the repairs off.

Participants were assigned either an “easy” or “hard” scenario in which the cost was low or high — such as $150 or $1,500 for the car repair. While participants pondered these scenarios, they performed common fluid-intelligence and cognition tests.

Subjects were divided into a “poor” group and a “rich” group based on their income. The study showed that when the scenarios were easy — the financial problems not too severe — the poor and rich performed equally well on the cognitive tests. But when they thought about the hard scenarios, people at the lower end of the income scale performed significantly worse on both cognitive tests, while the rich participants were unfazed.

To better gauge the influence of poverty in natural contexts, between 2010 and 2011 the researchers also tested 464 sugarcane farmers in India who rely on the annual harvest for at least 60 percent of their income. Because sugarcane harvests occur once a year, these are farmers who find themselves rich after harvest and poor before it. Each farmer was given the same tests before and after the harvest, and performed better on both tests post-harvest compared to pre-harvest.
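The farmer study is a within-subject design: the same individual is tested pre- and post-harvest, so each farmer serves as his own control. A minimal sketch of how such paired scores can be checked, using only the standard library; the numbers are invented for illustration, not data from the study:

```python
from statistics import mean
from math import comb

# Hypothetical pre- and post-harvest test scores for a few farmers
# (illustrative numbers only, not data from the Science study).
pre  = [11, 9, 14, 10, 12, 8, 13, 10]
post = [13, 12, 15, 10, 14, 11, 15, 12]

diffs = [b - a for a, b in zip(pre, post)]
print("mean improvement:", mean(diffs))

# Simple one-sided sign test: how likely are >= k improvements out of
# n non-tied pairs if pre and post were really equivalent (p = 0.5)?
nonzero = [d for d in diffs if d != 0]
n, k = len(nonzero), sum(d > 0 for d in nonzero)
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"{k}/{n} farmers improved, sign-test p = {p_value:.4f}")
```

The sign test here is just the simplest paired comparison; the actual paper would use more refined statistics, but the logic of comparing each farmer against himself is the same.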

The cognitive effect of poverty the researchers found relates to the more general influence of “scarcity” on cognition, which is the larger focus of Shafir’s research group. Scarcity in this case relates to any deficit — be it in money, time, social ties or even calories — that people experience in trying to meet their needs. Scarcity consumes “mental bandwidth” that would otherwise go to other concerns in life, Zhao said.

“These findings fit in with our story of how scarcity captures attention. It consumes your mental bandwidth,” Zhao said. “Just asking a poor person to think about hypothetical financial problems reduces mental bandwidth. This is an acute, immediate impact, and has implications for scarcity of resources of any kind.”

“We documented similar effects among people who are not otherwise poor, but on whom we imposed scarce resources,” Shafir added. “It’s not about being a poor person — it’s about living in poverty.”

Many types of scarcity are temporary and often discretionary, said Shafir, who is co-author with Mullainathan of the book, “Scarcity: Why Having Too Little Means So Much,” to be published in September. For instance, a person pressed for time can reschedule appointments, cancel something or even decide to take on less.

“When you’re poor you can’t say, ‘I’ve had enough, I’m not going to be poor anymore.’ Or, ‘Forget it, I just won’t give my kids dinner, or pay rent this month.’ Poverty imposes a much stronger load that’s not optional and in very many cases is long lasting,” Shafir said. “It’s not a choice you’re making — you’re just reduced to few options. This is not something you see with many other types of scarcity.”

The researchers suggest that services for the poor should accommodate the dominance that poverty has on a person’s time and thinking. Such steps would include simpler aid forms and more guidance in receiving assistance, or training and educational programs structured to be more forgiving of unexpected absences, so that a person who has stumbled can more easily try again.

“You want to design a context that is more scarcity proof,” said Shafir, noting that better-off people have access to regular support in their daily lives, be it a computer reminder, a personal assistant, a housecleaner or a babysitter.
“There’s very little you can do with time to get more money, but a lot you can do with money to get more time,” Shafir said. “The poor, who our research suggests are bound to make more mistakes and pay more dearly for errors, inhabit contexts often not designed to help.”

Ref. Anandi Mani et al., Poverty Impedes Cognitive Function. Science, 2013. Pay per view LINK [doi:10.1126/science.1238041]
I must say that I find it extremely sad that a study so important for people of scarce resources is pay per view.

Posted on August 31, 2013 in Anthropology, intelligence, mind, psychology, Sociology


"Modern human behavior" is out, generic human potential is in

There is a hypothetical model in Prehistory about something vague and ethereal which has been called “Modern human behavior” (MHB). It’s not about nuclear weapons, Internet addiction or commuting to work; nor is it about the printing press, the Industrial Revolution or the ideals of Human Rights; it’s not even about farming, living in cities or sailing the seas… it’s about something extremely vague and ill-defined which, by definition, would set apart “modern humans” (H. sapiens) from “archaic humans” (other Homo species, particularly Neanderthals).
While it is almost intangible and every day more dubious, a large number of prehistorians, some as notable as Mellars, Stringer or Bar-Yosef, strikingly influenced by religious ideas that set an arbitrarily absolutist line between “humans” (i.e. Homo sapiens) and the rest (including other humans), have insisted for decades on the validity of this notion. Now three researchers challenge the model radically:
Christopher J. H. James, Julien Riel-Salvatore & Benjamin R. Collins, Why We Need an Alternative Approach to the Study of Modern Human Behaviour. Canadian Journal of Archaeology, Volume 37, Issue 1 (2013). Pay per view LINK


In this paper we review recent developments in the debate over the emergence of modern human behaviour (MHB) to show that despite considerable diversity among competing models, the identification of given material traits still underpins almost all current perspectives. This approach, however, allows assumptions over the biological relationship between archaic and modern humans to permeate the definitions of MHB and, as a result, has effectively stultified archaeology’s potential contribution to the issue. We suggest that the concept of MHB as currently defined is flawed. It must either be redefined in strictly behavioural terms before reincorporation into the debate over modern human origins or, more productively, discarded all together to avoid the harsh and unrealistic dichotomy it creates between a modern and non-modern archaeological record.
They essentially argue that the model (of which there are several, often contradictory, variants) is useless and confusing: there are “archaic humans” with many or even all of the traits of MHB, and there are “modern humans” without many or even most of them.
They tentatively argue for a thorough revision of the model, but then they seem to lean toward abandoning the idea altogether, suggesting instead a mosaic, punctuated pattern of evolution that is socio-cultural rather than merely genetic or essentialist:

(…) the rapidly accumulating evidence for a mosaic pattern of behavioural change (…) and the evidence of behavioural advances appearing and rapidly disappearing in the MSA, make the harsh dichotomy model untenable. What it does suggest is a punctuated or saltation model that led to widespread adoption of more complex behavioural patterns once the demographic circumstances were appropriate (…).

Somehow this made me recall one of my all-time favorite bands, Suicidal Tendencies, and their 1990 hit “Disco’s Out, Murder’s In” (surely not apt for pop, techno and folk music lovers):

The genetic and phenotype complexity of the Oceanic language area

In this entry, rather than discussing Polynesians alone, who seem to be just the tip of the Eastern Austronesian iceberg, I’ll try to understand the complexity of the speakers of Oceanic languages, the main native language family of Island Oceania.
Oceanic is a branch of Austronesian, but for the purposes of this entry we will only mention other Austronesian peoples/languages tangentially. The focus is Oceanic because, most probably, we can’t understand the parts without the whole here.


Oceanic languages are scattered as follows:

  Admiralties and Yapese
  St Matthias
  Western Oceanic and Meso-Melanesian (two distinct sub-families)
  Southeast Solomons
  Southern Oceanic
Black enclosed zones are pockets of languages from other families.
(CC by kwami)

It is certainly interesting that the Micronesian and Fijian-Polynesian branches seem particularly closely related to each other. The Western Oceanic and Admiralty subfamilies (both from the islands near Papua), instead, seem to have separated early on or diverged further for whatever other reasons (stronger substrate influence, for example).


Lapita pot from Tonga (source)
As I cited recently, Polynesians seem to have spread from the Society Islands in the 1190-1290 CE window. The genesis of the Micronesian family is not well understood… but the overall genesis of the Oceanic languages seems to lie in the Lapita culture, which spread through Island Melanesia (excluding Papua) and some nearby islands (notably Tonga and Samoa, and also the Marquesas c. 300 CE (ref)).
Early Lapita culture is dated to c. 1350-750 BCE, while a Late phase is dated to c. 250 BCE, spreading to the Solomon Islands, which show no indications of the earlier period (Ricaut 2010, fig. 2).
So a simplified chronology for the Oceanic expansion would be:
  1. Lapita culture from near Melanesia to Vanuatu and Kanaky (New Caledonia), then to:
    1. Fiji, Samoa and Tonga since c. 900 BCE
    2. Solomon Is. c. 250 BCE
  2. Arrival to the Society Islands (Tahiti, etc.) c. 300-800 CE, maybe from Samoa.
  3. Main Polynesian expansion to the farthest islands (Hawaii, Rapa Nui, Aotearoa-NZ) c. 1200 CE from the Society Is.

Phenotype (‘race’)

A classical and unavoidable element in the ethnographic division of the region is phenotype, i.e. appearance (‘race’). Since the first European arrival to the area, the division between black Melanesians and white Polynesians (a very relative distinction, as we will see) has been part of all our conceptualizations of the region.
Conscious of that and wanting to get a better impression I collected from the Internet what I estimate may be representative faces from the Oceanic linguistic zone and nearby areas (other Austronesians and Melanesians) and put them on a map:

Click to expand

A relatively homogeneous Polynesian phenotype can be identified and one can imagine that it stems from the area of Samoa-Tonga, considering the previous prehistorical review. But otherwise the diversity, gradations and abundance of local uniqueness seems quite impressive.
Based on other cases, one would also imagine that phenotype differences would coincide with genetic ones. However, this is not easy to discern: partly because Polynesians carry strong founder effects that blur the matter, partly because there is no obvious strict dividing line between the various phenotypes, and partly because of the insistence of some on considering Lapita a Polynesian phenomenon, when it is obviously an Oceanic one, including and emphasizing the Melanesian side of the diverse Oceanic landscape, of which the Polynesian-Micronesian branch is just one element (famous and widespread but not the core).
The main Y-DNA lineage among Polynesians is C2a1 (P33), not found outside Polynesia sensu stricto but reaching frequencies of 63-90% there (except in Tonga, where it’s only 33%). This is a clear founder effect in this population.
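To see how a founder effect can turn a minor lineage into a dominant one (or wipe it out entirely), here is a toy serial-founder simulation. All parameters are invented for illustration: the 13% starting frequency merely echoes a plausible source-population figure, and the founding-party size and number of island hops are guesses, not estimates from any study.

```python
import random

def serial_founder(freq, n_founders, n_hops, seed=1):
    """Track one lineage's frequency through successive small
    founding parties (binomial sampling at each island hop)."""
    rng = random.Random(seed)
    for _ in range(n_hops):
        carriers = sum(rng.random() < freq for _ in range(n_founders))
        freq = carriers / n_founders
        if freq in (0.0, 1.0):  # lineage lost or fixed: drift stops
            break
    return freq

# A lineage at 13% in the source population can end up dominant
# (or extinct) after a few 20-person founding events.
outcomes = [serial_founder(0.13, 20, 5, seed=s) for s in range(1000)]
print("mean final frequency:", sum(outcomes) / len(outcomes))
print("share of runs ending above 50%:", sum(f > 0.5 for f in outcomes) / 1000)
```

Note that the mean over many runs stays near the starting frequency (drift does not change the expectation), while individual runs swing wildly, which is exactly the pattern of one lineage dominating in Polynesia while being rare or absent elsewhere.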

C subclades in SE Asia and Oceania
(from Karafet 2010, annotated with ISOGG nomenclature)
C2a1 is clearly derived from a Melanesian superset, C2a (M208), still found as C2a(xC2a1) at low frequencies in Samoa (8%) and Tahiti (4%), but also in Vanuatu (2%) and coastal Papua (13%). C2a establishes a probable genetic link of Polynesians with the Lapita culture and with Melanesian peoples in general.
An earlier phylogenetic stage is C2 (M38), which has probably been in the region since the very first colonization process some 50 thousand years ago (or maybe even earlier). C2(xC2a) is most common in Wallacea (East Indonesia, East Timor), where it reaches figures of maybe 33% on average. It is, however, also found in highland Papua (13%) and Vanuatu (20%), but as it is most doubtful that C2a evolved as recently as Lapita times, we should really focus on C2a as such rather than the wider C2, which only seems to confuse the matter.
The lack of C2(xC2a) in most of the Oceanic languages’ area clearly indicates that the expansion (and its subsequent founder effects) did not begin in Wallacea but in Melanesia, at least as regards the C sublineages.
The other major Polynesian haplogroup is O3a2 (P201), which would seem to have originated in the Philippines and maybe arrived in Polynesia via Micronesia:

O3 subclades in SE Asia and Oceania
(from Karafet 2010, annotated with ISOGG nomenclature)

Melanesian populations also sport some lineages that are not common among other Oceanic-speaking peoples, notably K, M and S. They are, however, irregularly shared with Wallacea (Eastern Indonesia, East Timor). Like C2, these lineages coalesced in the region soon after the colonization by Homo sapiens.
On the maternal side of things genetic, the absolutely dominant mtDNA lineage among Polynesians (the so-called Polynesian motif) is B4a1a1, which ultimately stems from East or rather SE Asia. However, it probably arrived in the region (again) via Melanesia, albeit maybe somewhat tangentially.

From Friedlaender 2007 (fig. 4)

Spatial frequency distribution of haplogroups B4a* and B4a1a1 in Island Southeast Asia and the western Pacific, created using the Kriging algorithm of the Surfer package. Figure 4b presents the detailed distribution for Northern Island Melanesia. Data details are provided in table S3.
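The caption above refers to Surfer’s Kriging interpolation. Kriging proper is more involved (it fits a variogram model to the data), but a much simpler inverse-distance-weighting sketch conveys the same idea of turning scattered sample-site frequencies into a continuous map surface. The coordinates and frequencies below are made up, not taken from the paper:

```python
def idw(x, y, sites, power=2):
    """Inverse-distance-weighted estimate at (x, y) from a list of
    (sx, sy, value) sample sites; a crude stand-in for Kriging."""
    num = den = 0.0
    for sx, sy, v in sites:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return v  # exactly on a sample site: return its value
        w = 1.0 / d2 ** (power / 2)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Made-up sample sites: (longitude, latitude, haplogroup frequency)
sites = [(150.0, -5.0, 0.10), (160.0, -9.0, 0.40), (172.0, -13.5, 0.85)]
print(idw(161.0, -10.0, sites))  # interpolated frequency at a new point
```

Evaluating `idw` over a grid of points and coloring by value would produce a map in the spirit of the figure; unlike Kriging, though, this method ignores the spatial covariance structure of the data.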

The matrilineal Polynesian motif does offer a possible pattern of settlement, maybe related specifically to Late Lapita, that could allow us to understand the possible origin of the phenotype differences between Melanesians and Polynesians, as could the Y-DNA lineage O3a2. However, there are lots of remnants of the quite strictly Melanesian Early Lapita, as is evident from the (Y-DNA) C2a lineages retained so strongly among Polynesians within their own founder effects, whose importance we cannot afford to dismiss.

Other mtDNA lineages like Q1 or M27 are of relevance in Melanesian populations. Q1 did make its way into some Polynesian populations, but only as a minority lineage.

Update (Oct 31):

Terry in the comments section grunts a lot but now and then provides useful complementary data, for example this Y-DNA map of the region from Kayser 2006:

Kayser 2006 – fig. 1
Frequency distribution of (A, B) NRY and (C, D) mtDNA haplogroups found in Polynesia with a genetic origin in (A, C) Asia or (B, D) Melanesia.

As has been apparent since Kayser’s publication (if not before), the Melanesian patrilineages are much more common (actually dominant) among Polynesians than the matrilineages of the same origin, which is attributable to a founder effect related to the Lapita culture.
Another interesting reference is this Y-DNA map of Papua (New Guinea) and some nearby islands (from Mona 2007):

Mona 2007 FIG. 2.—Y-chromosome haplogroups and their frequencies in populations from the Bird’s Head region and elsewhere in New Guinea. Data from other populations of New Guinea were used from previous studies (Kayser et al. 2003, 2006). Size of the pie charts is according to sample size of the groups. Abbreviations are as in supplementary table S1, Supplementary Material online.

Both maps and/or the data in the relevant papers provide key information on possible origins for the C2a-M208 patrilineal founder effect, so important in the Oceanic peoples in general and especially in the Polynesian branch. The exact origin cannot be pinpointed without further research (or maybe not at all), but it is clear that C2a-M208 only exists from Papua (New Guinea) to the East, so it must have a Melanesian origin, be it Papuan or from the nearby islands.


  • François-Xavier Ricaut et al., Ancient Solomon Islands mtDNA: assessing Holocene settlement and the impact of European contact. Journal of Archaeological Science, 2010 ··> LINK (PDF).
  • Jonathan S. Friedlaender et al., Melanesian mtDNA Complexity. PLoS ONE, 2007 ··> LINK (open access).
  • Tatiana Karafet et al., Major East-West Division Underlies Y Chromosome Stratification Across Indonesia. MBE 2010 ··> LINK (free access).
  • Michael Knapp et al., Complete mitochondrial DNA genome sequences from the first New Zealanders. PNAS 2012 ··> LINK (open access).
  • Manfred Kayser et al., Melanesian and Asian Origins of Polynesians: mtDNA and Y Chromosome Gradients Across the Pacific. MBE 2006 ··> LINK (free access).
  • Stefano Mona et al., Patterns of Y-Chromosome Diversity Intersect with the Trans-New Guinea Hypothesis. MBE 2007 ··> LINK (free access).



Echoes from the Past (Jan 18)

Again, lots of short news and hopefully interesting links I have been collecting over the last few weeks:
Lower and Middle Paleolithic 
Cova del Gegant Neanderthal jaw
Catalonia: Neanderthal mitochondrial DNA sequenced for the first time. The sequence, obtained from a jaw from Cova del Gegant (Giant’s Cave), is fully within normal Neanderthal range ··> Pileta de Prehistoria[es], NeanderFollia[cat], relevant paper[cat] (PDF)

Castile: Stature estimates for Sima de los Huesos (Atapuerca) discussed by John Hawks.

Upper Paleolithic and Epipaleolithic

Romania: stratigraphies and dates revised by new study (PPV) ··> Quaternary International.

Andalusia: oldest ornament made of barnacle’s shell (right) found in Nerja Cave ··> Pileta de Prehistoria[es], UNED[es], Universia[es].

England: Star Carr dig to shed light on transition from Paleolithic to Epipaleolithic ··> short article and video-documentary (32 mins) at Past Horizons.

Basque Country: archaeologists consider it a barbarity that only 65 m are protected against the quarry at Praileaitz Cave (Magdalenian) ··> Noticias de Gipuzkoa[es].

Yemen: 200 tombs said to be Paleolithic discovered in Al Mahwit district, west of Sanaa. Tools and weapons were also found. Another thousand or so artifacts from the same period were found in the Bani Saad area ··> BBC

Peruvian rock art
Sarawak: Niah Cave being dug again for further and more precise data on the colonization of the region by Homo sapiens ··> Heritage Daily.

Siberia was a wildlife-rich area in the Ice Age ··> New Scientist.

Peru: 10,000 years old cave paintings (right) discovered in Churcampa province ··> Andina.

Neolithic and Chalcolithic

Iberia and North Africa: Southern Iberian and Mediterranean North African early Neolithic could be the same process according to new paper (PPV) ··> Quaternary International.
Galicia: Neolithic and Metal Ages remains to be studied for DNA ··> Pileta de Prehistoria[es].
Texas: a very informative burnt hut reveals clues about the natives of the San Antonio area c. 3,500 years ago.
Mexico: 2,000-year-old paintings found in Guanajuato ··> Hispanically Speaking News (note that the photo appears to be an act of shameless journalistic sloppiness: it shows a European bison painted in European style, probably from Altamira).
Metal ages and historical period
Croatia: oldest known astrological board unearthed at Nakovana (Roman period). The cave was probably some sort of shrine back in the day, maybe because of a striking phallic stalagmite. Besides the ivory astrological device, lots of pottery has been found as well ··> Live Science.

The best preserved fragment depicts the sign of Cancer (full gallery)
Basque Country: Iruña-Veleia affair: the Basque autonomous police does not have the means to test the authenticity of the findings. The Commission for the Clarification of Iruña-Veleia asks for the tests to be performed in one of the few European laboratories able to do them ··> Noticias de Álava.
Cornwall: replicating sewn-plank boats of the Bronze Age ··> This is Cornwall.
India: cremation urn from the Megalithic period excavated in Kerala ··> The Hindu.

Human genetics and evolution

The six flavors
Centenarians don’t have any special genes ··> The Atlantic.
Fat is a flavor: newly discovered sixth flavor in human tongue identifies fat (and usually likes it) ··> Science Daily.
Hominin tooth found in Bulgaria dates from 7 million years ago ··> Daily Mail.
Anthropology (sensu stricto)
The journey of the Tubu women: fascinating documentary in Spanish language about these trans-Saharan trader women available at Pasado y Futuro[es].
Small capuchin monkey bands fight as well as large ones because their members are more motivated and have many fewer defections, even in peripheral conflicts ··> Science Daily.
Horse genetics again ··> new paper at PLoS Genetics

Fig. 4 – Phylogenetic tree of extant Hippomorpha.

Early farming was inefficient compared to foraging

Early farming was only able to generate some 60% of what foraging (hunting and gathering) did, according to new research:
Samuel Bowles, Cultivation of cereals by the first farmers was not more productive than foraging. PNAS, 2011. Pay per view (depending on world region and time).


Did foragers become farmers because cultivation of crops was simply a better way to make a living? If so, what is arguably the greatest ever revolution in human livelihoods is readily explained. To answer the question, I estimate the caloric returns per hour of labor devoted to foraging wild species and cultivating the cereals exploited by the first farmers, using data on foragers and land-abundant hand-tool farmers in the ethnographic and historical record, as well as archaeological evidence. A convincing answer must account not only for the work of foraging and cultivation but also for storage, processing, and other indirect labor, and for the costs associated with the delayed nature of agricultural production and the greater exposure to risk of those whose livelihoods depended on a few cultivars rather than a larger number of wild species. Notwithstanding the considerable uncertainty to which these estimates inevitably are subject, the evidence is inconsistent with the hypothesis that the productivity of the first farmers exceeded that of early Holocene foragers. Social and demographic aspects of farming, rather than its productivity, may have been essential to its emergence and spread. Prominent among these aspects may have been the contribution of farming to population growth and to military prowess, both promoting the spread of farming as a livelihood.
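The core of Bowles’ accounting, as the abstract describes it, is caloric return per hour of total labor (direct plus indirect), with farming’s returns further discounted for the delay between work and consumption. A toy version of that arithmetic, with invented figures rather than the paper’s actual estimates:

```python
# Invented illustrative figures, not Bowles' estimates.
def net_return(kcal_per_hour, indirect_hours_share, delay_years=0.0,
               discount_rate=0.0):
    """kcal per hour of TOTAL labor (direct + indirect), with future
    output discounted for the delay between work and consumption."""
    effective = kcal_per_hour * (1 - indirect_hours_share)
    return effective / (1 + discount_rate) ** delay_years

# Foraging: modest indirect labor, food eaten more or less immediately.
foraging = net_return(1500, indirect_hours_share=0.10)

# Farming: higher gross yield per direct hour, but much more storage and
# processing labor, plus months of delay before the harvest is eaten.
farming = net_return(1800, indirect_hours_share=0.35,
                     delay_years=0.5, discount_rate=0.10)

print(f"farming / foraging = {farming / foraging:.2f}")
```

The point of the exercise is that a higher gross yield per field hour can still lose to foraging once indirect labor and delay are charged against it, which is the shape of Bowles’ argument; the specific numbers here are placeholders.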
A news article is also available at PhysOrg.
Is the alternative explanation correct?
I find this discovery most interesting because the assumption has generally been that farming was automatically more productive than the old human way of life: foraging what Nature had to offer.
Yet this assumption did not explain why farming had not evolved earlier, or why the generally very pragmatic peoples of the world did not adopt it earlier, as they were no doubt aware of how gardening could be done.
It reminds me somewhat of the very comparable misunderstanding about the transition from the Bronze to the Iron Age: iron had by then been known for a long time, but it was brittle in comparison with bronze, arsenical bronze (copper and arsenic) and even good old flint. Actually, I read somewhere recently that another good old friend of humankind, obsidian, makes blades so good that they compete favorably with steel scalpels.
Things are not so simple: steel began to be developed (as wrought iron is not really good for most uses) after tin resources began to fail in the Eastern Mediterranean, as the communications with Atlantic Europe (where most tin mines were back then) may have collapsed when the two classical Iberian civilizations, El Argar and Zambujal (VNSP), did as well, for reasons not well understood and not too relevant to discuss here.
It was therefore problems in the bronze industry, so critical for the military of the time, that pushed steel technology ahead, inaugurating the Iron Age.
Neolithic saddle quern (molino de vaivén)
Seed milling was done long before Neolithic too
Therefore I’d like to consider what may have caused people to adopt farming instead of just continuing to forage, as they had done successfully until that time. We know that farming was preceded by a period we call the Mesolithic, which is characterized by intensive foraging of wild cereals or by other foraging behaviors that somehow announce the advent of farming or herding.
So, in the Fertile Crescent, there was for a time, since about the end of the Ice Age, a focus on a pre-farming type of foraging. As I have not read the paper yet, I do not know whether Bowles has factored this period into his estimates. As for me, I’d think that this kind of foraging (maybe already associated with some early gardening practices) that we call Mesolithic seems to respond to an ecological pressure of some sort, no doubt related to the then ongoing climate change.
Another issue I am pondering is that, even before cereal farming was fully developed in Palestine, herding of sheep and goats was adopted in Kurdistan, followed by cow herding in Anatolia (near the well-named Taurus mountains). Maybe herding had to be developed in order to make farming effective? Livestock (be it bovine or ovi-caprine) provide nutrients in the form of manure; goats especially, but not only they, can also be used to clear areas of wild vegetation, while pigs are great at turning over the fields.
So I am wondering if animal domestication was a condition for making cereal (and pulse and flax) farming an economically effective way of life.
Honestly I prefer a true economic explanation rather than one based on very conjectural preferences about sedentarism, and this may be made up of:
  • The push factor of climate change at the end of the Ice Age
  • The pull factor of animal domestication, increasing the yields of agriculture until it became economically worthwhile
What do you think?

Human natural societies flexible and not primarily built on genetic kinship

Understanding hunter-gatherer societies is important because we are, after all, just Paleolithic peoples dumped into Market Street. Our flexibility allows us to deal with this strange, unnatural reality with some ease, but this is not the context in which we evolved, not at all.
New research into this ancestral reality of ours, embodied in a variety of surviving hunter-gatherer groups, provides some interesting results:


Contemporary humans exhibit spectacular biological success derived from cumulative culture and cooperation. The origins of these traits may be related to our ancestral group structure. Because humans lived as foragers for 95% of our species’ history, we analyzed co-residence patterns among 32 present-day foraging societies (total n = 5067 individuals, mean experienced band size = 28.2 adults). We found that hunter-gatherers display a unique social structure where (i) either sex may disperse or remain in their natal group, (ii) adult brothers and sisters often co-reside, and (iii) most individuals in residential groups are genetically unrelated. These patterns produce large interaction networks of unrelated adults and suggest that inclusive fitness cannot explain extensive cooperation in hunter-gatherer bands. However, large social networks may help to explain why humans evolved capacities for social learning that resulted in cumulative culture. 
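The claim in the abstract that “most individuals in residential groups are genetically unrelated” boils down to counting co-resident dyads (pairs) with close kin ties. A sketch of that tally on a made-up band roster; the names and kin ties are entirely hypothetical:

```python
from itertools import combinations

# Made-up band roster: each adult mapped to the set of close kin
# (siblings, spouses, etc.) also living in the camp. Ties are listed
# symmetrically, so checking one direction per pair is enough.
band = {
    "A": {"B"},        # A and B are siblings
    "B": {"A", "C"},   # B is also married to C
    "C": {"B"},
    "D": set(),        # unrelated joiners
    "E": set(),
    "F": set(),
}

dyads = list(combinations(band, 2))  # every co-resident pair
related = sum(q in band[p] for p, q in dyads)
print(f"{related}/{len(dyads)} dyads closely related "
      f"({related / len(dyads):.0%})")
```

Even in this tiny camp, where a sibling pair and a married couple co-reside, the overwhelming majority of dyads are unrelated, which is the pattern the study reports at the scale of real bands of ~28 adults.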
A news article is also available at Science Daily.
What may seem a bit unexpected is how this social structure is not at all like that of any related primate group. Our closest relatives live in individual units (orangutans), in single-male dominated harems (gorillas), in patrilocal promiscuous communities (chimpanzees) and in matrilocal, even more promiscuous ones (bonobos).
Instead, humans form bands, typically of some 28 adults, which are neither patrilocal nor matrilocal, and which end up including people who are mostly not even closely related to each other. There goes genetic egoism down the toilet!
This structure actually seems to integrate wider networks of relationships beyond the band, which (per other sources) is typically so flexible that it loses and gains members very frequently. What this probably implies is that ethnic (tribal) networks are the main unit, though not even these are closed to strangers, not at all.
The research included some 5,000 people from the following groups: Gunwinggu (Australia), Labrador Inuit, Apache and Ache (America), Mbuti and Aka (Africa), and Agta and Vedda (Asia).

Update (Mar 17): Blackbird, who runs some interesting blogs on non-human animals that you may want to check out, mentions that our Pan sp. cousins, both chimpanzees and bonobos, may not be as sex-biased as we used to think with regard to who moves and who stays in the established community. The following paper addresses this matter a bit and also includes an interesting analysis of haploid genetics among bonobos (all of whom live in D.R. Congo):

Jonas Eriksson et al., Y-chromosome analysis confirms highly sex-biased dispersal and suggests a low male effective population size in bonobos (Pan paniscus). Molecular Ecology, 2006. [doi: 10.1111/j.1365-294X.2006.02845.x]

He (Blackbird) suggests, and he may well be right, that this ambiguity of locality is similar to what Hill observes for Homo sapiens and may therefore be a shared pattern across the super-genus.


Challenging ‘behavioral modernity’

This issue of behavioral modernity is something I have never really accepted from mainstream prehistory and anthropology. In this conceptual paradigm, or intellectual fetish (whatever you prefer to call it), humankind almost suddenly emerged from the amorphous shadows of what we could call (by contrast) behavioral primitivism and began being us, maybe when they decided to create some durable art like that we find in the caves of Southwestern Europe, along with the related technologies defined as mode 4 (blade-based stone industries, or the Upper Paleolithic in the narrowest possible sense).
The reference is silly and Eurocentric but very real in these academic fields. Of late, the finding of other, more ancient and not really European expressions of prehistoric artwork, notably in Palestine, North Africa and South Africa (in that chronological order per the available data), has allowed the concept to escape its original sin of Eurocentrism somewhat. But regardless, is the concept real?
A new study by John J. Shea, published in Current Anthropology (pay per view, discussed at Science Daily), challenges this already quite shattered perception of an almost miraculous transition towards behavioral modernity on scientific grounds: Shea analyzes the rather well documented early humankind of East Africa between 250,000 and 6,000 years ago, and finds no linear pattern of evolution but rather an outstanding array of nonlinear diversity.
From the news article:

A systematic comparison of variability in stone tool making strategies over the last quarter-million years shows no single behavioral revolution in our species’ evolutionary history. Instead, the evidence shows wide variability in Homo sapiens toolmaking strategies from the earliest times onwards. Particular changes in stone tool technology can be explained in terms of the varying costs and benefits of different toolmaking strategies, such as greater needs for cutting edge or more efficiently-transportable and functionally-versatile tools. One does not need to invoke a “human revolution” to account for these changes, they are explicable in terms of well-understood principles of behavioral ecology.

Posted by on February 16, 2011 in Anthropology, human evolution, Prehistory, Sociology